Open models - Language models
Whether you’re looking to study the science of language models or improve how multimodal models interpret our world, we have you covered. You can then see how these models stack up using our range of evaluation frameworks.
Featured model - OLMo
Open Language Model (OLMo) is a framework designed to provide open access to the data, training code, models, and evaluation code needed to advance AI through open research, empowering academics and researchers to study the science of language models together.
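OLMo checkpoints are released on the Hugging Face Hub, so they can be loaded with standard tooling. The sketch below assumes the `allenai/OLMo-7B` model ID and the Hugging Face `transformers` library; adjust the ID to the checkpoint you actually want to use.

```python
# Minimal sketch: load an OLMo checkpoint and generate a few tokens.
# The model ID below is an assumption; check the Hugging Face Hub for the
# exact checkpoint name (older releases may also need the hf_olmo package).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-7B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Language modeling is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```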
Tulu
The Tulu series is a collection of instruction-tuned and RLHF-tuned chat models, trained on a mix of publicly available, synthetic, and human-created datasets to act as helpful assistants. We release all the checkpoints, data, and training and evaluation code to support future open work on adapting large language models.
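Because Tulu models are chat-tuned, prompts should follow the model's chat format. The sketch below assumes the `allenai/tulu-2-dpo-7b` checkpoint and that its tokenizer ships a chat template; consult the Hub page for the current release and its exact prompt format.

```python
# Minimal sketch: chat with a Tulu checkpoint via transformers.
# The model ID and the presence of a built-in chat template are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/tulu-2-dpo-7b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Explain what an RLHF-tuned model is."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```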