Open models - Language models
Whether you’re looking to study the science of language models or to improve how multimodal models interpret our world, we have you covered. You can also see how these models stack up using our range of evaluation frameworks.
Featured model - OLMo
Open Language Model (OLMo) is a framework intentionally designed to provide access to the data, training code, models, and evaluation code needed to advance AI through open research, empowering academics and researchers to study the science of language models collectively.
Featured model - OLMoE
OLMoE is the first mixture-of-experts model in the OLMo family, and the first model on the Pareto frontier of performance and size to be released with open data, code, evaluations, logs, and intermediate training checkpoints. OLMoE can be trained 2x faster than equivalent dense models.
Tulu
The Tulu series is a collection of instruction-tuned and RLHF-tuned chat models, trained on a mix of publicly available, synthetic, and human-created datasets to act as helpful assistants. We release all checkpoints, data, and training and evaluation code to facilitate future open efforts in adapting large language models.