More than open
Ai2 believes in the power of openness to build a future where AI is accessible to all. Open weights alone aren’t enough – true openness requires models to be trained in the open with open access to data, models, and code.
We are open first
Open access to pre-training data is a must-have for understanding model behavior. We cannot take a scientific approach to AI creation without access to and understanding of the training data.
We train AI models in the open. We share design decisions, learnings, and documentation along the way so that anyone can understand how our models are trained.
We openly share training code, model weights, checkpoints, and evaluation benchmarks to enable anyone to reproduce or build upon our models.
We are setting open standards. We provide open evaluation tools, benchmarks, and safety toolkits so that the community can develop safe and trustworthy AI together.
Our open models
OLMo 2
Meet OLMo 2, the best fully open language model to date: a family of 7B and 13B models trained on up to 5T tokens. OLMo 2 outperforms other fully open models and competes with open-weight models like Llama 3.1 8B.
Tülu 3
Tülu 3 is a leading instruction-following model family, offering fully open-source data, code, and recipes designed to serve as a comprehensive guide for modern post-training techniques.
Molmo
Molmo is a family of open, state-of-the-art multimodal AI models. Our most powerful model closes the gap between open and proprietary systems across a wide range of academic benchmarks and human evaluations. Our smaller models outperform models 10x their size.