Open technologies - Open models

Whether you want to study the science of language models or improve how multimodal models interpret our world, we can help. Our benchmarking tools also let you evaluate the capabilities and safety of the latest models.

Language models

Explore our series of truly open language models, pushing the boundaries of model research and development.

Multimodal models

We’re a leader in creating powerful, general-purpose models that can operate across a variety of input and output modalities.

Evaluation frameworks

Evaluating models for performance and safety is a rapidly growing area of research. Check out our collection of open-source tools for a variety of language model evaluation tasks.

Featured model - OLMo

Open Language Model (OLMo) is a framework designed to provide the data, training code, models, and evaluation code needed to advance AI through open research, empowering academics and researchers to collectively study the science of language models.
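
As a rough illustration of how an openly released OLMo checkpoint can be used, the sketch below loads a model with the Hugging Face transformers library and generates a short completion. The model ID shown is an assumption; check the OLMo release notes for the exact checkpoint names available on the Hub.

# Minimal sketch: loading an OLMo checkpoint with Hugging Face transformers.
# The model ID below is an assumption; substitute whichever OLMo checkpoint you intend to study.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-7B-hf"  # assumed Hub ID; other OLMo checkpoints load the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))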

Featured evaluation framework - Safety Tool

This evaluation suite offers open-source code for easy, comprehensive safety evaluation of generative language models and safety moderation tools.