
Ai2 blog

November 2024 - Tülu 3 opens language model post-training up to more tasks and more people

Tülu 3 is a leading instruction-following model family, offering fully open-source data, code, and recipes.

November 2024 - Tülu 3: The next era in open post-training

A technical deep-dive into Tülu 3, with the model "recipe", data, and more.

November 2024 - Scientific literature synthesis with retrieval-augmented language models

Ai2’s & UW’s new retrieval-augmented LM helps scientists navigate and synthesize scientific literature.

November 2024 - How many Van Goghs does it take to Van Gogh? Finding the imitation threshold

Meet MIMETIC^2: Finding the number of images a text-to-image model needs to imitate a concept.

October 2024 - Hybrid preferences: Learning to route instances for human vs. AI feedback

We introduce a routing framework that combines inputs from humans and LMs to achieve better annotation quality.

October 2024 - Applying theory of mind: Can AI understand and predict human behavior?

"Theory of Mind" is the ability to understand that others have their own thoughts and beliefs.

October 2024 - Ai2 at COP 16: Harnessing AI and conservation tech to protect our planet

We're heading to the UN Biodiversity COP to showcase how open, collaborative AI can galvanize communities.

October 2024 - Investigating pretraining dynamics and stability with OLMo checkpoints

We use data from our open pretraining runs to test hypotheses about training dynamics in OLMo checkpoints.

September 2024 - Molmo

A family of open state-of-the-art multimodal AI models.