
Ai2 blog

August 2024 - Say hello to Ai2’s new logo

Introducing Ai2’s new brand and website!

Lj Miranda / October 2024 - Hybrid preferences: Learning to route instances for human vs. AI feedback

We introduce a routing framework that combines inputs from humans and LMs to achieve better annotation quality.

Yuling Gu / October 2024 - Applying theory of mind: Can AI understand and predict human behavior?

"Theory of Mind" is the ability to understand that others have their own thoughts and beliefs.

Jordan Steward / October 2024 - Ai2 at COP 16: Harnessing AI and conservation tech to protect our planet

We're heading to the UN Biodiversity COP to showcase how open, collaborative AI can galvanize conservation communities.

Will Merrill / October 2024 - Investigating pretraining dynamics and stability with OLMo checkpoints

We use data from our open pretraining runs to test hypotheses about training dynamics in OLMo checkpoints.

Niklas Muennighoff / September 2024 - OLMoE: An open, small, and state-of-the-art mixture-of-experts model

Introducing OLMoE, a mixture-of-experts model on the Pareto frontier of performance and size, released with open data.

Yuling Gu / August 2024 - Digital Socrates: Evaluating LLMs through explanation critiques

Digital Socrates is an evaluation tool that can characterize LLMs' explanation capabilities.

August 2024 - Open research is the key to unlocking safer AI

Ai2 presents our stance on openness and safety in AI.

Faeze Brahman and Sachin Kumar / July 2024 - Broadening the scope of noncompliance: When and how AI models should not comply with user requests

We outline a taxonomy of model noncompliance and then delve deeper into how to implement it.

Nouha Dziri / June 2024 - The Ai2 Safety Toolkit: Datasets and models for safe and responsible LLM development

Introducing the Ai2 Safety Toolkit, featuring an automatic red-teaming framework and a lightweight moderation tool.

June 2024 - PolygloToxicityPrompts: Multilingual evaluation of neural toxic degeneration in large language models

New research on AI prompt toxicity, revealing insights into neural toxic degeneration across diverse languages.

May 2024 - Data-driven discovery with large generative models

We believe AI can assist researchers in finding relevant preexisting work to expedite discoveries. Here's how.
