Ai2 blog
November 2024 - OLMo 2: The best fully open language model to date
Our next generation of fully-open base and instruct models sits at the Pareto frontier of performance and training efficiency.
November 2024 - Tülu 3 opens language model post-training up to more tasks and more people
Tülu 3 is a leading instruction-following model family, offering fully open-source data, code, and recipes designed to serve as a comprehensive guide to modern post-training techniques.
November 2024 - Tülu 3: The next era in open post-training
A technical deep-dive into Tülu 3, with the model "recipe", data, and more.
August 2024 - Open research is the key to unlocking safer AI
Ai2 presents our stance on openness and safety in AI.
Faeze Brahman and Sachin Kumar / July 2024 - Broadening the scope of noncompliance: When and how AI models should not comply with user requests
We outline the taxonomy of model noncompliance and then delve deeper into implementing model noncompliance.
Nouha Dziri / June 2024 - The Ai2 Safety Toolkit: Datasets and models for safe and responsible LLM development
Introducing the Ai2 Safety Toolkit, featuring an automatic red-teaming framework and a lightweight moderation tool.
June 2024 - PolygloToxicityPrompts: Multilingual evaluation of neural toxic degeneration in large language models
New research on AI prompt toxicity, revealing insights into neural toxic degeneration across diverse languages.
May 2024 - Data-driven discovery with large generative models
We believe AI can assist researchers in finding relevant preexisting work to expedite discoveries. Here's how.
Piper Wolters / April 2024 - SatlasPretrain Models: Foundation models for satellite and aerial imagery
We’re excited to announce SatlasPretrain Models, a suite of open geospatial foundation models.
Jordan Steward / April 2024 - Restoring the Bahamas' seas: CBS Mornings spotlights Skylight's support
Through a mix of innovative partnerships and Skylight’s AI, learn how the country is protecting its ocean life.
April 2024 - OLMo 1.7–7B: A 24-point improvement on MMLU
Introducing an updated version of our 7 billion parameter Open Language Model, OLMo 1.7–7B.