Ai2 blog
September 2024 - OLMoE: An open, small, and state-of-the-art mixture-of-experts model
Introducing OLMoE, the first model to be on the Pareto frontier of performance and size, released with open data.
August 2024 - Digital Socrates: Evaluating LLMs through explanation critiques
Digital Socrates is an evaluation tool that can characterize LLMs' explanation capabilities.
August 2024 - Open research is the key to unlocking safer AI
Ai2 presents our stance on openness and safety in AI.
July 2024 - Broadening the scope of noncompliance: When and how AI models should not comply with user requests
We outline a taxonomy of model noncompliance and then delve deeper into how to implement it.
June 2024 - The Ai2 Safety Toolkit: Datasets and models for safe and responsible LLM development
Introducing the Ai2 Safety Toolkit, featuring an automatic red-teaming framework and a lightweight moderation tool.
June 2024 - PolygloToxicityPrompts: Multilingual evaluation of neural toxic degeneration in large language models
New research on AI prompt toxicity, revealing insights into neural toxic degeneration across diverse languages.
May 2024 - Data-driven discovery with large generative models
We believe AI can assist researchers in finding relevant preexisting work to expedite discoveries. Here's how.
April 2024 - SatlasPretrain Models: Foundation models for satellite and aerial imagery
We’re excited to announce SatlasPretrain Models, a suite of open geospatial foundation models.