Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Answer, Assemble, Ace: Understanding How LMs Answer Multiple Choice Questions

Sarah Wiegreffe, Oyvind Tafjord, Yonatan Belinkov, Ashish Sabharwal
2025
ICLR

Multiple-choice question answering (MCQA) is a key competence of performant transformer language models that is tested by mainstream benchmarks. However, recent evidence shows that models can have… 

LLM-SR: Scientific Equation Discovery via Programming with Large Language Models

Parshin Shojaee, Kazem Meidani, Shashank Gupta, Chandan K. Reddy
2025
ICLR

Mathematical equations have been unreasonably effective in describing complex natural phenomena across various scientific disciplines. However, discovering such insightful equations from data… 

On Linear Representations and Pretraining Data Frequency in Language Models

Jack Merullo, Noah A. Smith, Sarah Wiegreffe, Yanai Elazar
2025
ICLR

Pretraining data has a direct impact on the behaviors and quality of language models (LMs), but we only understand the most basic principles of this relationship. While most work focuses on… 

Generalization v.s. Memorization: Tracing Language Models' Capabilities Back to Pretraining Data

Antonis Antoniades, Xinyi Wang, Yanai Elazar, W. Wang
2025
ICLR

The impressive capabilities of large language models (LLMs) have sparked debate over whether these models genuinely generalize to unseen tasks or predominantly rely on memorizing vast amounts of… 

Holistically Evaluating the Environmental Impact of Creating Language Models

Jacob Morrison, Clara Na, Jared Fernandez, Jesse Dodge
2025
ICLR

As the performance of artificial intelligence systems has dramatically increased, so too has the environmental impact of creating these systems. While many model developers release estimates of the… 

Skilful global seasonal predictions from a machine learning weather model trained on reanalysis data

Chris Kent, Adam A. Scaife, N. Dunstone, Oliver Watt-Meyer
2025
arXiv

Machine learning weather models trained on observed atmospheric conditions can outperform conventional physics-based models at short- to medium-range (1-14 day) forecast timescales. Here we take the… 

Understanding the Logic of Direct Preference Alignment through Logic

Kyle Richardson, Vivek Srikumar, Ashish Sabharwal
2025
ICML

Recent direct preference alignment algorithms (DPA), such as DPO, have shown great promise in aligning large language models to human preferences. While this has motivated the development of many… 

CodeScientist: End-to-End Semi-Automated Scientific Discovery with Code-based Experimentation

Peter Jansen, Oyvind Tafjord, Marissa Radensky, Peter Clark
2025
arXiv

Despite the surge of interest in autonomous scientific discovery (ASD) of software artifacts (e.g., improved ML algorithms), current ASD systems face two key limitations: (1) they largely explore… 

A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers

William Merrill, Ashish Sabharwal
2025
arXiv

Recent theoretical results show transformers cannot express sequential reasoning problems over long input lengths, intuitively because their computational depth is bounded. However, prior work… 

ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning

Bill Yuchen Lin, Ronan Le Bras, Kyle Richardson, Yejin Choi
2025
arXiv

We investigate the logical reasoning capabilities of large language models (LLMs) and their scalability in complex non-monotonic reasoning. To this end, we introduce ZebraLogic, a comprehensive…