Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Generalization v.s. Memorization: Tracing Language Models' Capabilities Back to Pretraining Data

Antonis Antoniades, Xinyi Wang, Yanai Elazar, W. Wang
2025
ICLR

The impressive capabilities of large language models (LLMs) have sparked debate over whether these models genuinely generalize to unseen tasks or predominantly rely on memorizing vast amounts of… 

Holistically Evaluating the Environmental Impact of Creating Language Models

Jacob Morrison, Clara Na, Jared Fernandez, Jesse Dodge
2025
ICLR

As the performance of artificial intelligence systems has dramatically increased, so too has the environmental impact of creating these systems. While many model developers release estimates of the… 

MIB: A Mechanistic Interpretability Benchmark

Aaron Mueller, Atticus Geiger, Sarah Wiegreffe, Yonatan Belinkov
2025
arXiv

How can we know whether new mechanistic interpretability methods achieve real improvements? In pursuit of meaningful and lasting evaluation standards, we propose MIB, a benchmark with two tracks… 

CodeScientist: End-to-End Semi-Automated Scientific Discovery with Code-based Experimentation

Peter Jansen, Oyvind Tafjord, Marissa Radensky, Peter Clark
2025
arXiv

Despite the surge of interest in autonomous scientific discovery (ASD) of software artifacts (e.g., improved ML algorithms), current ASD systems face two key limitations: (1) they largely explore… 

A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers

William Merrill, Ashish Sabharwal
2025
arXiv

Recent theoretical results show transformers cannot express sequential reasoning problems over long input lengths, intuitively because their computational depth is bounded. However, prior work… 

ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning

Bill Yuchen Lin, Ronan Le Bras, Kyle Richardson, Yejin Choi
2025
arXiv

We investigate the logical reasoning capabilities of large language models (LLMs) and their scalability in complex non-monotonic reasoning. To this end, we introduce ZebraLogic, a comprehensive… 

Understanding the Logic of Direct Preference Alignment through Logic

Kyle Richardson, Vivek Srikumar, Ashish Sabharwal
2024
arXiv

Recent direct preference alignment algorithms (DPA), such as DPO, have shown great promise in aligning large language models to human preferences. While this has motivated the development of many… 

The One RING: a Robotic Indoor Navigation Generalist

Ainaz Eftekhar, Luca Weihs, Rose Hendrix, Kuo-Hao Zeng
2024
arXiv

Modern robots vary significantly in shape, size, and sensor configurations used to perceive and interact with their environments. However, most navigation policies are embodiment-specific; a policy… 

Paloma: A Benchmark for Evaluating Language Model Fit

Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Jesse Dodge
2024
NeurIPS

Language models (LMs) commonly report perplexity on monolithic data held out from training. Implicitly or explicitly, this data is composed of domains–varying distributions of… 

DISCOVERYWORLD: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents

Peter Jansen, Marc-Alexandre Côté, Tushar Khot, Peter Clark
2024
NeurIPS Datasets and Benchmarks

Automated scientific discovery promises to accelerate progress across scientific domains. However, developing and evaluating an AI agent's capacity for end-to-end scientific reasoning is challenging…