Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers

William Merrill, Ashish Sabharwal
2025
NeurIPS

Recent theoretical results show transformers cannot express sequential reasoning problems over long inputs, intuitively because their computational *depth* is bounded. However, prior work treats the… 

Exact Expressive Power of Transformers with Padding

William Merrill, Ashish Sabharwal
2025
NeurIPS

Chain of thought is a natural inference-time method for increasing the computational power of transformer-based large language models (LLMs), but comes at the cost of sequential decoding. Are there… 

Language Modeling by Language Models

Junyan Cheng, Peter Clark, Kyle Richardson
2025
NeurIPS

Can we leverage LLMs to model the process of discovering novel language model (LM) architectures? Inspired by real research, we propose a multi-agent LLM approach that simulates the conventional… 

Open-ended Scientific Discovery via Bayesian Surprise

Dhruv Agarwal, Bodhisattwa Prasad Majumder, Reece Adamson, Peter Clark
2025
NeurIPS

The promise of autonomous scientific discovery (ASD) hinges not only on answering questions, but also on knowing which questions to ask. Most recent works in ASD explore the use of large language… 

MoNaCo: More Natural and Complex Questions for Reasoning Across Dozens of Documents

Tomer Wolfson, Harsh Trivedi, Mor Geva, Reut Tsarfaty
2025
TACL

Automated agents, powered by large language models (LLMs), are emerging as the go-to tool for querying information. However, evaluation benchmarks for LLM agents rarely feature natural questions… 

AstaBench: Rigorous Benchmarking of AI Agents with a Holistic Scientific Research Suite

Jonathan Bragg, Mike D'Arcy, Nishant Balepur, Daniel S. Weld
2025
Preprint

AI agents hold great real-world promise, with the potential to revolutionize scientific productivity by automating literature reviews, replicating experiments, analyzing data, and even proposing new… 

Signal and Noise: A Framework for Reducing Uncertainty in Language Model Evaluation

David Heineman, Valentin Hofmann, Ian Magnusson, Jesse Dodge
2025
arXiv.org

Developing large language models is expensive and involves making decisions with small experiments, typically by evaluating on large, multi-task evaluation suites. In this work, we analyze specific… 

DataDecide: How to Predict Best Pretraining Data with Small Experiments

Ian Magnusson, Nguyen Tai, Ben Bogin, Jesse Dodge
2025
ICML

Because large language models are expensive to pretrain on different datasets, using smaller-scale experiments to decide on data is crucial for reducing costs. Which benchmarks and methods of making… 

MIB: A Mechanistic Interpretability Benchmark

Aaron Mueller, Atticus Geiger, Sarah Wiegreffe, Yonatan Belinkov
2025
ICML

How can we know whether new mechanistic interpretability methods achieve real improvements? In pursuit of meaningful and lasting evaluation standards, we propose MIB, a benchmark with two tracks…