
Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers

William Merrill, Ashish Sabharwal
2025
NeurIPS

Recent theoretical results show transformers cannot express sequential reasoning problems over long inputs, intuitively because their computational *depth* is bounded. However, prior work treats the… 

Exact Expressive Power of Transformers with Padding

William Merrill, Ashish Sabharwal
2025
NeurIPS

Chain of thought is a natural inference-time method for increasing the computational power of transformer-based large language models (LLMs), but comes at the cost of sequential decoding. Are there… 

Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning

Shuyue Stella Li, Jimin Mun, Faeze Brahman, Maarten Sap
2025
COLM

Large language models (LLMs) often fail to ask effective questions under uncertainty, making them unreliable in domains where proactive information-gathering is essential for decision-making. We… 

HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions

Xuhui Zhou, Hyunwoo Kim, Faeze Brahman, Maarten Sap
2025
COLM

AI agents are increasingly autonomous in their interactions with human users and tools, leading to increased interactional safety risks. We present HAICOSYSTEM, a framework examining AI agent safety… 

ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data

Tong Chen, Faeze Brahman, Jiacheng Liu, Hanna Hajishirzi
2025
COLM

Language models (LMs) can memorize and reproduce segments from their pretraining data verbatim even in non-adversarial settings, raising concerns about copyright, plagiarism, privacy, and… 

AstaBench: Rigorous Benchmarking of AI Agents with a Holistic Scientific Research Suite

Jonathan Bragg, Mike D'Arcy, Nishant Balepur, Daniel S. Weld
2025
Preprint

AI agents hold great real-world promise, with the potential to revolutionize scientific productivity by automating literature reviews, replicating experiments, analyzing data, and even proposing new… 

FlexOlmo: Open Language Models for Flexible Data Use

Weijia Shi, Akshita Bhagia, Kevin Farhat, Sewon Min
2025
arXiv.org

We introduce FlexOlmo, a new class of language models (LMs) that supports (1) distributed training without data sharing, where different model parameters are independently trained on closed… 

Signal and Noise: A Framework for Reducing Uncertainty in Language Model Evaluation

David Heineman, Valentin Hofmann, Ian Magnusson, Jesse Dodge
2025
arXiv.org

Developing large language models is expensive and involves making decisions with small experiments, typically by evaluating on large, multi-task evaluation suites. In this work, we analyze specific… 

DataDecide: How to Predict Best Pretraining Data with Small Experiments

Ian Magnusson, Nguyen Tai, Ben Bogin, Jesse Dodge
2025
ICML

Because large language models are expensive to pretrain on different datasets, using smaller-scale experiments to decide on data is crucial for reducing costs. Which benchmarks and methods of making… 
