Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Diverging Preferences: When do Annotators Disagree and do Models Know?

Michael J.Q. Zhang, Zhilin Wang, Jena D. Hwang, Valentina Pyatkin
2025
ICML

We examine diverging preferences in human-labeled preference datasets. We develop a taxonomy of disagreement sources spanning 10 categories across four high-level classes -- task underspecification,… 

MIB: A Mechanistic Interpretability Benchmark

Aaron Mueller, Atticus Geiger, Sarah Wiegreffe, Yonatan Belinkov
2025
ICML

How can we know whether new mechanistic interpretability methods achieve real improvements? In pursuit of meaningful and lasting evaluation standards, we propose MIB, a benchmark with two tracks… 

SafetyAnalyst: Interpretable, transparent, and steerable safety moderation for AI behavior

Jing-Jing Li, Valentina Pyatkin, Max Kleiman-Weiner, Sydney Levine
2025
ICML

The ideal AI safety moderation system would be both structurally interpretable (so its decisions can be reliably explained) and steerable (to align to safety standards and reflect a community's… 

OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens

Jiacheng Liu, Taylor Blanton, Yanai Elazar, Jesse Dodge
2025
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)

We present OLMoTrace, the first system that traces the outputs of language models back to their full, multi-trillion-token training data in real time. OLMoTrace finds and shows verbatim matches… 

Language Modeling by Language Models

Junyan Cheng, Peter Clark, Kyle Richardson
2025
NeurIPS 2025 (also on arXiv)

Can we leverage LLMs to model the process of discovering novel language model (LM) architectures? Inspired by real research, we propose a multi-agent LLM approach that simulates the conventional… 

Critical Batch Size Revisited: A Simple Empirical Approach to Large-Batch Language Model Training

William Merrill, Shane Arora, Dirk Groeneveld, Hanna Hajishirzi
2025
arXiv

The right batch size is important when training language models at scale: a large batch size is necessary for fast training, but a batch size that is too large will harm token efficiency. To… 

Holodeck: Language Guided Generation of 3D Embodied AI Environments

Yue Yang, Fan-Yun Sun, Luca Weihs, Christopher Clark
2025
Computer Vision and Pattern Recognition

3D simulated environments play a critical role in Embodied AI, but their creation requires expertise and extensive manual effort, restricting their diversity and scope. To mitigate this limitation,… 

Multi-Attribute Constraint Satisfaction via Language Model Rewriting

Ashutosh Baheti, Debanjana Chakraborty, Faeze Brahman, Maarten Sap
2025
TMLR

Obeying precise constraints on top of multiple external attributes is a common computational problem underlying seemingly different domains, from controlled text generation to protein engineering.… 

ACE2: accurately learning subseasonal to decadal atmospheric variability and forced responses

Oliver Watt-Meyer, Brian Henn, Jeremy McGibbon, Christopher S. Bretherton
2025
NPJ Climate and Atmospheric Science

Existing machine learning models of weather variability are not formulated to enable assessment of their response to varying external boundary conditions such as sea surface temperature and…