Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


MIB: A Mechanistic Interpretability Benchmark

Aaron Mueller, Atticus Geiger, Sarah Wiegreffe, Yonatan Belinkov
2025
ICML

How can we know whether new mechanistic interpretability methods achieve real improvements? In pursuit of meaningful and lasting evaluation standards, we propose MIB, a benchmark with two tracks… 

Language Modeling by Language Models

Junyan Cheng, Peter Clark, Kyle Richardson
2025
arXiv

Can we leverage LLMs to model the process of discovering novel language model (LM) architectures? Inspired by real research, we propose a multi-agent LLM approach that simulates the conventional… 

Answer, Assemble, Ace: Understanding How LMs Answer Multiple Choice Questions

Sarah Wiegreffe, Oyvind Tafjord, Yonatan Belinkov, Ashish Sabharwal
2025
ICLR

Multiple-choice question answering (MCQA) is a key competence of performant transformer language models that is tested by mainstream benchmarks. However, recent evidence shows that models can have… 

DiscoveryBench: Towards Data-Driven Discovery with Large Language Models

Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Peter Clark
2025
ICLR

Can the rapid advances in code generation, function calling, and data analysis using large language models (LLMs) help automate the search and verification of hypotheses purely from a set of… 

LLM-SR: Scientific Equation Discovery via Programming with Large Language Models

Parshin Shojaee, Kazem Meidani, Shashank Gupta, Chandan K. Reddy
2025
ICLR

Mathematical equations have been unreasonably effective in describing complex natural phenomena across various scientific disciplines. However, discovering such insightful equations from data… 

On Linear Representations and Pretraining Data Frequency in Language Models

Jack Merullo, Noah A. Smith, Sarah Wiegreffe, Yanai Elazar
2025
ICLR

Pretraining data has a direct impact on the behaviors and quality of language models (LMs), but we only understand the most basic principles of this relationship. While most work focuses on… 

WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild

Bill Yuchen Lin, Yuntian Deng, K. Chandu, Yejin Choi
2025
ICLR

We introduce WildBench, an automated evaluation framework designed to benchmark large language models (LLMs) using challenging, real-world user queries. WildBench consists of 1,024 tasks carefully… 

Understanding the Logic of Direct Preference Alignment through Logic

Kyle Richardson, Vivek Srikumar, Ashish Sabharwal
2025
ICML

Recent direct preference alignment algorithms (DPA), such as DPO, have shown great promise in aligning large language models to human preferences. While this has motivated the development of many… 

CodeScientist: End-to-End Semi-Automated Scientific Discovery with Code-based Experimentation

Peter Jansen, Oyvind Tafjord, Marissa Radensky, Peter Clark
2025
arXiv

Despite the surge of interest in autonomous scientific discovery (ASD) of software artifacts (e.g., improved ML algorithms), current ASD systems face two key limitations: (1) they largely explore… 

A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers

William Merrill, Ashish Sabharwal
2025
arXiv

Recent theoretical results show transformers cannot express sequential reasoning problems over long input lengths, intuitively because their computational depth is bounded. However, prior work… 
