Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Leveraging In-Context Learning for Language Model Agents

Shivanshu Gupta, Sameer Singh, Ashish Sabharwal, Ben Bogin
2025
NeurIPS • Workshop on Multi-Turn Interactions in LLMs

In-context learning (ICL) with dynamically selected demonstrations combines the flexibility of prompting large language models (LLMs) with the ability to leverage training data to improve… 

A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers

William Merrill, Ashish Sabharwal
2025
NeurIPS

Recent theoretical results show transformers cannot express sequential reasoning problems over long inputs, intuitively because their computational *depth* is bounded. However, prior work treats the… 

Exact Expressive Power of Transformers with Padding

William Merrill, Ashish Sabharwal
2025
NeurIPS

Chain of thought is a natural inference-time method for increasing the computational power of transformer-based large language models (LLMs), but comes at the cost of sequential decoding. Are there… 

Language Modeling by Language Models

Junyan Cheng, Peter Clark, Kyle Richardson
2025
NeurIPS

Can we leverage LLMs to model the process of discovering novel language model (LM) architectures? Inspired by real research, we propose a multi-agent LLM approach that simulates the conventional… 

Open-ended Scientific Discovery via Bayesian Surprise

Dhruv Agarwal, Bodhisattwa Prasad Majumder, Reece Adamson, Peter Clark
2025
NeurIPS

The promise of autonomous scientific discovery (ASD) hinges not only on answering questions, but also on knowing which questions to ask. Most recent works in ASD explore the use of large language… 

MoNaCo: More Natural and Complex Questions for Reasoning Across Dozens of Documents

Tomer Wolfson, Harsh Trivedi, Mor Geva, Reut Tsarfaty
2025
TACL

Automated agents, powered by large language models (LLMs), are emerging as the go-to tool for querying information. However, evaluation benchmarks for LLM agents rarely feature natural questions… 

Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning

Shuyue Stella Li, Jimin Mun, Faeze Brahman, Maarten Sap
2025
COLM

Large language models (LLMs) often fail to ask effective questions under uncertainty, making them unreliable in domains where proactive information-gathering is essential for decision-making. We… 

HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions

Xuhui Zhou, Hyunwoo Kim, Faeze Brahman, Maarten Sap
2025
COLM

AI agents are increasingly autonomous in their interactions with human users and tools, leading to increased interactional safety risks. We present HAICOSYSTEM, a framework examining AI agent safety… 

ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data

Tong Chen, Faeze Brahman, Jiacheng Liu, Hanna Hajishirzi
2025
COLM

Language models (LMs) can memorize and reproduce segments from their pretraining data verbatim even in non-adversarial settings, raising concerns about copyright, plagiarism, privacy, and… 

AstaBench: Rigorous Benchmarking of AI Agents with a Holistic Scientific Research Suite

Jonathan Bragg, Mike D'Arcy, Nishant Balepur, Daniel S. Weld
2025
Preprint

AI agents hold great real-world promise, with the potential to revolutionize scientific productivity by automating literature reviews, replicating experiments, analyzing data, and even proposing new… 
