Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Answer, Assemble, Ace: Understanding How LMs Answer Multiple Choice Questions

Sarah Wiegreffe, Oyvind Tafjord, Yonatan Belinkov, Ashish Sabharwal
2025
ICLR

Multiple-choice question answering (MCQA) is a key competence of performant transformer language models that is tested by mainstream benchmarks. However, recent evidence shows that models can have… 

LLM-SR: Scientific Equation Discovery via Programming with Large Language Models

Parshin Shojaee, Kazem Meidani, Shashank Gupta, Chandan K Reddy
2025
ICLR

Mathematical equations have been unreasonably effective in describing complex natural phenomena across various scientific disciplines. However, discovering such insightful equations from data… 

DISCOVERYWORLD: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents

Peter Jansen, Marc-Alexandre Cote, Tushar Khot, Peter Clark
2024
NeurIPS Datasets and Benchmarks

Automated scientific discovery promises to accelerate progress across scientific domains. However, developing and evaluating an AI agent's capacity for end-to-end scientific reasoning is challenging… 

Paloma: A Benchmark for Evaluating Language Model Fit

Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Jesse Dodge
2024
NeurIPS

Language models (LMs) commonly report perplexity on monolithic data held out from training. Implicitly or explicitly, this data is composed of domains – varying distributions of…

The Art of Saying No: Contextual Noncompliance in Language Models

Faeze Brahman, Sachin Kumar, Vidhisha Balachandran, Hannaneh Hajishirzi
2024
NeurIPS Datasets & Benchmarks

Chat-based language models are designed to be helpful, yet they should not comply with every user request. While most existing work primarily focuses on refusal of "unsafe" queries, we posit that the…

Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

Yanai Elazar, Bhargavi Paranjape, Hao Peng, Noah A. Smith
2024
EMNLP

The inevitable appearance of spurious correlations in training datasets hurts the generalization of NLP models on unseen data. Previous work has found that datasets with paired inputs are prone to… 

SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories

Ben Bogin, Kejuan Yang, Shashank Gupta, Tushar Khot
2024
EMNLP

Given that Large Language Models (LLMs) have made significant progress in writing code, can they now be used to autonomously reproduce results from research repositories? Such a capability would be… 

CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization

Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Peter Jansen, Peter Clark
2024
COLM

Language agents have shown some ability to interact with an external environment, e.g., a virtual world such as ScienceWorld, to perform complex tasks, e.g., growing a plant, without the startup… 

IdeaSynth: Iterative Research Idea Development Through Evolving and Composing Idea Facets with Literature-Grounded Feedback

Kevin Pu, K. Feng, Tovi Grossman, Pao Siangliulue
2024
arXiv.org

Research ideation involves broadly exploring and deeply refining ideas. Both require deep engagement with literature. Existing tools primarily focus on broad idea generation, yet offer little support…

AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents

Harsh Trivedi, Tushar Khot, Mareike Hartmann, Niranjan Balasubramanian
2024
ACL

Autonomous agents that address day-to-day digital tasks (e.g., ordering groceries for a household), must not only operate multiple apps (e.g., notes, messaging, shopping app) via APIs, but also… 
