Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.

SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature

David Wadden, Kejian Shi, Jacob Daniel Morrison, Arman Cohan
2025
EMNLP

We present SciRIFF (Scientific Resource for Instruction-Following and Finetuning), a dataset of 137K instruction-following instances for training and evaluation, covering 54 tasks. These tasks span… 

Text or Pixels? It Takes Half: On the Token Efficiency of Visual Text Inputs in Multimodal LLMs

Yanhong Li, Zixuan Lan, Jiawei Zhou
2025
EMNLP

Large language models (LLMs) and their multimodal variants can now process visual inputs, including images of text. This raises an intriguing question: can we compress textual inputs by feeding them… 

Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning

Shuyue Stella Li, Jimin Mun, Faeze Brahman, Maarten Sap
2025
COLM

Large language models (LLMs) often fail to ask effective questions under uncertainty, making them unreliable in domains where proactive information-gathering is essential for decision-making. We… 

HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions

Xuhui Zhou, Hyunwoo Kim, Faeze Brahman, Maarten Sap
2025
COLM

AI agents are increasingly autonomous in their interactions with human users and tools, leading to increased interactional safety risks. We present HAICOSYSTEM, a framework examining AI agent safety… 

ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data

Tong Chen, Faeze Brahman, Jiacheng Liu, Hanna Hajishirzi
2025
COLM

Language models (LMs) can memorize and reproduce segments from their pretraining data verbatim even in non-adversarial settings, raising concerns about copyright, plagiarism, privacy, and… 

FlexOlmo: Open Language Models for Flexible Data Use

Weijia Shi, Akshita Bhagia, Kevin Farhat, Sewon Min
2025
arXiv.org

We introduce FlexOlmo, a new class of language models (LMs) that supports (1) distributed training without data sharing, where different model parameters are independently trained on closed… 

Signal and Noise: A Framework for Reducing Uncertainty in Language Model Evaluation

David Heineman, Valentin Hofmann, Ian Magnusson, Jesse Dodge
2025
arXiv.org

Developing large language models is expensive and involves making decisions with small experiments, typically by evaluating on large, multi-task evaluation suites. In this work, we analyze specific… 

DataDecide: How to Predict Best Pretraining Data with Small Experiments

Ian Magnusson, Nguyen Tai, Ben Bogin, Jesse Dodge
2025
ICML

Because large language models are expensive to pretrain on different datasets, using smaller-scale experiments to decide on data is crucial for reducing costs. Which benchmarks and methods of making… 

Diverging Preferences: When do Annotators Disagree and do Models Know?

Michael J.Q. Zhang, Zhilin Wang, Jena D. Hwang, Valentina Pyatkin
2025
ICML

We examine diverging preferences in human-labeled preference datasets. We develop a taxonomy of disagreement sources spanning 10 categories across four high-level classes -- task underspecification,… 