Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature

David Wadden, Kejian Shi, Jacob Daniel Morrison, Arman Cohan
2025
EMNLP

We present SciRIFF (Scientific Resource for Instruction-Following and Finetuning), a dataset of 137K instruction-following instances for training and evaluation, covering 54 tasks. These tasks span… 

Intent-Aware Schema Generation And Refinement For Literature Review Tables

Vishakh Padmakumar, Joseph Chee Chang, Kyle Lo, Aakanksha Naik
2025
EMNLP

The increasing volume of academic literature makes it essential for researchers to organize, compare, and contrast collections of documents. Large language models (LLMs) can support this process by… 

Text or Pixels? It Takes Half: On the Token Efficiency of Visual Text Inputs in Multimodal LLMs

Yanhong Li, Zixuan Lan, Jiawei Zhou
2025
EMNLP

Large language models (LLMs) and their multimodal variants can now process visual inputs, including images of text. This raises an intriguing question: can we compress textual inputs by feeding them… 

Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation

Bar Iluz, Yanai Elazar, Asaf Yehudai, Gabriel Stanovsky
2024
EMNLP

Most works on gender bias focus on intrinsic bias -- removing traces of information about a protected group from the model's internal representation. However, these works are often disconnected from… 

ArxivDIGESTables: Synthesizing Scientific Literature into Tables using Language Models

Benjamin Newman, Yoonjoo Lee, Aakanksha Naik, Kyle Lo
2024
EMNLP

When conducting literature reviews, scientists often create literature review tables—tables whose rows are publications and whose columns constitute a schema, a set of aspects used to compare and… 

Detection and Measurement of Syntactic Templates in Generated Text

Chantal Shaib, Yanai Elazar, Junyi Jessy Li, Byron C. Wallace
2024
EMNLP

Recent work on evaluating the diversity of text generated by LLMs has focused on word-level features. Here we offer an analysis of syntactic features to characterize general repetition in models,… 

Evaluating n-Gram Novelty of Language Models Using Rusty-DAWG

William Merrill, Noah A. Smith, Yanai Elazar
2024
EMNLP

How novel are texts generated by language models (LMs) relative to their training corpora? In this work, we investigate the extent to which modern LMs generate n-grams from their training data,… 

Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

Yanai Elazar, Bhargavi Paranjape, Hao Peng, Noah A. Smith
2024
EMNLP Findings

The inevitable appearance of spurious correlations in training datasets hurts the generalization of NLP models on unseen data. Previous work has found that datasets with paired inputs are prone to… 

Mechanistic?

Naomi Saphra, Sarah Wiegreffe
2024
EMNLP • BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

The rise of the term “mechanistic interpretability” has accompanied increasing interest in understanding neural models—particularly language models. However, this jargon has also led to a fair… 

Merge to Learn: Efficiently Adding Skills to Language Models with Model Merging

Jacob Daniel Morrison, Noah A. Smith, Hanna Hajishirzi, Pradeep Dasigi
2024
EMNLP Findings

Adapting general-purpose language models to new skills is currently an expensive process that must be repeated as new instruction datasets targeting new skills are created, or can cause the models… 
