Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

Yanai Elazar, Bhargavi Paranjape, Hao Peng, Noah A. Smith
2024
EMNLP

The inevitable appearance of spurious correlations in training datasets hurts the generalization of NLP models on unseen data. Previous work has found that datasets with paired inputs are prone to… 

Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation

Bar Iluz, Yanai Elazar, Asaf Yehudai, Gabriel Stanovsky
2024
EMNLP

Most works on gender bias focus on intrinsic bias -- removing traces of information about a protected group from the model's internal representation. However, these works are often disconnected from… 

Evaluating n-Gram Novelty of Language Models Using Rusty-DAWG

William Merrill, Noah A. Smith, Yanai Elazar
2024
EMNLP

How novel are texts generated by language models (LMs) relative to their training corpora? In this work, we investigate the extent to which modern LMs generate n-grams from their training data,… 

Detection and Measurement of Syntactic Templates in Generated Text

Chantal Shaib, Yanai Elazar, Junyi Jessy Li, Byron C. Wallace
2024
EMNLP

Recent work on evaluating the diversity of text generated by LLMs has focused on word-level features. Here we offer an analysis of syntactic features to characterize general repetition in models,… 

SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories

Ben Bogin, Kejuan Yang, Shashank Gupta, Tushar Khot
2024
EMNLP

Given that Large Language Models (LLMs) have made significant progress in writing code, can they now be used to autonomously reproduce results from research repositories? Such a capability would be… 

Scalable Data Ablation Approximations for Language Models through Modular Training and Merging

Clara Na, Ian Magnusson, Ananya Harsh Jha, Pradeep Dasigi
2024
EMNLP

Training data compositions for Large Language Models (LLMs) can significantly affect their downstream performance. However, a thorough data ablation study exploring large sets of candidate data… 

Merge to Learn: Efficiently Adding Skills to Language Models with Model Merging

Jacob Daniel Morrison, Noah A. Smith, Hanna Hajishirzi, Pradeep Dasigi
2024
EMNLP Findings

Adapting general-purpose language models to new skills is currently an expensive process that must be repeated as new instruction datasets targeting new skills are created, or can cause the models… 

ComPO: Community Preferences for Language Model Personalization

Sachin Kumar, Chan Young Park, Yulia Tsvetkov, Hanna Hajishirzi
2024
arXiv.org

Conventional algorithms for training language models (LMs) with human feedback rely on preferences that are assumed to account for an "average" user, disregarding subjectivity and finer-grained… 

CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization

Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Peter Jansen, Peter Clark
2024
COLM

Language agents have shown some ability to interact with an external environment, e.g., a virtual world such as ScienceWorld, to perform complex tasks, e.g., growing a plant, without the startup… 

IdeaSynth: Iterative Research Idea Development Through Evolving and Composing Idea Facets with Literature-Grounded Feedback

Kevin Pu, K. Feng, Tovi Grossman, Pao Siangliulue
2024
arXiv.org

Research ideation involves broadly exploring and deeply refining ideas, both of which require deep engagement with the literature. Existing tools focus primarily on broad idea generation, yet offer little support…