Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Paloma: A Benchmark for Evaluating Language Model Fit

Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Jesse Dodge
2024
NeurIPS

Language models (LMs) commonly report perplexity on monolithic data held out from training. Implicitly or explicitly, this data is composed of domains – varying distributions of… 
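
For readers unfamiliar with the metric, the sketch below illustrates the general idea of reporting perplexity separately per domain rather than over one pooled held-out set. The domains, numbers, and function names are hypothetical; this is not Paloma's evaluation code.

```python
import math

def perplexity(token_log_probs):
    """Perplexity from per-token natural-log probabilities."""
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

# Hypothetical held-out data, grouped by domain: name -> per-token log probs
held_out = {
    "news":   [-2.1, -1.8, -2.4, -1.9],
    "code":   [-3.0, -2.7, -2.9],
    "social": [-2.6, -2.2, -2.8, -2.5, -2.3],
}

# Reporting one number per domain, rather than a single pooled perplexity,
# exposes how a model's fit varies across different distributions of language.
for domain, log_probs in held_out.items():
    print(f"{domain}: perplexity = {perplexity(log_probs):.2f}")
```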

DISCOVERYWORLD: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents

Peter Jansen, Marc-Alexandre Cote, Tushar Khot, Peter Clark
2024
NeurIPS Datasets and Benchmarks

Automated scientific discovery promises to accelerate progress across scientific domains. However, developing and evaluating an AI agent's capacity for end-to-end scientific reasoning is challenging… 

The Art of Saying No: Contextual Noncompliance in Language Models

Faeze Brahman, Sachin Kumar, Vidhisha Balachandran, Hannaneh Hajishirzi
2024
NeurIPS Datasets and Benchmarks

Chat-based language models are designed to be helpful, yet they should not comply with every user request. While most existing work primarily focuses on refusal of "unsafe" queries, we posit that the… 

MAGNET: Improving the Multilingual Fairness of Language Models with Adaptive Gradient-Based Tokenization

Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Noah A. Smith
2024
NeurIPS

In multilingual settings, non-Latin scripts and low-resource languages are usually disadvantaged in terms of language models' utility, efficiency, and cost. Specifically, previous studies have… 

Tülu 3: Pushing Frontiers in Open Language Model Post-Training

Nathan Lambert, Jacob Daniel Morrison, Valentina Pyatkin, Hanna Hajishirzi
2024
arXiv

Language model post-training is applied to refine behaviors and unlock new skills across a wide range of recent language models, but open recipes for applying these techniques lag behind proprietary… 

Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

Yanai Elazar, Bhargavi Paranjape, Hao Peng, Noah A. Smith
2024
EMNLP

The inevitable appearance of spurious correlations in training datasets hurts the generalization of NLP models on unseen data. Previous work has found that datasets with paired inputs are prone to… 

Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation

Bar Iluz, Yanai Elazar, Asaf Yehudai, Gabriel Stanovsky
2024
EMNLP

Most works on gender bias focus on intrinsic bias -- removing traces of information about a protected group from the model's internal representation. However, these works are often disconnected from… 

Evaluating n-Gram Novelty of Language Models Using Rusty-DAWG

William Merrill, Noah A. Smith, Yanai Elazar
2024
EMNLP

How novel are texts generated by language models (LMs) relative to their training corpora? In this work, we investigate the extent to which modern LMs generate n-grams from their training data,… 
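
As a rough illustration of the underlying question, the toy sketch below checks which generated n-grams also occur in a tiny, hypothetical training corpus. The paper itself relies on Rusty-DAWG, an index over the training corpus, to make this lookup feasible at pretraining scale; this snippet does not reproduce that.

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Hypothetical toy corpora, purely for illustration.
training_tokens = "the cat sat on the mat".split()
generated_tokens = "the cat sat on the rug".split()

n = 3
training_set = set(ngrams(training_tokens, n))
generated = ngrams(generated_tokens, n)

# Fraction of generated n-grams that never appear in the training data.
novel = [g for g in generated if g not in training_set]
print(f"novel {n}-gram rate: {len(novel) / len(generated):.2f}")
```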

Detection and Measurement of Syntactic Templates in Generated Text

Chantal Shaib, Yanai Elazar, Junyi Jessy Li, Byron C. Wallace
2024
EMNLP

Recent work on evaluating the diversity of text generated by LLMs has focused on word-level features. Here we offer an analysis of syntactic features to characterize general repetition in models,… 

SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories

Ben Bogin, Kejuan Yang, Shashank Gupta, Tushar Khot
2024
EMNLP

Given that Large Language Models (LLMs) have made significant progress in writing code, can they now be used to autonomously reproduce results from research repositories? Such a capability would be… 
