Ai2

Research Papers

Explore a selection of our published work on a variety of key research challenges in AI.


DISCOVERYWORLD: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents

Peter Jansen, Marc-Alexandre Cote, Tushar Khot, Peter Clark
2024
NeurIPS Datasets and Benchmarks

Automated scientific discovery promises to accelerate progress across scientific domains. However, developing and evaluating an AI agent's capacity for end-to-end scientific reasoning is challenging… 

Paloma: A Benchmark for Evaluating Language Model Fit

Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Jesse Dodge
2024
NeurIPS

Language models (LMs) commonly report perplexity on monolithic data held out from training. Implicitly or explicitly, this data is composed of domains -- varying distributions of… 

The Art of Saying No: Contextual Noncompliance in Language Models

Faeze Brahman, Sachin Kumar, Vidhisha Balachandran, Hannaneh Hajishirzi
2024
NeurIPS Datasets and Benchmarks

Chat-based language models are designed to be helpful, yet they should not comply with every user request. While most existing work primarily focuses on refusal of "unsafe" queries, we posit that the… 

Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation

Bar Iluz, Yanai Elazar, Asaf Yehudai, Gabriel Stanovsky
2024
EMNLP

Most works on gender bias focus on intrinsic bias -- removing traces of information about a protected group from the model's internal representation. However, these works are often disconnected from… 

Detection and Measurement of Syntactic Templates in Generated Text

Chantal Shaib, Yanai Elazar, Junyi Jessy Li, Byron C. Wallace
2024
EMNLP

Recent work on evaluating the diversity of text generated by LLMs has focused on word-level features. Here we offer an analysis of syntactic features to characterize general repetition in models,… 

Evaluating n-Gram Novelty of Language Models Using Rusty-DAWG

William Merrill, Noah A. Smith, Yanai Elazar
2024
EMNLP

How novel are texts generated by language models (LMs) relative to their training corpora? In this work, we investigate the extent to which modern LMs generate n-grams from their training data,… 

Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

Yanai Elazar, Bhargavi Paranjape, Hao Peng, Noah A. Smith
2024
EMNLP

The inevitable appearance of spurious correlations in training datasets hurts the generalization of NLP models on unseen data. Previous work has found that datasets with paired inputs are prone to… 

SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories

Ben Bogin, Kejuan Yang, Shashank Gupta, Tushar Khot
2024
EMNLP

Given that Large Language Models (LLMs) have made significant progress in writing code, can they now be used to autonomously reproduce results from research repositories? Such a capability would be… 

CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization

Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Peter Jansen, Peter Clark
2024
COLM

Language agents have shown some ability to interact with an external environment, e.g., a virtual world such as ScienceWorld, to perform complex tasks, e.g., growing a plant, without the startup… 

Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models

Matt Deitke, Christopher Clark, Sangho Lee, Aniruddha Kembhavi
2024
arXiv

Today's most advanced multimodal models remain proprietary. The strongest open-weight models rely heavily on synthetic data from proprietary VLMs to achieve good performance, effectively distilling… 
