
Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

Yanai Elazar, Bhargavi Paranjape, Hao Peng, Noah A. Smith
2024
EMNLP

The inevitable appearance of spurious correlations in training datasets hurts the generalization of NLP models on unseen data. Previous work has found that datasets with paired inputs are prone to… 

Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation

Bar Iluz, Yanai Elazar, Asaf Yehudai, Gabriel Stanovsky
2024
EMNLP

Most work on gender bias focuses on intrinsic bias, i.e., removing traces of information about a protected group from the model's internal representation. However, these works are often disconnected from… 

Evaluating n-Gram Novelty of Language Models Using Rusty-DAWG

William Merrill, Noah A. Smith, Yanai Elazar
2024
EMNLP

How novel are texts generated by language models (LMs) relative to their training corpora? In this work, we investigate the extent to which modern LMs generate n-grams from their training data,… 

Detection and Measurement of Syntactic Templates in Generated Text

Chantal Shaib, Yanai Elazar, Junyi Jessy Li, Byron C. Wallace
2024
EMNLP

Recent work on evaluating the diversity of text generated by LLMs has focused on word-level features. Here we offer an analysis of syntactic features to characterize general repetition in models,… 

CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization

Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Peter Jansen, Peter Clark
2024
COLM

Language agents have shown some ability to interact with an external environment, e.g., a virtual world such as ScienceWorld, to perform complex tasks, e.g., growing a plant, without the startup… 

Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models

Matt Deitke, Christopher Clark, Sangho Lee, Aniruddha Kembhavi
2024
arXiv

Today's most advanced multimodal models remain proprietary. The strongest open-weight models rely heavily on synthetic data from proprietary VLMs to achieve good performance, effectively distilling… 

Application of the AI2 Climate Emulator to E3SMv2's global atmosphere model, with a focus on precipitation fidelity

James P. C. Duncan, Elynn Wu, Jean-Christophe Golaz, and Christopher S. Bretherton
2024
Journal of Geophysical Research: Machine Learning and Computation

Can the current successes of global machine learning-based weather simulators be generalized beyond 2-week forecasts to stable and accurate multiyear runs? The recently developed AI2 Climate… 

Pushing the frontiers in climate modelling and analysis with machine learning

V. Eyring, William D. Collins, Pierre Gentine, Laure Zanna
2024
Nature Climate Change

Climate modelling and analysis are facing new demands to enhance projections and climate information. Here we argue that now is the time to push the frontiers of machine learning beyond… 

Weather and climate predicted accurately — without using a supercomputer

Oliver Watt-Meyer
2024
Nature

A cutting-edge global model of the atmosphere combines machine learning with a numerical model based on the laws of physics. This ‘hybrid’ system accurately predicts the weather — and even shows… 

The Unreasonable Effectiveness of Easy Training Data for Hard Tasks

Peter Hase, Mohit Bansal, Peter Clark, Sarah Wiegreffe
2024
ACL

How can we train models to perform well on hard test data when hard training data is by definition difficult to label correctly? This question has been termed the scalable oversight problem and has… 
