Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Competency Problems: On Finding and Removing Artifacts in Language Data

Matt Gardner, William Cooper Merrill, Jesse Dodge, Noah A. Smith
2021
EMNLP

Much recent work in NLP has documented dataset artifacts, bias, and spurious correlations between input features and output labels. However, how to tell which features have “spurious” instead of… 

Expected Validation Performance and Estimation of a Random Variable's Maximum

Jesse Dodge, Suchin Gururangan, D. Card, Noah A. Smith
2021
Findings of EMNLP

Research in NLP is often supported by experimental results, and improved reporting of such results can lead to better understanding and more reproducible science. In this paper we analyze three… 
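For context on the kind of quantity this paper studies, the expected maximum of n i.i.d. validation scores is commonly estimated with a plug-in formula over the empirical CDF of previously observed runs. The sketch below is illustrative only; the function name and this particular estimator are assumptions, not necessarily one of the methods the paper analyzes.

```python
import numpy as np

def expected_max_of_n(scores, n):
    """Plug-in estimate of E[max of n i.i.d. draws] from an empirical sample.

    Uses the empirical CDF: P(max of n draws <= v) = F(v)**n, so the expected
    maximum is a sum over sorted sample values, each weighted by the
    probability that it is the largest of the n draws.
    """
    v = np.sort(np.asarray(scores, dtype=float))
    N = len(v)
    cdf = np.arange(1, N + 1) / N        # empirical CDF at each sorted value
    cdf_prev = np.arange(0, N) / N       # empirical CDF just below each value
    weights = cdf**n - cdf_prev**n       # P(the i-th value is the max of n draws)
    return float(np.sum(weights * v))

# Example: expected best validation accuracy after 10 hyperparameter trials,
# estimated from 50 previously observed runs.
rng = np.random.default_rng(0)
observed = rng.normal(0.80, 0.03, size=50)
print(expected_max_of_n(observed, n=10))
```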

Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation

Jungo Kasai, Nikolaos Pappas, Hao Peng, Noah A. Smith
2021
ICLR

State-of-the-art neural machine translation models generate outputs autoregressively, where every step conditions on the previously generated tokens. This sequential nature causes inherent decoding… 
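The sequential bottleneck described in this abstract can be made concrete with a generic greedy decoding loop; the `model` object and its `encode`/`decode_step` interface are hypothetical placeholders, not the paper's implementation.

```python
def greedy_decode(model, src_tokens, bos_id, eos_id, max_len=128):
    """Autoregressive greedy decoding: each output token must wait for the
    previous one, so the decoder runs up to max_len sequential steps."""
    encoder_out = model.encode(src_tokens)      # encoder runs once, fully in parallel
    out = [bos_id]
    for _ in range(max_len):                    # decoder runs once per generated token
        logits = model.decode_step(encoder_out, out)
        next_id = int(logits.argmax())
        out.append(next_id)
        if next_id == eos_id:
            break
    return out
```

A non-autoregressive decoder instead predicts all target positions in a single parallel pass, which is the speed/quality trade-off the paper re-examines by pairing a deep encoder with a shallow decoder.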

Random Feature Attention

Hao Peng, Nikolaos Pappas, Dani Yogatama, Lingpeng Kong
2021
ICLR

Transformers are state-of-the-art models for a variety of sequence modeling tasks. At their core is an attention function which models pairwise interactions between the inputs at every timestep.… 
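For reference, the pairwise attention function the abstract refers to is standard softmax attention, sketched below in NumPy (an illustrative sketch, not the paper's code).

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard softmax attention: every query interacts with every key,
    producing an (n, n) matrix of pairwise scores for a length-n sequence."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])               # (n, n) pairwise interactions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                    # (n, d) outputs
```

Random-feature methods approximate the exponential kernel exp(q·k) with an inner product of feature maps, which lets the sums over keys and values be shared across queries so that attention can be computed in time linear in sequence length.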

All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text

Elizabeth Clark, Tal August, Sofia Serrano, Noah A. Smith
2021
ACL

Human evaluations are typically considered the gold standard in natural language generation, but as models' fluency improves, how well can evaluators detect and judge machine-generated text? We run… 

Effective Attention Sheds Light On Interpretability

Kaiser Sun and Ana Marasović
2021
Findings of ACL

An attention matrix of a transformer self-attention sublayer can provably be decomposed into two components and only one of them (effective attention) contributes to the model output. This leads us… 
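The decomposition mentioned here can be illustrated using the usual definition of effective attention as the part of the attention matrix lying outside the left null space of the value matrix. The NumPy sketch below assumes that definition and is illustrative only, not the authors' code.

```python
import numpy as np

def effective_attention(A, V):
    """Split attention A (n x n) into A_eff + A_null with A_null @ V == 0,
    so only A_eff (the effective part) influences the sublayer output A @ V."""
    P_col = V @ np.linalg.pinv(V)   # orthogonal projector onto the column space of V
    A_eff = A @ P_col               # rows of A projected onto col(V)
    A_null = A - A_eff              # component in the left null space of V
    return A_eff, A_null

# Sanity check: the output is unchanged when the null component is dropped.
rng = np.random.default_rng(0)
A = rng.random((6, 6)); A /= A.sum(axis=-1, keepdims=True)   # row-stochastic attention
V = rng.normal(size=(6, 4))
A_eff, A_null = effective_attention(A, V)
assert np.allclose(A @ V, A_eff @ V)
assert np.allclose(A_null @ V, 0.0, atol=1e-10)
```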

Explaining NLP Models via Minimal Contrastive Editing (MiCE)

Alexis Ross, Ana Marasović, Matthew E. Peters
2021
Findings of ACL

Humans give contrastive explanations that explain why an observed event happened rather than some other counterfactual event (the contrast case). Despite the important role that contrastivity plays… 

Explaining Relationships Between Scientific Documents

Kelvin Luu, Xinyi Wu, Rik Koncel-Kedziorski, Noah A. Smith
2021
ACL

We address the task of explaining relationships between two scientific documents using natural language text. This task requires modeling the complex content of long technical documents, deducing a… 

PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World

Rowan Zellers, Ari Holtzman, Matthew E. Peters, Yejin Choi
2021
ACL

We propose PIGLeT: a model that learns physical commonsense knowledge through interaction, and then uses this knowledge to ground language. We factorize PIGLeT into a physical dynamics model, and a… 

Promoting Graph Awareness in Linearized Graph-to-Text Generation

Alexander M. Hoyle, Ana Marasović, Noah A. Smith
2021
Findings of ACL

Generating text from structured inputs, such as meaning representations or RDF triples, has often involved the use of specialized graph-encoding neural networks. However, recent applications of…
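To illustrate the linearized setting the abstract contrasts with specialized graph encoders, RDF triples can be flattened into a plain token sequence for an off-the-shelf sequence-to-sequence model. The marker tokens below are a common convention used here as an example, not necessarily the paper's exact format.

```python
def linearize(triples):
    """Flatten (head, relation, tail) triples into a single input string."""
    return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

triples = [
    ("Alan_Turing", "birthPlace", "London"),
    ("Alan_Turing", "field", "Computer_Science"),
]
print(linearize(triples))
# <H> Alan_Turing <R> birthPlace <T> London <H> Alan_Turing <R> field <T> Computer_Science
```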