Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text

Elizabeth Clark, Tal August, Sofia Serrano, Noah A. Smith
2021
ACL

Human evaluations are typically considered the gold standard in natural language generation, but as models' fluency improves, how well can evaluators detect and judge machine-generated text? We run… 

Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies

Mor Geva, Daniel Khashabi, Elad Segal, Jonathan Berant
2021
TACL

A key limitation in current datasets for multi-hop reasoning is that the required steps for answering the question are mentioned in it explicitly. In this work, we introduce STRATEGYQA, a question… 

Edited Media Understanding Frames: Reasoning about the Intent and Implications of Visual Disinformation

Jeff Da, Maxwell Forbes, Rowan Zellers, Yejin Choi
2021
ACL

Multimodal disinformation, from 'deepfakes' to simple edits that deceive, is an important societal problem. Yet at the same time, the vast majority of media edits are harmless -- such as a filtered… 

Effective Attention Sheds Light On Interpretability

Kaiser Sun, Ana Marasović
2021
Findings of ACL

An attention matrix of a transformer self-attention sublayer can provably be decomposed into two components, and only one of them (effective attention) contributes to the model output. This leads us… 

Explaining NLP Models via Minimal Contrastive Editing (MiCE)

Alexis Ross, Ana Marasović, Matthew E. Peters
2021
Findings of ACL

Humans give contrastive explanations that explain why an observed event happened rather than some other counterfactual event (the contrast case). Despite the important role that contrastivity plays… 

Explaining Relationships Between Scientific Documents

Kelvin Luu, Xinyi Wu, Rik Koncel-Kedziorski, Noah A. Smith
2021
ACL

We address the task of explaining relationships between two scientific documents using natural language text. This task requires modeling the complex content of long technical documents, deducing a… 

Few-Shot Question Answering by Pretraining Span Selection

Ori Ram, Yuval Kirstain, Jonathan Berant, Omer Levy
2021
ACL

In a number of question answering (QA) benchmarks, pretrained models have reached human parity through fine-tuning on the order of 100,000 annotated questions and answers. We explore the more… 

How effective is BERT without word ordering? Implications for language understanding and data privacy

Jack Hessel, Alexandra Schofield
2021
ACL

Ordered word sequences contain the rich structures that define language. However, it’s often not clear if or how modern pretrained language models utilize these structures. We show that the token… 

Neural Extractive Search

Shaul Ravfogel, Hillel Taub-Tabib, Yoav Goldberg
2021
ACL • Demo Track

Domain experts often need to extract structured information from large corpora. We advocate for a search paradigm called “extractive search”, in which a search query is enriched with capture-slots,… 

PAWLS: PDF Annotation With Labels and Structure

Mark Neumann, Zejiang Shen, Sam Skjonsberg
2021
ACL • Demo Track

Adobe’s Portable Document Format (PDF) is a popular way of distributing view-only documents with a rich visual markup. This presents a challenge to NLP practitioners who wish to use the information…