Ai2 Research Papers

Explore a selection of our published work on a variety of key research challenges in AI.


MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics

Anthony Chen, Gabriel Stanovsky, S. Singh, Matt Gardner
2020
EMNLP

Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded… 

More Bang for Your Buck: Natural Perturbation for Robust Question Answering

Daniel Khashabi, Tushar Khot, Ashish Sabharwal
2020
EMNLP

While recent models have achieved human-level scores on many NLP datasets, we observe that they are considerably sensitive to small changes in input. As an alternative to the standard approach of… 

Multilevel Text Alignment with Cross-Document Attention

Xuhui Zhou, Nikolaos Pappas, Noah A. Smith
2020
EMNLP

Text alignment finds application in tasks such as citation recommendation and plagiarism detection. Existing alignment methods operate at a single, predefined level and cannot learn to align texts… 

Multi-Step Inference for Reasoning over Paragraphs

Jiangming Liu, Matt Gardner, Shay B. Cohen, Mirella Lapata
2020
EMNLP

Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives. Prior work has largely tried to do this either symbolically or with black-box… 

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs

Ana Marasović, Chandra Bhagavatula, J. Park, Yejin Choi
2020
Findings of EMNLP

Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on… 

OCNLI: Original Chinese Natural Language Inference

H. Hu, Kyle Richardson, Liang Xu, L. Moss
2020
Findings of EMNLP

Despite the tremendous recent progress on natural language inference (NLI), driven largely by large-scale investment in new datasets (e.g., SNLI, MNLI) and advances in modeling, most progress has… 

Parsing with Multilingual BERT, a Small Treebank, and a Small Corpus

Ethan C. Chau, Lucy H. Lin, Noah A. Smith
2020
Findings of EMNLP

Pretrained multilingual contextual representations have shown great success, but due to the limits of their pretraining data, their benefits do not apply equally to all language varieties. This… 

PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking

Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, Jianfeng Gao
2020
EMNLP

We propose the task of outline-conditioned story generation: given an outline as a set of phrases that describe key characters and events to appear in a story, the task is to generate a coherent… 

Plug and Play Autoencoders for Conditional Text Generation

Florian Mai, Nikolaos Pappas, I. Montero, Noah A. Smith
2020
EMNLP

Text autoencoders are commonly used for conditional generation tasks such as style transfer. We propose methods which are plug and play, where any pretrained autoencoder can be used, and only… 

PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction

Xinyao Ma, Maarten Sap, Hannah Rashkin, Yejin Choi
2020
EMNLP

Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction. For example, a female character in a story is often…