Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle

Peter West, Ari Holtzman, Jan Buys, Yejin Choi
2019
EMNLP

The principle of the Information Bottleneck (Tishby et al. 1999) is to produce a summary of information X optimized to predict some other relevant information Y. In this paper, we propose a novel… 

COSMOS QA: Machine Reading Comprehension with Contextual Commonsense Reasoning

Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi
2019
EMNLP

Understanding narratives requires reading between the lines, which, in turn, requires interpreting the likely causes and effects of events, even when they are not mentioned explicitly. In this paper,… 

Counterfactual Story Reasoning and Generation

Lianhui Qin, Antoine Bosselut, Ari Holtzman, Yejin Choi
2019
EMNLP

Counterfactual reasoning requires predicting how alternative events, contrary to what actually happened, might have resulted in different outcomes. Despite being considered a necessary component of… 

Do NLP Models Know Numbers? Probing Numeracy in Embeddings

Eric Wallace, Yizhong Wang, Sujian Li, Matt Gardner
2019
EMNLP

The ability to understand and work with numbers (numeracy) is critical for many complex reasoning tasks. Currently, most NLP models treat numbers in text in the same way as other tokens---they embed… 

Don't paraphrase, detect! Rapid and Effective Data Collection for Semantic Parsing

Jonathan Herzig, Jonathan Berant
2019
EMNLP

A major hurdle on the road to conversational interfaces is the difficulty in collecting data that maps language utterances to logical forms. One prominent approach for data collection has been to… 

Efficient Navigation with Language Pre-training and Stochastic Sampling

Xiujun Li, Chunyuan Li, Qiaolin Xia, Yejin Choi
2019
EMNLP

Core to the vision-and-language navigation (VLN) challenge is building robust instruction representations and action decoding schemes, which can generalize well to previously unseen instructions and… 

Entity, Relation, and Event Extraction with Contextualized Span Representations

David Wadden, Ulme Wennberg, Yi Luan, Hannaneh Hajishirzi
2019
EMNLP

We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. Our framework (called… 

Everything Happens for a Reason: Discovering the Purpose of Actions in Procedural Text

Bhavana Dalvi Mishra, Niket Tandon, Antoine Bosselut, Peter Clark
2019
EMNLP

Our goal is to better comprehend procedural text, e.g., a paragraph about photosynthesis, by not only predicting what happens, but why some actions need to happen before others. Our approach builds… 

Global Reasoning over Database Structures for Text-to-SQL Parsing

Ben Bogin, Matt Gardner, Jonathan Berant
2019
EMNLP

State-of-the-art semantic parsers rely on auto-regressive decoding, emitting one symbol at a time. When tested against complex databases that are unobserved at training time (zero-shot), the parser… 

“Going on a vacation” takes longer than “Going for a walk”: A Study of Temporal Commonsense Understanding

Ben Zhou, Daniel Khashabi, Qiang Ning, Dan Roth
2019
EMNLP

Understanding time is crucial for understanding events expressed in natural language. Because people rarely say the obvious, it is often necessary to have commonsense knowledge about various…