Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle

Peter West, Ari Holtzman, Jan Buys, Yejin Choi
2019
EMNLP

The principle of the Information Bottleneck (Tishby et al. 1999) is to produce a summary of information X optimized to predict some other relevant information Y. In this paper, we propose a novel… 
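For reference, the Information Bottleneck objective of Tishby et al. (1999) is usually written as a trade-off between compressing X into a summary S and keeping S predictive of Y; this is the general principle rather than BottleSum's exact instantiation, which may differ:

\min_{p(s \mid x)} \; I(X; S) \;-\; \beta \, I(S; Y)

where I(\cdot\,;\cdot) denotes mutual information and \beta > 0 controls how much predictive information about Y is retained relative to the compression of X.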

COSMOS QA: Machine Reading Comprehension with Contextual Commonsense Reasoning

Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi
2019
EMNLP

Understanding narratives requires reading between the lines, which in turn requires interpreting the likely causes and effects of events, even when they are not mentioned explicitly. In this paper,… 

Counterfactual Story Reasoning and Generation

Lianhui Qin, Antoine Bosselut, Ari Holtzman, Yejin Choi
2019
EMNLP

Counterfactual reasoning requires predicting how alternative events, contrary to what actually happened, might have resulted in different outcomes. Despite being considered a necessary component of… 

Social IQa: Commonsense Reasoning about Social Interactions

Maarten Sap, Hannah Rashkin, Derek Chen, Yejin Choi
2019
EMNLP

We introduce Social IQa, the first large-scale benchmark for commonsense reasoning about social situations. Social IQa contains 38,000 multiple-choice questions for probing emotional and social… 

COMET: Commonsense Transformers for Automatic Knowledge Graph Construction

Antoine Bosselut, Hannah Rashkin, Maarten Sap, Yejin Choi
2019
ACL

We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017).… 

HellaSwag: Can a Machine Really Finish Your Sentence?

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Yejin Choi
2019
ACL

Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as "A woman sits at a piano," a machine must select the most… 

The Risk of Racial Bias in Hate Speech Detection

Maarten Sap, Dallas Card, Saadia Gabriel, Noah A. Smith
2019
ACL

We investigate how annotators’ insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying harm against minority populations. We… 

Robust Navigation with Language Pretraining and Stochastic Sampling

Xiujun Li, Chunyuan Li, Qiaolin Xia, Yejin Choi
2019
EMNLP

Core to the vision-and-language navigation (VLN) challenge is building robust instruction representations and action decoding schemes, which can generalize well to previously unseen instructions and… 

Do Neural Language Representations Learn Physical Commonsense?

Maxwell Forbes, Ari Holtzman, Yejin Choi
2019
CogSci

Humans understand language based on rich background knowledge about how the physical world works, which in turn allows us to reason about the physical world through language. In addition to the…