Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Cooperative Generator-Discriminator Networks for Abstractive Summarization with Narrative Flow

Saadia Gabriel, Antoine Bosselut, Ari Holtzman, Yejin Choi
2019
arXiv

We introduce Cooperative Generator-Discriminator Networks (Co-opNet), a general framework for abstractive summarization with distinct modeling of the narrative flow in the output summary. Most… 

Efficient Adaptation of Pretrained Transformers for Abstractive Summarization

Andrew Pau Hoang, Antoine Bosselut, Asli Çelikyilmaz, Yejin Choi
2019
arXiv

Large-scale learning of transformer language models has yielded improvements on a variety of natural language understanding tasks. Whether they can be effectively adapted for summarization, however,… 

From Recognition to Cognition: Visual Commonsense Reasoning

Rowan Zellers, Yonatan Bisk, Ali Farhadi, Yejin Choi
2019
CVPR

Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people’s actions, goals,… 

Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading

Lianhui Qin, Michel Galley, Chris Brockett, Jianfeng Gao
2019
ACL

Although neural conversational models are effective in learning how to produce fluent responses, their primary challenge lies in knowing what to say to make the conversation contentful and… 

Benchmarking Hierarchical Script Knowledge

Yonatan Bisk, Jan Buys, Karl Pichotta, Yejin Choi
2019
NAACL

Understanding procedural language requires reasoning about both hierarchical and temporal relations between events. For example, “boiling pasta” is a sub-event of “making a pasta dish”, typically… 

CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge

Alon Talmor, Jonathan Herzig, Nicholas Lourie, Jonathan Berant
2019
NAACL

When answering a question, people often draw upon their rich world knowledge in addition to the particular context. Recent work has focused primarily on answering questions given some relevant… 

MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms

Aida Amini, Saadia Gabriel, Peter Lin, Hannaneh Hajishirzi
2019
NAACL

We introduce a large-scale dataset of math word problems and an interpretable neural math problem solver by learning to map problems to their operation programs. Due to annotation challenges,… 

The Curious Case of Neural Text Degeneration

Ari Holtzman, Jan Buys, Li Du, Yejin Choi
2019
ICLR

Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The… 

Tactical Rewind: Self-Correction via Backtracking in Vision-And-Language Navigation

Liyiming Ke, Xiujun Li, Yonatan Bisk, S. Srinivasa
2019
CVPR

We present the Frontier Aware Search with backTracking (FAST) Navigator, a general framework for action decoding, that achieves state-of-the-art results on the 2018 Room-to-Room (R2R)… 

DREAM: A Challenge Data Set and Models for Dialogue-Based Reading Comprehension

Kai Sun, Dian Yu, Jianshu Chen, Claire Cardie
2019
TACL

We present DREAM, the first dialogue-based multiple-choice reading comprehension data set. Collected from English as a Foreign Language examinations designed by human experts to evaluate the…