Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints

Ximing Lu, Peter West, Rowan Zellers, Yejin Choi
2020
NAACL

Conditional text generation often requires lexical constraints, i.e., which words should or shouldn’t be included in the output text. While the dominant recipe for conditional text generation has… 

Paraphrasing vs Coreferring: Two Sides of the Same Coin

Y. Meged, Avi Caciularu, Vered Shwartz, I. Dagan
2020
arXiv

We study the potential synergy between two different NLP tasks, both confronting lexical variability: identifying predicate paraphrases and event coreference resolution. First, we used annotations… 

Generative Data Augmentation for Commonsense Reasoning

Yiben Yang, Chaitanya Malaviya, Jared Fernandez, Doug Downey
2020
Findings of EMNLP

Recent advances in commonsense reasoning depend on large-scale human-annotated training data to achieve peak performance. However, manual curation of training examples is expensive and has been… 

VisualCOMET: Reasoning About the Dynamic Context of a Still Image

Jae Sung Park, Chandra Bhagavatula, Roozbeh Mottaghi, Yejin Choi
2020
ECCV

Even from a single frame of a still image, people can reason about the dynamic story of the image before, after, and beyond the frame. For example, given an image of a man struggling to stay afloat… 

Adversarial Filters of Dataset Biases

Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Yejin Choi
2020
ICML

Large neural models have demonstrated human-level performance on language and vision benchmarks such as ImageNet and Stanford Natural Language Inference (SNLI). Yet, their performance degrades… 

Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks

Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Noah A. Smith
2020
ACL

Language models pretrained on text from a wide variety of sources form the foundation of today's NLP. In light of the success of these broad-coverage models, we investigate whether it is still… 

Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models

Maarten Sap, Eric Horvitz, Yejin Choi, James W. Pennebaker
2020
ACL

We investigate the use of NLP as a measure of the cognitive processes involved in storytelling, contrasting imagination and recollection of events. To facilitate this, we collect and release… 

Social Bias Frames: Reasoning about Social and Power Implications of Language

Maarten Sap, Saadia Gabriel, Lianhui Qin, Yejin Choi
2020
ACL

Language has the power to reinforce stereotypes and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but all the implied meanings that… 

The Right Tool for the Job: Matching Model and Instance Complexities

Roy Schwartz, Gabi Stanovsky, Swabha Swayamdipta, Noah A. Smith
2020
ACL

As NLP models become larger, executing a trained model requires significant computational resources incurring monetary and environmental costs. To better respect a given inference budget, we propose… 

Procedural Reading Comprehension with Attribute-Aware Context Flow

Aida Amini, Antoine Bosselut, Bhavana Dalvi Mishra, Hannaneh Hajishirzi
2020
AKBC

Procedural texts often describe processes (e.g., photosynthesis and cooking) that happen over entities (e.g., light, food). In this paper, we introduce an algorithm for procedural reading…