Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints
Conditional text generation often requires lexical constraints, i.e., which words should or shouldn’t be included in the output text. While the dominant recipe for conditional text generation has…
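To make the notion of predicate-logic lexical constraints concrete, here is a minimal sketch (not the NeuroLogic decoding algorithm itself) of representing constraints as a conjunction of clauses over word-inclusion and word-exclusion literals and checking a candidate output against them; all names here are illustrative.

```python
# Minimal sketch, an assumption about how lexical constraints can be expressed:
# a conjunction of clauses, each a disjunction of literals requiring a word to
# appear (positive) or be absent (negative) in the generated text.

from dataclasses import dataclass

@dataclass(frozen=True)
class Literal:
    word: str
    positive: bool  # True: word must appear; False: word must not appear

def satisfies(text: str, cnf: list[list[Literal]]) -> bool:
    """Return True if every clause has at least one satisfied literal."""
    tokens = set(text.lower().split())
    def holds(lit: Literal) -> bool:
        return (lit.word in tokens) == lit.positive
    return all(any(holds(lit) for lit in clause) for clause in cnf)

# Example: output must mention "dog" or "puppy", and must not mention "cat".
constraints = [
    [Literal("dog", True), Literal("puppy", True)],
    [Literal("cat", False)],
]
print(satisfies("a puppy runs in the park", constraints))   # True
print(satisfies("a cat sleeps on the couch", constraints))  # False
```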
Paraphrasing vs Coreferring: Two Sides of the Same Coin
We study the potential synergy between two different NLP tasks, both confronting lexical variability: identifying predicate paraphrases and event coreference resolution. First, we used annotations…
Generative Data Augmentation for Commonsense Reasoning
Recent advances in commonsense reasoning depend on large-scale human-annotated training data to achieve peak performance. However, manual curation of training examples is expensive and has been…
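As a rough illustration of generative data augmentation, the hypothetical sketch below uses an off-the-shelf causal language model (GPT-2 via Hugging Face transformers) to produce candidate training examples from a seed prompt; the prompt, model choice, and downstream filtering are assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch: generate synthetic examples with a pretrained LM; the
# generated texts would then be filtered/relabeled before joining the training set.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def generate_examples(prompt: str, n: int = 3) -> list[str]:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,          # sample rather than greedy decode, for diversity
        top_p=0.9,
        max_new_tokens=40,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

for text in generate_examples("Q: Why might someone carry an umbrella?\nA:"):
    print(text)
```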
VisualCOMET: Reasoning About the Dynamic Context of a Still Image
Even from a single frame of a still image, people can reason about the dynamic story of the image before, after, and beyond the frame. For example, given an image of a man struggling to stay afloat…
Adversarial Filters of Dataset Biases
Large neural models have demonstrated human-level performance on language and vision benchmarks such as ImageNet and Stanford Natural Language Inference (SNLI). Yet, their performance degrades…
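The following is a simplified sketch in the spirit of lightweight adversarial filtering (the feature extractor, number of rounds, and drop fraction are illustrative assumptions): instances that simple linear probes predict correctly across many random train/test splits are treated as "easy", bias-carrying examples and removed.

```python
# Simplified adversarial-filtering sketch: score each instance by how often
# linear classifiers trained on random splits predict it correctly, then drop
# the most predictable fraction of the dataset.

import numpy as np
from sklearn.linear_model import LogisticRegression

def predictability_scores(X, y, n_rounds=20, train_frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    correct = np.zeros(len(y))
    counted = np.zeros(len(y))
    for _ in range(n_rounds):
        idx = rng.permutation(len(y))
        split = int(train_frac * len(y))
        train, test = idx[:split], idx[split:]
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        correct[test] += (clf.predict(X[test]) == y[test])
        counted[test] += 1
    return np.divide(correct, counted, out=np.zeros_like(correct), where=counted > 0)

def adversarial_filter(X, y, drop_frac=0.2):
    """Keep the instances that linear probes find hardest to predict."""
    scores = predictability_scores(X, y)
    keep = np.argsort(scores)[: int((1 - drop_frac) * len(y))]
    return X[keep], y[keep]

# X: precomputed embeddings (e.g., from a frozen encoder), y: labels.
X, y = np.random.randn(200, 16), np.random.randint(0, 2, size=200)
X_filtered, y_filtered = adversarial_filter(X, y)
print(X_filtered.shape, y_filtered.shape)
```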
Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
Language models pretrained on text from a wide variety of sources form the foundation of today's NLP. In light of the success of these broad-coverage models, we investigate whether it is still…
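A minimal sketch of continued (domain-adaptive) masked-LM pretraining with Hugging Face transformers is shown below; the base model, the domain corpus path, and all hyperparameters are placeholder assumptions, not the paper's exact setup.

```python
# Minimal sketch: continue masked-LM pretraining on an in-domain corpus, then
# fine-tune the adapted checkpoint on the end task.

from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Hypothetical in-domain corpus: one document per line in a text file.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dapt-roberta", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```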
Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models
We investigate the use of NLP as a measure of the cognitive processes involved in storytelling, contrasting imagination and recollection of events. To facilitate this, we collect and release…
Social Bias Frames: Reasoning about Social and Power Implications of Language
Language has the power to reinforce stereotypes and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but rather the implied meanings that…
The Right Tool for the Job: Matching Model and Instance Complexities
As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs. To better respect a given inference budget, we propose…
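Below is a toy sketch of the confidence-based early-exit idea suggested by the title (an illustration, not the paper's released implementation): classifiers attached to successive layers each produce a label distribution, and inference stops at the first layer whose confidence clears a threshold, so easier instances use fewer layers.

```python
# Toy early-exit sketch: stop at the first per-layer classifier whose top-label
# confidence exceeds a calibrated threshold; always exit at the last layer.

from typing import Callable, Sequence

def early_exit_predict(
    layer_classifiers: Sequence[Callable[[str], dict[str, float]]],
    text: str,
    threshold: float = 0.9,
) -> tuple[str, int]:
    """Return (predicted_label, index_of_exit_layer)."""
    last = len(layer_classifiers) - 1
    for layer_idx, classify in enumerate(layer_classifiers):
        probs = classify(text)                           # label -> probability
        label, confidence = max(probs.items(), key=lambda kv: kv[1])
        if confidence >= threshold or layer_idx == last:
            return label, layer_idx
    raise AssertionError("unreachable")

# Hypothetical per-layer classifiers: deeper layers are more confident.
layers = [
    lambda t: {"positive": 0.55, "negative": 0.45},
    lambda t: {"positive": 0.80, "negative": 0.20},
    lambda t: {"positive": 0.95, "negative": 0.05},
]
print(early_exit_predict(layers, "a delightful, well-acted film"))  # ('positive', 2)
```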
Procedural Reading Comprehension with Attribute-Aware Context Flow
Procedural texts often describe processes (e.g., photosynthesis and cooking) that happen over entities (e.g., light, food). In this paper, we introduce an algorithm for procedural reading…