Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Social Chemistry 101: Learning to Reason about Social and Moral Norms
Social norms, the unspoken commonsense rules about acceptable social behavior, are crucial in understanding the underlying causes and intents of people's actions in narratives. For example,…
Does my multimodal model learn cross-modal interactions? It’s harder to tell than you might think!
Modeling expressive cross-modal interactions seems crucial in multimodal tasks, such as visual question answering. However, sometimes high-performing black-box algorithms turn out to be mostly…
PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking
We propose the task of outline-conditioned story generation: given an outline as a set of phrases that describe key characters and events to appear in a story, the task is to generate a coherent…
PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction
Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction. For example, a female character in a story is often…
RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language, which hinders their safe deployment. We investigate the extent to which pretrained LMs can…
TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions
A critical part of reading is being able to understand the temporal relationships between events described in a passage of text, even when those relationships are not explicitly stated. However,…
Multi-Step Inference for Reasoning over Paragraphs
Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives. Prior work has largely tried to do this either symbolically or with black-box…
MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics
Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded…
Domain-Specific Lexical Grounding in Noisy Visual-Textual Documents
Images can give us insights into the contextual meanings of words, but current image-text grounding approaches require detailed annotations. Such granular annotation is rare, expensive, and…
Grounded Compositional Outputs for Adaptive Language Modeling
Language models have emerged as a central component across NLP, and a great deal of progress depends on the ability to cheaply adapt them (e.g., through finetuning) to new domains and tasks. A…