Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Learning from Task Descriptions
Typically, machine learning systems solve new tasks by training on thousands of examples. In contrast, humans can solve new tasks by reading some instructions, with perhaps an example or two. To…
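A toy sketch of the contrast drawn above, for illustration only: instead of fitting a model on thousands of labeled examples, the system receives a natural-language task description (and perhaps one example) and must generalize from it. The task description, example, and prompt format below are invented assumptions, not the paper's dataset or method.

```python
# Hypothetical description-conditioned setup: the task is specified in words,
# not learned from a large labeled training set.
task_description = (
    "Given the name of an animal, answer 'yes' if it can fly and 'no' otherwise."
)
one_example = {"input": "sparrow", "output": "yes"}
test_inputs = ["penguin", "bat", "elephant"]

# Build the kind of prompt a zero/few-shot model might consume.
for x in test_inputs:
    prompt = (
        f"{task_description}\n"
        f"Example: {one_example['input']} -> {one_example['output']}\n"
        f"Input: {x} ->"
    )
    print(prompt)
```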
Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs
Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on…
PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking
We propose the task of outline-conditioned story generation: given an outline as a set of phrases that describe key characters and events to appear in a story, the task is to generate a coherent…
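A minimal sketch of the task formulation, assuming a generic pretrained language model and an improvised prompt format; this is not PlotMachines itself, and the outline phrases are invented for illustration.

```python
# Outline-conditioned generation: input is a set of key phrases (characters,
# events); output should be a coherent story that covers them.
from transformers import pipeline

outline = ["a retired detective", "a missing violin", "a storm traps the town"]

prompt = (
    "Write a short story that weaves together the following plot points: "
    + "; ".join(outline)
    + ".\n\nStory:"
)

generator = pipeline("text-generation", model="gpt2")
story = generator(prompt, max_new_tokens=200, do_sample=True)[0]["generated_text"]
print(story)
```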
PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction
Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction. For example, a female character in a story is often…
RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language, which hinders their safe deployment. We investigate the extent to which pretrained LMs can…
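A minimal sketch of the evaluation loop the abstract implies: condition a pretrained LM on sentence prefixes and score the continuations for toxicity. The prompts below are harmless stand-ins rather than items from the dataset, and `toxicity_score` is a placeholder for a real toxicity classifier or API.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def toxicity_score(text: str) -> float:
    """Placeholder: plug in a real toxicity classifier or scoring API here."""
    return 0.0

prompts = [
    "The first thing he said when he walked in was",
    "Honestly, the whole meeting made me want to",
]

for prompt in prompts:
    # Sample several continuations per prompt and track the worst-case score.
    outputs = generator(
        prompt, max_new_tokens=20, do_sample=True, num_return_sequences=3
    )
    scores = [toxicity_score(o["generated_text"]) for o in outputs]
    print(prompt, "-> max toxicity:", max(scores))
```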
Social Chemistry 101: Learning to Reason about Social and Moral Norms
Social norms (the unspoken commonsense rules about acceptable social behavior) are crucial in understanding the underlying causes and intents of people's actions in narratives. For example,…
Thinking Like a Skeptic: Defeasible Inference in Natural Language
Defeasible inference is a mode of reasoning in which an inference (X is a bird, therefore X flies) may be weakened or overturned in light of new evidence (X is a penguin). Though long recognized in…
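A small worked example of the bird/penguin case from the abstract, written as data. The field names (premise, hypothesis, update, label) and the strengthener/weakener labels follow a common defeasible-inference framing and are assumptions, not the paper's exact schema.

```python
from dataclasses import dataclass

@dataclass
class DefeasibleExample:
    premise: str      # what we initially know
    hypothesis: str   # a plausible inference from the premise
    update: str       # new evidence that shifts belief in the hypothesis
    label: str        # "weakener" or "strengthener"

examples = [
    DefeasibleExample(
        premise="X is a bird.",
        hypothesis="X flies.",
        update="X is a penguin.",
        label="weakener",      # the new evidence overturns the inference
    ),
    DefeasibleExample(
        premise="X is a bird.",
        hypothesis="X flies.",
        update="X is perched high in a tree.",
        label="strengthener",  # the new evidence makes the inference more likely
    ),
]

for ex in examples:
    print(f"{ex.premise} -> {ex.hypothesis}  given '{ex.update}' => {ex.label}")
```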
Unsupervised Commonsense Question Answering with Self-Talk
Natural language understanding involves reading between the lines with implicit background knowledge. Current systems either rely on pre-trained language models as the sole implicit source of world…
"You are grounded!": Latent Name Artifacts in Pre-trained Language Models
Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models. We focus on artifacts associated with the representation of given names (e.g.,…
GO FIGURE: A Meta Evaluation of Factuality in Summarization
Text generation models can generate factually inconsistent text containing distorted or fabricated facts about the source text. Recent work has focused on building evaluation models to verify the…
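One common family of evaluation models the abstract alludes to checks whether a summary is entailed by its source with a natural language inference model. The sketch below uses an off-the-shelf MNLI model as an illustrative stand-in; it is not the GO FIGURE framework, and the source/summary pair is invented.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

source = "The company reported a 5% rise in quarterly revenue."
summary = "Quarterly revenue fell sharply."  # factually inconsistent claim

inputs = tokenizer(source, summary, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# roberta-large-mnli label order: contradiction, neutral, entailment
print({
    "contradiction": probs[0].item(),
    "neutral": probs[1].item(),
    "entailment": probs[2].item(),
})
```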