Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
ZEST: Zero-shot Learning from Text Descriptions using Textual Similarity and Visual Summarization
We study the problem of recognizing visual entities from the textual descriptions of their classes. Specifically, given birds' images with free-text descriptions of their species, we learn to…
A Novel Challenge Set for Hebrew Morphological Disambiguation and Diacritics Restoration
One of the primary tasks of morphological parsers is the disambiguation of homographs. Particularly difficult are cases of unbalanced ambiguity, where one of the possible analyses is far more…
Learning from Task Descriptions
Typically, machine learning systems solve new tasks by training on thousands of examples. In contrast, humans can solve new tasks by reading some instructions, with perhaps an example or two. To…
Thinking Like a Skeptic: Defeasible Inference in Natural Language
Defeasible inference is a mode of reasoning in which an inference (X is a bird, therefore X flies) may be weakened or overturned in light of new evidence (X is a penguin). Though long recognized in…
Social Chemistry 101: Learning to Reason about Social and Moral Norms
Social norms---the unspoken commonsense rules about acceptable social behavior---are crucial in understanding the underlying causes and intents of people's actions in narratives. For example,…
Does my multimodal model learn cross-modal interactions? It’s harder to tell than you might think!
Modeling expressive cross-modal interactions seems crucial in multimodal tasks, such as visual question answering. However, sometimes high-performing black-box algorithms turn out to be mostly…
PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking
We propose the task of outline-conditioned story generation: given an outline as a set of phrases that describe key characters and events to appear in a story, the task is to generate a coherent…
PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction
Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction. For example, a female character in a story is often…
RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language which hinders their safe deployment. We investigate the extent to which pretrained LMs can…
TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions
A critical part of reading is being able to understand the temporal relationships between events described in a passage of text, even when those relationships are not explicitly stated. However,…