Ai2 Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Learning from Task Descriptions

Orion Weller, Nick Lourie, Matt Gardner, Matthew Peters
2020
EMNLP

Typically, machine learning systems solve new tasks by training on thousands of examples. In contrast, humans can solve new tasks by reading some instructions, with perhaps an example or two. To… 
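
As a minimal sketch of the zero-shot setup this abstract describes — solving a task from its natural-language description rather than labeled examples — the snippet below prompts an instruction-following model. The model, prompt shape, and example task are illustrative assumptions, not the paper's benchmark itself.

```python
# A minimal zero-shot sketch, assuming an instruction-following seq2seq model;
# the model, prompt shape, and example task are illustrative, not the paper's.
from transformers import pipeline

solver = pipeline("text2text-generation", model="google/flan-t5-small")

def solve_from_description(instruction: str, passage: str) -> str:
    # The task is specified by its natural-language description alone,
    # rather than by thousands of labeled training examples.
    prompt = f"{instruction}\n\nInput: {passage}\nOutput:"
    return solver(prompt, max_new_tokens=16)[0]["generated_text"]

print(solve_from_description(
    "Decide whether the trail described allows dogs. Answer Yes or No.",
    "Leashed pets are welcome on every trail in the park.",
))
```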

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs

Ana Marasović, Chandra Bhagavatula, J. Park, Yejin Choi
2020
Findings of EMNLP

Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on… 

PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking

Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, Jianfeng Gao
2020
EMNLP

We propose the task of outline-conditioned story generation: given an outline as a set of phrases that describe key characters and events to appear in a story, the task is to generate a coherent… 
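
As a rough illustration of the task setup only, the sketch below flattens an outline of key phrases into a prompt for an off-the-shelf LM; the prompt format and model are assumptions standing in for PlotMachines' dynamic plot-state tracking.

```python
# A rough sketch only: a flat prompt over outline phrases with GPT-2, standing
# in for PlotMachines' recurrent plot-state tracking. Format is an assumption.
from transformers import pipeline

gen = pipeline("text-generation", model="gpt2")

outline = ["lonely lighthouse keeper", "mysterious shipwreck", "hidden letter"]
prompt = "Outline: " + "; ".join(outline) + "\nStory:"

out = gen(prompt, max_new_tokens=60, do_sample=True, pad_token_id=50256)
print(out[0]["generated_text"])
```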

PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction

Xinyao Ma, Maarten Sap, Hannah Rashkin, Yejin Choi
2020
EMNLP

Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction. For example, a female character in a story is often… 

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models

Samuel Gehman, Suchin Gururangan, Maarten Sap, Noah A. Smith
2020
Findings of EMNLP

Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language which hinders their safe deployment. We investigate the extent to which pretrained LMs can… 
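
A hedged sketch of the kind of evaluation this abstract describes: sample several continuations per prompt and track the worst-case toxicity. The scorer below is a placeholder stub; the paper itself scores continuations with the Perspective API.

```python
# Sketch of prompted-generation toxicity evaluation; `score_toxicity` is a
# placeholder stub (the paper scores with the Perspective API).
import statistics
from transformers import pipeline

gen = pipeline("text-generation", model="gpt2")

def score_toxicity(text: str) -> float:
    # Hypothetical stand-in; plug in a real toxicity classifier here.
    return 0.0

def expected_max_toxicity(prompts, k=5):
    # For each prompt, sample k continuations, keep the worst toxicity,
    # then average those maxima over all prompts.
    maxima = []
    for prompt in prompts:
        outs = gen(prompt, num_return_sequences=k, max_new_tokens=20,
                   do_sample=True, pad_token_id=50256)
        maxima.append(max(score_toxicity(o["generated_text"]) for o in outs))
    return statistics.mean(maxima)

print(expected_max_toxicity(["The quiet neighbor turned out to be"]))
```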

Social Chemistry 101: Learning to Reason about Social and Moral Norms

Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Yejin Choi
2020
EMNLP

Social norms---the unspoken commonsense rules about acceptable social behavior---are crucial in understanding the underlying causes and intents of people's actions in narratives. For example,… 
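
As a minimal illustration, a rule-of-thumb record in the spirit of this corpus might be represented as below; the field names are assumptions, not the paper's exact schema.

```python
# A toy rule-of-thumb record in the spirit of the corpus; field names are
# assumptions, not the paper's exact schema.
rule_of_thumb = {
    "situation": "Asking my roommate to turn down loud music at 2am.",
    "rot": "It is okay to ask for quiet late at night.",
    "judgment": "okay",
}
print(rule_of_thumb["rot"])
```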

Thinking Like a Skeptic: Defeasible Inference in Natural Language

Rachel Rudinger, Vered Shwartz, Jena D. Hwang, Noah A. Smith, and Yejin Choi
2020
Findings of EMNLP

Defeasible inference is a mode of reasoning in which an inference (X is a bird, therefore X flies) may be weakened or overturned in light of new evidence (X is a penguin). Though long recognized in… 
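
The abstract's bird/penguin example, rendered as a minimal defeasible-inference record: an update sentence either strengthens or weakens a plausible hypothesis. Field names are assumptions, not the paper's exact schema.

```python
# The abstract's bird/penguin example as a defeasible-inference record;
# field names are assumptions, not the paper's exact schema.
example = {
    "premise": "X is a bird.",
    "hypothesis": "X flies.",
    "updates": [
        {"text": "X is a penguin.", "label": "weakener"},
        {"text": "X is perched on a high branch.", "label": "strengthener"},
    ],
}
for update in example["updates"]:
    print(update["label"], "->", update["text"])
```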

Unsupervised Commonsense Question Answering with Self-Talk

Vered Shwartz, Peter West, Ronan Le Bras, Yejin Choi
2020
EMNLP

Natural language understanding involves reading between the lines with implicit background knowledge. Current systems either rely on pre-trained language models as the sole implicit source of world… 
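
A rough sketch of the self-talk idea as described: the same LM is prompted to ask and answer its own clarification question, and the answer is prepended as background before answering the original question. The prompts and model below are assumptions, not the paper's exact templates.

```python
# Self-talk sketch: the same LM asks and answers its own clarification
# question, then uses the answer as background context. Prompts and model
# are assumptions, not the paper's exact templates.
from transformers import pipeline

lm = pipeline("text-generation", model="gpt2")

def complete(prompt: str) -> str:
    # Return only the newly generated continuation, not the echoed prompt.
    out = lm(prompt, max_new_tokens=25, do_sample=True, pad_token_id=50256)
    return out[0]["generated_text"][len(prompt):].strip()

def self_talk_answer(question: str) -> str:
    # 1. Have the LM finish a clarification question about the situation.
    clarification = "What is " + complete(f"{question}\nWhat is")
    # 2. Have the LM answer its own clarification question.
    background = complete(f"{clarification}\nThe answer is")
    # 3. Answer the original question with that background prepended.
    return complete(f"{background}\n{question}\nAnswer:")

print(self_talk_answer("Why would someone build a fence around a garden?"))
```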

"You are grounded!": Latent Name Artifacts in Pre-trained Language Models

Vered Shwartz, Rachel Rudinger, Oyvind Tafjord
2020
EMNLP

Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models. We focus on artifacts associated with the representation of given names (e.g.,… 
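
A hedged probe in this spirit: swap different given names into the same cloze template and compare a masked LM's top predictions. The template, names, and model are assumptions, not the paper's exact protocol.

```python
# A hedged name-artifact probe: swap given names into one cloze template and
# compare a masked LM's top predictions. Template, names, and model are
# assumptions, not the paper's exact protocol.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

template = "{} is known for being a [MASK]."
for name in ["Donald", "Hillary", "Emily", "Darnell"]:
    preds = fill(template.format(name))[:3]
    print(name, [(p["token_str"], round(p["score"], 3)) for p in preds])
```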

GO FIGURE: A Meta Evaluation of Factuality in Summarization

Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Jianfeng Gao
2021
Findings of ACL

Text generation models can generate factually inconsistent text containing distorted or fabricated facts about the source text. Recent work has focused on building evaluation models to verify the…
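
As a toy illustration of a factuality meta-evaluation, one sanity check is whether a candidate metric's score drops when a faithful summary is corrupted; the metric below is a hypothetical stand-in, not the paper's diagnostic suite.

```python
# Toy sanity check for a factuality metric: its score should drop when a
# faithful summary is corrupted. `factuality_metric` is a hypothetical
# stand-in, not the paper's diagnostic suite.
def factuality_metric(source: str, summary: str) -> float:
    # Placeholder: real metrics might use QA or entailment models instead.
    src = source.lower()
    toks = summary.lower().split()
    return sum(tok in src for tok in toks) / len(toks)

source = "The merger was announced by Acme Corp on Tuesday."
faithful = "Acme Corp announced the merger on Tuesday."
corrupted = "Acme Corp denied the merger on Friday."

assert factuality_metric(source, faithful) > factuality_metric(source, corrupted)
```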