Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Social Chemistry 101: Learning to Reason about Social and Moral Norms

Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Yejin Choi
2020
EMNLP

Social norms---the unspoken commonsense rules about acceptable social behavior---are crucial in understanding the underlying causes and intents of people's actions in narratives. For example,… 

Does my multimodal model learn cross-modal interactions? It’s harder to tell than you might think!

Jack Hessel, Lillian Lee
2020
EMNLP

Modeling expressive cross-modal interactions seems crucial in multimodal tasks, such as visual question answering. However, sometimes high-performing black-box algorithms turn out to be mostly… 

PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking

Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, Jianfeng Gao
2020
EMNLP

We propose the task of outline-conditioned story generation: given an outline as a set of phrases that describe key characters and events to appear in a story, the task is to generate a coherent… 

PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction

Xinyao Ma, Maarten Sap, Hannah Rashkin, Yejin Choi
2020
EMNLP

Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction. For example, a female character in a story is often… 

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models

Samuel Gehman, Suchin Gururangan, Maarten Sap, Noah A. Smith
2020
Findings of EMNLP

Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language which hinders their safe deployment. We investigate the extent to which pretrained LMs can… 

TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions

Qiang Ning, Hao Wu, Rujun Han, Dan Roth
2020
EMNLP

A critical part of reading is being able to understand the temporal relationships between events described in a passage of text, even when those relationships are not explicitly stated. However,… 

Multi-Step Inference for Reasoning over Paragraphs

Jiangming Liu, Matt Gardner, Shay B. Cohen, Mirella Lapata
2020
EMNLP

Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives. Prior work has largely tried to do this either symbolically or with black-box… 

MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics

Anthony Chen, Gabriel Stanovsky, S. Singh, Matt Gardner
2020
EMNLP

Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded… 

Domain-Specific Lexical Grounding in Noisy Visual-Textual Documents

Gregory Yauney, Jack Hessel, David Mimno
2020
EMNLP

Images can give us insights into the contextual meanings of words, but current image-text grounding approaches require detailed annotations. Such granular annotation is rare, expensive, and… 

Grounded Compositional Outputs for Adaptive Language Modeling

Nikolaos Pappas, Phoebe Mulcaire, Noah A. Smith
2020
EMNLP

Language models have emerged as a central component across NLP, and a great deal of progress depends on the ability to cheaply adapt them (e.g., through finetuning) to new domains and tasks. A…