Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries?

Saadia Gabriel, H. Palangi, Yejin Choi
2022
arXiv

While a substantial body of prior work has explored adversarial example generation for natural language understanding tasks, these examples are often unrealistic and diverge from the real-world data… 

Quantifying the narrative flow of imagined versus autobiographical stories.

Maarten Sap, A. Jafarpour, Yejin Choi, E. Horvitz
2022
Proceedings of the National Academy of Sciences of the United States of America

Lifelong experiences and learned knowledge lead to shared expectations about how common situations tend to unfold. Such knowledge of narrative event flow enables people to weave together a story.… 

Generating Sequences by Learning to Self-Correct

S. Welleck, Ximing Lu, Peter West, Yejin Choi
2022
arXiv

Sequence generation applications require satisfying semantic constraints, such as ensuring that programs are correct, using certain keywords, or avoiding undesirable content. Language models,… 

Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE

Yuling Gu, Yao Fu, Valentina Pyatkin, Peter Clark
2022
EMNLP • The Third Workshop on Figurative Language Processing

Figurative language (e.g., “he flew like the wind”) is challenging to understand, as it is hard to tell what implicit information is being conveyed from the surface form alone. We hypothesize that… 

Referee: Reference-Free Sentence Summarization with Sharper Controllability through Symbolic Knowledge Distillation

Melanie Sclar, Peter West, Sachin Kumar, Yejin Choi
2022
Conference on Empirical Methods in Natural Language Processing

We present Referee, a novel framework for sentence summarization that can be trained reference-free (i.e., requiring no gold summaries for supervision), while allowing direct control for compression… 

Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs

Maarten Sap, Ronan Lebras, Daniel Fried, Yejin Choi
2022
EMNLP

Social intelligence and Theory of Mind (ToM), i.e., the ability to reason about the different mental states, intents, and reactions of all people involved, allow humans to effectively navigate and… 

The Abduction of Sherlock Holmes: A Dataset for Visual Abductive Reasoning

Jack Hessel, Jena D. Hwang, Jae Sung Park, Yejin Choi
2022
ECCV

Humans have remarkable capacity to reason abductively and hypothesize about what lies beyond the literal content of an image. By identifying concrete visual clues scattered throughout a scene, we… 

NeuroCounterfactuals: Beyond Minimal-Edit Counterfactuals for Richer Data Augmentation

Phillip Howard, Gadi Singer, Vasudev Lal, Swabha Swayamdipta
2022
Conference on Empirical Methods in Natural Language Processing

While counterfactual data augmentation offers a promising step towards robust generalization in natural language processing, producing a set of counterfactuals that offer valuable inductive bias for… 

Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering

Jiacheng Liu, Skyler Hallinan, Ximing Lu, Yejin Choi
2022
Conference on Empirical Methods in Natural Language Processing

Knowledge underpins reasoning. Recent research demonstrates that when relevant knowledge is provided as additional context to commonsense question answering (QA), it can substantially enhance the… 

REV: Information-Theoretic Evaluation of Free-Text Rationales

Hanjie Chen, Faeze Brahman, Xiang Ren, Swabha Swayamdipta
2022
arXiv

Future work might explore evaluation that penalizes rationales which support incorrect predictions, thus bridging predictive performance with interpretability metrics.