Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.

Delphi: Towards Machine Ethics and Norms

Liwei Jiang, Jena D. Hwang, Chandrasekhar Bhagavatula, Yejin Choi
2021
arXiv

Failing to account for moral norms could notably hinder AI systems’ ability to interact with people. To make moral judgments, AI systems must be equipped with social, cultural, and ethical norms…

Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models

Peter West, Ximing Lu, Ari Holtzman, Yejin Choi
2021
ACL

Publicly available, large pretrained language models (LMs) generate text with remarkable quality, but only sequentially from left to right. As a result, they are not immediately applicable to… 
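To make the left-to-right constraint concrete, here is a minimal sketch of standard autoregressive sampling with an off-the-shelf LM. The GPT-2 checkpoint and HuggingFace API are illustrative assumptions, not the paper's code; the point is that generation can only append tokens after a prefix and cannot directly fill a gap between a left and a right context.

```python
# Minimal sketch: standard left-to-right sampling with an off-the-shelf LM.
# GPT-2 via HuggingFace is an assumed stand-in for "publicly available LMs".
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prefix = "The committee approved the proposal because"
inputs = tokenizer(prefix, return_tensors="pt")

# Generation only extends the prefix rightward, one token at a time.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```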

Symbolic Brittleness in Sequence Models: on Systematic Generalization in Symbolic Mathematics

S. Welleck, Peter West, Jize Cao, Yejin Choi
2021
AAAI

Neural sequence models trained with maximum likelihood estimation have led to breakthroughs in many tasks, where success is defined by the gap between training and test performance. However, their… 
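As a concrete illustration of how generalization can be probed in symbolic mathematics, the sketch below verifies a model-proposed antiderivative by differentiating it with SymPy and comparing against the integrand. The integrand and candidate answer are hypothetical placeholders, not outputs from the paper's model.

```python
# Sketch: symbolic verification of a proposed antiderivative with SymPy.
import sympy as sp

x = sp.symbols("x")
integrand = sp.cos(x) * x              # problem posed to the model
candidate = x * sp.sin(x) + sp.cos(x)  # hypothetical model output

# A proposed solution is correct iff its derivative matches the integrand.
is_correct = sp.simplify(sp.diff(candidate, x) - integrand) == 0
print(is_correct)  # True for this candidate
```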

Conversational Multi-Hop Reasoning with Neural Commonsense Knowledge and Symbolic Logic Rules

Forough Arabshahi, Jennifer Lee, A. Bosselut, Tom Mitchell
2021
EMNLP

One of the challenges faced by conversational agents is their inability to identify unstated presumptions of their users’ commands, a task trivial for humans due to their common sense. In this… 

It's not Rocket Science: Interpreting Figurative Language in Narratives

Tuhin Chakrabarty, Yejin Choi, Vered Shwartz
2021
ACL

Figurative language is ubiquitous in English. Yet, the vast majority of NLP research focuses on literal language. Existing text representations by design rely on compositionality, while figurative… 

Edited Media Understanding Frames: Reasoning about the Intent and Implications of Visual Disinformation

Jeff Da, Maxwell Forbes, Rowan Zellers, Yejin Choi
2021
ACL

Multimodal disinformation, from ‘deepfakes’ to simple edits that deceive, is an important societal problem. Yet, at the same time, the vast majority of media edits are harmless, such as a filtered… 

How effective is BERT without word ordering? Implications for language understanding and data privacy

Jack Hessel, Alexandra Schofield
2021
ACL

Ordered word sequences contain the rich structures that define language. However, it’s often not clear if or how modern pretrained language models utilize these structures. We show that the token… 
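A minimal sketch of the kind of probe this suggests: compare a BERT-family classifier's prediction on an intact sentence and on a randomly shuffled copy. The sentiment task and example sentence are illustrative assumptions, not the paper's benchmark.

```python
# Sketch: does a BERT-family classifier depend on word order?
import random
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # defaults to a BERT-family model

sentence = "the plot was thin but the acting was wonderful"
words = sentence.split()
random.shuffle(words)
shuffled = " ".join(words)

print(classifier(sentence))   # prediction with word order intact
print(classifier(shuffled))   # prediction with word order destroyed
```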

PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World

Rowan Zellers, Ari Holtzman, Matthew E. Peters, Yejin Choi
2021
ACL

We propose PIGLeT: a model that learns physical commonsense knowledge through interaction, and then uses this knowledge to ground language. We factorize PIGLeT into a physical dynamics model, and a… 
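The factorization can be pictured with a schematic sketch like the following (illustrative PyTorch, not Ai2's released implementation; all module names and sizes are assumptions): a dynamics model maps an object state and an action to a predicted next state, and a separate language head verbalizes that state.

```python
# Schematic sketch of a PIGLeT-style factorization; not the released model.
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Predicts the post-action object state from (state, action)."""
    def __init__(self, state_dim=128, action_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256),
            nn.ReLU(),
            nn.Linear(256, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

class PigletSketch(nn.Module):
    """Pairs the dynamics model with a head that verbalizes the predicted state."""
    def __init__(self, vocab_size=30522, state_dim=128, action_dim=32):
        super().__init__()
        self.dynamics = DynamicsModel(state_dim, action_dim)
        self.state_to_tokens = nn.Linear(state_dim, vocab_size)  # stand-in for an LM head

    def forward(self, state, action):
        next_state = self.dynamics(state, action)
        return self.state_to_tokens(next_state)  # logits describing the outcome

model = PigletSketch()
logits = model(torch.randn(1, 128), torch.randn(1, 32))
print(logits.shape)  # torch.Size([1, 30522])
```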

Analyzing Commonsense Emergence in Few-shot Knowledge Models

Jeff Da, Ronan Le Bras, Ximing Lu, Antoine Bosselut
2021
AKBC

Recently, commonsense knowledge models, i.e., pretrained language models (LMs) finetuned on knowledge graph (KG) tuples, showed that considerable amounts of commonsense knowledge can be encoded in the… 
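For context, "finetuned on knowledge graph tuples" typically means verbalizing (head, relation, tail) triples into plain text an LM can be trained on, as in COMET-style models. A minimal sketch follows; the template and example tuples are made up for illustration.

```python
# Sketch: verbalizing KG tuples into LM training text, COMET-style.
tuples = [
    ("PersonX buys groceries", "xNeed", "to go to the store"),
    ("PersonX buys groceries", "xEffect", "has food at home"),
]

def verbalize(head, relation, tail):
    # One common format: "<head> <relation> [GEN] <tail>" (assumed template)
    return f"{head} {relation} [GEN] {tail}"

training_lines = [verbalize(*t) for t in tuples]
for line in training_lines:
    print(line)  # each line becomes one LM finetuning example
```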

Scarecrow: A Framework for Scrutinizing Machine Text

Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Yejin Choi
2021
arXiv

Modern neural text generation systems can produce remarkably fluent and grammatical texts. While earlier language models suffered from repetition and syntactic errors, the errors made by…