Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Is GPT-3 Text Indistinguishable from Human Text? SCARECROW: A Framework for Scrutinizing Machine Text

Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Yejin Choi
2022
ACL

Modern neural text generation systems can produce remarkably fluent and grammatical texts. While earlier language models suffered from repetition and syntactic errors, the errors made by contemporary… 

Situated Dialogue Learning through Procedural Environment Generation

Prithviraj Ammanabrolu, Renee Jia, Mark O. Riedl
2022
ACL

We teach goal-driven agents to interactively act and speak in situated environments by training on generated curriculums. Our agents operate in LIGHT (Urbanek et al. 2019)—a large-scale… 

Draw Me a Flower: Grounding Formal Abstract Structures Stated in Informal Natural Language

Royi Lachmy, Valentina Pyatkin, Reut Tsarfaty
2022
ACL

Forming and interpreting abstraction is a core process in human communication. In particular, when giving and performing complex instructions stated in natural language (NL), people may naturally… 

ACCoRD: A Multi-Document Approach to Generating Diverse Descriptions of Scientific Concepts

Sonia K. Murthy, Kyle Lo, Daniel King, Doug Downey
2022
arXiv

Systems that can automatically define unfamiliar terms hold the promise of improving the accessibility of scientific texts, especially for readers who may lack prerequisite background knowledge.… 

Understanding Dataset Difficulty with 𝒱-Usable Information

Kawin Ethayarajh, Yejin Choi, Swabha Swayamdipta
2022
ICML

Estimating the difficulty of a dataset typically involves comparing state-of-the-art models to humans; the bigger the performance gap, the harder the dataset is said to be. However, this comparison… 

PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization

Wen Xiao, Iz Beltagy, G. Carenini, Arman Cohan
2022
ACL

We introduce PRIMERA, a pre-trained model for multi-document representation with a focus on summarization that reduces the need for dataset-specific architectures and large amounts of fine-tuning… 

Better Retrieval May Not Lead to Better Question Answering

Zhengzhong Liang, Tushar Khot, Steven Bethard, Ashish Sabharwal
2022
arXiv

Considerable progress has been made recently in open-domain question answering (QA) problems, which require Information Retrieval (IR) and Reading Comprehension (RC). A popular approach to improve… 

Scaling Creative Inspiration with Fine-Grained Functional Facets of Product Ideas

Tom Hope, Ronen Tamari, Hyeonsu Kang, Dafna Shahaf
2022
CHI

Web-scale repositories of products, patents and scientific papers offer an opportunity for building automated systems that scour millions of existing ideas and assist users in discovering novel… 

Saturated Transformers are Constant-Depth Threshold Circuits

William Merrill, Ashish Sabharwal, Noah A. Smith
2022
TACL

Transformers have become a standard neural network architecture for many NLP problems, motivating theoretical analysis of their power in terms of formal languages. Recent work has shown that… 

The Curious Case of Commonsense Intelligence

Yejin Choi
2022
Daedalus

Commonsense intelligence is a long-standing puzzle in AI. Despite considerable advances in deep learning, AI continues to be narrow and brittle due to its lack of common sense. Why is…