Ai2 Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models

Tianbao Xie, Chen Henry Wu, Peng Shi, Tao Yu
2022
EMNLP

Structured knowledge grounding (SKG) leverages structured knowledge to complete user requests, such as semantic parsing over databases and question answering over knowledge bases. Since the inputs… 

Unsupervised Learning of Hierarchical Conversation Structure

Bo-Ru Lu, Yushi Hu, Hao Cheng, Mari Ostendorf
2022
Findings of EMNLP

Human conversations can evolve in many different ways, creating challenges for automatic understanding and summarization. Goal-oriented conversations often have meaningful sub-dialogue structure,… 

WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation

Alisa Liu, Swabha Swayamdipta, Noah A. Smith, Yejin Choi
2022
Findings of EMNLP

A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. We introduce a… 

What Makes Instruction Learning Hard? An Investigation and a New Challenge in a Synthetic Environment

Matthew Finlayson, Kyle Richardson, Ashish Sabharwal, Peter Clark
2022
EMNLP

The instruction learning paradigm—where a model learns to perform new tasks from task descriptions alone—has become popular in general-purpose model research. The capabilities of large transformer… 

Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection

Suchin Gururangan, Dallas Card, Sarah K. Dreier, Noah A. Smith
2022
EMNLP

Language models increasingly rely on massive web dumps for diverse text data. However, these sources are rife with undesirable content. As such, resources like Wikipedia, books, and news often… 

Breakpoint Transformers for Modeling and Tracking Intermediate Beliefs

Kyle Richardson, Ronen Tamari, Oren Sultan, Ashish Sabharwal
2022
EMNLP

Can we teach natural language understanding models to track their beliefs through intermediate points in text? We propose a representation learning framework called breakpoint modeling that allows… 

Learning to Decompose: Hypothetical Question Decomposition Based on Comparable Texts

Ben Zhou, Kyle Richardson, Xiaodong Yu, Dan Roth
2022
EMNLP

Explicit decomposition modeling, which involves breaking down complex tasks into more straightforward and often more interpretable sub-tasks, has long been a central theme in developing robust and… 

Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE

Yuling Gu, Yao Fu, Valentina Pyatkin, Peter Clark
2022
EMNLP • The Third Workshop on Figurative Language Processing

Figurative language (e.g., “he flew like the wind”) is challenging to understand, as it is hard to tell what implicit information is being conveyed from the surface form alone. We hypothesize that… 

SciFact-Open: Towards open-domain scientific claim verification

David Wadden, Kyle Lo, Bailey Kuehl, Hannaneh Hajishirzi
2022
EMNLP

While research on scientific claim verification has led to the development of powerful systems that appear to approach human performance, these approaches have yet to be tested in a realistic… 

Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs

Maarten Sap, Ronan Le Bras, Daniel Fried, Yejin Choi
2022
EMNLP

Social intelligence and Theory of Mind (ToM), i.e., the ability to reason about the different mental states, intents, and reactions of all people involved, allow humans to effectively navigate and…