
Papers

Explore a selection of our published work on a variety of key research challenges in AI.


UnifiedQA: Crossing Format Boundaries With a Single QA System

Daniel Khashabi, Sewon Min, Tushar Khot, Hannaneh Hajishirzi
2020
Findings of EMNLP

Question answering (QA) tasks have been posed using a variety of formats, such as extractive span selection, multiple choice, etc. This has led to format-specialized models, and even to an implicit… 

UnQovering Stereotyping Biases via Underspecified Questions

Tao Li, Tushar Khot, Daniel Khashabi, Vivek Srikumar
2020
Findings of EMNLP

While language embeddings have been shown to have stereotyping biases, how these biases affect downstream question answering (QA) models remains unexplored. We present UNQOVER, a general framework… 

Unsupervised Commonsense Question Answering with Self-Talk

Vered Shwartz, Peter West, Ronan Le Bras, Yejin Choi
2020
EMNLP

Natural language understanding involves reading between the lines with implicit background knowledge. Current systems either rely on pre-trained language models as the sole implicit source of world… 

What-if I ask you to explain: Explaining the effects of perturbations in procedural text

Dheeraj Rajagopal, Niket Tandon, Peter Clark, Eduard H. Hovy
2020
Findings of EMNLP

We address the task of explaining the effects of perturbations in procedural text, an important test of process comprehension. Consider a passage describing a rabbit's life-cycle: humans can easily… 

Writing Strategies for Science Communication: Data and Computational Analysis

Tal August, Lauren Kim, Katharina Reinecke, Noah A. Smith
2020
EMNLP

Communicating complex scientific ideas without misleading or overwhelming the public is challenging. While science communication guides exist, they rarely offer empirical evidence for how their… 

X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers

Jaemin Cho, Jiasen Lu, Dustin Schwenk, Aniruddha Kembhavi
2020
EMNLP

Mirroring the success of masked language models, vision-and-language counterparts like VILBERT, LXMERT, and UNITER have achieved state-of-the-art performance on a variety of multimodal discriminative… 

"You are grounded!": Latent Name Artifacts in Pre-trained Language Models

Vered Shwartz, Rachel Rudinger, Oyvind Tafjord
2020
EMNLP

Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models. We focus on artifacts associated with the representation of given names (e.g.,… 

ZEST: Zero-shot Learning from Text Descriptions using Textual Similarity and Visual Summarization

Tzuf Paz-Argaman, Y. Atzmon, Gal Chechik, Reut Tsarfaty
2020
Findings of EMNLP

We study the problem of recognizing visual entities from the textual descriptions of their classes. Specifically, given birds' images with free-text descriptions of their species, we learn to… 

Rearrangement: A Challenge for Embodied AI

Dhruv Batra, A. X. Chang, S. Chernova, Hao Su
2020
arXiv

We describe a framework for research and evaluation in Embodied AI. Our proposal is based on a canonical task: Rearrangement. A standard task can focus the development of new techniques and serve as… 

ABNIRML: Analyzing the Behavior of Neural IR Models

Sean MacAvaney, Sergey Feldman, Nazli Goharian, Arman Cohan
2020
TACL

Numerous studies have demonstrated the effectiveness of pretrained contextualized language models such as BERT and T5 for ad-hoc search. However, it is not well understood why these methods are so…