Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Is Multihop QA in DiRe Condition? Measuring and Reducing Disconnected Reasoning

H. Trivedi, N. Balasubramanian, Tushar Khot, A. Sabharwal
2020
EMNLP

Has there been real progress in multi-hop question-answering? Models often exploit dataset artifacts to produce correct answers, without connecting information across multiple supporting facts. This…

X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers

Jaemin Cho, Jiasen Lu, Dustin Schwenk, and Aniruddha Kembhavi
2020
EMNLP

Mirroring the success of masked language models, vision-and-language counterparts like ViLBERT, LXMERT, and UNITER have achieved state-of-the-art performance on a variety of multimodal discriminative…

SLEDGE-Z: A Zero-Shot Baseline for COVID-19 Literature Search

S. MacAvaney, Arman Cohan, N. Goharian
2020
EMNLP

With worldwide concerns surrounding the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), there is a rapidly growing body of literature on the virus. Clinicians, researchers, and…

QADiscourse - Discourse Relations as QA Pairs: Representation, Crowdsourcing and Baselines

Valentina Pyatkin, Ayal Klein, Reut Tsarfaty, Ido Dagan
2020
EMNLP

Discourse relations describe how two propositions relate to one another, and identifying them automatically is an integral part of natural language understanding. However, annotating discourse… 

A Simple and Effective Model for Answering Multi-span Questions

Elad Segal, Avia Efrat, Mor Shoham, Jonathan Berant
2020
EMNLP

Models for reading comprehension (RC) commonly restrict their output space to the set of all single contiguous spans from the input, in order to simplify the learning problem and avoid the need for…

Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering.

Harsh Jhamtani, P. Clark
2020
EMNLP

Despite the rapid progress in multihop question-answering (QA), models still have trouble explaining why an answer is correct, with limited explanation training data available to learn from. To… 

More Bang for Your Buck: Natural Perturbation for Robust Question Answering

Daniel Khashabi, Tushar Khot, Ashish Sabharwal
2020
EMNLP

While recent models have achieved human-level scores on many NLP datasets, we observe that they remain considerably sensitive to small changes in input. As an alternative to the standard approach of…

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

Lianhui Qin, Vered Shwartz, P. West, Yejin Choi
2020
EMNLP

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the… 

CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning

Bill Yuchen Lin, M. Shen, Wangchunshu Zhou, X. Ren
2020
EMNLP

Recently, large-scale pre-trained language models have demonstrated impressive performance on several commonsense benchmark datasets. However, building machines with commonsense to compose…

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs

Ana Marasović, Chandra Bhagavatula, J. Park, Yejin Choi
2020
Findings of EMNLP

Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on…