Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


What's Missing: A Knowledge Gap Guided Approach for Multi-hop Question Answering

Tushar Khot, Ashish Sabharwal, Peter Clark
2019
EMNLP

Multi-hop textual question answering requires combining information from multiple sentences. We focus on a natural setting where, unlike typical reading comprehension, only partial information is… 

Evaluating Question Answering Evaluation

Anthony Chen, Gabriel Stanovsky, Sameer Singh, Matt Gardner
2019
EMNLP • MRQA Workshop

As the complexity of question answering (QA) datasets evolves, moving away from restricted formats like span extraction and multiple-choice (MC) to free-form answer generation, it is imperative to… 

Reasoning Over Paragraph Effects in Situations

Kevin Lin, Oyvind Tafjord, Peter Clark, Matt Gardner
2019
EMNLP • MRQA Workshop

A key component of successfully reading a passage of text is the ability to apply knowledge gained from the passage to a new situation. In order to facilitate progress on this kind of reading, we… 

ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension

Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Matt Gardner
2019
EMNLP • MRQA Workshop

Reading comprehension is one of the crucial tasks for furthering research in natural language understanding. Many diverse reading comprehension datasets have recently been introduced to study… 

On Making Reading Comprehension More Comprehensive

Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Sewon Min
2019
EMNLP • MRQA Workshop

Machine reading comprehension, the task of evaluating a machine’s ability to comprehend a passage of text, has seen a surge in popularity in recent years. There are many datasets that are targeted… 

Knowledge Enhanced Contextual Word Representations

Matthew E. Peters, Mark Neumann, Robert L. Logan, Noah A. Smith
2019
EMNLP

Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those… 

Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning

Pradeep Dasigi, Nelson F. Liu, Ana Marasovic, Matt Gardner
2019
EMNLP

Machine comprehension of texts longer than a single sentence often requires coreference resolution. However, most current reading comprehension benchmarks do not contain complex coreferential… 

Show Your Work: Improved Reporting of Experimental Results

Jesse Dodge, Suchin Gururangan, Dallas Card, Noah A. Smith
2019
EMNLP

Research in natural language processing proceeds, in part, by demonstrating that new models achieve superior performance (e.g., accuracy) on held-out test data, compared to previous results. In this… 

RNN Architecture Learning with Sparse Regularization

Jesse Dodge, Roy Schwartz, Hao Peng, Noah A. Smith
2019
EMNLP

Neural models for NLP typically use large numbers of parameters to reach state-of-the-art performance, which can lead to excessive memory usage and increased runtime. We present a structure learning… 

PaLM: A Hybrid Parser and Language Model

Hao Peng, Roy Schwartz, Noah A. Smith
2019
EMNLP

We present PaLM, a hybrid parser and neural language model. Building on an RNN language model, PaLM adds an attention layer over text spans in the left context. An unsupervised constituency parser…