Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning

Pradeep Dasigi, Nelson F. Liu, Ana Marasovic, Matt Gardner
2019
EMNLP

Machine comprehension of texts longer than a single sentence often requires coreference resolution. However, most current reading comprehension benchmarks do not contain complex coreferential… 

RNN Architecture Learning with Sparse Regularization

Jesse Dodge, Roy Schwartz, Hao Peng, Noah A. Smith
2019
EMNLP

Neural models for NLP typically use large numbers of parameters to reach state-of-the-art performance, which can lead to excessive memory usage and increased runtime. We present a structure learning… 

SciBERT: A Pretrained Language Model for Scientific Text

Iz Beltagy, Kyle Lo, Arman Cohan
2019
EMNLP

Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et al., 2018) to… 

Show Your Work: Improved Reporting of Experimental Results

Jesse Dodge, Suchin Gururangan, Dallas Card, Noah A. Smith
2019
EMNLP

Research in natural language processing proceeds, in part, by demonstrating that new models achieve superior performance (e.g., accuracy) on held-out test data, compared to previous results. In this… 

Social IQA: Commonsense Reasoning about Social Interactions

Maarten Sap, Hannah Rashkin, Derek Chen, Yejin Choi
2019
EMNLP

We introduce Social IQa, the first large-scale benchmark for commonsense reasoning about social situations. Social IQa contains 38,000 multiple choice questions for probing emotional and social… 

SpanBERT: Improving Pre-training by Representing and Predicting Spans

Mandar Joshi, Danqi Chen, Yinhan Liu, Omer Levy
2019
EMNLP

We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random… 

Topics to Avoid: Demoting Latent Confounds in Text Classification

Sachin Kumar, Shuly Wintner, Noah A. Smith, Yulia Tsvetkov
2019
EMNLP

Despite impressive performance on many text classification tasks, deep neural networks tend to learn frequent superficial patterns that are specific to the training data and do not always generalize… 

Transfer Learning Between Related Tasks Using Expected Label Proportions

Matan Ben Noach, Yoav Goldberg
2019
EMNLP

Deep learning systems thrive on an abundance of labeled training data, but such data is not always available, calling for alternative methods of supervision. One such method is expectation… 

Universal Adversarial Triggers for Attacking and Analyzing NLP

Eric Wallace, Shi Feng, Nikhil Kandpal, Sameer Singh
2019
EMNLP

Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a… 

WIQA: A dataset for "What if..." reasoning over procedural text

Niket Tandon, Bhavana Dalvi Mishra, Keisuke Sakaguchi, Peter Clark
2019
EMNLP

We introduce WIQA, the first large-scale dataset of "What if..." questions over procedural text. WIQA contains three parts: a collection of paragraphs each describing a process, e.g., beach erosion;…