Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge

Alon Talmor, Jonathan Herzig, Nicholas Lourie, Jonathan Berant
2019
NAACL

When answering a question, people often draw upon their rich world knowledge in addition to the particular context. Recent work has focused primarily on answering questions given some relevant… 

DiscoFuse: A Large-Scale Dataset for Discourse-based Sentence Fusion

Mor Geva, Eric Malmi, Idan Szpektor, Jonathan Berant
2019
NAACL

Sentence fusion is the task of joining several independent sentences into a single coherent text. Current datasets for sentence fusion are small and insufficient for training modern neural models.… 

DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs

Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Matt Gardner
2019
NAACL-HLT

Reading comprehension has recently seen rapid progress, with systems matching humans on the most popular datasets for the task. However, a large body of work has highlighted the brittleness of these… 

Evaluating Text GANs as Language Models

Guy Tevet, Gavriel Habib, Vered Shwartz, Jonathan Berant
2019
NAACL

Generative Adversarial Networks (GANs) are a promising approach for text generation that, unlike traditional language models (LM), does not suffer from the problem of “exposure bias”. However, a… 

Inoculation by Fine-Tuning: A Method for Analyzing Challenge Datasets

Nelson F. Liu, Roy Schwartz, Noah Smith
2019
NAACL

Several datasets have recently been constructed to expose brittleness in models trained on existing benchmarks. While model performance on these challenge datasets is significantly lower compared… 

Iterative Search for Weakly Supervised Semantic Parsing

Pradeep Dasigi, Matt Gardner, Shikhar Murty, Ed Hovy
2019
NAACL

Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical… 

Linguistic Knowledge and Transferability of Contextual Representations

Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Noah A. Smith
2019
NAACL

Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of… 

Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them

Hila Gonen, Yoav Goldberg
2019
NAACL

Word embeddings are widely used in NLP for a vast range of tasks. It was shown that word embeddings derived from text corpora reflect gender biases in society. This phenomenon is pervasive and… 

MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms

Aida Amini, Saadia Gabriel, Peter Lin, Hannaneh Hajishirzi
2019
NAACL

We introduce a large-scale dataset of math word problems and an interpretable neural math problem solver by learning to map problems to their operation programs. Due to annotation challenges,… 

Polyglot Contextual Representations Improve Crosslingual Transfer

Phoebe Mulcaire, Jungo Kasai, Noah A. Smith
2019
NAACL

We introduce a method to produce multilingual contextual word representations by training a single language model on text from multiple languages. Our method combines the advantages of contextual…