Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Paired Examples as Indirect Supervision in Latent Decision Models

Nitish Gupta, Sameer Singh, Matt Gardner, and Dan Roth
2021
EMNLP

Compositional, structured models are appealing because they explicitly decompose problems and provide interpretable intermediate outputs that give confidence that the model is not simply latching… 

Mitigating False-Negative Contexts in Multi-document Question Answering with Retrieval Marginalization

Ansong Ni, Matt Gardner, Pradeep Dasigi
2021
EMNLP

Question Answering (QA) tasks requiring information from multiple documents often rely on a retrieval model to identify relevant information from which the reasoning model can derive an answer. The… 

Parameter Norm Growth During Training of Transformers

William Merrill, Vivek Ramanujan, Yoav Goldberg, Noah A. Smith
2021
EMNLP

The capacity of neural networks like the widely adopted transformer is known to be very high. Evidence is emerging that they learn successfully due to inductive bias in the training routine,… 

Probing Across Time: What Does RoBERTa Know and When?

Leo Z. Liu, Yizhong Wang, Jungo Kasai, Noah A. Smith
2021
Findings of EMNLP

Models of language trained on very large corpora have been demonstrated to be useful for NLP. As fixed artifacts, they have become the object of intense study, with many researchers “probing” the extent…

CDLM: Cross-Document Language Modeling

Avi Caciularu, Arman Cohan, Iz Beltagy, Ido Dagan
2021
Findings of EMNLP

We introduce a new pretraining approach for language models that is geared to support multi-document NLP tasks. Our cross-document language model (CD-LM) improves masked language modeling for these…

Explaining Answers with Entailment Trees

Bhavana Dalvi, Peter A. Jansen, Oyvind Tafjord, Peter Clark
2021
EMNLP

Our goal, in the context of open-domain textual question-answering (QA), is to explain answers by not just listing supporting textual evidence (“rationales”), but also showing how such evidence… 

Understanding Mention Detector-Linker Interaction in Neural Coreference Resolution

Zhaofeng Wu and Matt Gardner
2021
EMNLP • CRAC

Despite significant recent progress in coreference resolution, the quality of current state-of-the-art systems still considerably trails behind human-level performance. Using the CoNLL-2012 and… 

How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI

A. Kalyan, Abhinav Kumar, Arjun Chandrasekaran, Peter Clark
2021
EMNLP

Many real-world problems require the combined application of multiple reasoning abilities employing suitable abstractions, commonsense knowledge, and creative synthesis of problem-solving… 

Surface Form Competition: Why the Highest Probability Answer Isn't Always Right

Ari Holtzman, Peter West, Vered Shwartz, Luke Zettlemoyer
2021
EMNLP

Large language models have shown promising results in zero-shot settings (Brown et al., 2020; Radford et al., 2019). For example, they can perform multiple choice tasks simply by conditioning on a… 

Sister Help: Data Augmentation for Frame-Semantic Role Labeling

Ayush Pancholy, Miriam R. L. Petruck, Swabha Swayamdipta
2021
EMNLP • LAW-DMR Workshop

While FrameNet is widely regarded as a rich resource of semantics in natural language processing, a major criticism concerns its lack of coverage and the relative paucity of its labeled data…