
Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


proScript: Partially Ordered Scripts Generation

Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi
2021
EMNLP • Findings

Scripts, standardized event sequences describing typical everyday activities, have been shown to help understand narratives by providing expectations, resolving ambiguity, and filling in unstated… 

Contrastive Explanations for Model Interpretability

Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yoav Goldberg
2021
EMNLP

Contrastive explanations clarify why an event occurred in contrast to another. They are inherently more intuitive for humans to both produce and comprehend. We propose a methodology to produce… 

Back to Square One: Bias Detection, Training and Commonsense Disentanglement in the Winograd Schema

Yanai Elazar, Hongming Zhang, Yoav Goldberg, Dan Roth
2021
EMNLP

The Winograd Schema (WS) has been proposed as a test for measuring commonsense capabilities of models. Recently, pre-trained language model-based approaches have boosted performance on some WS… 

Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus

Jesse Dodge, Maarten Sap, Ana Marasović, Matt Gardner
2021
EMNLP

As language models are trained on ever more text, researchers are turning to some of the largest corpora available. Unlike most other types of datasets in NLP, large unlabeled text corpora are often… 

Generative Context Pair Selection for Multi-hop Question Answering

Dheeru Dua, Cicero Nogueira dos Santos, Patrick Ng, Sameer Singh
2021
EMNLP

Compositional reasoning tasks like multi-hop question answering require making latent decisions to arrive at the final answer, given a question. However, crowdsourced datasets often capture only a slice… 

Learning with Instance Bundles for Reading Comprehension

Dheeru Dua, Pradeep Dasigi, Sameer Singh, Matt Gardner
2021
EMNLP

When training most modern reading comprehension models, all the questions associated with a context are treated as being independent from each other. However, closely related questions and their… 

Paired Examples as Indirect Supervision in Latent Decision Models

Nitish Gupta, Sameer Singh, Matt Gardner, Dan Roth
2021
EMNLP

Compositional, structured models are appealing because they explicitly decompose problems and provide interpretable intermediate outputs that give confidence that the model is not simply latching… 

Mitigating False-Negative Contexts in Multi-document Question Answering with Retrieval Marginalization

Ansong Ni, Matt Gardner, Pradeep Dasigi
2021
EMNLP

Question Answering (QA) tasks requiring information from multiple documents often rely on a retrieval model to identify relevant information from which the reasoning model can derive an answer. The… 

Parameter Norm Growth During Training of Transformers

William Merrill, Vivek Ramanujan, Yoav Goldberg, Noah A. Smith
2021
EMNLP

The capacity of neural networks like the widely adopted transformer is known to be very high. Evidence is emerging that they learn successfully due to inductive bias in the training routine,… 

Probing Across Time: What Does RoBERTa Know and When?

Leo Z. Liu, Yizhong Wang, Jungo Kasai, Noah A. Smith
2021
EMNLP • Findings

Language models trained on very large corpora have been demonstrated to be useful for NLP. As fixed artifacts, they have become the object of intense study, with many researchers “probing” the extent…