Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Back to Square One: Artifact Detection, Training and Commonsense Disentanglement in the Winograd Schema
The Winograd Schema (WS) has been proposed as a test for measuring commonsense capabilities of models. Recently, pre-trained language model-based approaches have boosted performance on some WS…
Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus
As language models are trained on ever more text, researchers are turning to some of the largest corpora available. Unlike most other types of datasets in NLP, large unlabeled text corpora are often…
Generative Context Pair Selection for Multi-hop Question Answering
Compositional reasoning tasks like multi-hop question answering require making latent decisions to get the final answer, given a question. However, crowdsourced datasets often capture only a slice…
Learning with Instance Bundles for Reading Comprehension
When training most modern reading comprehension models, all the questions associated with a context are treated as independent of each other. However, closely related questions and their…
Paired Examples as Indirect Supervision in Latent Decision Models
Compositional, structured models are appealing because they explicitly decompose problems and provide interpretable intermediate outputs that give confidence that the model is not simply latching…
Mitigating False-Negative Contexts in Multi-document Question Answering with Retrieval Marginalization
Question Answering (QA) tasks requiring information from multiple documents often rely on a retrieval model to identify relevant information from which the reasoning model can derive an answer. The…
Parameter Norm Growth During Training of Transformers
The capacity of neural networks like the widely adopted transformer is known to be very high. Evidence is emerging that they learn successfully due to inductive bias in the training routine,…
Probing Across Time: What Does RoBERTa Know and When?
Models of language trained on very large corpora have been shown to be useful for NLP. As fixed artifacts, they have become the object of intense study, with many researchers “probing” the extent…
CDLM: Cross-Document Language Modeling
We introduce a new pretraining approach for language models that are geared to support multi-document NLP tasks. Our cross-document language model (CD-LM) improves masked language modeling for these…
Explaining Answers with Entailment Trees
Our goal, in the context of open-domain textual question-answering (QA), is to explain answers not just by listing supporting textual evidence (“rationales”), but also by showing how such evidence…