Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
Although pretrained language models (PTLMs) have been shown to contain significant amounts of world knowledge, they can still produce inconsistent answers to questions when probed, even after using…
CDLM: Cross-Document Language Modeling
We introduce a new pretraining approach for language models geared to support multi-document NLP tasks. Our cross-document language model (CDLM) improves masked language modeling for these…
Contrastive Explanations for Model Interpretability
Contrastive explanations clarify why an event occurred in contrast to another. They are inherently more intuitive for humans to both produce and comprehend. We propose a methodology to produce…
Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus
As language models are trained on ever more text, researchers are turning to some of the largest corpora available. Unlike most other types of datasets in NLP, large unlabeled text corpora are often…
Explaining Answers with Entailment Trees
Our goal, in the context of open-domain textual question-answering (QA), is to explain answers by not just listing supporting textual evidence (“rationales”), but also showing how such evidence…
Finetuning Pretrained Transformers into RNNs
Transformers have outperformed recurrent neural networks (RNNs) in natural language generation. But this comes with a significant computational cost, as the attention mechanism’s complexity scales…
Generative Context Pair Selection for Multi-hop Question Answering
Compositional reasoning tasks like multi-hop question answering require making latent decisions to get the final answer, given a question. However, crowdsourced datasets often capture only a slice…
GooAQ: Open Question Answering with Diverse Answer Types
While day-to-day questions come with a variety of answer types, the current question-answering (QA) literature has failed to adequately address the answer diversity of questions. To this end, we…
How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI
Many real-world problems require the combined application of multiple reasoning abilities employing suitable abstractions, commonsense knowledge, and creative synthesis of problem-solving…
Learning with Instance Bundles for Reading Comprehension
When training most modern reading comprehension models, all the questions associated with a context are treated as independent of each other. However, closely related questions and their…