Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
GooAQ: Open Question Answering with Diverse Answer Types
While day-to-day questions come with a variety of answer types, the current question answering (QA) literature has failed to adequately address the answer diversity of questions. To this end, we…
How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI
Many real-world problems require the combined application of multiple reasoning abilities employing suitable abstractions, commonsense knowledge, and creative synthesis of problem-solving…
Learning with Instance Bundles for Reading Comprehension
When training most modern reading comprehension models, all the questions associated with a context are treated as being independent from each other. However, closely related questions and their…
Measuring Association Between Labels and Free-Text Rationales
Interpretable NLP has taken an increasing interest in ensuring that explanations are faithful to the model’s decision-making process. This property is crucial for machine learning researchers and…
Mitigating False-Negative Contexts in Multi-document Question Answering with Retrieval Marginalization
Question Answering (QA) tasks requiring information from multiple documents often rely on a retrieval model to identify relevant information from which the reasoning model can derive an answer. The…
Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences
In social settings, much of human behavior is governed by unspoken rules of conduct. For artificial systems to be fully integrated into social environments, adherence to such norms is a central…
MS²: Multi-Document Summarization of Medical Studies
To assess the effectiveness of any medical intervention, researchers must conduct a time-intensive and highly manual literature review. NLP systems can help to automate or assist in parts of this…
Paired Examples as Indirect Supervision in Latent Decision Models
Compositional, structured models are appealing because they explicitly decompose problems and provide interpretable intermediate outputs that give confidence that the model is not simply latching…
Parameter Norm Growth During Training of Transformers
The capacity of neural networks like the widely adopted transformer is known to be very high. Evidence is emerging that they learn successfully due to inductive bias in the training routine,…
Probing Across Time: What Does RoBERTa Know and When?
Models of language trained on very large corpora have been demonstrated to be useful for NLP. As fixed artifacts, they have become the object of intense study, with many researchers “probing” the extent…