Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
A key limitation of current datasets for multi-hop reasoning is that the steps required to answer a question are stated explicitly in the question itself. In this work, we introduce STRATEGYQA, a question…
Few-Shot Question Answering by Pretraining Span Selection
In a number of question answering (QA) benchmarks, pretrained models have reached human parity through fine-tuning on the order of 100,000 annotated questions and answers. We explore the more…
Effective Attention Sheds Light On Interpretability
An attention matrix of a transformer self-attention sublayer can provably be decomposed into two components, and only one of them (effective attention) contributes to the model output. This leads us…
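For intuition, here is a minimal NumPy sketch of that decomposition (our illustration, not the authors' code): the sublayer output is A @ V, so any component of A whose rows lie in the left null space of the value matrix V cannot affect the output and can be projected out.

```python
import numpy as np

def effective_attention(A, V, tol=1e-10):
    """Remove the part of A lying in the left null space of V.

    A: (n, n) attention weights; V: (n, d) value matrix.
    Rows x with x @ V == 0 cannot change the sublayer output A @ V.
    """
    U, s, _ = np.linalg.svd(V, full_matrices=True)
    rank = int(np.sum(s > tol))
    N = U[:, rank:]        # orthonormal basis of the left null space of V
    P = N @ N.T            # projector onto that null space
    return A - A @ P       # the "effective" component

# Sanity check on random inputs: the sublayer output is unchanged.
rng = np.random.default_rng(0)
A, V = rng.random((6, 6)), rng.random((6, 3))
assert np.allclose(effective_attention(A, V) @ V, A @ V)
```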
Neural Extractive Search
Domain experts often need to extract structured information from large corpora. We advocate for a search paradigm called “extractive search”, in which a search query is enriched with capture-slots,…
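As a toy illustration of a capture-slot query (the query and syntax below are our assumed example; the paper's point is precisely that a neural matcher generalizes beyond this kind of exact pattern matching):

```python
import re

# Toy capture-slot query: "<drug> is a treatment for <disease>".
# The named groups are the capture slots; the rest is fixed context.
pattern = re.compile(r"(?P<drug>\w+) is a treatment for (?P<disease>\w+)")

corpus = [
    "Aspirin is a treatment for headaches .",
    "The committee met on Tuesday .",
]
for sentence in corpus:
    m = pattern.search(sentence)
    if m:
        print(m.group("drug"), "->", m.group("disease"))
# Aspirin -> headaches
```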
Shortformer: Better Language Modeling using Shorter Inputs
We explore the benefits of decreasing the input length of transformers. First, we show that initially training the model on short subsequences, before moving on to longer ones, both reduces overall…
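A minimal sketch of the staged-length idea, where training starts on short subsequences and switches to longer ones partway through (the lengths and step counts below are placeholders, not the paper's settings):

```python
def staged_length_batches(tokens, stages):
    """Yield training subsequences, shorter ones first.

    tokens: flat list of token ids (the concatenated corpus stream).
    stages: list of (seq_len, num_steps) pairs, e.g. [(128, 1000), (3072, 500)].
    """
    pos = 0
    for seq_len, num_steps in stages:
        for _ in range(num_steps):
            if pos + seq_len > len(tokens):
                pos = 0                      # wrap around the stream
            yield tokens[pos:pos + seq_len]
            pos += seq_len

# Usage: feed each yielded subsequence to the language-model training step.
for batch in staged_length_batches(list(range(10_000)), [(128, 3), (512, 2)]):
    print(len(batch))
```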
PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World
We propose PIGLeT: a model that learns physical commonsense knowledge through interaction, and then uses this knowledge to ground language. We factorize PIGLeT into a physical dynamics model and a…
All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text
Human evaluations are typically considered the gold standard in natural language generation, but as models' fluency improves, how well can evaluators detect and judge machine-generated text? We run…
How effective is BERT without word ordering? Implications for language understanding and data privacy
Ordered word sequences contain the rich structures that define language. However, it is often unclear whether or how modern pretrained language models utilize these structures. We show that the token…
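A common way to probe this is to destroy word order before encoding and measure how much task performance survives. A sketch of that shuffled-input setup (not necessarily the authors' exact protocol):

```python
import random

def shuffle_word_order(sentence, seed=0):
    """Return the sentence with its words randomly permuted,
    keeping the bag of words (and hence token identities) intact."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

print(shuffle_word_order("the cat sat on the mat"))
# e.g. "on mat the sat cat the"
```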
Edited Media Understanding Frames: Reasoning about the Intent and Implications of Visual Disinformation
Multimodal disinformation, from 'deepfakes' to simple edits that deceive, is an important societal problem. Yet at the same time, the vast majority of media edits are harmless, such as a filtered…
PAWLS: PDF Annotation With Labels and Structure
Adobe’s Portable Document Format (PDF) is a popular way of distributing view-only documents with rich visual markup. This presents a challenge to NLP practitioners who wish to use the information…
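For context on why this is hard: a PDF stores positioned glyphs rather than a logical text stream, so tooling must first recover tokens along with their page coordinates. A minimal sketch using the generic pdfplumber library (not PAWLS itself; "paper.pdf" is a placeholder path):

```python
import pdfplumber

# Extract each word with its bounding box; logical structure (sections,
# reading order, labels) is what annotation tools layer on top of this.
with pdfplumber.open("paper.pdf") as pdf:
    first_page = pdf.pages[0]
    for word in first_page.extract_words():
        print(word["text"], (word["x0"], word["top"], word["x1"], word["bottom"]))
```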