Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation
State-of-the-art neural machine translation models generate outputs autoregressively, where every step conditions on the previously generated tokens. This sequential nature causes inherent decoding…
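To make that sequential dependency concrete, here is a minimal sketch of greedy autoregressive decoding; `model` is a hypothetical callable returning next-token logits, not the paper's implementation.

```python
# Greedy autoregressive decoding: step t cannot begin until step t-1 has
# produced its token, so decoding latency grows with output length.
def greedy_decode(model, src_ids, bos_id, eos_id, max_len=64):
    out = [bos_id]
    for _ in range(max_len):
        logits = model(src_ids, out)  # conditions on all tokens so far
        next_id = max(range(len(logits)), key=logits.__getitem__)
        out.append(next_id)
        if next_id == eos_id:
            break
    return out
```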
Random Feature Attention
Transformers are state-of-the-art models for a variety of sequence modeling tasks. At their core is an attention function which models pairwise interactions between the inputs at every timestep.…
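A minimal numpy sketch of the random-feature idea, under simplifying assumptions: random Fourier features for the Gaussian kernel stand in for the softmax's exponential, and queries/keys are assumed roughly unit-norm. The point of the factorization is that attention becomes linear, rather than quadratic, in sequence length.

```python
import numpy as np

def random_features(x, W):
    # Random Fourier features: phi(x)·phi(y) ≈ exp(-||x - y||^2 / 2)
    # for W with rows drawn from N(0, I).
    proj = x @ W.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1) / np.sqrt(W.shape[0])

def rfa(q, k, v, W, eps=1e-6):
    # With phi(q)·phi(k) standing in for exp(q·k), the softmax numerator
    # and denominator both factor into single sums over the keys.
    phi_q, phi_k = random_features(q, W), random_features(k, W)
    num = phi_q @ (phi_k.T @ v)            # (n, d_v), linear in n
    den = phi_q @ phi_k.sum(axis=0) + eps  # (n,)
    return num / den[:, None]

# Usage: n=8 positions, model dim d=16, D=64 random features.
rng = np.random.default_rng(0)
n, d, D = 8, 16, 64
q = k = rng.standard_normal((n, d)) / np.sqrt(d)
v = rng.standard_normal((n, d))
out = rfa(q, k, v, rng.standard_normal((D, d)))
```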
All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text
Human evaluations are typically considered the gold standard in natural language generation, but as models' fluency improves, how well can evaluators detect and judge machine-generated text? We run…
Effective Attention Sheds Light On Interpretability
An attention matrix of a transformer self-attention sublayer can provably be decomposed into two components, and only one of them (effective attention) contributes to the model output. This leads us…
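A small numpy sketch of one way to realize such a decomposition, assuming the split is taken with respect to the left null space of the value matrix V (the paper's exact construction may differ): the component of each attention row lying in that null space is annihilated by V, so only the projection onto the column space of V affects the output.

```python
import numpy as np

def effective_attention(A, V, tol=1e-10):
    # Decompose A = A_eff + A_null, where A_null @ V == 0 (it lives in the
    # left null space of V); only A_eff contributes to the output A @ V.
    U, s, _ = np.linalg.svd(V, full_matrices=False)
    U_r = U[:, s > tol]        # orthonormal basis of col(V)
    return A @ (U_r @ U_r.T)   # project rows of A onto col(V)

# Sanity check: the model output A @ V is unchanged.
rng = np.random.default_rng(0)
A = rng.random((5, 5)); A /= A.sum(axis=1, keepdims=True)  # row-stochastic
V = rng.standard_normal((5, 3))
assert np.allclose(A @ V, effective_attention(A, V) @ V)
```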
Explaining NLP Models via Minimal Contrastive Editing (MiCE)
Humans give contrastive explanations that explain why an observed event happened rather than some other counterfactual event (the contrast case). Despite the important role that contrastivity plays…
Explaining Relationships Between Scientific Documents
We address the task of explaining relationships between two scientific documents using natural language text. This task requires modeling the complex content of long technical documents, deducing a…
PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World
We propose PIGLeT: a model that learns physical commonsense knowledge through interaction, and then uses this knowledge to ground language. We factorize PIGLeT into a physical dynamics model, and a…
Promoting Graph Awareness in Linearized Graph-to-Text Generation
Generating text from structured inputs, such as meaning representations or RDF triples, has often involved the use of specialized graph-encoding neural networks. However, recent applications of…
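To make "linearized graph-to-text" concrete, here is a hypothetical linearization of RDF triples into a flat token sequence that an off-the-shelf seq2seq model can consume; the `<S>`/`<P>`/`<O>` marker tokens are illustrative, not the scheme used in the paper.

```python
# Flatten (subject, predicate, object) triples into one marker-delimited string.
def linearize(triples):
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)

triples = [("Alan_Turing", "birthPlace", "London"),
           ("Alan_Turing", "field", "Computer_Science")]
print(linearize(triples))
# <S> Alan_Turing <P> birthPlace <O> London <S> Alan_Turing <P> field <O> Computer_Science
```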
Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets
With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, web-mined text datasets covering…
Shortformer: Better Language Modeling using Shorter Inputs
We explore the benefits of decreasing the input length of transformers. First, we show that initially training the model on short subsequences, before moving on to longer ones, both reduces overall…
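A minimal sketch of the staged-length idea described here, i.e. training on short subsequences before moving to longer ones; the stage lengths and epoch counts are placeholders, not the paper's schedule.

```python
# Two-stage length curriculum: yield short training chunks first, then long ones.
def length_curriculum(corpus_ids, stages=((128, 3), (512, 7))):
    for seq_len, epochs in stages:
        # Chop the token stream into contiguous subsequences of seq_len.
        chunks = [corpus_ids[i:i + seq_len]
                  for i in range(0, len(corpus_ids) - seq_len + 1, seq_len)]
        for _ in range(epochs):
            for chunk in chunks:
                yield seq_len, chunk

# Usage: feed each chunk to the language model's training step.
for seq_len, batch in length_curriculum(list(range(10_000))):
    pass  # train_step(model, batch) would go here
```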