Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
The Right Tool for the Job: Matching Model and Instance Complexities
As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs. To better respect a given inference budget, we propose…
Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness?
With the growing popularity of deep-learning-based NLP models comes a need for interpretable systems. But what is interpretability, and what constitutes a high-quality interpretation? In this…
Unsupervised Domain Clusters in Pretrained Language Models
The notion of "in-domain data" in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style or level of formality. In addition, domain…
Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering
Answering questions that involve multi-step reasoning requires decomposing them and using the answers of intermediate steps to reach the final answer. However, state-of-the-art models in grounded…
Procedural Reading Comprehension with Attribute-Aware Context Flow
Procedural texts often describe processes (e.g., photosynthesis and cooking) that happen over entities (e.g., light, food). In this paper, we introduce an algorithm for procedural reading…
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
We present ALFRED (Action Learning From Realistic Environments and Directives), a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions…
Butterfly Transform: An Efficient FFT Based Neural Architecture Design
In this paper, we introduce the Butterfly Transform (BFT), a lightweight channel fusion method that reduces the computational complexity of point-wise convolutions from O(n^2) of conventional…
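To illustrate the kind of saving the abstract refers to, here is a minimal sketch (not the paper's implementation): a conventional point-wise convolution mixes n channels with a dense n×n matrix, costing O(n^2) multiply-adds per spatial location, while a butterfly-structured fusion uses log2(n) stages of pairwise mixing, costing O(n log n). The function names are hypothetical, and for brevity each stage shares one 2×2 mix across all channel pairs, whereas the actual BFT learns separate weights.

```python
import numpy as np

def dense_pointwise(x, W):
    # Conventional 1x1 (point-wise) convolution at one spatial location:
    # a dense n x n channel-mixing matrix, i.e. O(n^2) multiply-adds.
    return W @ x

def butterfly_fusion(x, stage_weights):
    # Butterfly-style channel fusion (illustrative sketch only):
    # log2(n) stages, each mixing channels in pairs whose distance
    # doubles every stage, for O(n log n) multiply-adds in total.
    n = x.shape[0]
    out = x.copy()
    stride = 1
    for (a, b, c, d) in stage_weights:       # one shared 2x2 mix per stage (simplification)
        nxt = np.empty_like(out)
        for i in range(n):
            j = i ^ stride                   # butterfly partner index
            lo, hi = (i, j) if i < j else (j, i)
            if i < j:
                nxt[i] = a * out[lo] + b * out[hi]
            else:
                nxt[i] = c * out[lo] + d * out[hi]
        out = nxt
        stride *= 2
    return out

n = 8
x = np.random.randn(n)
W = np.random.randn(n, n)
stages = [tuple(np.random.randn(4)) for _ in range(int(np.log2(n)))]
print(dense_pointwise(x, W).shape, butterfly_fusion(x, stages).shape)
```

After log2(n) stages every output channel depends on every input channel, which is what lets the butterfly structure stand in for the dense mixing matrix at lower cost.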
RoboTHOR: An Open Simulation-to-Real Embodied AI Platform
Visual recognition ecosystems (e.g. ImageNet, Pascal, COCO) have undeniably played a prevailing role in the evolution of modern computer vision. We argue that interactive and embodied visual AI has…
Use the Force, Luke! Learning to Predict Physical Forces by Simulating Effects
When we humans look at a video of human-object interaction, we can not only infer what is happening but also extract actionable information and imitate those interactions. On the other hand,…
Visual Reaction: Learning to Play Catch with Your Drone
In this paper, we address the problem of visual reaction: the task of interacting with dynamic environments where the changes in the environment are not necessarily caused by the agent itself.…