Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs
Social intelligence and Theory of Mind (ToM), i.e., the ability to reason about the different mental states, intents, and reactions of all people involved, allow humans to effectively navigate and…
The Abduction of Sherlock Holmes: A Dataset for Visual Abductive Reasoning
Humans have a remarkable capacity to reason abductively and hypothesize about what lies beyond the literal content of an image. By identifying concrete visual clues scattered throughout a scene, we…
A Dataset of Alt Texts from HCI Publications
Figures in scientific publications contain important information and results, and alt text is needed for blind and low-vision readers to engage with their content. We conduct a study to characterize…
NeuroCounterfactuals: Beyond Minimal-Edit Counterfactuals for Richer Data Augmentation
While counterfactual data augmentation offers a promising step towards robust generalization in natural language processing, producing a set of counterfactuals that offer valuable inductive bias for…
Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering
Knowledge underpins reasoning. Recent research demonstrates that when relevant knowledge is provided as additional context to commonsense question answering (QA), it can substantially enhance the…
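As a rough illustration of the knowledge-as-context setup described above, the sketch below prepends a knowledge statement to a commonsense question before it is handed to an answer model. The prompt format and example strings are hypothetical and are not Rainier's actual interface; the point is only to show how the introspected knowledge changes the model's input.

```python
# Minimal sketch of knowledge-augmented commonsense QA.
# Hypothetical prompt format; not the paper's actual pipeline.
from typing import List, Optional


def build_prompt(question: str, choices: List[str], knowledge: Optional[str] = None) -> str:
    """Format a multiple-choice question, optionally prefixed with a knowledge statement."""
    lines = []
    if knowledge:
        lines.append(f"Knowledge: {knowledge}")
    lines.append(f"Question: {question}")
    for i, choice in enumerate(choices):
        lines.append(f"({chr(ord('A') + i)}) {choice}")
    lines.append("Answer:")
    return "\n".join(lines)


if __name__ == "__main__":
    question = "Where would you put a mug after washing it?"
    choices = ["cupboard", "refrigerator", "garden"]
    knowledge = "Clean dishes are usually stored in a cupboard."

    # Without knowledge, the QA model sees only the question and choices;
    # with knowledge, the extra statement is provided as additional context.
    print(build_prompt(question, choices))
    print()
    print(build_prompt(question, choices, knowledge))
```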
Towards Disturbance-Free Visual Mobile Manipulation
Deep reinforcement learning has shown promising results on an abundance of robotic tasks in simulation, including visual navigation and manipulation. Prior work generally aims to build embodied…
Benchmarking Progress to Infant-Level Physical Reasoning in AI
To what extent do modern AI systems comprehend the physical world? We introduce the open-access Infant-Level Physical Reasoning Benchmark (InfLevel) to gain insight into this question. We evaluate…
Transparency Helps Reveal When Language Models Learn Meaning
Many current NLP systems are built from language models trained to optimize unsupervised objectives on large amounts of raw text. Under what conditions might such a procedure acquire meaning? Our…
REV: Information-Theoretic Evaluation of Free-Text Rationales
…Future work might explore evaluation that penalizes rationales which support incorrect predictions, thus bridging predictive performance with interpretability metrics.
Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization
We tackle the problem of aligning pre-trained large language models (LMs) with human preferences. If we view text generation as a sequential decision-making problem, reinforcement learning (RL)…
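To make the sequential-decision framing above concrete, here is a toy sketch in which states are partial token sequences, actions are next tokens, and a scalar reward arrives when generation ends. The environment, vocabulary, and reward function are invented for illustration and are not the paper's NLPO algorithm or benchmark code.

```python
import random
from typing import Callable, List, Tuple

# Toy text-generation "environment": state = tokens so far, action = next token,
# reward = scalar score for the finished sequence. Purely illustrative.

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]
MAX_LEN = 6


def reward(tokens: List[str]) -> float:
    """Hypothetical preference signal: reward sequences that mention 'cat'."""
    return 1.0 if "cat" in tokens else 0.0


def rollout(policy: Callable[[List[str]], str]) -> Tuple[List[str], float]:
    """Sample one episode: pick tokens until <eos> or the length limit."""
    state: List[str] = []
    while len(state) < MAX_LEN:
        action = policy(state)  # next-token choice given the current prefix
        if action == "<eos>":
            break
        state.append(action)
    return state, reward(state)


if __name__ == "__main__":
    random_policy = lambda state: random.choice(VOCAB)
    tokens, r = rollout(random_policy)
    print(tokens, r)
```

A policy-gradient method would then adjust the policy to increase the probability of rollouts that receive high reward, which is the sense in which generation becomes a sequential decision-making problem.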