Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Beyond Summarization: Designing AI Support for Real-World Expository Writing Tasks
Large language models have introduced exciting new opportunities and challenges in designing and developing AI-assisted writing support tools. Recent work has shown that leveraging this new…
When Learning Is Out of Reach, Reset: Generalization in Autonomous Visuomotor Reinforcement Learning
Episodic training, where an agent's environment is reset to some initial condition after every success or failure, is the de facto standard when training embodied reinforcement learning (RL) agents.…
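The episodic setup this abstract contrasts against can be made concrete with a short sketch. Below is a minimal loop using the Gymnasium API; the environment name, step budget, and random policy are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of episodic training: the environment is reset to an
# initial condition after every success or failure. A reset-free
# (autonomous) agent, by contrast, must keep acting from wherever it ends up.
import gymnasium as gym

env = gym.make("CartPole-v1")  # illustrative environment, not from the paper
total_steps = 10_000

obs, info = env.reset()  # start from an initial condition
for _ in range(total_steps):
    action = env.action_space.sample()  # placeholder for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        # Episodic training: reset after every success or failure.
        obs, info = env.reset()
env.close()
```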
Queer In AI: A Case Study in Community-Led Participatory AI
We present Queer in AI as a case study for community-led participatory design in AI. We examine how participatory design and intersectional tenets started and shaped this community's programs over…
Scim: Intelligent Faceted Highlights for Interactive, Multi-Pass Skimming of Scientific Papers
Researchers are expected to keep up with an immense literature, yet often find it prohibitively time-consuming to do so. This paper explores how intelligent agents can help scaffold in-situ…
The Semantic Reader Project: Augmenting Scholarly Documents through AI-Powered Interactive Reading Interfaces
Scholarly publications are key to the transfer of knowledge from scholars to others. However, research papers are information-dense, and as the volume of the scientific literature grows, the need…
CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos
Visual information is central to conversation: body gestures and facial expressions, for example, contribute to meaning that transcends words alone. To date, however, most neural conversational…
Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models
The escalating debate on AI’s capabilities warrants developing reliable metrics to assess machine “intelligence.” Recently, many anecdotal examples were used to suggest that newer Large Language…
Comparing Sentence-Level Suggestions to Message-Level Suggestions in AI-Mediated Communication
Traditionally, writing assistance systems have focused on short or even single-word suggestions. Recently, large language models like GPT-3 have made it possible to generate significantly longer…
The Parallelism Tradeoff: Limitations of Log-Precision Transformers
Despite their omnipresence in modern NLP, characterizing the computational power of transformer neural nets remains an interesting open question. We prove that transformers whose arithmetic…
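For readers skimming the list: the truncated sentence leads into the paper's headline theorem. Stated from my recollection of the published version (a paraphrase, not a quotation), it places log-precision transformers inside constant-depth threshold circuits:

```latex
% Headline result, paraphrased from the published paper: a transformer whose
% arithmetic uses precision logarithmic in the input length n can be
% simulated by a uniform constant-depth threshold circuit family, i.e.
\[
  \text{log-precision transformers} \subseteq \mathsf{TC}^0 .
\]
```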
AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models
Pretrained language models (PLMs) are trained on massive corpora, but often need to specialize to specific domains. A parameter-efficient adaptation method suggests training an adapter for each…
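The weight averaging named in the title can be illustrated with a short sketch: combine several domain adapters into one set of parameters by averaging. The tensor names, shapes, and uniform weighting below are illustrative assumptions, not the paper's exact recipe for selecting or weighting adapters.

```python
# A minimal sketch of weight averaging across adapters. Each adapter is
# represented as a plain dict of same-shaped tensors; the averaged dict can
# then be loaded in place of any single adapter.
import torch

def average_adapters(adapters: list[dict[str, torch.Tensor]]) -> dict[str, torch.Tensor]:
    """Uniformly average the parameters of several same-shaped adapters."""
    return {
        name: torch.stack([a[name] for a in adapters]).mean(dim=0)
        for name in adapters[0]
    }

# Toy usage: three hypothetical "domain adapters" with identical shapes.
adapters = [
    {"down.weight": torch.randn(16, 768), "up.weight": torch.randn(768, 16)}
    for _ in range(3)
]
soup = average_adapters(adapters)
print({name: tuple(t.shape) for name, t in soup.items()})
```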