Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Complexity-Based Prompting for Multi-Step Reasoning
We study the task of prompting large-scale language models to perform multi-step reasoning. Existing work shows that when prompted with a chain of thoughts (CoT), sequences of short sentences…
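The selection idea in this paper can be sketched briefly: prefer few-shot exemplars whose chains of thought have more reasoning steps, and at inference majority-vote over the most complex sampled chains. Below is a minimal sketch under those assumptions; the step-counting heuristic (newline-separated steps) and the prompt format are illustrative, and any LLM call is left out.

```python
# A minimal sketch of complexity-based prompting, assuming reasoning steps in
# a chain of thought are newline-separated. Prompt format is an assumption.
from collections import Counter

def complexity(example: dict) -> int:
    """Count reasoning steps in an annotated chain of thought."""
    return len(example["chain_of_thought"].splitlines())

def build_prompt(pool: list[dict], question: str, k: int = 3) -> str:
    """Prefer the most complex (multi-step) exemplars as demonstrations."""
    exemplars = sorted(pool, key=complexity, reverse=True)[:k]
    demos = "\n\n".join(
        f"Q: {e['question']}\nA: {e['chain_of_thought']}\nThe answer is {e['answer']}."
        for e in exemplars
    )
    return f"{demos}\n\nQ: {question}\nA:"

def vote_over_complex_chains(samples: list[tuple[str, str]], top: int = 5) -> str:
    """Complexity-based consistency: majority-vote the answers of the most
    complex sampled reasoning chains; samples are (chain, answer) pairs."""
    longest = sorted(samples, key=lambda s: len(s[0].splitlines()), reverse=True)[:top]
    return Counter(answer for _, answer in longest).most_common(1)[0][0]
```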
Decomposed Prompting: A Modular Approach for Solving Complex Tasks
Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual…
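The decomposition idea lends itself to a short sketch: a complex task is split into sub-tasks, each handled by its own prompt-based handler. In the paper the decomposer is itself a prompted LLM; the fixed plan and the plain-Python handlers below are stand-ins that keep the sketch self-contained and runnable.

```python
# A minimal sketch of decomposed prompting for a toy task (concatenate the
# first letters of each word). Each handler plays the role of a sub-task prompt.
from typing import Callable

def split_handler(state: dict) -> dict:
    state["words"] = state["text"].split()
    return state

def first_letters_handler(state: dict) -> dict:
    state["letters"] = [w[0] for w in state["words"]]
    return state

def concat_handler(state: dict) -> dict:
    state["answer"] = "".join(state["letters"])
    return state

# The decomposer's plan: in the paper this sequence of sub-task calls is
# generated by an LLM; a static list keeps the sketch runnable.
PLAN: list[Callable[[dict], dict]] = [split_handler, first_letters_handler, concat_handler]

def solve(text: str) -> str:
    state = {"text": text}
    for handler in PLAN:
        state = handler(state)
    return state["answer"]

print(solve("decomposed prompting works"))  # -> "dpw"
```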
Editing Models with Task Arithmetic
Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine…
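The core mechanism here is simple enough to show directly: a task vector is the element-wise difference between fine-tuned and pre-trained weights, and edits are made by adding or negating scaled task vectors. The sketch below assumes weights stored as name-keyed dicts (as in a PyTorch `state_dict()`); plain floats keep it self-contained.

```python
# A minimal sketch of task arithmetic on model weights.

def task_vector(pretrained: dict, finetuned: dict) -> dict:
    """A task vector: fine-tuned minus pre-trained, element-wise."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vectors(pretrained: dict, vectors: list[dict], scale: float = 1.0) -> dict:
    """Add (scale > 0) or negate (scale < 0) task vectors to edit behavior."""
    edited = dict(pretrained)
    for vec in vectors:
        for k, delta in vec.items():
            edited[k] = edited[k] + scale * delta
    return edited

theta_pre = {"w": 1.0}
theta_task = {"w": 1.5}
tau = task_vector(theta_pre, theta_task)
forgetting = apply_task_vectors(theta_pre, [tau], scale=-1.0)  # negate to unlearn
print(forgetting)  # {'w': 0.5}
```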
InSCIt: Information-Seeking Conversations with Mixed-Initiative Interactions
In an information-seeking conversation, a user may ask questions that are under-specified or unanswerable. An ideal agent would interact by initiating different response types according to the…
Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization
We tackle the problem of aligning pre-trained large language models (LMs) with human preferences. If we view text generation as a sequential decision-making problem, reinforcement learning (RL)…
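The sequential-decision framing can be illustrated with a toy example: each emitted token is an action, and a terminal reward scores the whole sequence, as in preference-based alignment. The sketch below uses a vanilla REINFORCE update on a tiny tabular policy; it is an illustration of the framing, not the paper's NLPO algorithm, and the reward function is a stand-in for a learned preference model.

```python
# Text generation as an MDP: tokens are actions, reward arrives at episode end.
import math, random

VOCAB = ["good", "bad", "<eos>"]
logits = {tok: 0.0 for tok in VOCAB}  # toy state-independent policy

def sample() -> str:
    z = sum(math.exp(v) for v in logits.values())
    r, acc = random.random(), 0.0
    for tok, v in logits.items():
        acc += math.exp(v) / z
        if r < acc:
            return tok
    return VOCAB[-1]

def reward(tokens: list[str]) -> float:
    """Stand-in for a learned human-preference model scoring the sequence."""
    return 1.0 if "good" in tokens else 0.0

for _ in range(500):
    tokens = [sample() for _ in range(3)]         # episode: each token is an action
    R = reward(tokens)                            # terminal reward
    z = sum(math.exp(v) for v in logits.values())
    probs = {t: math.exp(logits[t]) / z for t in VOCAB}
    grads = {t: 0.0 for t in VOCAB}
    for tok in tokens:                            # REINFORCE: R * grad log pi(a_t)
        for t in VOCAB:
            grads[t] += (1.0 if t == tok else 0.0) - probs[t]
    for t in VOCAB:
        logits[t] += 0.1 * R * grads[t]

print(max(logits, key=logits.get))  # the policy learns to prefer "good"
```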
LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization
While human evaluation remains best practice for accurately judging the faithfulness of automatically-generated summaries, few solutions exist to address the increased difficulty and workload when…
Moving Forward by Moving Backward: Embedding Action Impact over Action Semantics
A common assumption when training embodied agents is that the impact of taking an action is stable; for instance, executing the "move ahead" action will always move the agent forward by a fixed…
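One way to make this concrete is to represent an action by the state change it actually produces rather than by its semantic label. The sketch below maintains a per-action embedding as a running mean of observed state deltas; the environment and the running-average scheme are illustrative assumptions, not the paper's model.

```python
# A minimal sketch: embed actions by their observed impact, not their labels.
import numpy as np

class ImpactEmbedder:
    """Per-action embedding: running mean of observed state deltas."""
    def __init__(self, n_actions: int, state_dim: int):
        self.embed = np.zeros((n_actions, state_dim))
        self.count = np.zeros(n_actions)

    def update(self, action: int, state: np.ndarray, next_state: np.ndarray):
        delta = next_state - state
        self.count[action] += 1
        self.embed[action] += (delta - self.embed[action]) / self.count[action]

# Two robots whose "move ahead" covers different distances end up with
# different embeddings for the same semantic label.
emb = ImpactEmbedder(n_actions=1, state_dim=1)
for _ in range(10):
    s = np.zeros(1)
    emb.update(0, s, s + 0.25)  # this robot moves 0.25m per step, not a fixed 1m
print(emb.embed[0])  # ~[0.25]
```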
Selective Annotation Makes Language Models Better Few-Shot Learners
Many recent approaches to natural language tasks are built on the remarkable abilities of large language models. Large language models can perform in-context learning, where they learn a new task…
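The selection step can be sketched as a diversity-aware greedy pick: choose examples to annotate so that each new pick covers pool points far from those already chosen. This farthest-point heuristic is a simplified stand-in for the paper's graph-based vote-k method, and the random embeddings below are placeholders for a sentence encoder's outputs.

```python
# A minimal sketch of diversity-aware selective annotation.
import numpy as np

def select_to_annotate(embeddings: np.ndarray, budget: int) -> list[int]:
    chosen = [0]  # seed with an arbitrary point
    while len(chosen) < budget:
        # Distance of every pool point to its nearest already-chosen point.
        dists = np.min(
            np.linalg.norm(embeddings[:, None] - embeddings[chosen][None], axis=-1),
            axis=1,
        )
        chosen.append(int(np.argmax(dists)))  # farthest point: maximize coverage
    return chosen

rng = np.random.default_rng(0)
pool = rng.normal(size=(200, 16))  # placeholder embeddings of unlabeled examples
print(select_to_annotate(pool, budget=8))
```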
Old dog, new trick: Reservoir computing advances machine learning for climate modeling
Physics-informed machine learning (ML) applied to geophysical simulation is developing explosively. Recently, graph neural net and vision transformer architectures have shown 1-7 day global weather…
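Reservoir computing follows a standard recipe worth showing: drive a fixed random recurrent reservoir with the input signal and train only a linear (ridge) readout on the collected states. The sketch below forecasts a toy sine wave one step ahead; the hyperparameters are illustrative, not the paper's climate configuration.

```python
# A minimal echo state network: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
N, T = 300, 1000                               # reservoir size, training steps
u = np.sin(np.linspace(0, 40 * np.pi, T + 1))  # toy signal; predict next value

W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1

# Drive the reservoir and collect its states.
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train only the readout (ridge regression); the reservoir stays fixed.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ u[1:])

pred = states @ W_out
print("train MSE:", float(np.mean((pred - u[1:]) ** 2)))
```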
S2abEL: A Dataset for Entity Linking from Scientific Tables
Entity linking (EL) is the task of linking a textual mention to its corresponding entry in a knowledge base, and is critical for many knowledge-intensive NLP applications. When applied to tables in…
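The task itself reduces to a candidate-generation-plus-ranking loop, which a few lines can illustrate: score knowledge-base entries against a mention and return the best matches. The tiny in-memory KB and the character-overlap scorer below are illustrative stand-ins for the real retrieval and learned rankers used on table cells.

```python
# A minimal entity-linking sketch: score KB entries against a mention.
from difflib import SequenceMatcher

KB = {
    "Q1": "BERT: Pre-training of Deep Bidirectional Transformers",
    "Q2": "GPT-3: Language Models are Few-Shot Learners",
    "Q3": "ResNet: Deep Residual Learning for Image Recognition",
}

def link(mention: str, top_k: int = 1) -> list[tuple[str, float]]:
    """Rank every KB entry by string similarity to the mention."""
    scored = [
        (entity_id, SequenceMatcher(None, mention.lower(), name.lower()).ratio())
        for entity_id, name in KB.items()
    ]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# A table cell like "BERT-base" should resolve to the BERT entry.
print(link("BERT-base"))  # -> [('Q1', ...)]
```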