Research - Papers
Explore a selection of our published work on key research challenges in AI.
AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents
Autonomous agents that address day-to-day digital tasks (e.g., ordering groceries for a household) must not only operate multiple apps (e.g., notes, messaging, shopping apps) via APIs, but also…
Can Language Models Serve as Text-Based World Simulators?
Virtual environments play a key role in benchmarking advances in complex planning and decision-making tasks but are expensive and complicated to build by hand. Can current language models themselves…
Few-shot Dialogue Strategy Learning for Motivational Interviewing via Inductive Reasoning
We consider the task of building a dialogue system that can motivate users to adopt positive lifestyle changes: Motivational Interviewing. Addressing such a task requires a system that can infer…
Data Contamination Report from the 2024 CONDA Shared Task
The 1st Workshop on Data Contamination (CONDA 2024) focuses on all relevant aspects of data contamination in natural language processing, in which data contamination is understood as situations where…
Skill Set Optimization: Reinforcing Language Model Behavior via Transferable Skills
Large language models (LLMs) have recently been used for sequential decision making in interactive environments. However, leveraging environment reward signals for continual LLM actor improvement is…
Data-driven Discovery with Large Generative Models
With the accumulation of data at an unprecedented rate, its potential to fuel scientific discovery is growing exponentially. This position paper urges the Machine Learning (ML) community to exploit…
Tell, Don't Show!: Language Guidance Eases Transfer Across Domains in Images and Videos
We introduce LaGTran, a novel framework that utilizes text supervision to guide robust transfer of discriminative knowledge from labeled source to unlabeled target data with domain gaps. While…
Answer, Assemble, Ace: Understanding How Transformers Answer Multiple Choice Questions
Multiple-choice question answering (MCQA) is a key competence of performant transformer language models that is tested by mainstream benchmarks. However, recent evidence shows that models can have…
DiscoveryBench: Towards Data-Driven Discovery with Large Language Models
Can the rapid advances in code generation, function calling, and data analysis using large language models (LLMs) help automate the search and verification of hypotheses purely from a set of…
Climate sensitivity and relative humidity changes in global storm-resolving model simulations of climate change
The climate simulation frontier of a global storm-resolving model (GSRM; or k-scale model because of its kilometer-scale horizontal resolution) is deployed for climate change simulations. The…