Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
From Centralized to Ad-Hoc Knowledge Base Construction for Hypotheses Generation
Objective: To demonstrate and develop an approach enabling individual researchers or small teams to create their own ad-hoc, lightweight knowledge bases tailored for specialized scientific interests,…
Machine-Learned Climate Model Corrections From a Global Storm-Resolving Model: Performance Across the Annual Cycle
One approach to improving the accuracy of a coarse-grid global climate model is to add machine-learned (ML) state-dependent corrections to the prognosed model tendencies, such that the climate model…
TESS: Text-to-Text Self-Conditioned Simplex Diffusion
Diffusion models have emerged as a powerful paradigm for generation, obtaining strong performance in various domains with continuous-valued inputs. Despite the promises of fully non-autoregressive…
Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales
Among the remarkable emergent capabilities of large language models (LMs) is free-text rationalization; beyond a certain scale, large LMs are capable of generating seemingly useful rationalizations,…
Embedding Recycling for Language Models
Training and inference with large neural models is expensive. However, for many application domains, while new tasks and models arise frequently, the underlying documents being modeled remain…
ArK: Augmented Reality with Knowledge Interactive Emergent Ability
Despite the growing adoption of mixed reality and interactive AI agents, it remains challenging for these systems to generate high-quality 2D/3D scenes in unseen environments. The common practice…
Binding Language Models in Symbolic Languages
Though end-to-end neural approaches have recently been dominating NLP tasks in both performance and ease-of-use, they lack interpretability and robustness. We propose Binder, a training-free…
Can AI language models replace human participants?
Recent work suggests that language models such as GPT can make human-like judgments across a number of domains. We explore whether and when language models might replace human participants in…
Complexity-Based Prompting for Multi-Step Reasoning
We study the task of prompting large-scale language models to perform multi-step reasoning. Existing work shows that when prompted with a chain of thoughts (CoT), sequences of short sentences…
Decomposed Prompting: A Modular Approach for Solving Complex Tasks
Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual…