Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Riveter: Measuring Power and Social Dynamics Between Entities
Riveter provides a complete, easy-to-use pipeline for analyzing verb connotations associated with entities in text corpora. We prepopulate the package with connotation frames of sentiment, power, and…
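As a rough illustration of the pipeline described above, here is a minimal usage sketch in Python. The method names (`load_sf_lexicon`, `train`, `get_score_totals`) follow our reading of the package's public examples and should be treated as assumptions, not a definitive API reference.

```python
# Minimal sketch of a Riveter-style pipeline; method names are assumed
# from the package's public examples and may differ from the actual API.
from riveter import Riveter

texts = [
    "The manager dismissed the assistant.",
    "The assistant thanked the manager.",
]
text_ids = [0, 1]

riveter = Riveter()
riveter.load_sf_lexicon("power")   # load the power connotation frame lexicon
riveter.train(texts, text_ids)     # extract entities and score their verbs

# Aggregate power scores per entity: positive values suggest entities
# that the verbs portray as holding power over others.
print(riveter.get_score_totals())
```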
RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs
Despite their unprecedented success, even the largest language models make mistakes. Similar to how humans learn and improve using feedback, previous work proposed providing language models with…
Self-Instruct: Aligning Language Models with Self-Generated Instructions
Large “instruction-tuned” language models (i.e., finetuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily…
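The bootstrapping loop at the heart of Self-Instruct can be sketched in a few lines: sample existing tasks as in-context demonstrations, ask the model for a new instruction, and filter near-duplicates before adding it to the pool. In the sketch below, `generate` is a hypothetical stand-in for a language-model call, and the novelty filter is simplified (the paper filters using ROUGE-L overlap against the pool).

```python
import random

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a pretrained language-model call;
    # a real pipeline would query an LM here.
    return "Translate the following sentence into French."

# A handful of human-written seed instructions bootstraps the process.
task_pool = [
    "Write a short poem about autumn.",
    "Summarize the following paragraph in one sentence.",
    "List three uses for a paperclip.",
]

def self_instruct_step(pool):
    # Sample existing tasks as in-context demonstrations.
    demos = random.sample(pool, k=min(3, len(pool)))
    prompt = "Come up with a new task:\n" + "".join(f"Task: {t}\n" for t in demos) + "Task:"
    new_task = generate(prompt).strip()
    # Crude novelty filter; the paper uses ROUGE-L overlap instead.
    if new_task and new_task not in pool:
        pool.append(new_task)
    return pool

for _ in range(5):
    task_pool = self_instruct_step(task_pool)
print(task_pool)
```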
Stubborn Lexical Bias in Data and Models
In NLP, recent work has seen increased focus on spurious correlations between various features and labels in training data, and how these influence model behavior. However, the presence and effect…
Task-aware Retrieval with Instructions
We study the problem of retrieval with instructions, where users of a retrieval system explicitly describe their intent along with their queries. We aim to develop a general-purpose task-aware…
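The input format studied here is easy to picture: the user's intent is stated as an instruction and encoded together with the query, so the same query can target different document types depending on the stated task. The sketch below illustrates that format with an off-the-shelf sentence-transformers encoder; the model name is an arbitrary example, not the task-aware retriever trained in the paper.

```python
from sentence_transformers import SentenceTransformer, util

# Any off-the-shelf dense encoder stands in for a task-aware retriever
# here; "all-MiniLM-L6-v2" is just an example choice.
model = SentenceTransformer("all-MiniLM-L6-v2")

instruction = "Retrieve a scientific paper abstract that answers this question:"
query = "What causes the aurora borealis?"

docs = [
    "Auroras are produced when charged particles from the solar wind "
    "interact with Earth's magnetosphere and upper atmosphere.",
    "A recipe for sourdough bread requires flour, water, and salt.",
]

# Core idea of retrieval with instructions: prepend the stated intent
# to the query before encoding it.
query_emb = model.encode(f"{instruction} {query}")
doc_embs = model.encode(docs)

print(util.cos_sim(query_emb, doc_embs))
```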
When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories
Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the difficulty of encoding a wealth of world…
Words as Gatekeepers: Measuring Discipline-specific Terms and Meanings in Scholarly Publications
Scholarly text is often laden with jargon, or specialized language that can facilitate efficient in-group communication within fields but hinder understanding for out-groups. In this work, we…
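One simple way to operationalize "discipline-specific" is to compare how often a term appears in one field's papers versus scholarly text at large. The smoothed frequency ratio below is our own illustration of that idea, not the scoring method from the paper.

```python
from collections import Counter

# Toy corpora: token lists for one field and for scholarly text at large.
field_tokens = "logits softmax attention head embedding gradient model model".split()
general_tokens = "the model was evaluated on the data and the results were strong".split()

field_counts, general_counts = Counter(field_tokens), Counter(general_tokens)
field_total, general_total = len(field_tokens), len(general_tokens)

def specificity(term, alpha=0.5):
    # Smoothed ratio of a term's in-field rate to its general rate;
    # large values flag candidate jargon. Illustrative metric only,
    # not the one used in the paper.
    p_field = (field_counts[term] + alpha) / (field_total + alpha)
    p_general = (general_counts[term] + alpha) / (general_total + alpha)
    return p_field / p_general

for term in ["softmax", "model", "results"]:
    print(term, round(specificity(term), 2))
```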
Global Precipitation Correction Across a Range of Climates Using CycleGAN
Accurate precipitation simulations for various climate scenarios are critical for understanding and predicting the impacts of climate change. This study employs a Cycle-generative adversarial…
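At the core of any CycleGAN-style corrector is a cycle-consistency loss: mapping a simulated field to the observed domain and back should reconstruct the input. The PyTorch sketch below shows only that loss, with toy linear "generators"; the study's actual networks and gridded climate fields are, of course, far more involved.

```python
import torch
import torch.nn as nn

# Toy generators standing in for the two translation networks.
G_ab = nn.Linear(8, 8)  # simulated fields -> observation-like fields
G_ba = nn.Linear(8, 8)  # observation-like fields -> simulated fields

x = torch.randn(4, 8)   # batch of toy simulated precipitation fields
y = torch.randn(4, 8)   # batch of toy observed precipitation fields

l1 = nn.L1Loss()
# Cycle-consistency loss: translating to the other domain and back
# should reconstruct the original field in both directions.
cycle_loss = l1(G_ba(G_ab(x)), x) + l1(G_ab(G_ba(y)), y)
cycle_loss.backward()
print(cycle_loss.item())
```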
I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation
Commonsense capabilities of pre-trained language models dramatically improve with scale, leading many to believe that scale is the only winning recipe. But is it? Here, we investigate an alternative…
Let Me Teach You: Pedagogical Foundations of Feedback for Language Models
Natural Language Feedback (NLF) is an increasingly popular avenue to align Large Language Models (LLMs) to human preferences. Despite the richness and diversity of the information it can convey, NLF…