Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering
Despite thousands of researchers, engineers, and artists actively working on improving text-to-image generation models, systems often fail to produce images that accurately align with the text…
Exploiting Generalization in Offline Reinforcement Learning via Unseen State Augmentations
Offline reinforcement learning (RL) methods strike a balance between exploration and exploitation by conservative value estimation -- penalizing values of unseen states and actions. Model-free…
Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms
Bias evaluation benchmarks and dataset and model documentation have emerged as central processes for assessing the biases and harms of artificial intelligence (AI) systems. However, these auditing…
LEXPLAIN: Improving Model Explanations via Lexicon Supervision
Model explanations, which shed light on how a model arrives at its predictions, are becoming a desired additional output of NLP models. Challenges in creating these explanations include…
When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories
Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the difficulty of encoding a wealth of world…
COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements
Warning: This paper contains content that may be offensive or upsetting. Understanding the harms and offensiveness of statements requires reasoning about the social and situational context in which…
Data-Efficient Finetuning Using Cross-Task Nearest Neighbors
Language models trained on massive prompted multitask datasets like T0 (Sanh et al., 2021) or FLAN (Wei et al., 2021a) can generalize to tasks unseen during training. We show that training on a…
Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts
Text detoxification has the potential to mitigate the harms of toxicity by rephrasing text to remove offensive meaning, but subtle toxicity remains challenging to tackle. We introduce…
DISCO: Distilling Phrasal Counterfactuals with Large Language Models
Recent methods demonstrate that data augmentation using counterfactual knowledge can teach models the causal structure of a task, leading to robust and generalizable models. However, such…
From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models
Dogwhistles are coded expressions that simultaneously convey one meaning to a broad audience and a second one, often hateful or provocative, to a narrow in-group; they are deployed to evade both…