Research Papers
Explore a selection of our published work on a variety of key research challenges in AI.
LEXPLAIN: Improving Model Explanations via Lexicon Supervision
Model explanations that shed light on a model's predictions are becoming a desired additional output of NLP models, alongside the predictions themselves. Challenges in creating these explanations include…
When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories
Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the difficulty of encoding a wealth of world…
Objaverse-XL: A Universe of 10M+ 3D Objects
Natural language processing and 2D vision models have attained remarkable proficiency on many tasks primarily by escalating the scale of training data. However, 3D vision tasks have not seen the…
Data-Efficient Finetuning Using Cross-Task Nearest Neighbors
Language models trained on massive prompted multitask datasets like T0 (Sanh et al., 2021) or FLAN (Wei et al., 2021a) can generalize to tasks unseen during training. We show that training on a…
Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions
Prompting-based large language models (LLMs) are surprisingly powerful at generating natural language reasoning steps or Chains-of-Thoughts (CoT) for multi-step question answering (QA). They…
DISCO: Distilling Phrasal Counterfactuals with Large Language Models
Recent methods demonstrate that data augmentation using counterfactual knowledge can teach models the causal structure of a task, leading to robust and generalizable models. However, such…
Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts
Text detoxification has the potential to mitigate the harms of toxicity by rephrasing text to remove offensive meaning, but subtle toxicity remains challenging to tackle. We introduce…
HINT: Hypernetwork Instruction Tuning for Efficient Few- and Zero-Shot Generalisation
Recent NLP models have shown the remarkable ability to effectively generalise "zero-shot" to new tasks using only natural language instructions as guidance. However, many of these approaches suffer…
From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models
Dogwhistles are coded expressions that simultaneously convey one meaning to a broad audience and a second one, often hateful or provocative, to a narrow in-group; they are deployed to evade both…
Reproducibility in NLP: What Have We Learned from the Checklist?
Scientific progress in NLP rests on the reproducibility of researchers' claims. The *CL conferences created the NLP Reproducibility Checklist in 2020 to be completed by authors at submission to…