Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
HINT: Hypernetwork Instruction Tuning for Efficient Zero-Shot Generalisation
Recent NLP models have shown a remarkable ability to generalise ‘zero-shot’ to new tasks using only an instruction as guidance. However, these approaches usually repeat their instructions with every input,…
Elaboration-Generating Commonsense Question Answering at Scale
In question answering requiring common sense, language models (e.g., GPT-3) have been used to generate text expressing background knowledge that helps improve performance. Yet the cost of working…
FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning
Large pre-trained models are capable of few-shot in-context learning (ICL), i.e., performing a new task by prepending a few demonstrations before the test input. However, the concatenated…
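As a minimal sketch of the concatenation-based ICL setup this abstract describes (the prompt format and the `build_icl_prompt` helper are illustrative, not taken from the paper):

```python
# Illustrative sketch (not the paper's code): standard in-context learning
# builds one long prompt by prepending demonstrations before the test input.
def build_icl_prompt(demonstrations, test_input):
    """demonstrations: list of (input, output) pairs; test_input: str."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    parts.append(f"Input: {test_input}\nOutput:")
    return "\n\n".join(parts)

demos = [("2 + 2", "4"), ("3 + 5", "8")]
print(build_icl_prompt(demos, "7 + 6"))
```

The cost of this concatenation grows with the number of demonstrations, which is the inefficiency the paper's fusion-in-decoder approach targets.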
Risks and NLP Design: A Case Study on Procedural Document QA
As NLP systems are increasingly deployed at scale, concerns about their potential negative impacts have attracted the attention of the research community, yet discussions of risk have mostly been at…
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions: every text input is embedded together with instructions explaining the use case (e.g., task and domain…
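For illustration, a hedged sketch of the instruction-plus-input embedding interface, assuming the publicly released `InstructorEmbedding` package and the `hkunlp/instructor-large` checkpoint (the example instruction and sentence here are made up):

```python
# Sketch of instruction-conditioned embedding; the call pattern is assumed
# from the public InstructorEmbedding release.
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-large")
# Each input is an [instruction, text] pair: the instruction states the
# use case (task and domain), and both are embedded together.
pairs = [
    ["Represent the Medicine sentence for clustering:",
     "Metformin is a first-line therapy for type 2 diabetes."],
]
embeddings = model.encode(pairs)  # one embedding vector per pair
print(embeddings.shape)
```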
NarrowBERT: Accelerating Masked Language Model Pretraining and Inference
Large-scale language model pretraining is a very successful form of self-supervised learning in natural language processing, but it is increasingly expensive to perform as the models and pretraining…
RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs
Despite their unprecedented success, even the largest language models make mistakes. Similar to how humans learn and improve using feedback, previous work proposed providing language models with…
Global Precipitation Correction Across a Range of Climates Using CycleGAN
Accurate precipitation simulations for various climate scenarios are critical for understanding and predicting the impacts of climate change. This study employs a Cycle-generative adversarial…
I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation
Commonsense capabilities of pre-trained language models dramatically improve with scale, leading many to believe that scale is the only winning recipe. But is it? Here, we investigate an alternative…
Let Me Teach You: Pedagogical Foundations of Feedback for Language Models
Natural Language Feedback (NLF) is an increasingly popular avenue to align Large Language Models (LLMs) to human preferences. Despite the richness and diversity of the information it can convey, NLF…