Ai2 Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Self-Instruct: Aligning Language Models with Self-Generated Instructions

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Hannaneh Hajishirzi
2023
ACL

Large “instruction-tuned” language models (i.e., finetuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily… 

Stubborn Lexical Bias in Data and Models

Sofia Serrano, Jesse Dodge, Noah A. Smith
2023
ACL

In NLP, recent work has seen increased focus on spurious correlations between various features and labels in training data, and how these influence model behavior. However, the presence and effect… 

Task-aware Retrieval with Instructions

Akari Asai, Timo Schick, Patrick Lewis, Wen-tau Yih
2023
ACL • Findings

We study the problem of retrieval with instructions, where users of a retrieval system explicitly describe their intent along with their queries. We aim to develop a general-purpose task-aware… 

When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories

Alex Mallen, Akari Asai, Victor Zhong, Hannaneh Hajishirzi
2023
ACL

Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the difficulty of encoding a wealth of world… 

Words as Gatekeepers: Measuring Discipline-specific Terms and Meanings in Scholarly Publications

Li Lucy, Jesse Dodge, David Bamman, Katherine A. Keith
2023
ACL • Findings

Scholarly text is often laden with jargon, or specialized language that can facilitate efficient in-group communication within fields but hinder understanding for out-groups. In this work, we… 

Global Precipitation Correction Across a Range of Climates Using CycleGAN

Jeremy J. McGibbon, Spencer K. Clark, Brian Henn, Christopher S. Bretherton
2023
ESSOAr

Accurate precipitation simulations for various climate scenarios are critical for understanding and predicting the impacts of climate change. This study employs a Cycle-generative adversarial… 

I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation

Chandra Bhagavatula, Jena D. Hwang, Doug Downey, Yejin Choi
2023
ACL

Commonsense capabilities of pre-trained language models dramatically improve with scale, leading many to believe that scale is the only winning recipe. But is it? Here, we investigate an alternative… 

Let Me Teach You: Pedagogical Foundations of Feedback for Language Models

Beatriz Borges, Niket Tandon, Tanja Kaser, Antoine Bosselut
2023
arXiv

Natural Language Feedback (NLF) is an increasingly popular avenue to align Large Language Models (LLMs) to human preferences. Despite the richness and diversity of the information it can convey, NLF… 

Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance

Yao Fu, Litu Ou, Mingyu Chen, Tushar Khot
2023
ICML 2023 • Challenges in Deployable Generative AI Workshop

As large language models (LLMs) are continuously being developed, their evaluation becomes increasingly important yet challenging. This work proposes Chain-of-Thought Hub, an open-source evaluation… 

ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews

Mike D'Arcy, Alexis Ross, Erin Bransom, Doug Downey
2023
arXiv

Revising scientific papers based on peer feedback is a challenging task that requires not only deep scientific knowledge and reasoning, but also the ability to recognize the implicit requests in…