Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore

Sewon Min, Suchin Gururangan, Eric Wallace, Luke Zettlemoyer
2024
ICLR

The legality of training language models (LMs) on copyrighted or otherwise restricted data is under intense debate. However, as we show, model performance significantly degrades if trained only on…

BTR: Binary Token Representations for Efficient Retrieval Augmented Language Models

Qingqing Cao, Sewon Min, Yizhong Wang, Hannaneh Hajishirzi
2024
ICLR

Retrieval augmentation addresses many critical problems in large language models such as hallucination, staleness, and privacy leaks. However, running retrieval-augmented language models (LMs) is…

WildChat: 1M ChatGPT Interaction Logs in the Wild

Wenting Zhao, Xiang Ren, J. Hessel, Yuntian Deng
2024
ICLR

Chatbots such as GPT-4 and ChatGPT are now serving millions of users. Despite their widespread use, there remains a lack of public datasets showcasing how these tools are used by a population of…

Tailoring Self-Rationalizers with Multi-Reward Distillation

Sahana Ramnath, Brihi Joshi, Skyler Hallinan, Xiang Ren
2024
ICLR

Large language models (LMs) are capable of generating free-text rationales to aid question answering. However, prior work 1) suggests that useful self-rationalization is emergent only at significant…

PlaSma: Making Small Language Models Better Procedural Knowledge Models for (Counterfactual) Planning

Faeze Brahman, Chandra Bhagavatula, Valentina Pyatkin, Yejin Choi
2024
ICLR

Procedural planning, which entails decomposing a high-level goal into a sequence of temporally ordered steps, is an important yet intricate task for machines. It involves integrating commonsense…

Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting

Melanie Sclar, Yejin Choi, Yulia Tsvetkov, Alane Suhr
2024
ICLR

As large language models (LLMs) are adopted as a fundamental component of language technologies, it is crucial to accurately characterize their performance. Because choices in prompt design can… 

Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory

Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yejin Choi
2024
ICLR

The interactive use of large language models (LLMs) in AI assistants (at work, home, etc.) introduces a new set of inference-time privacy risks: LLMs are fed different types of information from… 

Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement

Linlu Qiu, Liwei Jiang, Ximing Lu, Xiang Ren
2024
ICLR

The ability to derive underlying principles from a handful of observations and then generalize to novel situations -- known as inductive reasoning -- is central to human intelligence. Prior work… 

The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning

Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Yejin Choi
2024
ICLR

The alignment tuning process of large language models (LLMs) typically involves instruction learning through supervised fine-tuning (SFT) and preference tuning via reinforcement learning from human…