Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


WildChat: 1M ChatGPT Interaction Logs in the Wild

Wenting Zhao, Xiang Ren, J. Hessel, Yuntian Deng
2024
ICLR

Chatbots such as GPT-4 and ChatGPT are now serving millions of users. Despite their widespread use, there remains a lack of public datasets showcasing how these tools are used by a population of… 

Tailoring Self-Rationalizers with Multi-Reward Distillation

Sahana Ramnath, Brihi Joshi, Skyler Hallinan, Xiang Ren
2024
ICLR

Large language models (LMs) are capable of generating free-text rationales to aid question answering. However, prior work 1) suggests that useful self-rationalization is emergent only at significant… 

PlaSma: Making Small Language Models Better Procedural Knowledge Models for (Counterfactual) Planning

Faeze Brahman, Chandra Bhagavatula, Valentina Pyatkin, Yejin Choi
2024
ICLR

Procedural planning, which entails decomposing a high-level goal into a sequence of temporally ordered steps, is an important yet intricate task for machines. It involves integrating common-sense… 

Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting

Melanie Sclar, Yejin Choi, Yulia Tsvetkov, Alane Suhr
2024
ICLR

As large language models (LLMs) are adopted as a fundamental component of language technologies, it is crucial to accurately characterize their performance. Because choices in prompt design can… 

Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory

Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yejin Choi
2024
ICLR

The interactive use of large language models (LLMs) in AI assistants (at work, home, etc.) introduces a new set of inference-time privacy risks: LLMs are fed different types of information from… 

Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement

Linlu Qiu, Liwei Jiang, Ximing Lu, Xiang Ren
2024
ICLR

The ability to derive underlying principles from a handful of observations and then generalize to novel situations -- known as inductive reasoning -- is central to human intelligence. Prior work… 

The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning

Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Yejin Choi
2024
ICLR

The alignment tuning process of large language models (LLMs) typically involves instruction learning through supervised fine-tuning (SFT) and preference tuning via reinforcement learning from human… 

The Generative AI Paradox: "What It Can Create, It May Not Understand"

Peter West, Ximing Lu, Nouha Dziri, Yejin Choi
2024
ICLR

The recent wave of generative AI has sparked unprecedented global attention, with both excitement and concern over potentially superhuman levels of artificial intelligence: models now take only… 

MacGyver: Are Large Language Models Creative Problem Solvers?

Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Faeze Brahman
2024
NAACL

We explore the creative problem-solving capabilities of modern LLMs in a novel constrained setting. To this end, we create MACGYVER, an automatically generated dataset consisting of over 1,600…