Ai2 Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


The Expressive Power of Transformers with Chain of Thought

William Merrill, Ashish Sabharwal
2024
ICLR

Recent theoretical work has identified surprisingly simple reasoning problems, such as checking if two nodes in a graph are connected or simulating finite-state machines, that are provably… 

TRAM: Bridging Trust Regions and Sharpness Aware Minimization

Tom Sherborne, Naomi Saphra, Pradeep Dasigi, Hao Peng
2024
ICLR

By reducing the curvature of the loss surface in the parameter space, sharpness-aware minimization (SAM) yields widespread robustness improvements under domain transfer. Instead of focusing on… 

What's In My Big Data?

Yanai Elazar, Akshita Bhagia, Ian Magnusson, Jesse Dodge
2024
ICLR

Large text corpora are the backbone of language models. However, we have a limited understanding of the content of these corpora, including general statistics, quality, social factors, and inclusion… 

WildChat: 1M ChatGPT Interaction Logs in the Wild

Wenting Zhao, Xiang Ren, J. Hessel, Yuntian Deng
2024
ICLR

Chatbots such as GPT-4 and ChatGPT are now serving millions of users. Despite their widespread use, there remains a lack of public datasets showcasing how these tools are used by a population of… 

Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory

Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yejin Choi
2024
ICLR

The interactive use of large language models (LLMs) in AI assistants (at work, home, etc.) introduces a new set of inference-time privacy risks: LLMs are fed different types of information from… 

Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement

Linlu Qiu, Liwei Jiang, Ximing Lu, Xiang Ren
2024
ICLR

The ability to derive underlying principles from a handful of observations and then generalize to novel situations -- known as inductive reasoning -- is central to human intelligence. Prior work… 

PlaSma: Making Small Language Models Better Procedural Knowledge Models for (Counterfactual) Planning

Faeze Brahman, Chandra Bhagavatula, Valentina Pyatkin, Yejin Choi
2024
ICLR

Procedural planning, which entails decomposing a high-level goal into a sequence of temporally ordered steps, is an important yet intricate task for machines. It involves integrating common-sense… 

Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting

Melanie Sclar, Yejin Choi, Yulia Tsvetkov, Alane Suhr
2024
ICLR

As large language models (LLMs) are adopted as a fundamental component of language technologies, it is crucial to accurately characterize their performance. Because choices in prompt design can… 

Tailoring Self-Rationalizers with Multi-Reward Distillation

Sahana Ramnath, Brihi Joshi, Skyler Hallinan, Xiang Ren
2024
ICLR

Large language models (LMs) are capable of generating free-text rationales to aid question answering. However, prior work (1) suggests that useful self-rationalization is emergent only at significant… 

The Generative AI Paradox: "What It Can Create, It May Not Understand"

Peter West, Ximing Lu, Nouha Dziri, Yejin Choi
2024
ICLR

The recent wave of generative AI has sparked unprecedented global attention, with both excitement and concern over potentially superhuman levels of artificial intelligence: models now take only…