Ai2 Research: Papers

Explore a selection of our published work on a variety of key research challenges in AI.

The Generative AI Paradox: "What It Can Create, It May Not Understand"

Peter West, Ximing Lu, Nouha Dziri, Yejin Choi
2024
ICLR

The recent wave of generative AI has sparked unprecedented global attention, with both excitement and concern over potentially superhuman levels of artificial intelligence: models now take only… 

The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning

Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Yejin Choi
2024
ICLR

The alignment tuning process of large language models (LLMs) typically involves instruction learning through supervised fine-tuning (SFT) and preference tuning via reinforcement learning from human… 
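
The rethinking named in the title, aligning a base model through in-context learning rather than tuning, can be illustrated with a short sketch: prefix a handful of instruction-response exemplars and a brief preamble to a base model's prompt. Everything below (the `base_llm()` helper, the preamble, the exemplars) is a hypothetical stand-in for illustration, not the paper's actual prompt or code.

```python
# Illustrative sketch: "aligning" an untuned base LM purely in context
# by prefixing stylistic instruction-response exemplars to each query.
# `base_llm`, PREAMBLE, and EXEMPLARS are assumptions for illustration.

def base_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a base (non-instruction-tuned) model")

PREAMBLE = "Below are examples of helpful, honest answers to user queries.\n\n"
EXEMPLARS = [
    ("What is the capital of France?", "The capital of France is Paris."),
    ("What does DNA stand for?", "DNA stands for deoxyribonucleic acid."),
]

def in_context_align(query: str) -> str:
    # Build a few-shot prompt from the exemplars, then append the new query.
    shots = "".join(f"Query: {q}\nAnswer: {a}\n\n" for q, a in EXEMPLARS)
    return base_llm(PREAMBLE + shots + f"Query: {query}\nAnswer:")
```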

MacGyver: Are Large Language Models Creative Problem Solvers?

Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Faeze Brahman
2024
NAACL

We explore the creative problem-solving capabilities of modern LLMs in a novel constrained setting. To this end, we create MACGYVER, an automatically generated dataset consisting of over 1,600… 

Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties

Taylor Sorensen, Liwei Jiang, Jena D. Hwang, Yejin Choi
2024
AAAI

Human values are crucial to human decision-making. Value pluralism is the view that multiple correct values may be held in tension with one another (e.g., when considering lying to a friend to… 

OLMo: Accelerating the Science of Language Models

Dirk Groeneveld, Iz Beltagy, Pete Walsh, Hanna Hajishirzi
2024
ACL

Language models (LMs) have become ubiquitous in both NLP research and in commercial product offerings. As their commercial importance has surged, the most powerful models have become closed off,… 

Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research

Luca Soldaini, Rodney Kinney, Akshita Bhagia, Kyle Lo
2024
ACL

Information about pretraining corpora used to train the current best-performing language models is seldom discussed: commercial models rarely detail their data, and even open models are often… 

Self-Refine: Iterative Refinement with Self-Feedback

Aman Madaan, Niket Tandon, Prakhar Gupta, Peter Clark
2023
NeurIPS

Like humans, large language models (LLMs) do not always generate the best output on their first try. Motivated by how humans refine their written text, we introduce Self-Refine, an approach for… 
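
The loop named in the title is simple to sketch: the model drafts an answer, critiques its own draft, and revises using that critique until the feedback raises no issues or an iteration budget runs out. This is a minimal illustration, not the paper's implementation; the `llm()` helper, prompt wording, and stopping heuristic are all assumptions.

```python
# Minimal sketch of a generate -> self-feedback -> refine loop.
# `llm` is a hypothetical wrapper around any text-completion call;
# the prompts and stop condition are illustrative assumptions.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def self_refine(task: str, max_iters: int = 3) -> str:
    output = llm(f"Task: {task}\nAnswer:")
    for _ in range(max_iters):
        # The same model critiques its own draft...
        critique = llm(f"Task: {task}\nDraft: {output}\n"
                       "Give concrete feedback on this draft.")
        if "no issues" in critique.lower():  # crude stop condition
            break
        # ...then revises the draft using that feedback.
        output = llm(f"Task: {task}\nDraft: {output}\n"
                     f"Feedback: {critique}\nRevised answer:")
    return output
```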

Faith and Fate: Limits of Transformers on Compositionality

Nouha Dziri, Ximing Lu, Melanie Sclar, Yejin Choi
2023
NeurIPS

Transformer large language models (LLMs) have sparked admiration for their exceptional performance on tasks that demand intricate multi-step reasoning. Yet, these models simultaneously show failures… 

Fine-Grained Human Feedback Gives Better Rewards for Language Model Training

Zeqiu Wu, Yushi Hu, Weijia Shi, Hanna Hajishirzi
2023
NeurIPS

Language models (LMs) often exhibit undesirable text generation behaviors, including generating false, toxic, or irrelevant outputs. Reinforcement learning from human feedback (RLHF) - where human… 
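
The contrast in the title, fine-grained rather than holistic rewards, can be sketched as scoring each segment of a response under several criteria and combining the scores into one training signal. The reward functions and weights below are hypothetical placeholders, not the paper's reward models.

```python
# Illustrative sketch of a fine-grained reward: score each response
# segment under multiple criteria (e.g., factuality, relevance) and
# sum the weighted scores, giving a denser signal than one scalar
# per response. Reward functions and weights are assumptions.

from typing import Callable, List

def fine_grained_reward(
    segments: List[str],
    reward_fns: List[Callable[[str], float]],
    weights: List[float],
) -> float:
    # Weighted sum over all (segment, criterion) pairs.
    return sum(
        w * fn(seg)
        for seg in segments
        for fn, w in zip(reward_fns, weights)
    )
```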

How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources

Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Hanna Hajishirzi
2023
NeurIPS

In this work we explore recent advances in instruction-tuning language models on a range of open instruction-following datasets. Despite recent claims that open models can be on par with…