
Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Beam Decoding with Controlled Patience

Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Noah A. Smith
2022
arXiv

Text generation with beam search has proven successful in a wide range of applications. The commonly-used implementation of beam decoding follows a first-come, first-served heuristic: it keeps a set…
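
The change the paper proposes is to beam search's stopping rule rather than to the search itself. Below is a minimal Python sketch of that idea, assuming a patience factor that scales how many finished hypotheses are collected before decoding halts; the function and variable names are illustrative, not the authors' implementation.

```python
def beam_done(finished_hyps, beam_size, patience=1.0):
    """Patience-controlled stopping rule for beam decoding.

    Standard beam decoding stops once `beam_size` finished hypotheses
    have been collected (patience = 1.0). A patience factor > 1 keeps
    decoding until more candidates finish, trading extra compute for
    potentially better output; a factor < 1 stops earlier.
    """
    return len(finished_hyps) >= patience * beam_size

# Hypothetical use inside a decoding loop:
# while not beam_done(finished, k, patience=2.0):
#     beam, finished = expand_and_prune(beam, finished, k)
```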

Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks

Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Daniel Khashabi
2022
arXiv

How can we measure the generalization of models to a variety of unseen tasks when provided with their language instructions? To facilitate progress in this goal, we introduce NATURAL-INSTRUCTIONS…

Don't Say What You Don't Know: Improving the Consistency of Abstractive Summarization by Constraining Beam Search

Daniel King, Zejiang Shen, Nishant Subramani, Doug Downey
2022
GEM Workshop 2022

Abstractive summarization systems today produce fluent and relevant output, but often “hallucinate” statements not supported by the source text. We analyze the connection between hallucinations and… 
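
The title hints at a decoding-time fix: hypotheses that introduce content unsupported by the source are pruned from the beam. Here is a rough sketch of that pattern, with support crudely approximated by token overlap; the predicate and names are assumptions for illustration, and the paper's actual constraint is more involved.

```python
def prune_unsupported(beam, source_tokens):
    """Keep hypotheses whose newest token is supported by the source.

    `beam` is a list of hypotheses, each a list of tokens. Support is
    approximated as "the token appears in the source or is a common
    function word". Falls back to the unpruned beam if the constraint
    would empty it.
    """
    allowed = set(source_tokens) | {"the", "a", "an", "is", "of", "and", "to", "."}
    pruned = [hyp for hyp in beam if hyp[-1] in allowed]
    return pruned or beam

# Example:
# source = "the model was trained on news articles".split()
# prune_unsupported([["the", "model"], ["the", "robot"]], source)
# -> [["the", "model"]]
```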

Staged Training for Transformer Language Models

Sheng Shen, Pete Walsh, K. Keutzer, Iz Beltagy
2022
ICML 2022

The current standard approach to scaling transformer language models trains each model size from a different random initialization. As an alternative, we consider a staged training setup that begins… 
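
The mechanism underlying staged training is a growth operator that initializes a larger model from a smaller trained one, so training resumes rather than restarts. Below is a minimal sketch of one such operator, depth growth by duplicating trained layers; the name and the PyTorch framing are assumptions, and this omits the rescaling the paper uses to keep growth loss-preserving.

```python
import copy

import torch.nn as nn

def grow_depth(layers: nn.ModuleList, factor: int = 2) -> nn.ModuleList:
    """Grow a transformer stack by duplicating each trained layer.

    Every layer is deep-copied `factor` times in place, so the larger
    model inherits the smaller model's weights instead of starting
    from a fresh random initialization.
    """
    grown = []
    for layer in layers:
        grown.extend(copy.deepcopy(layer) for _ in range(factor))
    return nn.ModuleList(grown)
```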

A Controllable Model of Grounded Response Generation

Zeqiu Wu, Michel Galley, Chris Brockett, Bill Dolan
2022
AAAI

Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process. This control is essential to ensure that users' semantic… 

Computational Lens on Cognition: Study Of Autobiographical Versus Imagined Stories With Large-Scale Language Models

Maarten Sap, A. Jafarpour, Yejin Choi, E. Horvitz
2022
arXiv

Lifelong experiences and learned knowledge lead to shared expectations about how common situations tend to unfold. Such knowledge enables people to interpret story narratives and identify salient… 

Imagined versus Remembered Stories: Quantifying Differences in Narrative Flow

Maarten Sap, A. Jafarpour, Yejin Choi, E. Horvitz
2022
Sociology

Lifelong experiences and learned knowledge lead to shared expectations about how common situations tend to unfold. Such knowledge of narrative event flow enables people to weave together a story…

Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts

Daniel Khashabi, Shan Lyu, Sewon Min, Yejin Choi
2022
NAACL

Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning. Motivated by these promising results, we investigate the feasibility of… 
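
A "discretized interpretation" of a continuous prompt is commonly obtained by projecting each learned prompt vector onto its nearest token in the model's embedding table. A small sketch of that projection follows, assuming cosine similarity and the tensor shapes noted below; both are illustrative choices, not necessarily the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def discretize_prompt(prompt: torch.Tensor, embeddings: torch.Tensor) -> list[int]:
    """Map each continuous prompt vector to its nearest vocabulary token.

    prompt:     (prompt_len, dim) learned soft-prompt vectors.
    embeddings: (vocab_size, dim) model token-embedding matrix.
    Returns the nearest token id per prompt position under cosine similarity.
    """
    sims = F.normalize(prompt, dim=-1) @ F.normalize(embeddings, dim=-1).T
    return sims.argmax(dim=-1).tolist()
```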

UnifiedQA-v2: Stronger Generalization via Broader Cross-Format Training

Daniel Khashabi, Yeganeh Kordi, Hannaneh Hajishirzi
2022
arXiv

We present UNIFIEDQA-v2, a QA model built with the same process as UNIFIEDQA, except that it utilizes more supervision – roughly 3× the number of datasets used for UNIFIEDQA. This generally leads to… 

FLEX: Unifying Evaluation for Few-Shot NLP

Jonathan Bragg, Arman Cohan, Kyle Lo, Iz Beltagy
2021
NeurIPS

Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental…