Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Saturated Transformers are Constant-Depth Threshold Circuits

William Merrill, Ashish Sabharwal, Noah A. Smith
2022
TACL

Transformers have become a standard neural network architecture for many NLP problems, motivating theoretical analysis of their power in terms of formal languages. Recent work has shown that… 

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation

Ofir Press, Noah A. Smith, M. Lewis
2022
ICLR

Since the introduction of the transformer model by Vaswani et al. (2017), a fundamental question has yet to be answered: how does a model achieve extrapolation at inference time for sequences that… 

Beam Decoding with Controlled Patience

Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Noah A. Smith
2022
arXiv

Text generation with beam search has proven successful in a wide range of applications. The commonly-used implementation of beam decoding follows a first come, first served heuristic: it keeps a set…

Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks

Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Daniel Khashabi
2022
arXiv

How can we measure the generalization of models to a variety of unseen tasks when provided with their language instructions? To facilitate progress in this goal, we introduce NATURAL-INSTRUCTIONS…

Don't Say What You Don't Know: Improving the Consistency of Abstractive Summarization by Constraining Beam Search

Daniel King, Zejiang Shen, Nishant Subramani, Doug Downey
2022
GEM Workshop 2022

Abstractive summarization systems today produce fluent and relevant output, but often “hallucinate” statements not supported by the source text. We analyze the connection between hallucinations and… 

Staged Training for Transformer Language Models

Sheng Shen, Pete Walsh, K. Keutzer, Iz Beltagy
2022
ICML 2022

The current standard approach to scaling transformer language models trains each model size from a different random initialization. As an alternative, we consider a staged training setup that begins… 

A Controllable Model of Grounded Response Generation

Zeqiu Wu, Michel Galley, Chris Brockett, Bill Dolan
2022
AAAI

Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process. This control is essential to ensure that users' semantic… 

Computational Lens on Cognition: Study Of Autobiographical Versus Imagined Stories With Large-Scale Language Models

Maarten Sap, A. Jafarpour, Yejin Choi, E. Horvitz
2022
arXiv

Lifelong experiences and learned knowledge lead to shared expectations about how common situations tend to unfold. Such knowledge enables people to interpret story narratives and identify salient… 

Imagined versus Remembered Stories: Quantifying Differences in Narrative Flow

Maarten Sap, A. Jafarpour, Yejin Choi, E. Horvitz
2022

Lifelong experiences and learned knowledge lead to shared expectations about how common situations tend to unfold. Such knowledge of narrative event flow enables people to weave together a story…

Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts

Daniel Khashabi, Shan Lyu, Sewon Min, Yejin Choi
2022
NAACL

Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning. Motivated by these promising results, we investigate the feasibility of…