Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Beam Decoding with Controlled Patience

Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Noah A. Smith
2022
arXiv

Text generation with beam search has proven successful in a wide range of applications. The commonly-used implementation of beam decoding follows a first come, first served heuristic: it keeps a set… 

Infrastructure for rapid open knowledge network development

Michael Cafarella, Michael Anderson, Iz Beltagy, Jiayun Zou
2022
AI Magazine

The past decade has witnessed a growth in the use of knowledge graph technologies for advanced data search, data integration, and query-answering applications. The leading example of a public,… 

Continuous Scene Representations for Embodied AI

S. Gadre, Kiana Ehsani, S. Song, Roozbeh Mottaghi
2022
arXiv

We propose Continuous Scene Representations (CSR), a scene representation constructed by an embodied agent navigating within a space, where objects and their relationships are modeled by continuous… 

Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks

Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Daniel Khashabi
2022
arXiv

How can we measure the generalization of models to a variety of unseen tasks when provided with their language instructions? To facilitate progress in this goal, we introduce NATURAL-INSTRUCTIONS… 

Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space

Mor Geva, Avi Caciularu, Kevin Ro Wang, Yoav Goldberg
2022
arXiv

Transformer-based language models (LMs) are at the core of modern NLP, but their internal prediction construction process is opaque and largely not understood. In this work, we make a substantial… 

COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics

Lianhui Qin, S. Welleck, Daniel Khashabi, Yejin Choi
2022
arXiv

Many applications of text generation require incorporating different constraints to control the semantics or style of generated text. These constraints can be hard (e.g., ensuring certain keywords… 

CiteRead: Integrating Localized Citation Contexts into Scientific Paper Reading

Napol Rachatasumrit, Jonathan Bragg, Amy X. Zhang, Daniel S. Weld
2022
IUI

When reading a scholarly paper, scientists oftentimes wish to understand how follow-on work has built on or engages with what they are reading. While a paper itself can only discuss prior work, some… 

Probing Factually Grounded Content Transfer with Factual Ablation

Peter West, Chris Quirk, Michel Galley, Yejin Choi
2022
Findings of ACL

Despite recent success, large neural models often generate factually incorrect text. Compounding this is the lack of a standard automatic evaluation for factuality; it cannot be meaningfully improved… 

Memory-assisted prompt editing to improve GPT-3 after deployment

Aman Madaan, Niket Tandon, Peter Clark, Yiming Yang
2022
ACL • Workshop on Commonsense Reasoning

Large LMs such as GPT-3 are powerful, but can commit mistakes that are obvious to humans. For example, GPT-3 would mistakenly interpret "What word is similar to good?" to mean a homonym, while the… 

Don't Say What You Don't Know: Improving the Consistency of Abstractive Summarization by Constraining Beam Search

Daniel King, Zejiang Shen, Nishant Subramani, Doug Downey
2022
GEM Workshop 2022

Abstractive summarization systems today produce fluent and relevant output, but often “hallucinate” statements not supported by the source text. We analyze the connection between hallucinations and…