Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Plug and Play Autoencoders for Conditional Text Generation

Florian Mai, Nikolaos Pappas, I. Montero, Noah A. Smith
2020
EMNLP

Text autoencoders are commonly used for conditional generation tasks such as style transfer. We propose methods which are plug and play, where any pretrained autoencoder can be used, and only… 

The Multilingual Amazon Reviews Corpus

Phillip Keung, Y. Lu, Gyorgy Szarvas, Noah A. Smith
2020
EMNLP

We present the Multilingual Amazon Reviews Corpus (MARC), a large-scale collection of Amazon reviews for multilingual text classification. The corpus contains reviews in English, Japanese, German,… 

Writing Strategies for Science Communication: Data and Computational Analysis

Tal August, Lauren Kim, Katharina Reinecke, Noah A. Smith
2020
EMNLP

Communicating complex scientific ideas without misleading or overwhelming the public is challenging. While science communication guides exist, they rarely offer empirical evidence for how their… 

Do Language Embeddings Capture Scales?

Xikun Zhang, Deepak Ramachandran, Ian Tenney, Dan Roth
2020
Findings of EMNLP • BlackboxNLP Workshop

Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge. One form of knowledge that has not been studied yet in this context is… 

UnQovering Stereotyping Biases via Underspecified Questions

Tao Li, Tushar Khot, Daniel Khashabi, Vivek Srikumar
2020
Findings of EMNLP

While language embeddings have been shown to have stereotyping biases, how these biases affect downstream question answering (QA) models remains unexplored. We present UNQOVER, a general framework… 

Rearrangement: A Challenge for Embodied AI

Dhruv Batra, A. X. Chang, S. Chernova, Hao Su
2020
arXiv

We describe a framework for research and evaluation in Embodied AI. Our proposal is based on a canonical task: Rearrangement. A standard task can focus the development of new techniques and serve as… 

ABNIRML: Analyzing the Behavior of Neural IR Models

Sean MacAvaney, Sergey Feldman, Nazli Goharian, Arman Cohan
2020
TACL

Numerous studies have demonstrated the effectiveness of pretrained contextualized language models such as BERT and T5 for ad-hoc search. However, it is not well understood why these methods are so… 

GO FIGURE: A Meta Evaluation of Factuality in Summarization

Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Jianfeng Gao
2020
ACL

Text generation models can generate factually inconsistent text containing distorted or fabricated facts about the source text. Recent work has focused on building evaluation models to verify the… 

NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints

Ximing Lu, Peter West, Rowan Zellers, Yejin Choi
2020
NAACL

Conditional text generation often requires lexical constraints, i.e., which words should or shouldn’t be included in the output text. While the dominant recipe for conditional text generation has… 

Paraphrasing vs Coreferring: Two Sides of the Same Coin

Y. Meged, Avi Caciularu, Vered Shwartz, I. Dagan
2020
arXiv

We study the potential synergy between two different NLP tasks, both confronting lexical variability: identifying predicate paraphrases and event coreference resolution. First, we used annotations…