
Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Natural Adversarial Objects

Felix Lau, Nishant Subramani, Sasha Harrison, Rosanne Liu
2021
NeurIPS 2021 • Data-Centric AI Workshop

Although state-of-the-art object detection methods have shown compelling performance, models are often not robust to adversarial attacks and out-of-distribution data. We introduce a new dataset,…

Bridging the Imitation Gap by Adaptive Insubordination

Luca Weihs, Unnat Jain, Jordi Salvador, A. Schwing
2021
arXiv

Why do agents often obtain better reinforcement learning policies when imitating a worse expert? We show that privileged information used by the expert is marginalized in the learned agent policy,… 
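
As a rough intuition for the marginalization effect described above, here is a hedged toy sketch (all numbers and variables are invented for illustration, not taken from the paper):

```python
import numpy as np

# Toy illustration of the imitation gap: the expert conditions on a
# privileged bit the student never observes, so behavior cloning can
# only recover the marginal action distribution.
rng = np.random.default_rng(0)
hidden = rng.integers(0, 2, size=10_000)  # privileged state, hidden from student
expert_action = hidden                    # expert acts directly on the hidden bit

# Best state-free student policy: match the marginal over expert actions.
print(expert_action.mean())  # ~0.5 -- the privileged signal averages away
```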

Iconary: A Pictionary-Based Game for Testing Multimodal Communication with Drawings and Text

Christopher Clark, Jordi Salvador, Dustin Schwenk, Ali Farhadi
2021
arXiv

Communicating with humans is challenging for AIs because it requires a shared understanding of the world, complex semantics (e.g., metaphors or analogies), and at times multimodal gestures (e.g.,… 

Specializing Multilingual Language Models: An Empirical Study

Ethan C. Chau, Noah A. Smith
2021
EMNLP • Workshop on Multilingual Representation Learning

Pretrained multilingual language models have become a common tool in transferring NLP capabilities to low-resource languages, often with adaptations. In this work, we study the performance,… 

Towards Personalized Descriptions of Scientific Concepts

Sonia K. Murthy, Daniel King, Tom Hope, Doug Downey
2021
EMNLP 2021 • WiNLP

A single scientific concept can be described in many different ways, and the most informative description depends on the audience. In this paper, we propose generating personalized scientific… 

Measuring Association Between Labels and Free-Text Rationales

Sarah Wiegreffe, Ana Marasović, Noah A. Smith
2021
EMNLP

Interpretable NLP has taken an increasing interest in ensuring that explanations are faithful to the model’s decision-making process. This property is crucial for machine learning researchers and… 

Transformer Feed-Forward Layers Are Key-Value Memories

Mor Geva, R. Schuster, Jonathan Berant, Omer Levy
2021
EMNLP

Feed-forward layers constitute two-thirds of a transformer model’s parameters, yet their role in the network remains underexplored. We show that feed-forward layers in transformer-based language… 
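
The key-value framing can be made concrete with a small sketch: a minimal NumPy rendering of a feed-forward layer viewed as an unnormalized key-value memory, FF(x) = f(x·Kᵀ)·V. Dimensions and weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

d_model, d_ff = 8, 32  # hidden size and number of "memories" (illustrative)
rng = np.random.default_rng(0)
K = rng.normal(size=(d_ff, d_model))  # keys: each row detects an input pattern
V = rng.normal(size=(d_ff, d_model))  # values: each row is a stored output

def feed_forward(x):
    coeffs = np.maximum(x @ K.T, 0.0)  # memory coefficients (ReLU activation)
    return coeffs @ V                  # coefficient-weighted sum of value vectors

x = rng.normal(size=(d_model,))
print(feed_forward(x).shape)  # (8,)
```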

Value-aware Approximate Attention

Ankit Gupta, Jonathan Berant
2021
EMNLP

Following the success of dot-product attention in Transformers, numerous approximations have been recently proposed to address its quadratic complexity with respect to the input length. However, all… 
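
For reference, the quadratic cost being approximated comes from the n×n score matrix of standard dot-product attention. A minimal sketch of that baseline (not the paper's method; shapes are illustrative):

```python
import numpy as np

def dot_product_attention(Q, K, V):
    # The (n, n) score matrix is what scales quadratically with input length.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

n, d = 128, 16
rng = np.random.default_rng(1)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
print(dot_product_attention(Q, K, V).shape)  # (128, 16)
```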

GooAQ: Open Question Answering with Diverse Answer Types

Daniel Khashabi, Amos Ng, Tushar Khot, Chris Callison-Burch
2021
Findings of EMNLP

While day-to-day questions come with a variety of answer types, the current question answering (QA) literature has failed to adequately address the answer diversity of questions. To this end, we… 

BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief

Nora Kassner, Oyvind Tafjord, H. Schütze, P. Clark
2021
EMNLP

Although pretrained language models (PTLMs) have been shown to contain significant amounts of world knowledge, they can still produce inconsistent answers to questions when probed, even after using…