Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Infusing Finetuning with Semantic Dependencies

Zhaofeng Wu, Hao Peng, Noah A. Smith
2021
TACL

For natural language processing systems, two kinds of evidence support the use of text representations from neural language models “pretrained” on large unannotated corpora: performance on… 

Break, Perturb, Build: Automatic Perturbation of Reasoning Paths through Question Decomposition

Mor Geva, Tomer Wolfson, Jonathan Berant
2021
TACL

Recent efforts to create challenge benchmarks that test the abilities of natural language understanding models have largely depended on human annotations. In this work, we introduce the “Break,… 

Revisiting Few-shot Relation Classification: Evaluation Data and Classification Schemes

Ofer Sabo, Yanai Elazar, Yoav Goldberg, Ido Dagan
2021
TACL

We explore few-shot learning (FSL) for relation classification (RC). Focusing on the realistic scenario of FSL, in which a test instance might not belong to any of the target categories… 

MultiCite: Modeling realistic citations requires moving beyond the single-sentence single-label setting

Anne Lauscher, Brandon Ko, Bailey Kuehl, Kyle Lo
2021
NAACL

Citation context analysis (CCA) is an important task in natural language processing that studies how and why scholars discuss each others’ work. Despite being studied for decades, traditional… 

“How’s Shelby the Turtle today?” Strengths and Weaknesses of Interactive Animal-Tracking Maps for Environmental Communication

Matt Ziegler, Michael Quinlan, Zage Strassberg-Phillips, Kurtis Heimerl
2021
COMPASS

Interactive wildlife-tracking maps on public-facing websites and apps have become a popular way to share scientific data with the public as more conservationists and wildlife researchers deploy… 

Critical Thinking for Language Models

Gregor Betz, Christian Voigt, Kyle Richardson
2021
IWCS

This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models. We introduce a synthetic text corpus of deductively valid arguments, and use this… 

Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral

Lang Liu, Krishna Pillutla, S. Welleck, Z. Harchaoui
2021
arXiv

The spectacular success of deep generative models calls for quantitative tools to measure their statistical performance. Divergence frontiers have recently been proposed as an evaluation framework… 

Memory-efficient Transformers via Top-k Attention

Ankit Gupta, Guy Dar, Shaya Goodman, Jonathan Berant
2021
arXiv

Following the success of dot-product attention in Transformers, numerous approximations have been recently proposed to address its quadratic complexity with respect to the input length. While these… 

Overview and Insights from the SciVer Shared Task on Scientific Claim Verification

David Wadden, Kyle Lo
2021
SDP Workshop • NAACL

We present an overview of the SCIVER shared task, presented at the 2nd Scholarly Document Processing (SDP) workshop at NAACL 2021. In this shared task, systems were provided a scientific claim and a… 

RobustNav: Towards Benchmarking Robustness in Embodied Navigation

Prithvijit Chattopadhyay, Judy Hoffman, R. Mottaghi, Aniruddha Kembhavi
2021
arXiv

As an attempt towards assessing the robustness of embodied navigation agents, we propose ROBUSTNAV, a framework to quantify the performance of embodied navigation agents when exposed to a wide…