Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
MultiCite: Modeling realistic citations requires moving beyond the single-sentence single-label setting
Citation context analysis (CCA) is an important task in natural language processing that studies how and why scholars discuss each other's work. Despite being studied for decades, traditional…
ParsiNLU: A Suite of Language Understanding Challenges for Persian
Despite the progress made in recent years in addressing natural language understanding (NLU) challenges, the majority of this progress remains concentrated on resource-rich languages like…
Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand?
Language models trained on billions of tokens have recently led to unprecedented results on many NLP tasks. This success raises the question of whether, in principle, a system can ever “understand”…
Revisiting Few-shot Relation Classification: Evaluation Data and Classification Schemes
We explore few-shot learning (FSL) for relation classification (RC). Focusing on the realistic scenario of FSL, in which a test instance might not belong to any of the target categories…
“How’s Shelby the Turtle today?” Strengths and Weaknesses of Interactive Animal-Tracking Maps for Environmental Communication
Interactive wildlife-tracking maps on public-facing websites and apps have become a popular way to share scientific data with the public as more conservationists and wildlife researchers deploy…
Critical Thinking for Language Models
This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models. We introduce a synthetic text corpus of deductively valid arguments, and use this…
Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral
The spectacular success of deep generative models calls for quantitative tools to measure their statistical performance. Divergence frontiers have recently been proposed as an evaluation framework…
Memory-efficient Transformers via Top-k Attention
Following the success of dot-product attention in Transformers, numerous approximations have been recently proposed to address its quadratic complexity with respect to the input length. While these…
Overview and Insights from the SciVer Shared Task on Scientific Claim Verification
We present an overview of the SCIVER shared task, presented at the 2nd Scholarly Document Processing (SDP) workshop at NAACL 2021. In this shared task, systems were provided a scientific claim and a…
RobustNav: Towards Benchmarking Robustness in Embodied Navigation
As an attempt towards assessing the robustness of embodied navigation agents, we propose ROBUSTNAV, a framework to quantify the performance of embodied navigation agents when exposed to a wide…