Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


SPECTER: Document-level Representation Learning using Citation-informed Transformers

Arman Cohan, Sergey Feldman, Iz Beltagy, Daniel S. Weld
2020
ACL

Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are… 

Stolen Probability: A Structural Weakness of Neural Language Models

David Demeter, Gregory Kimmel, Doug Downey
2020
ACL

Neural Network Language Models (NNLMs) generate probability distributions by applying a softmax function to a distance metric formed by taking the dot product of a prediction vector with all word… 
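
As a gloss on the mechanism this abstract describes (a minimal sketch, not the paper's code), the NumPy snippet below computes an NNLM output distribution: the dot product of a prediction vector with every word vector gives the logits, and a softmax turns them into probabilities. The vocabulary size, dimensions, and random values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, hidden_dim = 10, 4  # illustrative sizes, not from the paper

    # Word vectors: one row per vocabulary word.
    word_vectors = rng.standard_normal((vocab_size, hidden_dim))

    # Prediction vector the network produces for the current context.
    prediction = rng.standard_normal(hidden_dim)

    # Dot product of the prediction vector with all word vectors gives the logits.
    logits = word_vectors @ prediction

    # Softmax converts the logits into a probability distribution over the vocabulary.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    assert np.isclose(probs.sum(), 1.0)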

Syntactic Search by Example

Micah Shlain, Hillel Taub-Tabib, Shoval Sadde, Yoav Goldberg
2020
ACL

We present a system that allows a user to search a large linguistically annotated corpus using syntactic patterns over dependency graphs. In contrast to previous attempts to this effect, we… 
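
To illustrate what searching with syntactic patterns over dependency graphs looks like in practice, here is a hedged sketch using spaCy's DependencyMatcher rather than the authors' query-by-example system; the pattern and example sentence are assumptions for demonstration only.

    import spacy
    from spacy.matcher import DependencyMatcher

    # Requires a trained pipeline: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")
    matcher = DependencyMatcher(nlp.vocab)

    # Pattern over the dependency graph: a verb together with its nominal subject.
    pattern = [
        {"RIGHT_ID": "verb", "RIGHT_ATTRS": {"POS": "VERB"}},
        {
            "LEFT_ID": "verb",
            "REL_OP": ">",  # "verb" is the immediate head of "subject"
            "RIGHT_ID": "subject",
            "RIGHT_ATTRS": {"DEP": "nsubj"},
        },
    ]
    matcher.add("VERB_SUBJECT", [pattern])

    doc = nlp("The parser finds every subject that a verb governs.")
    for match_id, token_ids in matcher(doc):
        # token_ids are aligned with the pattern order: [verb, subject]
        print(doc[token_ids[0]].text, "<-", doc[token_ids[1]].text)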

Temporal Common Sense Acquisition with Minimal Supervision

Ben Zhou, Qiang Ning, Daniel Khashabi, Dan Roth
2020
ACL

Temporal common sense (e.g., duration and frequency of events) is crucial for understanding natural language. However, its acquisition is challenging, partly because such information is often not… 

The Right Tool for the Job: Matching Model and Instance Complexities

Roy Schwartz, Gabi Stanovsky, Swabha Swayamdipta, Noah A. Smith
2020
ACL

As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs. To better respect a given inference budget, we propose…

Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness?

Alon Jacovi, Yoav Goldberg
2020
ACL

With the growing popularity of deep-learning-based NLP models comes a need for interpretable systems. But what is interpretability, and what constitutes a high-quality interpretation? In this…

Unsupervised Domain Clusters in Pretrained Language Models

Roee Aharoni, Yoav Goldberg
2020
ACL

The notion of "in-domain data" in NLP is often overly simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style, or level of formality. In addition, domain…

Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering

Ben Bogin, Sanjay Subramanian, Matt Gardner, Jonathan Berant
2020
TACL

Answering questions that involve multi-step reasoning requires decomposing them and using the answers of intermediate steps to reach the final answer. However, state-of-the-art models in grounded…

Procedural Reading Comprehension with Attribute-Aware Context Flow

Aida Amini, Antoine Bosselut, Bhavana Dalvi Mishra, Hannaneh Hajishirzi
2020
AKBC

Procedural texts often describe processes (e.g., photosynthesis and cooking) that happen over entities (e.g., light, food). In this paper, we introduce an algorithm for procedural reading… 

ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks

Mohit Shridhar, Jesse Thomason, Daniel Gordon, Dieter Fox
2020
CVPR

We present ALFRED (Action Learning From Realistic Environments and Directives), a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions…