Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Break It Down: A Question Understanding Benchmark

Tomer Wolfson, Mor Geva, Ankit Gupta, Jonathan Berant
2020
TACL

Understanding natural language questions entails the ability to break down a question into the requisite steps for computing its answer. In this work, we introduce a Question Decomposition Meaning… 

oLMpics - On what Language Model Pre-training Captures

Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant
2020
TACL

Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are… 

A Formal Hierarchy of RNN Architectures

William Merrill, Gail Garfinkel Weiss, Yoav Goldberg, Eran Yahav
2020
ACL

We develop a formal hierarchy of the expressive capacity of RNN architectures. The hierarchy is based on two formal properties: space complexity, which measures the RNN's memory, and rational… 

A Two-Stage Masked LM Method for Term Set Expansion

Guy Kushilevitz, Shaul Markovitch, Yoav Goldberg
2020
ACL

We tackle the task of Term Set Expansion (TSE): given a small seed set of example terms from a semantic class, finding more members of that class. The task is of great practical utility, and also of… 

Injecting Numerical Reasoning Skills into Language Models

Mor Geva, Ankit Gupta, Jonathan Berant
2020
ACL

Large pre-trained language models (LMs) are known to encode substantial amounts of linguistic information. However, high-level reasoning skills, such as numerical reasoning, are difficult to learn… 

Interactive Extractive Search over Biomedical Corpora

Hillel Taub-Tabib, Micah Shlain, Shoval Sadde, Yoav Goldberg
2020
ACL

We present a system that allows life-science researchers to search a linguistically annotated corpus of scientific texts using patterns over dependency graphs, as well as using patterns over token… 

Nakdan: Professional Hebrew Diacritizer

Avi Shmidman, Shaltiel Shmidman, Moshe Koppel, Yoav Goldberg
2020
ACL

We present a system for automatic diacritization of Hebrew text. The system combines modern neural models with carefully curated declarative linguistic knowledge and comprehensive manually… 

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection

Shauli Ravfogel, Yanai Elazar, Hila Gonen, Yoav Goldberg
2020
ACL

The ability to control for the kinds of information encoded in neural representation has a variety of use cases, especially in light of the challenge of interpreting these models. We present… 
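The iterative nullspace projection idea named in the title can be sketched as follows: repeatedly fit a linear classifier to predict the protected attribute from the representations, then project the representations onto the nullspace of all classifier directions found so far, so that no linear predictor can recover the attribute. This is a minimal illustrative sketch, assuming scikit-learn and NumPy; the function names and the synthetic data are hypothetical, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def nullspace_projector(W):
    """Projector onto the nullspace of the rows of W (shape k x d)."""
    _, s, Vt = np.linalg.svd(W, full_matrices=False)
    B = Vt[s > 1e-10]                 # orthonormal basis of W's row space
    return np.eye(W.shape[1]) - B.T @ B


def inlp(X, y, n_iters=5):
    """Iteratively remove linearly decodable information about y from X."""
    d = X.shape[1]
    P = np.eye(d)
    directions = []
    for _ in range(n_iters):
        # Fit a linear "attribute" classifier on the projected data.
        clf = LogisticRegression(max_iter=1000).fit(X @ P, y)
        directions.append(clf.coef_)  # shape (1, d) for binary y
        # Project onto the joint nullspace of every direction found so far.
        P = nullspace_projector(np.vstack(directions))
    return P


# Toy demo: the protected attribute is encoded in a single coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] > 0).astype(int)
P = inlp(X, y, n_iters=3)
# After projection, a fresh linear classifier is near chance on y.
acc = LogisticRegression(max_iter=1000).fit(X @ P, y).score(X @ P, y)
```

The key property is that `P` is an orthogonal projector (`P @ P == P`), so applying it to the representations guards the attribute against any linear probe in the removed subspace.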

Obtaining Faithful Interpretations from Compositional Neural Networks

Sanjay Subramanian, Ben Bogin, Nitish Gupta, Matt Gardner
2020
ACL

Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflecting the compositional… 

pyBART: Evidence-based Syntactic Transformations for IE

Aryeh Tiktinsky, Yoav Goldberg, Reut Tsarfaty
2020
ACL

Syntactic dependencies can be predicted with high accuracy, and are useful for both machine-learned and pattern-based information extraction tasks. However, their utility can be improved. These…