Papers

Viewing 61-70 of 106 papers
  • A Two-Stage Masked LM Method for Term Set Expansion

    Guy Kushilevitz, Shaul Markovitch, Yoav Goldberg. ACL 2020. We tackle the task of Term Set Expansion (TSE): given a small seed set of example terms from a semantic class, finding more members of that class. The task is of great practical utility, and also of theoretical utility as it requires generalization from few… (See the code sketch after this list.)
  • Injecting Numerical Reasoning Skills into Language Models

    Mor Geva, Ankit Gupta, Jonathan Berant. ACL 2020. Large pre-trained language models (LMs) are known to encode substantial amounts of linguistic information. However, high-level reasoning skills, such as numerical reasoning, are difficult to learn from a language-modeling objective only. Consequently… (See the code sketch after this list.)
  • Interactive Extractive Search over Biomedical Corpora

    Hillel Taub-Tabib, Micah Shlain, Shoval Sadde, Dan Lahav, Matan Eyal, Yaara Cohen, Yoav Goldberg. ACL 2020. We present a system that allows life-science researchers to search a linguistically annotated corpus of scientific texts using patterns over dependency graphs, as well as using patterns over token sequences and a powerful variant of boolean keyword queries… (See the syntactic-search sketch after this list.)
  • Nakdan: Professional Hebrew Diacritizer

    Avi Shmidman, Shaltiel Shmidman, Moshe Koppel, Yoav Goldberg. ACL 2020. We present a system for automatic diacritization of Hebrew text. The system combines modern neural models with carefully curated declarative linguistic knowledge and comprehensive manually constructed tables and dictionaries. Besides providing state of the…
  • Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection

    Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg. ACL 2020. The ability to control for the kinds of information encoded in neural representations has a variety of use cases, especially in light of the challenge of interpreting these models. We present Iterative Null-space Projection (INLP), a novel method for removing… (See the code sketch after this list.)
  • Obtaining Faithful Interpretations from Compositional Neural Networks

    Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, Matt Gardner. ACL 2020. Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflecting the compositional structure of the problem in the network architecture. However…
  • pyBART: Evidence-based Syntactic Transformations for IE

    Aryeh Tiktinsky, Yoav Goldberg, Reut Tsarfaty. ACL 2020. Syntactic dependencies can be predicted with high accuracy, and are useful for both machine-learned and pattern-based information extraction tasks. However, their utility can be improved. These syntactic dependencies are designed to accurately reflect… (See the code sketch after this list.)
  • Syntactic Search by Example

    Micah Shlain, Hillel Taub-Tabib, Shoval Sadde, Yoav Goldberg. ACL 2020. We present a system that allows a user to search a large linguistically annotated corpus using syntactic patterns over dependency graphs. In contrast to previous attempts at this, we introduce a lightweight query language that does not require the… (See the code sketch after this list.)
  • Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness?

    Alon Jacovi, Yoav Goldberg. ACL 2020. With the growing popularity of deep-learning-based NLP models comes a need for interpretable systems. But what is interpretability, and what constitutes a high-quality interpretation? In this opinion piece we reflect on the current state of interpretability…
  • Unsupervised Domain Clusters in Pretrained Language Models

    Roee Aharoni, Yoav Goldberg. ACL 2020. The notion of "in-domain data" in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style, or level of formality. In addition, domain labels are often unavailable, making it challenging to… (See the code sketch after this list.)
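
Illustrative code sketches

The sketches below are toy renderings of ideas from several of the papers above; each is hedged and approximate, not the authors' implementation.

For "A Two-Stage Masked LM Method for Term Set Expansion", a one-loop compression of the masked-LM idea: mask each seed's occurrence in a context sentence and pool the model's substitutes, so terms proposed across several seeds become expansion candidates. The contexts and scoring are invented for illustration; the paper's actual method adds a separate ranking stage.

```python
# Toy term-set expansion with a masked LM (not the paper's two-stage
# pipeline). Contexts, model choice, and the additive scoring are assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# One example context per seed term; in practice these would come from a corpus.
contexts = {
    "paris": "She spent a year studying art in paris before the war.",
    "london": "The firm opened its first office in london in 1998.",
    "berlin": "He took the night train to berlin with his brother.",
}

candidates = {}
for seed, sent in contexts.items():
    # Mask the seed occurrence and ask the LM for substitutes; terms proposed
    # in several seeds' contexts are more likely to belong to the class.
    masked = sent.replace(seed, fill.tokenizer.mask_token)
    for pred in fill(masked, top_k=20):
        term = pred["token_str"].strip()
        if term not in contexts:
            candidates[term] = candidates.get(term, 0.0) + pred["score"]

print(sorted(candidates, key=candidates.get, reverse=True)[:10])
```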
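
For "Injecting Numerical Reasoning Skills into Language Models", a hedged sketch of the data-generation idea the abstract points at: numeric question-answer pairs are cheap to synthesize from templates and can feed an extra pre-training stage before fine-tuning. The templates and number ranges are invented; the paper's generation scheme is richer.

```python
# Synthesize toy numeric QA pairs for a numerical-reasoning pre-training
# stage. Templates and ranges are illustrative assumptions only.
import random

def synth_example(rng: random.Random):
    a, b = rng.randint(10, 9999), rng.randint(10, 9999)
    # Pick an operation and compute the gold answer programmatically.
    op_name, answer = rng.choice([("plus", a + b), ("minus", a - b)])
    question = f"What is {a} {op_name} {b}?"
    return question, str(answer)

rng = random.Random(0)
for question, answer in (synth_example(rng) for _ in range(5)):
    print(question, "->", answer)
```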
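
For "Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection", a compact numpy/scikit-learn rendering of the INLP loop the abstract names: fit a linear probe for the protected attribute, project the data onto the null space of its weights, and repeat. The toy data and probe settings are assumptions; this shows the core idea, not the authors' reference code.

```python
# Minimal INLP sketch: iteratively remove linearly decodable information
# about attribute z from representations X.
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp(X, z, n_iters=5):
    """Return a projection P such that X @ P hides linear information about z."""
    dim = X.shape[1]
    P = np.eye(dim)
    for _ in range(n_iters):
        probe = LogisticRegression(max_iter=1000).fit(X @ P, z)
        W = probe.coef_  # directions the probe uses to predict z
        # Orthonormal basis for the row space of W, via SVD.
        basis = np.linalg.svd(W, full_matrices=False)[2]
        # Compose with the projection onto the null space of W.
        P = P @ (np.eye(dim) - basis.T @ basis)
    return P

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
z = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)  # toy attribute
P = inlp(X, z)
before = LogisticRegression(max_iter=1000).fit(X, z).score(X, z)
after = LogisticRegression(max_iter=1000).fit(X @ P, z).score(X @ P, z)
print(f"probe accuracy before: {before:.2f}, after INLP: {after:.2f}")
```

After a few iterations the probe's accuracy on the projected data should fall to roughly chance level, which is the guarding effect the title describes.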
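
For "pyBART: Evidence-based Syntactic Transformations for IE", a toy that does not use pyBART's API at all: it illustrates one representative kind of transformation (propagating a subject across a verb conjunction) on a spaCy parse, so the extra edge a pattern-based extractor wants is made explicit.

```python
# Not pyBART's API; a hand-rolled example of one enhanced-dependency-style
# transformation over a spaCy parse.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("She entered the room and closed the door.")

# Collect the plain dependency edges as (head, relation, dependent) triples.
edges = [(t.head.i, t.dep_, t.i) for t in doc if t.dep_ != "ROOT"]

for tok in doc:
    # For a verb conjoined with another verb, copy the first verb's subject,
    # so "She" is also recorded as the subject of "closed".
    if tok.dep_ == "conj" and tok.head.pos_ == "VERB":
        for child in tok.head.children:
            if child.dep_ == "nsubj":
                edges.append((tok.i, "nsubj(enhanced)", child.i))

for head, rel, dep in sorted(edges):
    print(f"{doc[head].text} -{rel}-> {doc[dep].text}")
```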
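
For "Syntactic Search by Example" (and the biomedical search system above), a rough analogue built on spaCy's DependencyMatcher. The papers describe a lighter query-by-example language over a large annotated corpus; the pattern syntax, model, and sentence below are stand-ins.

```python
# Searching a parsed sentence with a pattern over its dependency graph.
import spacy
from spacy.matcher import DependencyMatcher

nlp = spacy.load("en_core_web_sm")
matcher = DependencyMatcher(nlp.vocab)

# Query: "<agent> inhibits <target>", expressed over the parse tree.
pattern = [
    {"RIGHT_ID": "pred", "RIGHT_ATTRS": {"LEMMA": "inhibit"}},
    {"LEFT_ID": "pred", "REL_OP": ">",
     "RIGHT_ID": "agent", "RIGHT_ATTRS": {"DEP": "nsubj"}},
    {"LEFT_ID": "pred", "REL_OP": ">",
     "RIGHT_ID": "target", "RIGHT_ATTRS": {"DEP": "dobj"}},
]
matcher.add("INHIBITS", [pattern])

doc = nlp("Aspirin inhibits platelet aggregation, while ibuprofen does not.")
for match_id, token_ids in matcher(doc):
    pred, agent, target = token_ids  # order follows the pattern definition
    print(f"{doc[agent].text} --inhibits--> {doc[target].text}")
```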
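
For "Unsupervised Domain Clusters in Pretrained Language Models", a hedged sketch of the core finding: unlabeled sentence representations from a pretrained LM cluster by domain. Mean pooling, the DistilBERT checkpoint, and KMeans (the paper fits Gaussian Mixture Models on much larger samples) are simplifications for a toy-sized example.

```python
# Cluster mean-pooled LM sentence embeddings without any domain labels.
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased").eval()

sentences = [
    "The patient was administered 50mg of the drug daily.",        # medical
    "Clinical trials showed a significant reduction in symptoms.",  # medical
    "The defendant appealed the ruling to the supreme court.",      # legal
    "The contract was deemed void for lack of consideration.",      # legal
]

with torch.no_grad():
    enc = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    # Crude sentence embedding: mean over the last hidden states
    # (padding positions included, which is fine for a toy demo).
    emb = model(**enc).last_hidden_state.mean(dim=1).numpy()

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
print(labels)  # ideally groups the medical and legal sentences together
```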