Papers

  • First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT

    Benjamin Muller, Yanai Elazar, Benoît Sagot, Djamé Seddah • EACL • 2021
    Multilingual pretrained language models have demonstrated remarkable zero-shot cross-lingual transfer capabilities. Such transfer emerges by fine-tuning on a task of interest in one language and evaluating on a distinct language, not seen during the fine…
  • Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI

    Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg • FAccT • 2021
    Trust is a central component of the interaction between people and AI, in that 'incorrect' levels of trust may cause misuse, abuse or disuse of the technology. But what, precisely, is the nature of trust in AI? What are the prerequisites and goals of the…
  • Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge

    Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, Jonathan Berant • NeurIPS • Spotlight Presentation • 2020
    To what extent can a neural network systematically reason over symbolic facts? Evidence suggests that large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control. Recently, it has been shown that…
  • It's not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT

    Hila Gonen, Shauli Ravfogel, Yanai Elazar, Yoav Goldberg • EMNLP • BlackboxNLP Workshop • 2020
    Recent works have demonstrated that multilingual BERT (mBERT) learns rich cross-lingual representations that allow for transfer across languages. We study the word-level translation information embedded in mBERT and present two simple methods that expose…
  • Unsupervised Distillation of Syntactic Information from Contextualized Word Representations

    Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg • EMNLP • BlackboxNLP Workshop • 2020
    Contextualized word representations, such as ELMo and BERT, were shown to perform well on various semantic and syntactic tasks. In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations…
  • The Extraordinary Failure of Complement Coercion Crowdsourcing

    Yanai Elazar, Victoria Basmov, Shauli Ravfogel, Yoav Goldberg, Reut Tsarfaty • EMNLP • Insights from Negative Results in NLP Workshop • 2020
    Crowdsourcing has eased and scaled up the collection of linguistic annotation in recent years. In this work, we follow known methodologies of collecting labeled data for the complement coercion phenomenon. These are constructions with an implied action -- e.g…
  • A Novel Challenge Set for Hebrew Morphological Disambiguation and Diacritics Restoration

    Avi Shmidman, Joshua Guedalia, Shaltiel Shmidman, Moshe Koppel, Reut Tsarfaty • Findings of EMNLP • 2020
    One of the primary tasks of morphological parsers is the disambiguation of homographs. Particularly difficult are cases of unbalanced ambiguity, where one of the possible analyses is far more frequent than the others. In such cases, there may not exist…
  • A Simple and Effective Model for Answering Multi-span Questions

    Elad Segal, Avia Efrat, Mor Shoham, Amir Globerson, Jonathan Berant • EMNLP • 2020
    Models for reading comprehension (RC) commonly restrict their output space to the set of all single contiguous spans from the input, in order to alleviate the learning problem and avoid the need for a model that generates text explicitly. However, forcing an…
  • Do Language Embeddings Capture Scales?

    Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, Dan Roth • Findings of EMNLP • BlackboxNLP Workshop • 2020
    Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge. One form of knowledge that has not been studied yet in this context is information about the scalar magnitudes of objects. We show that…
  • Improving Compositional Generalization in Semantic Parsing

    Inbar Oren, Jonathan Herzig, Nitish Gupta, Matt Gardner, Jonathan Berant • Findings of EMNLP • 2020
    Generalization of models to out-of-distribution (OOD) data has captured tremendous attention recently. Specifically, compositional generalization, i.e., whether a model generalizes to new structures built of components observed during training, has sparked…