Papers

  • It's not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT

    Hila Gonen, Shauli Ravfogel, Yanai Elazar, Yoav Goldberg. EMNLP • BlackboxNLP Workshop 2020. Recent works have demonstrated that multilingual BERT (mBERT) learns rich cross-lingual representations that allow for transfer across languages. We study the word-level translation information embedded in mBERT and present two simple methods that expose…
  • Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation

    Atticus Geiger, Kyle Richardson, Christopher Potts. EMNLP • BlackboxNLP Workshop 2020. We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic…
  • Unsupervised Distillation of Syntactic Information from Contextualized Word Representations

    Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg. EMNLP • BlackboxNLP Workshop 2020. Contextualized word representations, such as ELMo and BERT, were shown to perform well on various semantic and syntactic tasks. In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations…
  • Document-Level Definition Detection in Scholarly Documents: Existing Models, Error Analyses, and Future Directions

    Dongyeop Kang, Andrew Head, Risham Sidhu, Kyle Lo, Daniel S. Weld, Marti A. Hearst. EMNLP • SDP Workshop 2020. The task of definition detection is important for scholarly papers, because papers often make use of technical terminology that may be unfamiliar to readers. Despite prior work on definition detection, current approaches are far from being accurate enough to…
  • PySBD: Pragmatic Sentence Boundary Disambiguation

    Nipun Sadvilkar, M. Neumann. EMNLP • NLP-OSS Workshop 2020. In this paper, we present a rule-based sentence boundary disambiguation Python package that works out-of-the-box for 22 languages. We aim to provide a realistic segmenter which can provide logical sentences even when the format and domain of the input text is… (A minimal usage sketch appears after this list.)
  • The Extraordinary Failure of Complement Coercion Crowdsourcing

    Yanai Elazar, Victoria Basmov, Shauli Ravfogel, Yoav Goldberg, Reut Tsarfaty. EMNLP • Insights from Negative Results in NLP Workshop 2020. Crowdsourcing has eased and scaled up the collection of linguistic annotation in recent years. In this work, we follow known methodologies of collecting labeled data for the complement coercion phenomenon. These are constructions with an implied action -- e.g…
  • A Dataset for Tracking Entities in Open Domain Procedural Text

    Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi Mishra, Dheeraj Rajagopal, Peter Clark, Michal Guerquin, Kyle Richardson, Eduard Hovy. EMNLP 2020. We present the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. For example, in a text describing fog removal using potatoes, a car window may transition between being foggy, sticky…
  • A Novel Challenge Set for Hebrew Morphological Disambiguation and Diacritics Restoration

    Avi Shmidman, Joshua Guedalia, Shaltiel Shmidman, Moshe Koppel, Reut Tsarfaty. Findings of EMNLP 2020. One of the primary tasks of morphological parsers is the disambiguation of homographs. Particularly difficult are cases of unbalanced ambiguity, where one of the possible analyses is far more frequent than the others. In such cases, there may not exist…
  • A Simple and Effective Model for Answering Multi-span Questions

    Elad Segal, Avia Efrat, Mor Shoham, Amir Globerson, Jonathan Berant. EMNLP 2020. Models for reading comprehension (RC) commonly restrict their output space to the set of all single contiguous spans from the input, in order to alleviate the learning problem and avoid the need for a model that generates text explicitly. However, forcing an… (A hedged multi-span decoding sketch appears after this list.)
  • A Simple Yet Strong Pipeline for HotpotQA

    Dirk Groeneveld, Tushar Khot, Mausam, Ashish Sabharwal. EMNLP 2020. State-of-the-art models for multi-hop question answering typically augment large-scale language models like BERT with additional, intuitively useful capabilities such as named entity recognition, graph-based reasoning, and question decomposition. However…
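Since the PySBD entry describes a released Python package, a minimal usage sketch may help readers. The Segmenter class and segment method below reflect the published pysbd library; the example sentence and the clean flag value are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the pysbd package described in the PySBD entry above.
# Assumes the package is installed (pip install pysbd); the input text is illustrative.
import pysbd

seg = pysbd.Segmenter(language="en", clean=False)
text = "My name is Jonas E. Smith. Please turn to p. 55."
# Splits into two sentences despite the abbreviation-style periods.
print(seg.segment(text))
```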
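The multi-span QA entry contrasts single-span extraction with models that return several answer spans. As a hedged illustration only (the excerpt does not spell out the paper's actual model), the sketch below shows one common way to recover multiple spans from per-token BIO tags; the tagging formulation, tag names, and toy example are assumptions.

```python
# Hedged illustration: decoding multiple answer spans from per-token BIO tags.
# The BIO formulation and the example inputs are assumptions for illustration only.
from typing import List, Tuple


def decode_bio_spans(tokens: List[str], tags: List[str]) -> List[str]:
    """Collect every maximal B/I run of tags as one answer span."""
    spans: List[Tuple[int, int]] = []
    start = None
    for i, tag in enumerate(tags):
        if tag == "B":                       # a new span starts here
            if start is not None:
                spans.append((start, i))
            start = i
        elif tag == "I" and start is not None:
            continue                         # current span keeps growing
        else:                                # "O" (or a stray "I") closes any open span
            if start is not None:
                spans.append((start, i))
                start = None
    if start is not None:
        spans.append((start, len(tags)))
    return [" ".join(tokens[s:e]) for s, e in spans]


# Toy example with two answer spans in one passage.
tokens = ["Paris", "and", "Lyon", "are", "in", "France"]
tags = ["B", "O", "B", "O", "O", "O"]
print(decode_bio_spans(tokens, tags))  # ['Paris', 'Lyon']
```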