Papers

Viewing 21-30 of 307 papers
  • Do NLP Models Know Numbers? Probing Numeracy in Embeddings

    Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, Matt Gardner • EMNLP 2019 • The ability to understand and work with numbers (numeracy) is critical for many complex reasoning tasks. Currently, most NLP models treat numbers in text in the same way as other tokens---they embed them as distributed vectors. Is this enough to capture numeracy? We begin by investigating the… more
  • Low-Resource Parsing with Crosslingual Contextualized Representations

    Phoebe Mulcaire, Jungo Kasai, Noah A. Smith • CoNLL 2019 • Despite advances in dependency parsing, languages with small treebanks still present challenges. We assess recent approaches to multilingual contextual word representations (CWRs), and compare them for crosslingual transfer from a language with a large treebank to a language with a small or… more
  • On the Limits of Learning to Actively Learn Semantic Representations

    Omri Koshorek, Gabriel Stanovsky, Yichu Zhou, Vivek Srikumar, Jonathan Berant • CoNLL 2019 • One of the goals of natural language understanding is to develop models that map sentences into meaning representations. However, training such models requires expensive annotation of complex structures, which hinders their adoption. Learning to actively-learn (LTAL) is a recent paradigm for… more
  • Universal Adversarial Triggers for Attacking and Analyzing NLP

    Eric Wallace, Shi Feng, Nikhil Kandpal, Matthew Gardner, Sameer Singh • EMNLP 2019 • Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. We propose a… more
  • Y'all should read this! Identifying Plurality in Second-Person Personal Pronouns in English Texts

    Gabriel Stanovsky, Ronen Tamari • EMNLP • W-NUT 2019 • Distinguishing between singular and plural "you" in English is a challenging task which has potential for downstream applications, such as machine translation or coreference resolution. While formal written English does not distinguish between these cases, other languages (such as Spanish), as well… more
  • A Discrete Hard EM Approach for Weakly Supervised Question Answering

    Sewon Min, Danqi Chen, Hannaneh Hajishirzi, Luke Zettlemoyer • EMNLP 2019 • Many question answering (QA) tasks only provide weak supervision for how the answer should be computed. For example, TriviaQA answers are entities that can be mentioned multiple times in supporting documents, while DROP answers can be computed by deriving many different equations from numbers in… more
  • BERT for Coreference Resolution: Baselines and Analysis

    Mandar Joshi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer • EMNLP 2019 • We apply BERT to coreference resolution, achieving strong improvements on the OntoNotes (+3.9 F1) and GAP (+11.5 F1) benchmarks. A qualitative analysis of model predictions indicates that, compared to ELMo and BERT-base, BERT-large is particularly better at distinguishing between related but… more
  • BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle

    Peter West, Ari Holtzman, Jan Buys, Yejin Choi • EMNLP 2019 • The principle of the Information Bottleneck (Tishby et al. 1999) is to produce a summary of information X optimized to predict some other relevant information Y. In this paper, we propose a novel approach to unsupervised sentence summarization by mapping the Information Bottleneck principle to a… more
  • COSMOS QA: Machine Reading Comprehension with Contextual Commonsense Reasoning

    Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi • EMNLP 2019 • Understanding narratives requires reading between the lines, which in turn, requires interpreting the likely causes and effects of events, even when they are not mentioned explicitly. In this paper, we introduce Cosmos QA, a large-scale dataset of 35,600 problems that require commonsense-based… more
  • Counterfactual Story Reasoning and Generation

    Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, Yejin Choi • EMNLP 2019 • Counterfactual reasoning requires predicting how alternative events, contrary to what actually happened, might have resulted in different outcomes. Despite being considered a necessary component of AI-complete systems, few resources have been developed for evaluating counterfactual reasoning in… more