Papers

  • Do Neural Language Representations Learn Physical Commonsense?

    Maxwell Forbes, Ari Holtzman, Yejin Choi • CogSci • 2019. Humans understand language based on the rich background knowledge about how the physical world works, which in turn allows us to reason about the physical world through language. In addition to the properties of objects (e.g., boats require fuel) and their…
  • To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks

    Matthew E. Peters, Sebastian Ruder, Noah A. Smith • ACL • RepL4NLP • 2019. While most previous work has focused on different pretraining objectives and architectures for transfer learning, we ask how to best adapt the pretrained model to a given target task. We focus on the two most common forms of adaptation, feature extraction… (a hedged sketch contrasting the two adaptation modes appears after this list)
  • Evaluating Gender Bias in Machine Translation

    Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer • ACL • 2019. We present the first challenge set and evaluation protocol for the analysis of gender bias in machine translation (MT). Our approach uses two recent coreference resolution datasets composed of English sentences which cast participants into non-stereotypical…
  • MultiQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension

    Alon Talmor, Jonathan Berant • ACL • 2019. A large number of reading comprehension (RC) datasets have been created recently, but little analysis has been done on whether they generalize to one another, and the extent to which existing datasets can be leveraged for improving performance on new ones. In…
  • Representing Schema Structure with Graph Neural Networks for Text-to-SQL Parsing

    Ben Bogin, Jonathan Berant, Matt Gardner • ACL • 2019. Research on parsing language to SQL has largely ignored the structure of the database (DB) schema, either because the DB was very simple, or because it was observed at both training and test time. In SPIDER, a recently-released text-to-SQL dataset, new and…
  • ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing

    Mark Neumann, Daniel King, Iz Beltagy, Waleed Ammar • ACL • BioNLP Workshop • 2019. Despite recent advances in natural language processing, many statistical models for processing text perform extremely poorly under domain shift. Processing biomedical and clinical text is a critically important application area of natural language processing… (a minimal usage sketch appears after this list)
  • Adaptive Hashing for Model Counting

    Jonathan Kuck, Tri Dao, Yuanrun Zheng, Burak Bartan, Ashish Sabharwal, Stefano Ermon • UAI • 2019. Randomized hashing algorithms have seen recent success in providing bounds on the model count of a propositional formula. These methods repeatedly check the satisfiability of a formula subject to increasingly stringent random constraints. Key to these…
  • CEDR: Contextualized Embeddings for Document Ranking

    Sean MacAvaney, Andrew Yates, Arman Cohan, Nazli Goharian • SIGIR • 2019. Although considerable attention has been given to neural ranking architectures recently, far less attention has been paid to the term representations that are used as input to these models. In this work, we investigate how two pretrained contextualized… (an illustrative sketch appears after this list)
  • Ontology-Aware Clinical Abstractive Summarization

    Sean MacAvaney, Sajad Sotudeh, Arman Cohan, Nazli Goharian, Ish Talati, Ross W. Filice • SIGIR • 2019. Automatically generating accurate summaries from clinical reports could save a clinician's time, improve summary coverage, and reduce errors. We propose a sequence-to-sequence abstractive summarization model augmented with domain-specific ontological…
  • Exploiting Explicit Paths for Multi-hop Reading Comprehension

    Souvik Kundu, Tushar Khot, Ashish Sabharwal, Peter Clark • ACL • 2019. We propose a novel, path-based reasoning approach for the multi-hop reading comprehension task where a system needs to combine facts from multiple passages to answer a question. Although inspired by multi-hop reasoning over knowledge graphs, our proposed…
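
For the "To Tune or Not to Tune?" entry above: the snippet below is a hedged, illustrative sketch of the two adaptation modes the paper compares, feature extraction (frozen pretrained encoder, only a task head is trained) versus fine-tuning (all weights updated). The model name, task head, and training step are assumptions for illustration, not the paper's exact setup.

```python
# Illustrative sketch (not the paper's exact setup): feature extraction vs.
# fine-tuning of a pretrained encoder. Model name and task head are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

encoder = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
classifier = nn.Linear(encoder.config.hidden_size, 2)  # simple task head

FEATURE_EXTRACTION = True  # flip to False for full fine-tuning

if FEATURE_EXTRACTION:
    for p in encoder.parameters():
        p.requires_grad = False              # freeze the pretrained weights
    params = list(classifier.parameters())   # only the head is trained
else:
    params = list(encoder.parameters()) + list(classifier.parameters())

optimizer = torch.optim.Adam(params, lr=2e-5)

# One toy training step.
batch = tokenizer(["a toy example"], return_tensors="pt", padding=True)
hidden = encoder(**batch).last_hidden_state[:, 0]       # [CLS] representation
loss = nn.functional.cross_entropy(classifier(hidden), torch.tensor([1]))
loss.backward()
optimizer.step()
```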
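
For the ScispaCy entry above: a minimal usage sketch, assuming the scispacy package and the en_core_sci_sm model package are already installed (the models are distributed separately from the project's release page).

```python
# Minimal ScispaCy usage sketch: assumes `scispacy` and the `en_core_sci_sm`
# model package are installed.
import spacy

nlp = spacy.load("en_core_sci_sm")
doc = nlp("Spinal and bulbar muscular atrophy (SBMA) is an inherited motor neuron disease "
          "caused by the expansion of a polyglutamine tract in the androgen receptor.")

print([token.text for token in doc][:10])              # biomedical-aware tokenization
print([(ent.text, ent.label_) for ent in doc.ents])    # entity mentions
print([(tok.text, tok.tag_) for tok in doc][:5])       # part-of-speech tags
```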
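
For the CEDR entry above: an illustrative sketch (not the paper's architecture) of how contextualized token embeddings can be turned into a query-document similarity matrix, the kind of term-level signal that interaction-based neural rankers consume. The model name and example texts are assumptions.

```python
# Illustrative sketch (not CEDR's exact model): build a query-document
# cosine-similarity matrix from contextualized token embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def token_embeddings(text: str) -> torch.Tensor:
    """Return a contextualized embedding for each token in `text`."""
    batch = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state.squeeze(0)   # (num_tokens, hidden_size)

query_emb = token_embeddings("treatment for bacterial pneumonia")
doc_emb = token_embeddings("Antibiotics are the standard treatment for bacterial pneumonia.")

# Cosine similarities: one row per query token, one column per document token.
q = torch.nn.functional.normalize(query_emb, dim=-1)
d = torch.nn.functional.normalize(doc_emb, dim=-1)
sim_matrix = q @ d.T
print(sim_matrix.shape)
```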