Papers

  • AllenNLP: A Deep Semantic Natural Language Processing Platform

    Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, Luke Zettlemoyer • ACL • NLP OSS Workshop • 2018
    This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding. AllenNLP is designed to support researchers who want to build novel language understanding models quickly and easily. It is built on top of…
  • Event2Mind: Commonsense Inference on Events, Intents, and Reactions

    Maarten Sap, Hannah Rashkin, Emily Allaway, Noah A. Smith, Yejin Choi • ACL • 2018
    We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s…
  • Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples

    Vidur Joshi, Matthew Peters, Mark Hopkins • ACL • 2018
    We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train…
  • LSTMs Exploit Linguistic Attributes of Data

    Nelson F. Liu, Omer Levy, Roy Schwartz, Chenhao Tan, Noah A. Smith • ACL • RepL4NLP Workshop • 2018
    While recurrent neural networks have found success in a variety of natural language processing applications, they are general models of sequential data. We investigate how the properties of natural language data affect an LSTM's ability to learn a…
  • Simple and Effective Multi-Paragraph Reading Comprehension

    Christopher Clark, Matt Gardner • ACL • 2018
    We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well calibrated confidence scores for their results on individual…
  • Ultra-Fine Entity Typing

    Eunsol Choi, Omer Levy, Yejin Choi, Luke Zettlemoyer • ACL • 2018
    We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to…
  • A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications

    Dongyeop Kang, Waleed Ammar, Bhavana Dalvi Mishra, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard Hovy, Roy Schwartz • NAACL-HLT • 2018
    Peer reviewing is a central component in the scientific publishing process. We present the first public dataset of scientific peer reviews available for research purposes (PeerRead v1), providing an opportunity to study this important artifact. The dataset…
  • Annotation Artifacts in Natural Language Inference Data

    Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Sam Bowman, Noah A. Smith • NAACL • 2018
    Large-scale datasets for natural language inference are created by presenting crowd workers with a sentence (premise), and asking them to generate three new sentences (hypotheses) that it entails, contradicts, or is logically neutral with respect to. We show…
  • Deep Contextualized Word Representations

    Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer • NAACL • 2018
    We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are…
  • SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines

    Roy Schwartz, Sam Thomson, Noah A. Smith • ACL • 2018
    Recurrent and convolutional neural networks comprise two distinct families of models that have proven to be useful for encoding natural language utterances. In this paper we present SoPa, a new model that aims to bridge these two approaches. SoPa combines…