Papers

  • Spot the Odd Man Out: Exploring the Associative Power of Lexical Resources

    Gabriel Stanovsky, Mark Hopkins • EMNLP 2018. We propose Odd-Man-Out, a novel task which aims to test different properties of word representations. An Odd-Man-Out puzzle is composed of 5 (or more) words, and requires the system to choose the one which does not belong with the others. We show that this…
  • Structured Alignment Networks for Matching Sentences

    Yang Liu, Matt Gardner, Mirella Lapata • EMNLP 2018. Many tasks in natural language processing involve comparing two sentences to compute some notion of relevance, entailment, or similarity. Typically this comparison is done either at the word level or at the sentence level, with no attempt to leverage the…
  • SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference

    Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi • EMNLP 2018. Given a partial description like "she opened the hood of the car," humans can reason about the situation and anticipate what might come next ("then, she examined the engine"). In this paper, we introduce the task of grounded commonsense inference, unifying…
  • Syntactic Scaffolds for Semantic Structures

    Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A. Smith • EMNLP 2018. We introduce the syntactic scaffold, an approach to incorporating syntactic information into semantic tasks. Syntactic scaffolds avoid expensive syntactic processing at runtime, only making use of a treebank during training, through a multitask objective. We…
  • AllenNLP: A Deep Semantic Natural Language Processing Platform

    Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, Luke Zettlemoyer • ACL • NLP OSS Workshop 2018. This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding. AllenNLP is designed to support researchers who want to build novel language understanding models quickly and easily. It is built on top of…
  • Event2Mind: Commonsense Inference on Events, Intents, and Reactions

    Maarten Sap, Hannah Rashkin, Emily Allaway, Noah A. Smith and Yejin Choi • ACL 2018. We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s…
  • Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples

    Vidur Joshi, Matthew Peters, and Mark Hopkins • ACL 2018. We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train…
  • LSTMs Exploit Linguistic Attributes of Data

    Nelson F. Liu, Omer Levy, Roy Schwartz, Chenhao Tan, Noah A. Smith • ACL • RepL4NLP Workshop 2018. While recurrent neural networks have found success in a variety of natural language processing applications, they are general models of sequential data. We investigate how the properties of natural language data affect an LSTM's ability to learn a…
  • Simple and Effective Multi-Paragraph Reading Comprehension

    Christopher Clark, Matt Gardner • ACL 2018. We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well calibrated confidence scores for their results on individual…
  • Ultra-Fine Entity Typing

    Eunsol Choi, Omer Levy, Yejin Choi and Luke Zettlemoyer • ACL 2018. We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to…