Papers

  • SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines

    Roy Schwartz, Sam Thomson, and Noah A. Smith. ACL 2018. Recurrent and convolutional neural networks comprise two distinct families of models that have proven to be useful for encoding natural language utterances. In this paper we present SoPa, a new model that aims to bridge these two approaches. SoPa combines…
  • Dynamic Entity Representations in Neural Language Models

    Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. EMNLP 2017. Understanding a long document requires tracking how entities are introduced and evolve over time. We present a new type of language model, EntityNLM, that can explicitly model entities, dynamically update their representations, and contextually generate their…
  • Learning a Neural Semantic Parser from User Feedback

    Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. ACL 2017. We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback, and requires minimal intervention. To achieve this, we adapt neural sequence models to…
  • Semi-supervised sequence tagging with bidirectional language models

    Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. ACL 2017. Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive…
  • Deep Semantic Role Labeling: What Works and What's Next

    Luheng He, Kenton Lee, Mike Lewis, and Luke S. Zettlemoyer. ACL 2017. We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding…
  • End-to-end Neural Coreference Resolution

    Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. EMNLP 2017. We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector. The key idea is to directly consider all spans in a document as…
  • Neural Semantic Parsing with Type Constraints for Semi-Structured Tables

    Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. EMNLP 2017. We present a new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables. Our parser is an encoder-decoder neural network with two key technical innovations: (1) a grammar for the decoder that only generates well-typed…
  • Parsing Algebraic Word Problems into Equations

    Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. TACL 2015. This paper formalizes the problem of solving multi-sentence algebraic word problems as that of generating and scoring equation trees. We use integer linear programming to generate equation trees and score their likelihood by learning local and global…
  • Learning to Solve Arithmetic Word Problems with Verb Categorization

    Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. EMNLP 2014. This paper presents a novel approach to learning to solve simple arithmetic word problems. Our system, ARIS, analyzes each of the sentences in the problem statement to identify the relevant variables and their values. ARIS then maps this information into an…