Viewing 18 papers from 2017 in Aristo
    • SIGMOD Record 2017
      Niket Tandon, Aparna S. Varde, Gerard de Melo
      There is growing conviction that the future of computing depends on our ability to exploit big data on the Web to enhance intelligent systems. This includes encyclopedic knowledge for factual details, common sense for human-like reasoning and natural language generation for smarter communication…
    • Best Paper Award
      EMNLP 2017
      Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordóñez, Kai-Wei Chang
      Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web. Structured prediction models are used in these tasks to take advantage of correlations between co-occurring labels and visual input but risk inadvertently encoding…
    • ACL 2017
      Tushar Khot, Ashish Sabharwal, and Peter Clark
      While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for…
    • ACL 2017
      Niket Tandon, Gerard de Melo, and Gerhard Weikum
      Despite important progress in the area of intelligent systems, most such systems still lack commonsense knowledge that appears crucial for enabling smarter, more human-like decisions. In this paper, we present a system based on a series of algorithms to distill fine-grained disambiguated…
    • ACL 2017
      Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer
      We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback, and requires minimal intervention. To achieve this, we adapt neural sequence models to map utterances directly to SQL with its…
    • WWW 2017
      Cuong Xuan Chu, Niket Tandon, and Gerhard Weikum
      Knowledge graphs have become a fundamental asset for search engines. A fair amount of user queries seek information on problem-solving tasks such as building a fence or repairing a bicycle. However, knowledge graphs completely lack this kind of how-to knowledge. This paper presents a method for…
    • TACL 2017
      Bhavana Dalvi, Niket Tandon, and Peter Clark
      Our goal is to construct a domain-targeted, high precision knowledge base (KB), containing general (subject,predicate,object) statements about the world, in support of a downstream question-answering (QA) application. Despite recent advances in information extraction (IE) techniques, no suitable…
    • arXiv 2017
      Peter D. Turney
      While open-domain question answering (QA) systems have proven effective for answering simple questions, they struggle with more complex questions. Our goal is to answer more complex questions reliably, without incurring a significant cost in knowledge resource construction to support the QA. One…
    • UAI 2017
      Ashish Sabharwal and Hanie Sedghi
      Large scale machine learning produces massive datasets whose items are often associated with a confidence level and can thus be ranked. However, computing the precision of these resources requires human annotation, which is often prohibitively expensive and is therefore skipped. We consider the…
    • VAST 2017
      Nan-Chen Chen and Been Kim
      Developing sophisticated artificial intelligence (AI) systems requires AI researchers to experiment with different designs and analyze results from evaluations (we refer to this task as evaluation analysis). In this paper, we tackle the challenges of evaluation analysis in the domain of question…
    • EMNLP • Workshop on Noisy User-generated Text 2017
      Johannes Welbl, Nelson F. Liu, and Matt Gardner
      We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large…
    • CoNLL 2017
      Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Dan Roth
      Question answering (QA) systems are easily distracted by irrelevant or redundant words in questions, especially when faced with long or multi-sentence questions in difficult domains. This paper introduces and studies the notion of essential question terms with the goal of improving such QA solvers…
    • CoNLL 2017
      Rebecca Sharp, Mihai Surdeanu, Peter Jansen, Marco A. Valenzuela-Escárcega, Peter Clark, and Michael Hammond
      For many applications of question answering (QA), being able to explain why a given model chose an answer is critical. However, the lack of labeled data for answer justifications makes learning this difficult and expensive. Here we propose an approach that uses answer ranking as distant supervision…
    • CoNLL 2017
      Ivan Vulic, Roy Schwartz, Ari Rappoport, Roi Reichart, and Anna Korhonen
      This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for an automatic selection of class-specific context configurations. We…
    • CoNLL 2017
      Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, Noah A. Smith
      A writer’s style depends not just on personal traits but also on her intent and mental state. In this paper, we show how variants of the same writing task can lead to measurable differences in writing style. We present a case study based on the story cloze task (Mostafazadeh et al., 2016a), where…
    • EMNLP 2017
      Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer
      We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector. The key idea is to directly consider all spans in a document as potential mentions and learn distributions…
    • EMNLP 2017
      Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner
      We present a new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables. Our parser is an encoder-decoder neural network with two key technical innovations: (1) a grammar for the decoder that only generates well-typed logical forms; and (2) an entity…
    • AAAI 2017
      Matt Gardner and Jayant Krishnamurthy
      Traditional semantic parsers map language onto compositional, executable queries in a fixed schema. This mapping allows them to effectively leverage the information contained in large, formal knowledge bases (KBs, e.g., Freebase) to answer questions, but it is also fundamentally limiting…