    • NeurIPS 2018
      Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc Le, Ni Lao
      This paper presents Memory Augmented Policy Optimization (MAPO): a novel policy optimization formulation that incorporates a memory buffer of promising trajectories to reduce the variance of policy gradient estimates for deterministic environments with discrete actions. The formulation expresses…
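      A minimal sketch of the MAPO-style estimator on a toy deterministic environment, assuming a tabular softmax policy over discrete actions (the environment, rewards, and learning rate are placeholders, not the paper's program-synthesis setup): the expectation over the memory buffer is computed exactly, and only the remaining probability mass is estimated by sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic environment: 10 discrete "trajectories" (arms),
# two of which earn reward 1.  This is a placeholder, not the paper's
# semantic-parsing environment.
num_actions = 10
reward = np.zeros(num_actions)
reward[[3, 7]] = 1.0

theta = np.zeros(num_actions)   # logits of a tabular softmax policy
memory = set()                  # buffer of high-reward trajectories

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_log_pi(pi, a):
    g = -pi                     # d/d theta_j log pi(a) = 1[j=a] - pi_j
    g[a] += 1.0
    return g

for step in range(500):
    pi = softmax(theta)

    # Explore on-policy; remember any trajectory that earned a reward.
    a = rng.choice(num_actions, p=pi)
    if reward[a] > 0:
        memory.add(int(a))

    grad = np.zeros_like(theta)

    # Exact expectation over the buffer: no sampling variance here.
    p_buf = sum(pi[b] for b in memory)
    for b in memory:
        grad += pi[b] * reward[b] * grad_log_pi(pi, b)

    # Sampled estimate outside the buffer, weighted by the leftover mass.
    outside = [x for x in range(num_actions) if x not in memory]
    if outside:
        p_out = pi[outside] / pi[outside].sum()
        s = rng.choice(outside, p=p_out)
        grad += (1.0 - p_buf) * reward[s] * grad_log_pi(pi, s)

    theta += 0.5 * grad

print("final policy:", np.round(softmax(theta), 3))
```

      Once both rewarded trajectories are in the buffer, the sampled term above is identically zero in this toy, so the update becomes exact; that is the variance reduction the abstract refers to.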
    • EMNLP 2018 • Workshop: Analyzing and Interpreting Neural Networks for NLP
      Alon Jacovi, Oren Sar Shalom, Yoav Goldberg
      We present an analysis into the inner workings of Convolutional Neural Networks (CNNs) for processing text. CNNs used for computer vision can be interpreted by projecting filters into image space, but for discrete sequence inputs CNNs remain a mystery. We aim to understand the method by which the…
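      One common probe in this line of work is to rank a corpus's n-grams by how strongly they activate a given convolutional filter. A toy sketch with random stand-in embeddings and a single trigram filter (vocabulary, corpus, and dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: random stand-in embeddings and one trigram
# convolution filter.  The paper's analysis goes further; this only
# shows the basic probe of ranking n-grams by filter activation.
vocab = ["the", "movie", "was", "great", "terrible", "plot", "acting", "boring"]
emb_dim, width = 8, 3
E = {w: rng.normal(size=emb_dim) for w in vocab}
filt = rng.normal(size=(width, emb_dim))        # one convolutional filter

def activation(ngram):
    """Filter response: dot product with the stacked n-gram embeddings."""
    return float((filt * np.stack([E[w] for w in ngram])).sum())

corpus = "the movie was great the plot was boring the acting was terrible".split()
ngrams = {tuple(corpus[i:i + width]) for i in range(len(corpus) - width + 1)}

# The n-grams that most strongly activate the filter "explain" it.
for ng in sorted(ngrams, key=activation, reverse=True)[:3]:
    print(f"{activation(ng):+.2f}  {' '.join(ng)}")
```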
    • EMNLP 2018
      Yang Liu, Matt Gardner, Mirella Lapata
      Many tasks in natural language processing involve comparing two sentences to compute some notion of relevance, entailment, or similarity. Typically this comparison is done either at the word level or at the sentence level, with no attempt to leverage the inherent structure of the sentence. When…
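      A minimal sketch of the word-level comparison the paper contrasts with its structured approach, assuming random stand-in word vectors: each word in one sentence is softly aligned to the other sentence via attention, and the aligned context serves as a comparison feature.

```python
import numpy as np

rng = np.random.default_rng(5)

# Random stand-in vectors for pre-trained word embeddings.
premise = ["a", "man", "plays", "guitar"]
hypothesis = ["someone", "makes", "music"]
E = {w: rng.normal(size=32) for w in premise + hypothesis}
U = np.stack([E[w] for w in premise])          # (4, 32)
V = np.stack([E[w] for w in hypothesis])       # (3, 32)

scores = U @ V.T                               # word-pair match scores
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

aligned = attn @ V                             # soft-aligned context per word
feature = np.linalg.norm(U - aligned, axis=1)  # a simple comparison feature
for w, row, d in zip(premise, attn, feature):
    print(f"{w:>7} -> {hypothesis[int(row.argmax())]:<8} (dist {d:.2f})")
```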
    • EMNLP 2018
      Gabriel Stanovsky, Mark Hopkins
      We propose Odd-Man-Out, a novel task which aims to test different properties of word representations. An Odd-Man-Out puzzle is composed of 5 (or more) words, and requires the system to choose the one which does not belong with the others. We show that this simple setup is capable of teasing out…
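      A natural baseline for such puzzles is to pick the word with the lowest average similarity to the rest. A toy sketch with synthetic vectors standing in for pre-trained embeddings such as word2vec or GloVe:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical embeddings; in practice these would come from
# pre-trained vectors.  The fruit words share a direction so the
# toy example behaves sensibly.
words = ["apple", "banana", "cherry", "grape", "laptop"]
E = {w: rng.normal(size=50) for w in words}
fruit_axis = rng.normal(size=50)
for w in words[:4]:
    E[w] += 3 * fruit_axis

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def odd_man_out(puzzle):
    """Return the word least similar, on average, to the rest."""
    def avg_sim(w):
        return np.mean([cosine(E[w], E[o]) for o in puzzle if o != w])
    return min(puzzle, key=avg_sim)

print(odd_man_out(words))   # expected: "laptop"
```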
    • EMNLP 2018
      Jonathan Herzig, Jonathan Berant
      Building a semantic parser quickly in a new domain is a fundamental challenge for conversational interfaces, as current semantic parsers require expensive supervision and lack the ability to generalize to new domains. In this paper, we introduce a zero-shot approach to semantic parsing that can…
    • EMNLP 2018 • Workshop: Analyzing and Interpreting Neural Networks for NLP
      Shauli Ravfogel, Francis M. Tyers, Yoav Goldberg
      Sequential neural network models are powerful tools in a variety of Natural Language Processing (NLP) tasks. The sequential nature of these models raises the questions: to what extent can these models implicitly learn hierarchical structures typical of human language, and what kind of grammatical…
    • EMNLP 2018
      Yanai Elazar, Yoav Goldberg
      Recent advances in Representation Learning and Adversarial Training seem to succeed in removing unwanted features from the learned representation. We show that demographic information of authors is encoded in—and can be recovered from—the intermediate representations learned by text-based neural…
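      The recovery step can be illustrated with a probing classifier: train a simple model to predict the protected attribute from the intermediate representations alone. A sketch on synthetic data (the representations here are fabricated, with one dimension deliberately leaking the attribute):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Synthetic stand-in for encoder states: 64-dim representations of
# 1000 "documents", weakly encoding a binary author attribute.
n, d = 1000, 64
attr = rng.integers(0, 2, size=n)       # protected attribute
H = rng.normal(size=(n, d))             # "intermediate representations"
H[:, 0] += 0.8 * attr                   # the leak

# A probe trained on half the data tries to recover the attribute
# from the representations alone.
probe = LogisticRegression(max_iter=1000).fit(H[:500], attr[:500])
print(f"probe accuracy: {probe.score(H[500:], attr[500:]):.2f} "
      "(0.50 would mean nothing recoverable)")
```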
    • EMNLP 2018
      Asaf Amrami, Yoav Goldberg
      An established method for Word Sense Induction (WSI) uses a language model to predict probable substitutes for target words, and induces senses by clustering the resulting substitute vectors. We replace the n-gram-based language model (LM) with a recurrent one. Beyond being more accurate, the use…
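      A minimal sketch of the substitute-then-cluster recipe, with hand-supplied substitute lists standing in for the language model's predictions (the paper instead queries a recurrent biLM for each occurrence):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hand-supplied substitutes stand in for an LM's top predictions for
# the target word "bank" in each context.
occurrences = {
    "deposit money at the bank":    ["lender", "branch", "institution"],
    "the bank raised its rates":    ["lender", "institution", "firm"],
    "sat on the bank of the river": ["shore", "edge", "riverside"],
    "fish along the river bank":    ["shore", "riverside", "edge"],
}

# Represent each occurrence as a bag-of-substitutes vector, then cluster.
vocab = sorted({s for subs in occurrences.values() for s in subs})
X = np.array([[subs.count(w) for w in vocab]
              for subs in occurrences.values()], dtype=float)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for sentence, sense in zip(occurrences, labels):
    print(f"sense {sense}: {sentence}")
```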
    • EMNLP 2018
      Todor Mihaylov, Peter Clark, Tushar Khot, Ashish Sabharwal
      We present a new kind of question answering dataset, OpenBookQA, modeled after open book exams for assessing human understanding of a subject. The open book that comes with our questions is a set of 1329 elementary level science facts. Roughly 6000 questions probe an understanding of these facts…
    • EMNLP 2018
      Niket Tandon, Bhavana Dalvi Mishra, Joel Grus, Wen-tau Yih, Antoine Bosselut, Peter Clark
      Comprehending procedural text, e.g., a paragraph describing photosynthesis, requires modeling actions and the state changes they produce, so that questions about entities at different timepoints can be answered. Although several recent systems have shown impressive progress in this task, their…
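      The underlying target can be pictured as a tiny state-tracking structure: record each entity's state after every step, then answer timepoint queries by lookup. A hypothetical sketch of the task format, not the paper's model:

```python
# Each step records the entities whose location changed (toy format).
steps = [
    {"water": "root"},                   # step 1: water absorbed by the root
    {"water": "leaf"},                   # step 2: water travels to the leaf
    {"water": "gone", "sugar": "leaf"},  # step 3: photosynthesis occurs
]

def state_at(entity, t):
    """Location of `entity` after step t (1-indexed); None if unknown."""
    loc = None
    for changes in steps[:t]:
        loc = changes.get(entity, loc)
    return loc

print(state_at("water", 2))   # leaf
print(state_at("sugar", 3))   # leaf
```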
    • EMNLP 2018
      Ge Gao, Eunsol Choi, Yejin Choi and Luke Zettlemoyer
      We present end-to-end neural models for detecting metaphorical word use in context. We show that relatively standard BiLSTM models which operate on complete sentences work well in this setting, in comparison to previous work that used more restricted forms of linguistic context. These models…
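      A minimal sketch of the kind of sentence-level BiLSTM tagger described above, classifying every token as metaphorical or literal (vocabulary size, dimensions, and data are toy placeholders):

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=50, hidden=64, n_tags=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, tokens):                 # (batch, seq_len)
        h, _ = self.lstm(self.emb(tokens))     # (batch, seq_len, 2*hidden)
        return self.out(h)                     # per-token tag logits

model = BiLSTMTagger()
tokens = torch.randint(0, 1000, (4, 12))       # 4 toy sentences
tags = torch.randint(0, 2, (4, 12))            # 1 = metaphorical
logits = model(tokens)
loss = nn.functional.cross_entropy(logits.reshape(-1, 2), tags.reshape(-1))
loss.backward()
print(logits.shape, float(loss))
```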
    • EMNLP 2018
      Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang and Luke Zettlemoyer
      We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2…
    • EMNLP 2018
      Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi
      Given a partial description like "she opened the hood of the car," humans can reason about the situation and anticipate what might come next ("then, she examined the engine"). In this paper, we introduce the task of grounded commonsense inference, unifying natural language inference and commonsense…
    • EMNLP 2018
      Dipendra Misra, Ming-Wei Chang, Xiaodong He, Wen-tau Yih
      Semantic parsing from denotations faces two key challenges in model training: (1) searching for good candidate semantic parses given only the denotations (e.g., answers), and (2) choosing the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping…
    • EMNLP 2018
      Dongyeop Kang, Tushar Khot, Ashish Sabharwal and Peter Clark
      Most textual entailment models focus on lexical gaps between the premise text and the hypothesis, but rarely on knowledge gaps. We focus on filling these knowledge gaps in the Science Entailment task, by leveraging an external structured knowledge base (KB) of science facts. Our new architecture…
    • EMNLP 2018
      Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith
      Despite the tremendous empirical success of neural models in natural language processing, many of them lack the strong intuitions that accompany classical machine learning approaches. Recently, connections have been shown between convolutional neural networks (CNNs) and weighted finite state…
    • EMNLP 2018
      Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A. Smith
      We introduce the syntactic scaffold, an approach to incorporating syntactic information into semantic tasks. Syntactic scaffolds avoid expensive syntactic processing at runtime, only making use of a treebank during training, through a multitask objective. We improve over strong baselines on…
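      The scaffold idea reduces to a shared encoder with two heads, where the syntactic head (and the treebank labels it needs) contributes only to the training loss. A toy sketch with placeholder dimensions, label counts, and random data:

```python
import torch
import torch.nn as nn

# One shared encoder; a primary semantic head; an auxiliary syntactic
# head used only while training.
shared = nn.LSTM(50, 64, batch_first=True, bidirectional=True)
semantic_head = nn.Linear(128, 10)     # e.g., span-label classes
syntax_head = nn.Linear(128, 30)       # e.g., constituent labels (scaffold)

x = torch.randn(8, 20, 50)             # a toy batch of embedded tokens
sem_y = torch.randint(0, 10, (8, 20))
syn_y = torch.randint(0, 30, (8, 20))  # from a treebank, training only

h, _ = shared(x)
loss = nn.functional.cross_entropy(semantic_head(h).reshape(-1, 10),
                                   sem_y.reshape(-1))
loss = loss + 0.5 * nn.functional.cross_entropy(      # scaffold term
    syntax_head(h).reshape(-1, 30), syn_y.reshape(-1))
loss.backward()
# At test time only `shared` and `semantic_head` run: no parser needed.
```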
    • EMNLP 2018
      Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, Jaime Carbonell
      For languages with no annotated resources, unsupervised transfer of natural language processing models such as named-entity recognition (NER) from resource-rich languages would be an appealing capability. However, differences in words and word order across languages make it a challenging problem…
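      One ingredient of such transfer is translating the annotated source data word by word through a shared bilingual embedding space, keeping the NER tags. A toy sketch with synthetic aligned embeddings (the real setting also has to handle word-order differences):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy aligned embedding spaces standing in for bilingual word embeddings:
# source vectors land near their translations in the shared space.
dim = 16
tgt_vocab = ["Berlin", "besuchte", "Obama", "gestern"]
src2tgt_gold = {"Berlin": "Berlin", "visited": "besuchte",
                "Obama": "Obama", "yesterday": "gestern"}
tgt_emb = {w: rng.normal(size=dim) for w in tgt_vocab}
src_emb = {s: tgt_emb[t] + 0.1 * rng.normal(size=dim)
           for s, t in src2tgt_gold.items()}

def translate(word):
    """Nearest target-language neighbour in the shared space."""
    v = src_emb[word]
    return max(tgt_vocab, key=lambda w: v @ tgt_emb[w] /
               (np.linalg.norm(v) * np.linalg.norm(tgt_emb[w])))

# Source training sentence with tags carried over to the translation.
tagged = [("Obama", "B-PER"), ("visited", "O"), ("Berlin", "B-LOC")]
print([(translate(w), tag) for w, tag in tagged])
```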
    • EMNLP 2018
      Matthew Peters, Mark Neumann, Wen-tau Yih, and Luke Zettlemoyer
      Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks. However, many questions remain as to how and why these models are so effective. In this…
    • EMNLP 2018
      Michael Petrochuk, Luke Zettlemoyer
      The SimpleQuestions dataset is one of the most commonly used benchmarks for studying single-relation factoid questions. In this paper, we present new evidence that this benchmark can be nearly solved by standard methods. First we show that ambiguity in the data bounds performance on this benchmark…