Viewing 7 papers from 2018 in AI2 Israel
    • NeurIPS 2018
      Roei Herzig, Moshiko Raboh, Gal Chechik, Jonathan Berant, Amir Globerson
      Machine understanding of complex images is a key goal of artificial intelligence. One challenge underlying this task is that visual scenes contain multiple inter-related objects, and that global context plays an important role in interpreting the scene. A natural modeling framework for capturing…
    • NeurIPS 2018
      Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc Le, Ni Lao
      This paper presents Memory Augmented Policy Optimization (MAPO): a novel policy optimization formulation that incorporates a memory buffer of promising trajectories to reduce the variance of policy gradient estimates for deterministic environments with discrete actions. The formulation expresses…
    • EMNLP 2018 • Workshop: Analyzing and Interpreting Neural Networks for NLP
      Alon Jacovi, Oren Sar Shalom, Yoav Goldberg
      We present an analysis into the inner workings of Convolutional Neural Networks (CNNs) for processing text. CNNs used for computer vision can be interpreted by projecting filters into image space, but for discrete sequence inputs CNNs remain a mystery. We aim to understand the method by which the…
    • EMNLP 2018
      Jonathan Herzig, Jonathan Berant
      Building a semantic parser quickly in a new domain is a fundamental challenge for conversational interfaces, as current semantic parsers require expensive supervision and lack the ability to generalize to new domains. In this paper, we introduce a zero-shot approach to semantic parsing that can…
    • EMNLP 2018 • Workshop: Analyzing and Interpreting Neural Networks for NLP
      Shauli Ravfogel, Francis M. Tyers, Yoav Goldberg
      Sequential neural network models are powerful tools in a variety of Natural Language Processing (NLP) tasks. The sequential nature of these models raises the questions: to what extent can these models implicitly learn hierarchical structures typical to human language, and what kind of grammatical…
    • EMNLP 2018
      Yanai Elazar, Yoav Goldberg
      Recent advances in Representation Learning and Adversarial Training seem to succeed in removing unwanted features from the learned representation. We show that demographic information of authors is encoded in—and can be recovered from—the intermediate representations learned by text-based neural…
    • EMNLP 2018
      Asaf Amrami, Yoav Goldberg
      An established method for Word Sense Induction (WSI) uses a language model to predict probable substitutes for target words, and induces senses by clustering the resulting substitute vectors. We replace the n-gram-based language model (LM) with a recurrent one. Beyond being more accurate, the use…