Papers

  • Adversarial Removal of Demographic Attributes from Text Data

    Yanai Elazar, Yoav Goldberg • EMNLP • 2018. Recent advances in Representation Learning and Adversarial Training seem to succeed in removing unwanted features from the learned representation. We show that demographic information of authors is encoded in—and can be recovered from—the intermediate… more (see the adversarial-training sketch after this list)
  • Can LSTM Learn to Capture Agreement? The Case of Basque

    Shauli Ravfogel, Francis M. Tyers, Yoav Goldberg • EMNLP • Workshop: Analyzing and interpreting neural networks for NLP • 2018. Sequential neural network models are powerful tools in a variety of Natural Language Processing (NLP) tasks. The sequential nature of these models raises the question: to what extent can these models implicitly learn hierarchical structures typical to human… more
  • Decoupling Structure and Lexicon for Zero-Shot Semantic Parsing

    Jonathan Herzig, Jonathan Berant • EMNLP • 2018. Building a semantic parser quickly in a new domain is a fundamental challenge for conversational interfaces, as current semantic parsers require expensive supervision and lack the ability to generalize to new domains. In this paper, we introduce a zero-shot… more
  • Understanding Convolutional Neural Networks for Text Classification

    Alon Jacovi, Oren Sar Shalom, Yoav Goldberg • EMNLP • Workshop: Analyzing and interpreting neural networks for NLP • 2018. We present an analysis into the inner workings of Convolutional Neural Networks (CNNs) for processing text. CNNs used for computer vision can be interpreted by projecting filters into image space, but for discrete sequence inputs CNNs remain a mystery. We aim… more (see the filter-inspection sketch after this list)
  • Word Sense Induction with Neural biLM and Symmetric Patterns

    Asaf Amrami, Yoav Goldberg • EMNLP • 2018. An established method for Word Sense Induction (WSI) uses a language model to predict probable substitutes for target words, and induces senses by clustering these resulting substitute vectors. We replace the ngram-based language model (LM) with a recurrent… more (see the substitute-clustering sketch after this list)
  • The Web as a Knowledge-base for Answering Complex Questions

    Alon Talmor, Jonathan Berant • NAACL • 2018. Answering complex questions is a time-consuming activity for humans that requires reasoning and integration of information. Recent work on reading comprehension made headway in answering simple questions, but tackling complex questions is still an ongoing… more
  • Freebase QA: Information Extraction or Semantic Parsing?

    Xuchen Yao, Jonathan Berant, and Benjamin Van Durme • ACL • Workshop on Semantic Parsing • 2014. We contrast two seemingly distinct approaches to the task of question answering (QA) using Freebase: one based on information extraction techniques, the other on semantic parsing. Results over the same test set were collected from two state-of-the-art, open… more
  • Modeling Biological Processes for Reading Comprehension

    Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Brad Huang, Christopher D. Manning, Abby Vander Linden, Brittany Harding, and Peter Clark • EMNLP • 2014. Machine reading calls for programs that read and understand text, but most current work only attempts to extract facts from redundant web-scale corpora. In this paper, we focus on a new reading comprehension task that requires complex reasoning over a single… more
  • Learning Biological Processes with Global Constraints

    Aju Thalappillil Scaria, Jonathan Berant, Mengqiu Wang, Christopher D. Manning, Justin Lewis, Brittany Harding, and Peter Clark • EMNLP • 2013. Biological processes are complex phenomena involving a series of events that are related to one another through various relationships. Systems that can understand and reason over biological processes would dramatically improve the performance of semantic… more
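
The "Adversarial Removal of Demographic Attributes from Text Data" entry describes an adversary trained to predict authors' demographics from intermediate representations while the encoder is trained to thwart it. The sketch below is a minimal, hypothetical PyTorch version of that general setup using a gradient-reversal layer; the architecture, dimensions, and class names (AdversarialRemover, GradReverse) are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of adversarial attribute removal via gradient reversal.
# Everything here (sizes, labels, names) is illustrative, not the paper's setup.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the encoder.
        return -ctx.lambd * grad_output, None

class AdversarialRemover(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hid_dim=128,
                 n_task_labels=2, n_attr_labels=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.task_head = nn.Linear(hid_dim, n_task_labels)  # main task classifier
        self.attr_head = nn.Linear(hid_dim, n_attr_labels)  # adversary

    def forward(self, token_ids):
        _, (h, _) = self.encoder(self.embed(token_ids))
        rep = h[-1]                                   # sentence representation
        task_logits = self.task_head(rep)
        adv_logits = self.attr_head(GradReverse.apply(rep, self.lambd))
        return task_logits, adv_logits

model = AdversarialRemover()
loss_fn = nn.CrossEntropyLoss()
tokens = torch.randint(0, 10000, (4, 20))             # toy batch of token ids
task_y = torch.randint(0, 2, (4,))
attr_y = torch.randint(0, 2, (4,))
task_logits, adv_logits = model(tokens)
# Minimizing both losses trains the two heads; the reversed gradient pushes the
# encoder toward representations from which the attribute is hard to predict.
loss = loss_fn(task_logits, task_y) + loss_fn(adv_logits, attr_y)
loss.backward()
```

The paper's point is that even under this kind of training the attribute can often still be recovered from the representation; the sketch only illustrates the removal mechanism being analyzed.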
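
The "Understanding Convolutional Neural Networks for Text Classification" entry asks what text CNN filters actually capture. The toy sketch below shows one common inspection pattern, offered as an assumption rather than the paper's exact procedure: run a 1D convolution over word embeddings and list, for each filter, the n-gram window that activates it most strongly. With this untrained toy model the specific output is meaningless; the inspection pattern is what matters.

```python
# Minimal sketch: which n-grams maximally activate each convolutional filter?
import torch
import torch.nn as nn

vocab = ["the", "movie", "was", "great", "terrible", "plot", "acting", "dull"]
emb = nn.Embedding(len(vocab), 32)
conv = nn.Conv1d(in_channels=32, out_channels=4, kernel_size=3)  # 4 trigram filters

tokens = torch.tensor([[0, 1, 2, 3, 6, 2, 7, 4]])    # toy sentence as token ids
x = emb(tokens).transpose(1, 2)                      # (batch, emb_dim, seq_len)
activations = conv(x)                                # (batch, n_filters, n_windows)

# For each filter, report the trigram window that activates it most strongly.
scores, positions = activations[0].max(dim=1)
for f, (score, pos) in enumerate(zip(scores.tolist(), positions.tolist())):
    ngram = " ".join(vocab[i] for i in tokens[0, pos:pos + 3].tolist())
    print(f"filter {f}: top trigram = '{ngram}' (activation {score:.3f})")
```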
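
The "Word Sense Induction with Neural biLM and Symmetric Patterns" entry describes predicting probable substitutes for a target word with a language model and clustering the resulting substitute vectors into senses. The sketch below is a minimal, hypothetical version of that substitute-and-cluster recipe; it swaps in an off-the-shelf masked LM (bert-base-uncased via Hugging Face transformers) for the paper's biLM and omits the symmetric-pattern component, so it illustrates only the general idea.

```python
# Minimal sketch of substitute-and-cluster word sense induction,
# using a masked LM as a stand-in substitute predictor (an assumption,
# not the paper's biLM-based method).
import torch
from sklearn.cluster import KMeans
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

contexts = [
    "He sat on the [MASK] of the river.",     # "bank" as riverside
    "She deposited cash at the [MASK].",      # "bank" as financial institution
    "They fished from the [MASK] all morning.",
    "The [MASK] approved the loan yesterday.",
]

substitute_vectors = []
with torch.no_grad():
    for text in contexts:
        inputs = tokenizer(text, return_tensors="pt")
        mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
        logits = model(**inputs).logits[0, mask_pos]
        # The substitute distribution over the vocabulary at the target position.
        substitute_vectors.append(torch.softmax(logits, dim=-1))

# Cluster the substitute distributions; each cluster is one induced sense.
sense_ids = KMeans(n_clusters=2, n_init=10).fit_predict(
    torch.stack(substitute_vectors).numpy()
)
print(sense_ids)   # e.g., contexts 1 and 3 in one cluster, 2 and 4 in the other
```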