Papers

  • Differentiable Scene Graphs

    Moshiko Raboh, Roei Herzig, Gal Chechik, Jonathan Berant, Amir Globerson. WACV 2020.
    Understanding the semantics of complex visual scenes involves perception of entities and reasoning about their relations. Scene graphs provide a natural representation for these tasks, by assigning labels to both entities (nodes) and relations (edges). However, scene graphs are not commonly used as…
  • oLMpics - On what Language Model Pre-training Captures

    Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant. arXiv 2019.
    Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and scattered. In this work, we propose eight…
  • On Making Reading Comprehension More Comprehensive

    Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, Sewon Min. EMNLP • MRQA Workshop 2019.
    Machine reading comprehension, the task of evaluating a machine’s ability to comprehend a passage of text, has seen a surge in popularity in recent years. There are many datasets that are targeted at reading comprehension, and many systems that perform as well as humans on some of these datasets…
  • ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension

    Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Sameer Singh, Matt Gardner. EMNLP • MRQA Workshop 2019.
    Reading comprehension is one of the crucial tasks for furthering research in natural language understanding. A lot of diverse reading comprehension datasets have recently been introduced to study various phenomena in natural language, ranging from simple paraphrase matching and entity typing to…
  • On the Limits of Learning to Actively Learn Semantic Representations

    Omri Koshorek, Gabriel Stanovsky, Yichu Zhou, Vivek Srikumar, Jonathan Berant. CoNLL 2019.
    One of the goals of natural language understanding is to develop models that map sentences into meaning representations. However, training such models requires expensive annotation of complex structures, which hinders their adoption. Learning to actively-learn (LTAL) is a recent paradigm for…
  • Don't paraphrase, detect! Rapid and Effective Data Collection for Semantic Parsing

    Jonathan Herzig, Jonathan Berant. EMNLP 2019.
    A major hurdle on the road to conversational interfaces is the difficulty in collecting data that maps language utterances to logical forms. One prominent approach for data collection has been to automatically generate pseudo-language paired with logical forms, and paraphrase the pseudo-language to…
  • Global Reasoning over Database Structures for Text-to-SQL Parsing

    Ben Bogin, Matt Gardner, Jonathan Berant. EMNLP 2019.
    State-of-the-art semantic parsers rely on auto-regressive decoding, emitting one symbol at a time. When tested against complex databases that are unobserved at training time (zero-shot), the parser often struggles to select the correct set of database constants in the new database, due to the local…
  • Language Modeling for Code-Switching: Evaluation, Integration of Monolingual Data, and Discriminative Training

    Hila Gonen, Yoav Goldberg. EMNLP 2019.
    We focus on the problem of language modeling for code-switched language, in the context of automatic speech recognition (ASR). Language modeling for code-switched language is challenging for (at least) three reasons: (1) lack of available large-scale code-switched data for training; (2) lack of a…
  • Transfer Learning Between Related Tasks Using Expected Label Proportions

    Matan Ben Noach, Yoav Goldberg. EMNLP 2019.
    Deep learning systems thrive on abundance of labeled training data but such data is not always available, calling for alternative methods of supervision. One such method is expectation regularization (XR) (Mann and McCallum, 2007), where models are trained based on expected label proportions. We…
  • Question Answering is a Format; When is it Useful?

    Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, Sewon Min. arXiv 2019.
    Recent years have seen a dramatic expansion of tasks and datasets posed as question answering, from reading comprehension, semantic role labeling, and even machine translation, to image and video understanding. With this expansion, there are many differing views on the utility and definition of…