Papers

  • QADiscourse - Discourse Relations as QA Pairs: Representation, Crowdsourcing and Baselines

    Valentina Pyatkin, Ayal Klein, Reut Tsarfaty, Ido Dagan. EMNLP 2020. Discourse relations describe how two propositions relate to one another, and identifying them automatically is an integral part of natural language understanding. However, annotating discourse relations typically requires expert annotators. Recently…
  • ZEST: Zero-shot Learning from Text Descriptions using Textual Similarity and Visual Summarization

    Tzuf Paz-Argaman, Y. Atzmon, Gal Chechik, Reut Tsarfaty. Findings of EMNLP 2020. We study the problem of recognizing visual entities from the textual descriptions of their classes. Specifically, given birds' images with free-text descriptions of their species, we learn to classify images of previously-unseen species based on specie…
  • Evaluating Models' Local Decision Boundaries via Contrast Sets

    M. Gardner, Y. Artzi, V. Basmova, J. Berant, B. Bogin, S. Chen, P. Dasigi, D. Dua, Y. Elazar, A. Gottumukkala, N. Gupta, H. Hajishirzi, G. Ilharco, D. Khashabi, K. Lin, J. Liu, N. F. Liu, P. Mulcaire, Q. Ning, S. Singh, N. A. Smith, S. Subramanian, et al. Findings of EMNLP 2020. Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on…
  • Learning Object Detection from Captions via Textual Scene Attributes

    Achiya Jerbi, Roei Herzig, Jonathan Berant, Gal Chechik, Amir Globerson. arXiv 2020. Object detection is a fundamental task in computer vision, requiring large annotated datasets that are difficult to collect, as annotators need to label objects and their bounding boxes. Thus, it is a significant challenge to use cheaper forms of supervision…
  • Scene Graph to Image Generation with Contextualized Object Layout Refinement

    Maor Ivgi, Yaniv Benny, Avichai Ben-David, Jonathan Berant, Lior Wolf. arXiv 2020. Generating high-quality images from scene graphs, that is, graphs that describe multiple entities in complex relations, is a challenging task that has recently attracted substantial interest. Prior work trained such models by using supervised learning, where the…
  • Span-based Semantic Parsing for Compositional Generalization

    Jonathan Herzig, Jonathan Berant. arXiv 2020. Despite the success of sequence-to-sequence (seq2seq) models in semantic parsing, recent work has shown that they fail in compositional generalization, i.e., the ability to generalize to new structures built of components observed during training. In this work…
  • Reading Akkadian cuneiform using natural language processing

    Shai Gordin, Gai Gutherz, Ariel Elazary, Avital Romach, E. Jiménez, Jonathan Berant, Y. Cohen. PLoS ONE 2020. In this paper we present a new method for automatic transliteration and segmentation of Unicode cuneiform glyphs using Natural Language Processing (NLP) techniques. Cuneiform is one of the earliest known writing systems in the world, which documents millennia…
  • Break It Down: A Question Understanding Benchmark

    Tomer Wolfson, Mor Geva, Ankit Gupta, Matt Gardner, Yoav Goldberg, Daniel Deutch, Jonathan Berant. TACL 2020. Understanding natural language questions entails the ability to break down a question into the requisite steps for computing its answer. In this work, we introduce a Question Decomposition Meaning Representation (QDMR) for questions. QDMR constitutes the…
  • oLMpics - On what Language Model Pre-training Captures

    Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant. TACL 2020. Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and…
  • A Formal Hierarchy of RNN Architectures

    William Merrill, Gail Garfinkel Weiss, Yoav Goldberg, Roy Schwartz, Noah A. Smith, Eran Yahav. ACL 2020. We develop a formal hierarchy of the expressive capacity of RNN architectures. The hierarchy is based on two formal properties: space complexity, which measures the RNN's memory, and rational recurrence, defined as whether the recurrent update can be…