AI2 ISRAEL

About

The Allen Institute for AI Israel office was founded in 2019 in Sarona, Tel Aviv. AI2's mission is to contribute to humanity through high-impact AI research and engineering.


AI2 Israel continues our mission of AI for the Common Good through groundbreaking research in natural language processing and machine learning, all in close association with the AI2 home office in Seattle, Washington.

Our Focus

The focus of AI2 Israel is bringing people closer to information by creating and applying advanced language-centered AI. Scientifically, we believe in combining strong linguistics-oriented foundations, state-of-the-art machine learning, and top-notch engineering with user-oriented design.

For application domains, we focus on understanding and answering complex questions, filling in commonsense gaps in text, and enabling robust extraction of structured information from text. This is an integral part of AI2’s vision of pushing the boundaries of the algorithmic understanding of human language and advancing the common good through AI.

AI2 Israel also maintains research relationships with two leading local universities, Tel Aviv University and Bar-Ilan University.

Team

  • Yoav Goldberg, Research Director, AI2 Israel
  • Ron Yachini, Chief Operating Officer, AI2 Israel
  • Jonathan Berant, Research
  • Matan Eyal, Research & Engineering
  • Tom Hope, Young Investigator
  • Menny Pinhasov, Engineering
  • Yael Rachmut, Operations
  • Shoval Sadde, Linguistics
  • Micah Shlain, Research & Engineering
  • Hillel Taub-Tabib, Research & Engineering
  • Aryeh Tiktinsky, Research & Engineering
  • Reut Tsarfaty, Research

Current Openings

AI2 Israel is a non-profit offering exceptional opportunities for researchers and engineers to develop AI for the common good. We are currently looking for outstanding software engineers and research engineers. Candidates should send their CV to: ai2israel-cv@allenai.org

AI2 Israel Office

Research Areas

DIY Information Extraction

Data scientists have a set of tools to work with structured data in tables. But how does one extract meaning from textual data? While NLP provides some solutions, they all require expertise in either machine learning, linguistics, or both. How do we expose advanced AI and text mining capabilities to domain experts who do not know ML or CS?
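One way to picture what such a tool might offer is example-based extraction: a domain expert writes an example phrase with a slot, and the system generalizes it into a reusable pattern. The sketch below is hypothetical and purely illustrative (it is not the actual AI2 Israel tooling, and a regex stands in for the real NLP machinery):

```python
import re

# Hypothetical sketch: turn an expert-supplied example phrase with a
# placeholder slot into a reusable extraction pattern, so the expert
# never has to write ML code or regexes themselves.
def pattern_from_example(example: str, slot: str) -> re.Pattern:
    """Generalize an example like 'founded in YEAR' into a regex,
    replacing the slot with a capture group."""
    escaped = re.escape(example)
    return re.compile(escaped.replace(re.escape(slot), r"(\w+)"))

pattern = pattern_from_example("founded in YEAR", "YEAR")
match = pattern.search("The office was founded in 2019 in Tel Aviv.")
print(match.group(1))  # -> 2019
```

Real systems replace the regex with syntactic or neural matching, but the interaction model is the same: the expert supplies examples, not code.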

Question Understanding

The goal of this project is to develop models that understand complex questions in broad domains, and answer them from multiple information sources. Our research revolves around investigating symbolic and distributed representations that facilitate reasoning over multiple facts and offer explanations for model decisions.
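To illustrate the kind of symbolic representation involved, here is a hypothetical decomposition of a complex question into a chain of subquestions, in the spirit of QDMR-style `return` steps with `#n` back-references (the question and steps are invented for illustration):

```python
# A complex question and one possible step-by-step decomposition.
# Each step is a subquestion; "#n" refers to the answer of step n,
# so later steps reason over the results of earlier ones.
question = "How many games did the team that won the championship play?"
decomposition = [
    "return the team that won the championship",  # step 1
    "return games played by #1",                  # step 2
    "return the number of #2",                    # step 3
]

for i, step in enumerate(decomposition, start=1):
    print(f"{i}. {step}")
```

Executing such a chain step by step, possibly against different information sources, is also what makes each model decision explainable.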

Missing Elements

Current natural language processing technology aims to process what is explicitly mentioned in text. But what about the elements that are being left out of the text, yet are easily and naturally inferred by the human hearer? Can our computer programs identify and infer such elements too? In this project, we develop benchmarks and models to endow NLP applications with this capacity.

AI Gamification

The goal of this project is to involve the public in the development of better AI models. We use stimulating games alongside state-of-the-art AI models to create an appealing experience for non-scientific users. We aim to improve the ways data is collected for AI training as well as surface strengths and weaknesses of current models.

  • Try the QDMR CopyNet parser | AI2 Israel, Question Understanding

    Live demo of the QDMR CopyNet parser from the paper Break It Down: A Question Understanding Benchmark (TACL 2020). The parser receives a natural language question as input and returns its Question Decomposition Meaning Representation (QDMR). Each step in the decomposition constitutes a subquestion necessary to…

    Try the demo
  • Crowd Sense: Helps us Better Define Common Sense
    Interactive common sense | AI2 Israel, AI Gamification

    CrowdSense is an interactive effort to better understand what types of questions people consider to be common sense.

    Try the demo
    • CommonsenseQA 2.0: Exposing the Limits of AI through Gamification

      Alon Talmor, Ori Yoran, Ronan Le Bras, Chandrasekhar Bhagavatula, Yoav Goldberg, Yejin Choi, Jonathan Berant, NeurIPS 2021. Constructing benchmarks that test the abilities of modern natural language understanding models is difficult – pre-trained language models exploit artifacts in benchmarks to achieve human parity, but still fail on adversarial examples and make errors…
    • Achieving Model Robustness through Discrete Adversarial Training

      Maor Ivgi, Jonathan Berant, EMNLP 2021. Discrete adversarial attacks are symbolic perturbations to a language input that preserve the output label but lead to a prediction error. While such attacks have been extensively explored for the purpose of evaluating model robustness, their utility for…
    • Back to Square One: Bias Detection, Training and Commonsense Disentanglement in the Winograd Schema

      Yanai Elazar, Hongming Zhang, Yoav Goldberg, Dan Roth, EMNLP 2021. The Winograd Schema (WS) has been proposed as a test for measuring commonsense capabilities of models. Recently, pre-trained language model-based approaches have boosted performance on some WS benchmarks but the source of improvement is still not clear. We…
    • Contrastive Explanations for Model Interpretability

      Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg, EMNLP 2021. Contrastive explanations clarify why an event occurred in contrast to another. They are more inherently intuitive to humans to both produce and comprehend. We propose a methodology to produce contrastive explanations for classification models by modifying the…
    • Parameter Norm Growth During Training of Transformers

      William Merrill, Vivek Ramanujan, Yoav Goldberg, Roy Schwartz, Noah A. Smith, EMNLP 2021. The capacity of neural networks like the widely adopted transformer is known to be very high. Evidence is emerging that they learn successfully due to inductive bias in the training routine, typically some variant of gradient descent (GD). To better…

    An Artificial Intelligence System Passed an 8th-Grade Science Test with High Distinction (in Hebrew)

    Haaretz
    September 6, 2019
    Read the Article

    The Secret Price of Artificial Intelligence (in Hebrew)

    ynet
    August 12, 2019
    Read the Article

    Allen Institute for Artificial Intelligence to Open Israeli Branch

    CTech
    May 20, 2019
    Read the Article
    “Please join us to tackle an extraordinary set of scientific and engineering challenges. Let’s make history together.”
    Oren Etzioni, CEO