Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Weakly Supervised Text-to-SQL Parsing through Question Decomposition

Tomer Wolfson, Daniel Deutch, Jonathan Berant
2022
Findings of NAACL

Text-to-SQL parsers are crucial in enabling non-experts to effortlessly query relational data. Training such parsers, by contrast, generally requires expertise in annotating natural language (NL)… 

Draw Me a Flower: Grounding Formal Abstract Structures Stated in Informal Natural Language

Royi Lachmy, Valentina Pyatkin, Reut Tsarfaty
2022
ACL

Forming and interpreting abstraction is a core process in human communication. In particular, when giving and performing complex instructions stated in natural language (NL), people may naturally… 

Large Scale Substitution-based Word Sense Induction

Matan Eyal, Shoval Sadde, Hillel Taub-Tabib, Yoav Goldberg
2022
ACL

We present a word-sense induction method based on pre-trained masked language models (MLMs), which can cheaply scale to large vocabularies and large corpora. The result is a corpus which is… 

Inferring Implicit Relations with Language Models

Uri Katz, Mor Geva, Jonathan Berant
2022
NAACL • UnImplicit

A prominent challenge for modern language understanding systems is the ability to answer implicit reasoning questions, where the required reasoning steps for answering the question are not mentioned… 

LM-Debugger: An Interactive Tool for Inspection and Intervention in Transformer-Based Language Models

Mor Geva, Avi Caciularu, Guy Dar, Yoav Goldberg
2022
arXiv

The opaque nature and unexplained behavior of transformer-based language models (LMs) have spurred a wide interest in interpreting their predictions. However, current interpretation methods mostly… 

Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space

Mor Geva, Avi Caciularu, Kevin Ro Wang, Yoav Goldberg
2022
arXiv

Transformer-based language models (LMs) are at the core of modern NLP, but their internal prediction construction process is opaque and largely not understood. In this work, we make a substantial… 

Text-based NP Enrichment

Yanai Elazar, Victoria Basmov, Yoav Goldberg, Reut Tsarfaty
2022
TACL

Understanding the relations between entities denoted by NPs in text is a critical part of human-like natural language understanding. However, only a fraction of such relations is covered by NLP… 

SCROLLS: Standardized CompaRison Over Long Language Sequences

Uri Shaham, Elad Segal, Maor Ivgi, Omer Levy
2022
arXiv

NLP benchmarks have largely focused on short texts, such as sentences and paragraphs, even though long texts comprise a considerable amount of natural language in the wild. We introduce SCROLLS, a… 

CommonsenseQA 2.0: Exposing the Limits of AI through Gamification

Alon Talmor, Ori Yoran, Ronan Le Bras, Jonathan Berant
2021
NeurIPS

Constructing benchmarks that test the abilities of modern natural language understanding models is difficult – pre-trained language models exploit artifacts in benchmarks to achieve human… 

Achieving Model Robustness through Discrete Adversarial Training

Maor Ivgi, Jonathan Berant
2021
EMNLP

Discrete adversarial attacks are symbolic perturbations to a language input that preserve the output label but lead to a prediction error. While such attacks have been extensively explored for the…