Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


A Two-Stage Masked LM Method for Term Set Expansion

Guy Kushilevitz, Shaul Markovitch, Yoav Goldberg
2020
ACL

We tackle the task of Term Set Expansion (TSE): given a small seed set of example terms from a semantic class, finding more members of that class. The task is of great practical utility, and also of… 

Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks

Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Noah A. Smith
2020
ACL

Language models pretrained on text from a wide variety of sources form the foundation of today's NLP. In light of the success of these broad-coverage models, we investigate whether it is still… 

Improving Transformer Models by Reordering their Sublayers

Ofir Press, Noah A. Smith, Omer Levy
2020
ACL

Multilayer transformer networks consist of interleaved self-attention and feedforward sublayers. Could ordering the sublayers in a different pattern lead to better performance? We generate randomly… 

Injecting Numerical Reasoning Skills into Language Models

Mor Geva, Ankit Gupta, Jonathan Berant
2020
ACL

Large pre-trained language models (LMs) are known to encode substantial amounts of linguistic information. However, high-level reasoning skills, such as numerical reasoning, are difficult to learn… 

Interactive Extractive Search over Biomedical Corpora

Hillel Taub-Tabib, Micah Shlain, Shoval Sadde, Yoav Goldberg
2020
ACL

We present a system that allows life-science researchers to search a linguistically annotated corpus of scientific texts using patterns over dependency graphs, as well as using patterns over token… 

Language (Re)modelling: Towards Embodied Language Understanding

Ronen Tamari, Chen Shani, Tom Hope, Dafna Shahaf
2020
ACL

While natural language understanding (NLU) is advancing rapidly, today’s technology differs from human-like language understanding in fundamental ways, notably in its inferior efficiency,… 

Nakdan: Professional Hebrew Diacritizer

Avi Shmidman, Shaltiel Shmidman, Moshe Koppel, Yoav Goldberg
2020
ACL

We present a system for automatic diacritization of Hebrew text. The system combines modern neural models with carefully curated declarative linguistic knowledge and comprehensive manually… 

Not All Claims are Created Equal: Choosing the Right Approach to Assess Your Hypotheses

Erfan Sadeqi Azer, Daniel Khashabi, Ashish Sabharwal, Dan Roth
2020
ACL

Empirical research in Natural Language Processing (NLP) has adopted a narrow set of principles for assessing hypotheses, relying mainly on p-value computation, which suffers from several known… 

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection

Shauli Ravfogel, Yanai Elazar, Hila Gonen, Yoav Goldberg
2020
ACL

The ability to control for the kinds of information encoded in neural representation has a variety of use cases, especially in light of the challenge of interpreting these models. We present… 

Obtaining Faithful Interpretations from Compositional Neural Networks

Sanjay Subramanian, Ben Bogin, Nitish Gupta, Matt Gardner
2020
ACL

Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflecting the compositional…