
Research Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Language (Re)modelling: Towards Embodied Language Understanding

Ronen Tamari, Chen Shani, Tom Hope, Dafna Shahaf
2020
ACL

While natural language understanding (NLU) is advancing rapidly, today’s technology differs from human-like language understanding in fundamental ways, notably in its inferior efficiency,… 

QuASE: Question-Answer Driven Sentence Encoding

Hangfeng He, Qiang Ning, Dan Roth
2020
ACL

Question-answering (QA) data often encodes essential information in many facets. This paper studies a natural question: Can we get supervision from QA data for other tasks (typically, non-QA ones)?… 

Obtaining Faithful Interpretations from Compositional Neural Networks

Sanjay Subramanian, Ben Bogin, Nitish Gupta, Matt Gardner
2020
ACL

Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflecting the compositional… 

A Formal Hierarchy of RNN Architectures

William Merrill, Gail Garfinkel Weiss, Yoav Goldberg, Eran Yahav
2020
ACL

We develop a formal hierarchy of the expressive capacity of RNN architectures. The hierarchy is based on two formal properties: space complexity, which measures the RNN's memory, and rational… 

A Mixture of h-1 Heads is Better than h Heads

Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith
2020
ACL

Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks. Evidence has shown that they are overparameterized; attention… 

The Right Tool for the Job: Matching Model and Instance Complexities

Roy Schwartz, Gabi Stanovsky, Swabha Swayamdipta, Noah A. Smith
2020
ACL

As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs. To better respect a given inference budget, we propose… 

Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks

Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Noah A. Smith
2020
ACL

Language models pretrained on text from a wide variety of sources form the foundation of today's NLP. In light of the success of these broad-coverage models, we investigate whether it is still… 

Social Bias Frames: Reasoning about Social and Power Implications of Language

Maarten Sap, Saadia Gabriel, Lianhui Qin, Yejin Choi
2020
ACL

Language has the power to reinforce stereotypes and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but all the implied meanings that… 

Improving Transformer Models by Reordering their Sublayers

Ofir Press, Noah A. Smith, Omer Levy
2020
ACL

Multilayer transformer networks consist of interleaved self-attention and feedforward sublayers. Could ordering the sublayers in a different pattern lead to better performance? We generate randomly… 

Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models

Maarten Sap, Eric Horvitz, Yejin Choi, James W. Pennebaker
2020
ACL

We investigate the use of NLP as a measure of the cognitive processes involved in storytelling, contrasting imagination and recollection of events. To facilitate this, we collect and release…