Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Improving Transformer Models by Reordering their Sublayers

Ofir Press, Noah A. Smith, Omer Levy
2020
ACL

Multilayer transformer networks consist of interleaved self-attention and feedforward sublayers. Could ordering the sublayers in a different pattern lead to better performance? We generate randomly… 

Obtaining Faithful Interpretations from Compositional Neural Networks

Sanjay Subramanian, Ben Bogin, Nitish Gupta, Matt Gardner
2020
ACL

Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflecting the compositional… 

QuASE: Question-Answer Driven Sentence Encoding

Hangfeng He, Qiang Ning, Dan Roth
2020
ACL

Question-answering (QA) data often encodes essential information in many facets. This paper studies a natural question: Can we get supervision from QA data for other tasks (typically, non-QA ones)?… 

Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models

Maarten Sap, Eric Horvitz, Yejin Choi, James W. Pennebaker
2020
ACL

We investigate the use of NLP as a measure of the cognitive processes involved in storytelling, contrasting imagination and recollection of events. To facilitate this, we collect and release… 

Social Bias Frames: Reasoning about Social and Power Implications of Language

Maarten Sap, Saadia Gabriel, Lianhui Qin, Yejin Choi
2020
ACL

Language has the power to reinforce stereotypes and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but all the implied meanings that… 

The Right Tool for the Job: Matching Model and Instance Complexities

Roy Schwartz, Gabi Stanovsky, Swabha Swayamdipta, Noah A. Smith
2020
ACL

As NLP models become larger, executing a trained model requires significant computational resources incurring monetary and environmental costs. To better respect a given inference budget, we propose… 

Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering

Ben Bogin, Sanjay Subramanian, Matt Gardner, Jonathan Berant
2020
TACL

Answering questions that involve multi-step reasoning requires decomposing them and using the answers of intermediate steps to reach the final answer. However, state-of-the-art models in grounded… 

Contextual Word Representations: Putting Words into Computers

Noah A. Smith
2020
CACM

This article aims to tell the story of how we put words into computers. It is part of the story of the field of natural language processing (NLP), a branch of artificial intelligence. It targets a… 

On Consequentialism and Fairness

Dallas Card, Noah A. Smith
2020
Frontiers in AI Journal

Recent work on fairness in machine learning has primarily emphasized how to define, quantify, and encourage "fair" outcomes. Less attention has been paid, however, to the ethical foundations which… 

Explain like I am a Scientist: The Linguistic Barriers of Entry to r/science

Tal August, Dallas Card, Gary Hsieh, Katharina Reinecke
2020
CHI

As an online community for discussing research findings, r/science has the potential to contribute to science outreach and communication with a broad audience. Yet previous work suggests that most…