Ai2 Research: Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Specializing Multilingual Language Models: An Empirical Study

Ethan C. Chau, Noah A. Smith
2021
EMNLP • Workshop on Multilingual Representation Learning

Pretrained multilingual language models have become a common tool in transferring NLP capabilities to low-resource languages, often with adaptations. In this work, we study the performance,… 

Towards Personalized Descriptions of Scientific Concepts

Sonia K. Murthy, Daniel King, Tom Hope, Doug Downey
2021
EMNLP • WiNLP

A single scientific concept can be described in many different ways, and the most informative description depends on the audience. In this paper, we propose generating personalized scientific… 

Achieving Model Robustness through Discrete Adversarial Training

Maor Ivgi, Jonathan Berant
2021
EMNLP

Discrete adversarial attacks are symbolic perturbations to a language input that preserve the output label but lead to a prediction error. While such attacks have been extensively explored for the… 

Back to Square One: Bias Detection, Training and Commonsense Disentanglement in the Winograd Schema

Yanai Elazar, Hongming Zhang, Yoav Goldberg, Dan Roth
2021
EMNLP

The Winograd Schema (WS) has been proposed as a test for measuring commonsense capabilities of models. Recently, pre-trained language model-based approaches have boosted performance on some WS… 

BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief

Nora Kassner, Oyvind Tafjord, H. Schütze, P. Clark
2021
EMNLP

Although pretrained language models (PTLMs) have been shown to contain significant amounts of world knowledge, they can still produce inconsistent answers to questions when probed, even after using… 

CDLM: Cross-Document Language Modeling

Avi Caciularu, Arman Cohan, Iz Beltagy, Ido Dagan
2021
Findings of EMNLP

We introduce a new pretraining approach for language models that are geared to support multi-document NLP tasks. Our cross-document language model (CD-LM) improves masked language modeling for these… 

Contrastive Explanations for Model Interpretability

Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yoav Goldberg
2021
EMNLP

Contrastive explanations clarify why an event occurred in contrast to another. They are inherently more intuitive for humans to both produce and comprehend. We propose a methodology to produce… 

Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus

Jesse Dodge, Maarten Sap, Ana Marasović, Matt Gardner
2021
EMNLP

As language models are trained on ever more text, researchers are turning to some of the largest corpora available. Unlike most other types of datasets in NLP, large unlabeled text corpora are often… 

Explaining Answers with Entailment Trees

Bhavana Dalvi, Peter A. Jansen, Oyvind Tafjord, Peter Clark
2021
EMNLP

Our goal, in the context of open-domain textual question-answering (QA), is to explain answers by not just listing supporting textual evidence (“rationales”), but also showing how such evidence… 

Finetuning Pretrained Transformers into RNNs

Jungo Kasai, Hao Peng, Yizhe Zhang, Noah A. Smith
2021
EMNLP

Transformers have outperformed recurrent neural networks (RNNs) in natural language generation. But this comes with a significant computational cost, as the attention mechanism’s complexity scales…