
Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Natural Adversarial Objects

Felix Lau, Nishant Subramani, Sasha Harrison, Rosanne Liu
2021
NeurIPS 2021 Data-Centric AI Workshop

Although state-of-the-art object detection methods have shown compelling performance, models are often not robust to adversarial attacks and out-of-distribution data. We introduce a new dataset,…

One Question Answering Model for Many Languages with Cross-lingual Dense Passage Retrieval

Akari Asai, Xinyan Yu, Jungo Kasai, Hanna Hajishirzi
2021
NeurIPS

We present CORA, a Cross-lingual Open-Retrieval Answer Generation model that can answer questions across many languages even when language-specific annotated data or knowledge sources are… 

Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing

Sarah Wiegreffe and Ana Marasović
2021
NeurIPS Datasets & Benchmarks

Explainable NLP (ExNLP) has increasingly focused on collecting human-annotated explanations. These explanations are used downstream in three ways: as data augmentation to improve performance on a… 

Specializing Multilingual Language Models: An Empirical Study

Ethan C. Chau, Noah A. Smith
2021
EMNLP • Workshop on Multilingual Representation Learning

Pretrained multilingual language models have become a common tool in transferring NLP capabilities to low-resource languages, often with adaptations. In this work, we study the performance,… 

CDLM: Cross-Document Language Modeling

Avi Caciularu, Arman Cohan, Iz Beltagy, Ido Dagan
2021
Findings of EMNLP

We introduce a new pretraining approach for language models that are geared to support multi-document NLP tasks. Our cross-document language model (CDLM) improves masked language modeling for these…

Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus

Jesse Dodge, Maarten Sap, Ana Marasović, Matt Gardner
2021
EMNLP

As language models are trained on ever more text, researchers are turning to some of the largest corpora available. Unlike most other types of datasets in NLP, large unlabeled text corpora are often… 

Finetuning Pretrained Transformers into RNNs

Jungo Kasai, Hao Peng, Yizhe Zhang, Noah A. Smith
2021
EMNLP

Transformers have outperformed recurrent neural networks (RNNs) in natural language generation. But this comes with a significant computational cost, as the attention mechanism’s complexity scales… 

Generative Context Pair Selection for Multi-hop Question Answering

Dheeru Dua, Cicero Nogueira dos Santos, Patrick Ng, Sameer Singh
2021
EMNLP

Compositional reasoning tasks, like multi-hop question answering, require making latent decisions to get the final answer, given a question. However, crowdsourced datasets often capture only a slice…

Learning with Instance Bundles for Reading Comprehension

Dheeru Dua, Pradeep Dasigi, Sameer Singh, and Matt Gardner
2021
EMNLP

When training most modern reading comprehension models, all the questions associated with a context are treated as independent of each other. However, closely related questions and their…

Measuring Association Between Labels and Free-Text Rationales

Sarah Wiegreffe, Ana Marasović, Noah A. Smith
2021
EMNLP

Interpretable NLP has taken increasing interest in ensuring that explanations are faithful to the model’s decision-making process. This property is crucial for machine learning researchers and…