Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection

Maarten Sap, Swabha Swayamdipta, Laura Vianna, Noah A. Smith
2022
NAACL

Warning: this paper discusses and contains content that is offensive or upsetting. The perceived toxicity of language can vary based on someone's identity and beliefs, but this variation is often… 

DEMix Layers: Disentangling Domains for Modular Language Modeling

Suchin Gururangan, Michael Lewis, Ari Holtzman, Luke Zettlemoyer
2022
NAACL

We introduce a new domain expert mixture (DEMIX) layer that enables conditioning a language model (LM) on the domain of the input text. A DEMIX layer is a collection of expert feedforward networks,… 

Long Context Question Answering via Supervised Contrastive Learning

Avi Caciularu, Ido Dagan, Jacob Goldberger, Arman Cohan
2022
NAACL

Long-context question answering (QA) tasks require reasoning over a long document or multiple documents. Addressing these tasks often benefits from identifying a set of evidence spans (e.g.,… 

Literature-Augmented Clinical Outcome Prediction

Aakanksha Naik, S. Parasa, Sergey Feldman, Tom Hope
2022
Findings of NAACL

We present BEEP (Biomedical Evidence-Enhanced Predictions), a novel approach for clinical outcome prediction that retrieves patient-specific medical literature and incorporates it into predictive… 

Efficient Hierarchical Domain Adaptation for Pretrained Language Models

Alexandra Chronopoulou, Matthew E. Peters, Jesse Dodge
2022
NAACL

The remarkable success of large language models has been driven by dense models trained on massive unlabeled, unstructured corpora. These corpora typically contain text from diverse, heterogeneous… 

Paragraph-based Transformer Pre-training for Multi-Sentence Inference

Luca Di Liello, Siddhant Garg, Luca Soldaini, Alessandro Moschitti
2022
NAACL

Inference tasks such as answer sentence selection (AS2) or fact verification are typically solved by fine-tuning transformer-based models as individual sentence-pair classifiers. Recent studies show… 

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

Aarohi Srivastava, Abhinav Rastogi, Abhishek B Rao, Uri Shaham
2022
arXiv

Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet… 

Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities

Zejiang Shen, Kyle Lo, Lauren Yu, Doug Downey
2022
arXiv

With the advent of large language models, methods for abstractive summarization have made great strides, creating potential for use in applications to aid knowledge workers processing unwieldy… 

Data Governance in the Age of Large-Scale Data-Driven Language Technology

Yacine Jernite, Huu Nguyen, Stella Rose Biderman, Margaret Mitchell
2022
FAccT

The recent emergence and adoption of Machine Learning technology, and specifically of Large Language Models, has drawn attention to the need for systematic and transparent management of language… 

Measuring the Carbon Intensity of AI in Cloud Instances

Jesse Dodge, Taylor Prewitt, Rémi Tachet des Combes, Will Buchanan
2022
FAccT

The advent of cloud computing has provided people around the world with unprecedented access to computational power and enabled rapid growth in technologies such as machine learning, the…