Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Longformer: The Long-Document Transformer

Iz Beltagy, Matthew E. Peters, Arman Cohan
2020
arXiv

Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the… 
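The quadratic cost mentioned in the abstract can be illustrated with a small sketch (not from the paper): full self-attention computes a score for every query-key pair, so an n-token sequence needs n × n scores, while a Longformer-style sliding window of width w needs only n × w. The function and numbers below are illustrative assumptions, not the paper's implementation.

```python
from typing import Optional

def attention_scores(n: int, window: Optional[int] = None) -> int:
    """Number of query-key score computations for a sequence of length n.

    window=None models full self-attention (every token attends to every
    token); an integer window models local (sliding-window) attention,
    where each token attends only to its w nearest neighbors.
    """
    if window is None:           # full self-attention: n * n scores
        return n * n
    return n * min(window, n)    # windowed attention: n * w scores

full = attention_scores(4096)        # 16,777,216 scores
local = attention_scores(4096, 512)  # 2,097,152 scores
print(full // local)                 # 8x fewer score computations
```

Doubling the sequence length quadruples the full-attention count but only doubles the windowed count, which is why local attention makes long documents tractable.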

Just Add Functions: A Neural-Symbolic Language Model

David Demeter, Doug Downey
2019
arXiv

Neural network language models (NNLMs) have achieved ever-improving accuracy due to more sophisticated architectures and increasing amounts of training data. However, the inductive bias of these… 

Pretrained Language Models for Sequential Sentence Classification

Arman Cohan, Iz Beltagy, Daniel King, Daniel S. Weld
2019
EMNLP

As a step toward better document-level understanding, we explore classification of a sequence of sentences into their corresponding categories, a task that requires understanding sentences in… 

SciBERT: A Pretrained Language Model for Scientific Text

Iz Beltagy, Kyle Lo, Arman Cohan
2019
EMNLP

Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et al., 2018) to… 

SpanBERT: Improving Pre-training by Representing and Predicting Spans

Mandar Joshi, Danqi Chen, Yinhan Liu, Omer Levy
2019
EMNLP

We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random… 

GrapAL: Connecting the Dots in Scientific Literature

Christine Betts, Joanna Power, Waleed Ammar
2019
ACL

We introduce GrapAL (Graph database of Academic Literature), a versatile tool for exploring and investigating a knowledge base of scientific literature that was semi-automatically constructed using… 

ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing

Mark Neumann, Daniel King, Iz Beltagy, Waleed Ammar
2019
ACL • BioNLP Workshop

Despite recent advances in natural language processing, many statistical models for processing text perform extremely poorly under domain shift. Processing biomedical and clinical text is a… 

CEDR: Contextualized Embeddings for Document Ranking

Sean MacAvaney, Andrew Yates, Arman Cohan, Nazli Goharian
2019
SIGIR

Although considerable attention has been given to neural ranking architectures recently, far less attention has been paid to the term representations that are used as input to these models. In this… 

Ontology-Aware Clinical Abstractive Summarization

Sean MacAvaney, Sajad Sotudeh, Arman Cohan, Ross W. Filice
2019
SIGIR

Automatically generating accurate summaries from clinical reports could save a clinician's time, improve summary coverage, and reduce errors. We propose a sequence-to-sequence abstractive… 

Quantifying Sex Bias in Clinical Studies at Scale With Automated Data Extraction

Sergey Feldman, Waleed Ammar, Kyle Lo, Oren Etzioni
2019
JAMA

Importance: Analyses of female representation in clinical studies have been limited in scope and scale. Objective: To perform a large-scale analysis of global enrollment sex bias in clinical…