Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.

GenericsKB: A Knowledge Base of Generic Statements

Sumithra Bhakthavatsalam, Chloe Anastasiades, Peter Clark
2020
arXiv

We present a new resource for the NLP community, namely a large (3.5M+ sentence) knowledge base of *generic statements*, e.g., "Trees remove carbon dioxide from the atmosphere", collected from… 

Abductive Commonsense Reasoning

Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Yejin Choi
2020
ICLR

Abductive reasoning is inference to the most plausible explanation. For example, if Jenny finds her house in a mess when she returns from work, and remembers that she left a window open, she can… 

Explain like I am a Scientist: The Linguistic Barriers of Entry to r/science

Tal August, Dallas Card, Gary Hsieh, Katharina Reinecke
2020
CHI

As an online community for discussing research findings, r/science has the potential to contribute to science outreach and communication with a broad audience. Yet previous work suggests that most… 

Evaluating Machines by their Real-World Language Use

Rowan Zellers, Ari Holtzman, Elizabeth Anne Clark, Yejin Choi
2020
arXiv

There is a fundamental gap between how humans understand and use language – in open-ended, real-world situations – and today’s NLP benchmarks for language understanding. To narrow this gap, we… 

Ranking Significant Discrepancies in Clinical Reports

Sean MacAvaney, Arman Cohan, Nazli Goharian, Ross Filice
2020
ECIR

Medical errors are a major public health concern and a leading cause of death worldwide. Many healthcare centers and hospitals use reporting systems where medical practitioners write a preliminary… 

Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks

Xiujun Li, Xi Yin, Chunyuan Li, Jianfeng Gao
2020
ECCV

Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region… 

Longformer: The Long-Document Transformer

Iz Beltagy, Matthew E. Peters, Arman Cohan
2020
arXiv

Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the… 
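The quadratic cost mentioned in this abstract comes from materializing pairwise attention scores over all token positions. The short NumPy sketch below is illustrative only (it is not the Longformer implementation, and the sizes n and d are hypothetical); it simply shows where the n × n term arises in full self-attention.

```python
# Illustrative sketch of full self-attention, not Longformer's sparse attention.
import numpy as np

def full_attention(q, k, v):
    # q, k, v: (n, d) arrays. The score matrix is (n, n), so time and memory
    # grow quadratically with the sequence length n.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

n, d = 4096, 64  # hypothetical sizes for illustration
q = k = v = np.random.randn(n, d).astype(np.float32)
out = full_attention(q, k, v)  # the (n, n) float32 score matrix alone is 64 MiB
```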

TuringAdvice: A Generative and Dynamic Evaluation of Language Use

Rowan Zellers, Ari Holtzman, Elizabeth Clark, Yejin Choi
2020
NAACL

We propose TuringAdvice, a new challenge task and dataset for language understanding models. Given a written situation that a real person is currently facing, a model must generate helpful advice in… 

Evaluating NLP Models via Contrast Sets

M. Gardner, Y. Artzi, V. Basmova, et al.
2020
arXiv

Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading:… 

Soft Threshold Weight Reparameterization for Learnable Sparsity

Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Ali Farhadi
2020
ICML

Sparsity in Deep Neural Networks (DNNs) is studied extensively with the focus of maximizing prediction accuracy given an overall parameter budget. Existing methods rely on uniform or heuristic…