Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

Lianhui Qin, Vered Shwartz, P. West, Yejin Choi
2020
EMNLP

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the… 

Beyond Instructional Videos: Probing for More Diverse Visual-Textual Grounding on YouTube

Jack Hessel, Z. Zhu, Bo Pang, Radu Soricut
2020
EMNLP

Pretraining from unlabelled web videos has quickly become the de-facto means of achieving high performance on many video understanding tasks. Features are learned via prediction of grounded… 

CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning

Bill Yuchen Lin, M. Shen, Wangchunshu Zhou, X. Ren
2020
EMNLP

Recently, large-scale pre-trained language models have demonstrated impressive performance on several commonsense benchmark datasets. However, building machines with common-sense to compose… 

Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics

Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yejin Choi
2020
EMNLP

Large datasets have become commonplace in NLP research. However, the increased emphasis on data quantity has made it challenging to assess the quality of data. We introduce "Data Maps"---a… 

Does my multimodal model learn cross-modal interactions? It’s harder to tell than you might think!

Jack Hessel, Lillian Lee
2020
EMNLP

Modeling expressive cross-modal interactions seems crucial in multimodal tasks, such as visual question answering. However, sometimes high-performing black-box algorithms turn out to be mostly… 

Do Language Embeddings Capture Scales?

Xikun Zhang, Deepak Ramachandran, Ian Tenney, Dan Roth
2020
Findings of EMNLP • BlackboxNLP Workshop

Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge. One form of knowledge that has not been studied yet in this context is… 

Domain-Specific Lexical Grounding in Noisy Visual-Textual Documents

Gregory Yauney, Jack Hessel, David Mimno
2020
EMNLP

Images can give us insights into the contextual meanings of words, but current image-text grounding approaches require detailed annotations. Such granular annotation is rare, expensive, and… 

Easy, Reproducible and Quality-Controlled Data Collection with Crowdaq

Qiang Ning, Hao Wu, Pradeep Dasigi, Z. Nie
2020
EMNLP • Demo

High-quality and large-scale data are key to success for AI systems. However, large-scale data annotation efforts are often confronted with a set of common challenges: (1) designing a user-friendly… 

Fact or Fiction: Verifying Scientific Claims

David Wadden, Kyle Lo, Lucy Lu Wang, Hannaneh Hajishirzi
2020
EMNLP

We introduce the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute… 

Grounded Compositional Outputs for Adaptive Language Modeling

Nikolaos Pappas, Phoebe Mulcaire, Noah A. Smith
2020
EMNLP

Language models have emerged as a central component across NLP, and a great deal of progress depends on the ability to cheaply adapt them (e.g., through finetuning) to new domains and tasks. A…