Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


VinVL: Revisiting Visual Representations in Vision-Language Models

Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianfeng Gao
2021
CVPR

This paper presents a detailed study of improving visual representations for vision language (VL) tasks and develops an improved object detection model to provide object-centric representations of… 

Edited Media Understanding: Reasoning About Implications of Manipulated Images

Jeff Da, Maxwell Forbes, Rowan Zellers, Yejin Choi
2020
arXiv

Multimodal disinformation, from "deepfakes" to simple edits that deceive, is an important societal problem. Yet at the same time, the vast majority of media edits are harmless -- such as a filtered… 

Do Neural Language Models Overcome Reporting Bias?

Vered Shwartz and Yejin Choi
2020
Proceedings of the 28th International Conference on Computational Linguistics (COLING)

Mining commonsense knowledge from corpora suffers from reporting bias, over-representing the rare at the expense of the trivial (Gordon and Van Durme, 2013). We study to what extent pre-trained… 

A Dataset for Tracking Entities in Open Domain Procedural Text

Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi Mishra, Eduard Hovy
2020
EMNLP

We present the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. For example, in a text describing fog removal using… 

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

Lianhui Qin, Vered Shwartz, P. West, Yejin Choi
2020
EMNLP

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the… 

Beyond Instructional Videos: Probing for More Diverse Visual-Textual Grounding on YouTube

Jack Hessel, Z. Zhu, Bo Pang, Radu Soricut
2020
EMNLP

Pretraining from unlabelled web videos has quickly become the de facto means of achieving high performance on many video understanding tasks. Features are learned via prediction of grounded… 

CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning

Bill Yuchen Lin, M. Shen, Wangchunshu Zhou, X. Ren
2020
EMNLP

Recently, large-scale pre-trained language models have demonstrated impressive performance on several commonsense benchmark datasets. However, building machines with commonsense to compose… 

Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics

Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yejin Choi
2020
EMNLP

Large datasets have become commonplace in NLP research. However, the increased emphasis on data quantity has made it challenging to assess the quality of data. We introduce "Data Maps"---a… 

Does my multimodal model learn cross-modal interactions? It’s harder to tell than you might think!

Jack HesselLillian Lee
2020
EMNLP

Modeling expressive cross-modal interactions seems crucial in multimodal tasks, such as visual question answering. However, sometimes high-performing black-box algorithms turn out to be mostly… 

Domain-Specific Lexical Grounding in Noisy Visual-Textual Documents

Gregory Yauney, Jack Hessel, David Mimno
2020
EMNLP

Images can give us insights into the contextual meanings of words, but current image-text grounding approaches require detailed annotations. Such granular annotation is rare, expensive, and…