Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


From Recognition to Cognition: Visual Commonsense Reasoning

Rowan Zellers, Yonatan Bisk, Ali Farhadi, Yejin Choi
2019
CVPR

Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people’s actions, goals,… 

Video Relationship Reasoning using Gated Spatio-Temporal Energy Graph

Yao-Hung Tsai, Santosh Divvala, Louis-Philippe Morency, Ruslan Salakhutdinov, and Ali Farhadi
2019
CVPR

Visual relationship reasoning is a crucial yet challenging task for understanding rich interactions across visual concepts. For example, a relationship {man, open, door} involves a complex… 

Assisted Excitation of Activations: A Learning Technique to Improve Object Detectors

Mohammad Mahdi Derakhshani, Saeed Masoudnia, Amir Hossein Shaker, Babak N. Araabi
2019
CVPR

We present a simple and effective learning technique that significantly improves mAP of YOLO object detectors without compromising their speed. During network training, we carefully feed in… 

Sentence Mover's Similarity: Automatic Evaluation for Multi-Sentence Texts

Elizabeth Clark, Asli Çelikyilmaz, Noah A. Smith
2019
ACL

For evaluating machine-generated texts, automatic methods hold the promise of avoiding collection of human judgments, which can be expensive and time-consuming. The most common automatic metrics,… 

Barack's Wife Hillary: Using Knowledge Graphs for Fact-Aware Language Modeling

Robert L. Logan IV, Nelson F. Liu, Matthew E. Peters, Sameer Singh
2019
ACL

Modeling human language requires the ability to not only generate fluent text but also encode factual knowledge. However, traditional language models are only capable of remembering facts seen at… 

Is Attention Interpretable?

Sofia Serrano, Noah A. Smith
2019
ACL

Attention mechanisms have recently boosted performance on a range of NLP tasks. Because attention layers explicitly weight input components' representations, it is also often assumed that attention… 
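
The interpretability question above rests on the fact that an attention layer produces an explicit, normalized distribution over input components. As a point of reference only, and not a reproduction of the paper's models or analysis, here is a minimal NumPy sketch of how such weights are computed and used:

import numpy as np

def attention_weights(query, keys):
    # Scaled dot-product scores between one query and each input component.
    scores = keys @ query / np.sqrt(query.shape[-1])
    # Softmax turns scores into non-negative weights that sum to 1.
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Toy, hypothetical values: 4 input components with 8-dimensional representations.
rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 8))
query = rng.normal(size=8)
weights = attention_weights(query, keys)
weighted_summary = weights @ keys  # the representation passed forward by the layer
print(weights.round(3), weights.sum())

Whether these per-component weights can be read as importance scores is exactly what the paper examines.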

Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading

Lianhui Qin, Michel Galley, Chris Brockett, Jianfeng Gao
2019
ACL

Although neural conversational models are effective in learning how to produce fluent responses, their primary challenge lies in knowing what to say to make the conversation contentful and… 

SemEval-2019 Task 10: Math Question Answering

Mark Hopkins, Ronan Le Bras, Cristian Petrescu-Prahova, Rik Koncel-Kedziorski
2019
SemEval

We report on the SemEval 2019 task on math question answering. We provided a question set derived from Math SAT practice exams, including 2778 training questions and 1082 test questions. For a… 

Variational Pretraining for Semi-supervised Text Classification

Suchin Gururangan, Tam Dang, Dallas Card, Noah A. Smith
2019
ACL

We introduce VAMPIRE, a lightweight pretraining framework for effective text classification when data and computing resources are limited. We pretrain a unigram document model as a variational… 
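
As a loose illustration of variational pretraining on unigram document representations, the sketch below shows a small bag-of-words variational autoencoder in PyTorch. The class name, layer sizes, and data are hypothetical, and this is an assumption-laden sketch of the general technique, not the authors' VAMPIRE implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BagOfWordsVAE(nn.Module):
    # Illustrative VAE over unigram (word-count) document vectors.
    def __init__(self, vocab_size=2000, hidden=64, latent=32):
        super().__init__()
        self.encoder = nn.Linear(vocab_size, hidden)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Linear(latent, vocab_size)

    def forward(self, bow):
        h = torch.relu(self.encoder(bow))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample a latent document code.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        logits = self.decoder(z)
        # Reconstruction term: log-likelihood of the observed word counts.
        recon = -(bow * F.log_softmax(logits, dim=-1)).sum(-1)
        # KL term: keep the posterior close to a standard normal prior.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return (recon + kl).mean()

# Toy usage with random word-count vectors (hypothetical data).
model = BagOfWordsVAE()
bow = torch.randint(0, 3, (8, 2000)).float()
loss = model(bow)
loss.backward()

After pretraining such a document model, its learned representations could be fed to a downstream classifier; that downstream use is the setting the paper targets.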

Be Consistent! Improving Procedural Text Comprehension using Label Consistency

Xinya Du, Bhavana Dalvi Mishra, Niket Tandon, Claire Cardie
2019
NAACL-HLT

Our goal is procedural text comprehension, namely tracking how the properties of entities (e.g., their location) change with time given a procedural text (e.g., a paragraph about photosynthesis, a…