Allen Institute for AI

Roozbeh Mottaghi

Research

Roozbeh Mottaghi is the Research Manager of the PRIOR team at AI2 and an Affiliate Associate Professor in the Department of Computer Science & Engineering at the University of Washington. Prior to joining AI2, he was a post-doctoral researcher in the Computer Science Department at Stanford University. He obtained his PhD in Computer Science in 2013 from the University of California, Los Angeles. His research focuses mainly on Computer Vision and Machine Learning; more specifically, he is interested in scene understanding, visual models of physics, and visual interaction models.

Semantic Scholar · Google Scholar · Contact

Papers

  • Who Let The Dogs Out? Modeling Dog Behavior From Visual Data

    Kiana Ehsani, Hessam Bagherinezhad, Joseph Redmon, Roozbeh Mottaghi, and Ali Farhadi. CVPR | 2018

    We introduce the task of directly modeling a visually intelligent agent. Computer vision typically focuses on solving various subtasks related to visual intelligence. We depart from this standard approach to computer vision; instead we directly model a visually intelligent agent. Our model takes visual information as input and directly predicts the actions of the agent. Toward this end we introduce DECADE, a large-scale dataset of ego-centric videos from a dog's perspective as well as her corresponding movements. Using this data we model how the dog acts and how the dog plans her movements. We show under a variety of metrics that given just visual input we can successfully model this intelligent agent in many situations. Moreover, the representation learned by our model encodes distinct information compared to representations trained on image classification, and our learned representation can generalize to other domains. In particular, we show strong results on the task of walkable surface estimation by using this dog modeling task as representation learning.

  • SeGAN: Segmenting and Generating the Invisible

    Kiana Ehsani, Roozbeh Mottaghi, and Ali Farhadi. CVPR | 2018

    Objects often occlude each other in scenes; inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, object interaction, and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts). Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photo-realistic images, SeGAN can reliably segment natural images; (d) by reasoning about occluder-occludee relations, our method can infer depth layering.