SIGNet: Semantic Instance Aided Unsupervised
3D Geometry Perception
Yue Meng
Yongxi Lu
Aman Raj
Samuel Sunarjo
Rui Guo
Tara Javidi
Gaurav Bansal
Dinesh Bharadia
University of California, San Diego
Toyota InfoTechnology Center
CVPR 2019

Abstract

Unsupervised learning for visual perception of 3D geometry is of great interest to autonomous systems. This paper introduces SIGNet, a novel framework that provides robust geometry perception without requiring geometrically informative labels. Specifically, SIGNet integrates semantic information to make robust, unsupervised geometric predictions for objects in low-lighting and noisy environments. SIGNet improves upon the state-of-the-art in unsupervised geometry perception by 30% (in squared relative error for depth prediction). In addition, SIGNet improves performance on dynamic object classes by 39% in depth prediction and 29% in flow prediction.
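The 30% figure above refers to squared relative error, a standard metric in monocular depth evaluation. As a point of reference, a minimal sketch of how this metric is typically computed is shown below; the function name and list-based inputs are illustrative, not part of the SIGNet codebase.

```python
def sq_rel_error(gt, pred):
    """Squared relative error for depth evaluation.

    Averages (gt - pred)^2 / gt over all valid ground-truth depths,
    following the standard monocular depth evaluation convention.
    gt and pred are sequences of per-pixel depth values (gt > 0).
    """
    return sum((g - p) ** 2 / g for g, p in zip(gt, pred)) / len(gt)
```

Because the error is normalized by the ground-truth depth, mistakes on nearby objects are penalized more heavily than equal-magnitude mistakes far away.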

Network

[Figure: SIGNet network architecture]
Depth Estimation Results on KITTI

[Figure: depth estimation results on KITTI]
Robustness & Category-Specific Improvements

[Figure: robustness and category-specific improvements]
Paper and Code

Yue Meng, Yongxi Lu, Aman Raj, Samuel Sunarjo, Rui Guo, Tara Javidi, Gaurav Bansal, and Dinesh Bharadia.
SIGNet: Semantic Instance Aided Unsupervised 3D Geometry Perception.
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[PDF][Code]