Intelligent Systems

Connecting the Dots: Learning Representations for Active Monocular Depth Estimation

2019

Conference Paper



We propose a technique for depth estimation with a monocular structured-light camera, i.e., a calibrated stereo set-up with one camera and one laser projector. Instead of formulating the depth estimation as a correspondence search problem, we show that a simple convolutional architecture is sufficient for high-quality disparity estimates in this setting. As accurate ground truth is hard to obtain, we train our model in a self-supervised fashion with a combination of photometric and geometric losses. Further, we demonstrate that the projected pattern of the structured-light sensor can be reliably separated from the ambient information. This can then be used to improve depth boundaries in a weakly supervised fashion by modeling the joint statistics of image and depth edges. The model trained in this fashion compares favorably to the state of the art on challenging synthetic and real-world datasets. In addition, we contribute a novel simulator, which allows benchmarking active depth prediction algorithms under controlled conditions.
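The photometric loss mentioned in the abstract can be illustrated with a minimal sketch: in a rectified camera-projector pair, a camera pixel (y, x) sees the projector pattern at (y, x - d), where d is the disparity, so warping the pattern by the predicted disparity and comparing it to the observed IR image yields a self-supervision signal. The helper names `warp_pattern` and `photometric_loss` below are hypothetical and not from the paper; the actual method uses differentiable warping and additional geometric terms.

```python
import numpy as np

def warp_pattern(pattern, disparity):
    """Warp a rectified projector pattern into the camera view.

    Each camera pixel (y, x) samples the pattern at (y, x - d),
    with nearest-neighbor rounding and border clamping.
    (Illustrative sketch, not the paper's differentiable warp.)
    """
    h, w = pattern.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
    return pattern[ys, src_x]

def photometric_loss(ir_image, pattern, disparity):
    """Mean absolute photometric error between the observed IR image
    and the pattern warped by the predicted disparity."""
    warped = warp_pattern(pattern, disparity)
    return float(np.abs(ir_image - warped).mean())
```

With a correct disparity map the warped pattern matches the observation and the loss vanishes; an incorrect disparity misaligns the dots and raises the error, which is what drives the self-supervised training signal.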

Author(s): Gernot Riegler and Yiyi Liao and Simon Donne and Vladlen Koltun and Andreas Geiger
Book Title: Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)
Year: 2019
Month: June

Department(s): Autonomous Vision
Bibtex Type: Conference Paper (inproceedings)

Event Name: IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019
Event Place: Long Beach, USA

Links: pdf | suppmat | Poster | Project Page

BibTex

@inproceedings{Riegler2019CVPR,
  title = {Connecting the Dots: Learning Representations for Active Monocular Depth Estimation},
  author = {Riegler, Gernot and Liao, Yiyi and Donne, Simon and Koltun, Vladlen and Geiger, Andreas},
  booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2019},
}