
NoVA: Learning to See in Novel Viewpoints and Domains

2019

Conference Paper



Domain adaptation techniques enable the re-use and transfer of existing labeled datasets from a source to a target domain in which little or no labeled data exists. Recently, image-level domain adaptation approaches have demonstrated impressive results in adapting from synthetic to real-world environments by translating source images to the style of a target domain. However, the domain gap between source and target may not only be caused by a different style but also by a change in viewpoint. This case necessitates a semantically consistent translation of source images and labels to the style and viewpoint of the target domain. In this work, we propose the Novel Viewpoint Adaptation (NoVA) model, which enables unsupervised adaptation to a novel viewpoint in a target domain for which no labeled data is available. NoVA utilizes an explicit representation of the 3D scene geometry to translate source view images and labels to the target view. Experiments on adaptation to synthetic and real-world datasets show the benefit of NoVA compared to state-of-the-art domain adaptation approaches on the task of semantic segmentation.
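For illustration only, the sketch below shows one common way a geometry-based view translation of this kind can be realized: back-projecting source-view pixels with a depth map, transforming the resulting 3D points by the relative camera pose, and re-projecting them into the target view. This is a minimal sketch, not the authors' implementation; the function name warp_to_target_view, the assumption of shared camera intrinsics, and the nearest-neighbour splatting without a z-buffer are all illustrative simplifications.

  # Minimal sketch (not the NoVA implementation): forward-warping a source-view
  # image or label map to a target viewpoint using a depth map, camera
  # intrinsics, and a relative pose. All names and test values are illustrative.
  import numpy as np

  def warp_to_target_view(src_img, src_depth, K, T_tgt_src):
      """Forward-warp a source image into a target viewpoint.

      src_img:    (H, W, C) source-view image (or one-hot label map)
      src_depth:  (H, W) per-pixel depth in the source view
      K:          (3, 3) camera intrinsics (assumed shared by both views)
      T_tgt_src:  (4, 4) rigid transform from source to target camera frame
      """
      H, W = src_depth.shape
      # Pixel grid in homogeneous coordinates.
      u, v = np.meshgrid(np.arange(W), np.arange(H))
      pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1).astype(np.float64)

      # Back-project source pixels to 3D using the depth map: X_src = depth * K^-1 * pix.
      rays = np.linalg.inv(K) @ pix
      X_src = rays * src_depth.reshape(1, -1)

      # Transform the 3D points into the target camera frame.
      X_src_h = np.vstack([X_src, np.ones((1, X_src.shape[1]))])
      X_tgt = (T_tgt_src @ X_src_h)[:3]

      # Project into the target image plane.
      proj = K @ X_tgt
      z = proj[2]
      valid = z > 1e-6
      u_t = np.round(proj[0] / np.maximum(z, 1e-6)).astype(int)
      v_t = np.round(proj[1] / np.maximum(z, 1e-6)).astype(int)
      valid &= (u_t >= 0) & (u_t < W) & (v_t >= 0) & (v_t < H)

      # Splat source pixels into the target view (nearest neighbour, no z-buffer,
      # so occlusions and disocclusions are not handled in this toy version).
      tgt_img = np.zeros_like(src_img)
      src_flat = src_img.reshape(-1, src_img.shape[-1])
      tgt_img[v_t[valid], u_t[valid]] = src_flat[valid]
      return tgt_img

  # Toy usage with hypothetical values: target camera 1 m to the right of the source.
  H, W = 4, 6
  img = np.random.rand(H, W, 3)
  depth = np.full((H, W), 10.0)
  K = np.array([[5.0, 0.0, W / 2], [0.0, 5.0, H / 2], [0.0, 0.0, 1.0]])
  T = np.eye(4)
  T[0, 3] = -1.0  # points shift 1 m left in the target frame
  out = warp_to_target_view(img, depth, K, T)

The same warp can be applied to source label maps, which is what makes a semantically consistent translation of both images and labels to the target viewpoint possible in principle.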

Author(s): Benjamin Coors and Alexandru Paul Condurache and Andreas Geiger
Book Title: 2019 International Conference on 3D Vision (3DV)
Pages: 116--125
Year: 2019
Month: September
Day: 16--19
Publisher: IEEE

Department(s): Autonomous Vision
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Conference

DOI: 10.1109/3DV.2019.00022
Event Name: 2019 International Conference on 3D Vision (3DV)
Event Place: Quebec City, QC, Canada

State: Published

Links: pdf | suppmat | poster | video

BibTeX

@inproceedings{Coors2019THREEDV,
  title = {NoVA: Learning to See in Novel Viewpoints and Domains},
  author = {Coors, Benjamin and Condurache, Alexandru Paul and Geiger, Andreas},
  booktitle = {2019 International Conference on 3D Vision (3DV)},
  pages = {116--125},
  publisher = {IEEE},
  month = sep,
  year = {2019},
  doi = {10.1109/3DV.2019.00022}
}