Abstract

Estimating a dense depth map from a single view is geometrically ill-posed, and state-of-the-art methods rely on learning depth's relation with visual appearance using deep neural networks. On the other hand, Structure from Motion (SfM) leverages multi-view constraints to produce very accurate but sparse maps, as matching across images is typically limited by locally discriminative texture. In this work, we combine the strengths of both approaches by proposing a novel test-time refinement (TTR) method, denoted as SfM-TTR, that boosts the performance of single-view depth networks at test time using SfM multi-view cues. Specifically, and differently from the state of the art, we use sparse SfM point clouds as a test-time self-supervisory signal, fine-tuning the network encoder to learn a better representation of the test scene. Our results show how adding SfM-TTR to several state-of-the-art self-supervised and supervised networks significantly improves their performance, outperforming previous TTR baselines mainly based on photometric multi-view consistency.
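
To make the refinement concrete, here is a minimal PyTorch-style sketch of the test-time loop described above. It is an illustration, not the authors' implementation: it assumes the network exposes separate encoder/decoder submodules and that the sparse SfM depths have already been projected into the image and scale-aligned; names such as sfm_ttr, num_steps and lr are placeholders.

import torch
import torch.nn.functional as F

def sfm_ttr(model, image, sparse_depth, num_steps=50, lr=1e-5):
    """Fine-tune the encoder of a single-view depth network on one test scene,
    using a sparse (already aligned) SfM depth map as self-supervision."""
    mask = sparse_depth > 0                        # pixels covered by SfM points
    for p in model.decoder.parameters():           # decoder stays frozen;
        p.requires_grad_(False)                    # only the encoder is refined
    optimizer = torch.optim.Adam(model.encoder.parameters(), lr=lr)

    model.train()
    for _ in range(num_steps):
        pred = model(image.unsqueeze(0)).squeeze()         # HxW predicted depth
        loss = F.l1_loss(pred[mask], sparse_depth[mask])   # supervise only at SfM points
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model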

Results

...
...
  • SfM-TTR improves depth estimation. The point cloud after alignment (right) is closer to the ground truth (dashed line) than before (left); a minimal alignment sketch is given after the results below.
  • Using SfM-TTR (blue line) yields better results over most of the depth range than the original network (dashed line) and photometric refinement (red line), especially for farther areas, where SfM-TTR can leverage the large baselines obtained from SfM.
...
  • Using SfM-TTR (right) reduces the error for different network architectures, both supervised and self-supervised.
  • Areas marked with red boxes are especially improved, as they are accurately reconstructed by SfM and can thus be refined.
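
The alignment mentioned in the first result above can be approximated with a simple median-ratio scaling between the scale-ambiguous SfM depths and a reference depth (the network prediction or, for visualization, the ground truth). The sketch below is an assumption about the alignment step, not necessarily the exact procedure used in the paper.

import torch

def align_scale(sfm_depth: torch.Tensor, ref_depth: torch.Tensor) -> torch.Tensor:
    """Rescale sparse SfM depths to the reference's scale via the median ratio."""
    mask = (sfm_depth > 0) & (ref_depth > 0)       # pixels observed by both
    scale = ref_depth[mask].median() / sfm_depth[mask].median()
    return sfm_depth * scale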

BibTeX

@inproceedings{izquierdo2023sfmttr,
  title={SfM-TTR: Using Structure from Motion for Test-Time Refinement of Single-View Depth Networks},
  author={Izquierdo, Sergio and Civera, Javier},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={21466--21476},
  year={2023}
}