Robust Learning for Deep Monocular Depth Estimation

Bibliographic Details
Main Authors: Irie, Go, Kawanishi, Takahito, Kashino, Kunio
Format: Conference Proceeding
Language: English
Description
Summary: Existing methods for deep monocular depth estimation are often trained with basic loss functions such as mean absolute error (MAE) or reverse Huber (BerHu). We revisit several basic loss functions to explore possibilities for improvement and show that the final depth estimation accuracy is dominated by pixels with small errors, which account for the vast majority. Based on this observation, we propose a new robust loss function that suppresses the contributions of pixels with higher errors by taking their square root. The loss function, called the square root Huber (Ruber), is designed to be first-order differentiable on ℝ>0, so it can be directly applied to the end-to-end learning of a general type of neural network. Unlike the widely used robust loss function called Huber, the Ruber loss function facilitates further refinement of the pixels with smaller errors by giving larger gradient values to the errors that are close to zero. Moreover, we show that the estimation accuracy can be further improved by introducing a second training step based on an edge-preserving loss. Experimental results with public indoor scene datasets demonstrate that our method outperforms major loss functions and can yield better accuracies than existing approaches in terms of root mean square error (RMSE).
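
The abstract describes the Ruber loss only at a high level. Below is a minimal PyTorch-style sketch of a square-root-shaped robust loss of this kind, assuming a Huber-like piecewise form: an L1 branch below a threshold c and a square-root branch above it, joined with matching value and slope at c. The function name ruber_loss, the default threshold, and the exact branch formula are illustrative assumptions, not the paper's definition.

import torch

def ruber_loss(pred: torch.Tensor, target: torch.Tensor, c: float = 0.2) -> torch.Tensor:
    # Per-pixel absolute depth error |e|.
    err = torch.abs(pred - target)
    # |e| <= c: plain L1 branch.
    small = err
    # |e| > c: square-root branch sqrt(2c|e| - c^2); it equals c and has slope 1
    # at |e| = c, so the two branches join with matching value and first derivative.
    # The clamp keeps the unused branch finite so gradients stay well defined.
    large = torch.sqrt(torch.clamp(2.0 * c * err - c * c, min=1e-12))
    loss = torch.where(err <= c, small, large)
    return loss.mean()

Because the square-root branch grows sublinearly, pixels with large residuals contribute less to the total gradient than under MAE or BerHu, while small residuals keep the full L1 gradient, which matches the behaviour the abstract attributes to Ruber. In practice the threshold c would likely be set relative to the batch error distribution (e.g., a fraction of the maximum per-batch error, as is commonly done for BerHu), though the paper's exact choice is not stated here.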
ISSN: 2381-8549
DOI: 10.1109/ICIP.2019.8803059