Disparity Estimation for Camera Arrays Using Reliability Guided Disparity Propagation

Bibliographic Details
Published in: IEEE Access, 2018-01, Vol. 6, pp. 21840-21849
Main Authors: Wang, Yingqian, Yang, Jungang, Mo, Yu, Xiao, Chao, An, Wei
Format: Article
Language: English
Description
Summary: Light field cameras have become increasingly popular in recent years, as they can capture the 3-D geometry of a scene in a single snapshot. Many post-capture adjustments can be performed once the disparity map, or the equivalent depth map, has been estimated. Recent studies on light field depth recovery are mostly designed for commercial microlens cameras such as Lytro and Raytrix. However, camera arrays capture scenes with sparser angular sampling and lower angular resolution than microlens cameras. When previous approaches are applied to such data, the estimated disparity map suffers from larger noise and more depth ambiguities, especially in textureless regions. In this paper, we propose a method to estimate disparity from camera arrays. The local disparity and its corresponding reliability are first computed by analyzing the angular variance of the input sub-images. We then optimize the local disparity map by introducing a novel prior and deriving the corresponding algorithm, named reliability guided disparity propagation (RGDP). With global optimization using RGDP, a high-quality disparity map can be generated in which noise is suppressed and edges are preserved. We conduct experiments on both public data sets and real-world scenes; the results demonstrate the effectiveness of our method and show that it outperforms other state-of-the-art methods.
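The local estimation step summarized above (shift the sub-images according to a disparity hypothesis, measure the variance across the angular dimension, and derive a reliability score) can be sketched in a few lines of Python. This is only a minimal illustration under assumed conventions, not the authors' implementation: the view_offsets baseline parameterization, the variance cost, and the reliability heuristic are assumptions made for this example.

import numpy as np
from scipy.ndimage import shift as subpixel_shift

def local_disparity(views, view_offsets, candidates):
    # views: (N, H, W) grayscale sub-images from the camera array
    # view_offsets: (N, 2) baselines (dy, dx) of each view relative to the center view
    # candidates: candidate disparities, in pixels per unit baseline
    candidates = np.asarray(candidates, dtype=float)
    n_views, h, w = views.shape
    costs = np.empty((len(candidates), h, w))
    for k, d in enumerate(candidates):
        # Warp every view toward the center view under disparity hypothesis d;
        # at the correct disparity the warped views agree, so their variance
        # across the angular dimension is small.
        warped = np.stack([
            subpixel_shift(views[i], d * view_offsets[i], order=1, mode='nearest')
            for i in range(n_views)
        ])
        costs[k] = warped.var(axis=0)
    best = costs.argmin(axis=0)
    disparity = candidates[best]
    # Heuristic reliability: how much the winning cost stands out from the
    # average over all hypotheses; flat cost curves (textureless regions) score near zero.
    reliability = 1.0 - costs.min(axis=0) / (costs.mean(axis=0) + 1e-8)
    return disparity, np.clip(reliability, 0.0, 1.0)

A global step such as RGDP would then propagate disparities from high-reliability pixels into low-reliability (e.g., textureless) regions; the specific prior and propagation algorithm are defined in the paper itself.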
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2018.2827085