Deep, dense and accurate 3D face correspondence for generating population specific deformable models

Bibliographic Details
Published in: Pattern Recognition, 2017-09, Vol. 69, p. 238-250
Main Authors: Gilani, Syed Zulqarnain, Mian, Ajmal, Eastwood, Peter
Format: Article
Language: English
Description
Summary:
• A Deep Network trained on synthetic 3D data to detect facial landmarks is proposed.
• The landmarks are used to establish region-based 3D face dense correspondence.
• Correspondence is established across identities and facial expressions.
• A region-based 3D face deformable model is proposed.
• The model outperforms others in landmarking and face recognition experiments.

We present a multilinear algorithm to automatically establish dense point-to-point correspondence over an arbitrarily large number of population-specific 3D faces across identities, facial expressions and poses. The algorithm is initialized with a subset of anthropometric landmarks detected by our proposed Deep Landmark Identification Network, which is trained on synthetic images. The landmarks are used to segment the 3D face into Voronoi regions by evolving geodesic level-set curves. Exploiting the intrinsic features of these regions, we extract discriminative keypoints on the facial manifold and elastically match the regions across faces to establish dense correspondence. Finally, we generate a region-based 3D deformable model which is fitted to unseen faces to transfer the correspondences. We evaluate our algorithm on the tasks of facial landmark detection and face recognition using two benchmark datasets. Comparison with thirteen state-of-the-art techniques shows the efficacy of our algorithm.
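For context on the last step, a deformable model built from densely corresponded meshes is commonly a statistical (PCA-style) shape model. The sketch below is only an illustrative toy, not the authors' region-based method: it assumes NumPy, a hypothetical array `corresponded_faces` of meshes already in point-to-point correspondence, and made-up function names.

```python
# Illustrative sketch (not the paper's code): a plain PCA deformable model
# built from densely corresponded 3D face meshes, then fitted to an unseen
# (already corresponded) face by least-squares projection.
import numpy as np

def build_deformable_model(corresponded_faces, variance_kept=0.98):
    """corresponded_faces: (num_faces, num_vertices, 3) array of meshes in
    dense point-to-point correspondence. Returns mean shape, PCA basis,
    and per-mode variances."""
    n, v, _ = corresponded_faces.shape
    X = corresponded_faces.reshape(n, v * 3)            # flatten each mesh
    mean_face = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean_face, full_matrices=False)
    var = (S ** 2) / (n - 1)                            # variance per mode
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), variance_kept)) + 1
    return mean_face, Vt[:k], var[:k]

def fit_model(mean_face, basis, unseen_face):
    """Project an unseen corresponded face onto the model and return its
    low-dimensional shape parameters."""
    x = unseen_face.reshape(-1) - mean_face
    return basis @ x                                    # one coefficient per mode

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    faces = rng.normal(size=(50, 1000, 3))              # toy data: 50 "faces"
    mean_face, basis, var = build_deformable_model(faces)
    params = fit_model(mean_face, basis, faces[0])
    print(basis.shape, params.shape)
```

The paper's model additionally operates per facial region and transfers correspondences to unseen faces during fitting; the sketch only shows the generic whole-face statistical-model step.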
ISSN: 0031-3203; 1873-5142