r-FACE: Reference-guided face component editing

Bibliographic Details
Published in: Pattern Recognition, 2024-08, Vol. 152, Article 110425
Main Authors: Deng, Qiyao, Cao, Jie, Liu, Yunfan, Li, Qi, Sun, Zhenan
Format: Article
Language: English
Summary: Although recent studies have made significant progress in face portrait editing, simple and accurate face component editing remains a challenge. Face components, such as eyes, nose, and mouth, have shape styles that are difficult to transfer. Existing methods either (1) manipulate pre-defined binary attribute labels, which makes it difficult to edit the shape of face components with observable changes, or (2) control shape changes by manually editing intermediate representations (e.g., precise masks or sketches) of face components, which is time-consuming and requires painting skills. To overcome these limitations, we propose a simple and effective framework for diverse and controllable face component editing with geometric changes, which uses an inpainting model to learn the shape of face components from reference images without any manual annotations. To guide generated images toward the shape style of the reference face components, an example-guided attention module is designed to help the network focus on the target face component regions. Moreover, a novel domain verification discriminator is introduced to make the realism of the generated face component consistent with the source face. Experimental results demonstrate that the proposed method outperforms conventional methods in image quality, editing accuracy, and diversity of results (see Video Demo).
•A novel framework for diverse and controllable face component editing.
•An example-guided attention module for improving semantic transfer ability.
•A domain verification discriminator for improving the quality of generated images.
•Out-of-dataset generalization for handling unseen images without mask labels.
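The abstract does not give the paper's exact formulation, but the general idea of an example-guided attention module can be sketched as cross-image attention in which source-image features attend only to the reference face-component region. The sketch below is a minimal NumPy illustration under that assumption; the function name, feature shapes, and masking scheme are hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def example_guided_attention(src_feat, ref_feat, ref_mask):
    """Attend from source-image features to reference-image features,
    restricted to the reference face-component region.

    src_feat : (N, C) flattened source feature vectors (queries)
    ref_feat : (M, C) flattened reference feature vectors (keys/values)
    ref_mask : (M,)   1 inside the reference component (e.g. the mouth), 0 elsewhere
    """
    scale = 1.0 / np.sqrt(src_feat.shape[1])
    scores = src_feat @ ref_feat.T * scale  # (N, M) similarity map
    # Mask out positions outside the component so the softmax ignores them,
    # forcing the network to focus on the target face component region.
    scores = np.where(ref_mask[None, :] > 0, scores, -1e9)
    attn = softmax(scores, axis=1)          # attention weights over reference positions
    return attn @ ref_feat                  # (N, C) component-aware context features
```

With only a single reference position left unmasked, every source location attends entirely to that position, which makes the masking behavior easy to verify.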
ISSN: 0031-3203
eISSN: 1873-5142