Detail-enhanced cross-modality face synthesis via guided image filtering

Abstract

Face images in different modalities are encountered in many applications, for example photo and sketch styles, or visible-light and near-infrared styles. Cross-modality face synthesis, an active yet challenging task, aims to transform face images between such modalities. Many existing methods successfully recover global features for a given photo but fail to capture fine-scale details in the synthesized results. In this paper, we propose a two-step algorithm to tackle this problem. First, for the input patch centered on each pixel, K-nearest-neighbor (KNN) search selects the K most similar patches from the training set, and a combination of these patches is computed to produce an initial result. Second, guided image filtering is applied to the initial result with the test photo as guidance, so that fine-scale details are transferred to the result via a local linear transformation. Comparison experiments on public datasets demonstrate that the proposed method is superior to state-of-the-art methods in simultaneously preserving global features and enhancing fine-scale details.
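The second step relies on the guided image filter (He et al.), whose output is locally a linear function of the guidance image, which is what lets detail from the photo transfer into the synthesized result. The sketch below is a minimal NumPy implementation of the standard grayscale guided filter, not the authors' exact code; the window radius `r` and regularization `eps` are illustrative choices.

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window, computed with cumulative sums."""
    pad = np.pad(img, r, mode='edge')          # edge-pad so every pixel has a full window
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))            # leading zeros simplify window sums
    h, w = img.shape
    s = (c[2*r+1:2*r+1+h, 2*r+1:2*r+1+w] - c[:h, 2*r+1:2*r+1+w]
         - c[2*r+1:2*r+1+h, :w] + c[:h, :w])
    return s / (2*r + 1) ** 2

def guided_filter(I, p, r=4, eps=1e-2):
    """Filter input p using guidance I; output q = a*I + b per local window."""
    mean_I, mean_p = box_filter(I, r), box_filter(p, r)
    var_I = box_filter(I * I, r) - mean_I ** 2
    cov_Ip = box_filter(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                 # linear coefficient per window
    b = mean_p - a * mean_I
    # average the coefficients of all windows covering each pixel
    return box_filter(a, r) * I + box_filter(b, r)
```

In the synthesis pipeline, `p` would be the initial KNN-combined result and `I` the test photo; where the guidance has strong local variance, `a` approaches 1 and the photo's fine-scale structure is carried into the output.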

Citation (APA)

Dang, Y., Li, F., Li, Z., & Zuo, W. (2015). Detail-enhanced cross-modality face synthesis via guided image filtering. In Communications in Computer and Information Science (Vol. 546, pp. 200–209). Springer Verlag. https://doi.org/10.1007/978-3-662-48558-3_20
