This paper presents a method to accurately segment the hand when it occludes the face. The similarity in colour and the large variability of hand shape make this task challenging. We propose a method based on the combination of two features: pixel colour and edge orientation. First, a specific skin model is used to find, before occlusion, the face position and the face template. Then, during occlusion, the face template is registered using local gradient orientations to track the face position. Colour information is extracted from changes in pixel colour, and edges are classified as belonging to the hand or to the face by mapping edge orientations to the face template. Finally, by merging both features and applying a hysteresis threshold that takes connectivity into account, a robust hand segmentation is reached. Experiments were performed using the Dicta-Sign corpus and showed the versatility of the proposed approach. © 2012 Springer-Verlag.
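The paper does not include code, but the final step, a hysteresis threshold that keeps weakly scored pixels only when they are connected to strongly scored ones, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the toy score map, and the two threshold values are assumptions for the example.

```python
import numpy as np
from scipy import ndimage


def hysteresis_threshold(score, low, high):
    """Hypothetical sketch of hysteresis thresholding with connectivity:
    keep pixels scoring above `low` only if their connected component
    contains at least one pixel scoring above `high`."""
    weak = score >= low
    strong = score >= high
    # Label 4-connected components of the weak mask.
    labels, _ = ndimage.label(weak)
    # Component labels that contain at least one strong pixel.
    keep = np.unique(labels[strong])
    keep = keep[keep != 0]  # drop the background label
    return np.isin(labels, keep)


# Toy "hand likelihood" map: one blob with a strong core (kept),
# one weak-only blob (discarded).
score = np.array([
    [0.2, 0.9, 0.2, 0.0, 0.0],
    [0.0, 0.3, 0.0, 0.0, 0.3],
    [0.0, 0.0, 0.0, 0.0, 0.3],
])
mask = hysteresis_threshold(score, low=0.15, high=0.8)
```

Here the left blob survives because it contains a pixel above the high threshold, while the isolated weak blob on the right is rejected, which is how connectivity suppresses spurious skin-coloured regions.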
CITATION STYLE
Gonzalez, M., Collet, C., & Dubot, R. (2012). Head tracking and hand segmentation during hand over face occlusion in sign language. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6553 LNCS, pp. 234–243). https://doi.org/10.1007/978-3-642-35749-7_18