Accurate remote pulse rate measurement from RGB face videos has gained considerable attention in recent years, since it allows non-invasive, contactless monitoring of a subject's heart rate and is useful in numerous potential applications. There is a global trend toward monitoring e-health parameters without physical devices, enabling at-home daily monitoring and telehealth. This paper includes a comprehensive state-of-the-art review of remote heart rate estimation from face images. We extensively tested a new framework to better understand several open questions in the domain: which areas of the face are the most relevant, how to handle the video's color components, and what performance can be reached on a relevant public dataset. From this study, we extract key elements to design an optimal, up-to-date, and reproducible framework that can serve as a baseline for accurately estimating the heart rate of a human subject, in particular from the cheek area using the green (G) channel of an RGB video. The results obtained on the public COHFACE database support our input-data choices and our 3D-CNN architecture as optimal for remote HR estimation.
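As a rough illustration of the signal path the abstract describes (spatially pooling the green channel of a cheek region over time to recover the pulse), the following is a minimal spectral-peak sketch. It is an assumption-laden baseline, not the paper's method: the authors estimate HR with a 3D-CNN, and the ROI coordinates and HR band limits here are illustrative choices.

```python
import numpy as np

def estimate_hr_green_channel(frames, fps, roi=None):
    """Estimate heart rate (BPM) from the mean green-channel signal of a
    face-video region of interest (e.g. a cheek patch).

    frames: array of shape (T, H, W, 3), RGB frames
    fps:    frames per second of the video
    roi:    optional (y0, y1, x0, x1) crop for the cheek area
    """
    frames = np.asarray(frames, dtype=np.float64)
    if roi is not None:
        y0, y1, x0, x1 = roi
        frames = frames[:, y0:y1, x0:x1, :]
    # Spatially averaged green channel -> one pulse sample per frame
    g = frames[..., 1].mean(axis=(1, 2))
    g = g - g.mean()  # remove the DC component before the FFT
    # Dominant frequency inside a plausible HR band, 0.7-4 Hz (42-240 BPM)
    freqs = np.fft.rfftfreq(len(g), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(g))
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq
```

For example, a 10-second clip whose green channel pulses at 1.2 Hz should yield an estimate of 72 BPM. A learned model such as the paper's 3D-CNN replaces the hand-crafted spectral step, but the green-channel/cheek input choice is the same.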
Citation
Mirabet-Herranz, N., Mallat, K., & Dugelay, J. L. (2023). Deep Learning for Remote Heart Rate Estimation: A Reproducible and Optimal State-of-the-Art Framework. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13643 LNCS, pp. 558–573). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-37660-3_39