Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity and electrodermal activity (EDA) provide rich information about pain, and both have been used in automated pain detection. In this paper, we discuss preliminary steps towards fusing models trained on video features and EDA features, respectively. We compare fusion models using the original video features with fusion models using transferred video features, which are less sensitive to environmental changes. In a test case involving domain adaptation, we demonstrate the benefit of fusion and of the transferred video features, showing improved performance relative to using EDA or video features alone.
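The abstract describes fusing a video-based model and an EDA-based model. As a minimal, hypothetical illustration of one common approach (decision-level fusion by averaging class probabilities), the sketch below trains one classifier per modality on toy data and combines their outputs; the paper's actual features, models, and fusion method are not specified in this abstract, and all names and weights here are assumptions.

```python
# Hypothetical sketch of late (decision-level) fusion of two modality
# models. Toy data stands in for the video and EDA features; the
# equal-weight average is an assumption, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins for per-sample feature vectors from each modality.
X_video = rng.normal(size=(200, 8))   # e.g. facial-activity features
X_eda = rng.normal(size=(200, 4))     # e.g. electrodermal features
y = (X_video[:, 0] + X_eda[:, 0] > 0).astype(int)  # toy pain label

# Train one model per modality.
clf_video = LogisticRegression().fit(X_video, y)
clf_eda = LogisticRegression().fit(X_eda, y)

# Fuse by averaging the two models' class-1 probabilities.
p_fused = 0.5 * (clf_video.predict_proba(X_video)[:, 1]
                 + clf_eda.predict_proba(X_eda)[:, 1])
y_fused = (p_fused >= 0.5).astype(int)
```

On this toy label, which depends on both modalities, the fused prediction typically outperforms either single-modality model, which mirrors the motivation for fusion stated in the abstract.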
Xu, X., Susam, B. T., Nezamfar, H., Diaz, D., Craig, K. D., Goodwin, M. S., … de Sa, V. R. (2019). Towards automated pain detection in children using facial and electrodermal activity. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11326 LNAI, pp. 181–189). Springer Verlag. https://doi.org/10.1007/978-3-030-12738-1_13