The Role of the Eyes: Investigating Face Cognition Mechanisms Using Machine Learning and Partial Face Stimuli

Abstract

Face cognition mechanisms changed during the SARS-CoV-2 pandemic because of mask wearing. Previous studies found that holistic face processing enhances face cognition ability and that covering part of the facial features lowers this ability. However, the question of why people can recognize faces despite missing some facial cues remains unsolved. To study the face cognition mechanism, we use the event-related potential (ERP) evoked during a rapid serial visual presentation task. ERPs are often hidden under large artifacts and must be averaged across a large number of trials, but increasing the number of trials can cause fatigue and affect the evoked ERP. To overcome this limitation, we adopt machine learning and aim to investigate the partial face cognition mechanism without directly considering the pattern characteristics of the ERP. We implemented an xDAWN spatial-filter covariance matrix method to enhance data quality and a support vector machine classification model to predict the participant's event of interest using ERP components evoked in full and partial face cognition tasks. Combinations of two missing face components and the physical response were also investigated to explore the role of each face component and the possibility of reducing the fatigue caused during the experiment. Our results show that classification accuracy decreased when the eye component was missing and was lowest (p < 0.005) when the eyes and mouth were both absent, with an accuracy of 0.748 ± 0.092 in the button-press task and 0.746 ± 0.084 in the no-button-press task (n.s.). We also observed that the button-press error rate increased when the eyes were absent and reached its maximum when the eyes and mouth were covered (p < 0.05).
These results suggest that the eyes might be the most effective component in face cognition, that the mouth might play a secondary role, and that a no-button-press task could substitute for a button-press task to reduce the workload.
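The abstract's pipeline (an xDAWN-style spatial filter to enhance the evoked response, followed by SVM classification of the filtered epochs) can be sketched in simplified form. This is a hypothetical illustration, not the authors' code: it uses synthetic epochs in place of real EEG, derives the spatial filter from a generalized eigenvalue problem between the evoked-response covariance and the overall data covariance (the core idea behind xDAWN), and feeds the filtered signals to a linear SVM.

```python
# Hypothetical sketch of an xDAWN-style filter + SVM ERP classifier,
# using synthetic data; all sizes and names are illustrative.
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 64

# Synthetic epochs: half contain a fixed spatio-temporal "ERP" pattern,
# half are noise only (stand-ins for target vs. non-target stimuli).
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = np.repeat([0, 1], n_trials // 2)
erp = np.outer(rng.standard_normal(n_channels), np.hanning(n_samples))
X[y == 1] += 0.5 * erp

# xDAWN-style filter: maximize evoked-response power relative to total
# power via a generalized eigenvalue problem between the covariance of
# the average target response and the covariance of all epochs.
evoked = X[y == 1].mean(axis=0)
C_evoked = evoked @ evoked.T / n_samples
C_all = np.mean([e @ e.T / n_samples for e in X], axis=0)
eigvals, eigvecs = eigh(C_evoked, C_all)      # ascending eigenvalues
filters = eigvecs[:, ::-1][:, :2].T           # keep 2 strongest filters

# Project each epoch through the filters and classify with a linear SVM.
features = np.array([(filters @ e).ravel() for e in X])
scores = cross_val_score(SVC(kernel="linear"), features, y, cv=5)
print(scores.mean())
```

The paper additionally works with covariance-matrix representations of the filtered epochs; the flattened time series used here is a simpler feature choice that keeps the sketch short.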

Cite

APA

Chanpornpakdi, I., & Tanaka, T. (2023). The Role of the Eyes: Investigating Face Cognition Mechanisms Using Machine Learning and Partial Face Stimuli. IEEE Access, 11, 86122–86131. https://doi.org/10.1109/ACCESS.2023.3295118
