Classification of Targets and Distractors in an Audiovisual Attention Task Based on Electroencephalography

Citations: 1
Readers (Mendeley): 12

Abstract

Within the broader context of improving interactions between artificial intelligence and humans, the question has arisen of whether auditory and rhythmic support could increase attention to visual stimuli that do not stand out clearly from an information stream. To this end, we designed an experiment inspired by the pip-and-pop paradigm but better suited to eliciting attention and P3a event-related potentials (ERPs). In this study, the aim was to distinguish between targets and distractors based on the subject’s electroencephalography (EEG) data. We achieved this objective by employing different machine learning (ML) methods for both individual-subject (IS) and cross-subject (CS) models. Finally, we investigated which EEG channels and time points the model used to make its predictions, using saliency maps. We successfully performed the aforementioned classification task for both the IS and CS scenarios, reaching classification accuracies of up to 76%. In accordance with the literature, the model primarily relied on the parietal–occipital electrodes between 200 ms and 300 ms after the stimulus. The findings from this research contribute to the development of more effective P300-based brain–computer interfaces. Furthermore, they validate the EEG data collected in our experiment.
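The pipeline the abstract describes (classifying single EEG epochs as target vs. distractor, then inspecting which channel/time cells drive the decision) can be illustrated with a minimal sketch. Everything below is hypothetical: the dimensions, the synthetic P3a-like deflection, the choice of logistic regression, and the weight-based "saliency" are stand-ins for the paper's actual data and models, which are not reproduced here.

```python
# Illustrative sketch only: synthetic EEG epochs (channels x time points) are
# flattened and fed to a linear classifier; per-cell "saliency" is approximated
# by the absolute model weights. Not the paper's actual data or method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_epochs, n_channels, n_times = 400, 32, 128   # assumed dimensions
X = rng.normal(size=(n_epochs, n_channels, n_times))
y = rng.integers(0, 2, size=n_epochs)          # 0 = distractor, 1 = target

# Inject a class-dependent deflection on a posterior channel group within an
# assumed 200-300 ms window, mimicking a P3a-like effect (synthetic).
X[y == 1, 20:28, 50:75] += 0.5

X_flat = X.reshape(n_epochs, -1)
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Weight-based saliency map: which channel/time cells the linear model
# relies on; the peak should fall inside the injected window above.
saliency = np.abs(clf.coef_).reshape(n_channels, n_times)
peak_channel, peak_time = np.unravel_index(saliency.argmax(), saliency.shape)
```

For a cross-subject (CS) variant, the train/test split would instead hold out entire subjects rather than random epochs; gradient-based saliency (as used with deep models) would replace the weight map when the classifier is nonlinear.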

Citation (APA)
Mortier, S., Turkeš, R., De Winne, J., Van Ransbeeck, W., Botteldooren, D., Devos, P., … Verdonck, T. (2023). Classification of Targets and Distractors in an Audiovisual Attention Task Based on Electroencephalography. Sensors, 23(23). https://doi.org/10.3390/s23239588
