Using brain activity patterns to differentiate real and virtual attended targets during augmented reality scenarios

5 citations · 28 readers on Mendeley

Abstract

Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this target is real or virtual by using machine learning techniques to classify electroencephalographic (EEG) and eye tracking data collected in augmented reality scenarios. A shallow convolutional neural network classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late fusion approach that included the recorded eye tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
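The abstract mentions a multimodal late fusion of the EEG classifier with the eye tracking data. The paper's exact fusion rule is not given here, so the following is a minimal sketch under the common assumption that late fusion means a weighted average of the per-class probabilities produced by the two unimodal classifiers; the function name and the weight parameter are illustrative, not from the paper.

```python
import numpy as np

def late_fusion(p_eeg, p_eye, w_eeg=0.5):
    """Fuse per-class probabilities from two unimodal classifiers.

    p_eeg, p_eye: arrays of shape (n_windows, n_classes) with class
    probabilities from the EEG model and the eye tracking model.
    w_eeg: weight given to the EEG modality (hypothetical parameter;
    the paper may tune this per participant).
    Returns the fused class index per window,
    e.g. 0 = real target, 1 = virtual target.
    """
    p_eeg = np.asarray(p_eeg, dtype=float)
    p_eye = np.asarray(p_eye, dtype=float)
    fused = w_eeg * p_eeg + (1.0 - w_eeg) * p_eye
    return fused.argmax(axis=-1)

# Example: EEG leans toward "real", eye tracking leans toward "virtual";
# with equal weights the more confident modality wins.
labels = late_fusion([[0.6, 0.4]], [[0.2, 0.8]])
```

With equal weights the fused probabilities for the example are [0.4, 0.6], so the window is labeled 1 ("virtual"); a design like this lets the weaker modality only override the stronger one when it is markedly more confident.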

Citation (APA)

Vortmann, L. M., Schwenke, L., & Putze, F. (2021). Using brain activity patterns to differentiate real and virtual attended targets during augmented reality scenarios. Information (Switzerland), 12(6). https://doi.org/10.3390/info12060226
