Abstract
Silent speech is a convenient and natural way to perform person authentication, as users can imagine speaking their password instead of typing it. However, EEG signals contain inherent noise and complex variations, making it difficult to capture the correct information and to model uncertainty. We propose an EEG-based person authentication framework that uses variational inference to learn a simple latent representation of complex data. A variational universal background model is created by pooling the latent models of all users. A likelihood ratio of the user's claimed model to the background model is then used to test whether the claim is valid. Extensive experiments on three datasets show the advantages of our proposed framework.
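The verification step described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration: per-user Gaussian models over simulated latent features stand in for the variational latent models the paper actually learns, and the background model is built by pooling all users' data, as the abstract describes. The user names, data, and threshold are assumptions for illustration only.

```python
# Hedged sketch of likelihood-ratio authentication against a pooled
# universal background model (UBM). Gaussian models over simulated
# latent features stand in for the paper's variational latent models.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Simulated latent features for three enrolled users (hypothetical data).
users = {name: rng.normal(loc=mu, scale=1.0, size=(200, 4))
         for name, mu in [("alice", 0.0), ("bob", 3.0), ("carol", -3.0)]}

# Per-user models: fit a Gaussian to each user's latent vectors.
models = {name: (x.mean(axis=0), np.cov(x, rowvar=False))
          for name, x in users.items()}

# Universal background model: pool the latent data of all users.
pooled = np.vstack(list(users.values()))
ubm = (pooled.mean(axis=0), np.cov(pooled, rowvar=False))

def log_likelihood_ratio(sample, claimed):
    """log p(sample | claimed user's model) - log p(sample | UBM)."""
    mu_c, cov_c = models[claimed]
    mu_b, cov_b = ubm
    return (multivariate_normal.logpdf(sample, mu_c, cov_c)
            - multivariate_normal.logpdf(sample, mu_b, cov_b))

def authenticate(sample, claimed, threshold=0.0):
    # Accept the identity claim only if the claimed model explains the
    # probe sample better than the background model does.
    return log_likelihood_ratio(sample, claimed) > threshold

# A probe resembling "alice" should pass her claim but fail "bob"'s.
probe = users["alice"].mean(axis=0)
print(authenticate(probe, "alice"), authenticate(probe, "bob"))
```

A genuine claim yields a positive log-likelihood ratio (the user model fits the probe better than the pooled background), while an impostor claim yields a strongly negative one.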
Citation
Tran, H., Tran, D., Ma, W., & Nguyen, P. (2019). EEG-Based Person Authentication with Variational Universal Background Model. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11928 LNCS, pp. 418–432). Springer. https://doi.org/10.1007/978-3-030-36938-5_25