This paper presents a novel system that performs text-independent speaker authentication using new spiking neural network (SNN) architectures. Each speaker is represented by a set of prototype vectors that are trained with a standard Hebbian rule and a winner-takes-all approach. For each speaker there is a separate spiking network that computes normalized similarity scores of MFCC (Mel-Frequency Cepstral Coefficients) features with respect to speaker and background models. Experiments on the VidTimit dataset show that the system performs comparably to a benchmark method based on vector quantization. Its main property is that it can be optimized for performance, speed, and energy efficiency. A procedure to create and merge neurons is also presented, which enables adaptive, on-line training in an evolvable way. © Springer-Verlag Berlin Heidelberg 2007.
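The prototype-based scoring the abstract describes can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual SNN formulation: it replaces spiking neurons with plain vector updates, and the distance-based similarity, learning rate, and all function names are hypothetical.

```python
import math
import random

def nearest(prototypes, frame):
    """Index of the prototype closest (Euclidean) to the frame -- the 'winner'."""
    dists = [math.dist(p, frame) for p in prototypes]
    return dists.index(min(dists))

def train_wta(frames, n_prototypes=2, lr=0.1, epochs=5, seed=0):
    """Winner-takes-all training: only the winning prototype moves toward each frame.

    A crude stand-in for the paper's Hebbian/WTA learning of prototype vectors.
    """
    rng = random.Random(seed)
    prototypes = [list(rng.choice(frames)) for _ in range(n_prototypes)]
    for _ in range(epochs):
        for frame in frames:
            w = nearest(prototypes, frame)
            prototypes[w] = [p + lr * (x - p) for p, x in zip(prototypes[w], frame)]
    return prototypes

def normalized_score(frame, speaker_protos, background_protos):
    """Similarity to the speaker model, normalized against speaker + background.

    Similarity here is an assumed inverse-distance measure; the paper computes
    these scores inside per-speaker spiking networks instead.
    """
    s = 1.0 / (1.0 + min(math.dist(p, frame) for p in speaker_protos))
    b = 1.0 / (1.0 + min(math.dist(p, frame) for p in background_protos))
    return s / (s + b)
```

In use, an MFCC frame would be accepted as the claimed speaker when its normalized score exceeds a decision threshold (e.g. 0.5), with the background model playing the role of the "world" alternative hypothesis.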
CITATION STYLE
Wysoski, S. G., Benuskova, L., & Kasabov, N. (2007). Text-independent speaker authentication with spiking neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4669 LNCS, pp. 758–767). Springer Verlag. https://doi.org/10.1007/978-3-540-74695-9_78