Text-independent speaker authentication with spiking neural networks

Abstract

This paper presents a novel system that performs text-independent speaker authentication using new spiking neural network (SNN) architectures. Each speaker is represented by a set of prototype vectors that is trained with a standard Hebbian rule and a winner-takes-all approach. For every speaker there is a separate spiking network that computes normalized similarity scores of MFCC (Mel-Frequency Cepstral Coefficients) features with respect to the speaker and background models. Experiments on the VidTimit dataset show that the system performs comparably to a benchmark method based on vector quantization. The main property of the system is that it can be optimized in terms of performance, speed, and energy efficiency. A procedure to create and merge neurons is also presented, which enables adaptive, online training in an evolvable way. © Springer-Verlag Berlin Heidelberg 2007.
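
The sketch below is a minimal, rate-coded analogue of the approach described in the abstract, not the paper's spiking implementation: per-speaker prototype vectors are trained on MFCC frames with a winner-takes-all Hebbian-style update, and an utterance is scored by normalizing the speaker-model similarity against a background model. Function names such as train_prototypes and normalized_score, and all parameter values, are illustrative assumptions.

import numpy as np

def train_prototypes(frames, n_prototypes=8, lr=0.05, epochs=10, seed=0):
    """frames: (n_frames, n_mfcc) array of MFCC feature vectors."""
    rng = np.random.default_rng(seed)
    # Initialize prototypes from randomly chosen frames.
    protos = frames[rng.choice(len(frames), n_prototypes, replace=False)].copy()
    for _ in range(epochs):
        for x in frames:
            # Winner-takes-all: only the closest prototype is updated.
            winner = np.argmin(np.linalg.norm(protos - x, axis=1))
            # Hebbian-style update: move the winner toward the input frame.
            protos[winner] += lr * (x - protos[winner])
    return protos

def similarity(frames, protos):
    # Mean similarity of each frame to its nearest prototype.
    d = np.min(np.linalg.norm(frames[:, None, :] - protos[None, :, :], axis=2), axis=1)
    return float(np.mean(np.exp(-d)))

def normalized_score(frames, speaker_protos, background_protos):
    # Score in (0, 1): speaker similarity normalized by speaker + background.
    s = similarity(frames, speaker_protos)
    b = similarity(frames, background_protos)
    return s / (s + b + 1e-12)

In use, one prototype set (and, in the paper, one spiking network) would be trained per enrolled speaker, plus a background model from other speakers; a test utterance is accepted when its normalized score exceeds a decision threshold.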

Citation (APA)

Wysoski, S. G., Benuskova, L., & Kasabov, N. (2007). Text-independent speaker authentication with spiking neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4669 LNCS, pp. 758–767). Springer Verlag. https://doi.org/10.1007/978-3-540-74695-9_78
