Discussion on score normalization and language robustness in text-independent multi-language speaker verification

Abstract

In the field of speaker recognition, score normalization is a widely used and effective technique for improving recognition performance, and it continues to develop. In this paper, we compare several candidate score normalization methods and present a new implementation of speaker adaptive test normalization (ATnorm) based on a cross-similarity measurement, which does not require an extra corpus for speaker-adaptive impostor cohort selection. We also investigate the use of ATnorm to improve the language robustness of multi-language speaker verification. Experiments are conducted on the core task of the 2006 NIST Speaker Recognition Evaluation (SRE) corpus. The results show that all of the score normalization methods considered improve recognition performance, with ATnorm performing best. Moreover, ATnorm further contributes to performance as a means of achieving language robustness. © Springer-Verlag Berlin Heidelberg 2007.
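The abstract does not give the formulas, but the techniques it compares follow a common pattern: standard test normalization (Tnorm) rescales a trial score by the mean and standard deviation of the test utterance scored against a fixed impostor cohort, and ATnorm replaces the fixed cohort with the impostor models most similar to the target speaker. Below is a minimal sketch of that idea, assuming Tnorm's usual (s - mu) / sigma form and using a hypothetical per-impostor similarity vector as a stand-in for the paper's cross-similarity measurement; it is illustrative, not the authors' implementation.

import numpy as np

def tnorm(raw_score, cohort_scores):
    """Test normalization (Tnorm): rescale a trial score by the mean and
    standard deviation of the test utterance's scores against a fixed
    impostor cohort."""
    return (raw_score - cohort_scores.mean()) / cohort_scores.std()

def atnorm(raw_score, similarity_to_target, cohort_scores, cohort_size=50):
    """Speaker-adaptive Tnorm (ATnorm) sketch: restrict the cohort to the
    impostor models most similar to the target speaker, then apply Tnorm
    over that adaptive subset. `similarity_to_target` is a hypothetical
    per-impostor similarity; the paper instead derives one from a
    cross-similarity measurement so no extra adaptation corpus is needed."""
    top = np.argsort(similarity_to_target)[-cohort_size:]  # most similar impostors
    selected = cohort_scores[top]
    return (raw_score - selected.mean()) / selected.std()

# Toy usage with random scores (illustrative only):
rng = np.random.default_rng(0)
cohort_scores = rng.normal(0.0, 1.0, size=200)   # test utterance vs. 200 impostor models
similarity = rng.uniform(0.0, 1.0, size=200)     # target speaker vs. each impostor model
print(tnorm(1.5, cohort_scores))
print(atnorm(1.5, similarity, cohort_scores))

The design point of the adaptive variant is that the normalization statistics come from impostors that resemble the target, which tends to tighten the impostor score distribution per speaker; the paper's contribution is obtaining the similarity ranking without a dedicated cohort-selection corpus.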

Cite

APA

Zhao, J., Dong, Y., Zhao, X., Yang, H., Lu, L., & Wang, H. (2007). Discussion on score normalization and language robustness in text-independent multi-language speaker verification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4681 LNCS, pp. 1121–1130). Springer Verlag. https://doi.org/10.1007/978-3-540-74171-8_114
