VC-AUG: Voice Conversion Based Data Augmentation for Text-Dependent Speaker Verification


Abstract

In this paper, we focus on improving the performance of text-dependent speaker verification systems in scenarios with limited training data. Deep learning based text-dependent speaker verification systems generally require a large-scale text-dependent training set, which can be both labor-intensive and costly to collect, especially for customized new wake-up words. Recent studies have proposed voice conversion systems that can generate high-quality synthesized speech for both seen and unseen speakers. Inspired by those works, we adopt two different voice conversion methods, as well as a very simple re-sampling approach, to generate new text-dependent speech samples for data augmentation. Experimental results show that the proposed method significantly improves the Equal Error Rate from 6.51% to 4.48% in the limited-training-data scenario. In addition, we also explore out-of-set and unseen-speaker voice conversion based data augmentation.
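The abstract does not spell out how the re-sampling augmentation works; a common variant is speed perturbation, where the waveform is re-sampled by a small factor and played back at the original sample rate, shifting both speaking rate and pitch so the result behaves like a pseudo-new speaker. The sketch below illustrates that idea; the function name, the choice of linear interpolation, and the perturbation factor are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def resample_augment(wav: np.ndarray, rate: float) -> np.ndarray:
    """Re-sample `wav` by `rate` (e.g. 0.9 or 1.1) with linear
    interpolation. Played back at the original sample rate, the
    output is speed- and pitch-shifted, acting as a pseudo-speaker.
    (Hypothetical helper; not from the paper.)"""
    n_out = int(round(len(wav) / rate))
    # Fractional positions in the source signal for each output sample.
    src = np.linspace(0, len(wav) - 1, num=n_out)
    return np.interp(src, np.arange(len(wav)), wav)

# Example: perturb a 1 s, 16 kHz sine tone to simulate a slower speaker.
sr = 16000
t = np.arange(sr) / sr
wav = np.sin(2 * np.pi * 220.0 * t)
aug = resample_augment(wav, rate=0.9)
```

In practice such perturbed copies would be added to the text-dependent training set alongside the voice-converted samples, multiplying the effective number of speakers without new recordings.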

Citation (APA)

Qin, X., Yang, Y., Shi, Y., Yang, L., Wang, X., Wang, J., & Li, M. (2023). VC-AUG: Voice Conversion Based Data Augmentation for Text-Dependent Speaker Verification. In Communications in Computer and Information Science (Vol. 1765 CCIS, pp. 227–237). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-99-2401-1_21
