SVSNet: An End-to-End Speaker Voice Similarity Assessment Model


Abstract

Neural evaluation metrics for speech generation tasks have recently attracted great attention. In this paper, we propose SVSNet, the first end-to-end neural network model to assess the speaker voice similarity between converted speech and natural speech for voice conversion tasks. Unlike most neural evaluation metrics that use hand-crafted features, SVSNet directly takes the raw waveform as input to more fully exploit the speech information for prediction. SVSNet consists of encoder, co-attention, distance calculation, and prediction modules and is trained in an end-to-end manner. Experimental results on the Voice Conversion Challenge 2018 and 2020 (VCC2018 and VCC2020) datasets show that SVSNet outperforms well-known baseline systems in the assessment of speaker similarity at both the utterance and system levels.

Citation (APA)

Hu, C. H., Peng, Y. H., Yamagishi, J., Tsao, Y., & Wang, H. M. (2022). SVSNet: An End-to-End Speaker Voice Similarity Assessment Model. IEEE Signal Processing Letters, 29, 767–771. https://doi.org/10.1109/LSP.2022.3152672
