Abstract
We propose a segment-based voice conversion technique using hidden Markov model (HMM)-based speech synthesis with nonparallel training data. In the proposed technique, phoneme information with durations and a quantized F0 contour are extracted from the input speech of a source speaker and transmitted to a synthesis part. In the synthesis part, the quantized F0 symbols are used as prosodic context. A phonetically and prosodically context-dependent label sequence is generated from the transmitted phoneme and F0 symbols. Then, converted speech is generated from the label sequence with durations using the target speaker's pre-trained context-dependent HMMs. In the model training, the models of the source and target speakers can be trained separately, so there is no need to prepare parallel speech data of the source and target speakers. Objective and subjective experimental results show that segment-based voice conversion with phonetic and prosodic contexts works effectively even when parallel speech data is not available. Copyright © 2010 The Institute of Electronics, Information and Communication Engineers.
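The abstract does not specify how the F0 contour is quantized or how the symbols enter the labels. As a rough, non-authoritative sketch of the idea only, the Python code below uniformly quantizes a log-F0 contour into a few discrete symbols (plus a dedicated unvoiced symbol) and attaches a per-phoneme-segment symbol to each label as prosodic context. The function names (quantize_f0, segment_labels), the uniform binning, the majority-vote per segment, and the label format "ph/F0:k" are all illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def quantize_f0(log_f0, n_levels=4, lo=None, hi=None):
    """Quantize a frame-level log-F0 contour into symbols 0..n_levels-1.

    Unvoiced frames (log_f0 <= 0 or non-finite) map to the extra
    symbol n_levels. The bin range defaults to the voiced min/max.
    (Illustrative scheme, not the paper's method.)
    """
    log_f0 = np.asarray(log_f0, dtype=float)
    voiced = np.isfinite(log_f0) & (log_f0 > 0)
    symbols = np.full(log_f0.shape, n_levels, dtype=int)  # unvoiced symbol
    if not voiced.any():
        return symbols
    lo = np.min(log_f0[voiced]) if lo is None else lo
    hi = np.max(log_f0[voiced]) if hi is None else hi
    # Uniform bins over [lo, hi]; clip so boundary values stay in range.
    scaled = (log_f0[voiced] - lo) / max(hi - lo, 1e-9)
    symbols[voiced] = np.clip((scaled * n_levels).astype(int), 0, n_levels - 1)
    return symbols

def segment_labels(phonemes, segments, symbols, unvoiced_sym):
    """Build context-dependent labels, one per phoneme segment.

    phonemes: phoneme name per segment
    segments: (start_frame, end_frame) per segment
    symbols:  frame-level output of quantize_f0
    Each segment gets its most frequent frame symbol as prosodic context.
    """
    labels = []
    for ph, (s, e) in zip(phonemes, segments):
        seg = symbols[s:e]
        sym = int(np.bincount(seg).argmax()) if len(seg) else unvoiced_sym
        labels.append(f"{ph}/F0:{sym}")
    return labels

# Toy example: rising F0 over two vowels surrounded by silence.
f0 = np.array([0, 0, 120, 140, 160, 180, 200, 220, 0, 0], dtype=float)
log_f0 = np.zeros_like(f0)
log_f0[f0 > 0] = np.log(f0[f0 > 0])
syms = quantize_f0(log_f0, n_levels=4)
print(segment_labels(["sil", "a", "i", "sil"],
                     [(0, 2), (2, 5), (5, 8), (8, 10)],
                     syms, unvoiced_sym=4))
# -> ['sil/F0:4', 'a/F0:1', 'i/F0:3', 'sil/F0:4']
```

Such a label sequence, combined with the transmitted durations, is the kind of input the synthesis part would feed to the target speaker's context-dependent HMMs; the mapping shown here is only one plausible realization.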
Citation
Nose, T., Ota, Y., & Kobayashi, T. (2010). HMM-based voice conversion using quantized F0 context. IEICE Transactions on Information and Systems, E93-D(9), 2483–2490. https://doi.org/10.1587/transinf.E93.D.2483