Transfer Learning From Speech Synthesis to Voice Conversion With Non-Parallel Training Data

Title:
Transfer Learning From Speech Synthesis to Voice Conversion With Non-Parallel Training Data
Journal Title:
IEEE/ACM Transactions on Audio, Speech, and Language Processing
Publication Date:
17 March 2021
Citation:
Zhang, M., Zhou, Y., Zhao, L., & Li, H. (2021). Transfer Learning From Speech Synthesis to Voice Conversion With Non-Parallel Training Data. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, 1290–1302. https://doi.org/10.1109/taslp.2021.3066047
Abstract:
This paper presents a novel framework that builds a voice conversion (VC) system by learning from a text-to-speech (TTS) synthesis system, which we call TTS-VC transfer learning. We first develop a multi-speaker speech synthesis system with a sequence-to-sequence encoder-decoder architecture, where the encoder extracts robust linguistic representations of text, and the decoder, conditioned on a target speaker embedding, takes the context vectors and the attention recurrent network cell output to generate target acoustic features. We take advantage of the fact that the TTS system maps input text to speaker-independent context vectors, and reuse this mapping to supervise the training of the latent representations of an encoder-decoder voice conversion system. In the voice conversion system, the encoder takes speech instead of text as input, while the decoder is functionally similar to the TTS decoder. As we condition the decoder on a speaker embedding, the system can be trained on non-parallel data for any-to-any voice conversion. During voice conversion training, we present text to the speech synthesis network and speech to the voice conversion network. At run-time, the voice conversion network uses its own encoder-decoder architecture. Experiments show that the proposed approach consistently outperforms two competitive voice conversion baselines, namely phonetic posteriorgram and variational autoencoder methods, in terms of speech quality, naturalness, and speaker similarity.
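The core idea in the abstract is that the VC encoder's latent vectors are supervised to match the TTS encoder's speaker-independent context vectors, so that one decoder (conditioned on a speaker embedding) can serve both paths. A minimal sketch of that supervision signal, assuming a simple L2 objective over frame-aligned vectors (the function name and toy data are illustrative, not the authors' code):

```python
def l2_loss(vc_latents, tts_contexts):
    """Mean squared error between VC encoder latents and TTS context vectors.

    Each argument is a list of frame vectors of equal length and dimension.
    Driving this loss toward zero pushes the VC encoder to produce the same
    speaker-independent representation that the TTS text encoder produces.
    """
    assert len(vc_latents) == len(tts_contexts)
    total, count = 0.0, 0
    for latent, context in zip(vc_latents, tts_contexts):
        for a, b in zip(latent, context):
            total += (a - b) ** 2
            count += 1
    return total / count

# Toy example: two 3-dimensional frames from each encoder.
vc = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
tts = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
print(l2_loss(vc, tts))  # 0.0 when the two encoders agree exactly
```

This sketch only illustrates the transfer-learning interface; the paper's actual training pairs this supervision with the decoder's acoustic-feature reconstruction loss.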
License type:
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
Funding Info:
This research / project is supported by the National Research Foundation Singapore - AI Singapore Programme
Grant Reference no. : AISG-100E-2018-006, AISG-GC-2019-002

This research / project is supported by the National Research Foundation Singapore - National Robotics Programme
Grant Reference no. : 192 25 00054

This research / project is supported by the Agency for Science, Technology and Research (A*STAR) - RIE2020 Advanced Manufacturing and Engineering Programmatic Grants
Grant Reference no. : A1687b0033, and A18A2b0046

This research / project is supported by the National Key Research and Development Program of China - N/A
Grant Reference no. : 2018YFB1305203, 2020YFC2004003
Description:
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, or for resale or redistribution to servers or lists.
ISSN:
2329-9290
2329-9304