Expressive TTS Training With Frame and Style Reconstruction Loss

Title:
Expressive TTS Training With Frame and Style Reconstruction Loss
Journal Title:
IEEE/ACM Transactions on Audio, Speech, and Language Processing
Publication Date:
30 April 2021
Citation:
Liu, R., Sisman, B., Gao, G., & Li, H. (2021). Expressive TTS Training With Frame and Style Reconstruction Loss. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, 1806–1818. https://doi.org/10.1109/taslp.2021.3076369
Abstract:
We propose a novel training strategy for a Tacotron-based text-to-speech (TTS) system to improve the expressiveness of speech. One of the key challenges in prosody modeling is the lack of a reference, which makes explicit modeling difficult. The proposed technique does not require prosody annotations in the training data. Nor does it attempt to model prosody explicitly; rather, it encodes the association between input text and its prosody styles using a Tacotron-based TTS framework. Our proposed idea marks a departure from the style token paradigm, where prosody is explicitly modeled by a bank of prosody embeddings. The proposed training strategy adopts a combination of two objective functions: 1) a frame-level reconstruction loss, calculated between the synthesized and target spectral features; and 2) an utterance-level style reconstruction loss, calculated between the deep style features of the synthesized and target speech. The style reconstruction loss is formulated as a perceptual loss to ensure that utterance-level speech style is taken into consideration during training. Experiments show that the proposed training strategy achieves remarkable performance, outperforming a state-of-the-art baseline in both naturalness and expressiveness. To the best of our knowledge, this is the first study to incorporate utterance-level perceptual quality as a loss function in Tacotron training for improved expressiveness.
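The two-term objective described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the choice of L1 for the frame loss and mean squared error for the style loss, and the balancing weight `alpha` are all assumptions. In the paper, the deep style features would come from a pre-trained style recognizer; here they are simply passed in as arrays.

```python
import numpy as np

def frame_reconstruction_loss(pred_mel, target_mel):
    """Frame-level loss between synthesized and target spectral features.
    Sketched here as a mean absolute error over all frames and bins."""
    return np.mean(np.abs(pred_mel - target_mel))

def style_reconstruction_loss(pred_style, target_style):
    """Utterance-level perceptual loss between deep style features
    (assumed to be embeddings from a pre-trained style recognizer).
    Sketched here as a mean squared error."""
    return np.mean((pred_style - target_style) ** 2)

def total_loss(pred_mel, target_mel, pred_style, target_style, alpha=1.0):
    """Combined training objective: frame loss plus weighted style loss.
    `alpha` is a hypothetical balancing weight, not taken from the paper."""
    return (frame_reconstruction_loss(pred_mel, target_mel)
            + alpha * style_reconstruction_loss(pred_style, target_style))
```

In practice both terms would be computed on a deep-learning framework's tensors inside the training loop, so that gradients from the utterance-level style term propagate back through the Tacotron decoder alongside the frame-level term.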
License type:
Attribution 4.0 International (CC BY 4.0)
Funding Info:
This research / project is supported by the SUTD - Start-up Grant Artificial Intelligence for Human Voice Conversion
Grant Reference no. : SRG ISTD 2020 158

This research / project is supported by the National Research Foundation Singapore - AI Singapore Programme
Grant Reference no. : AISG-GC-2019-002, AISG-100E-2018-006

This research / project is supported by the National Research Foundation Singapore - National Robotics Programme
Grant Reference no. : 192 25 00054

This research / project is supported by the National Research Foundation Singapore - RIE2020 Advanced Manufacturing and Engineering Programmatic Grants
Grant Reference no. : A1687b0033, and A18A2b0046
ISSN:
2329-9290
2329-9304