EPIC TTS Models: Empirical Pruning Investigations Characterizing Text-To-Speech Models

Title:
EPIC TTS Models: Empirical Pruning Investigations Characterizing Text-To-Speech Models
Other Titles:
Interspeech 2022
Publication Date:
16 September 2022
Citation:
Lam, P., Zhang, H., Chen, N., & Sisman, B. (2022). EPIC TTS Models: Empirical Pruning Investigations Characterizing Text-To-Speech Models. Interspeech 2022. https://doi.org/10.21437/interspeech.2022-10626
Abstract:
Neural models are known to be over-parameterized, and recent work has shown that sparse text-to-speech (TTS) models can outperform dense ones. Although a plethora of sparsity methods have been proposed for other domains, such methods have rarely been applied in TTS. In this work, we seek to answer the question: how do selected sparsity techniques affect performance and model complexity? We compare a Tacotron2 baseline against the results of applying five techniques, evaluating performance in terms of naturalness, intelligibility, and prosody while reporting model size and training time. Complementary to prior research, we find that pruning before or during training can achieve performance similar to pruning after training while training much faster, and that removing entire neurons degrades performance far more than removing individual parameters. To the best of our knowledge, this is the first work that compares sparsity paradigms in text-to-speech synthesis.
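The abstract contrasts removing individual parameters (unstructured pruning) with removing entire neurons (structured pruning). As a minimal sketch (not code from the paper, and using a toy weight matrix rather than a Tacotron2 layer), magnitude-based versions of both granularities can be written as:

```python
def prune_unstructured(weights, sparsity):
    """Zero out the smallest-magnitude individual weights."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)                  # number of weights to remove
    threshold = flat[k - 1] if k > 0 else -1.0
    return [[0.0 if abs(w) <= threshold else w for w in row] for row in weights]

def prune_structured(weights, sparsity):
    """Zero out entire neurons (rows) with the smallest L1 norm."""
    order = sorted(range(len(weights)), key=lambda i: sum(abs(w) for w in weights[i]))
    drop = set(order[: int(len(weights) * sparsity)])
    return [[0.0] * len(row) if i in drop else row for i, row in enumerate(weights)]

# Hypothetical 3x3 weight matrix; row 1 is a "weak" neuron with a small L1 norm.
W = [[0.9, -0.02, 0.4],
     [0.05, 0.3, -0.03],
     [-0.7, 0.8, 0.6]]

unstructured = prune_unstructured(W, 1 / 3)  # zeros the 3 smallest weights, spread across rows
structured = prune_structured(W, 1 / 3)      # zeros the weakest neuron (all of row 1)
```

At the same sparsity level, the structured variant wipes out a whole row, including its large weight 0.3, which mirrors the paper's finding that removing entire neurons is more damaging than removing the same fraction of individual parameters.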
License type:
Publisher Copyright
Funding Info:
There was no specific funding for this research.
ISSN:
2308-457X