Multilingual Bottle-Neck Feature Learning from Untranscribed Speech

Title:
Multilingual Bottle-Neck Feature Learning from Untranscribed Speech
Other Titles:
ASRU 2017
DOI:
Publication Date:
16 December 2017
Citation:
H. Chen, C.-C. Leung, L. Xie, B. Ma, and H. Li, "Multilingual Bottle-Neck Feature Learning from Untranscribed Speech," in Proc. ASRU, 2017, pp. 727–733.
Abstract:
We propose to learn a low-dimensional feature representation for multiple languages without access to their manual transcriptions. The multilingual features are extracted from the shared bottleneck layer of a multi-task learning deep neural network trained with unsupervised phoneme-like labels. These labels are obtained from language-dependent Dirichlet process Gaussian mixture models (DPGMMs). Vocal tract length normalization (VTLN) is applied to the mel-frequency cepstral coefficients to reduce talker variation before the DPGMMs are trained. The proposed features are evaluated with the ABX phoneme discriminability test of the Zero Resource Speech Challenge 2017. Our experiments show that the proposed features perform well across different languages, and they consistently outperform our previously proposed DPGMM posteriorgrams, which achieved the best performance in the 2015 edition of the same challenge.
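The architecture described above can be sketched as follows: a stack of layers shared across languages ends in a low-dimensional bottleneck, and each language gets its own softmax head predicting that language's DPGMM cluster labels; the bottleneck activation is the multilingual feature. This is an illustrative numpy sketch only, not the authors' implementation; all dimensions, layer counts, and names (`FEAT_DIM`, `BOTTLENECK`, `lang_a`, cluster counts, etc.) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration (not from the paper).
FEAT_DIM = 39      # e.g. VTLN-normalized MFCCs with deltas
HIDDEN = 64
BOTTLENECK = 16    # low-dimensional shared bottleneck
N_CLUSTERS = {"lang_a": 50, "lang_b": 40}  # DPGMM cluster counts per language

# Layers shared by all languages, up to and including the bottleneck.
W1 = rng.standard_normal((FEAT_DIM, HIDDEN)) * 0.1
W_bn = rng.standard_normal((HIDDEN, BOTTLENECK)) * 0.1

# One output head per language, predicting that language's DPGMM labels.
heads = {lang: rng.standard_normal((BOTTLENECK, k)) * 0.1
         for lang, k in N_CLUSTERS.items()}

def bottleneck_features(x):
    """Forward pass through the shared stack; the bottleneck activation
    serves as the multilingual feature representation."""
    h = np.tanh(x @ W1)
    return np.tanh(h @ W_bn)

def head_posteriors(x, lang):
    """Language-specific softmax over that language's DPGMM cluster labels
    (the multi-task training targets)."""
    z = bottleneck_features(x) @ heads[lang]
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Example: extract 16-dimensional features for a batch of 5 frames.
frames = rng.standard_normal((5, FEAT_DIM))
feats = bottleneck_features(frames)
print(feats.shape)  # (5, 16)
```

In training, each mini-batch would be routed through the shared layers and only the head matching its language, so the bottleneck is shaped by all languages jointly; at test time only `bottleneck_features` is used.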
License type:
PublisherCopyrights
Funding Info:
National Natural Science Foundation of China (Grant No. 61571363) and China Scholarship Council (Grant No. 201606291069)
Description:
ISBN:

Files uploaded:

File: asru2017-chj.pdf (356.94 KB, PDF)