Ma, Y., Lee, K. A., Hautamaki, V., & Li, H. (2021). PL-EESR: Perceptual Loss Based End-to-End Robust Speaker Representation Extraction. 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). https://doi.org/10.1109/asru51503.2021.9688031
Abstract:
Speech enhancement aims to improve the perceptual quality of a speech signal by suppressing background noise. However, excessive suppression may lead to speech distortion and loss of speaker information, which degrades the performance of speaker embedding extraction. To alleviate this problem, we propose an end-to-end deep learning framework, dubbed PL-EESR, for robust speaker representation extraction. This framework is optimized based on feedback from the speaker identification task and the high-level perceptual deviation between the raw speech signal and its noisy version. We conducted speaker verification tasks in both noisy and clean environments to evaluate our system. Compared to the baseline, our method shows better performance in both clean and noisy environments, which means our method can not only enhance speaker-relevant information but also avoid introducing distortion.
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the Agency for Science, Technology and Research - National Robotics Program under Human-Robot Interaction Phase 1
Grant Reference no. : 192 25 00054
This research / project is supported by the Agency for Science, Technology and Research - Feasibility Study Scheme
Grant Reference no. : FS-2021-001
This research / project is supported by the National Research Foundation Singapore - RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Grant
Grant Reference no. : A18A2b0046
This research / project is supported by the National Research Foundation Singapore - AI Singapore Programme
Grant Reference no. : AISG-100E-2018-006