Attentive Merging of Hidden Embeddings from Pre-trained Speech Model for Anti-spoofing Detection

Title:
Attentive Merging of Hidden Embeddings from Pre-trained Speech Model for Anti-spoofing Detection
Journal Title:
Interspeech 2024
Publication Date:
01 September 2024
Citation:
Pan, Z., Liu, T., Sailor, H. B., & Wang, Q. (2024). Attentive Merging of Hidden Embeddings from Pre-trained Speech Model for Anti-spoofing Detection. Interspeech 2024, 2090–2094. https://doi.org/10.21437/interspeech.2024-1472
Abstract:
Self-supervised learning (SSL) speech representation models, trained on large speech corpora, have proven effective at extracting hierarchical speech embeddings through multiple transformer layers. However, how these embeddings behave in specific downstream tasks remains unclear. This paper investigates the multi-layer behavior of the WavLM model in anti-spoofing and proposes an attentive merging method to leverage the hierarchical hidden embeddings. Results demonstrate the feasibility of fine-tuning WavLM to achieve the best equal error rates (EERs) of 0.65%, 3.50%, and 3.19% on the ASVspoof 2019 LA, 2021 LA, and 2021 DF evaluation sets, respectively. Notably, we find that the early transformer layers of the WavLM large model contribute significantly to the anti-spoofing task, enabling computational savings by using only part of the pre-trained model.
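
For orientation before reading the full paper: the sketch below illustrates one plausible form of attentive layer merging in PyTorch, where attention weights are computed over the stacked per-layer hidden embeddings and used to form a weighted sum. The AttentiveMerge module and its scoring MLP are illustrative assumptions for this sketch, not the authors' exact architecture; see the attached PDF for the published design.

```python
import torch
import torch.nn as nn


class AttentiveMerge(nn.Module):
    """Attentively merge hidden embeddings from all transformer layers.

    A minimal sketch of the idea (an assumption, not the paper's exact
    method): a small scoring MLP rates each layer from its time-averaged
    embedding, and a softmax over layers weights the merged sum.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, dim // 2),
            nn.ReLU(),
            nn.Linear(dim // 2, 1),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, num_layers, time, dim)
        pooled = hidden_states.mean(dim=2)           # (batch, num_layers, dim)
        weights = self.score(pooled).softmax(dim=1)  # (batch, num_layers, 1)
        # Broadcast per-layer weights over the time and feature dims.
        merged = (weights.unsqueeze(-1) * hidden_states).sum(dim=1)
        return merged                                # (batch, time, dim)


# Example with WavLM hidden states via HuggingFace Transformers:
# from transformers import WavLMModel
# wavlm = WavLMModel.from_pretrained("microsoft/wavlm-large")
# out = wavlm(wav, output_hidden_states=True)
# hs = torch.stack(out.hidden_states, dim=1)  # (batch, layers+1, time, dim)
# merged = AttentiveMerge(hs.size(-1))(hs)
```

Because the merge weights are computed per layer, a variant that uses only the early transformer layers (as the abstract suggests) simply stacks fewer hidden states before merging.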
License type:
Publisher Copyright
Funding Info:
This research is supported by the Ministry of Digital Development and Information under the Online Trust and Safety (OTS) Research Programme.
Grant Reference no. : MCI-OTS-001
ISSN:
2958-1796
Files uploaded:

pan24c-interspeech.pdf (357.42 KB, PDF)