Discriminative speaker embedding with serialized multi-layer multi-head attention

Title:
Discriminative speaker embedding with serialized multi-layer multi-head attention
Journal Title:
Speech Communication
Publication Date:
13 September 2022
Citation:
Zhu, H., Lee, K. A., & Li, H. (2022). Discriminative speaker embedding with serialized multi-layer multi-head attention. Speech Communication, 144, 89–100. https://doi.org/10.1016/j.specom.2022.09.003
Abstract:
In this paper, a serialized multi-layer multi-head attention is proposed for extracting neural speaker embeddings in the text-independent speaker verification task. Most recent approaches apply a single attention layer to aggregate frame-level features. Inspired by the Transformer network, the proposed serialized attention contains a stack of self-attention layers. Unlike parallel multi-head attention, the attentive statistics are aggregated in a serialized manner to generate the utterance-level embedding, which is propagated to the next layer via a residual connection. We further propose an input-aware query for each utterance, computed with statistics pooling. To evaluate the quality of the learned speaker embeddings, the proposed serialized attention mechanism is applied to two widely used neural speaker embedding architectures and validated on several benchmark datasets covering various languages and acoustic conditions, including VoxCeleb1, SITW, and CN-Celeb. Experimental results demonstrate that serialized attention achieves better speaker verification performance.
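The mechanism the abstract describes can be illustrated with a minimal sketch: each layer builds an input-aware query by statistics pooling (mean and standard deviation over frames), attends over the frame-level features, and accumulates the pooled vector into the utterance embedding through a residual-style sum. This is a hypothetical, heavily simplified illustration, not the authors' implementation; the projection matrix `W` and all function names are assumptions for demonstration only.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def serialized_attention_pooling(frames, num_layers=2, seed=0):
    """Toy sketch of serialized attentive pooling (illustrative only).

    frames: list of T frame-level feature vectors, each of dimension d.
    Each layer: (1) statistics pooling yields an input-aware query,
    (2) frames are scored against that query and softmax-weighted,
    (3) the attentive mean is added to the utterance embedding and
    propagated to the next layer via a residual connection.
    """
    rng = random.Random(seed)
    T, d = len(frames), len(frames[0])
    # Toy random projection from the (2d)-dim statistics to a d-dim query.
    W = [[rng.gauss(0, 1 / math.sqrt(2 * d)) for _ in range(d)]
         for _ in range(2 * d)]
    utterance = [0.0] * d
    x = [list(f) for f in frames]
    for _ in range(num_layers):
        # Input-aware query: mean and std over the current frame features.
        mean = [sum(f[j] for f in x) / T for j in range(d)]
        std = [math.sqrt(sum((f[j] - mean[j]) ** 2 for f in x) / T)
               for j in range(d)]
        stats = mean + std  # (2d,)
        q = [sum(stats[i] * W[i][j] for i in range(2 * d)) for j in range(d)]
        # Attention weights over the T frames.
        weights = softmax([sum(f[j] * q[j] for j in range(d)) for f in x])
        pooled = [sum(w * f[j] for w, f in zip(weights, x)) for j in range(d)]
        # Serialized aggregation into the utterance-level embedding.
        utterance = [u + p for u, p in zip(utterance, pooled)]
        # Residual connection propagates the pooled vector to the next layer.
        x = [[f[j] + pooled[j] for j in range(d)] for f in x]
    return utterance
```

In this sketch the utterance embedding is the running sum of per-layer attentive means, mirroring the serialized (rather than parallel) aggregation the paper proposes.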
License type:
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
Funding Info:
There was no specific funding for this research.
ISSN:
0167-6393
Files uploaded:
journal-nn-feb2023.pdf (369.32 KB, PDF)