Serialized Multi-Layer Multi-Head Attention for Neural Speaker Embedding

Title:
Serialized Multi-Layer Multi-Head Attention for Neural Speaker Embedding
Journal Title:
Interspeech 2021
Publication Date:
30 August 2021
Citation:
Zhu, H., Lee, K. A., & Li, H. (2021). Serialized Multi-Layer Multi-Head Attention for Neural Speaker Embedding. Interspeech 2021. doi:10.21437/Interspeech.2021-2210
Abstract:
This paper proposes a serialized multi-layer multi-head attention for neural speaker embedding in text-independent speaker verification. In prior works, frame-level features from one layer are aggregated to form an utterance-level representation. Inspired by the Transformer network, our proposed method utilizes the hierarchical architecture of stacked self-attention mechanisms to derive refined features that are more correlated with speakers. The serialized attention mechanism contains a stack of self-attention modules to create fixed-dimensional representations of speakers. Instead of utilizing multi-head attention in parallel, the proposed serialized multi-layer multi-head attention is designed to aggregate and propagate attentive statistics from one layer to the next in a serialized manner. In addition, we employ an input-aware query for each utterance with the statistics pooling. With more layers stacked, the neural network can learn more discriminative speaker embeddings. Experimental results on the VoxCeleb1 and SITW datasets show that our proposed method outperforms baseline methods, including x-vectors and x-vectors with conventional attentive pooling, by 9.7% in EER and 8.1% in DCF10^-2.
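For a concrete picture of the mechanism the abstract describes, the following PyTorch sketch is one plausible rendering, not the authors' released code: each layer applies multi-head self-attention over frames, pools attentive statistics using an input-aware query derived from statistics pooling, and the per-layer statistics are aggregated serially into a fixed-dimensional embedding. All names (SerializedAttentionLayer, SerializedMultiLayerAttention, query_proj, emb_dim, etc.) are illustrative assumptions, as is the choice to sum the projected per-layer statistics.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SerializedAttentionLayer(nn.Module):
    """One layer of the serialized stack (a sketch, not the paper's exact
    architecture): self-attention over frames, then attentive statistics
    pooling whose query is derived from the input itself."""
    def __init__(self, feat_dim: int, num_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)
        # Input-aware query: map pooled statistics (mean ++ std) to a query.
        self.query_proj = nn.Linear(2 * feat_dim, feat_dim)
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, x: torch.Tensor):
        # x: (batch, frames, feat_dim) frame-level features
        h, _ = self.self_attn(x, x, x)
        h = self.norm(x + h)  # residual connection + layer norm
        # Statistics pooling over frames -> input-aware query vector.
        stats = torch.cat([h.mean(dim=1), h.std(dim=1)], dim=-1)
        q = self.query_proj(stats).unsqueeze(1)  # (batch, 1, feat_dim)
        # Frame-level attention weights conditioned on the input-aware query.
        w = F.softmax(self.score(torch.tanh(h + q)), dim=1)  # (batch, frames, 1)
        mean = (w * h).sum(dim=1)
        std = (w * (h - mean.unsqueeze(1)) ** 2).sum(dim=1).clamp(min=1e-6).sqrt()
        layer_stat = torch.cat([mean, std], dim=-1)  # attentive statistics
        return h, layer_stat

class SerializedMultiLayerAttention(nn.Module):
    """Stack of serialized layers; per-layer attentive statistics are
    projected and summed (one assumed aggregation scheme) into a single
    utterance-level speaker embedding."""
    def __init__(self, feat_dim: int, num_layers: int = 3, emb_dim: int = 256):
        super().__init__()
        self.layers = nn.ModuleList(
            [SerializedAttentionLayer(feat_dim) for _ in range(num_layers)])
        self.to_emb = nn.Linear(2 * feat_dim, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        emb = 0.0
        for layer in self.layers:
            x, stat = layer(x)  # refined features propagate serially
            emb = emb + self.to_emb(stat)  # aggregate statistics across layers
        return emb

# Usage: 200 frames of 80-dim features -> fixed-dimensional embedding.
feats = torch.randn(8, 200, 80)
model = SerializedMultiLayerAttention(feat_dim=80)
print(model(feats).shape)  # torch.Size([8, 256])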
License type:
Publisher Copyright
Funding Info:
This research work is supported by Programmatic Grant from the Singapore Government’s Research, Innovation and Enterprise 2020 plan (Advanced Manufacturing and Engineering domain).
ISSN:
1990-9772
Files uploaded:
zhu21c-interspeech.pdf (400.66 KB, PDF)