ADAST: Attentive Cross-Domain EEG-Based Sleep Staging Framework With Iterative Self-Training

Title:
ADAST: Attentive Cross-Domain EEG-Based Sleep Staging Framework With Iterative Self-Training
Journal Title:
IEEE Transactions on Emerging Topics in Computational Intelligence
Publication Date:
10 August 2022
Citation:
Eldele, E., Ragab, M., Chen, Z., Wu, M., Kwoh, C.-K., Li, X., & Guan, C. (2022). ADAST: Attentive Cross-Domain EEG-Based Sleep Staging Framework With Iterative Self-Training. IEEE Transactions on Emerging Topics in Computational Intelligence, 1–12. https://doi.org/10.1109/tetci.2022.3189695
Abstract:
Sleep staging is of great importance in the diagnosis and treatment of sleep disorders. Recently, numerous data-driven deep learning models have been proposed for automatic sleep staging. They mainly train the model on a large public labeled sleep dataset and test it on a smaller one with subjects of interest. However, they usually assume that the training and test data are drawn from the same distribution, which may not hold in real-world scenarios. Unsupervised domain adaptation (UDA) has recently been developed to handle this domain shift problem. However, previous UDA methods applied to sleep staging have two main limitations. First, they rely on a totally shared model for the domain alignment, which may lose the domain-specific information during feature extraction. Second, they only align the source and target distributions globally without considering the class information in the target domain, which hinders the classification performance of the model during testing. In this work, we propose a novel adversarial learning framework called ADAST to tackle the domain shift problem in the unlabeled target domain. First, we develop an unshared attention mechanism to preserve the domain-specific features in both domains. Second, we design an iterative self-training strategy to improve the classification performance on the target domain via target-domain pseudo labels. We also propose dual distinct classifiers to increase the robustness and quality of the pseudo labels. The experimental results on six cross-domain scenarios validate the efficacy of our proposed framework and its advantage over state-of-the-art UDA methods.
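
Illustrative sketch (not the authors' released code): the PyTorch-style snippet below shows one plausible reading of the iterative self-training step described in the abstract, where two distinct classifiers score unlabeled target-domain features and only predictions on which both agree with sufficient confidence are kept as pseudo labels for the next training round. All names, dimensions, and the confidence threshold are hypothetical assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES = 5          # typical sleep stages: W, N1, N2, N3, REM
FEAT_DIM = 128         # hypothetical feature dimension from the feature extractor
CONF_THRESHOLD = 0.5   # hypothetical cutoff; kept low here only so the toy demo finds agreeing samples

# Two classifiers with different initializations stand in for the "dual distinct classifiers".
clf_a = nn.Linear(FEAT_DIM, N_CLASSES)
clf_b = nn.Linear(FEAT_DIM, N_CLASSES)

def make_pseudo_labels(target_feats: torch.Tensor):
    """Return (indices, labels) for target samples where both classifiers
    predict the same class and both are confident."""
    with torch.no_grad():
        prob_a = F.softmax(clf_a(target_feats), dim=1)
        prob_b = F.softmax(clf_b(target_feats), dim=1)
        conf_a, pred_a = prob_a.max(dim=1)
        conf_b, pred_b = prob_b.max(dim=1)
        keep = (pred_a == pred_b) & (conf_a > CONF_THRESHOLD) & (conf_b > CONF_THRESHOLD)
    return keep.nonzero(as_tuple=True)[0], pred_a[keep]

# Toy iterative loop on random "target features" to show the control flow only.
target_feats = torch.randn(256, FEAT_DIM)
optimizer = torch.optim.Adam(list(clf_a.parameters()) + list(clf_b.parameters()), lr=1e-3)

for round_idx in range(3):                      # a few self-training rounds
    idx, pseudo_labels = make_pseudo_labels(target_feats)
    if idx.numel() == 0:
        break                                   # no confident agreement yet
    logits_a = clf_a(target_feats[idx])
    logits_b = clf_b(target_feats[idx])
    loss = F.cross_entropy(logits_a, pseudo_labels) + F.cross_entropy(logits_b, pseudo_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"round {round_idx}: {idx.numel()} pseudo-labeled samples, loss {loss.item():.3f}")

In the paper's setting the pseudo labels would be produced from features of the domain-adapted feature extractor trained adversarially against the source domain; here random features stand in solely to demonstrate the agreement-filtering and retraining loop.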
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the A*STAR SINGA Scholarship.
Grant Reference no.: N.A
Description:
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
2471-285X
Files uploaded:

adast.pdf (2.34 MB, PDF)