Self-Supervised Contrastive Representation Learning for Semi-Supervised Time-Series Classification

Title:
Self-Supervised Contrastive Representation Learning for Semi-Supervised Time-Series Classification
Journal Title:
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publication Date:
28 August 2023
Citation:
Eldele, E., Ragab, M., Chen, Z., Wu, M., Kwoh, C.-K., Li, X., & Guan, C. (2023). Self-Supervised Contrastive Representation Learning for Semi-Supervised Time-Series Classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–15. https://doi.org/10.1109/tpami.2023.3308189
Abstract:
Learning time-series representations when only unlabeled data or few labeled samples are available can be a challenging task. Recently, contrastive self-supervised learning has shown great improvement in extracting useful representations from unlabeled data by contrasting different augmented views of the data. In this work, we propose a novel Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC) that learns representations from unlabeled data with contrastive learning. Specifically, we propose time-series-specific weak and strong augmentations and use their views to learn robust temporal relations in the proposed temporal contrasting module, while the proposed contextual contrasting module learns discriminative representations. Additionally, we conduct a systematic study of time-series data augmentation selection, which is a key component of contrastive learning. We also extend TS-TCC to the semi-supervised setting and propose Class-Aware TS-TCC (CA-TCC), which benefits from the few available labeled samples to further improve the representations learned by TS-TCC. Specifically, we leverage the robust pseudo labels produced by TS-TCC to realize a class-aware contrastive loss. Extensive experiments show that linear evaluation of the features learned by our proposed framework performs comparably with fully supervised training. Additionally, our framework shows high efficiency in few-labeled-data and transfer-learning scenarios.
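The abstract's core ingredients, weak/strong time-series augmentations and a contrastive loss over embedded views, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the specific transforms (jitter-and-scale as the weak view, segment permutation as the strong view) and the NT-Xent loss are assumed, standard choices for this style of contrastive learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_augment(x, scale_range=(0.9, 1.1)):
    """Weak view: jitter-and-scale (illustrative; the paper's exact transforms may differ)."""
    scale = rng.uniform(*scale_range)
    noise = rng.normal(0.0, 0.01, size=x.shape)
    return x * scale + noise

def strong_augment(x, n_segments=4):
    """Strong view: permute random segments of the series, then add jitter (illustrative)."""
    segments = np.array_split(x, n_segments)
    rng.shuffle(segments)            # shuffle segment order in place
    permuted = np.concatenate(segments)
    return permuted + rng.normal(0.0, 0.05, size=x.shape)

def nt_xent_loss(z1, z2, temperature=0.5):
    """Standard NT-Xent contrastive loss over two batches of view embeddings.

    Each row of z1 is a positive pair with the same row of z2; all other
    rows in the combined batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

In a full pipeline the two augmented views would pass through an encoder before the loss; here the loss is applied to embeddings directly to keep the sketch self-contained. The class-aware variant described in the abstract would additionally treat samples sharing a pseudo label as positives.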
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the A*STAR - AME Programmatic Funds
Grant Reference no. : A20H6b0151

This research / project is supported by the A*STAR - Career Development Award
Grant Reference no. : C210112046

This research / project is supported by the Ministry of Education - Academic Research Tier-2 Grant
Grant Reference no. : MOE2019-T2-2-175
Description:
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
2160-9292
1939-3539
0162-8828
Files uploaded:
postprint-catcc.pdf (2.68 MB, PDF) — available on request