LE-UDA: Label-efficient unsupervised domain adaptation for medical image segmentation

Title:
LE-UDA: Label-efficient unsupervised domain adaptation for medical image segmentation
Journal Title:
IEEE Transactions on Medical Imaging
Publication Date:
13 October 2022
Citation:
Zhao, Z., Zhou, F., Xu, K., Zeng, Z., Guan, C., & Zhou, S. K. (2022). LE-UDA: Label-efficient unsupervised domain adaptation for medical image segmentation. IEEE Transactions on Medical Imaging. https://doi.org/10.1109/tmi.2022.3214766
Abstract:
While deep learning methods hitherto have achieved considerable success in medical image segmentation, they are still hampered by two limitations: (i) reliance on large-scale well-labeled datasets, which are difficult to curate due to the expert-driven and time-consuming nature of pixel-level annotations in clinical practices, and (ii) failure to generalize from one domain to another, especially when the target domain is a different modality with severe domain shifts. Recent unsupervised domain adaptation (UDA) techniques leverage abundant labeled source data together with unlabeled target data to reduce the domain gap, but these methods degrade significantly with limited source annotations. In this study, we address this underexplored UDA problem, investigating a challenging but valuable realistic scenario, where the source domain not only exhibits domain shift w.r.t. the target domain but also suffers from label scarcity. In this regard, we propose a novel and generic framework called “Label-Efficient Unsupervised Domain Adaptation” (LE-UDA). In LE-UDA, we construct self-ensembling consistency for knowledge transfer between both domains, as well as a self-ensembling adversarial learning module to achieve better feature alignment for UDA. To assess the effectiveness of our method, we conduct extensive experiments on two different tasks for cross-modality segmentation between MRI and CT images. Experimental results demonstrate that the proposed LE-UDA can efficiently leverage limited source labels to improve cross-domain segmentation performance, outperforming state-of-the-art UDA approaches in the literature.
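The abstract's "self-ensembling consistency for knowledge transfer" refers to a mean-teacher style scheme in which a teacher model is an exponential moving average (EMA) of the student and the two are encouraged to agree on unlabeled data. The sketch below illustrates that general mechanism only; the function names, the EMA rate, and the use of a mean-squared-error consistency loss are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """EMA step: the teacher's weights slowly track the student's.
    alpha is an assumed smoothing rate, not taken from the paper."""
    return alpha * teacher_w + (1.0 - alpha) * student_w

def consistency_loss(student_pred, teacher_pred):
    """Penalize disagreement between student and teacher predictions
    on unlabeled target-domain inputs (MSE is one common choice)."""
    return float(np.mean((student_pred - teacher_pred) ** 2))

# Toy weights standing in for network parameters.
student = np.array([0.2, 0.8, 0.5])
teacher = np.zeros_like(student)
for _ in range(100):
    teacher = ema_update(teacher, student)
# After many steps the teacher converges toward the student's weights.

loss = consistency_loss(np.array([0.10, 0.70, 0.20]),
                        np.array([0.15, 0.65, 0.25]))
```

In the full framework this consistency term would be combined with a supervised loss on the few labeled source images and an adversarial feature-alignment loss; the teacher's averaged weights make its predictions a more stable target than the student's own.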
License type:
Publisher Copyright
Funding Info:
No specific funding was received for this research.
Description:
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
0278-0062
1558-254X
Files uploaded:

tmi-le-uda-fv-amended.pdf (3.11 MB, PDF)