CrossMatch: Source-Free Domain Adaptive Semantic Segmentation via Cross-Modal Consistency Training

Title:
CrossMatch: Source-Free Domain Adaptive Semantic Segmentation via Cross-Modal Consistency Training
Journal Title:
2023 IEEE/CVF International Conference on Computer Vision (ICCV)
Publication Date:
15 January 2024
Citation:
Yin, Y., Hu, W., Liu, Z., Wang, G., Xiang, S., & Zimmermann, R. (2023, October 1). CrossMatch: Source-Free Domain Adaptive Semantic Segmentation via Cross-Modal Consistency Training. 2023 IEEE/CVF International Conference on Computer Vision (ICCV). https://doi.org/10.1109/iccv51070.2023.01991
Abstract:
Source-free domain adaptive semantic segmentation has gained increasing attention recently. It eases the requirement of full access to the source domain by transferring knowledge only from a well-trained source model. However, reducing the uncertainty of the target pseudo labels becomes inevitably more challenging without the supervision of the labeled source data. In this work, we propose a novel asymmetric two-stream architecture that learns more robustly from noisy pseudo labels. Our approach simultaneously conducts dual-head pseudo label denoising and cross-modal consistency regularization. Towards the former, we introduce a multimodal auxiliary network during training (and discard it during inference), which effectively enhances the pseudo labels' correctness by leveraging the guidance from the depth information. Towards the latter, we enforce a new cross-modal pixel-wise consistency between the predictions of the two streams, encouraging our model to behave smoothly for both modality variance and image perturbations. It serves as an effective regularization to further reduce the impact of the inaccurate pseudo labels in source-free unsupervised domain adaptation. Experiments on GTA5 to Cityscapes and SYNTHIA to Cityscapes benchmarks demonstrate the superiority of our proposed method, obtaining the new state-of-the-art mIoU of 57.7% and 57.5%, respectively.
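The cross-modal pixel-wise consistency described in the abstract enforces agreement between the per-pixel class predictions of the two streams. A minimal sketch of one common formulation, a mean symmetric KL divergence over pixels, is shown below; the function name and the choice of symmetric KL are illustrative assumptions, not the paper's exact loss.

```python
import math

def pixelwise_consistency(probs_a, probs_b, eps=1e-8):
    """Mean symmetric KL divergence between two streams' per-pixel
    class distributions. Illustrative only: the paper's exact
    consistency term may use a different divergence or weighting.

    probs_a, probs_b: lists of per-pixel class-probability vectors
    (e.g. softmax outputs of the RGB stream and the multimodal stream).
    """
    total = 0.0
    for pa, pb in zip(probs_a, probs_b):
        # KL(pa || pb) and KL(pb || pa), stabilized with eps
        kl_ab = sum(p * math.log((p + eps) / (q + eps)) for p, q in zip(pa, pb))
        kl_ba = sum(q * math.log((q + eps) / (p + eps)) for p, q in zip(pa, pb))
        total += 0.5 * (kl_ab + kl_ba)
    return total / len(probs_a)

# Identical predictions incur zero loss; disagreement is penalized.
p = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
q = [[0.1, 0.2, 0.7], [0.1, 0.8, 0.1]]
print(pixelwise_consistency(p, p))  # 0.0
print(pixelwise_consistency(p, q) > 0.0)  # True
```

In training, minimizing such a term pushes the single-modality stream to match the depth-guided stream, which is how the consistency regularization reduces the influence of noisy pseudo labels.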
License type:
Publisher Copyright
Funding Info:
No specific funding was received for this research.
Description:
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
2380-7504
Files uploaded:

crossmatch.pdf (1.67 MB, PDF)