Zhang, W., Liu, Q., Ong, F. W. C., Ragab, M., & Foo, C.-S. (2024). Universal Semi-Supervised Domain Adaptation by Mitigating Common-Class Bias. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 23912–23921. https://doi.org/10.1109/cvpr52733.2024.02257
Abstract:
Domain adaptation is a critical task in machine learning that aims to improve model performance on a target domain by leveraging knowledge from a related source domain. In this work, we introduce Universal Semi-Supervised Domain Adaptation (UniSSDA), a practical yet challenging setting where the target domain is partially labeled, and the source and target label spaces may not strictly match. UniSSDA is at the intersection of Universal Domain Adaptation (UniDA) and Semi-Supervised Domain Adaptation (SSDA): the UniDA setting does not allow for fine-grained categorization of target private classes not represented in the source domain, while SSDA focuses on the restricted closed-set setting where source and target label spaces match exactly. Existing UniDA and SSDA methods are susceptible to common-class bias in UniSSDA settings, where models overfit to data distributions of classes common to both domains at the expense of private classes. We propose a new prior-guided pseudo-label refinement strategy to reduce the reinforcement of common-class bias due to pseudo-labeling, a common label propagation strategy in domain adaptation. We demonstrate the effectiveness of the proposed strategy on benchmark datasets Office-Home, DomainNet, and VisDA. The proposed strategy attains the best performance across UniSSDA adaptation settings and establishes a new baseline for UniSSDA.
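Note: the abstract names a prior-guided pseudo-label refinement strategy but does not spell out its mechanics. The sketch below is one plausible, illustrative form of such refinement (reweighting predicted class probabilities by an estimated target class prior before confidence thresholding), assuming PyTorch, a classifier over a unified label space, and an available prior estimate; it is not the authors' implementation, and names such as refine_pseudo_labels, class_prior, and threshold are hypothetical.

    # Illustrative sketch only, not the published method.
    import torch
    import torch.nn.functional as F

    def refine_pseudo_labels(logits, class_prior, temperature=1.0, threshold=0.9):
        """Reweight predicted probabilities by an estimated target class prior so
        that private (rare) classes are not drowned out by common classes.

        logits:      (N, C) logits on unlabeled target data, unified label space
        class_prior: (C,) estimated target class distribution, sums to 1
        Returns pseudo-labels and a mask of confident samples.
        """
        probs = F.softmax(logits / temperature, dim=1)        # (N, C)
        # Divide by the model's implicit (biased) prior and multiply by the
        # estimated prior, then renormalize: a simple prior-matching correction.
        avg_pred = probs.mean(dim=0, keepdim=True)            # (1, C)
        refined = probs * (class_prior.unsqueeze(0) / avg_pred.clamp_min(1e-8))
        refined = refined / refined.sum(dim=1, keepdim=True)

        conf, pseudo = refined.max(dim=1)
        mask = conf >= threshold                              # keep confident samples
        return pseudo, mask

    # Usage sketch: pseudo, mask = refine_pseudo_labels(model(x_u), prior)
    # then train on (x_u[mask], pseudo[mask]) alongside labeled source/target data.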
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the A*STAR AME Programmatic Funds.
Grant Reference No.: A20H6b0151