Xu, Y., Yang, J., Cao, H., Wu, K., Wu, M., & Chen, Z. (2022). Source-Free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition. Computer Vision – ECCV 2022, 147–164. https://doi.org/10.1007/978-3-031-19830-4_9
Abstract:
Video-based Unsupervised Domain Adaptation (VUDA) methods improve the robustness of video models, enabling them to be applied to action recognition tasks across different environments. However, these methods require constant access to source data during the adaptation process. Yet in many real-world applications, the subjects and scenes in the source video domain should be irrelevant to those in the target video domain. With the increasing emphasis on data privacy, methods that require access to source data raise serious privacy concerns. Therefore, to address this concern, a more practical domain adaptation scenario is formulated as Source-Free Video-based Domain Adaptation (SFVDA). Though a few methods exist for Source-Free Domain Adaptation (SFDA) on image data, they yield degraded performance in SFVDA due to the multi-modal nature of videos and the presence of additional temporal features. In this paper, we propose a novel Attentive Temporal Consistent Network (ATCoN) to address SFVDA by learning temporal consistency, guaranteed by two novel consistency objectives, namely feature consistency and source prediction consistency, performed across local temporal features. ATCoN further constructs effective overall temporal features by attending to local temporal features based on prediction confidence. Empirical results demonstrate the state-of-the-art performance of ATCoN across various cross-domain action recognition benchmarks. Code is provided at https://github.com/xuyu0010/ATCoN.
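The abstract names two mechanisms: consistency objectives computed across local temporal (clip-level) features, and an attention over those features weighted by prediction confidence. The following is a minimal sketch of how such components could look; it is not the authors' implementation, and all shapes, the frozen-source-classifier interface, the max-probability confidence proxy, and the specific loss forms are assumptions made for illustration.

# Minimal PyTorch sketch (not the authors' code): confidence-weighted aggregation
# of local temporal features plus rough feature/prediction consistency losses.
import torch
import torch.nn.functional as F

def confidence_weighted_aggregation(local_feats, source_classifier):
    """local_feats: (B, K, D) clip-level temporal features for K local clips.
    Builds an overall temporal feature (B, D) by attending to clips according
    to the confidence of a (frozen) source classifier. Confidence proxy and
    attention form are illustrative assumptions."""
    B, K, D = local_feats.shape
    logits = source_classifier(local_feats.reshape(B * K, D)).reshape(B, K, -1)
    probs = logits.softmax(dim=-1)
    confidence = probs.max(dim=-1).values            # (B, K) max class probability
    attn = confidence.softmax(dim=-1).unsqueeze(-1)  # (B, K, 1) attention weights
    overall = (attn * local_feats).sum(dim=1)        # (B, D) overall temporal feature
    return overall, logits

def consistency_losses(local_feats, logits):
    """Encourage local temporal features and their source predictions to agree
    across clips of the same video -- one possible reading of 'feature
    consistency' and 'source prediction consistency'."""
    mean_feat = local_feats.mean(dim=1, keepdim=True)
    feat_cons = (1 - F.cosine_similarity(local_feats, mean_feat, dim=-1)).mean()
    mean_prob = logits.softmax(dim=-1).mean(dim=1, keepdim=True)
    pred_cons = F.kl_div(logits.log_softmax(dim=-1), mean_prob.expand_as(logits),
                         reduction="batchmean")
    return feat_cons + pred_cons

# Toy usage with random tensors and a linear stand-in for the source classifier.
if __name__ == "__main__":
    B, K, D, C = 4, 5, 256, 12
    feats = torch.randn(B, K, D)
    src_cls = torch.nn.Linear(D, C)
    overall, logits = confidence_weighted_aggregation(feats, src_cls)
    loss = consistency_losses(feats, logits)
    print(overall.shape, loss.item())

For the method as actually proposed and evaluated, refer to the paper and the released code at https://github.com/xuyu0010/ATCoN.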
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the A*STAR - AME Programmatic Funds (Grant No. A20H6b0151)
Grant Reference no. : A20H6b0151
This research / project is supported by the A*STAR - Career Development Award (Grant No. C210112046)
Grant Reference no. : C210112046
This research / project is supported by the Nanyang Technological University - Presidential Postdoctoral Fellowship
Grant Reference no. : NA