Augmenting and Aligning Snippets for Few-Shot Video Domain Adaptation

Title:
Augmenting and Aligning Snippets for Few-Shot Video Domain Adaptation
Journal Title:
2023 IEEE/CVF International Conference on Computer Vision (ICCV)
Publication Date:
15 January 2024
Citation:
Xu, Y., Yang, J., Zhou, Y., Chen, Z., Wu, M., & Li, X. (2023). Augmenting and Aligning Snippets for Few-Shot Video Domain Adaptation. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 13399–13410. https://doi.org/10.1109/iccv51070.2023.01237
Abstract:
For video models to be transferred and applied seamlessly across video tasks in varied environments, Video Unsupervised Domain Adaptation (VUDA) has been introduced to improve the robustness and transferability of video models. However, current VUDA methods rely on a vast amount of high-quality unlabeled target data, which may not be available in real-world cases. We thus consider a more realistic Few-Shot Video-based Domain Adaptation (FSVDA) scenario, where we adapt video models with only a few target video samples. While a few methods have touched upon Few-Shot Domain Adaptation (FSDA) in images and on FSVDA, they rely primarily on spatial augmentation for target domain expansion, with alignment performed statistically at the instance level. However, videos contain richer knowledge in the form of temporal and semantic information, which should be fully exploited when augmenting target domains and performing alignment in FSVDA. We propose a novel SSA2lign to address FSVDA at the snippet level, where the target domain is expanded through a simple snippet-level augmentation followed by attentive alignment of snippets both semantically and statistically, with semantic alignment of snippets conducted from multiple perspectives. Empirical results demonstrate state-of-the-art performance of SSA2lign across multiple cross-domain action recognition benchmarks.
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the National Research Foundation, Singapore - AI Singapore Programme
Grant Reference no. : AISG2-RP-2021-027

This research/project is supported by the Agency for Science, Technology and Research - Career Development Award
Grant Reference no. : C210112046
Description:
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
NA
Files uploaded:

augmenting-and-aligning.pdf (1.56 MB, PDF)