Towards Debiasing Frame Length Bias in Text-Video Retrieval via Causal Intervention

Title:
Towards Debiasing Frame Length Bias in Text-Video Retrieval via Causal Intervention
Journal Title:
BMVC 2023
Publication Date:
10 November 2023
Citation:
Satar, B., Zhu, H., Zhang, H., & Lim, J.-H. (2023). Towards Debiasing Frame Length Bias in Text-Video Retrieval via Causal Intervention. In 34th British Machine Vision Conference (BMVC), Aberdeen, UK, November 20-24, 2023. BMVA.
Abstract:
Many studies focus on improving pretraining or developing new backbones for text-video retrieval. However, existing methods may suffer from learning and inference bias, as recent research on other text-video tasks suggests. For instance, spatial appearance features in action recognition or temporal object co-occurrences in video scene graph generation can induce spurious correlations. In this work, we present a unique and systematic study of a temporal bias caused by the frame length discrepancy between the training and test sets of trimmed video clips, which, to the best of our knowledge, is the first such attempt for a text-video retrieval task. We first hypothesise the bias and verify how it affects the model through a baseline study. We then propose a causal debiasing approach and perform extensive experiments and ablation studies on the Epic-Kitchens-100, YouCook2, and MSR-VTT datasets. Our model surpasses the baseline and the state of the art on nDCG, a semantic-relevance-focused evaluation metric, which indicates that the bias is mitigated, as well as on the other conventional metrics.
License type:
Attribution 4.0 International (CC BY 4.0)
Funding Info:
This research / project is supported by the A*STAR MTC Programmatic Grant.
Grant Reference no.: A18A2b0046
Description:
arXiv:
2309.09311