Implicit Motion-Compensated Network for Unsupervised Video Object Segmentation

Title:
Implicit Motion-Compensated Network for Unsupervised Video Object Segmentation
Journal Title:
IEEE Transactions on Circuits and Systems for Video Technology
Publication Date:
08 April 2022
Citation:
Xi, L., Chen, W., Wu, X., Liu, Z., & Li, Z. (2022). Implicit Motion-Compensated Network for Unsupervised Video Object Segmentation. IEEE Transactions on Circuits and Systems for Video Technology, 32(9), 6279–6292. https://doi.org/10.1109/tcsvt.2022.3165932
Abstract:
Unsupervised video object segmentation (UVOS) aims to automatically separate the primary foreground object(s) from the background in a video sequence. Existing UVOS methods either lack robustness in visually similar surroundings (appearance-based) or suffer degraded predictions due to dynamic backgrounds and inaccurate flow (flow-based). To overcome these limitations, we propose an implicit motion-compensated network (IMCNet) that combines complementary cues (i.e., appearance and motion) by aligning motion information from adjacent frames to the current frame at the feature level, without estimating optical flow. The proposed IMCNet consists of an affinity computing module (ACM), an attention propagation module (APM), and a motion compensation module (MCM). The lightweight ACM extracts commonality between neighboring input frames based on appearance features. The APM then transmits global correlation in a top-down manner; through coarse-to-fine iteration, it refines object regions at multiple resolutions, efficiently avoiding the loss of detail. Finally, the MCM aligns motion information from temporally adjacent frames to the current frame, achieving implicit motion compensation at the feature level. Extensive experiments on DAVIS16 and YouTube-Objects show that our network achieves favorable performance while running faster than state-of-the-art methods. Our code is available at https://github.com/xilin1991/IMCNet.
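The record carries no detail beyond the abstract, but the core idea of the MCM, aligning adjacent-frame features to the current frame through an affinity (attention) map rather than an explicit optical-flow field, can be illustrated. The following is a minimal NumPy sketch of that general technique, not the authors' implementation; the function and variable names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def implicit_motion_compensation(feat_cur, feat_adj):
    """Align adjacent-frame features to the current frame via an
    affinity map, with no explicit optical-flow estimation.

    feat_cur, feat_adj: (C, H, W) feature maps of the current and
    adjacent frames (hypothetical shapes for illustration).
    Returns adjacent-frame features warped to the current frame,
    shape (C, H, W).
    """
    C, H, W = feat_cur.shape
    q = feat_cur.reshape(C, H * W)  # queries: current-frame pixels
    k = feat_adj.reshape(C, H * W)  # keys/values: adjacent-frame pixels
    # (HW, HW) soft correspondence between current and adjacent pixels.
    affinity = softmax(q.T @ k / np.sqrt(C), axis=-1)
    # Each current-frame pixel gathers a convex combination of
    # adjacent-frame features, i.e., implicit motion compensation.
    aligned = (affinity @ k.T).T
    return aligned.reshape(C, H, W)
```

Because each output pixel is a convex combination of adjacent-frame features, the alignment stays on the feature manifold of the neighboring frame, which is what distinguishes this implicit scheme from warping with a possibly inaccurate flow field.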
License type:
Publisher Copyright
Funding Info:
This work was supported by the National Nature Science Foundation of China under grant 61620106012 and Beijing Natural Science Foundation under grant 4202042.
Description:
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
1051-8215 (print)
1558-2205 (electronic)
Files uploaded:

manuscript-0318.pdf (6.34 MB, PDF): available on request