Few-shot Segmentation with Optimal Transport Matching and Message Flow

Title:
Few-shot Segmentation with Optimal Transport Matching and Message Flow
Journal Title:
IEEE Transactions on Multimedia
Publication Date:
01 July 2022
Citation:
Liu, W., Zhang, C., Ding, H., Hung, T.-Y., & Lin, G. (2022). Few-shot Segmentation with Optimal Transport Matching and Message Flow. IEEE Transactions on Multimedia, 1–12. https://doi.org/10.1109/tmm.2022.3187855
Abstract:
We tackle the challenging task of few-shot segmentation in this work. It is essential for few-shot semantic segmentation to fully utilize the support information. Previous methods typically adopt masked average pooling over the support features to extract the support clues as a global vector, which is usually dominated by the salient parts and loses certain essential clues. In this work, we argue that the information of every support pixel should be transferred to all query pixels, and we propose a Correspondence Matching Network (CMNet) with an Optimal Transport Matching module to mine the correspondence between the query and support images. Besides, it is critical to fully utilize both local and global information from the annotated support images. To this end, we propose a Message Flow module to propagate messages along the inner-flow within the same image and the cross-flow between support and query images, which greatly enhances the local feature representations. Experiments on the PASCAL VOC 2012, MS COCO, and FSS-1000 datasets show that our network achieves new state-of-the-art few-shot segmentation performance.
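
For intuition, below is a minimal sketch of how a transport plan between support and query pixel features could be computed with entropy-regularized Sinkhorn iterations, in the spirit of the Optimal Transport Matching module described in the abstract. This is not the authors' implementation: the cosine cost, uniform marginals, and all names and parameters (sinkhorn_matching, eps, n_iters) are illustrative assumptions.

# Hypothetical sketch of optimal-transport matching between support and query
# pixel features via entropy-regularized Sinkhorn iterations. Shapes, the cosine
# cost, and the uniform marginals are assumptions made for illustration only.
import torch
import torch.nn.functional as F


def sinkhorn_matching(support_feat, query_feat, eps=0.05, n_iters=50):
    """Compute a transport plan between support and query pixels.

    support_feat: (Ns, C) flattened support pixel features.
    query_feat:   (Nq, C) flattened query pixel features.
    Returns the (Ns, Nq) transport plan T and the cost matrix.
    """
    s = F.normalize(support_feat, dim=1)
    q = F.normalize(query_feat, dim=1)
    cost = 1.0 - s @ q.t()                      # cosine distance, shape (Ns, Nq)

    # Uniform marginals: each support/query pixel carries equal mass (assumption).
    ns, nq = cost.shape
    mu = torch.full((ns,), 1.0 / ns)
    nu = torch.full((nq,), 1.0 / nq)

    K = torch.exp(-cost / eps)                  # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iters):                    # Sinkhorn-Knopp row/column scaling
        v = nu / (K.t() @ u).clamp_min(1e-9)
        u = mu / (K @ v).clamp_min(1e-9)
    T = u.unsqueeze(1) * K * v.unsqueeze(0)     # transport plan with marginals ~ (mu, nu)
    return T, cost


if __name__ == "__main__":
    # Toy example: 30 support pixels and 40 query pixels with 64-dim features.
    torch.manual_seed(0)
    support = torch.randn(30, 64)
    query = torch.randn(40, 64)
    T, _ = sinkhorn_matching(support, query)
    # Aggregate support information onto each query pixel via the transport plan.
    matched = T.t() @ support                   # (Nq, C) support message per query pixel
    print(T.shape, matched.shape)

In this toy usage, the transposed plan distributes every support pixel's feature over the query pixels, which is one plausible way to realize pixel-to-pixel information transfer rather than a single masked-average-pooled prototype.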
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the National Research Foundation - AI Singapore Programme
Grant Reference no. : AISG-RP-2018-003

This research / project is supported by the Ministry of Education - Academic Research Fund Tier 2
Grant Reference no. : MOE-T2EP20220-0007

This research / project is supported by the Ministry of Education - Academic Research Fund Tier 1
Grant Reference no. : RG95/20
Description:
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
1520-9210 (print)
1941-0077 (electronic)