Minimal-Supervised Medical Image Segmentation via Vector Quantization Memory

Title:
Minimal-Supervised Medical Image Segmentation via Vector Quantization Memory
Journal Title:
Lecture Notes in Computer Science
Keywords:
Publication Date:
30 September 2023
Citation:
Xu, Y., Zhou, M., Feng, Y., Xu, X., Fu, H., Goh, R. S. M., & Liu, Y. (2023). Minimal-Supervised Medical Image Segmentation via Vector Quantization Memory. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (pp. 625–636). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-43898-1_60
Abstract:
Medical image segmentation is a critical task for computer-assisted diagnosis and disease monitoring. However, collecting a large-scale medical dataset with high-quality annotations is time-consuming and requires domain knowledge. Reducing the number of annotations poses two challenges: obtaining sufficient supervision and generating high-quality pseudo labels. To address these, we propose a universal framework for annotation-efficient medical segmentation that handles both scribble-supervised and point-supervised segmentation. Our approach includes an auxiliary reconstruction branch that provides additional supervision and backpropagates sufficient gradients for learning visual representations. In addition, a novel pseudo-label generation branch uses a vector quantization (VQ) bank to store texture-oriented and global features for generating pseudo labels. To boost model training, we generate high-quality pseudo labels by mixing the segmentation predictions with pseudo labels from the VQ bank. Experiments on the ACDC MRI segmentation dataset demonstrate the effectiveness of the proposed method, which achieves performance comparable to full supervision (0.86 vs. 0.87 DSC).
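The two core operations in the abstract can be illustrated in a minimal sketch: a nearest-neighbor lookup into a VQ bank of stored feature prototypes, and a blend of the segmentation prediction with the VQ-derived pseudo labels. This is an assumption-laden illustration, not the authors' implementation; the function names (`vq_lookup`, `mix_pseudo_labels`) and the simple convex-combination mixing with weight `alpha` are hypothetical stand-ins for the paper's actual mechanisms.

```python
import numpy as np

def vq_lookup(features, codebook):
    """Assign each feature vector to its nearest codebook entry (L2 distance).

    features: (N, D) array of per-pixel features
    codebook: (K, D) array of prototype vectors stored in the VQ bank
    returns: (N,) nearest-entry indices and (N, D) quantized features
    """
    # Pairwise squared distances between every feature and every codebook entry
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)
    return idx, codebook[idx]

def mix_pseudo_labels(seg_prob, vq_prob, alpha=0.5):
    """Blend segmentation predictions with VQ-bank pseudo labels.

    Both inputs are per-pixel class-probability maps of the same shape;
    `alpha` (hypothetical) weights the segmentation branch.
    """
    return alpha * seg_prob + (1.0 - alpha) * vq_prob
```

In practice the codebook would be populated during training with texture-oriented and global features, and the mixed probabilities would be thresholded or argmax-ed into hard pseudo labels.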
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the National Research Foundation - AI Singapore Programme
Grant Reference no. : AISG2-TC-2021-003

This research / project is supported by the A*STAR - AME Programmatic Funding
Grant Reference no. : A20H4b0141

This research / project is supported by the A*STAR - Central Research Fund
Grant Reference no. : N.A
Description:
This version of the article has been accepted for publication, after peer review and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/978-3-031-43898-1_60
ISBN (electronic):
9783031438981
ISBN (print):
9783031438974