Shape and boundary-aware multi-branch model for semi-supervised medical image segmentation

Title:
Shape and boundary-aware multi-branch model for semi-supervised medical image segmentation
Journal Title:
Computers in Biology and Medicine
Publication Date:
26 January 2022
Citation:
Liu, X., Hu, Y., Chen, J., & Li, K. (2022). Shape and boundary-aware multi-branch model for semi-supervised medical image segmentation. Computers in Biology and Medicine, 143, 105252. https://doi.org/10.1016/j.compbiomed.2022.105252
Abstract:
Supervised learning-based medical image segmentation solutions usually require sufficient labeled training data. Insufficient labeled training data often limits model performance, leading to over-fitting, low accuracy, and poor generalization. This dilemma is especially acute in medical image analysis, where annotation is labor-intensive and requires professional expertise. In this work, we propose a novel shape and boundary-aware deep learning model for medical image segmentation based on semi-supervised learning. The model makes good use of labeled data and also exploits unlabeled data through a task consistency loss. First, we adopt V-Net for Pixel-wise Segmentation Map (PSM) prediction and Signed Distance Map (SDM) regression. In addition, we multiply multi-scale features, extracted by a Pyramid Pooling Module (PPM) from input X, with 2 − |SDM| to enhance the features around the boundary of the segmented target, and then feed them into a Feature Fusion Module (FFM) for fine segmentation. Beyond the boundary loss, the high-level semantics implied in the SDM facilitate accurate segmentation of boundary regions. We then obtain the final result by fusing the coarse and boundary-enhanced features. Finally, to mine unlabeled training data, we impose consistency constraints on the three core outputs of the model, namely PSM1, SDM, and PSM3. Through extensive experiments on three representative but challenging medical image datasets (LA2018, BraTS2019, and ISIC2018) and comparisons with existing representative methods, we validate the practicability and superiority of our model.
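The 2 − |SDM| weighting described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the brute-force distance computation, the sign convention (negative inside the object), and the normalization of the SDM to [−1, 1] are illustrative assumptions. The key property it demonstrates is that the weight is close to 2 near the segmentation boundary and decays toward 1 far from it, so multiplying features by it emphasizes the boundary region.

```python
import numpy as np

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Brute-force signed distance map for a small 2D binary mask,
    normalized to [-1, 1]. Sign convention (an assumption here):
    negative inside the object, positive outside."""
    h, w = mask.shape
    coords = np.stack(
        np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
        axis=-1,
    ).reshape(-1, 2)
    inside = coords[mask.reshape(-1) == 1]
    outside = coords[mask.reshape(-1) == 0]
    sdm = np.zeros(mask.shape, dtype=float)
    for (i, j) in coords:
        # Distance to the nearest pixel of the opposite class.
        other = outside if mask[i, j] == 1 else inside
        d = np.sqrt(((other - np.array([i, j])) ** 2).sum(axis=1)).min()
        sdm[i, j] = -d if mask[i, j] == 1 else d
    m = np.abs(sdm).max()
    return sdm / m if m > 0 else sdm

def boundary_weight(sdm: np.ndarray) -> np.ndarray:
    """The 2 - |SDM| factor from the abstract: ~2 near the boundary
    (|SDM| ~ 0), decaying to 1 at the farthest pixels (|SDM| = 1)."""
    return 2.0 - np.abs(sdm)
```

For a square object on an 8×8 grid, pixels on the object's rim receive a weight near 2, while the grid corners receive a weight of exactly 1, so element-wise multiplication of PPM features by this map amplifies boundary-adjacent responses.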
License type:
Publisher Copyright
Funding Info:
This work was supported in part by the National Key R&D Program of China under Grant 2018YFB1003401 and the National Natural Science Foundation of China under Grant 62002110
ISSN:
0010-4825
Files uploaded:
paper4-xiaowei.pdf (963.22 KB, PDF)