Structure and Intensity Unbiased Translation for 2D Medical Image Segmentation

Title:
Structure and Intensity Unbiased Translation for 2D Medical Image Segmentation
Journal Title:
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publication Date:
29 July 2024
Citation:
Zhang, T., Zheng, S., Cheng, J., Jia, X., Bartlett, J., Cheng, X., Qiu, Z., Fu, H., Liu, J., Leonardis, A., & Duan, J. (2024). Structure and Intensity Unbiased Translation for 2D Medical Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–16. https://doi.org/10.1109/tpami.2024.3434435
Abstract:
Data distribution gaps often pose significant challenges to deploying deep segmentation models, and retraining a model for each new distribution is expensive and time-consuming. In clinical contexts, algorithms and networks embedded in devices typically can be neither retrained nor accessed after manufacture, which exacerbates this issue. Generative translation methods offer a way to mitigate the gap by transferring data across domains, but existing methods focus mainly on intensity distributions and ignore gaps caused by structural disparities. In this paper, we formulate a new image-to-image translation task to reduce structural gaps. We propose a simple yet powerful Structure-Unbiased Adversarial (SUA) network that accounts for both intensity and structural differences between the training and test sets for segmentation. It consists of a spatial transformation block followed by an intensity distribution rendering module: the spatial transformation block reduces the structural gap between the two images, and the intensity distribution rendering module then renders the deformed structure as an image with the target intensity distribution. Experimental results show that the proposed SUA method can transfer both intensity distribution and structural content between multiple pairs of datasets and outperforms prior art in closing these gaps to improve segmentation.
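The abstract describes a two-stage pipeline: a spatial transformation block that reduces structural gaps, followed by an intensity distribution rendering module that imposes the target intensity distribution. Below is a minimal NumPy sketch of that idea, not the authors' implementation: a displacement-field warp stands in for the (learned) spatial transformation block, and classical histogram matching stands in for the intensity rendering module. All function names, the nearest-neighbor sampling, and the choice of histogram matching are illustrative assumptions.

```python
import numpy as np

def warp_image(img, flow):
    """Warp a 2D image by a dense displacement field (nearest-neighbor
    sampling). A stand-in for the spatial transformation block, which
    deforms the source structure toward the target structure."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # flow[0]/flow[1] are per-pixel row/column displacements.
    src_y = np.clip(np.round(ys + flow[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[1]).astype(int), 0, w - 1)
    return img[src_y, src_x]

def match_intensity(source, reference):
    """Classical histogram matching: re-render `source` so its intensity
    distribution follows that of `reference`. A stand-in for the
    intensity distribution rendering module."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # Map each source intensity to the reference intensity with the
    # same cumulative probability.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(source.shape)

# Illustrative usage: deform structure first, then render intensities.
source = np.arange(16, dtype=float).reshape(4, 4)   # toy "source domain" image
target = np.array([[0.0, 10.0], [10.0, 20.0]])      # toy "target domain" image
flow = np.zeros((2, 4, 4))                          # identity deformation
translated = match_intensity(warp_image(source, flow), target)
```

In the actual SUA network both stages are learned adversarially; this sketch only mirrors the data flow (structure first, intensity second) that the abstract outlines.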
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the A*STAR - AI3 HTCO.
Grant Reference no.: C231118001
Description:
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
0162-8828
2160-9292
1939-3539