Qiu, Y., Zhao, Z., Yao, H., Chen, D., & Wang, Z. (2023). Modal-aware Visual Prompting for Incomplete Multi-modal Brain Tumor Segmentation. Proceedings of the 31st ACM International Conference on Multimedia. https://doi.org/10.1145/3581783.3611712
Abstract:
In the realm of medical imaging, distinct magnetic resonance imaging (MRI) modalities can provide complementary medical insights. However, it is not uncommon for one or more modalities to be absent due to image corruption, artifacts, acquisition protocols, allergies to contrast agents, or cost constraints, which poses a significant challenge for perceiving modality-absent states in incomplete multi-modal segmentation. In this work, we introduce a novel incomplete multi-modal segmentation framework called Modal-aware Visual Prompting (MAVP), which draws inspiration from the widely used pre-training and prompt-tuning protocol employed in natural language processing (NLP). In contrast to previous prompts that typically use textual network embeddings, we utilize, as prompts, embeddings generated by a modality state classifier that focuses on the missing-modality states. Additionally, we integrate the modality state prompts into both the feature extraction stage of each modality and the modality fusion stage to facilitate intra-/inter-modal adaptation. Our approach achieves state-of-the-art performance in various modality-incomplete scenarios compared to existing incomplete-modality-specific solutions.
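A minimal, illustrative sketch of the idea described in the abstract, not the authors' implementation: a modality state classifier maps the binary presence mask over the MRI modalities to a prompt embedding, which then modulates per-modality feature extraction (intra-modal adaptation) and is available to the fusion stage (inter-modal adaptation). All module names, dimensions, and the specific way the prompt is injected below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ModalityStatePrompter(nn.Module):
    """Maps a binary presence mask over the MRI modalities to a prompt embedding."""
    def __init__(self, num_modalities=4, prompt_dim=64):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(num_modalities, 128), nn.ReLU(),
            nn.Linear(128, prompt_dim),
        )

    def forward(self, presence_mask):          # (B, num_modalities) in {0, 1}
        return self.classifier(presence_mask)  # (B, prompt_dim)

class PromptedEncoder(nn.Module):
    """Per-modality encoder whose features are modulated by the state prompt."""
    def __init__(self, prompt_dim=64, feat_dim=32):
        super().__init__()
        self.backbone = nn.Conv3d(1, feat_dim, kernel_size=3, padding=1)
        self.to_scale = nn.Linear(prompt_dim, feat_dim)  # prompt -> channel-wise scale

    def forward(self, x, prompt):              # x: (B, 1, D, H, W)
        feat = self.backbone(x)
        scale = self.to_scale(prompt).view(-1, feat.size(1), 1, 1, 1)
        return feat * (1 + scale)              # intra-modal adaptation via the prompt

# Hypothetical usage with 4 modalities, some of which are missing per sample.
B, num_mod = 2, 4
prompter = ModalityStatePrompter(num_mod)
encoders = nn.ModuleList(PromptedEncoder() for _ in range(num_mod))
mask = torch.tensor([[1., 0., 1., 1.], [1., 1., 0., 0.]])    # which modalities are present
images = [torch.randn(B, 1, 16, 32, 32) for _ in range(num_mod)]

prompt = prompter(mask)
feats = [enc(img, prompt) for enc, img in zip(encoders, images)]
fused = torch.stack(feats, dim=1).mean(dim=1)  # placeholder fusion; the paper's fusion stage is also prompt-guided
print(fused.shape)                              # torch.Size([2, 32, 16, 32, 32])
```

In the sketch, the prompt is applied as a channel-wise scaling of each modality's features; the actual MAVP design may inject prompts differently (e.g., as tokens within attention layers), which the paper itself specifies.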
License type:
Publisher Copyright
Funding Info:
This research is supported by core funding from: Institute for Infocomm Research
Grant Reference no.: Not Specified