Category-Independent Visual Explanation for Medical Deep Network Understanding

Title:
Category-Independent Visual Explanation for Medical Deep Network Understanding
Journal Title:
Lecture Notes in Computer Science
Publication Date:
30 September 2023
Citation:
Qian, Y., Li, L., Fu, H., Wang, M., Peng, Q., Tham, Y. C., Cheng, C., Liu, Y., Goh, R. S. M., & Xu, X. (2023). Category-Independent Visual Explanation for Medical Deep Network Understanding. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (pp. 181–191). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-43895-0_17
Abstract:
Visual explanations have the potential to improve our understanding of deep learning models and their decision-making process, which is critical for building transparent, reliable, and trustworthy AI systems. However, existing visualization methods have limitations, including their reliance on categorical labels to identify regions of interest, which may be inaccessible during model deployment and lead to incorrect diagnoses if an incorrect label is provided. To address this issue, we propose a novel category-independent visual explanation method called Hessian-CIAM. Our algorithm uses the Hessian matrix, the second-order derivative of the activation function, to weight the activations in the last convolutional layer and generate a region-of-interest heatmap at inference time. We then apply an SVD-based post-process to create a smoothed version of the heatmap. By doing so, our algorithm eliminates the need for categorical labels and modifications to the deep learning model. To evaluate the effectiveness of our proposed method, we compared it to seven state-of-the-art algorithms using the ChestX-ray8 dataset. Our approach achieved a 55% higher IoU measurement than classical GradCAM and a 17% higher IoU measurement than EigenCAM. Moreover, our algorithm obtained a Judd AUC score of 0.70 on the glaucoma retinal image database, demonstrating its potential applicability in various medical applications. In summary, our category-independent visual explanation method, Hessian-CIAM, can generate high-quality region-of-interest heatmaps that are not dependent on categorical labels, making it a promising tool for improving our understanding of deep learning models and their decision-making process, particularly in medical applications.
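The pipeline described in the abstract — weight the last-layer activations with second-order (Hessian-derived) information, sum them into a heatmap, then smooth with an SVD-based post-process — can be sketched as follows. This is only an illustrative reading of the abstract, not the authors' implementation: the per-channel `hessian_weights` are random stand-ins here (in Hessian-CIAM they would come from the Hessian of the activation function, with no class label required), and the rank-1 SVD truncation is one plausible interpretation of the smoothing step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for last-conv-layer activations: C channels of H x W feature maps.
C, H, W = 8, 7, 7
activations = rng.random((C, H, W))

# Hypothetical per-channel weights; in Hessian-CIAM these would be derived
# from the second-order derivative (Hessian) of the activation function,
# so no categorical label is needed. Faked here for illustration.
hessian_weights = rng.random(C)

# CAM-style weighted sum over channels, clipped at zero (ReLU).
heatmap = np.maximum(np.tensordot(hessian_weights, activations, axes=1), 0.0)

# SVD-based post-process: keep only the dominant rank-1 component as a
# smoothed version of the heatmap (one plausible reading of the paper's step).
U, S, Vt = np.linalg.svd(heatmap, full_matrices=False)
smoothed = S[0] * np.outer(U[:, 0], Vt[0])

# Normalize to [0, 1] for display as a region-of-interest overlay.
smoothed -= smoothed.min()
smoothed /= smoothed.max()
print(smoothed.shape)
```

Because the heatmap is element-wise nonnegative, its leading singular vectors can be taken nonnegative (Perron-Frobenius), so the rank-1 reconstruction stays a valid saliency map while suppressing high-frequency noise.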
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the National Research Foundation - AI Singapore Programme
Grant Reference no. : AISG2-TC-2021-003
Description:
This version of the article has been accepted for publication, after peer review and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/978-3-031-43895-0_17
ISBN (electronic):
9783031438950
ISBN (print):
9783031438943