MG-Trans: Multi-Scale Graph Transformer With Information Bottleneck for Whole Slide Image Classification

Title:
MG-Trans: Multi-Scale Graph Transformer With Information Bottleneck for Whole Slide Image Classification
Journal Title:
IEEE Transactions on Medical Imaging
Publication Date:
08 September 2023
Citation:
Shi, J., Tang, L., Gao, Z., Li, Y., Wang, C., Gong, T., Li, C., & Fu, H. (2023). MG-Trans: Multi-Scale Graph Transformer With Information Bottleneck for Whole Slide Image Classification. IEEE Transactions on Medical Imaging, 42(12), 3871–3883. https://doi.org/10.1109/tmi.2023.3313252
Abstract:
Multiple instance learning (MIL)-based methods have become the mainstream approach for processing megapixel-sized whole slide images (WSIs) with a pyramid structure in the field of digital pathology. Current MIL-based methods usually crop a large number of patches from the WSI at the highest magnification, resulting in substantial redundancy in the input and feature spaces. Moreover, the spatial relations between patches cannot be sufficiently modeled, which may weaken the model's discriminative ability on fine-grained features. To address these limitations, we propose a Multi-scale Graph Transformer (MG-Trans) with information bottleneck for whole slide image classification. MG-Trans is composed of three modules: a patch anchoring module (PAM), a dynamic structure information learning module (SILM), and a multi-scale information bottleneck module (MIBM). Specifically, PAM utilizes the class attention map generated from the multi-head self-attention of the vision Transformer to identify and sample the informative patches. SILM explicitly introduces local tissue structure information into the Transformer block to sufficiently model the spatial relations between patches. MIBM effectively fuses the multi-scale patch features by utilizing the principle of information bottleneck to generate a robust and compact bag-level representation. We also propose a semantic consistency loss to stabilize the training of the whole model. Extensive studies on three subtyping datasets and seven gene mutation detection datasets demonstrate the superiority of MG-Trans.
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the National Research Foundation, Singapore, under the AI Singapore Programme.
Grant Reference no.: AISG2-TC-2021-003
Description:
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
0278-0062
1558-254X
Files uploaded:

File: mg-tra1.pdf (2.07 MB, PDF) — available on request