MACG-Net: Multi-axis cross gating network for deformable medical image registration

Title:
MACG-Net: Multi-axis cross gating network for deformable medical image registration
Journal Title:
Computers in Biology and Medicine
Keywords:
Publication Date:
29 May 2024
Citation:
Yuan, W., Cheng, J., Gong, Y., He, L., & Zhang, J. (2024). MACG-Net: Multi-axis cross gating network for deformable medical image registration. Computers in Biology and Medicine, 178, 108673. https://doi.org/10.1016/j.compbiomed.2024.108673
Abstract:
Deformable image registration is a fundamental yet vital task for preoperative planning, intraoperative information fusion, disease diagnosis and follow-ups. It solves for the non-rigid deformation field that aligns an image pair. Recent approaches such as VoxelMorph and TransMorph compute features from a simple concatenation of the moving and fixed images; however, this often leads to weak alignment. Moreover, convolutional neural network (CNN) and hybrid CNN-Transformer backbones are constrained by limited receptive field sizes and cannot capture long-range relations, while fully Transformer-based approaches are computationally expensive. In this paper, we propose a novel multi-axis cross gating network (MACG-Net) for deformable medical image registration that combats these limitations. MACG-Net uses a dual-stream multi-axis feature fusion module to capture both long-range and local context relationships from the moving and fixed images. Cross gate blocks are integrated with the dual-stream backbone to account for both independent feature extraction from the moving and fixed images and the relationship between features from the image pair. We benchmark our method on several datasets, including 3D atlas-based brain MRI, inter-patient brain MRI and 2D cardiac MRI. The results demonstrate that the proposed method achieves state-of-the-art performance.
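To illustrate the cross-gating idea described in the abstract, the following is a minimal numpy sketch of a generic cross gate between two feature streams: each stream's features are modulated by a sigmoid gate computed from the other stream, so the moving and fixed branches exchange information while remaining separate. The function names and the exact gating formula are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_gate(feat_moving, feat_fixed):
    """Hypothetical cross-gating fusion (sketch, not the paper's exact block):
    each stream is reweighted by a gate derived from the *other* stream,
    letting the moving and fixed branches condition on one another."""
    gate_m = sigmoid(feat_fixed)   # gate for the moving stream, from fixed features
    gate_f = sigmoid(feat_moving)  # gate for the fixed stream, from moving features
    return feat_moving * gate_m, feat_fixed * gate_f

# Toy feature maps with shape (channels, H, W)
fm = np.random.randn(4, 8, 8)
ff = np.random.randn(4, 8, 8)
out_m, out_f = cross_gate(fm, ff)
print(out_m.shape, out_f.shape)  # (4, 8, 8) (4, 8, 8)
```

In the actual network, the gates would be produced by learned convolutions rather than a raw sigmoid of the other stream, but the structure (dual streams, mutual gating before fusion) follows the abstract's description.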
License type:
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
Funding Info:
This research / project is supported by the A*STAR - AI3 Horizontal Technology Programme Seed Fund
Grant Reference no. : C231118001

This research / project is supported by the Sichuan University, Yibin City - Strategic Cooperation Special Fund
Grant Reference no. : 2020CDYB-27

This research / project is supported by the Zigong Key Science and Technology Plan Task Fund - NA
Grant Reference no. : 2020YXY02
Description:
ISSN:
0010-4825