Prototypical Cross-domain Knowledge Transfer for Cervical Dysplasia Visual Inspection

Title:
Prototypical Cross-domain Knowledge Transfer for Cervical Dysplasia Visual Inspection
Journal Title:
Proceedings of the 31st ACM International Conference on Multimedia
Keywords:
Publication Date:
27 October 2023
Citation:
Zhang, Y., Yin, Y., Zhang, Y., Liu, Z., Wang, Z., & Zimmermann, R. (2023). Prototypical Cross-domain Knowledge Transfer for Cervical Dysplasia Visual Inspection. Proceedings of the 31st ACM International Conference on Multimedia. https://doi.org/10.1145/3581783.3612000
Abstract:
Early detection of dysplasia of the cervix is critical for cervical cancer treatment. However, automatic cervical dysplasia diagnosis via visual inspection, which is more appropriate in low-resource settings, remains a challenging problem. Though promising results have been obtained by recent deep learning models, their performance is significantly hindered by the limited scale of the available cervix datasets. Distinct from previous methods that learn from a single dataset, we propose to leverage cross-domain cervical images that were collected in different but related clinical studies to improve the model's performance on the targeted cervix dataset. To robustly learn the transferable information across datasets, we propose a novel prototype-based knowledge filtering method to estimate the transferability of cross-domain samples. We further optimize the shared feature space by aligning the cross-domain image representations simultaneously at the domain level with early alignment and at the class level with supervised contrastive learning, which endows model training and knowledge transfer with stronger robustness. The empirical results on three real-world benchmark cervical image datasets show that our proposed method outperforms state-of-the-art cervical dysplasia visual inspection methods by an absolute improvement of 4.7% in top-1 accuracy, 7.0% in precision, 1.4% in recall, 4.6% in F1 score, and 0.05 in ROC-AUC.
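The prototype-based knowledge filtering described in the abstract can be illustrated with a minimal sketch: build a mean-feature prototype per class from the target-domain data, then score each cross-domain sample by its cosine similarity to the nearest prototype and keep only samples above a threshold. This is a hedged, simplified illustration under assumed conventions (all function names, the cosine-similarity scoring, and the fixed threshold are assumptions for illustration), not the authors' actual implementation.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """L2-normalized mean feature vector (prototype) per target-domain class.
    Illustrative sketch; the paper's actual prototype construction may differ."""
    protos = np.stack([features[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def transferability_scores(src_features, protos):
    """Cosine similarity of each cross-domain sample to its nearest prototype."""
    src = src_features / np.linalg.norm(src_features, axis=1, keepdims=True)
    sims = src @ protos.T          # (n_src, num_classes) similarity matrix
    return sims.max(axis=1)        # best-matching prototype per sample

def filter_transferable(src_features, protos, threshold=0.5):
    """Keep only cross-domain samples whose transferability score
    exceeds the (assumed) threshold; returns kept indices and scores."""
    scores = transferability_scores(src_features, protos)
    return np.where(scores >= threshold)[0], scores
```

Under this sketch, a cross-domain image whose features lie close to some target-class prototype is treated as transferable knowledge, while samples far from every prototype are filtered out before alignment and contrastive training.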
License type:
Attribution 4.0 International (CC BY 4.0)
Funding Info:
This research/project is supported by the Ministry of Education Research Fund Tier 1 (Grant Reference No. T1251RES2029), the National Natural Science Foundation of China (No. 62272390), and the Zhejiang Gongshang University Digital & Disciplinary Construction Management Project (Project No. SZJ2022C005).
Description:
ISBN:
979-8-4007-0108-5/23/10
Files uploaded:

prototypical-mm-2023.pdf (4.45 MB, PDF)