Consistency-Based Semi-supervised Evidential Active Learning for Diagnostic Radiograph Classification

Title:
Consistency-Based Semi-supervised Evidential Active Learning for Diagnostic Radiograph Classification
Journal Title:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022
Keywords:
Publication Date:
15 September 2022
Citation:
Balaram, S., Nguyen, C. M., Kassim, A., & Krishnaswamy, P. (2022). Consistency-Based Semi-supervised Evidential Active Learning for Diagnostic Radiograph Classification. Medical Image Computing and Computer Assisted Intervention – MICCAI 2022, 675–685. https://doi.org/10.1007/978-3-031-16431-6_64
Abstract:
Deep learning approaches achieve state-of-the-art performance for classifying radiology images, but rely on large labelled datasets that require resource-intensive annotation by specialists. Both semi-supervised learning and active learning can be utilised to mitigate this annotation burden. However, there is limited work on combining the advantages of semi-supervised and active learning approaches for multi-label medical image classification. Here, we introduce a novel Consistency-based Semi-supervised Evidential Active Learning framework (CSEAL). Specifically, we leverage predictive uncertainty based on theories of evidence and subjective logic to develop an end-to-end integrated approach that combines consistency-based semi-supervised learning with uncertainty-based active learning. We apply our approach to enhance four leading consistency-based semi-supervised learning methods: Pseudo-labelling, Virtual Adversarial Training, Mean Teacher and NoTeacher. Extensive evaluations on multi-label Chest X-Ray classification tasks demonstrate that CSEAL achieves substantive performance improvements over two leading semi-supervised active learning baselines. Further, a class-wise breakdown of results shows that our approach can substantially improve accuracy on rarer abnormalities with fewer labelled samples.
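As a rough illustration of the subjective-logic uncertainty the abstract refers to (this is a generic evidential deep learning sketch, not the authors' CSEAL implementation; the function name and plain-Python form are assumptions), per-class evidence can be mapped to Dirichlet parameters, from which a scalar uncertainty mass is derived for ranking unlabelled samples in active learning:

```python
def evidential_uncertainty(evidence):
    """Dirichlet-based belief and uncertainty from subjective logic.

    evidence: list of non-negative per-class evidence values (e.g. from a
    ReLU/softplus network head). Dirichlet parameters are alpha_k = e_k + 1,
    the Dirichlet strength is S = sum(alpha), belief masses are
    b_k = (alpha_k - 1) / S, and the uncertainty mass is u = K / S, so that
    sum(b_k) + u = 1. With no evidence, u = 1; u shrinks as evidence grows.
    """
    k = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    strength = sum(alpha)
    beliefs = [(a - 1.0) / strength for a in alpha]
    u = k / strength
    return beliefs, u


# Active-learning use: score unlabelled samples by u and query the most
# uncertain ones for annotation (sketch under the assumptions above).
def rank_by_uncertainty(evidence_per_sample):
    scores = [evidential_uncertainty(e)[1] for e in evidence_per_sample]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```

For example, a sample with evidence `[0, 0]` gives u = 1 (total ignorance), while `[9, 1]` gives u = 2/12, so the first sample would be queried first.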
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the A*STAR - AME Programmatic Funds
Grant Reference No.: A20H6b0151

Research efforts were supported by the Singapore International Graduate Award (SINGA Award), Agency for Science, Technology and Research (A*STAR) as well as funding and infrastructure for deep learning and medical imaging R&D from the Institute for Infocomm Research, Science and Engineering Research Council, A*STAR.
Description:
This version of the article has been accepted for publication, after peer review and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/978-3-031-16431-6_64
ISBN (electronic):
9783031164316
ISBN (print):
9783031164309
Files uploaded:

miccai2022-2792-camera-ready.pdf (655.10 KB, PDF)