Semi-supervised classification of radiology images with NoTeacher: A teacher that is not mean

Title:
Semi-supervised classification of radiology images with NoTeacher: A teacher that is not mean
Journal:
Medical Image Analysis
Publication Date:
01 July 2021
Citation:
Unnikrishnan, B., Nguyen, C., Balaram, S., Li, C., Foo, C. S., Krishnaswamy, P. (2021). Semi-supervised classification of radiology images with NoTeacher: A teacher that is not mean. Medical Image Analysis, 73, 102148. doi:10.1016/j.media.2021.102148
Abstract:
Deep learning models achieve strong performance for radiology image classification, but their practical application is bottlenecked by the need for large labeled training datasets. Semi-supervised learning (SSL) approaches leverage small labeled datasets alongside larger unlabeled datasets and offer potential for reducing labeling cost. In this work, we introduce NoTeacher, a novel consistency-based SSL framework which incorporates probabilistic graphical models. Unlike Mean Teacher, which maintains a teacher network updated via a temporal ensemble, NoTeacher employs two independent networks, thereby eliminating the need for a teacher network. We demonstrate how NoTeacher can be customized to handle a range of challenges in radiology image classification. Specifically, we describe adaptations for scenarios with 2D and 3D inputs, uni- and multi-label classification, and class distribution mismatch between the labeled and unlabeled portions of the training data. In realistic empirical evaluations on three public benchmark datasets spanning the workhorse modalities of radiology (X-Ray, CT, MRI), we show that NoTeacher achieves over 90–95% of the fully supervised AUROC with less than 5–15% labeling budget. Further, NoTeacher outperforms established SSL methods with minimal hyperparameter tuning, offering a principled and practical option for semi-supervised learning in radiology applications.
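The core idea sketched in the abstract (supervised loss on labeled data for two independent networks, plus a consistency term that pulls their predictions together on all data, rather than toward a temporally ensembled teacher) can be illustrated with a minimal NumPy loss function. This is a simplified stand-in, not the paper's actual objective, which is derived from a probabilistic graphical model; the function name, the MSE consistency term, and the weight `lam` are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def two_network_ssl_loss(logits_f, logits_g, labels, labeled_mask, lam=1.0):
    """Illustrative two-network SSL objective (hypothetical, not the
    paper's exact formulation):
      - cross-entropy on labeled samples, for both networks f and g;
      - a consistency penalty between the two networks' predicted
        distributions over ALL samples (labeled and unlabeled).
    Neither network acts as a teacher; both are trained jointly."""
    p_f, p_g = softmax(logits_f), softmax(logits_g)
    eps = 1e-12
    n_labeled = max(int(labeled_mask.sum()), 1)
    # Supervised term: negative log-probability of the true class,
    # averaged over labeled samples only.
    ce_f = -np.log(p_f[labeled_mask, labels[labeled_mask]] + eps).sum() / n_labeled
    ce_g = -np.log(p_g[labeled_mask, labels[labeled_mask]] + eps).sum() / n_labeled
    # Unsupervised term: mean squared disagreement between the two
    # networks' predictions, computed on every sample.
    consistency = np.mean((p_f - p_g) ** 2)
    return ce_f + ce_g + lam * consistency
```

For example, with three samples of which only the first two are labeled, the third sample contributes solely through the consistency term, which is how the unlabeled data influences both networks.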
License type:
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
Funding Info:
This research/project is supported by the Agency for Science, Technology and Research (A*STAR) AME Programmatic Funds, Grant Reference No.: A20H6b0151.

Research efforts were supported by funding and infrastructure for deep learning and medical imaging R&D from the Institute for Infocomm Research, Science and Engineering Research Council, A*STAR AME Programmatic Funds (Grant No. A20H6b0151), and by the Singapore International Graduate Award (SINGA Award to Shafa Balaram), Agency for Science, Technology and Research (A*STAR), Singapore. We acknowledge insightful discussions with Jayashree Kalpathy-Cramer and Praveer Singh at the Massachusetts General Hospital, Harvard Medical School, Boston USA. We also thank Ashraf Kassim from the National University of Singapore for his support.
ISSN:
1361-8415
Files uploaded:
noteachersupplementary.pdf (1.74 MB, PDF)
noteacher-preprint.pdf (3.38 MB, PDF)