N. A. Koohbanani, B. Unnikrishnan, S. A. Khurram, P. Krishnaswamy and N. Rajpoot, "Self-Path: Self-supervision for Classification of Pathology Images with Limited Annotations," in IEEE Transactions on Medical Imaging, doi: 10.1109/TMI.2021.3056023.
While high-resolution pathology images lend themselves well to ‘data hungry’ deep learning algorithms, obtaining exhaustive annotations on these images for learning is a major challenge. In this paper, we propose a self-supervised convolutional neural network (CNN) framework to leverage unlabeled data for learning generalizable and domain-invariant representations in pathology images. Our proposed framework, termed Self-Path, employs multi-task learning in which the main task is tissue classification and the pretext tasks are a variety of self-supervised tasks with labels inherent to the input images. We introduce novel pathology-specific self-supervision tasks that leverage contextual, multi-resolution and semantic features in pathology images for semi-supervised learning and domain adaptation. We investigate the effectiveness of Self-Path on three different pathology datasets. Our results show that Self-Path with the pathology-specific pretext tasks achieves state-of-the-art performance for semi-supervised learning when small amounts of labeled data are available. Further, we show that Self-Path improves domain adaptation for histopathology image classification when no labeled data are available for the target domain. This approach can potentially be employed for other applications in computational pathology, where the annotation budget is often limited or a large amount of unlabeled image data is available.
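To make the idea of "labels inherent to the input images" concrete, the following is a minimal illustrative sketch of a generic pretext task (rotation prediction), not the paper's pathology-specific tasks: each image is rotated by a random multiple of 90 degrees, and the applied rotation index becomes a free supervisory label for a pretext classifier. The function name and array layout here are assumptions for illustration.

```python
import numpy as np

def make_rotation_pretext_batch(images, rng):
    """Given a batch of images shaped (N, H, W, C), rotate each by a
    random multiple of 90 degrees. Returns (rotated_images, labels),
    where labels in {0, 1, 2, 3} encode the rotation and come "for
    free" from the transformation itself -- no human annotation needed."""
    labels = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, int(k)) for img, k in zip(images, labels)])
    return rotated, labels

# Hypothetical usage with a random batch standing in for pathology tiles.
rng = np.random.default_rng(0)
batch = rng.random((8, 32, 32, 3))
rotated, labels = make_rotation_pretext_batch(batch, rng)
```

A pretext head trained to predict `labels` from `rotated` shares its feature extractor with the main tissue-classification head, so the unlabeled images still shape the learned representation.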
The work of Navid Alemi Koohbanani was supported by The Alan Turing Institute. The work of Nasir Rajpoot was supported in part by the U.K. Medical Research Council under Grant MR/P015476/1 and in part by the PathLAKE Digital Pathology Consortium, which is funded from the Data to Early Diagnosis and Precision Medicine Strand of the Government’s Industrial Strategy Challenge Fund, managed and delivered by U.K. Research and Innovation (UKRI). The work of Balagopal Unnikrishnan and Pavitra Krishnaswamy was supported by funding and infrastructure for deep learning and medical imaging research from the Institute for Infocomm Research, and Science and Engineering Research Council, A*STAR, Singapore.