Supervised deep learning approaches provide state-of-the-art performance on medical image classification tasks for disease screening. However, these methods require large labeled datasets that involve resource-intensive expert annotation. Further, disease screening applications have a low prevalence of abnormal samples; this class imbalance makes the task more akin to anomaly detection. While the machine learning community has proposed unsupervised deep learning methods for anomaly detection, they have yet to be characterized on medical images, where normal vs. anomaly distinctions may be more subtle and variable. In this work, we characterize existing unsupervised anomaly detection methods on retinal fundus images, and find that they require significant fine-tuning and offer unsatisfactory performance. We thus propose an efficient and effective transfer-learning based approach for unsupervised anomaly detection. Our method employs a deep convolutional neural network trained on ImageNet as a feature extractor, and subsequently feeds the learned feature representations into an existing unsupervised anomaly detection method. We show that our approach significantly outperforms baselines on two natural image datasets and two retinal fundus image datasets, all with minimal fine-tuning. We further show the ability to leverage very small numbers of labeled anomalies to improve performance. Our work establishes a strong unsupervised baseline for image-based anomaly detection, alongside a flexible and scalable approach for screening applications.
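The pipeline described above (pretrained-CNN features fed to an off-the-shelf unsupervised anomaly detector) can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: synthetic Gaussian vectors stand in for ImageNet CNN embeddings (e.g., 512-d ResNet penultimate-layer features), and scikit-learn's `IsolationForest` is one illustrative choice of detector; the specific detector used in the paper is not named in this abstract.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for learned feature representations: in the actual pipeline
# these would come from an ImageNet-pretrained CNN feature extractor
# (e.g., a torchvision ResNet with its classification head removed).
normal_feats = rng.normal(0.0, 1.0, size=(500, 512))   # "healthy" images
anomaly_feats = rng.normal(4.0, 1.0, size=(20, 512))   # off-distribution images

# Fit the unsupervised detector on predominantly-normal data,
# mirroring a low-prevalence disease-screening setting.
detector = IsolationForest(random_state=0).fit(normal_feats)

# score_samples returns higher values for more "normal" points,
# so anomalies should receive lower average scores.
normal_scores = detector.score_samples(normal_feats)
anomaly_scores = detector.score_samples(anomaly_feats)
print(normal_scores.mean() > anomaly_scores.mean())
```

Because the anomaly detector only sees fixed feature vectors, swapping in a different detector (or a different pretrained backbone) requires no retraining of the CNN, which is what makes the approach efficient and scalable.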