Deep learning approaches offer strong performance for radiology image classification, but are bottlenecked by the need for large labeled training datasets. Semi-supervised learning (SSL) methods, which can leverage small labeled datasets alongside larger unlabeled datasets, offer potential for reducing labeling cost. However, few studies have demonstrated gains from SSL for real-world radiology image classification. Here, we adapt three leading SSL methods (Mean Teacher, Virtual Adversarial Training, Pseudo-labeling) for radiograph classification, and characterize their performance on two public X-ray and CT classification benchmarks. We observe that Mean Teacher can achieve strong performance gains in the low labeled-data regime, but is sensitive to hyperparameters and susceptible to confirmation bias. To address these issues, we introduce a novel SSL method named NoTeacher. This method incorporates a probabilistic graphical model to maximize mutual agreement between student networks, thereby eliminating the need for a teacher network. We show that NoTeacher outperforms contemporary SSL baselines by enforcing better consistency regularization, and achieves over 90% of the fully supervised AUROC with less than a 5% labeling budget.
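To make the core idea concrete, the sketch below illustrates consistency regularization between two student networks on unlabeled data, the mechanism NoTeacher builds on. This is a deliberately simplified toy (linear "networks", a plain MSE disagreement term), not the paper's actual probabilistic graphical model; all names and shapes here are illustrative assumptions.

```python
# Toy sketch of two-student consistency regularization (simplified;
# NoTeacher itself couples the students via a probabilistic graphical
# model rather than this plain MSE term). All names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(w, x):
    # A linear "student network": class probabilities from logits x @ w
    return softmax(x @ w)

def consistency_loss(w1, w2, x_unlabeled):
    # Mean squared disagreement between the two students' predictions
    # on the same unlabeled inputs; minimizing it enforces agreement.
    p1 = forward(w1, x_unlabeled)
    p2 = forward(w2, x_unlabeled)
    return float(np.mean((p1 - p2) ** 2))

# Two independently initialized students on toy 4-feature, 3-class data
w1 = rng.normal(size=(4, 3))
w2 = rng.normal(size=(4, 3))
x_u = rng.normal(size=(8, 4))  # a batch of unlabeled examples

loss = consistency_loss(w1, w2, x_u)
print(round(loss, 4))
```

In a full training loop this term would be added to the supervised loss on the labeled subset and minimized jointly, pushing both students toward mutually consistent predictions without a separate teacher network.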