In many real-world applications, learning a binary classifier from only positive and unlabeled (PU) training data is an important and challenging task. Because negative training samples are absent, PU learning methods must extract effective additional information from the unlabeled samples. This paper proposes a method that combines Mean Teacher with Smooth Neighbors on Teacher Graphs (SNTG) to mitigate overfitting to unlabeled data in PU learning, where unlabeled data is typically assigned a lower weight. The Mean Teacher model ensures that the information in unlabeled data is used effectively, while SNTG maps the high-dimensional data distribution onto low-dimensional manifolds or clustering structures so that the model captures the complex local structure among data points. Moreover, when the Mean Teacher applies data augmentation to unlabeled points, the intrinsic relationships between those points are often overlooked; SNTG helps address this limitation. By combining the SNTG loss, the Mean Teacher consistency loss, and the non-negative PU loss, the proposed framework is validated through experiments on three benchmark datasets (MNIST, Fashion-MNIST, CIFAR-10) and two real-world datasets (Avila and the Electrical Grid Stability Simulated dataset, EGSS). The results show accuracies of 95.03%, 95.18%, 89.73%, 81.12%, and 92.55%, respectively, surpassing most state-of-the-art PU learning algorithms.
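The three-part objective described above can be sketched as follows. This is an illustrative NumPy sketch only, not the authors' implementation: the hinge surrogate in the non-negative PU risk, the mean-squared consistency term, the contrastive form of the SNTG loss, and the weights `lam_c`, `lam_s`, and `margin` are all assumptions for exposition.

```python
import numpy as np

def hinge(z):
    # Hinge surrogate loss (an assumed choice of surrogate).
    return np.mean(np.maximum(0.0, 1.0 - z))

def nnpu_loss(scores_p, scores_u, pi):
    """Non-negative PU risk: pi is the class prior of the positive class,
    scores_p / scores_u are classifier outputs on positive / unlabeled data."""
    risk_p_pos = hinge(scores_p)    # risk of positives labeled positive
    risk_p_neg = hinge(-scores_p)   # risk of positives labeled negative
    risk_u_neg = hinge(-scores_u)   # risk of unlabeled labeled negative
    # Clamp the estimated negative-class risk at zero (the "non-negative" fix).
    return pi * risk_p_pos + max(0.0, risk_u_neg - pi * risk_p_neg)

def consistency_loss(student_out, teacher_out):
    """Mean-squared consistency between student and EMA-teacher predictions."""
    return np.mean((student_out - teacher_out) ** 2)

def sntg_loss(features, teacher_labels, margin=1.0):
    """Contrastive loss on the teacher graph: pull together points the teacher
    labels identically, push apart differently labeled points up to `margin`."""
    n, total = len(features), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(features[i] - features[j])
            if teacher_labels[i] == teacher_labels[j]:
                total += d ** 2                      # neighbors: shrink distance
            else:
                total += max(0.0, margin - d) ** 2   # non-neighbors: enforce margin
    return total / (n * (n - 1) / 2)

def combined_loss(scores_p, scores_u, student_out, teacher_out,
                  features, teacher_labels, pi, lam_c=1.0, lam_s=0.1):
    # Weighted sum of the three terms; lam_c and lam_s are hypothetical weights.
    return (nnpu_loss(scores_p, scores_u, pi)
            + lam_c * consistency_loss(student_out, teacher_out)
            + lam_s * sntg_loss(features, teacher_labels))
```

In this sketch the SNTG term operates on intermediate features and teacher-derived pseudo-labels, so it couples unlabeled points pairwise, which is exactly the local-structure information the consistency loss alone does not capture.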