The performance of machine learning models is closely tied to the quality of their training data, reflecting the 'garbage in, garbage out' principle. Label noise in datasets is a key challenge for both training and evaluation. This study introduces two ChatGPT-based methods, ChatGPT-Predict and ChatGPT-Detect, for detecting label noise in labeled datasets. We assess their efficacy against conventional vote-based techniques, focusing on factors such as noise characteristics, dataset complexity, and the impact of prompt engineering. Comprehensive evaluations on both artificial and real-world datasets demonstrate the adaptability of our methods to different noise types and levels. Key findings highlight the critical role of prompt design in language model performance and the marked differences between handling artificial and real-world noise. We acknowledge potential limitations arising from variability in prompt design and suggest that more advanced models such as GPT-4 may yield further improvements. Future research directions include applying these methods with GPT-4, exploring diverse prompt templates, and extending the methodology to real-world datasets with high noise levels. This study contributes to the field by refining noise detection methodologies, thereby enhancing the robustness and reliability of machine learning models.