Artificial Intelligence has made significant strides, leading to advanced models such as ChatGPT, developed by OpenAI. ChatGPT interacts with users in a conversational style, generating human-like responses across various domains. Despite its versatility, the accuracy of ChatGPT's responses can vary with the context, the complexity of the queries, and the repetition of prompts within a short time frame. To address these challenges and highlight potential issues, we propose the Precision Answer Comparison and Evaluation Model (PACEM), designed to thoroughly assess and evaluate ChatGPT's performance. PACEM focuses on evaluating ChatGPT's accuracy and consistency across diverse domains, including literature, history, law or ethics, and sports or athletics. By systematically comparing responses against the actual answers and measuring performance, PACEM provides a comprehensive analysis of ChatGPT's capabilities and limitations in delivering reliable information. The model also measures processing time during evaluation, recording how long ChatGPT takes to generate each response alongside the comparison against the actual answers. The evaluation shows that ChatGPT generally delivers accurate, high-quality responses, often surpassing answers composed by humans or drawn from alternative sources. However, processing time tends to increase for lengthy responses, even when the answers remain accurate. Finally, the study concludes with key findings and suggests future work to address new challenges that may arise in similar contexts.
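The abstract does not spell out PACEM's internals. As a rough illustration of the comparison-and-timing idea described above, the sketch below scores a model's response against a reference answer and records how long the response took to generate. The `ask_model` callable, the sample reference items, the lexical similarity measure, and the 0.6 correctness threshold are all assumptions made for illustration, not the authors' implementation.

```python
import time
from difflib import SequenceMatcher

# Hypothetical reference set: each item pairs a prompt with a known correct answer.
REFERENCE_ITEMS = [
    {"domain": "literature", "prompt": "Who wrote 'Pride and Prejudice'?", "answer": "Jane Austen"},
    {"domain": "history", "prompt": "In what year did World War II end?", "answer": "1945"},
]


def similarity(generated: str, reference: str) -> float:
    """Crude lexical similarity in [0, 1]; a stand-in for PACEM's comparison step."""
    return SequenceMatcher(None, generated.lower(), reference.lower()).ratio()


def evaluate(ask_model, items=REFERENCE_ITEMS, threshold=0.6):
    """Query the model for each item, score the response, and record processing time.

    `ask_model` is any callable that takes a prompt string and returns the
    model's response string (e.g. a thin wrapper around a ChatGPT call).
    """
    results = []
    for item in items:
        start = time.perf_counter()
        response = ask_model(item["prompt"])
        elapsed = time.perf_counter() - start  # per-response processing time

        score = similarity(response, item["answer"])
        results.append({
            "domain": item["domain"],
            "score": score,
            "correct": score >= threshold,
            "seconds": elapsed,
        })
    return results
```

In practice, the lexical ratio would likely be replaced by a stronger comparison (exact-match normalization, embedding similarity, or human judgment), but the loop captures the two quantities the abstract emphasizes: answer accuracy relative to the actual answers and per-response processing time.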