The GPT language model, developed by OpenAI, has attracted widespread attention for its ability to engage in conversational interactions with human users, a capability derived from its extensive training on vast amounts of written text. However, GPT models and their derivatives can exhibit unethical behaviors, and research on the ethical implications of Large Language Models (LLMs) in higher education remains limited. To address this gap, this paper comprehensively examines the harmful behaviors exhibited by GPT and its spin-offs, including social and monolingual biases, problems of reliability and trustworthiness, data privacy and security concerns, toxicity in generated content, challenges in human-computer interaction, and environmental impact. By conducting this study, we provide educators, administrators, and students with insights into the ethical risks inherent in GPT and its variants, enabling them to proactively address and mitigate potential ethical concerns arising from the use of LLMs in educational settings.