Abstract: The rapid advancement of artificial intelligence (AI) has raised concerns about the need for human control over AI systems \cite{Hoellinger2023}. This paper explores the ethical implications and challenges of ensuring human oversight and control of AI. It further examines the ethical principles governing AI intervention and the management of bias in AI systems, and it concludes by emphasizing the significance of human-centered AI and the ethical considerations involved in keeping AI systems under human control \cite{account}.

Introduction: The use of artificial intelligence (AI) has become increasingly prevalent in fields including healthcare, decision-making, and research, and ensuring human control over AI systems is a critical concern. Several scholars have emphasized the need for human oversight and control in the development and application of AI \cite{governance}. In high-stakes public sector decision-making in particular, human oversight at all stages of an AI system's development has been proposed as a safeguard within governance frameworks \cite{oversight}. This aligns with the view that AI should be a tool rather than a master, with human oversight as a core factor in establishing meaningful human control of such systems \cite{control}. The development and application of AI also raise legal and ethical considerations, and the need for trustworthy AI and for the regulation of AI systems is widely recognized. The proposed Artificial Intelligence Act takes a risk-based approach to regulating AI, aiming to foster an ecosystem of trust that gives citizens confidence in AI applications and provides legal certainty for innovation using AI.

Methods: The discussion in this paper is informed by a range of scholarly works that address the ethical dimensions of AI governance and the human-centric use of AI.
For instance, the authors of \cite{intelligence} delve into the ethical assessment of AI systems in comparison with humans, shedding light on fundamental differences that are relevant to ethical evaluation. The study in \cite{102249} highlights the potential for algorithmic decision-making to produce more objective decisions than those made by humans, while emphasizing the need for human-centric use of AI. The paper also draws on the literature on AI governance and human rights, such as the work in \cite{29pn4q}, which emphasizes the potential of AI to fundamentally alter the human experience.

Discussion: It is imperative to emphasize the need for human control over AI systems. The potential impact of AI on many aspects of daily life, including healthcare, decision-making, and societal interactions, underscores the significance of human oversight and control \cite{ethicsa}. The ethical and practical implications of AI in everyday life necessitate frameworks that prioritize human control and responsibility in the development and application of AI systems \cite{ethics}. This aligns with the fundamental principle that AI should serve as a tool under human direction rather than as an autonomous decision-maker \cite{ethicsa}. The risks associated with the misuse or abuse of AI technology further underscore the need for human oversight and control to ensure the ethical and responsible deployment of AI in everyday life. Prioritizing human control over AI systems is therefore essential to uphold ethical standards, mitigate potential risks, and ensure that AI serves the best interests of humanity \cite{system}.

In the context of education, AI-powered tools can help teachers personalize learning, identify and address student learning gaps, and provide real-time feedback \cite{technologies}. However, it is important to use AI in a way that does not replace human teachers.
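As a concrete illustration of the "identify learning gaps" use just mentioned, the following minimal Python sketch flags topics on which a student's average quiz score falls below a mastery threshold. All names, the data format, and the 70% threshold are hypothetical assumptions for illustration; consistent with the argument of this paper, such output should only prompt a human teacher's review, not drive automatic decisions.

```python
# Hypothetical sketch of one narrow piece of an AI-assisted teaching tool:
# flagging per-topic learning gaps from quiz scores. The data format and
# the 0.7 mastery threshold are illustrative assumptions, not a real system.
from collections import defaultdict

MASTERY_THRESHOLD = 0.7  # assumed cutoff below which a topic counts as a gap


def find_learning_gaps(quiz_results):
    """quiz_results: list of (student, topic, score) tuples, score in [0, 1].

    Returns {student: [topics whose average score is below the threshold]},
    intended as a prompt for a human teacher to investigate, not a verdict.
    """
    scores = defaultdict(list)
    for student, topic, score in quiz_results:
        scores[(student, topic)].append(score)

    gaps = defaultdict(list)
    for (student, topic), vals in scores.items():
        if sum(vals) / len(vals) < MASTERY_THRESHOLD:
            gaps[student].append(topic)
    return dict(gaps)


results = [
    ("ana", "fractions", 0.9), ("ana", "fractions", 0.8),
    ("ana", "decimals", 0.4), ("ben", "decimals", 0.95),
]
print(find_learning_gaps(results))  # {'ana': ['decimals']}
```

The deliberately simple design keeps the system transparent: a teacher can see exactly why a topic was flagged, which addresses the opacity and accountability concerns raised below.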
AI should be used as a tool to support teachers and students, not as a replacement for them, for several reasons.

First, AI cannot replicate the human qualities that are essential for good teaching, including empathy, compassion, and the ability to build relationships with students \cite{implications}. AI systems can be programmed to recognize and respond to certain emotions, but they cannot understand or empathize with students.

Second, AI systems are often biased and may produce outputs that are unfair or inaccurate. This is especially concerning in education, where AI systems could be used to make decisions that have a significant impact on students' lives \cite{itfytq}.

Third, AI systems can be opaque and difficult for humans to understand, which makes it hard to hold them accountable for their decisions and to ensure they are used in a fair and responsible manner \cite{ethicsb}.

There are nonetheless specific ways AI can serve as a tool in education. AI can create personalized learning plans based on a student's individual needs, interests, and learning style \cite{plan}. AI can identify and address learning gaps by analyzing student data to find areas where students are struggling and to provide targeted support \cite{plan}. AI can also give teachers real-time feedback on student progress, helping them identify students at risk of falling behind and provide the additional support those students need.

Conclusion: Prioritizing human control over AI systems is essential to ensure that AI is used ethically and responsibly in all aspects of our lives, including education, healthcare, decision-making, and societal interactions \cite{ethicsa}.
AI-powered tools can be valuable for supporting human goals, but they should not be used to replace humans or to make decisions on our behalf. AI systems are often biased and opaque, and they can significantly affect our lives if they are not used responsibly \cite{survey}. To ensure that AI is used for good, we need frameworks that prioritize human control and responsibility in the development and application of AI systems \cite{performance}. This means ensuring that humans always remain in the driver's seat and that AI systems support our goals rather than replace us.