AUTHOREA
Joseph Foley
Cybersecurity
Ireland

Public Documents 3
Algorithmic Bias in Machine Learning-Based Cyber Defence: Taxonomy, Mathematical Fram...
Joseph Foley

March 16, 2026
As Artificial Intelligence (AI) becomes the operational backbone of Security Operations Centres (SOCs), algorithmic bias poses a dual threat: technical failure and ethical violation. This study expands the taxonomy of bias within AI-driven cybersecurity systems, focusing on Intrusion Detection Systems (IDS) and Automated Threat Hunting pipelines. Drawing on peer-reviewed literature from 2015 to 2026, it analyses the mathematical foundations of fairness constraints, including Equalised Odds [14] and Predictive Parity, and their application to real-time network anomaly detection. Mitigation strategies across the entire machine learning pipeline (pre-processing, in-processing, and post-processing) [7] are surveyed. The role of Explainable AI (XAI) methods, specifically SHAP [18] and LIME [27], as safeguards against bias is evaluated. Governance implications of Regulation (EU) 2024/1689 (the EU AI Act) [25] and the NIST AI Risk Management Framework (AI RMF) [24] are examined. Additionally, adversarial bias injection, including data poisoning attacks in Federated Learning [29], is explored as an emerging offensive vector. The synthesis demonstrates that fairness is not only a social imperative but also a fundamental component of system robustness, and that biased models are exploitable.
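The Equalised Odds constraint named in this abstract requires that a classifier's true-positive and false-positive rates be equal across protected groups. The sketch below is an illustrative definition only, not the paper's implementation; the function names and data are hypothetical.

```python
# Illustrative sketch of the Equalised Odds criterion: a classifier satisfies
# it when TPR and FPR are equal across groups. All names here are invented
# for illustration; they do not come from the paper being summarised.

def group_rates(y_true, y_pred, groups, group):
    """True-positive and false-positive rates restricted to one group."""
    tp = fn = fp = tn = 0
    for yt, yp, g in zip(y_true, y_pred, groups):
        if g != group:
            continue
        if yt == 1 and yp == 1:
            tp += 1
        elif yt == 1 and yp == 0:
            fn += 1
        elif yt == 0 and yp == 1:
            fp += 1
        else:
            tn += 1
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

def equalised_odds_gap(y_true, y_pred, groups):
    """Largest absolute TPR or FPR difference between two groups.

    A gap of 0.0 means the predictions satisfy Equalised Odds exactly;
    in practice one checks that the gap is below a chosen tolerance.
    """
    gs = sorted(set(groups))
    (tpr_a, fpr_a), (tpr_b, fpr_b) = (
        group_rates(y_true, y_pred, groups, g) for g in gs
    )
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))
```

For example, if group A's alerts have a 0.5 TPR while group B's have a 1.0 TPR, the gap is 0.5, flagging a violation of the constraint regardless of overall accuracy.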
Cybersecurity Future Threats in Artificial Intelligence: A Comprehensive Analysis
Joseph Foley

January 09, 2026
The rapid integration of artificial intelligence (AI) systems into critical infrastructure, healthcare, finance, and national security sectors has introduced unprecedented cybersecurity challenges. This paper examines emerging threats that exploit AI systems and leverage their capabilities for malicious purposes. Adversarial machine learning attacks, AI-powered cyber threats, model poisoning, privacy vulnerabilities, and the weaponisation of generative AI are analysed. The current threat landscape is synthesised, attack vectors specific to AI architectures are examined, and defensive strategies are evaluated. Critical gaps in existing security frameworks are identified, and a multi-layered defence approach is proposed, incorporating adversarial robustness, secure AI development practices, and adaptive threat detection. The findings demonstrate that traditional cybersecurity paradigms are insufficient for AI systems, necessitating novel security architectures that address the unique attack surfaces of machine learning models.
Ethical Implications of Data Privacy in Gamification: Examining Data Collection Pract...
Joseph Foley

January 08, 2026
Gamification has become an effective strategy for increasing user engagement in fields such as education, healthcare, fitness, and enterprise applications. The incorporation of artificial intelligence into gamified systems has enhanced their impact but also raised significant concerns about data privacy and ethical data use. This paper investigates the moral implications of data collection and privacy in AI-driven gamified systems, focusing on the balance between the advantages of personalisation and the protection of privacy rights. By conducting a comprehensive review of the literature, case studies, and regulatory frameworks, we identify major ethical issues, including informed consent, data minimisation, algorithmic transparency, and the risk of manipulation. We present a multi-layered ethical framework that incorporates privacy-by-design principles, user empowerment strategies, and accountability mechanisms to support the responsible development and implementation of gamified systems. Our analysis indicates that although gamification provides notable benefits, its adoption must be guided by careful attention to privacy principles, regulatory requirements, and ethical responsibilities to users. This research advances the ongoing discussion on responsible AI and offers actionable recommendations for stakeholders within the gamification ecosystem.

Powered by Authorea.com
