One-pixel attacks (OPAs) reveal critical vulnerabilities in deep neural networks, where a minimal, imperceptible modification to a single pixel can cause misclassifications with potentially serious consequences depending on the application domain. While existing research primarily addresses the technical aspects, attack models, and defense strategies, a comprehensive review synthesizing the advantages, limitations, and evolving countermeasures of OPAs has been lacking. Adhering to the PRISMA framework, this study bridges that gap by critically reviewing 30 high-impact studies from 2017 to 2025, offering an in-depth analysis of OPA mechanisms, applications, and countermeasures. The review emphasizes the growing sophistication of black-box evolutionary algorithms in crafting highly effective, stealthy attacks that often target high-saliency regions across benchmark datasets. Particular attention is given to domain-specific applications, such as medical imaging, where attacks can manipulate cancer diagnoses, and quantum communication, highlighting the broader implications for critical systems. Current defense strategies are predominantly reactive and struggle to generalize, often sacrificing accuracy on clean benchmark data. This review identifies key research gaps and proposes directions for future work. By offering a structured taxonomy, it aims to guide researchers and practitioners in building secure, interpretable, and robust AI systems in the face of adversarial threats.
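The black-box evolutionary search underlying one-pixel attacks can be illustrated with a minimal sketch: a hand-rolled differential-evolution loop that searches for a single pixel, encoded as (row, column, value), minimizing the true-class score of a classifier it can only query. The toy linear "classifier", the 8×8 image size, and all hyperparameters here are illustrative assumptions, not taken from any reviewed study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "classifier": two linear templates over an 8x8 grayscale
# image. It stands in for a real CNN; any black-box image -> scores callable
# would work, since the attack only needs query access.
W = rng.normal(size=(2, 8, 8))

def predict(img):
    scores = (W * img).sum(axis=(1, 2))
    return scores, int(np.argmax(scores))

def one_pixel_attack(img, true_label, iters=50, pop=30):
    """Differential evolution over a single (row, col, value) perturbation."""
    # Each candidate: row in [0, 8), col in [0, 8), new pixel value in [0, 1].
    P = rng.uniform([0, 0, 0], [8, 8, 1], size=(pop, 3))

    def fitness(c):
        x = img.copy()
        x[int(c[0]), int(c[1])] = c[2]          # apply the one-pixel change
        scores, _ = predict(x)
        return scores[true_label]               # minimize true-class score

    for _ in range(iters):
        for i in range(pop):
            # Classic DE mutation: combine three random population members.
            a, b, c3 = P[rng.choice(pop, 3, replace=False)]
            trial = np.clip(a + 0.5 * (b - c3), [0, 0, 0], [7.99, 7.99, 1])
            if fitness(trial) < fitness(P[i]):  # greedy selection
                P[i] = trial

    best = min(P, key=fitness)
    adv = img.copy()
    adv[int(best[0]), int(best[1])] = best[2]
    return adv

img = rng.uniform(size=(8, 8))
_, label = predict(img)
adv = one_pixel_attack(img, label)
changed = int((adv != img).sum())  # at most one pixel modified
```

The fitness function never inspects gradients or weights, which is what makes the attack black-box; the evolutionary loop simply keeps whichever single-pixel candidates most suppress the true-class score.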