Sun Wei

This paper presents a hybrid architecture for mobile health (mHealth) applications designed to overcome three fundamental limitations of current telemedicine systems: severe data scarcity in medical imaging domains, the computational constraints of handheld consumer devices, and the need for reliable, safety-guaranteed automated patient guidance. We introduce and experimentally validate an integrated pipeline that combines synthetic data generation with Generative Adversarial Networks (GANs) to build expansive, privacy-preserving dermatological datasets, which in turn train robust diagnostic convolutional neural networks (CNNs). We then develop and benchmark an ultra-lightweight, quantization-aware skin segmentation model optimized for real-time inference on standard smartphone hardware, serving as a crucial pre-processing step for reliable analysis. Finally, we integrate a deterministic, rule-based Expert System that processes multimodal patient data, including diagnostic outputs from the visual AI and user-reported metrics, to generate personalized, logically verifiable management recommendations for chronic conditions, with diabetes mellitus as our primary case study. We provide mathematical formalization, architectural details, and experimental results covering synthetic data fidelity (Fréchet Inception Distance scores below 15.2), segmentation accuracy (Dice coefficients exceeding 0.95 on mobile CPUs), and decision-logic validation. The proposed framework demonstrates that the synergistic integration of probabilistic deep learning and deterministic symbolic reasoning can yield a scalable, privacy-conscious, and clinically actionable closed-loop system for remote patient care.
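The Dice coefficient cited for the segmentation model measures overlap between a predicted mask and the ground-truth mask, 2|A∩B| / (|A| + |B|). A minimal sketch of the metric follows; the function name, epsilon smoothing term, and toy masks are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|).

    `eps` avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x2 masks: one overlapping pixel out of |A| = 2, |B| = 1.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(round(dice_coefficient(a, b), 3))  # 2*1 / (2+1) ≈ 0.667
```

A score of 1.0 indicates perfect overlap, so the reported values above 0.95 correspond to near-complete agreement with ground-truth skin regions.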