The increasing reliance on machine learning (ML) models in critical applications has heightened concerns about model poisoning attacks, in which adversaries manipulate training data to compromise model integrity. To address this challenge, we propose a comprehensive framework for detecting ML model poisoning through a novel integration of forensic analysis techniques. Unlike traditional approaches that focus on anomalies in the input data, our framework analyzes the model itself to uncover signs of poisoning. We integrate reverse engineering (model inversion), topological data analysis, Shapley values, and Benford's Law analysis to systematically identify subtle deviations indicative of adversarial manipulation. We evaluate the proposed framework on multiple deep learning architectures, including standard convolutional neural networks, ResNet-18, and YOLOv8, all trained on the CIFAR-10 dataset. Further, we benchmark our framework against existing forensic analysis and ML model poisoning detection techniques, demonstrating improved detection performance. Our findings establish a robust and interpretable methodology for securing ML models against sophisticated model poisoning attacks, paving the way for enhanced resilience in real-world deployments.
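To make one component of this pipeline concrete, the sketch below illustrates how a Benford's Law first-digit test might be applied to a model's weights. It is a minimal illustration under our own assumptions, not the paper's actual implementation: the function names, the torchvision ResNet-18 stand-in, and the decision threshold are all hypothetical.

```python
# Illustrative sketch only: a Benford's Law first-digit test on model weights.
# Names and the threshold below are hypothetical, not the paper's actual API.
import numpy as np
import torch
import torchvision


def first_digit_distribution(values: np.ndarray) -> np.ndarray:
    """Return the empirical frequency of leading digits 1-9."""
    magnitudes = np.abs(values[values != 0])
    # Leading digit of x = floor(x / 10^floor(log10 x)).
    exponents = np.floor(np.log10(magnitudes))
    leading = np.floor(magnitudes / 10.0 ** exponents).astype(int)
    counts = np.bincount(leading, minlength=10)[1:10]
    return counts / counts.sum()


def benford_deviation(model: torch.nn.Module) -> float:
    """Chi-square-style distance between the weights' leading-digit
    distribution and the distribution predicted by Benford's Law."""
    benford = np.log10(1.0 + 1.0 / np.arange(1, 10))  # P(d) = log10(1 + 1/d)
    weights = np.concatenate(
        [p.detach().cpu().numpy().ravel() for p in model.parameters()]
    )
    observed = first_digit_distribution(weights)
    return float(np.sum((observed - benford) ** 2 / benford))


if __name__ == "__main__":
    model = torchvision.models.resnet18(weights=None)  # e.g. a CIFAR-10 backbone
    score = benford_deviation(model)
    # A large deviation from Benford's Law is one forensic signal that the
    # weights may have been manipulated; the 0.05 cutoff is arbitrary here.
    print(f"Benford deviation: {score:.4f}", "(suspicious)" if score > 0.05 else "")
```

In a full pipeline of the kind the abstract describes, a statistic like this would be only one of several forensic signals, combined with model inversion, topological features, and Shapley-value attributions rather than used on its own.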