This project investigates the effects of weight quantization on deep neural network performance, focusing on image classification datasets. Weight quantization reduces the precision of model weights, which shrinks model size and lowers computational requirements, making quantized models suitable for resource-constrained devices. The project explores the impact of these techniques on model accuracy, training efficiency, and generalization. The work is motivated by the need for efficient and effective deep learning models for image classification that can be readily deployed in real-world applications.
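
To make the mechanism concrete, the sketch below shows one common form of weight quantization, symmetric per-tensor int8 quantization, written with NumPy. This is an illustrative assumption about the general technique, not this project's specific implementation; the function names and the example layer shape are hypothetical.

```python
import numpy as np

def quantize_weights_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of float32 weights to int8.

    Returns the int8 weights and the scale needed to dequantize them.
    """
    # Map the largest absolute weight to the int8 limit (127).
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_weights(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float32 weights for inference or accuracy checks.
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(256, 128).astype(np.float32)  # hypothetical layer weights
    q, scale = quantize_weights_int8(w)
    w_hat = dequantize_weights(q, scale)
    print("size reduction: %.1fx" % (w.nbytes / q.nbytes))   # ~4x (32-bit -> 8-bit)
    print("max abs error:  %.5f" % np.max(np.abs(w - w_hat)))
```

Storing weights as int8 plus a single scale factor cuts memory roughly fourfold relative to float32, at the cost of a small rounding error per weight; quantifying how that error affects accuracy and generalization is the focus of this project.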