Zhaoyang Zuo et al.

To improve the accuracy of identifying armor damage points in battlefield environments, we developed an advanced semantic segmentation model based on SegNet, tailored specifically for segmenting damage points in armor images. The original SegNet model exhibited limitations, such as unclear segmentation boundaries and feature loss during armor image processing. To address these issues, we integrated the DenseNet architecture, which establishes direct connections between feature maps across different network layers. These dense connections enable efficient reuse of image features, significantly enhancing segmentation accuracy. Our improved model offers greater flexibility in feature utilization than traditional architectures such as U-Net and fully convolutional networks (FCNs), allowing feature information to be integrated and propagated effectively across layers. To validate our approach, we constructed a dataset comprising images of three distinct types of armor and trained the model using the PyTorch deep learning framework. We conducted a comprehensive comparison between the original SegNet model and our enhanced version using standard evaluation metrics. The experimental results indicate that the improved model achieved a precision of 85.32%, a recall of 83.87%, a specificity of 84.36%, and a Dice coefficient of 84.81%. Additionally, the enhanced SegNet model demonstrated a 3.53% increase in recognition success rate over the original model, while maintaining a comparable processing time for batches of 100 images. These findings underscore the effectiveness of our model in accurately segmenting damage points under challenging battlefield conditions, thereby contributing to more reliable assessments of armor integrity and improved decision-making in military applications.
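The dense connectivity described above can be illustrated with a minimal PyTorch sketch. This is not the authors' actual architecture; the class names, the growth rate of 16, and the layer count are illustrative assumptions. It only shows the core mechanism: each layer's output is concatenated with its input along the channel axis, so later layers can reuse every earlier feature map.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One BN-ReLU-Conv layer whose output is concatenated with its input
    (the basic DenseNet connection pattern). Hypothetical sketch, not the
    paper's exact layer definition."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.conv = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3,
                      padding=1, bias=False),
        )

    def forward(self, x):
        # Concatenate new features with all previous ones (channel dim).
        return torch.cat([x, self.conv(x)], dim=1)

class DenseBlock(nn.Module):
    """Stack of DenseLayers; channel count grows by growth_rate per layer,
    which is what allows direct feature reuse across layers."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        layers, channels = [], in_channels
        for _ in range(num_layers):
            layers.append(DenseLayer(channels, growth_rate))
            channels += growth_rate
        self.block = nn.Sequential(*layers)
        self.out_channels = channels

    def forward(self, x):
        return self.block(x)

# Example: a 64-channel encoder feature map passes through 4 dense layers
# with growth rate 16, yielding 64 + 4 * 16 = 128 output channels.
block = DenseBlock(in_channels=64, growth_rate=16, num_layers=4)
y = block(torch.randn(1, 64, 32, 32))
```

In a SegNet-style encoder-decoder, such blocks would replace plain convolutional stages, so the decoder receives concatenated features from multiple depths rather than only the preceding layer's output.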
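The four reported metrics all derive from the per-pixel confusion counts of a binary damage mask. As a reference for how such figures are typically computed (the helper name and toy masks below are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Precision, recall, specificity, and Dice for binary masks,
    computed from per-pixel true/false positives and negatives."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    tp = np.logical_and(pred, target).sum()      # damage predicted & present
    fp = np.logical_and(pred, ~target).sum()     # damage predicted, absent
    fn = np.logical_and(~pred, target).sum()     # damage missed
    tn = np.logical_and(~pred, ~target).sum()    # background correct
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return precision, recall, specificity, dice

# Toy 2x2 masks: one true positive, one false positive,
# one false negative, one true negative.
pred = np.array([[1, 1],
                 [0, 0]])
target = np.array([[1, 0],
                   [1, 0]])
p, r, s, d = segmentation_metrics(pred, target)  # each equals 0.5 here
```

Note that the Dice coefficient is the harmonic mean of precision and recall (identical to the F1 score for binary masks), which is why the reported Dice value of 84.81% lies between the precision (85.32%) and recall (83.87%).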