Skin cancer, particularly melanoma, is among the deadliest cancers, and early detection is crucial for improving survival rates. Traditional methods for skin cancer classification rely on handcrafted feature extraction, which can be confounded by extraneous patterns and background noise. Deep learning models such as convolutional neural networks (CNNs) have shown promise in medical imaging, but they are often difficult to interpret and require more data than is typically available. This paper proposes Attention Enhanced Convolution Net (AECNet), a hybrid attention-based CNN designed to enhance feature learning by focusing on relevant regions of skin cancer images, thereby increasing classification accuracy. To address data scarcity, we employed an Auxiliary Classifier Generative Adversarial Network (ACGAN) to generate synthetic training images, improving the model's generalization capability. We also applied Grad-CAM, an Explainable AI (XAI) technique, to provide visual explanations of the model's decisions, making it more interpretable and clinically relevant. Findings demonstrate that augmenting the training set with synthetic images significantly improves classification accuracy, precision, recall, and F1-score, with the model achieving a test accuracy of 95.63%. The proposed method not only enhances the precision of skin cancer diagnosis but also provides a more transparent and comprehensible framework for practical applications.
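To illustrate the general idea behind attention-enhanced feature learning, the following is a minimal NumPy sketch of a channel-attention block in the squeeze-and-excitation style: feature-map channels are reweighted by learned scalars so the network can emphasize diagnostically relevant responses. All shapes, weights, and the `channel_attention` function here are illustrative assumptions; the abstract does not specify AECNet's actual attention mechanism.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feature_maps, w1, w2):
    """Reweight each channel of a (C, H, W) feature-map tensor.

    1. Squeeze: global average pool each channel to one scalar.
    2. Excite: a small two-layer bottleneck maps those scalars to
       per-channel weights in (0, 1).
    3. Scale: multiply each channel by its weight.
    """
    c = feature_maps.shape[0]
    squeezed = feature_maps.reshape(c, -1).mean(axis=1)   # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)               # ReLU bottleneck
    weights = sigmoid(w2 @ hidden)                        # (C,), each in (0, 1)
    return feature_maps * weights[:, None, None]

# Toy example with random maps and weights (purely illustrative).
rng = np.random.default_rng(0)
fmaps = rng.standard_normal((8, 4, 4))        # 8 channels, 4x4 spatial grid
w1 = rng.standard_normal((2, 8)) * 0.1        # bottleneck down to 2 units
w2 = rng.standard_normal((8, 2)) * 0.1        # expand back to 8 channels
out = channel_attention(fmaps, w1, w2)
print(out.shape)                               # same shape as the input
```

Each output channel is the corresponding input channel multiplied by a single learned scalar, which is what lets attention suppress background noise while preserving spatial detail in the regions the classifier relies on.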