Alzheimer’s disease (AD) is among the most severe neurodegenerative disorders worldwide, progressively impairing a patient’s cognitive abilities. Treating AD in its early stage can prolong the patient’s independence and considerably delay institutionalization. Accordingly, artificial intelligence-based early detection of AD and its stages has recently gained significant attention. This study therefore proposes an approach for detecting the stage of AD that comprises six steps: image acquisition, preprocessing, image augmentation, segmentation, feature extraction, and classification. First, input images are obtained from a publicly available standard dataset, the Alzheimer’s Disease Neuroimaging Initiative (ADNI), and curated as needed. Preprocessing then removes unwanted non-brain tissue and noise using a skull-stripping algorithm and an adaptive median filter. Subsequently, image augmentation is performed by rotation (90°, 180°, 270°) and cropping (right, left, bottom, corner, and top) to mitigate class imbalance. From the augmented images, gray matter, white matter, and cerebrospinal fluid are segmented using a residual squeeze-and-excitation U-Net architecture. Features are then extracted from three planes (axial, sagittal, and coronal) using an extended sign-magnitude-based local binary pattern method to improve accuracy. The features derived from each plane are fused, and AD classification is performed by a position attention-based dense capsule network model. The model is implemented in Python and its libraries, and achieves an accuracy of 98.40%, outperforming existing techniques.
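
The rotation and cropping augmentations described above can be sketched as follows. This is a minimal illustration using NumPy on a single 2D slice: the crop size (half the slice) and the interpretation of “corner” as the bottom-right region are assumptions, not the paper’s exact settings.

```python
import numpy as np


def augment(image):
    """Generate augmented variants of a 2D slice via rotation and cropping.

    Returns three rotations (90, 180, 270 degrees) followed by five crops
    (right, left, bottom, corner, top), as outlined in the abstract.
    Crop sizes are illustrative assumptions.
    """
    h, w = image.shape[:2]
    ch, cw = h // 2, w // 2  # assumed crop extent: half of each dimension

    rotations = [np.rot90(image, k) for k in (1, 2, 3)]  # 90, 180, 270 deg
    crops = [
        image[:, w - cw:],        # right
        image[:, :cw],            # left
        image[h - ch:, :],        # bottom
        image[h - ch:, w - cw:],  # corner (assumed bottom-right)
        image[:ch, :],            # top
    ]
    return rotations + crops


slice_2d = np.zeros((128, 128))
variants = augment(slice_2d)
print(len(variants))  # 8 augmented variants per input slice
```

In practice each variant would be added to the training set alongside the original slice, so that every diagnostic class reaches a comparable number of samples.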