As e-commerce platforms have grown exponentially, the volume of multilingual customer reviews has surged, making sentiment analysis an invaluable tool for gauging consumer sentiment, enhancing marketing strategies, and improving customer experience. Nevertheless, sentiment classification in multilingual reviews remains difficult, owing to language variability, sentiment ambiguity, hierarchical word dependencies, and class imbalance, all of which can skew traditional models. To address these challenges, this paper introduces a T5-CapsNet meta-ensemble model that combines the T5 transformer for contextual feature extraction with Capsule Networks (CapsNet) for hierarchical sentiment learning. The model is further enhanced by a GAN-based data augmentation technique that adds synthetic minority-class reviews to the dataset, correcting class imbalance and promoting classification fairness. As the meta-ensemble fusion strategy, weighted voting and stacking ensemble learning are used to improve sentiment prediction by exploiting the complementary strengths of T5 and CapsNet. Experimental evaluations on the Multilingual Amazon Reviews Corpus (MARC) confirm that the proposed model surpasses leading sentiment classifiers, reaching an accuracy of 97.56% and an F1-score of 95.5%. These results show that the hybrid deep learning approach captures complex sentiment structures effectively and substantially benefits multilingual e-commerce sentiment analysis. The findings provide a foundation for more advanced sentiment classification models that can improve customer sentiment analysis, automated feedback systems, and decision-making in global e-commerce ecosystems.
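To illustrate the fusion strategy, here is a minimal sketch of the weighted soft-voting step, assuming each base model outputs per-class probabilities; the function `weighted_vote`, the weights, and the probability values are illustrative placeholders, not the tuned values from the paper.

```python
import numpy as np

def weighted_vote(p_t5, p_caps, w_t5=0.6, w_caps=0.4):
    """Weighted soft voting: fuse (n_samples, n_classes) probability
    matrices from the T5 head and the CapsNet, then take the argmax."""
    fused = w_t5 * p_t5 + w_caps * p_caps
    return fused.argmax(axis=1)

# Toy probabilities for two reviews over three sentiment classes
p_t5 = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
p_caps = np.array([[0.5, 0.4, 0.1], [0.2, 0.2, 0.6]])
print(weighted_vote(p_t5, p_caps))  # -> [0 2]
```

The stacking half of the meta-ensemble would instead train a meta-classifier on these base-model probabilities rather than fixing the weights by hand.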
Early identification of Autism Spectrum Disorder (ASD) is crucial for facilitating prompt therapies that can markedly enhance quality of life. This work presents an innovative EEG-based Brain-Computer Interface (BCI) system aimed at improving the accuracy and reliability of ASD classification. Utilizing the BCIAUT-P300 dataset, comprising EEG recordings from 15 participants across 105 sessions, we established a novel multimodal framework. The system incorporates a Vision Transformer (ViT) for spatial feature extraction and Long Short-Term Memory (LSTM) networks for temporal analysis, merging the advantages of both architectures. The ViT-LSTM model attained a validation accuracy of 94.75%, while the EfficientNet-LSTM model performed better, with a validation accuracy of 95.61%. Statistical analyses (P = 0.0288, T = 5.76 for the EfficientNet-based design) confirmed the reliability and robustness of these findings. ViT captures global spatial relationships through self-attention, whereas EfficientNet improves spatial representation through pretrained feature extraction. To enhance usability in clinical settings, Explainable AI (XAI) methodologies, particularly Local Interpretable Model-Agnostic Explanations (LIME), were applied. LIME offers transparent, feature-specific insights into model predictions, enabling physicians to understand and trust the decision-making process. By merging state-of-the-art accuracy with interpretability, this system sets a new standard for AI-driven brain-computer interfaces and provides scalable, practical solutions for ASD diagnosis and treatment.
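As a rough illustration of the EfficientNet-LSTM design, the PyTorch sketch below extracts spatial features from each EEG window with an EfficientNet backbone and models the window sequence with an LSTM; the assumption that EEG windows arrive as 3-channel image-like tensors (e.g. spectrograms), the class name `EffNetLSTM`, and all dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class EffNetLSTM(nn.Module):
    """Spatial features per EEG window (EfficientNet) -> temporal LSTM -> class logits."""
    def __init__(self, n_classes=2, hidden=128):
        super().__init__()
        self.features = efficientnet_b0(weights=None).features  # conv stages only
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(input_size=1280, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                            # x: (batch, time, 3, H, W)
        b, t = x.shape[:2]
        f = self.features(x.flatten(0, 1))           # (b*t, 1280, h, w)
        f = self.pool(f).flatten(1).view(b, t, -1)   # (b, t, 1280)
        out, _ = self.lstm(f)                        # temporal modelling
        return self.head(out[:, -1])                 # classify from the last step

x = torch.randn(2, 8, 3, 224, 224)  # 2 sessions, 8 windows rendered as images
print(EffNetLSTM()(x).shape)        # torch.Size([2, 2])
```

A ViT-LSTM variant would swap the convolutional backbone for a ViT encoder while keeping the same sequence-level LSTM head.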
As a staple food for over half of the world's population, rice demands well-defined classification techniques to improve agricultural yields, supply chains, and food safety. To meet these needs, this work presents an Attention-Based Hybrid Model that accurately and efficiently classifies Bangladeshi rice varieties. The research captures complex variations in shape, texture, and color across 20 rice varieties using a comprehensive dataset of 27,000 high-resolution images that reflect real-world agricultural conditions. The core innovation is an Attention-Based CNN built on the Convolutional Block Attention Module (CBAM), which highlights and enhances spatial and channel-wise features, allowing the model to distinguish morphologically similar rice varieties with high accuracy. The proposed Attention-Based CNN achieved an accuracy of 91.92%, improving both generalization and robustness across different testing conditions. Moreover, extending the framework, CNN feature extraction combined with a KNN classifier achieved the top accuracy of 99.35%, demonstrating how modern feature extraction and classical classification algorithms complement each other. This combined approach outperforms Random Forest and Support Vector Classifier baselines by handling the fine-grained features and scaling issues that typically hamper them. Beyond these results, the model advances the current paradigm of automated agriculture, offering a robust, standardized, and flexible solution for rice variety identification. In this way, the study bridges technological capability and farmers' practical requirements, furthering sustainability, supporting precision agriculture, and addressing the growing global demand for food quality and security.
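To make the attention mechanism concrete, here is a minimal PyTorch sketch of a CBAM block (channel attention followed by spatial attention); the reduction ratio and spatial kernel size are common defaults assumed for illustration, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM: channel attention, then spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(               # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                        # x: (batch, channels, H, W)
        b, c = x.shape[:2]
        # Channel attention from average- and max-pooled descriptors
        gate = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * gate.view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps
        maps = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(maps))

f = torch.randn(4, 64, 32, 32)                   # a toy feature map
print(CBAM(64)(f).shape)                         # torch.Size([4, 64, 32, 32])
```

Inserted inside a CNN, such a block reweights feature maps so that discriminative grain shapes and textures dominate before classification.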