Chafik Boulealam

This paper introduces Feedback-Based Validation Learning (FBVL), a mechanism that reimagines the role of validation datasets in deep learning frameworks. Unlike conventional methods, which use validation datasets only for performance evaluation after training, FBVL integrates them into the training process itself, employing real-time feedback to optimize the model's weight adjustments and thereby improve prediction accuracy and overall model performance. Importantly, FBVL preserves the integrity of the validation process: only prediction outcomes on the validation dataset are used to guide training adjustments, and the dataset itself is never accessed directly during training. Our empirical study on the Iris dataset highlights FBVL's potential. The Iris dataset, comprising 150 samples from three species of Iris flowers, each characterized by four features, served as an ideal testbed for demonstrating FBVL's effectiveness. FBVL yielded substantial improvements, surpassing the best previously reported accuracy by approximately 7.14% and reducing loss by approximately 49.18%. Applied to the Multimodal EmotionLines Dataset (MELD), FBVL demonstrated its applicability across datasets and domains, achieving a test-set accuracy of 70.08%, approximately 3.12% above the best reported accuracy, and a test-set micro F1-score of 70.07%, higher than the best reported micro F1-score of 67.59%. These results underscore FBVL's ability to optimize performance on established datasets and to minimize loss, positioning it as a tool that could redefine neural network training paradigms and improve the efficiency and accuracy of deep learning and artificial intelligence research.
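The abstract does not specify FBVL's algorithm, but the core idea it describes (validation-set prediction outcomes steering training adjustments, with gradients never computed on the validation data) can be sketched as follows. This is a minimal, hypothetical illustration on a synthetic logistic-regression problem, not the paper's implementation: here the feedback signal is the validation loss, and the "training adjustment" it guides is the learning rate.

```python
# Hypothetical sketch of the FBVL idea described in the abstract: validation
# *predictions* feed back into training (here, by adapting the learning rate),
# while gradient updates use only the training split. Not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic binary-classification problem.
X = rng.normal(size=(120, 4))
w_true = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ w_true + rng.normal(scale=0.3, size=120) > 0).astype(float)

X_tr, y_tr = X[:90], y[:90]     # training split: gradients come from here
X_val, y_val = X[90:], y[90:]   # validation split: only predictions are used


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def log_loss(y_true, p):
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))


w = np.zeros(4)
lr = 0.5
prev_val_loss = np.inf

for epoch in range(200):
    # Standard gradient step, computed on the training split only.
    p_tr = sigmoid(X_tr @ w)
    grad = X_tr.T @ (p_tr - y_tr) / len(y_tr)
    w -= lr * grad

    # Feedback signal: prediction outcomes on the validation set,
    # never validation gradients.
    val_loss = log_loss(y_val, sigmoid(X_val @ w))
    if val_loss > prev_val_loss:
        lr *= 0.5  # feedback says the last adjustment overshot
    prev_val_loss = val_loss

val_acc = float(np.mean((sigmoid(X_val @ w) > 0.5) == y_val))
print(f"validation accuracy: {val_acc:.2f}, final lr: {lr:.3g}")
```

The key property the abstract emphasizes is preserved in the sketch: the weight update itself touches only the training split, so the validation set influences training exclusively through its prediction outcomes.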