Shubhanshi Singhal

Recommender systems (RSs) aim to streamline navigation through vast product repositories, with personalization as a critical development. However, modeling user preferences remains challenging due to their dynamic nature and complexity. Cross-domain learning (CDL) has emerged as a promising approach to enhance personalization by leveraging inter-domain knowledge. Despite advancements, modeling inter-domain knowledge is difficult due to the semantic heterogeneity of the participating domains. This paper presents a cross-domain recommendation (CDR) framework and evaluates its effectiveness on 'Book-Movie' item pairs, leveraging books as auxiliary data to enhance movie recommendations. Books offer richer contextual insights into users' cognitive states, thereby addressing legacy challenges such as data sparsity and the cold-start problem. Using pre-trained models such as RoBERTa and DistilBERT, we propose a novel approach to inter-domain knowledge modeling, leveraging the capability of pre-trained language models to effectively capture inter-domain knowledge and transfer it to achieve a higher level of personalization in the target domain. RoBERTa, fine-tuned on book data, effectively captures contextual relationships and bridges semantic gaps between domains, whereas DistilBERT captures deep semantic relationships between the textual content of the book and movie domains. Evaluation metrics include LRAP, cross-entropy loss, and precision. RoBERTa outperformed DistilBERT, achieving an LRAP of 0.908, a cross-entropy loss of 0.269, and a precision of 0.96. Similarity measures, including domain overlap (0.992), domain generalization (0.7895), pairwise difference on Genre (1.418), and Exact Match Ratio (0.60), highlight strong alignment between book and movie genres. This work emphasizes a multi-label classification strategy and a novel algorithm for cross-domain knowledge modeling, offering a robust solution to CDR challenges with a favorable trade-off across metrics.
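The evaluation measures named above (LRAP, Exact Match Ratio, precision) are standard multi-label classification metrics. As a minimal sketch of how such scores can be computed with scikit-learn, the snippet below uses made-up toy genre labels and predicted probabilities, not the paper's actual book/movie data or model outputs:

```python
import numpy as np
from sklearn.metrics import (
    label_ranking_average_precision_score,
    accuracy_score,
    precision_score,
)

# Hypothetical multi-label data: 4 items, 3 genre labels each.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])

# Hypothetical per-label probabilities from a fine-tuned encoder.
y_score = np.array([[0.9, 0.2, 0.8],
                    [0.1, 0.7, 0.3],
                    [0.8, 0.3, 0.6],
                    [0.3, 0.4, 0.9]])

# Threshold probabilities at 0.5 to obtain hard label predictions.
y_pred = (y_score >= 0.5).astype(int)

# LRAP scores how highly the true labels rank in y_score.
lrap = label_ranking_average_precision_score(y_true, y_score)

# For multi-label targets, accuracy_score is the Exact Match Ratio:
# an item counts only if every label is predicted correctly.
emr = accuracy_score(y_true, y_pred)

# Micro-averaged precision pools true/false positives over all labels.
prec = precision_score(y_true, y_pred, average="micro")

print(f"LRAP={lrap:.4f}  ExactMatchRatio={emr:.2f}  Precision={prec:.4f}")
```

Cross-entropy loss during fine-tuning would be computed analogously on the raw probabilities (e.g. with a per-label binary cross-entropy), while the thresholded predictions feed the set-based metrics shown here.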
