Improving Sequential Recommendations with Token-Level LLM Representation
Dingzhou Wang

and 5 more

November 12, 2025
Sequential recommendation systems often rely on ID-based embeddings, which are efficient but lack semantic richness and generalize poorly. Large language models (LLMs) capture rich contextual information, yet their use in recommendation remains limited. We propose a model that initializes recommendation sequences with token-based LLM representations, transferring linguistic knowledge into item representations. Unlike traditional ID embeddings, our method applies subword and contextual encoding to preserve semantic detail across diverse items. On benchmarks such as Amazon-Books and MovieLens-1M, our approach achieves higher accuracy with less memory and shorter training time than SASRec, BERT4Rec, and GPT-based recommenders. Ablation studies further show faster convergence and reduced overfitting. These results demonstrate that token-level LLM initialization is an efficient, cost-effective paradigm for advancing sequential recommendation.
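The core idea of the abstract — seeding a recommender's item embedding table from an LLM's token representations rather than from random ID vectors — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the toy vocabulary, the random matrix standing in for a pretrained LLM's input embeddings, and mean-pooling over title tokens are all assumptions made to keep the example self-contained.

```python
import numpy as np

# Hypothetical stand-ins: in a real system these would be the LLM's subword
# tokenizer and its pretrained input-embedding matrix. Here we use a toy
# whitespace vocabulary and a fixed random matrix so the sketch runs on its own.
VOCAB = {"the": 0, "lord": 1, "of": 2, "rings": 3, "star": 4, "wars": 5}
rng = np.random.default_rng(0)
llm_token_embeddings = rng.normal(size=(len(VOCAB), 8))  # (vocab_size, d_model)

def tokenize(title: str) -> list[int]:
    """Toy whitespace tokenizer; a real system would use the LLM's subword tokenizer."""
    return [VOCAB[w] for w in title.lower().split() if w in VOCAB]

def init_item_embedding(title: str) -> np.ndarray:
    """Initialize an item vector as the mean of the LLM token embeddings of its title."""
    ids = tokenize(title)
    return llm_token_embeddings[ids].mean(axis=0)

# Build the item embedding table from item metadata (titles). In a sequential
# recommender this table would replace the random ID-embedding initialization
# and then be fine-tuned end to end.
item_table = np.stack([
    init_item_embedding("The Lord of the Rings"),
    init_item_embedding("Star Wars"),
])
```

Because semantically related titles share subword tokens, items start out close in embedding space instead of at random positions, which is consistent with the faster convergence the abstract reports.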

