Enhancing LLMs for Sequential Recommendation With Reversed User History and User Embeddings
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11050368/ |
| Summary: | Inspired by the successful applications of Large Language Models (LLMs) in various fields, LLMs for sequential recommendation have also become an active research area. Recent studies have focused on leveraging the powerful capabilities of LLMs to enhance their alignment with sequential recommendation. While LLMs excel at sequence modeling and have inspired adaptations for sequential recommendation tasks, their potential to fully exploit the sequential nature of user behavior remains underexplored. In this paper, we propose two key strategies: Reversed User History Generation (RUHG) and Recency-based User Embedding. The first method, RUHG, forces the LLM to generate the next item and then regenerate the user history in reverse order. Owing to the autoregressive nature of LLMs, this method allows for a better understanding of the next item and the user history. Our second method, Recency-based User Embedding, captures the dynamics of user preferences by emphasizing recent interactions. This user embedding provides the LLM with a global view of the user history, rather than relying only on individual items. Moreover, we leverage Curriculum Learning (CL) for effective training, and provide insights into defining easy and hard tasks for LLMs within CL. Our methods effectively bridge the modality gap between LLMs and sequential recommendation while maximizing the capabilities of LLMs. Extensive experiments demonstrate the effectiveness of the proposed methods and show performance improvements across three benchmark datasets. Our code is available at https://github.com/Yeo-Jun-Choi/llmfor |
| ISSN: | 2169-3536 |
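
The summary describes RUHG as training the LLM to emit the next item first and then regenerate the user history in reverse order. The following is a minimal sketch of how such a training pair might be constructed; the function name, prompt wording, and separator are hypothetical illustrations, not the paper's actual implementation (see the linked repository for that).

```python
# Hypothetical sketch of Reversed User History Generation (RUHG) target
# construction: the autoregressive model is asked to produce the next item
# first, then the interaction history from newest to oldest.

def build_ruhg_example(history, next_item, sep=" -> "):
    """Build a (prompt, target) pair for one user.

    history   : list of item titles/IDs, ordered oldest first
    next_item : the ground-truth next item
    """
    prompt = "User history: " + sep.join(history) + "\nNext item and reversed history:"
    # Target: next item first, then the history reversed (newest -> oldest).
    target = sep.join([next_item] + list(reversed(history)))
    return prompt, target


if __name__ == "__main__":
    prompt, target = build_ruhg_example(["Inception", "Interstellar", "Dunkirk"], "Tenet")
    print(prompt)
    print(target)  # Tenet -> Dunkirk -> Interstellar -> Inception
```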
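The summary also describes a recency-based user embedding that emphasizes recent interactions to give the LLM a global view of the user history. Below is a minimal sketch under an assumed exponential-decay weighting; the record does not specify the paper's actual weighting scheme, so `decay` and the function name are illustrative assumptions.

```python
# Hypothetical sketch of a recency-based user embedding: a weighted average
# of item embeddings in which more recent interactions receive larger
# weights. The exponential-decay scheme here is an assumption, not the
# paper's stated formulation.

import numpy as np

def recency_user_embedding(item_embs: np.ndarray, decay: float = 0.8) -> np.ndarray:
    """item_embs: (n_items, dim) array, ordered oldest -> newest.
    Returns a (dim,) user embedding emphasizing recent interactions."""
    n = item_embs.shape[0]
    # w_t = decay**(n-1-t): the newest item gets weight 1, older items decay.
    weights = decay ** np.arange(n - 1, -1, -1)
    weights = weights / weights.sum()  # normalize to a convex combination
    return weights @ item_embs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    embs = rng.normal(size=(5, 8))  # 5 interactions, 8-dim item embeddings
    print(recency_user_embedding(embs).shape)  # (8,)
```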