Dual intent view contrastive learning for knowledge aware recommender systems
Abstract Knowledge-aware recommendation systems often face challenges owing to sparse supervision signals and redundant entity relations, which can diminish the advantages of utilizing knowledge graphs for enhancing recommendation performance. To tackle these challenges, we propose a novel recommend...
Saved in:
Main Authors: Jianhua Guo, Zhixiang Yin, Shuyang Feng, Donglin Yao, Shaopeng Liu
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-86416-x
_version_ | 1832594750394335232 |
---|---|
author | Jianhua Guo Zhixiang Yin Shuyang Feng Donglin Yao Shaopeng Liu |
author_facet | Jianhua Guo Zhixiang Yin Shuyang Feng Donglin Yao Shaopeng Liu |
author_sort | Jianhua Guo |
collection | DOAJ |
description | Abstract Knowledge-aware recommendation systems often face challenges owing to sparse supervision signals and redundant entity relations, which can diminish the advantages of utilizing knowledge graphs for enhancing recommendation performance. To tackle these challenges, we propose a novel recommendation model named the Dual-Intent-View Contrastive Learning network (DIVCL), inspired by recent advances in contrastive and intent learning. DIVCL employs a dual-view representation learning approach using Graph Neural Networks (GNNs), consisting of two distinct views: a local view based on the user-item interaction graph and a global view based on the user-item-entity knowledge graph. To further enhance learning, a set of intents is integrated into each user-item interaction as a separate class of nodes, fulfilling three crucial roles in the GNN learning process: (1) providing fine-grained representations of user-item interaction features, (2) acting as evaluators for filtering relevant relations in the knowledge graph, and (3) participating in contrastive learning to strengthen the model’s ability to handle sparse signals and redundant relations. Experimental results on three benchmark datasets demonstrate that DIVCL outperforms state-of-the-art models. The implementation is available at: https://github.com/yzxx667/DIVCL . |
format | Article |
id | doaj-art-31608f114b2c4e41979dec86c3193d71 |
institution | Kabale University |
issn | 2045-2322 |
language | English |
publishDate | 2025-01-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Scientific Reports |
spelling | doaj-art-31608f114b2c4e41979dec86c3193d71 | 2025-01-19T12:22:44Z | eng | Nature Portfolio | Scientific Reports | 2045-2322 | 2025-01-01 | 15 | 1 | 1 | 13 | 10.1038/s41598-025-86416-x | Dual intent view contrastive learning for knowledge aware recommender systems | Jianhua Guo (School of Computer Science, Guangdong Polytechnic Normal University); Zhixiang Yin (School of Computer Science, Guangdong Polytechnic Normal University); Shuyang Feng (School of Computer Science, Guangdong Polytechnic Normal University); Donglin Yao (School of Education, Guangzhou University); Shaopeng Liu (School of Computer Science, Guangdong Polytechnic Normal University) | https://doi.org/10.1038/s41598-025-86416-x |
spellingShingle | Jianhua Guo Zhixiang Yin Shuyang Feng Donglin Yao Shaopeng Liu Dual intent view contrastive learning for knowledge aware recommender systems Scientific Reports |
title | Dual intent view contrastive learning for knowledge aware recommender systems |
title_full | Dual intent view contrastive learning for knowledge aware recommender systems |
title_fullStr | Dual intent view contrastive learning for knowledge aware recommender systems |
title_full_unstemmed | Dual intent view contrastive learning for knowledge aware recommender systems |
title_short | Dual intent view contrastive learning for knowledge aware recommender systems |
title_sort | dual intent view contrastive learning for knowledge aware recommender systems |
url | https://doi.org/10.1038/s41598-025-86416-x |
work_keys_str_mv | AT jianhuaguo dualintentviewcontrastivelearningforknowledgeawarerecommendersystems AT zhixiangyin dualintentviewcontrastivelearningforknowledgeawarerecommendersystems AT shuyangfeng dualintentviewcontrastivelearningforknowledgeawarerecommendersystems AT donglinyao dualintentviewcontrastivelearningforknowledgeawarerecommendersystems AT shaopengliu dualintentviewcontrastivelearningforknowledgeawarerecommendersystems |
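The abstract describes a contrastive objective between a local (user-item interaction graph) view and a global (knowledge graph) view. Such cross-view contrastive learning is commonly realized with an InfoNCE loss that pulls the two embeddings of the same node together and pushes embeddings of different nodes apart. The sketch below is not the DIVCL implementation (see the linked repository for that); it is a minimal, generic NumPy illustration in which `info_nce`, `tau`, and the toy embeddings are all illustrative assumptions.

```python
import numpy as np

def info_nce(local_emb, global_emb, tau=0.2):
    """InfoNCE contrastive loss between two views of the same nodes.

    For each node i, (local_emb[i], global_emb[i]) is the positive pair;
    every other cross-view pair in the batch serves as a negative.
    """
    # L2-normalize so dot products are cosine similarities
    l = local_emb / np.linalg.norm(local_emb, axis=1, keepdims=True)
    g = global_emb / np.linalg.norm(global_emb, axis=1, keepdims=True)
    sim = l @ g.T / tau                           # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # mean -log p(positive)

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
# Two slightly perturbed "views" of the same 8 nodes vs. an unrelated view
loss_aligned = info_nce(base + 0.01 * rng.normal(size=base.shape), base)
loss_random = info_nce(rng.normal(size=base.shape), base)
print(loss_aligned < loss_random)  # aligned views yield a lower loss
```

When the two views agree on which nodes are which, the diagonal of the similarity matrix dominates and the loss approaches zero; unrelated views give a loss near log(n). This is the mechanism the abstract invokes to make the model robust to sparse signals and redundant relations.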