MB-AGCL: multi-behavior adaptive graph contrastive learning for recommendation


Bibliographic Details
Main Authors: Xiaowen Lv, Yiwei Zhao, Zhihu Zhou, Yifeng Zhang, Yourong Chen
Format: Article
Language: English
Published: Springer, 2025-04-01
Series: Complex & Intelligent Systems
Online Access: https://doi.org/10.1007/s40747-025-01880-2
Description
Summary: Graph Convolutional Networks (GCNs) have achieved remarkable success in recommendation systems by leveraging higher-order neighborhoods. In recent years, multi-behavior recommendation has addressed the challenges of data sparsity and cold-start problems to some extent. However, the noise that multi-behavior tasks introduce into the user-item graph exacerbates both the noise from a small number of highly active users and the popularity bias toward popular items. To tackle these challenges, graph augmentation has emerged as a promising approach in recommendation systems. Existing augmentation methods, however, may generate suboptimal graph structures, and maximizing correspondence between views may capture information unrelated to the recommendation task. To address these issues, we propose a novel approach called the Multi-Behavior Adaptive Graph Contrastive Learning Model (MB-AGCL) for recommendation. Our approach integrates auxiliary behaviors to compensate for data sparsity and uses adaptive learning to decide whether to drop edges or nodes, yielding an optimized graph structure that reduces the impact of noise. We then train the original and generated graphs with supervised tasks. Furthermore, we propose an efficient adaptive graph augmentation method that integrates graph augmentation with downstream tasks to reduce the impact of popularity bias. Finally, we jointly optimize these two tasks. Through extensive experiments on public datasets, we validate the effectiveness of our recommendation model.
ISSN: 2199-4536; 2198-6053
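The abstract combines two standard ingredients: a learned (adaptive) augmentation that drops edges according to learned scores, and a contrastive objective between the original and augmented views. The following NumPy sketch illustrates these two pieces in their generic form; the function names, the sigmoid parameterization of drop probabilities, and the InfoNCE loss are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def adaptive_edge_drop(edges, scores, rng):
    """Keep each edge with probability sigmoid(score).

    Edges with low learned scores are more likely to be dropped,
    a generic stand-in for the adaptive augmentation the abstract
    describes (the paper's scorer is learned end-to-end).
    """
    keep_prob = 1.0 / (1.0 + np.exp(-np.asarray(scores, dtype=float)))
    mask = rng.random(len(edges)) < keep_prob
    return [e for e, keep in zip(edges, mask) if keep]

def info_nce(z1, z2, tau=0.2):
    """InfoNCE contrastive loss between two views' node embeddings.

    Row i of z1 and row i of z2 are treated as a positive pair;
    all other rows serve as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                              # pairwise similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # positives on the diagonal
```

In a joint-optimization setup like the one the abstract sketches, a loss of this shape would be added to the supervised recommendation loss, so the augmentation is trained together with the downstream task rather than fixed in advance.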