Adaptive Frequency Domain Data Augmentation for Sequential Recommendation

Bibliographic Details
Main Authors: Zhibin Yang, Jiwei Qin, Donghao Zhang, Jie Ma, Peichen Ji
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10753583/
Description
Summary: Sequential recommendation aims to predict users’ future interests or needs by analyzing their behavioral data over time. Most existing approaches model user preferences in the time domain, ignoring the impact of different frequency patterns (periodic features) on users’ behaviors. These frequency patterns are often intertwined in the time domain and are difficult to distinguish, yet few studies have explored how to extract frequency-domain information from user behavior sequences. In addition, according to the F-principle, deep learning models attend more to low-frequency information, which can lead to poor performance on high-frequency tasks. To alleviate these problems, we propose a new self-supervised learning framework (AFSRec) that extracts frequency features from user behavioral data. Specifically, we devise a learnable Fourier layer and an information-preserving mixing module to adaptively learn users’ periodic features. In the mixing module, we use self-supervised signals from different frequency bands as mixed samples to accommodate events at different frequencies while alleviating the bias toward low-frequency information. Finally, we design a frequency-domain alignment loss that aligns different views of the same user and is jointly optimized with the recommendation loss. Extensive experiments on five benchmark datasets demonstrate the superior performance of the model compared with state-of-the-art baseline methods.
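To make the two core ideas in the abstract concrete, the sketch below shows one plausible reading of a "learnable Fourier layer" (a per-frequency, per-channel learnable filter applied to the item-embedding sequence) and a frequency-domain alignment loss between two views of the same user. This is a minimal illustration based only on the abstract, not the authors' implementation; all class and function names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.fft


class LearnableFourierLayer(nn.Module):
    """Hypothetical sketch: re-weight an embedding sequence in the frequency
    domain with learnable complex filters, then transform back."""

    def __init__(self, max_seq_len: int, hidden_dim: int):
        super().__init__()
        n_freq = max_seq_len // 2 + 1  # rFFT keeps this many frequency bins
        # One learnable complex weight per frequency bin and channel.
        self.filter = nn.Parameter(
            0.02 * torch.randn(n_freq, hidden_dim, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim), real-valued item embeddings
        spec = torch.fft.rfft(x, dim=1)       # time -> frequency domain
        spec = spec * self.filter             # adaptive frequency re-weighting
        return torch.fft.irfft(spec, n=x.size(1), dim=1)  # back to time domain


def frequency_alignment_loss(view_a: torch.Tensor, view_b: torch.Tensor) -> torch.Tensor:
    """Illustrative alignment objective: pull the spectra of two augmented
    views of the same user sequence together (assumed form, not the paper's)."""
    spec_a = torch.fft.rfft(view_a, dim=1)
    spec_b = torch.fft.rfft(view_b, dim=1)
    return (spec_a - spec_b).abs().pow(2).mean()
```

In such a setup the alignment loss would typically be added to the recommendation (next-item) loss with a weighting coefficient, matching the joint optimization described in the abstract; the mixing of self-supervised signals from different frequency bands is omitted here for brevity.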
ISSN: 2169-3536