A contrastive learning framework with dual gates and noise awareness for temporal knowledge graph reasoning

Bibliographic Details
Main Authors: Siling Feng, Bolin Chen, Qian Liu, Mengxing Huang
Format: Article
Language: English
Published: Nature Portfolio 2025-05-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-00314-w
Description
Summary: Temporal knowledge graph reasoning (TKGR) has attracted widespread attention due to its ability to handle dynamic temporal features. However, existing methods face three major challenges: (1) the difficulty of capturing long-distance dependencies in information-sparse environments; (2) interference from noise; (3) the complexity of modeling temporal relationships. These challenges seriously impact the accuracy and robustness of reasoning. To address them, we propose a framework based on Dual-gate and Noise-aware Contrastive Learning (DNCL) to improve the performance of TKGR. The framework consists of three core modules: (1) a multi-dimensional gated update module, which flexibly selects key information and suppresses redundant information through a dual-gate mechanism, thereby alleviating the long-distance dependency problem; (2) a noise-aware adversarial modeling module, which improves robustness and reduces the impact of noise through adversarial training; (3) a multi-layer embedding contrastive learning module, which enhances representation ability through intra-layer and inter-layer contrastive learning strategies to better capture latent relationships along the temporal dimension. Experimental results on four benchmark datasets show that DNCL outperforms current methods; on the ICEWS14, ICEWS05-15, and ICEWS18 datasets, Hit@1 improves by 6.91%, 4.31%, and 5.30%, respectively.
ISSN: 2045-2322
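
For readers who want a concrete picture of the dual-gate mechanism the abstract describes, below is a minimal PyTorch sketch of a recurrent update with separate selection and suppression gates. The class name DualGateUpdate, the gate wiring, and the toy dimensions are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class DualGateUpdate(nn.Module):
        # Hypothetical dual-gate recurrent update: a selection gate picks out
        # key information from the current snapshot, while a suppression gate
        # damps redundant parts of the carried-over history. Names and wiring
        # are assumptions for illustration, not the authors' exact design.
        def __init__(self, dim):
            super().__init__()
            self.select_gate = nn.Linear(2 * dim, dim)
            self.suppress_gate = nn.Linear(2 * dim, dim)
            self.candidate = nn.Linear(2 * dim, dim)

        def forward(self, h_prev, x_t):
            z = torch.cat([h_prev, x_t], dim=-1)
            s = torch.sigmoid(self.select_gate(z))    # how much new information to take in
            r = torch.sigmoid(self.suppress_gate(z))  # how much history to retain
            h_tilde = torch.tanh(self.candidate(torch.cat([r * h_prev, x_t], dim=-1)))
            return (1.0 - s) * h_prev + s * h_tilde   # blend old state and candidate

    # Toy usage: evolve a batch of entity embeddings over one timestamped snapshot.
    cell = DualGateUpdate(dim=32)
    h = torch.zeros(4, 32)    # previous entity states
    x = torch.randn(4, 32)    # features aggregated from the current KG snapshot
    print(cell(h, x).shape)   # torch.Size([4, 32])

Under this reading, suppressing redundant history (the r gate) while selectively admitting new evidence (the s gate) is what lets relevant signals persist across many time steps, which is how a dual-gate design can alleviate the long-distance dependency problem the abstract highlights.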