Optimizing proactive content caching with mobility-aware deep reinforcement & asynchronous federated learning in VEC


Bibliographic Details
Main Authors: Afsana Kabir Sinthia, Nosin Ibna Mahbub, Eui-Nam Huh
Format: Article
Language: English
Published: Elsevier 2025-04-01
Series: ICT Express
Online Access: http://www.sciencedirect.com/science/article/pii/S2405959524001449
Description
Summary: Edge caching in the Internet of Vehicles (IoV) can reduce backhaul strain and content access delay. However, because vehicle requests change constantly, offloading applications to edge servers is crucial for efficiently anticipating and caching popular content. Moreover, conventional data-sharing techniques are inadequate for this task because they cannot preserve the privacy of vehicular users (VUs). To overcome these issues, we propose PCAD, a cooperative proactive content caching system that combines asynchronous federated learning with deep reinforcement learning, leveraging the strengths of Dueling Deep Q-Networks and Prioritized Experience Replay in vehicular edge computing. PCAD lowers content access latency by prefetching popular content in advance and caching it on edge nodes, and it shortens training by not requiring every vehicle to finish local training and upload its model before the global model is updated. We also investigate intelligent caching decisions based on content prediction. Comprehensive experimental evaluations show that our approach significantly outperforms existing benchmark caching techniques: at a cache capacity of 400 MB, it improves the cache hit rate over DDQN, c-ϵ-greedy, and PCAD without DRL by approximately 4.25%, 11.23%, and 25.82%, respectively.
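The asynchronous federated update described in the summary — the global model is refreshed as soon as any one vehicle reports its local model, rather than waiting for all vehicles — can be sketched as follows. This is a minimal illustration only: the staleness-discounted mixing rule, function names, and toy model vectors below are assumptions for clarity, not the paper's exact PCAD formulation.

```python
# Minimal sketch of asynchronous federated aggregation (assumption:
# staleness-discounted mixing; PCAD's actual update rule may differ).

def mix(global_model, local_model, alpha):
    """Blend a local model into the global model with weight alpha."""
    return [(1 - alpha) * g + alpha * l
            for g, l in zip(global_model, local_model)]

def async_update(global_model, local_model, local_round, global_round,
                 base_alpha=0.5):
    """Apply one vehicle's update immediately, discounting stale models.

    staleness = number of global rounds that elapsed since the vehicle
    downloaded the model it trained against.
    """
    staleness = global_round - local_round
    alpha = base_alpha / (1 + staleness)  # older updates count less
    return mix(global_model, local_model, alpha)

# Example: two vehicles report back at different times, so no vehicle
# ever blocks the global update.
global_model = [0.0, 0.0]
# Vehicle A trained on the round-0 model and arrives during round 0.
global_model = async_update(global_model, [1.0, 1.0],
                            local_round=0, global_round=0)
# Vehicle B also trained on the round-0 model but arrives one round late,
# so its contribution is down-weighted.
global_model = async_update(global_model, [2.0, 2.0],
                            local_round=0, global_round=1)
```

The key design point, as opposed to synchronous federated averaging, is that the server never waits for a straggler: each arriving local model is merged immediately, with its influence reduced in proportion to how stale it is.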
ISSN:2405-9595