Optimizing proactive content caching with mobility aware deep reinforcement & asynchronous federate learning in VEC
Edge caching in the Internet of Vehicles (IoV) can reduce backhaul strain and content access delay. However, because vehicle requests change constantly, offloading applications to edge servers is crucial for efficiently anticipating and caching popular content. Moreover, conventional data-sharing techniques are inadequate for this task because they cannot preserve the privacy of vehicular users (VUs). To overcome these issues, we propose PCAD, a cooperative proactive content caching system that combines asynchronous federated learning with deep reinforcement learning, leveraging the strengths of Dueling Deep Q-Networks and Prioritized Experience Replay in vehicular edge computing. PCAD lowers content access latency by prefetching popular content in advance and caching it on edge nodes, and by cutting the time spent waiting for every vehicle to finish training and upload its local model before the global model is updated. Additionally, we investigate intelligent caching decisions based on content prediction. Comprehensive experimental evaluations indicate that our proposed approach significantly outperforms existing benchmark caching techniques. More specifically, at a cache capacity of 400 MB, it improves the cache hit rate by approximately 4.25%, 11.23%, and 25.82% over DDQN, c-ϵ-greedy, and PCAD without DRL, respectively.
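The abstract names two DRL building blocks, Dueling Deep Q-Networks and Prioritized Experience Replay. As a rough illustration only (the function names and values below are hypothetical and not taken from the paper), the dueling decomposition Q(s, a) = V(s) + A(s, a) − mean A and PER's priority-proportional sampling can be sketched as:

```python
# Illustrative sketch only: a generic dueling-DQN value decomposition and
# prioritized-experience-replay sampling weights. Names and numbers are
# hypothetical; the paper's actual network and hyperparameters are not shown.

def dueling_q_values(state_value, advantages):
    """Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + a - mean_adv for a in advantages]

def per_sampling_probs(td_errors, alpha=0.6, eps=1e-6):
    """P(i) proportional to (|delta_i| + eps)**alpha, as in prioritized replay."""
    priorities = [(abs(d) + eps) ** alpha for d in td_errors]
    total = sum(priorities)
    return [p / total for p in priorities]

# Example: three candidate caching actions at an edge node.
q = dueling_q_values(2.0, [1.0, 0.0, -1.0])   # -> [3.0, 2.0, 1.0]
probs = per_sampling_probs([0.5, 0.1, 2.0])   # larger TD error -> sampled more often
```

Subtracting the mean advantage keeps the V/A split identifiable, and priority-proportional sampling is what lets PER replay high-error transitions more frequently.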
Saved in:
| Main Authors: | Afsana Kabir Sinthia, Nosin Ibna Mahbub, Eui-Nam Huh |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-04-01 |
| Series: | ICT Express |
| Subjects: | Deep reinforcement learning; Federated learning; Edge caching; Cooperative caching |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2405959524001449 |
| _version_ | 1850047222708699136 |
|---|---|
| author | Afsana Kabir Sinthia; Nosin Ibna Mahbub; Eui-Nam Huh |
| author_facet | Afsana Kabir Sinthia; Nosin Ibna Mahbub; Eui-Nam Huh |
| author_sort | Afsana Kabir Sinthia |
| collection | DOAJ |
| description | Edge caching in the Internet of Vehicles (IoV) can reduce backhaul strain and content access delay. However, because vehicle requests change constantly, offloading applications to edge servers is crucial for efficiently anticipating and caching popular content. Moreover, conventional data-sharing techniques are inadequate for this task because they cannot preserve the privacy of vehicular users (VUs). To overcome these issues, we propose PCAD, a cooperative proactive content caching system that combines asynchronous federated learning with deep reinforcement learning, leveraging the strengths of Dueling Deep Q-Networks and Prioritized Experience Replay in vehicular edge computing. PCAD lowers content access latency by prefetching popular content in advance and caching it on edge nodes, and by cutting the time spent waiting for every vehicle to finish training and upload its local model before the global model is updated. Additionally, we investigate intelligent caching decisions based on content prediction. Comprehensive experimental evaluations indicate that our proposed approach significantly outperforms existing benchmark caching techniques. More specifically, at a cache capacity of 400 MB, it improves the cache hit rate by approximately 4.25%, 11.23%, and 25.82% over DDQN, c-ϵ-greedy, and PCAD without DRL, respectively. |
| format | Article |
| id | doaj-art-4a70691f15df481a85600e284edebb5f |
| institution | DOAJ |
| issn | 2405-9595 |
| language | English |
| publishDate | 2025-04-01 |
| publisher | Elsevier |
| record_format | Article |
| series | ICT Express |
| spelling | doaj-art-4a70691f15df481a85600e284edebb5f (2025-08-20T02:54:15Z); eng; Elsevier; ICT Express; 2405-9595; 2025-04-01; Vol. 11, No. 2, pp. 293–298; doi:10.1016/j.icte.2024.11.006; Optimizing proactive content caching with mobility aware deep reinforcement & asynchronous federate learning in VEC |
| affiliations | Afsana Kabir Sinthia, Nosin Ibna Mahbub, Eui-Nam Huh (corresponding author): Department of Computer Science and Engineering, Kyung Hee University, Global Campus, Yongin-si 17104, Republic of Korea |
| title | Optimizing proactive content caching with mobility aware deep reinforcement & asynchronous federate learning in VEC |
| topic | Deep reinforcement learning; Federated learning; Edge caching; Cooperative caching |
| url | http://www.sciencedirect.com/science/article/pii/S2405959524001449 |