A Latency-Aware and Resource-Efficient Content Caching Scheme for Content-Centric Networks


Bibliographic Details
Main Authors: Yasar Khan, Rao Naveed Bin Rais, Osman Khalid, Imran Ali Khan
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11115027/
Description
Summary: The proliferation of smart devices and bandwidth-intensive applications challenges traditional IP networks, driving interest in Content-Centric Networking (CCN). CCN decouples content from location using persistent names and leverages in-network caching to improve delivery efficiency. However, existing caching strategies often optimize isolated metrics such as hit ratio, leading to suboptimal latency or resource usage; employ eviction policies such as Least Recently Used (LRU) and Least Frequently Used (LFU) that are ill-suited to dynamic content; or create excessive redundancy. They struggle to balance conflicting factors such as content popularity, freshness, latency, diversity, and resource constraints. To address this, we propose Latency-Aware and Resource-efficient Content-caching (LARC), a multi-objective heuristic approach that improves latency while respecting resource constraints. LARC uses a lightweight, self-tuning heuristic built around a Content Access Score (CAS) that dynamically integrates request frequency, content freshness (via adaptive decay), and hop distance. This score guides both an adaptive cache placement strategy, sensitive to real-time network conditions, and a novel freshness-aware eviction mechanism. By improving latency while accounting for resource efficiency, LARC implicitly enhances hit ratio, content freshness, and diversity. Extensive simulations on large-scale topologies showed that LARC achieves up to 9% improvement in hit ratio, up to 3% reduction in average latency, up to 3% enhancement in path stretch, and up to 4% improvement in link load compared to state-of-the-art caching schemes.
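The record does not give the exact CAS formula. Purely as an illustrative sketch, assuming a weighted combination of a saturating request-frequency term, an exponentially decayed freshness term, and a normalized hop distance (all weights, the decay rate, and the function name below are hypothetical, not parameters from the paper), a CAS-like score could be written as:

```python
import math

def content_access_score(freq, age, hops,
                         w_freq=0.5, w_fresh=0.3, w_hop=0.2,
                         decay=0.1, max_hops=10):
    """Hypothetical CAS-like score in [0, 1]; higher = more cache-worthy.

    freq -- request count observed for this content (>= 0)
    age  -- seconds since the content was published or refreshed
    hops -- hop distance the content traveled from its source

    All weights and the decay rate are illustrative assumptions,
    not the values used by LARC.
    """
    freshness = math.exp(-decay * age)           # decayed-freshness proxy
    popularity = freq / (1.0 + freq)             # saturating frequency term
    distance = min(hops, max_hops) / max_hops    # farther content saves more latency
    return w_freq * popularity + w_fresh * freshness + w_hop * distance

# Frequently requested, fresh content fetched from far away scores highest,
# so it is the best candidate for caching (and the worst for eviction).
hot = content_access_score(freq=50, age=5, hops=8)
cold = content_access_score(freq=1, age=600, hops=1)
```

Under this sketch, placement would favor caching high-CAS content at nodes close to requesters, while the freshness-aware eviction mechanism would evict the lowest-scoring (stale, unpopular, nearby-sourced) entries first.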
ISSN: 2169-3536