Dynamic load balancing in cloud computing using predictive graph networks and adaptive neural scheduling


Bibliographic Details
Main Authors: K. Rajammal, M. Chinnadurai
Format: Article
Language:English
Published: Nature Portfolio 2025-07-01
Series:Scientific Reports
Subjects:
Online Access:https://doi.org/10.1038/s41598-025-97494-2
Description
Summary: Abstract Load balancing is one of the significant challenges in cloud environments due to the heterogeneity and dynamic nature of resource states and workloads. Traditional load balancing procedures struggle to adapt to real-time variations, which leads to inefficient resource utilization and increased response times. To overcome these issues, a novel approach is presented in this research work, utilizing Spiking Neural Networks (SNNs) for adaptive decision-making and Temporal Graph Neural Networks (TGNNs) for dynamic resource state modeling. The proposed SNN model identifies short-term workload fluctuations and long-term trends, whereas the TGNN represents the cloud environment as a dynamic graph to predict future resource availability. Additionally, reinforcement learning is incorporated to optimize SNN decisions based on feedback from the TGNN's state predictions. Experimental evaluations of the proposed model under diverse workload scenarios demonstrate significant improvements in throughput, energy efficiency, makespan, and response time. Comparative analyses with existing optimization algorithms further demonstrate the proposed model's ability to manage loads in cloud computing. Compared to existing methods, the proposed model achieves 20% higher throughput, a 35% lower makespan, a 40% lower response time, and 30–40% lower energy consumption.
ISSN:2045-2322