Optimizing resource allocation in industrial IoT with federated machine learning and edge computing integration

Bibliographic Details
Main Authors: Ala'a R. Al-Shamasneh, Faten Khalid Karim, Yu Wang
Format: Article
Language: English
Published: Elsevier 2025-09-01
Series: Results in Engineering
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2590123025024570
Description
Summary: The study explores resource allocation in Federated Machine Learning (FedML) for the Industrial Internet of Things (IIoT), focusing on efficient and privacy-conscious data processing. It proposes optimizing the FedML training process to enhance system performance and preserve data privacy. The research also formulates a long-term average unified cost minimization problem that accounts for energy constraints, device heterogeneity, and limited bandwidth in federated edge learning contexts. To address these issues, a novel Lyapunov-driven optimization algorithm for device selection and bandwidth allocation is introduced. The algorithm balances resource expenditure against model quality, using Lyapunov optimization theory to convert the long-term stochastic problem into per-round deterministic subproblems. The study further presents a multi-tier federated edge learning architecture that integrates cloud collaboration with edge servers to handle the growing number of industrial devices and the demand for timely local model training. Simulations confirm the method's low complexity and strong performance, showing reduced system delay and improved model accuracy. The proposed method reduced system delay by up to 30%, achieved a model accuracy of 98% on the MNIST dataset and 91% on CIFAR-10, and improved convergence speed, with training loss decreasing by 25% within the first 10 rounds. It also achieved a 40.5% improvement in computational efficiency and a 30-50% reduction in system costs, demonstrating its practicality and scalability. These results improve the performance of federated machine learning applications in practical IIoT settings, and the insights gained support the development of intelligent industrial systems that prioritize efficiency and data privacy.
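The abstract does not give the exact formulation, but the transformation it describes is the standard Lyapunov drift-plus-penalty technique. A minimal sketch, with assumed notation not taken from the paper (per-round cost c_t, device-selection vector x_t, bandwidth allocation b_t, per-device energy use e_k, energy budgets \bar{E}_k, virtual deficit queues Q_k(t), total bandwidth B, control parameter V), might read:

\[
\min_{\{x_t,\,b_t\}}\ \limsup_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big[c_t(x_t,b_t)\big]
\quad\text{s.t.}\quad
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big[e_k(x_t,b_t)\big]\le\bar{E}_k,\qquad
\sum_{k} b_{k,t}\le B.
\]

Introducing virtual energy-deficit queues $Q_k(t+1)=\max\{Q_k(t)+e_k(x_t,b_t)-\bar{E}_k,\,0\}$, the drift-plus-penalty rule replaces the long-term stochastic problem with a deterministic subproblem solved in each round $t$:

\[
\min_{x_t,\,b_t}\ V\,c_t(x_t,b_t)+\sum_{k}Q_k(t)\,e_k(x_t,b_t),
\]

where $V$ trades off long-term average cost against the backlog of the energy constraints. This is an illustrative reconstruction of the general technique, not the paper's exact model.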
ISSN: 2590-1230