ML-Empowered Microservice Workload Prediction by Dual-Regularized Matrix Factorization
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-05-01 |
| Series: | Applied Sciences |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2076-3417/15/11/5946 |
| Summary: | A technical challenge for workload prediction in microservice systems is capturing both the dynamic features of workload and the evolving dependencies among microservices. Existing work has focused mainly on modeling dynamic features without adequately accounting for evolving dependencies, owing to their unpredictable temporal dynamics. To fill this gap, and as an illustration of bridging theory and real-world solutions by integrating machine learning with data analysis, we propose Temporality-Dependence Dual-Regularized Matrix Factorization (TDDRMF), a novel framework combining matrix factorization with regularization on both workload temporality and microservice dependencies. It factorizes the workload matrix as the product of a microservice dependency matrix <i>W</i> and a workload feature matrix <i>X</i>, computing <i>X</i> with temporal regularization and <i>W</i> with low-rank norm regularization as a convex relaxation of rank minimization. To further enhance adaptability to workload variations in real-time environments, we deploy a dynamic error detection and update mechanism. Experiments on the Alibaba dataset show that TDDRMF achieves 18.5% lower RMSE than TAMF in 10-step prediction, improving on existing matrix factorization methods in accuracy. Compared with ML-based methods, TDDRMF uses only 5% of their training data and therefore requires only a small fraction of their training time. |
|---|---|
| ISSN: | 2076-3417 |
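The summary describes a dual-regularized factorization: the workload matrix is modeled as <i>W</i> · <i>X</i>, with a temporal-smoothness penalty on <i>X</i> and a low-rank (nuclear) norm penalty on <i>W</i>. The paper's exact algorithm is not given in this record, so the sketch below is only an illustrative reconstruction of that objective using alternating proximal gradient steps; the function names (`tddrmf_sketch`, `svt`), the solver choice, and all hyperparameters (`rank`, `lam_t`, `lam_r`, `lr`, `iters`) are assumptions, not the authors' implementation.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: the proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def tddrmf_sketch(Y, rank=5, lam_t=0.1, lam_r=0.1, lr=1e-2, iters=2000, seed=0):
    """Illustrative dual-regularized factorization (NOT the paper's code):
    minimize 0.5*||Y - W X||_F^2 + (lam_t/2)*sum_t ||x_t - x_{t-1}||^2 + lam_r*||W||_*
    over W (services x rank) and X (rank x time), by alternating
    gradient steps on X and proximal gradient (SVT) steps on W."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    W = rng.standard_normal((m, rank)) * 0.1
    X = rng.standard_normal((rank, n)) * 0.1
    for _ in range(iters):
        R = W @ X - Y                       # reconstruction residual
        # Gradient step on X: Frobenius loss plus temporal smoothness penalty.
        D = X[:, 1:] - X[:, :-1]            # forward differences over time
        grad_X = W.T @ R
        grad_X[:, 1:] += lam_t * D          # d/dx_t of ||x_t - x_{t-1}||^2 terms
        grad_X[:, :-1] -= lam_t * D
        X = X - lr * grad_X
        # Proximal gradient step on W: loss gradient, then shrink singular values.
        W = svt(W - lr * (R @ X.T), lr * lam_r)
    return W, X

# Hypothetical usage on a smooth synthetic "workload" matrix (services x time).
Y = np.outer(np.ones(4), np.sin(np.linspace(0, 3, 20)))
W, X = tddrmf_sketch(Y, rank=2)
rel_err = np.linalg.norm(W @ X - Y) / np.linalg.norm(Y)
```

The SVT step is the standard convex-relaxation device the summary alludes to: penalizing the nuclear norm (sum of singular values) instead of the rank keeps the subproblem in <i>W</i> tractable while still driving it toward low rank.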