Time Complexity of Training DNNs With Parallel Computing for Wireless Communications
Deep neural networks (DNNs) have been widely used for learning various wireless communication policies. While DNNs have demonstrated the ability to reduce the time complexity of inference, their training often incurs a high computational cost. Since practical wireless systems operate in open and dynamic environments and hence require retraining, it is crucial to analyze the factors affecting training complexity, which can guide DNN architecture selection and hyper-parameter tuning for efficient policy learning.
Main Authors: | Pengyu Cong, Chenyang Yang, Shengqian Han, Shuangfeng Han, Xiaoyun Wang
---|---
Format: | Article
Language: | English
Published: | IEEE, 2025-01-01
Series: | IEEE Open Journal of Vehicular Technology
Subjects: | DNN; parallel computing; precoding; se-FLOPs; time complexity
Online Access: | https://ieeexplore.ieee.org/document/10830510/
author | Pengyu Cong; Chenyang Yang; Shengqian Han; Shuangfeng Han; Xiaoyun Wang
collection | DOAJ |
description | Deep neural networks (DNNs) have been widely used for learning various wireless communication policies. While DNNs have demonstrated the ability to reduce the time complexity of inference, their training often incurs a high computational cost. Since practical wireless systems operate in open and dynamic environments and hence require retraining, it is crucial to analyze the factors affecting training complexity, which can guide DNN architecture selection and hyper-parameter tuning for efficient policy learning. As a metric of time complexity, the number of floating-point operations (FLOPs) for inference has been analyzed in the literature; the time complexity of training DNNs for learning wireless communication policies, however, has only been evaluated in terms of runtime. In this paper, we introduce the number of serial FLOPs (se-FLOPs) as a new time-complexity metric that accounts for the capability of parallel computing. The se-FLOPs metric is consistent with actual runtime, making it suitable for measuring the time complexity of training DNNs. Since graph neural networks (GNNs) can efficiently learn a multitude of wireless communication policies but their architectures depend on the specific policy, no universal GNN architecture is available for analyzing complexities across different policies. Thus, we first use precoder learning as an example to demonstrate the derivation of the number of se-FLOPs required to train several DNNs. Then, we compare the results with the se-FLOPs for inference of the DNNs and for executing a popular numerical algorithm, and provide the scaling laws of these complexities with respect to the numbers of antennas and users. Finally, we extend the analyses to the learning of general wireless communication policies. We use simulations to validate the analyses and to compare the time complexity of each DNN trained either to achieve its best learning performance or to reach an expected performance.
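The record only sketches the se-FLOPs idea, so the following is a minimal toy model of the concept described in the abstract, not the paper's actual definition or derivation: count the scalar operations that must still execute sequentially once a given degree of hardware parallelism is exhausted. The function name, the `lanes` parameter, and the parallelism model (a throughput bound combined with a critical-path latency bound) are all illustrative assumptions.

```python
# Toy illustration of the se-FLOPs idea (an assumption, not the
# paper's definition): hardware exposes `lanes` identical parallel
# lanes, and a dense layer computes y = W @ x with W of shape (m, n).

import math

def dense_layer_se_flops(m: int, n: int, lanes: int) -> int:
    """Estimate the serial FLOPs of y = W @ x on `lanes` parallel lanes.

    Total scalar work: m rows, each needing n multiplies and n - 1 adds.
    Critical path: one multiply followed by a log2(n)-deep adder tree,
    which no amount of parallelism can shorten.
    """
    total_flops = m * (2 * n - 1)
    critical_path = 1 + math.ceil(math.log2(n))
    # Throughput bound (work / lanes) vs. latency bound (critical path):
    # the slower of the two dominates the serial operation count.
    return max(math.ceil(total_flops / lanes), critical_path)

# Example: a 256x512 layer. With one lane, se-FLOPs equal total FLOPs;
# with abundant lanes, they approach the critical-path depth.
print(dense_layer_se_flops(256, 512, lanes=1))        # 261888 (all serial)
print(dense_layer_se_flops(256, 512, lanes=1 << 20))  # 10 (latency bound)
```

Under this toy model, se-FLOPs collapse from the raw FLOP count toward the critical-path depth as parallelism grows, which is consistent with the abstract's claim that se-FLOPs track actual runtime more faithfully than plain FLOP counts do.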
format | Article |
id | doaj-art-6c6af33619fe42598f25f8de2a9eec19 |
institution | Kabale University |
issn | 2644-1330 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Open Journal of Vehicular Technology |
spelling | Record doaj-art-6c6af33619fe42598f25f8de2a9eec19, indexed 2025-01-24T00:02:18Z, English. IEEE Open Journal of Vehicular Technology, vol. 6, pp. 359-384, 2025-01-01, ISSN 2644-1330. DOI: 10.1109/OJVT.2025.3526847; IEEE document 10830510. Authors: Pengyu Cong (Beihang University, Beijing, China; ORCID 0000-0002-1542-563X), Chenyang Yang (Beihang University, Beijing, China; ORCID 0000-0003-0058-0765), Shengqian Han (Beihang University, Beijing, China; ORCID 0000-0002-2085-3292), Shuangfeng Han (China Mobile Research Institute, Beijing, China; ORCID 0000-0001-7221-9739), Xiaoyun Wang (China Mobile Research Institute, Beijing, China; ORCID 0000-0002-3574-9746).
title | Time Complexity of Training DNNs With Parallel Computing for Wireless Communications |
topic | DNN; parallel computing; precoding; se-FLOPs; time complexity
url | https://ieeexplore.ieee.org/document/10830510/ |