A fiber channel modeling method based on complex neural networks

Bibliographic Details
Main Authors: Haifeng Yang, Yongjun Wang, Chao Li, Lu Han, Qi Zhang, Xiangjun Xin
Format: Article
Language: English
Published: Nature Portfolio 2025-07-01
Series: Scientific Reports
Subjects:
Online Access: https://doi.org/10.1038/s41598-025-07595-1
Description
Summary: Channel modeling plays a pivotal role in the field of communications, particularly in the optical communication networks of backbone communication systems. Recent studies on optical channel modeling have utilized real-valued neural networks (RVNNs) to extract channel characteristics, an approach that does not fully account for the properties of complex-valued signals. To address this limitation, we propose a complex-valued conditional generative adversarial network (C-CGAN) in this paper to comprehensively learn channel features. We describe the architecture and parameters of the C-CGAN and employ complex-valued windowed construction for the input data. Subsequently, we evaluate the model's accuracy and generalization capability using the normalized mean square error (NMSE) and benchmark it against a real-valued conditional generative adversarial network (R-CGAN). The results indicate that the C-CGAN achieves better generalization across various scenarios, including different dataset sizes, noise levels, and input feature complexities, while also exhibiting a more stable training process. The NMSE achieved by the C-CGAN remains below $2\times 10^{-2}$ and outperforms that of the R-CGAN. Additionally, analysis from the perspective of floating-point operations (FLOPs) reveals that the computational complexity of the C-CGAN is relatively low. To further validate scalability, we introduce a self-loop cascading mechanism that, under constrained training datasets, improves NMSE performance by 22.48% compared to the R-CGAN.
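The record does not reproduce the paper's definitions, but the two quantities it leans on — the NMSE metric for complex-valued signals and the complex-valued windowed construction of input data — are standard enough to sketch. The following is a minimal illustration assuming NumPy complex arrays; the function names `nmse` and `complex_windows` and the window width are illustrative, not taken from the paper.

```python
import numpy as np

def nmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Normalized mean square error between complex-valued signals:
    mean(|y_pred - y_true|^2) / mean(|y_true|^2)."""
    return float(np.mean(np.abs(y_pred - y_true) ** 2)
                 / np.mean(np.abs(y_true) ** 2))

def complex_windows(signal: np.ndarray, width: int) -> np.ndarray:
    """Slide a window of `width` complex samples across the signal,
    yielding one complex input vector per position (shape: (n, width))."""
    n = len(signal) - width + 1
    return np.stack([signal[i:i + width] for i in range(n)])

# Example: a short complex baseband sequence windowed into model inputs.
sig = np.exp(1j * np.linspace(0, np.pi, 8))   # 8 complex samples
X = complex_windows(sig, width=3)             # shape (6, 3), complex dtype
```

A real-valued network would instead split each window into interleaved real and imaginary parts, doubling the input dimension; keeping the samples complex is what lets a complex-valued model exploit the phase structure directly.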
ISSN:2045-2322