Convergence of an Online Split-Complex Gradient Algorithm for Complex-Valued Neural Networks

The online gradient method has been widely used in training neural networks. We consider in this paper an online split-complex gradient algorithm for complex-valued neural networks. We choose an adaptive learning rate during the training procedure. Under certain conditions, by firstly showing the monotonicity of the error function, it is proved that the gradient of the error function tends to zero and the weight sequence tends to a fixed point. A numerical example is given to support the theoretical findings.
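The abstract describes an online (per-sample) split-complex gradient update with an adaptive learning rate. As a minimal illustrative sketch only, not the paper's exact setup: the single-neuron model, the split tanh activation, and the decaying 1/sqrt(n) learning-rate schedule below are all assumptions introduced for illustration.

```python
import numpy as np

def split_tanh(z):
    # Split-complex activation: apply a real activation to the real and
    # imaginary parts separately, rather than as one complex function.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def online_split_complex_sgd(samples, targets, w, eta0=0.1, epochs=50):
    """One-neuron complex model y = split_tanh(w^H x), trained by online
    gradient descent on the squared error; the gradient is taken separately
    with respect to the real and imaginary parts of w (split-complex)."""
    step = 0
    for _ in range(epochs):
        for x, d in zip(samples, targets):
            step += 1
            eta = eta0 / np.sqrt(step)      # assumed adaptive (decaying) rate
            u = np.vdot(w, x)               # w^H x (vdot conjugates w)
            e = split_tanh(u) - d           # complex output error
            # Split-complex chain rule: the real and imaginary channels
            # of the activation are decoupled.
            g_re = e.real * (1 - np.tanh(u.real) ** 2)
            g_im = e.imag * (1 - np.tanh(u.imag) ** 2)
            # Gradient of 0.5*|e|^2 w.r.t. Re(w) and Im(w), packed as
            # a single complex vector grad = dE/dRe(w) + 1j*dE/dIm(w).
            grad = (g_re * x.real + g_im * x.imag) \
                 + 1j * (g_re * x.imag - g_im * x.real)
            w = w - eta * grad
    return w
```

With a decaying rate and bounded activation, per-sample updates of this form drive the squared error down on data the model can represent, which is the kind of behavior the paper's monotonicity and convergence results formalize.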

Bibliographic Details
Main Authors: Huisheng Zhang, Dongpo Xu, Zhiping Wang
Format: Article
Language: English
Published: Wiley, 2010-01-01
Series: Discrete Dynamics in Nature and Society
Online Access: http://dx.doi.org/10.1155/2010/829692
Collection: DOAJ
ISSN: 1026-0226, 1607-887X
Author Affiliations:
Huisheng Zhang: Department of Mathematics, Dalian Maritime University, Dalian 116026, China
Dongpo Xu: Department of Applied Mathematics, Harbin Engineering University, Harbin 150001, China
Zhiping Wang: Department of Mathematics, Dalian Maritime University, Dalian 116026, China