Learning Rates for l1-Regularized Kernel Classifiers
We consider a family of classification algorithms generated from a regularization kernel scheme associated with an l1-regularizer and a convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition...
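For orientation only (not quoted from the paper), a coefficient-based l1-regularized kernel scheme of the kind the abstract describes is usually written as below; the symbols K (a Mercer kernel), phi (a convex loss), lambda (the regularization parameter), and alpha (the coefficient vector) are standard notation assumed here, not the authors' own definitions.

```latex
% Minimal sketch of a coefficient-based l^1-regularized kernel scheme
% (standard form, assumed for illustration): given a sample
% z = {(x_i, y_i)}_{i=1}^m with y_i in {-1, +1}, a Mercer kernel K,
% a convex loss \phi, and a parameter \lambda > 0,
\[
  f_z = \sum_{i=1}^{m} \alpha_i^{z} \, K(x_i, \cdot),
  \qquad
  \alpha^{z} \in \arg\min_{\alpha \in \mathbb{R}^{m}}
  \left\{
    \frac{1}{m} \sum_{i=1}^{m}
      \phi\!\Bigl( y_i \sum_{j=1}^{m} \alpha_j K(x_j, x_i) \Bigr)
    + \lambda \sum_{j=1}^{m} \lvert \alpha_j \rvert
  \right\}.
\]
% The induced classifier is sign(f_z); learning rates of the type studied
% in such papers bound the excess misclassification error of sign(f_z)
% relative to the Bayes rule.
```

In schemes of this form the l1 penalty acts on the expansion coefficients rather than on an RKHS norm, so the hypothesis space is sample-dependent; this is typically why a dedicated error decomposition is needed to obtain the convergence rate.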
| Main Authors: | Hongzhi Tong, Di-Rong Chen, Fenghong Yang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2013-01-01 |
| Series: | Journal of Applied Mathematics |
| Online Access: | http://dx.doi.org/10.1155/2013/496282 |
Similar Items
- A Simpler Approach to Coefficient Regularized Support Vector Machines Regression
  by: Hongzhi Tong, et al.
  Published: (2014-01-01)
- Application of Non-Sparse Manifold Regularized Multiple Kernel Classifier
  by: Tao Yang
  Published: (2025-03-01)
- The Learning Rates of Regularized Regression Based on Reproducing Kernel Banach Spaces
  by: Baohuai Sheng, et al.
  Published: (2013-01-01)
- Indefinite Kernel Network with lq-Norm Regularization
  by: Zhongfeng Qu, et al.
  Published: (2016-01-01)
- Theory Analysis for the Convergence of Kernel-Regularized Online Binary Classification Learning Associated with RKBSs
  by: Lin Liu, et al.
  Published: (2023-01-01)