Learning Rates for -Regularized Kernel Classifiers

Bibliographic Details
Main Authors: Hongzhi Tong, Di-Rong Chen, Fenghong Yang
Format: Article
Language: English
Published: Wiley, 2013-01-01
Series: Journal of Applied Mathematics
Online Access: http://dx.doi.org/10.1155/2013/496282
collection DOAJ
description We consider a family of classification algorithms generated from a regularization kernel scheme associated with -regularizer and convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition includes approximation error, hypothesis error, and sample error. We apply some novel techniques to estimate the hypothesis error and sample error. Learning rates are eventually derived under some assumptions on the kernel, the input space, the marginal distribution, and the approximation error.
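The scheme and three-term error decomposition described in the abstract can be sketched generically as follows. This is a standard learning-theory formulation, not the authors' exact statement: the hypothesis space \(\mathcal{H}_{K,z}\), loss \(\phi\), regularizer \(\Omega\), and the bound notation are the usual conventions, assumed here for illustration only.

```latex
% A generic regularized kernel classification scheme (sketch): given a
% sample z = {(x_i, y_i)}_{i=1}^m, a Mercer kernel K, a convex loss phi,
% and a regularizer Omega with parameter lambda > 0,
\[
  f_z \;=\; \arg\min_{f \in \mathcal{H}_{K,z}}
  \;\frac{1}{m} \sum_{i=1}^{m} \phi\bigl(y_i f(x_i)\bigr)
  \;+\; \lambda\, \Omega(f).
\]
% The excess misclassification error of the induced classifier sgn(f_z)
% is then bounded by a decomposition of the kind named in the abstract:
\[
  \mathcal{R}\bigl(\operatorname{sgn} f_z\bigr) - \mathcal{R}(f_c)
  \;\le\;
  \underbrace{\mathcal{D}(\lambda)}_{\text{approximation error}}
  \;+\;
  \underbrace{\mathcal{H}(z,\lambda)}_{\text{hypothesis error}}
  \;+\;
  \underbrace{\mathcal{S}(z,\lambda)}_{\text{sample error}},
\]
% where f_c is the Bayes rule; bounding each term under assumptions on
% the kernel, input space, and marginal distribution yields the
% learning rates.
```

The hypothesis error term arises because the data-dependent hypothesis space \(\mathcal{H}_{K,z}\) need not contain the regularizing function; it vanishes in classical schemes where the minimization runs over a fixed reproducing kernel Hilbert space.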
id doaj-art-d0a69bfed8dd4f57803fe8975da1f3ff
institution OA Journals
issn 1110-757X, 1687-0042
affiliations Hongzhi Tong: School of Statistics, University of International Business and Economics, Beijing 100029, China
Di-Rong Chen: Department of Mathematics and LMIB, Beijing University of Aeronautics and Astronautics, Beijing 100083, China
Fenghong Yang: School of Applied Mathematics, Central University of Finance and Economics, Beijing 100081, China