Error Bounds for lp-Norm Multiple Kernel Learning with Least Square Loss

Bibliographic Details
Main Authors: Shao-Gao Lv, Jin-De Zhu
Format: Article
Language: English
Published: Wiley 2012-01-01
Series: Abstract and Applied Analysis
Online Access: http://dx.doi.org/10.1155/2012/915920
Description
Summary: The problem of learning the kernel function as a linear combination of multiple kernels has recently attracted considerable attention in machine learning. Specifically, by imposing an lp-norm penalty on the kernel combination coefficients, multiple kernel learning (MKL) has proved useful and effective both for theoretical analysis and in practical applications (Kloft et al., 2009, 2011). In this paper, we present a theoretical analysis of the approximation error and learning ability of lp-norm MKL. Our analysis establishes explicit learning rates for lp-norm MKL and demonstrates some notable advantages over traditional kernel-based learning algorithms in which the kernel is fixed.
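The lp-norm MKL setup described in the abstract can be illustrated with a short sketch. The block below is a minimal, hypothetical implementation (not the authors' code): it alternates between kernel ridge regression on the weighted kernel sum and a closed-form update of the kernel weights onto the lp-norm sphere, in the style of Kloft et al. All function and variable names are illustrative assumptions.

```python
import numpy as np

def lp_mkl_ridge(Ks, y, p=2.0, lam=1.0, n_iter=20):
    """Alternating-optimization sketch of lp-norm MKL with least-squares loss.

    Ks : list of (n, n) precomputed kernel matrices
    y  : (n,) target vector
    p  : exponent of the lp-norm constraint on the kernel weights (p >= 1)
    lam: ridge regularization strength
    """
    n, M = len(y), len(Ks)
    theta = np.full(M, M ** (-1.0 / p))  # uniform start on the lp sphere
    for _ in range(n_iter):
        # Step 1: kernel ridge regression on the combined kernel
        K = sum(t * Km for t, Km in zip(theta, Ks))
        alpha = np.linalg.solve(K + lam * np.eye(n), y)
        # Step 2: squared RKHS norms of the per-kernel components,
        # ||f_m||^2 = theta_m^2 * alpha' K_m alpha
        norms = np.array([t**2 * alpha @ Km @ alpha
                          for t, Km in zip(theta, Ks)])
        norms = np.maximum(norms, 1e-12)  # numerical floor
        # Closed-form weight update, then projection to the lp sphere
        theta = norms ** (1.0 / (p + 1))
        theta /= np.sum(theta ** p) ** (1.0 / p)
    return alpha, theta
```

For p → 1 the weights tend toward a sparse selection of kernels, while larger p distributes weight more uniformly; the paper's error bounds quantify how this choice affects the learning rate.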
ISSN: 1085-3375, 1687-0409