Error Bounds for lp-Norm Multiple Kernel Learning with Least Square Loss
The problem of learning the kernel function as a linear combination of multiple kernels has recently attracted considerable attention in machine learning. Specifically, by imposing an lp-norm penalty on the kernel combination coefficients, multiple kernel learning (MKL) has proved useful and effective...
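The abstract's formulation, a least-squares loss with an lp-norm constraint on the kernel combination coefficients, is commonly solved by alternating optimization. The sketch below is a minimal, hedged illustration of that general scheme, not the authors' algorithm: with the weights theta fixed, it solves kernel ridge regression on the combined Gram matrix, then updates theta by the standard closed-form lp-norm rule based on the per-kernel function norms. All names (`lp_mkl_least_squares`, the RBF helper, the bandwidths) are hypothetical.

```python
import numpy as np

def lp_mkl_least_squares(kernels, y, p=2.0, lam=0.1, n_iter=20):
    """Alternating-optimization sketch for lp-norm MKL with least-squares loss.

    kernels : list of (n, n) PSD Gram matrices
    y       : (n,) target vector
    p       : lp-norm constraint on the combination weights theta (||theta||_p = 1)
    lam     : ridge regularization parameter (assumed, not from the article)
    """
    M, n = len(kernels), len(y)
    theta = np.full(M, M ** (-1.0 / p))  # uniform start with ||theta||_p = 1
    for _ in range(n_iter):
        # Step 1: theta fixed -> kernel ridge regression on the combined kernel
        K = sum(t * Km for t, Km in zip(theta, kernels))
        alpha = np.linalg.solve(K + lam * np.eye(n), y)
        # Step 2: per-kernel squared norms ||f_m||^2 = theta_m^2 * alpha' K_m alpha
        norms2 = np.array([t ** 2 * (alpha @ Km @ alpha)
                           for t, Km in zip(theta, kernels)])
        # Step 3: closed-form update theta_m ∝ ||f_m||^{2/(p+1)},
        # rescaled so that ||theta||_p = 1
        num = norms2 ** (1.0 / (p + 1))
        theta = num / (np.sum(num ** p) ** (1.0 / p) + 1e-12)
    return theta, alpha

# Tiny synthetic usage: three RBF kernels at different bandwidths.
def rbf(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
kernels = [rbf(X, g) for g in (0.1, 1.0, 10.0)]
theta, alpha = lp_mkl_least_squares(kernels, y, p=1.5)
```

The theta update is the well-known closed form for lp-norm MKL; the specific error bounds analyzed in the article concern the statistical performance of the resulting estimator, not this optimization loop.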
Saved in:
| Main Authors: | Shao-Gao Lv, Jin-De Zhu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2012-01-01 |
| Series: | Abstract and Applied Analysis |
| Online Access: | http://dx.doi.org/10.1155/2012/915920 |
Similar Items
- Lp Bounds for the Commutators of Oscillatory Singular Integrals with Rough Kernels
  by: Yanping Chen, et al.
  Published: (2014-01-01)
- Kernel generalized least squares regression for network-structured data
  by: Edward Antonian, et al.
  Published: (2025-01-01)
- Estimating a Bounded Normal Mean Relative to Squared Error Loss Function
  by: A. Karimnezhad
  Published: (2011-09-01)
- Constructive Analysis for Least Squares Regression with Generalized K-Norm Regularization
  by: Cheng Wang, et al.
  Published: (2014-01-01)
- Bounds for distribution functions of sums of squares and radial errors
  by: Roger B. Nelsen, et al.
  Published: (1991-01-01)