REPORT
Orabona_Idiap-RR-07-2010/IDIAP
Online-Batch Strongly Convex Multi Kernel Learning
Orabona, Francesco
Luo, Jie
Caputo, Barbara
EXTERNAL
https://publications.idiap.ch/attachments/reports/2010/Orabona_Idiap-RR-07-2010.pdf
PUBLIC
Related documents
https://publications.idiap.ch/index.php/publications/showcite/Orabona_CVPR_2010
Idiap-RR-07-2010
2010
Idiap
April 2010
Several object categorization algorithms use kernel methods over multiple cues, as they offer a principled approach to combining multiple cues and obtaining state-of-the-art performance. A general drawback of these strategies is their high computational cost during training, which prevents their application to large-scale problems. They also provide no theoretical guarantees on their convergence rate.
Here we present a Multiclass Multi Kernel Learning (MKL) algorithm that obtains state-of-the-art performance in considerably lower training time. We generalize the standard MKL formulation by introducing a parameter that allows us to decide the level of sparsity of the solution. Thanks to this new setting, we can solve the problem directly in the primal formulation. We prove theoretically and experimentally that 1) our algorithm has a faster convergence rate as the number of kernels grows; 2) the training complexity is linear in the number of training examples; 3) very few iterations are enough to reach good solutions. Experiments on three standard benchmark databases support our claims.
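The sparsity parameter mentioned in the abstract can be sketched concretely. Assuming it enters through a mixed (2,p) group norm over the per-kernel weight vectors w_j, a standard choice for sparse MKL (the exact regularizer is defined in the paper itself), the strongly convex primal objective takes the form

\min_{\bar{w}} \;\; \frac{\lambda}{2}\,\|\bar{w}\|_{2,p}^{2} \;+\; \frac{1}{N}\sum_{i=1}^{N} \ell\big(\bar{w};\, x_i, y_i\big),
\qquad
\|\bar{w}\|_{2,p} = \Big(\sum_{j=1}^{F} \|w_j\|_2^{p}\Big)^{1/p}, \quad 1 < p \le 2,

where F is the number of kernels and \ell is a convex loss such as the multiclass hinge. Taking p close to 1 drives whole groups w_j to zero (a sparse kernel combination), while p = 2 spreads weight over all kernels; the squared norm is strongly convex for p > 1, which is the property behind the convergence-rate claims above.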
CONF
Orabona_CVPR_2010/IDIAP
Online-Batch Strongly Convex Multi Kernel Learning
Orabona, Francesco
Luo, Jie
Caputo, Barbara
EXTERNAL
https://publications.idiap.ch/attachments/papers/2010/Orabona_CVPR_2010.pdf
PUBLIC
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
2010
June 2010
Several object categorization algorithms use kernel methods over multiple cues, as they offer a principled approach to combining multiple cues and obtaining state-of-the-art performance. A general drawback of these strategies is their high computational cost during training, which prevents their application to large-scale problems. They also provide no theoretical guarantees on their convergence rate.
Here we present a Multiclass Multi Kernel Learning (MKL) algorithm that obtains state-of-the-art performance in considerably lower training time. We generalize the standard MKL formulation by introducing a parameter that allows us to decide the level of sparsity of the solution. Thanks to this new setting, we can solve the problem directly in the primal formulation. We prove theoretically and experimentally that 1) our algorithm has a faster convergence rate as the number of kernels grows; 2) the training complexity is linear in the number of training examples; 3) very few iterations are enough to reach good solutions. Experiments on three standard benchmark databases support our claims.
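The claims of per-example cost linear in the number of training examples and convergence in few passes can be illustrated with a generic Pegasos-style stochastic subgradient solver for the group-norm primal above. This is a minimal sketch, not the authors' actual online-batch update rule; the names (train, group_norm_grad), the binary hinge loss, and the use of explicit feature maps in place of kernels are all simplifying assumptions made here for illustration.

import numpy as np

def group_norm_grad(W, p, eps=1e-12):
    # Gradient of 0.5 * ||W||_{2,p}^2; rows of W are per-kernel weight vectors.
    norms = np.linalg.norm(W, axis=1) + eps          # per-group L2 norms
    total = (norms ** p).sum() ** (1.0 / p)          # mixed (2,p) norm
    return (total ** (2.0 - p)) * (norms ** (p - 2.0))[:, None] * W

def train(X, y, p=1.05, lam=0.1, epochs=5, seed=0):
    # X: (F, n, d) array -- F explicit feature maps standing in for F kernels.
    # y: labels in {-1, +1}. Returns per-kernel weights W of shape (F, d).
    rng = np.random.default_rng(seed)
    F, n, d = X.shape
    W = np.zeros((F, d))
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):                 # one example per step,
            t += 1                                   # so cost is linear in n
            eta = 1.0 / (lam * t)                    # Pegasos-style step size
            score = np.einsum('fd,fd->', W, X[:, i]) # sum of per-kernel scores
            g = lam * group_norm_grad(W, p)          # regularizer gradient
            if y[i] * score < 1.0:                   # hinge loss is active
                g -= y[i] * X[:, i]
            W -= eta * g
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((4, 200, 10))            # 4 feature maps, 200 examples
    y = np.sign(X[0] @ rng.standard_normal(10))      # only map 0 is informative
    print(np.linalg.norm(train(X, y), axis=1))       # weight mass concentrates on map 0

With p near 1 the per-group weight norms of the uninformative feature maps shrink toward zero, which is the sparsity behavior the abstract's tunable parameter controls.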