CONF
grandvalet:nips:2005/IDIAP
A Probabilistic Interpretation of SVMs with an Application to Unbalanced Classification
Grandvalet, Yves
Mariéthoz, Johnny
Bengio, Samy
EXTERNAL
https://publications.idiap.ch/attachments/papers/2005/grandvalet-nips-2005.pdf
PUBLIC
https://publications.idiap.ch/index.php/publications/showcite/grandvalet:rr05-26
Related documents
Advances in Neural Information Processing Systems, NIPS 18
2005
IDIAP-RR 05-26
In this paper, we show that the hinge loss can be interpreted as the negative log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection makes it possible to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new way to adapt the SVM optimization problem when decisions result in unequal losses. Experiments with an unbalanced classification loss show improvements over state-of-the-art procedures.
REPORT
grandvalet:rr05-26/IDIAP
A Probabilistic Interpretation of SVMs with an Application to Unbalanced Classification
Grandvalet, Yves
Mariéthoz, Johnny
Bengio, Samy
EXTERNAL
https://publications.idiap.ch/attachments/reports/2005/grandvalet-idiap-rr-05-26.pdf
PUBLIC
Idiap-RR-26-2005
2005
IDIAP
Published in Advances in Neural Information Processing Systems, NIPS 18, 2005
In this paper, we show that the hinge loss can be interpreted as the negative log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection makes it possible to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new way to adapt the SVM optimization problem when decisions result in unequal losses. Experiments with an unbalanced classification loss show improvements over state-of-the-art procedures.
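Note (editorial companion to the abstract): below is a minimal Python sketch, not the authors' code, of an interval-valued score-to-posterior mapping of the kind the abstract describes. It implements only the well-known set-valued inverse of the pointwise minimizer of the expected hinge loss, f*(x) = sign(2 P(Y=1|x) - 1); the exact mapping derived in the paper may differ in detail, and the function names hinge_loss and posterior_interval are illustrative choices, not from the paper.

    # Illustrative sketch (not the authors' code): inverting the pointwise
    # minimizer of the expected hinge loss yields an interval of posterior
    # probabilities per SVM score, rather than a point estimate.

    import numpy as np

    def hinge_loss(y, f):
        """Hinge loss max(0, 1 - y*f) for labels y in {-1, +1} and scores f."""
        return np.maximum(0.0, 1.0 - y * f)

    def posterior_interval(f, tol=1e-9):
        """Interval of posteriors P(Y=1|x) compatible with an SVM score f.

        Derived from the minimizers of the expected hinge loss:
          P = 0        admits any f <= -1
          0 < P < 1/2  forces   f  = -1
          P = 1/2      admits any f in [-1, 1]
          1/2 < P < 1  forces   f  = +1
          P = 1        admits any f >= +1
        Inverting this correspondence gives a set, not a point estimate.
        """
        if abs(f + 1.0) <= tol:
            return (0.0, 0.5)
        if abs(f - 1.0) <= tol:
            return (0.5, 1.0)
        if f < -1.0:
            return (0.0, 0.0)
        if f > 1.0:
            return (1.0, 1.0)
        return (0.5, 0.5)  # -1 < f < 1

    if __name__ == "__main__":
        print("hinge_loss(+1, 0.3) =", hinge_loss(1, 0.3))
        for score in (-2.3, -1.0, 0.0, 1.0, 1.7):
            lo, hi = posterior_interval(score)
            print(f"score {score:+.1f} -> P(Y=1|x) in [{lo:.1f}, {hi:.1f}]")

The inverse is flat on (-1, 1) and set-valued at the hinge points, which a single point-valued sigmoid fit (e.g., Platt scaling, a standard earlier proposal for mapping SVM scores to probabilities) cannot represent; this is the contrast the abstract draws with previous proposals.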