CONF
monayACM:2004/IDIAP
PLSA-based Image Auto-Annotation: Constraining the Latent Space
Monay, Florent
Gatica-Perez, Daniel
EXTERNAL
https://publications.idiap.ch/attachments/papers/2004/monay-acm-1568937089.pdf
PUBLIC
https://publications.idiap.ch/index.php/publications/showcite/monay02
Related documents
Proc. ACM Int. Conf. on Multimedia (ACM MM)
2004
IDIAP-RR 04-30
We address the problem of unsupervised image auto-annotation with probabilistic latent space models. Unlike most previous works, which build latent space representations assuming equal relevance for the text and visual modalities, we propose a new way of modeling multi-modal co-occurrences, constraining the definition of the latent space to ensure its consistency in semantic terms (words),
while retaining the ability to jointly model visual information. The concept is implemented by a linked pair of Probabilistic Latent Semantic Analysis (PLSA) models. On a 16,000-image collection, we show, with extensive experiments and various performance measures, that our approach significantly outperforms previous joint models.
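The abstract's core idea, a latent space constrained by the text modality and then shared with the visual modality, can be sketched with a plain PLSA EM fit in which p(z|d) may be held fixed. This is a minimal illustration under assumed notation (count matrix n(d,w), aspects z), not the authors' implementation; the function name and the `fix_pz_d` mechanism are hypothetical.

```python
import numpy as np

def plsa(n_dw, K, iters=50, fix_pz_d=None, seed=0):
    """Fit PLSA p(w|d) = sum_z p(z|d) p(w|z) by EM on a count matrix n_dw.

    If fix_pz_d is given, p(z|d) is held fixed during EM -- a sketch of
    constraining the latent space with one modality (e.g. words) before
    fitting the other (e.g. visual features) in a linked second model.
    """
    rng = np.random.default_rng(seed)
    D, W = n_dw.shape
    p_w_z = rng.random((K, W))
    p_w_z /= p_w_z.sum(1, keepdims=True)
    if fix_pz_d is None:
        p_z_d = rng.random((D, K))
        p_z_d /= p_z_d.sum(1, keepdims=True)
    else:
        p_z_d = fix_pz_d
    for _ in range(iters):
        # E-step: responsibilities p(z|d,w), shape (D, K, W), normalized over z
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        post = joint / (joint.sum(1, keepdims=True) + 1e-12)
        weighted = n_dw[:, None, :] * post  # n(d,w) * p(z|d,w)
        # M-step: re-estimate p(w|z); update p(z|d) only if not constrained
        p_w_z = weighted.sum(0)
        p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        if fix_pz_d is None:
            p_z_d = weighted.sum(2)
            p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```

A linked pair in this spirit would first fit PLSA on word counts to obtain p(z|d), then call `plsa` on the visual co-occurrence matrix with `fix_pz_d` set to that text-derived p(z|d), so both modalities share a latent space whose semantics are anchored in the words.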
REPORT
monay02/IDIAP
PLSA-based Image Auto-Annotation: Constraining the Latent Space
Monay, Florent
Gatica-Perez, Daniel
EXTERNAL
https://publications.idiap.ch/attachments/reports/2004/rr04-30.pdf
PUBLIC
Idiap-RR-30-2004
2004
IDIAP
Published in "Proc. ACM Multimedia 2004", 2004
We address the problem of unsupervised image auto-annotation with probabilistic latent space models. Unlike most previous works, which build latent space representations assuming equal relevance for the text and visual modalities, we propose a new way of modeling multi-modal co-occurrences, constraining the definition of the latent space to ensure its consistency in semantic terms (words),
while retaining the ability to jointly model visual information. The concept is implemented by a linked pair of Probabilistic Latent Semantic Analysis (PLSA) models. On a 16,000-image collection, we show, with extensive experiments and various performance measures, that our approach significantly outperforms previous joint models.