CONF
Lebret_ICML_2015/IDIAP
Phrase-based Image Captioning
Lebret, Rémi
Pinheiro, Pedro H. O.
Collobert, Ronan
EXTERNAL
https://publications.idiap.ch/attachments/papers/2015/Lebret_ICML_2015.pdf
PUBLIC
https://publications.idiap.ch/index.php/publications/showcite/Lebret_Idiap-RR-08-2015
Related documents
International Conference on Machine Learning (ICML)
Lille, France
37
2085–2094
2015
JMLR
http://jmlr.org/proceedings/papers/v37/lebret15.html
URL
Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representation (generated from a previously trained Convolutional Neural Network) and phrases that are used to describe them. The system is then able to infer phrases from a given image sample. Based on caption syntax statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the inferred phrases. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results on two popular datasets for the task: Flickr30k and the recently proposed Microsoft COCO.
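The bilinear scoring step described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' training code: the dimensions, phrase vocabulary, and random embeddings are stand-ins, assuming a fixed CNN image feature (e.g., a 4096-d fc7 vector) and learned phrase embeddings, with a single matrix U mapping the image into the phrase space so that each phrase is scored by a dot product.

```python
import numpy as np

# Hypothetical dimensions for illustration: a 4096-d CNN feature and
# 400-d phrase embeddings. Real values come from the trained system.
IMG_DIM, PHR_DIM = 4096, 400

rng = np.random.default_rng(0)

# U projects the image representation into the phrase embedding space;
# the bilinear score of phrase p for image x is then p . (U x).
U = rng.normal(scale=0.01, size=(PHR_DIM, IMG_DIM))

# Toy phrase vocabulary with random embeddings (stand-ins for learned ones).
phrases = ["a man", "is riding", "a bicycle", "a dog", "on the street"]
phrase_emb = rng.normal(size=(len(phrases), PHR_DIM))

def score_phrases(image_feat):
    """Bilinear score of every phrase against one image feature."""
    projected = U @ image_feat        # shape (PHR_DIM,)
    return phrase_emb @ projected     # shape (n_phrases,)

image_feat = rng.normal(size=IMG_DIM)  # stand-in for a real CNN feature
scores = score_phrases(image_feat)
top = np.argsort(scores)[::-1][:3]
print("top phrases:", [phrases[i] for i in top])
```

At test time the highest-scoring phrases are kept as candidate building blocks for the caption, which is what the language model below then assembles.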
REPORT
Lebret_Idiap-RR-08-2015/IDIAP
Phrase-based Image Captioning
Lebret, Rémi
Pinheiro, Pedro H. O.
Collobert, Ronan
EXTERNAL
https://publications.idiap.ch/attachments/reports/2015/Lebret_Idiap-RR-08-2015.pdf
PUBLIC
Idiap-RR-08-2015
2015
Idiap
May 2015
Under review by the International Conference on Machine Learning (ICML).
Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representation (generated from a previously trained Convolutional Neural Network) and phrases that are used to describe them. The system is then able to infer phrases from a given image sample. Based on caption syntax statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the inferred phrases. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results on two popular datasets for the task: Flickr30k and the recently proposed Microsoft COCO.
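The "simple language model based on caption syntax statistics" can likewise be sketched. The following toy is an assumption-laden illustration, not the paper's exact decoder: it supposes the inferred phrases are grouped by chunk type (noun, verb, and prepositional phrases), that transition statistics between chunk types have been estimated from training captions, and that captions are assembled by filling the highest-scoring chunk-type skeleton. All phrases, scores, and templates here are made up.

```python
import itertools

# Illustrative inferred phrases, grouped by chunk type; in practice these
# would be the bilinear model's top-scoring phrases for a test image.
inferred = {
    "NP": ["a man", "a bicycle", "the street"],
    "VP": ["is riding"],
    "PP": ["on"],
}

# Hypothetical transition scores between chunk types, standing in for
# syntax statistics estimated from the training captions.
trans = {
    ("NP", "VP"): 0.9, ("VP", "NP"): 0.8,
    ("NP", "PP"): 0.4, ("PP", "NP"): 0.9,
}

# Caption skeletons consistent with common caption syntax.
templates = [("NP", "VP", "NP"), ("NP", "VP", "NP", "PP", "NP")]

def template_score(tpl):
    """Product of transition scores along a chunk-type skeleton."""
    score = 1.0
    for a, b in zip(tpl, tpl[1:]):
        score *= trans.get((a, b), 0.0)
    return score

def generate(tpl):
    """Enumerate phrase sequences matching a template, without reuse."""
    for combo in itertools.product(*(inferred[t] for t in tpl)):
        if len(set(combo)) == len(combo):
            yield " ".join(combo)

best_tpl = max(templates, key=template_score)
for caption in itertools.islice(generate(best_tpl), 3):
    print(caption)
```

A beam search over templates and phrase choices, scored jointly by the bilinear model and the transition statistics, would be the natural refinement of this greedy sketch.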