Co-occurrence Models for Image Annotation and Retrieval
Type of publication: Idiap-RR
Citation: Garg_Idiap-RR-22-2009
Number: Idiap-RR-22-2009
Year: 2009
Month: 8
Institution: Idiap
Note: École Polytechnique Fédérale de Lausanne - Master's thesis
Abstract: We present two models for content-based automatic image annotation and retrieval in web image repositories, based on the co-occurrence of tags and visual features in images. In particular, we show how additional measures can be taken to address the noisy and limited tagging in datasets such as Flickr and improve performance. As in many state-of-the-art works, an image is represented as a bag of visual terms computed from edge and color information. The co-occurrence of visual terms and tags is used to build models for image annotation and retrieval. The first model starts from a naive Bayes approach and improves upon it by treating image pairs as single documents, which significantly reduces noise and increases annotation performance. The second model represents the visual terms and tags as a graph and uses query expansion techniques to improve retrieval performance. We evaluate our methods on the commonly used 150-concept Corel dataset and on a much harder 2000-concept Flickr dataset. (A toy sketch of the co-occurrence counting and naive Bayes scoring is given after this record.)
Projects: Idiap
Authors: Garg
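To illustrate the co-occurrence idea described in the abstract, below is a minimal sketch in Python. It is not the report's implementation: the data is invented, the names `train_images`, `VOCAB_SIZE`, and `annotate` are hypothetical, and the scoring is a plain naive Bayes over visual-term/tag co-occurrence counts with add-one smoothing, without the image-pair documents or the graph-based query expansion developed in the report.

```python
# Toy sketch of co-occurrence-based annotation (illustrative only, not the
# report's exact formulation): tags are ranked for a test image with a
# naive Bayes score, log P(tag) + sum_v log P(v | tag), estimated from
# tag/visual-term co-occurrence counts with add-one smoothing.
from collections import defaultdict
import math

# Hypothetical training data: each image is a bag of quantized visual terms
# (e.g. indices into an edge/color codebook) plus its user tags.
train_images = [
    {"visual_terms": [3, 7, 7, 12], "tags": ["beach", "sea"]},
    {"visual_terms": [3, 5, 12],    "tags": ["beach", "sand"]},
    {"visual_terms": [1, 2, 9],     "tags": ["city", "night"]},
]

VOCAB_SIZE = 16  # size of the visual-term codebook (assumption for this toy example)

# Count co-occurrences of visual terms with tags, and tag frequencies.
cooc = defaultdict(lambda: defaultdict(int))   # cooc[tag][visual_term] -> count
tag_count = defaultdict(int)
for img in train_images:
    for tag in img["tags"]:
        tag_count[tag] += 1
        for v in img["visual_terms"]:
            cooc[tag][v] += 1

def annotate(visual_terms, top_k=2):
    """Rank tags for a new image by log P(tag) + sum_v log P(v | tag)."""
    n_images = len(train_images)
    scores = {}
    for tag, count in tag_count.items():
        total_v = sum(cooc[tag].values())
        score = math.log(count / n_images)            # prior P(tag)
        for v in visual_terms:                        # likelihood, add-one smoothed
            score += math.log((cooc[tag][v] + 1) / (total_v + VOCAB_SIZE))
        scores[tag] = score
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(annotate([3, 12, 7]))  # e.g. ['beach', 'sea'] on this toy data
```

The add-one smoothing only keeps unseen visual terms from zeroing out a tag's score; the report's image-pair trick and query expansion address the noisy, sparse tagging in a very different way.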