<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
	<record>
		<datafield tag="980" ind1=" " ind2=" ">
			<subfield code="a">REPORT</subfield>
		</datafield>
		<datafield tag="970" ind1=" " ind2=" ">
			<subfield code="a">Tommasi_Idiap-RR-77-2008/IDIAP</subfield>
		</datafield>
		<datafield tag="245" ind1=" " ind2=" ">
			<subfield code="a">CLEF2008 Image Annotation Task: an SVM Confidence-Based Approach</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Tommasi, Tatiana</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Orabona, Francesco</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Caputo, Barbara</subfield>
		</datafield>
		<datafield tag="856" ind1="4" ind2="0">
			<subfield code="i">EXTERNAL</subfield>
			<subfield code="u">http://publications.idiap.ch/attachments/reports/2008/Tommasi_Idiap-RR-77-2008.pdf</subfield>
			<subfield code="x">PUBLIC</subfield>
		</datafield>
		<datafield tag="088" ind1=" " ind2=" ">
			<subfield code="a">Idiap-RR-77-2008</subfield>
		</datafield>
		<datafield tag="260" ind1=" " ind2=" ">
			<subfield code="c">2008</subfield>
			<subfield code="b">Idiap</subfield>
		</datafield>
		<datafield tag="771" ind1="2" ind2=" ">
			<subfield code="d">December 2008</subfield>
		</datafield>
		<datafield tag="500" ind1=" " ind2=" ">
			<subfield code="a">CLEF 2008 Working Notes</subfield>
		</datafield>
		<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">This paper presents the algorithms and results of our participation in the medical image annotation task of ImageCLEFmed 2008. Our previous experience in the same task in 2007 suggests that combining multiple cues with different SVM-based approaches is very effective in this domain. Moreover, it points out that local features are the most discriminative cues for the problem at hand. On this basis, we decided to integrate two different local structural and textural descriptors. Cues are combined through simple concatenation of the feature vectors and through the Multi-Cue Kernel. The trickiest part of the challenge this year was annotating images coming mainly from classes with only a few examples in the training set. We tackled the problem on two fronts: (1) we introduced a further integration strategy using SVM as an opinion maker, which combines the first two opinions on the basis of a technique for evaluating the confidence of the classifier's decisions; this approach produces class labels with "don't know" wildcards placed where appropriate; (2) we enriched the poorly populated training classes by adding virtual examples generated by slightly modifying the original images. We submitted several runs considering different combinations of the proposed techniques. Our team was called "idiap". The run jointly using the low cue-integration technique, the confidence-based opinion fusion, and the virtual examples scored 74.92, ranking first among all submissions.</subfield>
		</datafield>
	</record>
</collection>