<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
	<record>
		<datafield tag="980" ind1=" " ind2=" ">
			<subfield code="a">CONF</subfield>
		</datafield>
		<datafield tag="970" ind1=" " ind2=" ">
			<subfield code="a">Jie_NIPS2009/IDIAP</subfield>
		</datafield>
		<datafield tag="245" ind1=" " ind2=" ">
			<subfield code="a">Who's Doing What: Joint Modeling of Names and Verbs for Simultaneous Face and Pose Annotation</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Luo, Jie</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Caputo, Barbara</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Ferrari, Vittorio</subfield>
		</datafield>
		<datafield tag="856" ind1="4" ind2="0">
			<subfield code="i">EXTERNAL</subfield>
			<subfield code="u">http://publications.idiap.ch/attachments/papers/2011/Jie_NIPS2009.pdf</subfield>
			<subfield code="x">PUBLIC</subfield>
		</datafield>
		<datafield tag="711" ind1="2" ind2=" ">
			<subfield code="a">NIPS Foundation - Advances in Neural Information Processing Systems 22 (NIPS09)</subfield>
			<subfield code="c">Vancouver, B.C., Canada</subfield>
		</datafield>
		<datafield tag="260" ind1=" " ind2=" ">
			<subfield code="c">2009</subfield>
			<subfield code="b">MIT Press</subfield>
		</datafield>
		<datafield tag="771" ind1="2" ind2=" ">
			<subfield code="d">December 2009</subfield>
		</datafield>
		<datafield tag="520" ind1=" " ind2=" ">
			<subfield code="a">Given a corpus of news items consisting of images accompanied by text captions, we want to find out “who’s doing what”, i.e. associate names and action verbs in the captions to the face and body pose of the persons in the images. We present a joint model for simultaneously solving the image-caption correspondences and learning visual appearance models for the face and pose classes occurring in the corpus. These models can then be used to recognize people and actions in novel images without captions. We demonstrate experimentally that our joint ‘face and pose’ model solves the correspondence problem better than earlier models covering only the face, and that it can perform recognition of new uncaptioned images.</subfield>
		</datafield>
	</record>
</collection>