<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
	<record>
		<datafield tag="980" ind1=" " ind2=" ">
			<subfield code="a">ARTICLE</subfield>
		</datafield>
		<datafield tag="970" ind1=" " ind2=" ">
			<subfield code="a">Ba_IEEESMC-B_2008/IDIAP</subfield>
		</datafield>
		<datafield tag="245" ind1=" " ind2=" ">
			<subfield code="a">Recognizing Human Visual Focus of Attention from Head Pose in Meetings</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Ba, Silèye O.</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Odobez, Jean-Marc</subfield>
		</datafield>
		<datafield tag="856" ind1="4" ind2="0">
			<subfield code="i">EXTERNAL</subfield>
			<subfield code="u">http://publications.idiap.ch/attachments/papers/2008/Ba_IEEESMC-B_2008.pdf</subfield>
			<subfield code="x">PUBLIC</subfield>
		</datafield>
		<datafield tag="773" ind1=" " ind2=" ">
			<subfield code="p">IEEE Transactions on Systems, Man, and Cybernetics, Part B</subfield>
			<subfield code="v">Vol. 39</subfield>
			<subfield code="n">No. 1</subfield>
			<subfield code="c">16-34</subfield>
		</datafield>
		<datafield tag="260" ind1=" " ind2=" ">
			<subfield code="c">2009</subfield>
		</datafield>
		<datafield tag="771" ind1="2" ind2=" ">
			<subfield code="d">February 2009</subfield>
		</datafield>
		<datafield tag="520" ind1=" " ind2=" ">
			<subfield code="a">We address the problem of recognizing the visual focus of attention (VFOA) of meeting participants based on their head pose. To this end, the head pose observations are modeled using a Gaussian Mixture Model (GMM) or a Hidden Markov Model (HMM) whose hidden states correspond to the VFOA. The novelties of this work are threefold. First, contrary to previous studies on the topic, in our set-up the potential VFOA of a person is not restricted to other participants only: it also includes environmental targets (a table and a projection screen), which increases the complexity of the task, with more VFOA targets spread in the pan as well as the tilt gaze space. Second, we propose a geometric model to set the GMM or HMM parameters by exploiting results from cognitive science on saccadic eye motion, which allows the prediction of the head pose given a gaze target. Third, an unsupervised parameter adaptation step that does not use any labeled data is proposed, which accounts for the specific gazing behaviour of each participant.</subfield>
		</datafield>
	</record>
</collection>