CONF Ba_ICME_2009/IDIAP
Visual Activity Context for Focus of Attention Estimation in Dynamic Meetings
Ba, Silèye O.; Hung, Hayley; Odobez, Jean-Marc
EXTERNAL https://publications.idiap.ch/attachments/papers/2009/Ba_ICME_2009.pdf
PUBLIC
Related documents: https://publications.idiap.ch/index.php/publications/showcite/Ba_Idiap-RR-02-2009
International Conference on Multimedia & Expo, June 2009

Abstract: We address the problem of recognizing the visual focus of attention (VFOA) of seated people from their head pose and contextual activity cues, in dynamic meetings in which people do not remain seated all the time. We propose a model that comprises the VFOA of a meeting participant as the hidden state and his head pose as the observation. To account for the presence of moving visual targets due to the dynamic nature of the meeting, the locations of the visual targets are used as input variables to the head pose observation model. Contextual information is introduced into the VFOA dynamics through a slide activity variable and through speaking or visual activity variables that relate people's focus to the meeting activity context. The main novelty of this paper is the introduction of visual activity context for VFOA recognition, to account for the correlation between a person's focus and other people's gestures and hand and body motions. We evaluate our model on a large dataset of 5 hours of meetings. Our results show that, for VFOA estimation in meetings, visual activity context can be as effective as speaking context.

REPORT Ba_Idiap-RR-02-2009/IDIAP
Visual Activity Context for Focus of Attention Estimation in Dynamic Meetings
Ba, Silèye O.; Hung, Hayley; Odobez, Jean-Marc
EXTERNAL https://publications.idiap.ch/attachments/reports/2009/Ba_Idiap-RR-02-2009.pdf
PUBLIC
Idiap-RR-02-2009, Idiap, Rue Marconi 19, 1920 Martigny, Switzerland, January 2009 (idiap-rr)

Abstract identical to the ICME 2009 conference paper above.
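For readers who want a concrete picture of the model the abstract describes, the sketch below casts it as an input-output HMM decoded with Viterbi: the VFOA target is the hidden state, head pose (pan, tilt) is the observation, per-frame target locations parameterize the observation means (which is how moving targets are handled), and slide activity plus speaker cues reweight the transition prior. This is a minimal illustration under assumed forms: the isotropic Gaussian observation model, the sticky context-modulated transition matrix, and the target set, variance, and weights in the code are all hypothetical, not the authors' exact formulation.

    import numpy as np

    # Hedged sketch of VFOA decoding (illustrative; not the paper's exact model).
    # Hidden state: which visual target a participant looks at.
    # Observation: head pose (pan, tilt), in degrees.
    # Inputs: per-frame target locations (targets can move) and contextual cues
    # (slide activity, current speaker), which modulate the VFOA dynamics.

    TARGETS = ["person_B", "person_C", "slide_screen", "table"]  # hypothetical target set
    N = len(TARGETS)

    def log_gauss(x, mean, var):
        """Log density of an isotropic 2-D Gaussian over (pan, tilt)."""
        d = x - mean
        return -0.5 * np.sum(d * d) / var - np.log(2.0 * np.pi * var)

    def obs_loglik(head_pose, target_poses, var=60.0):
        """Log-likelihood of the observed head pose under each VFOA hypothesis.
        target_poses[k] is the head pose that 'looking at target k' would
        induce at this frame; it changes when the target moves."""
        return np.array([log_gauss(head_pose, target_poses[k], var) for k in range(N)])

    def transition_logprob(slide_active, speaker, self_prob=0.8):
        """Context-modulated VFOA transition matrix (assumed form).
        A sticky diagonal favors keeping the current focus; slide activity and
        the current speaker raise the prior on switching to those targets."""
        prior = np.ones(N)
        if slide_active:
            prior[TARGETS.index("slide_screen")] += 2.0  # illustrative weight
        if speaker is not None:
            prior[TARGETS.index(speaker)] += 2.0         # illustrative weight
        prior /= prior.sum()
        A = self_prob * np.eye(N) + (1.0 - self_prob) * np.tile(prior, (N, 1))
        return np.log(A)

    def viterbi_vfoa(head_poses, target_poses_seq, slide_flags, speakers):
        """MAP sequence of VFOA states over T frames."""
        T = len(head_poses)
        delta = obs_loglik(head_poses[0], target_poses_seq[0]) - np.log(N)
        back = np.zeros((T, N), dtype=int)
        for t in range(1, T):
            logA = transition_logprob(slide_flags[t], speakers[t])
            scores = delta[:, None] + logA          # scores[i, j]: from state i to j
            back[t] = np.argmax(scores, axis=0)
            delta = scores[back[t], np.arange(N)] + obs_loglik(
                head_poses[t], target_poses_seq[t])
        path = [int(np.argmax(delta))]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return [TARGETS[k] for k in reversed(path)]

    # Toy usage: the participant turns toward the screen once slides become active.
    poses = [np.array([-30.0, 0.0]), np.array([10.0, -15.0]), np.array([12.0, -14.0])]
    tposes = [[np.array([-30.0, 0.0]), np.array([40.0, 0.0]),
               np.array([10.0, -15.0]), np.array([0.0, -40.0])]] * 3
    print(viterbi_vfoa(poses, tposes,
                       slide_flags=[False, True, True],
                       speakers=[None, None, None]))

In the paper the contextual variables enter the VFOA dynamics as part of a learned probabilistic model; the fixed illustrative weights above simply show where such cues would plug in.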