Automatic Analysis of Multimodal Group Actions in Meetings
Type of publication: | Idiap-RR |
Citation: | mccowan-rr-03-27 |
Number: | Idiap-RR-27-2003 |
Year: | 2003 |
Institution: | IDIAP |
Address: | Martigny, Switzerland |
Note: | To appear in IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abstract: | This paper investigates the recognition of group actions in meetings. A statistical framework is proposed in which group actions result from the interactions of the individual participants. The group actions are modelled using different HMM-based approaches, where the observations are provided by a set of audio-visual features monitoring the actions of individuals. Experiments demonstrate the importance of taking interactions into account in modelling the group actions. It is also shown that the visual modality contains useful information, even for predominantly audio-based events, motivating a multimodal approach to meeting analysis. |
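Illustration: | The abstract describes modelling group actions with HMMs whose observations are audio-visual features of the individual participants. As a rough illustration only, and not the report's actual models or features, the sketch below trains one Gaussian HMM per group-action class on concatenated audio-visual feature vectors (an early-integration baseline) and classifies a test sequence by maximum likelihood. The `hmmlearn` library, the action labels, and the feature dimensionality are all assumptions introduced here for the example.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_action_models(sequences_by_action, n_states=3, seed=0):
    """Fit one Gaussian HMM per group-action class.

    sequences_by_action: dict mapping an action label to a list of
    (T_i, D) feature matrices (concatenated audio-visual features).
    """
    models = {}
    for action, seqs in sequences_by_action.items():
        X = np.concatenate(seqs)          # stack all sequences along time
        lengths = [len(s) for s in seqs]  # per-sequence lengths for hmmlearn
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50, random_state=seed)
        m.fit(X, lengths)
        models[action] = m
    return models

def classify(models, seq):
    """Assign the action whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda a: models[a].score(seq))

if __name__ == "__main__":
    # Hypothetical synthetic data: 2 action classes, 10-dim features.
    rng = np.random.default_rng(0)
    data = {a: [rng.normal(size=(50, 10)) + i for _ in range(5)]
            for i, a in enumerate(["monologue", "discussion"])}
    models = train_action_models(data)
    test = rng.normal(size=(50, 10)) + 1
    print(classify(models, test))  # expected: "discussion"
``` |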
Userfields: | ipdmembership={speech, learning, vision}, language={English} |
Keywords: | |
Projects: | Idiap |
Authors: | |
Crossref by: | mccowan-rr-03-27b |