Idiap Research Institute
Robust Unsupervised Gaze Calibration using Conversation and Manipulation Attention Priors
Type of publication: Journal paper
Citation: Siegfried_TOMM_2021
Publication status: Accepted
Journal: ACM Transactions on Multimedia Computing, Communications, and Applications
Volume: 18
Number: 1
Year: 2022
Month: January
Pages: 26
ISSN: 1551-6857
URL: https://doi.org/10.1145/3472622
DOI: 10.1145/3472622
Abstract: Gaze estimation is a difficult task, even for humans. However, as humans, we are good at understanding a situation and exploiting it to guess the expected visual focus of attention (VFOA) of people, and we usually use this information to infer people’s gaze. In this paper, we propose to leverage such situation-based expectations about people’s VFOA to collect weakly labeled gaze samples and perform person-specific calibration of gaze estimators in an unsupervised and online way. In this context, our contributions are the following: i) we show how task contextual attention priors can be used to gather reference gaze samples, which is otherwise a cumbersome process; ii) we propose a robust estimation framework to exploit these weak labels for the estimation of the calibration model parameters; iii) we demonstrate the applicability of this approach in two Human-Human and Human-Robot interaction settings, namely conversation and manipulation. Experiments on three datasets validate our approach, providing insights into the effectiveness of the prior and the impact of different calibration models, in particular the usefulness of taking head pose into account.
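The following minimal Python sketch illustrates the calibration scheme the abstract describes: weak gaze labels derived from an attention prior are fed to a robust estimator that fits a person-specific correction. The linear correction model, the use of RANSAC as the robust estimator, and all data and names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

rng = np.random.default_rng(0)
n = 500

# Weak labels from the attention prior: the gaze direction (yaw, pitch, in
# degrees) the person is *expected* to have, e.g. toward the current speaker.
prior_labels = rng.uniform(-30, 30, size=(n, 2))

# Actual gaze: matches the prior most of the time, but in ~30% of frames the
# person looks elsewhere, so those weak labels are wrong (outliers).
actual_gaze = prior_labels.copy()
bad = rng.random(n) < 0.3
actual_gaze[bad] = rng.uniform(-30, 30, size=(bad.sum(), 2))

# Raw output of an uncalibrated gaze estimator: a person-specific gain and
# bias applied to the actual gaze, plus measurement noise.
raw_gaze = 0.8 * actual_gaze + np.array([5.0, -3.0]) + rng.normal(0, 1.0, (n, 2))

# Robustly fit the calibration model (raw gaze -> prior-expected gaze):
# RANSAC rejects frames where the weak label disagrees with the raw estimate.
calib = RANSACRegressor(residual_threshold=5.0)
calib.fit(raw_gaze, prior_labels)

corrected = calib.predict(raw_gaze)  # calibrated gaze for all frames
print(f"inlier ratio: {calib.inlier_mask_.mean():.2f}")
```

In the paper's setting, the calibration model can additionally take head pose into account; it is omitted here for brevity.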
Keywords: conversation, gaze estimation, manipulation, online calibration, remote sensor, RGB-D camera, unsupervised calibration, visual focus of attention
Projects: Idiap, MUMMER
Authors: Siegfried, Remy; Odobez, Jean-Marc
Attachments: Siegfried_TOMM_2021.pdf