CONF
Siegfried_ECEM_2017/IDIAP
Supervised Gaze Bias Correction for Gaze Coding in Interactions
Siegfried, Remy
Odobez, Jean-Marc
EXTERNAL
https://publications.idiap.ch/attachments/papers/2018/Siegfried_ECEM_2017.pdf
PUBLIC
https://publications.idiap.ch/index.php/publications/showcite/Siegfried_Idiap-RR-23-2017
Related documents
ECEM COGAIN Symposium
2017
3
Understanding the role of gaze in conversations and social interactions, or exploiting it for HRI applications, is an ongoing research subject. In these contexts, vision-based eye trackers are preferred as they are non-invasive and allow people to behave more naturally. In particular, appearance-based methods (ABM) are very promising, as they can perform online gaze estimation and have the potential to be head pose and person invariant, accommodating more situations, including user mobility and the resulting low-resolution images. However, they may also suffer from a lack of robustness when several of these challenges are jointly present. In this work, we address gaze coding in human-human interactions and present a simple method, based on a few manually annotated frames, that greatly reduces the error of a head pose invariant ABM method, as shown on a dataset of 6 interactions.
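The "simple method based on a few manually annotated frames" is not detailed in this record; a minimal sketch of one plausible reading, estimating a constant per-subject bias on the annotated frames and subtracting it from subsequent ABM predictions, could look like the following (function names and gaze values are hypothetical, and the paper's actual correction may differ):

```python
import numpy as np

def estimate_bias(predicted, annotated):
    """Mean offset between ABM gaze predictions and manual annotations
    on a few labelled frames (a constant-bias model)."""
    predicted = np.asarray(predicted, dtype=float)
    annotated = np.asarray(annotated, dtype=float)
    return (predicted - annotated).mean(axis=0)

def correct(predicted, bias):
    """Subtract the estimated per-subject bias from new predictions."""
    return np.asarray(predicted, dtype=float) - bias

# A few manually annotated frames (yaw, pitch, in degrees); illustrative values.
pred = [[12.0, -3.0], [8.0, 1.0], [15.0, -5.0]]   # ABM estimates
gt   = [[10.0, -1.0], [6.0, 3.0], [13.0, -3.0]]   # manual annotations
bias = estimate_bias(pred, gt)          # -> [ 2., -2.]
new_pred = correct([[9.0, 0.0]], bias)  # -> [[7., 2.]]
```

A constant offset is the simplest supervised correction that a handful of annotated frames can support; richer corrections (e.g. an affine fit) would need more labels per subject.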
REPORT
Siegfried_Idiap-RR-23-2017/IDIAP
Supervised Gaze Bias Correction for Gaze Coding in Interactions
Siegfried, Remy
Odobez, Jean-Marc
appearance model
attention
bias correction
eye tracking
Gaze
usability
EXTERNAL
https://publications.idiap.ch/attachments/reports/2017/Siegfried_Idiap-RR-23-2017.pdf
PUBLIC
Idiap-RR-23-2017
2017
Idiap
September 2017
Understanding the role of gaze in conversations and social interactions, or exploiting it for HRI applications, is an ongoing research subject. In these contexts, vision-based eye trackers are preferred as they are non-invasive and allow people to behave more naturally. In particular, appearance-based methods (ABM) are very promising, as they can perform online gaze estimation and have the potential to be head pose and person invariant, accommodating more situations, including user mobility and the resulting low-resolution images. However, they may also suffer from a lack of robustness when several of these challenges are jointly present. In this work, we address gaze coding in human-human interactions and present a simple method, based on a few manually annotated frames, that greatly reduces the error of a head pose invariant ABM method, as shown on a dataset of 6 interactions.