Idiap Research Institute
Extracting and Locating Temporal Motifs in Video Scenes Using a Hierarchical Non Parametric Bayesian Model
Type of publication: Conference paper
Citation: Emonet_CVPR_2011
Publication status: Accepted
Booktitle: IEEE Conference on Computer Vision and Pattern Recognition
Year: 2011
Month: June
Abstract: In this paper, we present an unsupervised method for mining activities in videos. From unlabeled video sequences of a scene, our method can automatically recover the recurrent temporal activity patterns (or motifs) and when they occur. Using non-parametric Bayesian methods, we automatically find both the underlying number of motifs and the number of motif occurrences in each document. The model's robustness is first validated on synthetic data. It is then applied to a large set of video data from state-of-the-art papers. We show that it can effectively recover temporal activities that are semantically meaningful to humans and carry strong temporal information. The model is also used for prediction, where it is shown to be as effective as other approaches. Although illustrated on video sequences, this model can be directly applied to various kinds of time series in which multiple activities occur simultaneously.
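The abstract's key point is that the number of motifs is not fixed in advance but inferred from the data. A minimal sketch of the idea behind such non-parametric priors is the Chinese Restaurant Process, in which the number of clusters grows with the data rather than being specified up front. The sketch below is illustrative only (it draws a partition from the prior; it is not the paper's hierarchical model, and the function name and parameters are our own):

```python
import random

def crp_partition(n_items, alpha, rng=random.Random(0)):
    """Sample a partition of n_items from a Chinese Restaurant Process.

    Each new item joins an existing cluster with probability proportional
    to that cluster's size, or starts a new cluster with probability
    proportional to alpha. The number of clusters is thus unbounded and
    determined by the data/draw, not fixed in advance.
    """
    counts = []       # counts[k] = number of items in cluster k
    assignments = []  # cluster index chosen for each item
    for _ in range(n_items):
        # Unnormalised probabilities: existing clusters, then a new one.
        weights = counts + [alpha]
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(counts):  # open a new cluster
            counts.append(0)
        counts[k] += 1
        assignments.append(k)
    return assignments, counts

assignments, counts = crp_partition(500, alpha=2.0)
print(len(counts))  # number of clusters in this draw (not fixed a priori)
```

Larger `alpha` tends to produce more clusters; in a full model like the paper's, this prior is combined with a likelihood so that the posterior number of motifs adapts to the observed videos.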
Projects: Idiap, SNSF-MULTI, VANAHEIM
Authors: Emonet, Rémi; Varadarajan, Jagannadan; Odobez, Jean-Marc
Attachments: Emonet_CVPR_2011.pdf (PDF of the published paper)