CONF
nscaringella:ismir:2008/IDIAP
Timbre and Rhythmic TRAP-TANDEM features for music information retrieval
Scaringella, Nicolas
EXTERNAL
https://publications.idiap.ch/attachments/papers/2008/scaringella-ismir-2008.pdf
PUBLIC
https://publications.idiap.ch/index.php/publications/showcite/nscaringella:rr08-46
"Int. Conf. on Music Information Retrieval (ISMIR)"
2008
The enormous growth of digital music databases has led to a comparable growth in the need for methods that help users organize and access such information. One area in particular that has seen much recent research activity is the use of automated techniques to describe audio content and to allow for its identification, browsing and retrieval. Conventional approaches to music content description rely on features characterizing the shape of the signal spectrum in relatively short-term frames. In the context of Automatic Speech Recognition (ASR), Hermansky \cite{Hermansky_1} described an interesting alternative to short-term spectrum features, the TRAP-TANDEM approach, which uses long-term band-limited features trained in a supervised fashion. We adapt this idea to the specific case of music signals and propose a generic system for the description of temporal patterns. The same system with different settings is able to extract features describing either timbre or rhythmic content. The quality of the generated features is demonstrated in a set of music retrieval experiments and compared to other state-of-the-art models.
REPORT
nscaringella:rr08-46/IDIAP
Timbre and Rhythmic TRAP-TANDEM features for music information retrieval
Scaringella, Nicolas
EXTERNAL
https://publications.idiap.ch/attachments/reports/2008/scaringella-idiap-rr-08-46.pdf
PUBLIC
Idiap-RR-46-2008
2008
IDIAP
To appear in ISMIR 2008
The enormous growth of digital music databases has led to a comparable growth in the need for methods that help users organize and access such information. One area in particular that has seen much recent research activity is the use of automated techniques to describe audio content and to allow for its identification, browsing and retrieval. Conventional approaches to music content description rely on features characterizing the shape of the signal spectrum in relatively short-term frames. In the context of Automatic Speech Recognition (ASR), Hermansky \cite{Hermansky_1} described an interesting alternative to short-term spectrum features, the TRAP-TANDEM approach, which uses long-term band-limited features trained in a supervised fashion. We adapt this idea to the specific case of music signals and propose a generic system for the description of temporal patterns. The same system with different settings is able to extract features describing either timbre or rhythmic content. The quality of the generated features is demonstrated in a set of music retrieval experiments and compared to other state-of-the-art models.