ARTICLE
cemgil-kappen-barber-2004/IDIAP
A Generative Model for Music Transcription
Cemgil, A. T.
Kappen, B.
Barber, David
EXTERNAL
https://publications.idiap.ch/attachments/papers/2005/pianoroll_tsap_final.pdf
PUBLIC
Related documents: https://publications.idiap.ch/index.php/publications/showcite/barber:rr05-89
IEEE Transactions on Speech and Audio Processing
2004
Accepted for publication
In this paper, we present a graphical model for polyphonic music transcription. Our model, formulated as a Dynamical Bayesian Network, embodies a transparent and computationally tractable approach to this acoustic analysis problem. An advantage of our approach is that it places emphasis on explicitly modelling the sound generation procedure. It provides a clear framework in which high-level (cognitive) prior information on music structure can be coupled with low-level (acoustic, physical) information in a principled manner to perform the analysis. The model is a special case of the generally intractable switching Kalman filter model. Where possible, we derive exact polynomial-time inference procedures, and otherwise efficient approximations. We argue that our generative-model-based approach is computationally feasible for many music applications and is readily extensible to more general auditory scene analysis scenarios.
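The abstract describes the model as a special case of the generally intractable switching Kalman filter. As a point of reference, a minimal sketch of the generic switching state-space (switching Kalman filter) generative equations is given below; the specific switch variables, transition matrices and observation model used in the paper are not reproduced here, so this should be read as an illustrative assumption rather than the paper's exact formulation.

% Generic switching Kalman filter: a discrete switch s_t modulates a linear-Gaussian state-space model.
\begin{align}
  s_t &\sim p(s_t \mid s_{t-1}) && \text{discrete switch state (e.g.\ note on/off indicators)}\\
  x_t &= A(s_t)\, x_{t-1} + v_t, \quad v_t \sim \mathcal{N}\bigl(0, Q(s_t)\bigr) && \text{continuous latent sound state}\\
  y_t &= C\, x_t + w_t, \quad w_t \sim \mathcal{N}(0, R) && \text{observed audio signal}
\end{align}

Exact posterior inference over $(s_{1:T}, x_{1:T})$ in this general family scales exponentially with the sequence length $T$, which is why the abstract emphasises exact polynomial-time procedures where the special structure allows them, and efficient approximations elsewhere.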
REPORT
barber:rr05-89/IDIAP
A Generative Model for Music Transcription
Cemgil, A. T.
Kappen, B.
Barber, David
EXTERNAL
https://publications.idiap.ch/attachments/reports/2005/barber-idiap-rr-05-89.pdf
PUBLIC
Idiap-RR-89-2005
2005
IDIAP
Accepted to IEEE Transactions on Speech and Audio Processing
In this paper, we present a graphical model for polyphonic music transcription. Our model, formulated as a Dynamical Bayesian Network, embodies a transparent and computationally tractable approach to this acoustic analysis problem. An advantage of our approach is that it places emphasis on explicitly modelling the sound generation procedure. It provides a clear framework in which high-level (cognitive) prior information on music structure can be coupled with low-level (acoustic, physical) information in a principled manner to perform the analysis. The model is a special case of the generally intractable switching Kalman filter model. Where possible, we derive exact polynomial-time inference procedures, and otherwise efficient approximations. We argue that our generative-model-based approach is computationally feasible for many music applications and is readily extensible to more general auditory scene analysis scenarios.