A Bayesian dynamic model is developed for complex sequential data, with a focus on audio signals from music. The music is represented as a sequence of discrete observations, and the sequence is modeled using a hidden Markov model (HMM) with time-evolving parameters. The model encodes the belief that temporally proximate observations are more likely to be drawn from HMMs with similar parameters, while also allowing for "innovation" associated with abrupt changes in the musical texture. Segmentation of a given musical piece is obtained via model inference, and the results are compared with those of other models and with a conventional music-theoretic analysis. ©2009 IEEE.
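The generative idea described above can be illustrated with a minimal sketch: HMM parameters are carried forward in time, and with a small "innovation" probability they are redrawn from a Dirichlet prior, producing an abrupt change in texture. This is an illustrative toy version, not the paper's inference procedure; the names and the value of `p_innovate` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_symbols, T = 3, 8, 200
p_innovate = 0.02  # hypothetical probability of an abrupt texture change


def draw_hmm(rng, n_states, n_symbols):
    """Draw HMM parameters (transition and emission rows) from Dirichlet priors."""
    A = rng.dirichlet(np.ones(n_states), size=n_states)   # transition matrix
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)  # emission matrix
    return A, B


A, B = draw_hmm(rng, n_states, n_symbols)
state = rng.integers(n_states)
obs, change_points = [], []

for t in range(T):
    if rng.random() < p_innovate:            # "innovation": redraw all parameters
        A, B = draw_hmm(rng, n_states, n_symbols)
        change_points.append(t)
    state = rng.choice(n_states, p=A[state])  # otherwise parameters persist
    obs.append(int(rng.choice(n_symbols, p=B[state])))

print(len(obs), change_points)
```

Segment boundaries in the real model would be inferred from the observations alone; here `change_points` simply records where the toy generator redrew its parameters.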
|Original language||English (US)|
|Title of host publication||ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings|
|Number of pages||4|
|State||Published - Sep 23 2009|