Human perception of musical structure is thought to depend on the generation of hierarchies, which is inherently related to the actual organisation of sounds in music. Musical structures are indeed best retained by listeners when they form hierarchical patterns, with consequent implications for the appreciation of music and its performance. The automatic detection of musical structure in audio recordings is one of the most challenging problems in the field of music information retrieval, since even human experts tend to disagree on the structural decomposition of a piece of music. However, most current music segmentation algorithms in the literature can only produce flat segmentations, meaning that they cannot segment music at different levels so as to reveal its hierarchical structure. We propose a novel methodology for the hierarchical analysis of music structure that is based on graph theory and multi-resolution community detection. This unsupervised method can perform both boundary detection and structural grouping, without requiring constraints that would limit the resulting segmentation. To evaluate our approach, we designed an experiment that allowed us to compare its segmentation performance with that of the current state-of-the-art algorithms for hierarchical segmentation. Our results indicate that the proposed methodology achieves state-of-the-art performance on a well-known benchmark dataset, thus providing a deeper analysis of musical structure.
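The core idea of multi-resolution community detection can be illustrated with a minimal sketch (this is not the authors' implementation; the toy graph, frame indices, and similarity edges are invented for illustration). Frames of an audio recording are modelled as nodes of a graph whose edges encode temporal adjacency and acoustic similarity; sweeping the resolution parameter of a modularity-based community detector then yields coarser or finer partitions, i.e. different levels of a structural hierarchy:

```python
# Hedged sketch of multi-resolution community detection for music
# segmentation, using networkx's modularity-based detector.
# The 12-frame "AABA-like" graph below is a synthetic placeholder.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_nodes_from(range(12))
# Temporal adjacency: consecutive frames are connected.
G.add_edges_from((i, i + 1) for i in range(11))
# Acoustic similarity: repeated sections share edges.
G.add_edges_from([(0, 3), (1, 4), (2, 5),   # first A ~ second A
                  (6, 8), (7, 9),           # B internal similarity
                  (10, 0), (11, 1)])        # final A ~ first A

# Low resolution values favour few, coarse communities (high-level
# sections); higher values favour many, fine communities (sub-sections).
for res in (0.5, 1.0, 2.0):
    communities = greedy_modularity_communities(G, resolution=res)
    segments = [sorted(c) for c in communities]
    print(f"resolution={res}: {len(segments)} segments -> {segments}")
```

In a full pipeline, community boundaries on the time axis would serve as segment boundaries, while community membership would provide the structural grouping of segments; both tasks fall out of the same partition, which matches the two tasks named in the abstract.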
De Berardinis, J., Vamvakaris, M., Cangelosi, A., & Coutinho, E. (2020). Unveiling the Hierarchical Structure of Music by Multi-Resolution Community Detection. Transactions of the International Society for Music Information Retrieval, 3(1), 82–97. https://doi.org/10.5334/tismir.41