Unsupervised analysis and generation of audio percussion sequences

Citations: 5 · Readers (Mendeley): 7
Abstract

A system is presented that learns the structure of an audio recording of a rhythmical percussion fragment in an unsupervised manner and synthesizes musical variations from it. The procedure consists of 1) segmentation, 2) symbolization (feature extraction, clustering, sequence structure analysis, temporal alignment), and 3) synthesis. The symbolization step yields a sequence of event classes. In parallel, multiple representations are maintained that cluster the events into fewer or more classes. Based on the most regular clustering level, a tempo estimation procedure is used to preserve the metrical structure in the generated sequence. The final synthesis is performed with variable-length Markov chains, recombining audio material derived from the sample itself. Representations with different numbers of classes are used to trade off the statistical significance (shorter context, coarser clustering) against the specificity (longer context, finer clustering) of the generated sequence. For a broad variety of musical styles, the musical characteristics of the original are preserved, while considerable variability is introduced in the generated sequence. © 2011 Springer-Verlag Berlin Heidelberg.
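To make the generative step concrete, the following is a minimal sketch of a variable-length Markov chain over event-class symbols, assuming the segmentation and clustering stages have already produced a symbol sequence. The class names ("kick", "snare", "hat"), the `max_order` parameter, and the longest-context back-off strategy are illustrative assumptions, not the authors' implementation, which additionally balances context length against clustering refinement across several representations.

```python
# Sketch only: generate a new symbol sequence from a variable-length
# Markov chain trained on an existing event-class sequence.
import random
from collections import defaultdict, Counter


def build_vlmc(symbols, max_order=3):
    """Count next-symbol frequencies for every context up to max_order."""
    model = defaultdict(Counter)
    for i in range(len(symbols)):
        for order in range(1, max_order + 1):
            if i - order < 0:
                break
            context = tuple(symbols[i - order:i])
            model[context][symbols[i]] += 1
    return model


def generate(model, seed, length, max_order=3):
    """Extend the seed by backing off from the longest matching context."""
    out = list(seed)
    for _ in range(length):
        next_sym = None
        for order in range(max_order, 0, -1):
            context = tuple(out[-order:])
            if context in model:
                choices, weights = zip(*model[context].items())
                next_sym = random.choices(choices, weights=weights)[0]
                break
        if next_sym is None:  # no known context: fall back to a seen symbol
            next_sym = random.choice(out)
        out.append(next_sym)
    return out


# Hypothetical event-class sequence obtained from segmentation + clustering.
events = ["kick", "hat", "snare", "hat", "kick", "hat", "snare", "hat",
          "kick", "kick", "snare", "hat"]
model = build_vlmc(events)
print(generate(model, seed=events[:3], length=16))
```

In the actual system, each generated symbol would then be rendered by placing the corresponding audio segment from the original recording on the grid given by the estimated tempo.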

Citation (APA)

Marchini, M., & Purwins, H. (2011). Unsupervised analysis and generation of audio percussion sequences. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6684 LNCS, pp. 205–218). https://doi.org/10.1007/978-3-642-23126-1_14
