Pitch gestures in generative modeling of music

Abstract

Generative models of music need performance and gesture additions, i.e. the inclusion of subtle temporal and dynamic alterations and of gestures, in order to render the music musical. While much research on music generation is grounded in music theory, the work presented here is based on temporal perception, which is divided into three parts: the immediate (subchunk), short-term memory (chunk), and the superchunk. A review of the relevant temporal perception literature yields the performance elements, related to chunk memory, that should be added to the metrical generative model. In particular, pitch gestures are modeled as rising, falling, or as arches with positive or negative peaks. © 2011 Springer-Verlag Berlin Heidelberg.
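
To illustrate the last point, the sketch below shows one way such gestures could be rendered as pitch-offset contours applied to a chunk of notes. It is a minimal sketch only: the function name, the linear and parabolic parameterizations, and the semitone depth are assumptions made for illustration and are not taken from the paper.

    import numpy as np

    def pitch_gesture(length, shape="rising", depth=2.0):
        # Return a pitch-offset contour (in semitones) over `length` notes.
        # Assumption: linear ramps for rising/falling gestures, parabolic
        # curves for the positive- and negative-peak arches.
        t = np.linspace(0.0, 1.0, length)
        if shape == "rising":
            contour = t                           # monotonically increasing
        elif shape == "falling":
            contour = 1.0 - t                     # monotonically decreasing
        elif shape == "arch_positive":
            contour = 1.0 - (2.0 * t - 1.0) ** 2  # peak in the middle
        elif shape == "arch_negative":
            contour = (2.0 * t - 1.0) ** 2 - 1.0  # trough in the middle
        else:
            raise ValueError("unknown gesture shape: " + shape)
        return depth * contour

    # Example: apply a positive-arch gesture to a hypothetical chunk of MIDI pitches.
    chunk = np.array([60, 62, 64, 65, 67, 65, 64, 62])
    gestured = chunk + np.round(pitch_gesture(len(chunk), "arch_positive"))
    print(gestured)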

Citation (APA)

Jensen, K. (2011). Pitch gestures in generative modeling of music. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6684 LNCS, pp. 51–59). https://doi.org/10.1007/978-3-642-23126-1_4
