This paper describes the creative and technical processes behind earGram, an application created with Pure Data for real-time concatenative sound synthesis. The system encompasses four generative music strategies that automatically rearrange and explore a database of descriptor-analyzed sound snippets (the corpus), ordering units by rules other than their original temporal sequence to produce musically coherent output. Of note are the system's machine-learning capabilities and its visualization strategies, which aid decision-making during performance by revealing musical patterns and the temporal organization of the corpus. © 2013 Springer-Verlag.
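The core idea the abstract describes — reducing each sound snippet to a descriptor vector and choosing playback order by descriptor similarity rather than by original temporal position — can be illustrated with a minimal sketch. The greedy nearest-neighbour walk, the unit names, and the two-dimensional descriptor space below are illustrative assumptions, not earGram's actual implementation.

```python
import math

# Hypothetical corpus: unit id -> descriptor vector
# (e.g. normalized spectral centroid, loudness); values are made up.
CORPUS = {
    "u0": (0.10, 0.80),
    "u1": (0.90, 0.20),
    "u2": (0.15, 0.75),
    "u3": (0.85, 0.25),
}

def distance(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_sequence(corpus, start, length):
    """Greedy walk: always jump to the descriptor-nearest unused unit,
    ignoring the units' original temporal order."""
    remaining = dict(corpus)
    order = [start]
    current = remaining.pop(start)
    for _ in range(length - 1):
        if not remaining:
            break
        nxt = min(remaining, key=lambda u: distance(remaining[u], current))
        order.append(nxt)
        current = remaining.pop(nxt)
    return order

print(select_sequence(CORPUS, "u0", 4))  # → ['u0', 'u2', 'u3', 'u1']
```

Starting from `u0`, the walk visits its timbral neighbour `u2` before crossing to the `u3`/`u1` cluster, so the output order is driven entirely by descriptor proximity. A real system would extract descriptors from audio and typically add concatenation-cost constraints on top of this target-cost selection.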
CITATION STYLE
Bernardes, G., Guedes, C., & Pennycook, B. (2013). EarGram: An application for interactive exploration of concatenative sound synthesis in pure data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7900 LNCS, pp. 110–129). https://doi.org/10.1007/978-3-642-41248-6_7