EarGram: An application for interactive exploration of concatenative sound synthesis in Pure Data

Abstract

This paper describes the creative and technical processes behind earGram, an application created with Pure Data for real-time concatenative sound synthesis. The system encompasses four generative music strategies that automatically rearrange and explore a database of descriptor-analyzed sound snippets (the corpus), recombining them by rules other than their original temporal order into musically coherent outputs. Of note are the system's machine-learning capabilities as well as its visualization strategies, which constitute a valuable aid for decision-making during performance by revealing musical patterns and temporal organizations of the corpus. © 2013 Springer-Verlag.
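The core operation the abstract describes — matching target descriptors against an analyzed corpus of sound snippets — can be sketched as follows. This is a minimal illustration only: the descriptor names, the two-dimensional feature vectors, and the nearest-neighbor matching rule are assumptions for the example, not earGram's actual analysis or selection method.

```python
# Minimal sketch of descriptor-based unit selection for
# concatenative synthesis. Each corpus unit is a short audio
# snippet summarized by a feature vector of descriptors
# (here, hypothetically: spectral centroid and loudness).

import math

def euclidean(a, b):
    """Distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_units(targets, corpus):
    """For each target descriptor vector, pick the nearest corpus unit.

    The selected units, played in sequence, form the synthesized
    output -- a reordering of the corpus driven by descriptors
    rather than the snippets' original temporal order.
    """
    sequence = []
    for t in targets:
        best = min(corpus, key=lambda unit: euclidean(t, unit["features"]))
        sequence.append(best["id"])
    return sequence

# Hypothetical corpus: snippet id plus (centroid, loudness) descriptors.
corpus = [
    {"id": "snip_a", "features": (0.2, 0.8)},
    {"id": "snip_b", "features": (0.9, 0.1)},
    {"id": "snip_c", "features": (0.5, 0.5)},
]

targets = [(0.1, 0.9), (0.85, 0.2), (0.5, 0.4)]
print(select_units(targets, corpus))  # ['snip_a', 'snip_b', 'snip_c']
```

A real-time system would additionally weight a concatenation cost (continuity between consecutive units) against this target cost; the sketch keeps only the target-matching step.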

Citation (APA)

Bernardes, G., Guedes, C., & Pennycook, B. (2013). EarGram: An application for interactive exploration of concatenative sound synthesis in pure data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7900 LNCS, pp. 110–129). https://doi.org/10.1007/978-3-642-41248-6_7
