Abstract
This paper outlines the creative and technical considerations behind earGram, an application built as a Pure Data patch for real-time concatenative sound synthesis. The system encompasses four generative strategies that automatically rearrange and explore a corpus of descriptor-analyzed sound snippets according to criteria other than their original temporal order, producing musically coherent outputs. Of note are the system’s machine-learning capabilities, which reveal musical patterns and temporal organizations, as well as several visualization tools that assist the user in making decisions during performance.
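To make the core idea of descriptor-driven recombination concrete, the sketch below shows a minimal, generic form of unit selection for concatenative synthesis: each corpus snippet is represented by a descriptor vector, and the snippet closest to a target vector is chosen for playback. This is an illustrative assumption in Python, not earGram's implementation (earGram is a Pure Data patch), and the descriptor names and values are hypothetical.

import numpy as np

def select_next_unit(corpus_descriptors: np.ndarray, target: np.ndarray) -> int:
    """Return the index of the corpus snippet whose descriptor vector
    (e.g., loudness, spectral centroid, pitch) is closest to the target."""
    distances = np.linalg.norm(corpus_descriptors - target, axis=1)
    return int(np.argmin(distances))

# Example: a corpus of five snippets, each described by three features.
corpus = np.array([
    [0.2, 1200.0, 60.0],
    [0.8,  800.0, 55.0],
    [0.5, 1500.0, 62.0],
    [0.1,  400.0, 48.0],
    [0.9, 2000.0, 70.0],
])
target = np.array([0.6, 1400.0, 61.0])
print(select_next_unit(corpus, target))  # index of the best-matching snippet

In practice, a generative strategy of the kind the abstract describes would supply a new target vector at each step (for instance, from a learned temporal model), so the corpus is traversed by descriptor similarity rather than by its original order.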
Citation
Bernardes, G., Guedes, C., & Pennycook, B. (2012). EarGram: An Application for Interactive Exploration of Large Databases of Audio Snippets for Creative Purposes. In Proceedings of the 9th International Symposium on Computer Music Modelling and Retrieval (CMMR) (pp. 265–277). London: Springer-Verlag.