Computer programs such as MUSIC V or CSOUND have produced a vast number of sound examples, in both the synthesis and the processing domains. Translating such algorithms into real-time environments such as MAX-MSP allows these digitally created sounds to be used effectively in performance, opening the way to interpretation, expressivity, and even improvisation and creativity. This particular bias of our project (from sound to gesture) raises new questions, such as the choice of strategies for gesture control and feedback, as well as the mapping of peripheral data to synthesis and processing parameters. These new controls require a learning process, and the trade-off between virtuosity and simplicity is an everyday challenge.
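To make the mapping problem concrete, here is a minimal Python sketch of one common strategy: mapping a two-dimensional gesture (e.g. a tablet or joystick position, normalized to 0.0-1.0) onto synthesis parameters. The parameter names, ranges, and the exponential-pitch/linear-brightness choice are illustrative assumptions, not the mapping described in the paper.

```python
def map_gesture(x, y):
    """Map normalized gesture coordinates (0.0-1.0 each) to two
    hypothetical synthesis parameters.

    x -> frequency, mapped exponentially so that equal gesture
         distances correspond to equal musical intervals.
    y -> modulation index (brightness), mapped linearly.
    """
    freq = 110.0 * (2.0 ** (x * 4.0))   # 110 Hz up to 1760 Hz (4 octaves)
    mod_index = y * 10.0                # 0.0 to 10.0
    return freq, mod_index

# Example: center of the gesture surface
freq, mod_index = map_gesture(0.5, 0.5)
```

The choice of mapping curve matters for playability: an exponential pitch mapping distributes octaves evenly across the gesture range, which tends to feel more natural to performers than a linear one.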
Citation
Arfib, D., & Kessous, L. (2002). Gestural control of sound synthesis and processing algorithms. In Lecture Notes in Computer Science (Vol. 2298, pp. 285–295). Springer. https://doi.org/10.1007/3-540-47873-6_30