Motion-driven concatenative synthesis of cloth sounds


Abstract

We present a practical data-driven method for automatically synthesizing plausible soundtracks for physics-based cloth animations running at graphics rates. Given a cloth animation, we analyze the deformations and use motion events to drive crumpling and friction sound models estimated from cloth measurements. We synthesize a low-quality sound signal, which is then used as a target signal for a concatenative sound synthesis (CSS) process. CSS selects a sequence of microsound units (very short segments) from a database of recorded cloth sounds that best match the synthesized target sound in a low-dimensional feature space after applying a hand-tuned warping function. The selected microsound units are concatenated to produce the final cloth sound with minimal filtering. Our approach avoids expensive physics-based synthesis of cloth sound, relying instead on cloth recordings and our motion-driven CSS approach for realism. We demonstrate its effectiveness on a variety of cloth animations involving various materials and character motions, including first-person virtual clothing with binaural sound.
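To illustrate the concatenative step described above, the following Python sketch selects, for each short frame of a low-quality target signal, the recorded microsound unit nearest in a low-dimensional feature space and overlap-adds the chosen units. This is not the authors' implementation: the frame length, the two-dimensional feature (log RMS energy and spectral centroid), the identity warping function, and the greedy per-frame nearest-neighbour selection are all simplifying assumptions made for illustration.

import numpy as np

def frame_signal(x, frame_len, hop):
    # Split a 1-D signal into overlapping frames ("microsound units").
    n = max(1, 1 + (len(x) - frame_len) // hop)
    pad = max(0, (n - 1) * hop + frame_len - len(x))
    x = np.pad(x, (0, pad))
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def features(frames, sr):
    # Toy 2-D feature per frame: log RMS energy and spectral centroid.
    eps = 1e-12
    rms = np.log(np.sqrt((frames ** 2).mean(axis=1)) + eps)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], 1.0 / sr)
    centroid = (spec * freqs).sum(axis=1) / (spec.sum(axis=1) + eps)
    return np.column_stack([rms, centroid])

def concatenative_synthesis(target, database, sr,
                            frame_len=512, hop=256, warp=lambda f: f):
    # For each target frame, pick the database unit nearest in feature
    # space (after warping the target features), then overlap-add the
    # chosen units with a Hann window as a crude crossfade.
    tgt_frames = frame_signal(target, frame_len, hop)
    db_frames = frame_signal(database, frame_len, hop)
    tgt_feat = warp(features(tgt_frames, sr))
    db_feat = features(db_frames, sr)
    window = np.hanning(frame_len)
    out = np.zeros((len(tgt_frames) - 1) * hop + frame_len)
    for i, f in enumerate(tgt_feat):
        # Greedy nearest-neighbour unit selection (no continuity cost).
        j = np.argmin(((db_feat - f) ** 2).sum(axis=1))
        out[i * hop : i * hop + frame_len] += window * db_frames[j]
    return out

In this sketch, `target` would be the low-quality synthesized signal and `database` a long recording of the same cloth material; a full system would also include a unit-to-unit continuity cost and the light filtering at concatenation boundaries that the abstract mentions.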

Cite

APA: An, S. S., James, D. L., & Marschner, S. (2012). Motion-driven concatenative synthesis of cloth sounds. ACM Transactions on Graphics, 31(4). https://doi.org/10.1145/2185520.2185598
