Mesostructures: Beyond Spectrogram Loss in Differentiable Time–Frequency Analysis

2 citations · 7 Mendeley readers

Abstract

Computer musicians refer to mesostructures as the intermediate levels of articulation between the microstructure of waveshapes and the macrostructure of musical forms. Examples of mesostructures include melody, arpeggios, syncopation, polyphonic grouping, and textural contrast. Despite their central role in musical expression, they have received limited attention in recent applications of deep learning to the analysis and synthesis of musical audio. Currently, autoencoders and neural audio synthesizers are trained and evaluated only at the scale of microstructure, i.e., local amplitude variations of up to roughly 100 ms. In this paper, the authors formulate and address the problem of mesostructural audio modeling via a composition of a differentiable arpeggiator and time–frequency scattering. They empirically demonstrate that time–frequency scattering serves as a differentiable model of similarity between synthesis parameters that govern mesostructure. By exposing the sensitivity of short-time spectral distances to time alignment, they motivate the need for a time-invariant and multiscale differentiable time–frequency model of similarity at the level of both local spectra and spectrotemporal modulations.
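To make the time-alignment argument concrete, here is a minimal, hypothetical sketch (not the authors' code): it compares a short-time spectrogram distance with a temporal scattering distance on two copies of the same percussive event, offset by 30 ms. It assumes NumPy, SciPy, and Kymatio's Scattering1D as a stand-in for the joint time–frequency scattering used in the paper; the qualitative gap between the two distances, not the exact values, is the point.

```python
import numpy as np
from scipy.signal import stft
from kymatio.numpy import Scattering1D

sr = 22050                                  # sample rate (Hz)
T = sr // 2                                 # half a second of audio
t = np.arange(T) / sr

# An exponentially decaying 1 kHz burst: a stand-in for one note onset.
x = np.exp(-40.0 * t) * np.sin(2 * np.pi * 1000.0 * t)
x = np.roll(x, int(0.200 * sr)).astype(np.float32)   # event at 200 ms
y = np.roll(x, int(0.030 * sr))                      # same event, 30 ms later

def spec_mag(sig):
    """Magnitude STFT with ~23 ms windows."""
    _, _, Z = stft(sig, fs=sr, nperseg=512)
    return np.abs(Z)

# Short-time spectral (L2) distance: large, even though x and y differ
# only by a time shift far below the mesostructural scale.
d_spec = np.linalg.norm(spec_mag(x) - spec_mag(y)) / np.linalg.norm(spec_mag(x))

# Temporal scattering with an averaging scale of 2^11 samples (~93 ms):
# approximately invariant to shifts much shorter than that scale.
scattering = Scattering1D(J=11, shape=T, Q=8)
d_scat = np.linalg.norm(scattering(x) - scattering(y)) / np.linalg.norm(scattering(x))

print(f"relative spectrogram distance: {d_spec:.3f}")   # large
print(f"relative scattering distance:  {d_scat:.3f}")   # much smaller
```

Because the scattering output is averaged over roughly 93 ms windows, a sub-window shift barely moves the coefficients, which is precisely the time-invariance property the abstract appeals to.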

Citation (APA)

Vahidi, C., Han, H., Wang, C., Lagrange, M., Fazekas, G., & Lostanlen, V. (2023). Mesostructures: Beyond Spectrogram Loss in Differentiable Time–Frequency Analysis. Journal of the Audio Engineering Society, 71(9), 577–585. https://doi.org/10.17743/jaes.2022.0103
