Symbolic Music Generation with Transformer-GANs


Abstract

Autoregressive models using Transformers have emerged as the dominant approach for music generation, with the goal of synthesizing minute-long compositions that exhibit large-scale musical structure. These models are commonly trained by minimizing the negative log-likelihood (NLL) of the observed sequence in an autoregressive manner. Unfortunately, the quality of samples from these models tends to degrade significantly for long sequences, a phenomenon attributed to exposure bias. Fortunately, we can detect these failures with classifiers trained to distinguish between real and sampled sequences, an observation that motivates our exploration of adversarial losses to complement the NLL objective. We use a pre-trained SpanBERT model for the discriminator of the GAN, which in our experiments helped with training stability. We use the Gumbel-Softmax trick to obtain a differentiable approximation of the sampling process, making discrete sequences amenable to optimization in GANs. In addition, we break the sequences into smaller chunks to ensure that we stay within a given memory budget. We demonstrate via human evaluations and a new discriminative metric that the music generated by our approach outperforms a baseline trained with likelihood maximization, the state-of-the-art Music Transformer, and other GANs used for sequence generation: 57% of listeners prefer music generated via our approach, while 43% prefer Music Transformer.
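
To make the training recipe concrete, the sketch below shows how a token-level NLL term can be combined with a sequence-level adversarial term using the straight-through Gumbel-Softmax relaxation. This is a minimal illustration rather than the authors' implementation: it assumes PyTorch, a generic autoregressive generator that returns per-step vocabulary logits, and a discriminator that scores relaxed one-hot sequences. The names generator, discriminator, lambda_adv, and tau are placeholders, and in the paper the adversarial term is computed on sampled chunks scored by a pre-trained SpanBERT discriminator rather than on teacher-forced outputs as done here for brevity.

import torch
import torch.nn.functional as F

def generator_training_step(generator, discriminator, real_tokens,
                            lambda_adv=1.0, tau=1.0):
    """One generator update: token-level NLL plus a sequence-level adversarial loss.

    Hypothetical interfaces: `generator(tokens)` returns logits of shape
    (batch, seq_len, vocab); `discriminator(soft_onehot)` returns one
    real/fake logit per sequence.
    """
    # 1) Standard autoregressive NLL with teacher forcing:
    #    predict token t from tokens < t.
    logits = generator(real_tokens[:, :-1])                  # (B, T-1, V)
    nll = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        real_tokens[:, 1:].reshape(-1),
    )

    # 2) Draw relaxed samples from the model's per-step distributions with the
    #    straight-through Gumbel-Softmax: hard one-hot vectors in the forward
    #    pass, gradients through the soft relaxation in the backward pass.
    soft_onehot = F.gumbel_softmax(logits, tau=tau, hard=True)  # (B, T-1, V)

    # 3) Non-saturating adversarial loss: push the discriminator to label the
    #    generator's (relaxed) samples as real.
    fake_scores = discriminator(soft_onehot)                  # (B,) logits
    adv = F.binary_cross_entropy_with_logits(
        fake_scores, torch.ones_like(fake_scores)
    )

    return nll + lambda_adv * adv

The key point is that F.gumbel_softmax with hard=True emits discrete one-hot vectors in the forward pass while letting gradients flow through the soft relaxation in the backward pass, which is what allows the discriminator's signal to reach the generator despite the discreteness of the token sequence; restricting the adversarial term to shorter chunks keeps the memory cost of this extra forward/backward pass bounded.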

References

Long Short-Term Memory
Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning
SpanBERT: Improving Pre-training by Representing and Predicting Spans


Citation (APA)

Muhamed, A., Li, L., Shi, X., Yaddanapudi, S., Chi, W., Jackson, D., … Smola, A. J. (2021). Symbolic Music Generation with Transformer-GANs. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 1, pp. 408–417). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i1.16117

