Transfer Learning for Audio Waveform to Guitar Chord Spectrograms Using the Convolution Neural Network

3 Citations · 10 Readers (Mendeley)

This article is free to access.

Abstract

Automatic chord recognition has traditionally been approached as a broad music-audition task whose desired output is a succession of time-aligned discrete chord symbols, such as GMaj and Asus2. Automatic music transcription is the process of converting a musical recording into a human-readable, interpretable representation; it remains a difficult task for polyphonic audio or once simplifying constraints are removed. A guitar presents a particular challenge because guitarists can play the same note at several positions on the fretboard. This study uses a CNN to generate guitar tablature: the constant-Q transform first converts the input audio file into short-time spectrograms, which the CNN model then analyses to identify the chord. The paper develops a method for extracting chord sequences and notes from audio recordings of solo guitar performances; for each interval in the supplied audio, the proposed approach outputs chord names and fretboard notes. The model described here was refined to an accuracy of 88.7%, a notable step forward in automatically tagging audio clips.
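The abstract's front end — converting raw audio into a constant-Q spectrogram whose bins align with musical semitones — can be sketched with plain NumPy. This is a minimal, naive illustration of the transform, not the paper's implementation: the `fmin` of 82.41 Hz (guitar low E), 12 bins per octave, and hop size are assumed defaults chosen for the example.

```python
import numpy as np

def cqt_magnitude(signal, sr, fmin=82.41, bins_per_octave=12, n_bins=48, hop=512):
    """Naive constant-Q transform: one windowed complex kernel per bin,
    with window length inversely proportional to frequency (constant Q),
    so every bin spans the same fraction of a semitone."""
    Q = 1.0 / (2 ** (1.0 / bins_per_octave) - 1)
    n_frames = 1 + (len(signal) - 1) // hop
    out = np.zeros((n_bins, n_frames))
    for k in range(n_bins):
        fk = fmin * 2 ** (k / bins_per_octave)      # geometrically spaced bin frequency
        n_k = int(round(Q * sr / fk))               # longer windows for lower notes
        t = np.arange(n_k)
        kernel = np.hanning(n_k) * np.exp(-2j * np.pi * fk * t / sr) / n_k
        for m in range(n_frames):
            frame = signal[m * hop : m * hop + n_k]
            out[k, m] = np.abs(np.dot(frame, kernel[: len(frame)]))
    return out

# Example: a pure 440 Hz tone (A4) should peak 29 semitones above low E (E2).
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
spec = cqt_magnitude(tone, sr)                      # shape: (n_bins, n_frames)
peak_bin = int(np.argmax(spec.mean(axis=1)))
```

Because the bins are log-spaced, a chord's component notes land on fixed bin offsets regardless of pitch, which is what makes this representation a convenient CNN input for chord recognition. A production system would use an optimized implementation such as `librosa.cqt` rather than this per-bin loop.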

Citation (APA)

Jadhav, Y., Patel, A., Jhaveri, R. H., & Raut, R. (2022). Transfer Learning for Audio Waveform to Guitar Chord Spectrograms Using the Convolution Neural Network. Mobile Information Systems, 2022. https://doi.org/10.1155/2022/8544765
