Isolated guitar transcription using a deep belief network

Abstract

Music transcription involves the transformation of an audio recording into common music notation, colloquially referred to as sheet music. Manually transcribing audio recordings is a difficult and time-consuming process, even for experienced musicians. In response, several algorithms have been proposed to automatically analyze and transcribe the notes sounding in an audio recording; however, these algorithms are often general-purpose, attempting to process any number of instruments producing any number of notes sounding simultaneously. This paper presents a polyphonic transcription algorithm that is constrained to processing the audio output of a single instrument, specifically an acoustic guitar. The transcription system consists of a novel note pitch estimation algorithm that uses a deep belief network and multi-label learning techniques to generate multiple pitch estimates for each analysis frame of the input audio signal. Using a compiled dataset of synthesized guitar recordings for evaluation, the algorithm described in this work yields an 11% increase in the f-measure of note transcriptions relative to Zhou et al.'s (2009) transcription algorithm. This paper demonstrates the effectiveness of deep, multi-label learning for the task of polyphonic transcription.

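The abstract's central idea, multi-label pitch estimation on individual analysis frames, can be illustrated with a minimal sketch. The network below is only a stand-in for the trained deep belief network described in the paper: the spectral feature dimension, hidden-layer size, candidate pitch range, and 0.5 decision threshold are all assumptions made for illustration, and the weights are random rather than learned through pre-training and fine-tuning. What it demonstrates is the multi-label output: each sigmoid output unit acts as an independent detector for one pitch, so thresholding the output layer yields zero or more simultaneous pitch estimates for each frame.

```python
import numpy as np

# Assumed dimensions: a magnitude-spectrum analysis frame as input and one
# sigmoid output unit per candidate guitar pitch (here, MIDI 40-76).
N_FEATURES = 1024               # spectral bins per analysis frame (assumed)
PITCHES = list(range(40, 77))   # roughly E2..E5, the guitar's pitch range (assumed)
HIDDEN = 256                    # size of one hidden layer (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MultiLabelPitchNet:
    """Feed-forward stand-in for the trained DBN: one hidden layer and a
    sigmoid output layer whose units act as independent per-pitch detectors."""

    def __init__(self, seed=0):
        # In the paper the weights come from DBN pre-training plus supervised
        # fine-tuning; random weights here only demonstrate the interface.
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.01, size=(N_FEATURES, HIDDEN))
        self.b1 = np.zeros(HIDDEN)
        self.W2 = rng.normal(scale=0.01, size=(HIDDEN, len(PITCHES)))
        self.b2 = np.zeros(len(PITCHES))

    def predict_frame(self, frame, threshold=0.5):
        """Return every pitch whose output activation exceeds the threshold,
        yielding zero or more simultaneous pitch estimates for the frame."""
        hidden = sigmoid(frame @ self.W1 + self.b1)
        activations = sigmoid(hidden @ self.W2 + self.b2)
        return [p for p, a in zip(PITCHES, activations) if a >= threshold]

# Usage: one spectral frame in, a (possibly polyphonic) set of pitches out.
net = MultiLabelPitchNet()
frame = np.abs(np.random.default_rng(1).normal(size=N_FEATURES))
print(net.predict_frame(frame))
```

A full transcription system would additionally need onset/offset handling to turn frame-level pitch estimates into notes; the sketch covers only the per-frame, multi-label estimation step highlighted in the abstract.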
Citation (APA)
Burlet, G., & Hindle, A. (2017). Isolated guitar transcription using a deep belief network. PeerJ Computer Science, 3, e109. https://doi.org/10.7717/peerj-cs.109
