MusicYOLO: A Vision-based Framework for Automatic Singing Transcription
Xianke Wang, Bowen Tian, Weiming Yang, Wei Xu, and Wenqing Cheng
Huazhong University of Science and Technology
Abstract—Automatic singing transcription (AST), which refers to inferring the onset, offset, and pitch of notes from singing audio, is of great significance in music information retrieval. Most AST models use convolutional neural networks to extract spectral features and predict onset and offset moments separately: frame-level probabilities are inferred first, and note-level transcription results are then obtained through post-processing. In this paper, a new AST framework called MusicYOLO is proposed that obtains note-level transcription results directly. Onset/offset detection is based on the object detection model YOLOX, and pitch labeling is completed by a spectrogram peak search. Compared with previous methods, MusicYOLO detects note objects rather than isolated onset/offset moments, greatly enhancing transcription performance. On the sight-singing vocal dataset (SSVD) established in this paper, MusicYOLO achieves an 84.60% transcription F1-score, the state of the art.
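To make the two-stage pipeline concrete, the following is a minimal Python sketch of the workflow the abstract describes: a log-frequency spectrogram, note-box detection, and pitch labeling by peak search. The spectrogram parameters are illustrative assumptions, and `detect_note_boxes` is a hypothetical stand-in (a crude energy threshold) for the trained YOLOX detector; it is not the paper's model.

```python
# Sketch of a MusicYOLO-style pipeline (illustrative assumptions throughout).
import numpy as np
import librosa

def detect_note_boxes(spectrogram, thresh=0.1):
    """Hypothetical stand-in for the trained YOLOX detector: groups
    consecutive frames whose energy exceeds a threshold into crude
    (onset_frame, offset_frame) note boxes. A real MusicYOLO system
    would run YOLOX on the spectrogram image instead."""
    energy = spectrogram.sum(axis=0)
    active = energy > thresh * energy.max()
    boxes, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i
        elif not on and start is not None:
            boxes.append((start, i))
            start = None
    if start is not None:
        boxes.append((start, len(active)))
    return boxes

def transcribe(audio_path, sr=16000, hop=160,
               fmin=librosa.note_to_hz("C2"),
               bins_per_octave=36, n_bins=36 * 6):
    y, sr = librosa.load(audio_path, sr=sr)
    # Constant-Q spectrogram: log-frequency axis makes pitch a bin index.
    C = np.abs(librosa.cqt(y, sr=sr, hop_length=hop, fmin=fmin,
                           n_bins=n_bins, bins_per_octave=bins_per_octave))
    notes = []
    for onset_f, offset_f in detect_note_boxes(C):
        # Pitch labeling by spectrogram peak search: average the magnitude
        # over the note's frames and take the strongest frequency bin.
        profile = C[:, onset_f:offset_f].mean(axis=1)
        peak_bin = int(np.argmax(profile))
        f0 = fmin * 2.0 ** (peak_bin / bins_per_octave)
        notes.append((onset_f * hop / sr,   # onset time (s)
                      offset_f * hop / sr,  # offset time (s)
                      librosa.hz_to_midi(f0)))
    return notes
```

In the actual framework, the detector stage would be YOLOX inference on the spectrogram image, with bounding-box time coordinates mapped back to frame indices before the peak search assigns each note its pitch.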
Keywords: Feature extraction; Labeling; Event detection; Spectrogram; Estimation; Deep learning; Object detection; AST; note object detection; spectrogram peak search
Funding: National Key Research and Development Program of China, 2021YFC3340803