Detailed Information

Cited 21 times in Web of Science. Cited 28 times in Scopus.

DBTMPE: Deep Bidirectional Transformers-Based Masked Predictive Encoder Approach for Music Genre Classification

Full metadata record
DC Field: Value
dc.contributor.author: Qiu, Lvyang
dc.contributor.author: Li, Shuyu
dc.contributor.author: Sung, Yunsick
dc.date.accessioned: 2023-04-27T18:40:43Z
dc.date.available: 2023-04-27T18:40:43Z
dc.date.issued: 2021-03
dc.identifier.issn: 2227-7390
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/5293
dc.description.abstract: Music is a type of time-series data. As data volumes grow, building robust music genre classification systems from massive amounts of music data becomes a challenge. Robust systems require large amounts of labeled music data, which necessitates time- and labor-intensive data-labeling efforts and expert knowledge. This paper proposes a musical instrument digital interface (MIDI) preprocessing method, Pitch to Vector (Pitch2vec), and a deep bidirectional transformers-based masked predictive encoder (MPE) method for music genre classification. MIDI files serve as input and are converted by Pitch2vec into vector sequences before being fed to the MPE. Through unsupervised learning, the MPE, based on deep bidirectional transformers, automatically extracts bidirectional representations that capture musicological insight. In contrast to other deep-learning models, such as recurrent neural network (RNN)-based models, the MPE enables parallelization over time steps, leading to faster training. To evaluate the performance of the proposed method, experiments were conducted on the Lakh MIDI dataset. During MPE training, approximately 400,000 MIDI segments were used, and the recovery accuracy rate reached 97%. In the music genre classification task, the accuracy rate and other indicators of the proposed method exceeded 94%. The experimental results indicate that the proposed method improves classification performance compared with state-of-the-art models.
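The exact Pitch2vec encoding and masking scheme are not detailed in this record; as an illustration of the masked-prediction pretraining objective the abstract describes, here is a minimal sketch assuming one-hot pitch vectors and BERT-style random masking (the function names, the reserved mask index, and the 15% mask ratio are assumptions, not the paper's specification):

```python
import random

PITCH_RANGE = 128   # MIDI note numbers span 0-127
MASK_ID = PITCH_RANGE  # extra index reserved for the mask token (assumption)

def pitch2vec(pitches):
    """One-hot encode a MIDI pitch sequence (illustrative stand-in for Pitch2vec)."""
    vectors = []
    for p in pitches:
        v = [0.0] * (PITCH_RANGE + 1)
        v[p] = 1.0
        vectors.append(v)
    return vectors

def mask_sequence(pitches, mask_ratio=0.15, seed=0):
    """BERT-style masking: replace a fraction of tokens with MASK_ID.
    The encoder would be trained to recover the originals at the masked
    positions, which is the MPE's unsupervised objective."""
    rng = random.Random(seed)
    masked = list(pitches)
    targets = {}  # position -> original pitch the model must predict
    for i in range(len(pitches)):
        if rng.random() < mask_ratio:
            targets[i] = pitches[i]
            masked[i] = MASK_ID
    return masked, targets
```

In this sketch the transformer encoder itself is omitted; the point is only the data flow the abstract describes: pitches become vectors, a subset is masked, and recovery of the masked pitches supplies the training signal without genre labels.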
dc.format.extent: 17
dc.language: English
dc.language.iso: ENG
dc.publisher: MDPI
dc.title: DBTMPE: Deep Bidirectional Transformers-Based Masked Predictive Encoder Approach for Music Genre Classification
dc.type: Article
dc.publisher.location: Switzerland
dc.identifier.doi: 10.3390/math9050530
dc.identifier.scopusid: 2-s2.0-85102577434
dc.identifier.wosid: 000628363600001
dc.identifier.bibliographicCitation: MATHEMATICS, v.9, no.5, pp. 1-17
dc.citation.title: MATHEMATICS
dc.citation.volume: 9
dc.citation.number: 5
dc.citation.startPage: 1
dc.citation.endPage: 17
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Mathematics
dc.relation.journalWebOfScienceCategory: Mathematics
dc.subject.keywordAuthor: music genre classification
dc.subject.keywordAuthor: MIDI
dc.subject.keywordAuthor: transformer model
dc.subject.keywordAuthor: unsupervised learning
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Advanced Convergence Engineering > Department of Computer Science and Artificial Intelligence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Sung, Yunsick
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
