Comparative Performance Evaluation of Transformer GPT and DCGAN Models for Monophonic Music Generation Using ABC Notation


Date Published: 7 January 2026

Contributors

Milind Nemade

University of Mumbai
Author

Satheesh Babu

Lincoln University College
Author

Keywords

Monophonic Music Generation, Transformer GPT, DCGAN, ABC Notation, Symbolic Music

Proceeding

Track

Engineering, Sciences, Mathematics & Computations

License

Copyright (c) 2026 Sustainable Global Societies Initiative


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Abstract

This paper presents a comparative performance evaluation of a Transformer-based GPT model and a Deep Convolutional Generative Adversarial Network (DCGAN) for monophonic music generation using symbolic ABC notation. The study analyzes structural accuracy, tonal diversity, repetition control, and musical coherence. Experimental results demonstrate that the Transformer GPT model significantly outperforms the DCGAN in melodic consistency, transition learning, and resistance to mode collapse. The evaluation combines objective metrics (repetition score, length similarity, pitch histogram distribution, and transition matrices) with qualitative musical observations.
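To illustrate the kind of objective metrics the abstract names, the following Python snippet is a minimal, hypothetical sketch (not the paper's actual implementation) of a pitch histogram, a first-order pitch transition matrix, and a simple repetition score computed from a monophonic ABC melody. It assumes a deliberately simplified ABC parse that keeps only the pitch letters A-G/a-g and ignores accidentals, octave marks, durations, bar lines, and header fields; all function names are illustrative.

```python
import re
from collections import Counter

# Simplified note extraction: keep only pitch letters A-G / a-g,
# ignoring accidentals, octave marks, durations, bar lines and headers.
NOTE_RE = re.compile(r"[A-Ga-g]")


def extract_pitches(abc_melody: str) -> list:
    """Return the sequence of pitch letters from an ABC melody body."""
    return NOTE_RE.findall(abc_melody)


def pitch_histogram(pitches: list) -> dict:
    """Normalized pitch frequency distribution."""
    counts = Counter(pitches)
    total = sum(counts.values()) or 1
    return {p: c / total for p, c in sorted(counts.items())}


def transition_matrix(pitches: list) -> dict:
    """Row-normalized first-order pitch transition probabilities."""
    pair_counts = Counter(zip(pitches, pitches[1:]))
    row_totals = Counter()
    for (src, _), n in pair_counts.items():
        row_totals[src] += n
    return {(src, dst): n / row_totals[src]
            for (src, dst), n in pair_counts.items()}


def repetition_score(pitches: list, n: int = 4) -> float:
    """Fraction of length-n pitch n-grams that duplicate an earlier n-gram."""
    ngrams = [tuple(pitches[i:i + n]) for i in range(len(pitches) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c - 1 for c in counts.values())
    return repeated / len(ngrams)


if __name__ == "__main__":
    melody = "|: G A B c | d2 e2 | d c B A | G4 :|"
    notes = extract_pitches(melody)
    print(pitch_histogram(notes))
    print(transition_matrix(notes))
    print(repetition_score(notes, n=2))
```

Under these assumptions, more diverse generations flatten the pitch histogram, learned melodic structure shows up as concentrated rows in the transition matrix, and mode collapse inflates the repetition score; length similarity would additionally require comparing generated sequence lengths against the training distribution.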



How to Cite

Nemade, M., & Babu, S. (2026). Comparative Performance Evaluation of Transformer GPT and DCGAN Models for Monophonic Music Generation Using ABC Notation. Sustainable Global Societies Initiative, 1(1). https://vectmag.com/sgsi/paper/view/116