Comparative Performance Evaluation of Transformer GPT and DCGAN Models for Monophonic Music Generation Using ABC Notation
Contributors
Milind Nemade
Satheesh Babu
Track
Engineering, Sciences, Mathematics & Computations
License
Copyright (c) 2026 Sustainable Global Societies Initiative

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
This paper presents a comparative performance evaluation of Transformer-based GPT and Deep Convolutional Generative Adversarial Network (DCGAN) models for monophonic music generation using symbolic ABC notation. The study analyzes structural accuracy, tonal diversity, repetition control, and musical coherence. Experimental results demonstrate that the Transformer GPT model significantly outperforms DCGAN in melodic consistency, transition learning, and resistance to mode collapse. Objective metrics such as repetition score, length similarity, pitch histogram distribution, and transition matrices are used alongside qualitative musical observations.
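The abstract names pitch histogram distribution and transition matrices among the objective metrics but does not give their definitions. As a minimal sketch of one plausible reading, the snippet below computes a normalized 12-bin pitch-class histogram and a row-normalized first-order pitch-class transition matrix from a monophonic melody encoded as MIDI pitch numbers; the function names and the choice of pitch classes (rather than absolute pitches) are assumptions, not the paper's exact formulation.

```python
from collections import Counter

def pitch_histogram(pitches):
    """Normalized distribution over the 12 pitch classes (assumed metric form)."""
    counts = Counter(p % 12 for p in pitches)
    total = len(pitches)
    return [counts.get(pc, 0) / total for pc in range(12)]

def transition_matrix(pitches):
    """Row-normalized 12x12 first-order pitch-class transition matrix."""
    mat = [[0.0] * 12 for _ in range(12)]
    for a, b in zip(pitches, pitches[1:]):
        mat[a % 12][b % 12] += 1.0
    for row in mat:
        s = sum(row)
        if s:
            for j in range(12):
                row[j] /= s
    return mat

# Example: C-D-E-C melody as MIDI pitches
melody = [60, 62, 64, 60]
hist = pitch_histogram(melody)
trans = transition_matrix(melody)
```

Comparing such histograms and transition matrices between generated and training melodies (e.g. via a distance measure) gives a model-agnostic way to quantify tonal diversity and transition learning for both architectures.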