Hyperspherical HyperFace-GAN: Generating Discriminative Synthetic Face Data via Multi-task and Angular Softmax Learning
Contributors
Dr Dattatreya P Mankame
Amiya Bhaumik
Hemalatha
Track
Engineering, Sciences, Mathematics & Computations
License
Copyright (c) 2026 Sustainable Global Societies Initiative

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Producing high-quality synthetic facial data is increasingly important for the reliability and privacy of face recognition systems. This paper introduces a novel approach that combines hyperspherical embedding with HyperFace-based multi-task learning to make synthetic images both more realistic and more discriminative. The method uses HyperFace to simultaneously estimate key facial attributes, including gender, pose, and landmarks, which yields richer and more relevant feature representations. These features are then mapped onto a hyperspherical space using an angular softmax loss, which accentuates differences between individual identities. The resulting hyperspherical embeddings are used to train a generative adversarial network (GAN) that produces facial images preserving a given identity while exhibiting diverse intra-class variations. Evaluated on well-known benchmark datasets, including LFW, CelebA-HQ, and VGGFace2, the approach outperforms existing methods such as StyleGAN2 and FaceID-GAN in face verification accuracy, F1-score, and the discriminability of the generated embeddings. These findings show that combining hyperspherical geometry with multi-task learning produces highly realistic and identity-preserving synthetic face data, which strengthens face recognition systems.
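To make the hyperspherical mapping concrete, the following is a minimal NumPy sketch of an angular softmax loss in the spirit the abstract describes: embeddings and class weights are projected onto the unit hypersphere, a multiplicative angular margin `m` is applied to the target-class angle, and standard softmax cross-entropy is taken over the margin-adjusted cosine logits. This is an illustrative simplification, not the paper's implementation: the function name, the `m` and `scale` parameters, and the use of a plain `cos(m*theta)` (omitting the monotonic extension used in full A-Softmax formulations) are all assumptions.

```python
import numpy as np

def angular_softmax_loss(features, weights, labels, m=4, scale=30.0):
    """Simplified angular softmax loss (hypothetical sketch).

    features: (N, D) raw embeddings; weights: (D, C) class weight vectors;
    labels: (N,) integer class ids; m: angular margin; scale: logit scale.
    """
    n = len(labels)
    # Project both embeddings and class weights onto the unit hypersphere,
    # so logits depend only on the angle between embedding and class vector.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = np.clip(f @ w, -1.0, 1.0)          # cos(theta) for every class
    theta = np.arccos(cos)
    # Tighten the target class: replace cos(theta) with cos(m * theta),
    # which penalizes the true class unless its angle is small.
    rows = np.arange(n)
    target_cos = np.cos(m * theta[rows, labels])
    logits = scale * cos
    logits[rows, labels] = scale * target_cos
    # Numerically stable softmax cross-entropy on the adjusted logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()
```

With `m=1` this reduces to an ordinary normalized softmax; increasing `m` enlarges the angular separation demanded of the true class, which is what encourages identity-discriminative embeddings on the hypersphere.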