Generalization bounds for a generator-regularized InfoGAN-inspired adversarial objective
Abstract
The Information Maximizing Generative Adversarial Network (InfoGAN) can be formulated as a minimax problem involving a generator and a discriminator, augmented by a mutual information regularization term. Despite strong empirical performance, rigorous generalization guarantees for InfoGAN-type objectives remain limited, particularly when additional structural components are introduced. In this paper, we study an InfoGAN-inspired adversarial framework obtained by removing the latent code component and introducing an explicit regularization term on the generator, yielding an analytically tractable generator-regularized adversarial objective. We establish generalization error bounds by analyzing the gap between empirical and population objective functions using Rademacher complexity arguments for the discriminator, the generator, and their composition. The resulting bounds reveal explicit n^{-1/2} and m^{-1/2} decay rates with respect to the discriminator and generator sample sizes and clarify the role of the generator regularization parameter. The theory is further specialized to two-layer neural networks with Lipschitz continuous and non-decreasing activation functions, where explicit entropy-based complexity bounds are derived. Experiments on the CIFAR-10 dataset validate the predicted scaling behavior and demonstrate that the generalization gap decreases systematically as sample size increases, highlighting the stabilizing effect of generator regularization. Overall, this work provides one of the first rigorous generalization analyses for an InfoGAN-inspired adversarial objective with explicit generator regularization.
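The abstract does not reproduce the objective or the bounds themselves; as a rough illustration of the kind of quantities being analyzed, a generator-regularized minimax objective and the shape of the resulting generalization bound might be written as below. This is a minimal sketch under standard GAN notation: the penalty R(G), the weight λ, and the constants C_D, C_G are placeholders, not the paper's exact formulation.

```latex
% Hypothetical sketch, not the paper's exact notation.
% Standard GAN minimax value augmented by an explicit generator penalty R(G)
% weighted by a regularization parameter \lambda:
\min_{G}\;\max_{D}\;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_{z}}\!\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
  + \lambda\, R(G)

% The generalization analysis described in the abstract controls the gap
% between the empirical objective (n real samples for the discriminator,
% m samples for the generator) and its population counterpart, with
% Rademacher-complexity terms decaying at the stated rates:
\Bigl|\widehat{V}_{n,m}(G,D) - V(G,D)\Bigr|
  \;\lesssim\; \frac{C_{D}}{\sqrt{n}} \;+\; \frac{C_{G}}{\sqrt{m}}
```

The second display is only meant to convey the n^{-1/2} and m^{-1/2} scaling claimed in the abstract; the paper's actual constants depend on the Rademacher complexities of the discriminator and generator classes and on the regularization parameter.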
Topics & Keywords
Authors (3)
Mahmud Hasan
Mathias Nthiani Muia
Md Mahmudul Islam
Quick Access
- Publication Year: 2026
- Source Database: DOAJ
- DOI: 10.3389/frai.2026.1731256
- Access: Open Access ✓