A framework in which two neural networks compete, one learning to generate realistic synthetic data and the other learning to detect it.
A Generative Adversarial Network (GAN) is a deep learning framework in which two neural networks, a generator and a discriminator, are trained simultaneously through an adversarial process. The generator's goal is to produce synthetic data samples that are indistinguishable from real data, while the discriminator's goal is to correctly classify inputs as either real or generated. This dynamic creates a feedback loop: as the discriminator becomes better at detecting fakes, the generator is forced to produce more convincing outputs, which in turn pushes the discriminator to sharpen its criteria. In the idealized limit, training reaches an equilibrium in which the generator's samples are realistic enough that the discriminator can do no better than chance.
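The alternating update scheme can be sketched in a deliberately tiny one-dimensional setting. This is an illustrative toy, not a practical GAN: real data is drawn from a Gaussian with mean 4, the "generator" learns only a shift parameter theta, the "discriminator" is a single logistic unit, and the gradients are written out by hand. All names and constants here are invented for the example.

```python
import math
import random

random.seed(0)

def sigmoid(a):
    # Numerically safe logistic function.
    if a >= 0:
        return 1.0 / (1.0 + math.exp(-a))
    e = math.exp(a)
    return e / (1.0 + e)

# Discriminator D(x) = sigmoid(w*x + b); generator G(z) = z + theta.
w, b, theta = 0.0, 0.0, 0.0
REAL_MEAN, BATCH, LR_D, LR_G = 4.0, 8, 0.05, 0.02

for step in range(3000):
    reals = [random.gauss(REAL_MEAN, 1.0) for _ in range(BATCH)]
    fakes = [random.gauss(0.0, 1.0) + theta for _ in range(BATCH)]

    # Discriminator step: descend the gradient of
    # -[log D(x) + log(1 - D(G(z)))], averaged over the batch.
    gw = gb = 0.0
    for xr, xf in zip(reals, fakes):
        dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
        gw += (dr - 1.0) * xr + df * xf
        gb += (dr - 1.0) + df
    w -= LR_D * gw / BATCH
    b -= LR_D * gb / BATCH

    # Generator step: descend the gradient of the non-saturating loss
    # -log D(G(z)). Note the generator never touches the real samples;
    # its only learning signal flows through the discriminator's score.
    gt = sum((sigmoid(w * xf + b) - 1.0) * w for xf in fakes)
    theta -= LR_G * gt / BATCH

# theta drifts toward REAL_MEAN as the adversarial game plays out.
```

The structure mirrors the paragraph above: the discriminator is updated on both real and fake batches, then the generator is updated using only the discriminator's feedback.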
Training a GAN is framed as a minimax game from game theory. The generator tries to minimize the discriminator's ability to distinguish real from fake, while the discriminator tries to maximize its classification accuracy. In practice, both networks are updated through backpropagation on the same value function, each pushing it in the opposite direction; generators are often trained to maximize log D(G(z)) (the non-saturating loss) rather than minimize log(1 - D(G(z))), since the latter yields weak gradients early in training. The generator never directly sees real data: it only receives gradient feedback through the discriminator, which guides it toward producing more realistic outputs over time.
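The minimax objective described above is the value function from the original GAN formulation, where the discriminator D maximizes and the generator G minimizes:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

Here $p_{\text{data}}$ is the real data distribution and $p_z$ is the prior over the generator's latent noise input.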
GANs have proven remarkably versatile across a wide range of applications. In computer vision, they power image synthesis, super-resolution, inpainting, and style transfer. Architectures like DCGAN, StyleGAN, and CycleGAN extended the original framework to handle high-resolution images, domain translation, and fine-grained control over generated content. Beyond images, GANs have been applied to video generation, music synthesis, text-to-image translation, and even drug discovery, where they help generate novel molecular structures.
Despite their power, GANs are notoriously difficult to train. Common failure modes include mode collapse — where the generator produces only a narrow range of outputs — and training instability caused by imbalances between the two networks. Researchers have proposed numerous remedies, including Wasserstein loss, spectral normalization, and progressive growing of networks. While diffusion models have recently emerged as strong competitors in generative modeling, GANs remain foundational to the field and continue to influence modern generative AI research.
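The contrast between the standard discriminator loss and the Wasserstein formulation can be made concrete in a few lines. This is a minimal sketch (the function names are invented for illustration, not drawn from any library): the standard loss takes sigmoid probabilities, while the Wasserstein critic drops the sigmoid and scores samples on an unbounded scale, provided the critic is kept (approximately) 1-Lipschitz via weight clipping, gradient penalty, or spectral normalization.

```python
import math

def gan_discriminator_loss(d_real, d_fake):
    """Standard GAN discriminator loss.
    d_real and d_fake are sigmoid outputs in (0, 1); the loss is
    minimized when D(real) -> 1 and D(fake) -> 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def wgan_critic_loss(score_real, score_fake):
    """Wasserstein critic loss over raw, unbounded scores.
    Minimizing it widens the score gap between real and fake samples;
    the Lipschitz constraint on the critic is enforced separately."""
    return -(score_real - score_fake)

# An undecided discriminator (both outputs 0.5) pays 2*log(2):
print(gan_discriminator_loss(0.5, 0.5))
# A critic scoring real data above fake data attains a negative loss:
print(wgan_critic_loss(2.0, -1.0))
```

Because the critic's scores are not squashed through a sigmoid, its gradient does not saturate when the two distributions are far apart, which is one reason Wasserstein-style training tends to be more stable.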