
Architecture

Generative Adversarial Network (GAN)

A neural network architecture where two models — a generator and a discriminator — compete, training each other to produce realistic synthetic data.

A Generative Adversarial Network (GAN) is a neural network architecture introduced by Ian Goodfellow in 2014, made up of two networks trained in opposition. The **generator** tries to produce fake data (e.g. images) that look real, while the **discriminator** tries to tell real data from generated data. Both improve through their competition until the generator's output is indistinguishable from the real distribution.

GANs were the breakthrough that made photorealistic AI image generation possible before diffusion models took over. They power things like StyleGAN's eerily realistic fake faces (thispersondoesnotexist.com), deepfakes, image-to-image translation (CycleGAN), and super-resolution. They're still widely used in research, medical imaging, and any task needing fast one-shot generation.

The classic analogy: a forger (generator) tries to paint fake Picassos, while an art detective (discriminator) tries to spot the fakes. As the detective gets sharper, the forger has to get better — and vice versa. Eventually the forger paints so well even the detective can only guess.

GANs are notoriously tricky to train — they suffer from mode collapse (the generator outputs only a few variations) and unstable gradients. That's part of why diffusion models have largely replaced them for high-end image generation since around 2022.

Related concepts: diffusion models, VAE (variational autoencoder), StyleGAN, deepfake, mode collapse, Ian Goodfellow.
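The forger-and-detective loop can be sketched end to end on a toy 1-D problem. Everything below — the linear generator, logistic discriminator, learning rate, and target distribution — is an illustrative assumption, not any particular library's implementation; it uses the non-saturating generator loss from the original paper:

```python
# Toy 1-D GAN with manual gradients: a linear generator tries to match
# samples from N(3, 0.5) while a logistic discriminator tells real from fake.
# All model forms and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

wg, bg = 1.0, 0.0   # generator G(z) = wg*z + bg
wd, bd = 0.0, 0.0   # discriminator D(x) = sigmoid(wd*x + bd)
lr, batch = 0.05, 64
fake_means = []

for _ in range(2000):
    real = rng.normal(3.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = wg * z + bg

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    # For sigmoid + cross-entropy, d(loss)/d(logit) = D(x) - label.
    g_real = sigmoid(wd * real + bd) - 1.0
    g_fake = sigmoid(wd * fake + bd) - 0.0
    wd -= lr * (np.mean(g_real * real) + np.mean(g_fake * fake))
    bd -= lr * (np.mean(g_real) + np.mean(g_fake))

    # Generator step: minimize -log D(fake), i.e. try to fool the
    # discriminator (the "non-saturating" loss from the 2014 paper).
    d_x = (sigmoid(wd * fake + bd) - 1.0) * wd  # backprop through D
    wg -= lr * np.mean(d_x * z)
    bg -= lr * np.mean(d_x)

    fake_means.append(float(np.mean(fake)))

# GAN dynamics tend to oscillate rather than settle, so look at a running
# average: the generator's mean output hovers around the real mean of 3.
avg = float(np.mean(fake_means[-500:]))
print(f"average fake mean over last 500 steps: {avg:.2f}")
```

Even on this tiny problem the fake mean oscillates around the target rather than converging cleanly — a miniature version of the unstable training dynamics described above.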

Last updated: 2026-04-29

