Model · Updated 2026-04
GAN (Generative Adversarial Network)
Definition
A GAN is a deep learning architecture in which two neural networks compete: a generator produces synthetic data, and a discriminator tries to tell it apart from real data. Training the two against each other pushes the generator toward realistic output.
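The adversarial setup can be sketched with a toy 1-D GAN in pure Python. Everything here is illustrative, not from this glossary: the "real" data is N(4, 1), the generator is linear, the discriminator is logistic regression, and gradients are written out by hand. Real GANs use deep networks and a framework like PyTorch, but the training loop has the same shape.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy setup (hypothetical): "real" samples come from N(4, 1).
# Generator g(z) = a*z + b maps standard-normal noise to a sample.
# Discriminator d(x) = sigmoid(w*x + c) scores how "real" x looks.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr = 0.05

for step in range(2000):
    x_real = random.gauss(4.0, 1.0)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b

    # Discriminator step: label real as 1, fake as 0.
    # For BCE on the logit s = w*x + c: dL/ds = d - label.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * ((d_real - 1.0) * x_real + d_fake * x_fake)
    c -= lr * ((d_real - 1.0) + d_fake)

    # Generator step (non-saturating loss): make fakes score as "real".
    d_fake = sigmoid(w * x_fake + c)
    g_grad_logit = d_fake - 1.0          # dL/ds with target label 1
    a -= lr * g_grad_logit * w * z       # chain rule through x_fake = a*z + b
    b -= lr * g_grad_logit * w

fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(f"generator output mean after training: {fake_mean:.2f} (real mean is 4.0)")
```

On this toy problem the generator's output mean typically drifts toward the real mean of 4, though GAN training is famously unstable and single-sample SGD keeps it noisy.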
See also in the glossary
Deep Learning
Deep Learning is a subset of Machine Learning using multi-layered neural networks to learn complex representations from raw data.
Neural Network
A neural network is a computing model inspired by the human brain, composed of layers of interconnected nodes that process information to learn patterns.
Generative AI
Generative AI refers to artificial intelligence systems capable of creating original content: text, images, video, audio, code.
Diffusion Model
A diffusion model is an AI architecture that generates images starting from random noise and progressively refining it.
Text-to-Image
Text-to-Image refers to generating images from text descriptions using generative AI models.
Embedding
An embedding is a numerical representation (vector) of text or data, capturing its semantic meaning.
Tools that use GANs
Stable Diffusion
The open-source benchmark for AI image generation
4.4/5
Midjourney
The industry benchmark in AI image generation
4.4/5
DALL-E
The most widely used AI image generator, built into ChatGPT
4/5
Flux
The image generation model rivaling Midjourney
4.8/5
Hugging Face
The leading open-source platform for AI models
4.6/5
Frequently Asked Questions
What's the difference between a GAN and a diffusion model?
A GAN uses two competing networks (generator vs discriminator) and generates in a single pass. A diffusion model progressively denoises an image over multiple steps. Diffusion models dominate in 2026 for image quality, but GANs remain faster at inference.
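The single-pass versus multi-step contrast above can be sketched with two hypothetical toy "models"; the functions, constants, and step count below are made up purely to show the shape of each sampling procedure, not either architecture's actual math.

```python
import random

random.seed(1)

def gan_generator(z):
    # GAN sampling: ONE network call, noise in, sample out.
    # (Hypothetical linear "generator" standing in for a deep network.)
    return 2.0 * z + 4.0

def denoise_step(x, t):
    # Diffusion sampling: one of many refinement calls, each nudging the
    # current sample toward the data (mean 4.0 in this toy), with smaller
    # corrections remaining as t counts down to 0.
    return x + (4.0 - x) / (t + 1)

z = random.gauss(0.0, 1.0)
gan_sample = gan_generator(z)          # 1 call

x = random.gauss(0.0, 1.0)             # start from pure noise
steps = 50
for t in reversed(range(steps)):       # 50 calls
    x = denoise_step(x, t)
diffusion_sample = x
```

The inference-speed gap the answer mentions falls out of this structure: a GAN pays for one forward pass, while a diffusion sampler pays for one pass per denoising step.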
Are GANs still used in 2026?
Yes, but in specific niches. Diffusion models replaced them for mainstream image generation, but GANs remain dominant for real-time super-resolution, video style transfer, and tabular synthetic data generation.