Introduction to Generative Adversarial Networks
Luke de Oliveira
Vai Technologies
Lawrence Berkeley National Laboratory
Twitter: @lukede0
GitHub: @lukedeo
Email: [email protected]
Web: https://ldo.io
Outline
• Why Generative Modeling?
• Taxonomy of Generative Models
• Generative Adversarial Networks
• Pitfalls
• Modifications
• Questions
Generative Models
Generative Modeling
• Asks the question: can we build a model to approximate a data distribution?
• Formally, we are given a distribution $p_{\mathrm{data}}(x)$ and a finite sample $\{x_i\}_{i=1}^{N}$ drawn from this distribution
• Problem: can we find a model $p_\theta(x)$ such that $p_\theta \approx p_{\mathrm{data}}$?
• Why might this be useful?
Why care about Generative Models?
An oft-overused quote:
“What I cannot create, I do not understand”
-R. Feynman
Why care about Generative Models?
• Classic uses:
• Through maximum likelihood, we can fit some interpretable parameters $\theta$ of a hand-designed model $p_\theta(x)$
• Learn a joint distribution $p(x, y)$ with labels $y$ and transform it to $p(y \mid x)$ via Bayes' rule
• More interesting uses:
• Fast simulation of compute-heavy tasks
• Interpolation between distributions
Traditional MLE Approach
• We are given a finite sample $\{x_i\}_{i=1}^{N}$ from a data distribution $p_{\mathrm{data}}(x)$
• We construct a parametric model $p_\theta(x)$ for the distribution and build a likelihood $\mathcal{L}(\theta) = \prod_{i=1}^{N} p_\theta(x_i)$
• In practice, we optimize through MCMC or other means and obtain $\hat{\theta} = \arg\max_\theta \sum_{i=1}^{N} \log p_\theta(x_i)$
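As a concrete illustration (mine, not from the original slides): a toy maximum-likelihood fit in Python, assuming a hand-designed Gaussian family, for which the MLE is available in closed form.

    import numpy as np

    # Toy MLE: model the sample with a Gaussian p_theta, theta = (mu, sigma).
    # For this family the likelihood maximizer has a closed form, so no MCMC
    # or gradient-based optimization is needed.
    rng = np.random.default_rng(0)
    x = rng.normal(loc=2.0, scale=0.5, size=1000)  # finite sample standing in for p_data

    mu_hat = x.mean()         # maximizer of the log-likelihood in mu
    sigma_hat = x.std()       # MLE of sigma (1/N normalization, slightly biased)
    print(mu_hat, sigma_hat)  # should land close to 2.0 and 0.5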
Generative Model Taxonomy
Maximum likelihood
• Explicit density
  • Tractable density: fully visible belief nets (NADE, MADE, PixelRNN); change-of-variables models (nonlinear ICA)
  • Approximate density
    • Variational: VAE
    • Markov chain: Boltzmann machine
• Implicit density
  • Direct: GAN
  • Markov chain: GSN
(Taxonomy from I. Goodfellow)
Generative Adversarial Networks
Generative Adversarial Networks
• As before, we have a data distribution $p_{\mathrm{data}}(x)$
• We cast the process of building a model of the data distribution as a two-player game between a generator $G$ and a discriminator $D$
• Our generator has a latent prior $z \sim p_z(z)$ and maps this to sample space via $x = G(z)$
• This implicitly defines a distribution $p_g(x)$
• Our discriminator tells how fake or real a sample looks via a score $D(x) \in [0, 1]$ (in practice a probability; the original paper takes $D(x) = \Pr[\text{real}]$, though implementations sometimes use $\Pr[\text{fake}]$)
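A minimal sketch of the two players in code (PyTorch; my illustration, not the slides'), for toy 2-D data; all layer sizes are arbitrary choices.

    import torch
    import torch.nn as nn

    LATENT_DIM = 16  # dimensionality of the latent prior z ~ N(0, I); arbitrary

    # Generator G: maps latent noise to sample space (here, 2-D points)
    G = nn.Sequential(
        nn.Linear(LATENT_DIM, 64), nn.ReLU(),
        nn.Linear(64, 2),
    )

    # Discriminator D: maps a sample to a single real-vs-fake logit
    D = nn.Sequential(
        nn.Linear(2, 64), nn.LeakyReLU(0.2),
        nn.Linear(64, 1),
    )

    z = torch.randn(32, LATENT_DIM)  # a batch from the latent prior p_z
    fake = G(z)                      # implicitly a sample from p_g
    logits = D(fake)                 # discriminator's score for each sample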
Generative Adversarial Networks
[Diagram: the generator transforms noise into a realistic sample; the discriminator distinguishes real data samples from fake (generated) ones]
Vanilla GAN formulation
• How can we jointly optimize $G$ and $D$?
• Construct a two-player zero-sum minimax game with value $V$:
$\min_G \max_D V(G, D) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$
• We have an inner maximization by $D$ and an outer minimization by $G$
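One alternating optimization step in code (a sketch under my own conventions, reusing the toy G and D from the earlier sketch, with $D$ scoring "real"; the generator update shown is the non-saturating variant motivated in the Pitfalls section):

    import torch
    import torch.nn.functional as F

    def gan_step(G, D, opt_g, opt_d, real, latent_dim=16):
        # real: a batch from the data distribution; opt_g/opt_d: optimizers for G/D
        n = real.size(0)
        ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

        # Inner maximization: push D(real) toward 1 and D(fake) toward 0
        fake = G(torch.randn(n, latent_dim)).detach()  # detach: D's step must not touch G
        d_loss = (F.binary_cross_entropy_with_logits(D(real), ones) +
                  F.binary_cross_entropy_with_logits(D(fake), zeros))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Outer minimization: update G so that D scores its samples as real
        g_loss = F.binary_cross_entropy_with_logits(D(G(torch.randn(n, latent_dim))), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()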
Theoretical Guarantees
• Let's step through the proof of the equilibrium and of the implicit minimization of the Jensen–Shannon divergence (JSD)
Theoretical Guarantees
• From the original paper, we know that for a fixed $G$ the optimal discriminator is $D^*_G(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)}$
• Define the generator's objective when solving against an infinite-capacity discriminator: $C(G) = \max_D V(G, D)$
• We can rewrite the value as $C(G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D^*_G(x)] + \mathbb{E}_{x \sim p_g}[\log(1 - D^*_G(x))]$
• Simplifying notation and applying some algebra: $C(G) = -\log 4 + \mathrm{KL}\left(p_{\mathrm{data}} \,\|\, \frac{p_{\mathrm{data}} + p_g}{2}\right) + \mathrm{KL}\left(p_g \,\|\, \frac{p_{\mathrm{data}} + p_g}{2}\right)$
• But we recognize this as a sum of two KL divergences, which combine into the Jensen–Shannon divergence: $C(G) = -\log 4 + 2 \cdot \mathrm{JSD}(p_{\mathrm{data}} \,\|\, p_g)$
• Since $\mathrm{JSD} \geq 0$ with equality iff its arguments coincide, this yields a unique global minimum precisely when $p_g = p_{\mathrm{data}}$
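The step the slide glosses over is why $D^*_G$ takes this form; a short worked version of the paper's argument:

$V(G, D) = \int_x \left[ p_{\mathrm{data}}(x) \log D(x) + p_g(x) \log(1 - D(x)) \right] dx$

For each fixed $x$, the integrand has the form $a \log y + b \log(1 - y)$ with $a = p_{\mathrm{data}}(x)$ and $b = p_g(x)$; setting its derivative $\frac{a}{y} - \frac{b}{1-y}$ to zero gives the maximizer $y^* = \frac{a}{a+b}$, i.e. $D^*_G(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)}$.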
Theoretical Guarantees
• The TL;DR of the previous proof is as follows
• If $D$ and $G$ are allowed to come from the space of all continuous functions, then we have:
• A unique equilibrium
• The discriminator admits a flat posterior, i.e., $D^*(x) = \frac{1}{2}$ for all $x$
• The implicit distribution defined by the generator exactly recovers the data distribution: $p_g = p_{\mathrm{data}}$
Pitfalls
GANs in Practice
• The minimax formulation saturates quickly, causing gradients propagating from the discriminator to vanish when the generator does poorly; the non-saturating formulation avoids this (code sketch below)
• Before: $\min_G \, \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$
• After: $\max_G \, \mathbb{E}_{z \sim p_z}[\log D(G(z))]$
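In code, the two generator objectives differ only in how the fake logits enter the loss (my sketch; `fake_logits = D(G(z))`, with $D$ scoring "real" as before):

    import torch
    import torch.nn.functional as F

    def g_loss_saturating(fake_logits):
        # min_G E[log(1 - D(G(z)))]: near-zero gradient once D confidently rejects fakes
        return torch.log1p(-torch.sigmoid(fake_logits)).mean()

    def g_loss_nonsaturating(fake_logits):
        # max_G E[log D(G(z))], implemented as minimizing -log D(G(z)):
        # same fixed point, but much stronger gradients early in training
        return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))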
Failure Modes (ways in which GANs fail)
• Mode collapse: the generator learns to produce a single mode of the data distribution and stops there
• Vanishing/exploding gradients from the discriminator to the generator
• The generator produces garbage that nevertheless fools the discriminator
Introspection
• GANs do not naturally have a metric for convergence
• Ideally, the losses settle at their equilibrium values (with $D \equiv \frac{1}{2}$, the value converges to $-\log 4$)
• This often does not happen in practice
Modifications
GANs in Practice
• Even when using the "non-saturating" heuristic, convergence is still difficult
• "Tricks" are needed to make things work on real data
• Two major (pre-2017) categories
• Architectural Guidelines
• Side information / Information theoretic
DCGAN
• Deep Convolutional Generative Adversarial Networks (DCGANs) provide a set of ad hoc guidelines for building GAN architectures for images (see the sketch below)
• These guidelines enabled much of the subsequent progress in GAN research
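A minimal DCGAN-style generator following the paper's guidelines (strided transposed convolutions instead of pooling or upsampling layers, batch norm, ReLU in the generator, tanh output); the layer sizes here are a common illustrative choice for 64x64 RGB images, not prescribed by the slides.

    import torch.nn as nn

    def make_dcgan_generator(latent_dim=100):
        # z is treated as a latent_dim x 1 x 1 "image" and upsampled by strided
        # transposed convolutions; spatial size doubles at each block
        return nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 512, 4, 1, 0, bias=False),  # 1x1   -> 4x4
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),         # 4x4   -> 8x8
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),         # 8x8   -> 16x16
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),          # 16x16 -> 32x32
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),            # 32x32 -> 64x64
            nn.Tanh(),  # outputs in [-1, 1], matching images normalized to that range
        )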
DCGAN
[Figure: DCGAN architecture diagram]
Side Information
• Conditional GAN, Auxiliary Classifier GAN, InfoGAN, etc.
• Key idea: can we leverage side information (a label,
description, trait, etc.) to produce either better
quality or conditional samples?
• The discriminator can either be shown the side
information or tasked with reconstructing it
Conditional GAN (CGAN)
• The side-information latent variable $y$ is passed to both the generator and the discriminator
• The generator learns side-information-conditional distributions, as it is able to disentangle this from the overall latent space (sketch below)
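A sketch of the conditioning mechanism (my illustration; names and sizes are placeholders): the label is embedded and concatenated with the noise vector, so the generator models $p(x \mid y)$.

    import torch
    import torch.nn as nn

    class CondGenerator(nn.Module):
        def __init__(self, latent_dim=64, n_classes=10, out_dim=2):
            super().__init__()
            self.embed = nn.Embedding(n_classes, n_classes)  # label -> dense vector
            self.net = nn.Sequential(
                nn.Linear(latent_dim + n_classes, 128), nn.ReLU(),
                nn.Linear(128, out_dim),
            )

        def forward(self, z, y):
            # concatenate noise with the embedded label; the discriminator is
            # conditioned on y in the same way
            return self.net(torch.cat([z, self.embed(y)], dim=1))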
Auxiliary Classifier GAN (ACGAN)
• Similar to CGAN, the latent variable $y$ is passed to the generator
• The discriminator is tasked with jointly learning real-vs-fake discrimination and reconstructing the latent variable being passed in (sketch below)
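A sketch of the two-headed discriminator this implies (my illustration; sizes are placeholders): a shared trunk with an adversarial head and an auxiliary classification head that recovers the label.

    import torch.nn as nn

    class ACDiscriminator(nn.Module):
        def __init__(self, in_dim=2, n_classes=10):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(in_dim, 128), nn.LeakyReLU(0.2))
            self.adv_head = nn.Linear(128, 1)          # real-vs-fake logit
            self.cls_head = nn.Linear(128, n_classes)  # reconstructs the class label

        def forward(self, x):
            h = self.trunk(x)
            return self.adv_head(h), self.cls_head(h)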
InfoGAN
• Instead of the latent variables being known a priori from a dataset, make parts of the latent space randomly drawn from different distributions (see the sampler sketched below)
• Bernoulli, Normal, multiclass (categorical), etc.
• Make the discriminator reconstruct these arbitrary elements of the latent space that are passed into the generator
• This learns disentangled features (it maximizes a lower bound on the mutual information between the codes and the generated samples)
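How the structured latent might be assembled (my sketch; the dimensions follow the common MNIST setup from the InfoGAN paper, but are otherwise arbitrary):

    import torch
    import torch.nn.functional as F

    def sample_latent(batch, noise_dim=62, n_cat=10, n_cont=2):
        # unstructured noise plus structured codes drawn from chosen priors;
        # an auxiliary head on the discriminator is trained to recover the codes
        z = torch.randn(batch, noise_dim)                    # unstructured noise
        cat = F.one_hot(torch.randint(n_cat, (batch,)),
                        num_classes=n_cat).float()           # categorical code
        cont = torch.rand(batch, n_cont) * 2 - 1             # continuous codes in [-1, 1]
        return torch.cat([z, cat, cont], dim=1)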
Conclusion
• Showed theoretical guarantees of GANs (in unrealistic settings) and their convergence properties
• Discussed pitfalls of GANs
• Explored a few basic, largely theory-free methods that try to improve GANs
• Architectural improvements, side information
• I didn't talk about Minibatch Discrimination or Feature Matching (see Salimans et al.)
Questions?
Thanks!
References
(1) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative Adversarial Nets. NIPS, 2014.
(2) A. Radford, L. Metz, and S. Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. 2015.
(3) M. Mirza and S. Osindero. Conditional Generative Adversarial Nets. 2014.
(4) A. Odena, C. Olah, and J. Shlens. Conditional Image Synthesis with Auxiliary Classifier GANs. ICML, 2017.
(5) X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. 2016.
(6) T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved Techniques for Training GANs. NIPS, 2016.