V2 - Gen AI

Generative Adversarial Networks (GANs) are a generative modeling approach developed by Ian Goodfellow that utilizes two competing neural networks: a generator that creates data resembling training data, and a discriminator that distinguishes between real and generated data. Stable Diffusion, a project stemming from the latent diffusion model, was developed by researchers at Ludwig Maximilian University and later advanced by Stability AI. The training process of GANs involves an adversarial game that enhances the performance of both networks over time.

Uploaded by

meghanasweety2
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
7 views1 page

V2 - Gen AI

Generative Adversarial Networks (GANs) are a generative modeling approach developed by Ian Goodfellow that utilizes two competing neural networks: a generator that creates data resembling training data, and a discriminator that distinguishes between real and generated data. Stable Diffusion, a project stemming from the latent diffusion model, was developed by researchers at Ludwig Maximilian University and later advanced by Stability AI. The training process of GANs involves an adversarial game that enhances the performance of both networks over time.

Uploaded by

meghanasweety2
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 1

STABLE DIFFUSION

Stable Diffusion originated from a project called Latent Diffusion, developed by researchers at Ludwig Maximilian University in Munich and Heidelberg University. Four of the original five authors (Robin Rombach, Andreas Blattmann, Patrick Esser and Dominik Lorenz) later joined Stability AI and released subsequent versions of Stable Diffusion. The technical license for the model was released by the CompVis group at Ludwig Maximilian University of Munich. Development was led by Patrick Esser of Runway and Robin Rombach of CompVis, who were among the researchers who had earlier invented the latent diffusion model architecture used by Stable Diffusion.[7] Stability AI also credited EleutherAI and LAION (a German nonprofit which assembled the dataset on which Stable Diffusion was trained) as supporters of the project.[7]

GENERATIVE ADVERSARIAL NETWORKS [GANS]

Generative Adversarial Networks (GANs) were developed by Ian Goodfellow and his teammates. A GAN is an approach to generative modeling that generates new data resembling the training data. GANs have two main blocks [two neural networks] which compete with each other and thereby learn to capture, copy, and analyze the variations in a dataset. The two models are usually called the Generator and the Discriminator, which we will cover under Components of GANs.

The generator network takes random input (typically noise) and generates samples, such as images, text, or audio, that resemble the data it was trained on. The goal of the generator is to produce samples that are indistinguishable from real data.

The discriminator network, on the other hand, tries to
distinguish between real and generated samples. It is
trained with real samples from the training data and
generated samples from the generator. The
discriminator’s objective is to correctly classify real
data as real and generated data as fake.
The training process involves an adversarial game
between the generator and the discriminator. The
generator aims to produce samples that fool the
discriminator, while the discriminator tries to improve
its ability to distinguish between real and generated
data. This adversarial training pushes both networks to
improve over time.
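
The adversarial game above corresponds to the minimax objective min_G max_D E_x[log D(x)] + E_z[log(1 - D(G(z)))]. A minimal sketch of that loop, using a deliberately toy setup that is not from the text: a 1-D affine generator and a logistic-regression discriminator, trained with hand-derived gradients so no deep-learning library is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(4, 1.25) -- the distribution the generator must imitate.
def sample_real(n):
    return rng.normal(4.0, 1.25, size=n)

# Toy generator G(z) = w*z + b maps noise z ~ N(0, 1) to fake samples.
w, b = 1.0, 0.0
# Toy discriminator D(x) = sigmoid(a*x + c) outputs its estimate of P(x is real).
a, c = 0.0, 0.0

lr = 0.05
for step in range(2000):
    n = 64
    x_real = sample_real(n)
    z = rng.normal(size=n)
    x_fake = w * z + b

    # --- Discriminator update: classify real as real, generated as fake ---
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    # For logistic loss, d(loss)/d(logit) = D - y  (y = 1 for real, 0 for fake).
    grad_a = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    a -= lr * grad_a
    c -= lr * grad_c

    # --- Generator update: fool the discriminator (non-saturating loss) ---
    d_fake = sigmoid(a * x_fake + c)
    grad_out = -(1.0 - d_fake) * a        # d(-log D(G(z)))/dG(z)
    w -= lr * np.mean(grad_out * z)
    b -= lr * np.mean(grad_out)

# After training, generated samples should cluster near the real mean of 4.
samples = w * rng.normal(size=1000) + b
print("generated mean/std:", samples.mean(), samples.std())
```

The generator here minimizes -log D(G(z)) rather than log(1 - D(G(z))); this "non-saturating" variant, suggested in Goodfellow's original paper, gives stronger gradients early in training when the discriminator easily rejects generated samples.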
