
Unit IV: AutoEncoder
Hours: 06

Presented by:

Prof. S. S. Bhosale
PREC, Loni

Course Outcomes

CO2:

Types of Autoencoder

1. Undercomplete Autoencoder:

• An autoencoder whose code dimension is less than the input dimension is called an undercomplete autoencoder.

• The goal of the autoencoder is to capture the most important features present in the data.

• Undercomplete autoencoders have a smaller dimension for the hidden layer compared to the input layer.

Types of Autoencoder

• This helps to obtain important features from the data.

• The objective is to minimize the loss function by penalizing g(f(x)) for being different from the input x.

• When the decoder is linear and we use a mean squared error loss function, the undercomplete autoencoder generates a reduced feature space similar to PCA (a minimal sketch follows below).
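For illustration, here is a minimal sketch of an undercomplete autoencoder in PyTorch. The framework, the layer sizes (784 → 32) and the optimizer are assumptions, not part of the slide; the point is only that the code dimension is smaller than the input dimension and that training minimizes the mean squared error between x and g(f(x)).

import torch
import torch.nn as nn

class UndercompleteAE(nn.Module):
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        # encoder f: input -> smaller code (the bottleneck)
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        # decoder g: code -> reconstruction; kept linear so that, with MSE,
        # the learned feature space is close to the PCA subspace
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = UndercompleteAE()
criterion = nn.MSELoss()        # penalizes g(f(x)) for differing from x
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(16, 784)         # stand-in batch of flattened inputs
loss = criterion(model(x), x)
loss.backward()
optimizer.step()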

Types of Autoencoder

• Undercomplete autoencoders do not need any regularization, as they maximize the probability of the data rather than copying the input to the output.


Contractive Autoencoders (CAE):

• The objective of a contractive autoencoder (CAE) is to learn a robust representation that is less sensitive to small variations in the data.

• Robustness of the representation is obtained by applying a penalty term to the loss function (a minimal sketch follows below).
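In the standard CAE formulation the penalty term is the squared Frobenius norm of the Jacobian of the encoder with respect to the input, i.e. L(x) = ||x - g(f(x))||^2 + lambda * ||df(x)/dx||_F^2. The PyTorch sketch below is our own illustration (the layer sizes, the sigmoid encoder and the weight lambda are assumptions); for a single sigmoid layer the Jacobian norm has the closed form used in the penalty.

import torch
import torch.nn as nn

class ContractiveAE(nn.Module):
    def __init__(self, in_dim=784, hid_dim=64):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))   # learned representation f(x)
        x_hat = self.dec(h)              # reconstruction g(f(x))
        return x_hat, h

def cae_loss(model, x, x_hat, h, lam=1e-4):
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()
    # For h = sigmoid(Wx + b) the Jacobian dh/dx equals diag(h*(1-h)) W,
    # so its squared Frobenius norm is sum_j (h_j*(1-h_j))^2 * ||W_j||^2,
    # where W_j is the j-th row of the encoder weight matrix.
    dh = (h * (1 - h)) ** 2                      # shape (batch, hid_dim)
    w_sq = (model.enc.weight ** 2).sum(dim=1)    # shape (hid_dim,)
    penalty = (dh * w_sq).sum(dim=1).mean()
    return recon + lam * penalty

model = ContractiveAE()
x = torch.rand(16, 784)
x_hat, h = model(x)
loss = cae_loss(model, x, x_hat, h)
loss.backward()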

Contractive Autoencoders (CAE):

• A contractive autoencoder is another regularization technique, like sparse autoencoders and denoising autoencoders.

• A CAE surpasses the results obtained by regularizing an autoencoder using weight decay or by denoising.

Types of Autoencoder

• A CAE is a better choice than a denoising autoencoder for learning useful feature extraction.

• The penalty term generates mappings that strongly contract the data, hence the name contractive autoencoder.


Variational Autoencoder:

• A variational autoencoder can be defined as an autoencoder whose training is regularised to avoid overfitting and to ensure that the latent space has good properties that enable a generative process.

Types of Autoencoder

• Just like a standard autoencoder, a variational autoencoder is an architecture composed of both an encoder and a decoder, and it is trained to minimize the reconstruction error between the encoded-decoded data and the initial data.

• In order to be able to use the decoder of our autoencoder for generative purposes, we have to be sure that the latent space is regular enough.

Types of Autoencoder

• One possible solution to obtain such regularity is to introduce explicit regularisation during the training process (a minimal sketch follows below).
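The slides do not spell out the regularisation term, so the following details are assumptions based on the usual VAE formulation: the encoder outputs a mean and a log-variance, a latent vector is sampled with the reparameterization trick, and a KL-divergence term pushes the latent distribution toward a standard normal prior. A minimal PyTorch sketch:

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=784, hid_dim=256, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.mu = nn.Linear(hid_dim, z_dim)        # mean of q(z|x)
        self.logvar = nn.Linear(hid_dim, z_dim)    # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)       # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # reconstruction error between the encoded-decoded data and the input
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # explicit regularisation: KL divergence toward the N(0, I) prior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = VAE()
x = torch.rand(16, 784)
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)
loss.backward()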


Stochastic Autoencoder:

• In this autoencoder, the encoder and the decoder are not simple deterministic functions; instead, they involve some noise injection (a minimal sketch follows below).
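As a concrete illustration (the Gaussian noise model, the framework and the layer sizes are assumptions, since the slide does not specify them), the sketch below injects noise on both the encoder side and the decoder side, so neither mapping is a simple deterministic function:

import torch
import torch.nn as nn

class StochasticAE(nn.Module):
    def __init__(self, in_dim=784, hid_dim=64, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.dec = nn.Linear(hid_dim, in_dim)

    def encode(self, x):
        # noise injected before the encoder mapping
        return self.enc(x + self.noise_std * torch.randn_like(x))

    def decode(self, h):
        # noise injected after the decoder mapping
        x_hat = self.dec(h)
        return x_hat + self.noise_std * torch.randn_like(x_hat)

    def forward(self, x):
        return self.decode(self.encode(x))

model = StochasticAE()
x = torch.rand(16, 784)
print(model(x).shape)   # torch.Size([16, 784]); output varies run to run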


Types of Autoencoder

Stacked Autoencoder (Deep Autoencoder):

• Stacked autoencoders are the typical and most basic form of autoencoder.

• Just like other neural networks, an autoencoder can have multiple hidden layers.

• Such autoencoders are called stacked autoencoders or deep autoencoders.

Types of Autoencoder

• Adding more layers helps the autoencoder to learn more complex codings.

• An autoencoder is a kind of unsupervised learning structure that has three layers: an input layer, a hidden layer, and an output layer.

• The training process of an autoencoder consists of two parts: the encoder and the decoder.

Types of Autoencoder

• The encoder is used for mapping the input data into a hidden representation, and the decoder reconstructs the input data from that hidden representation.

• Stacked autoencoders are, as the name suggests, multiple encoders stacked on top of one another (a minimal sketch follows below).
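A minimal sketch of a stacked (deep) autoencoder in PyTorch (the layer widths 784 → 256 → 64 → 16 are assumptions): several encoding layers progressively compress the input into the hidden representation, and a mirrored stack of decoding layers reconstructs the input from it.

import torch
import torch.nn as nn

class StackedAE(nn.Module):
    def __init__(self, in_dim=784):
        super().__init__()
        # multiple hidden layers stacked on top of one another
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 16),               # deepest (most compressed) code
        )
        # decoder mirrors the encoder to reconstruct the input
        self.decoder = nn.Sequential(
            nn.Linear(16, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = StackedAE()
x = torch.rand(8, 784)
print(model(x).shape)   # torch.Size([8, 784])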
