INFORMATION MANAGEMENT
ASSIGNMENT

Done by,
JOHN BRUNO LEANDER. J

METHODS OF DEEP LEARNING
SYNOPSIS:
INTRODUCTION
DEEP LEARNING
METHODS OF DEEP LEARNING
CONCLUSION
 Introduction:
In the fast-evolving era of artificial intelligence, deep learning stands as a cornerstone technology, revolutionizing how machines understand, learn, and interact with complex data. At its essence, deep learning mimics the intricate neural networks of the human brain, enabling computers to autonomously discover patterns and make decisions from vast amounts of unstructured data.
 Deep Learning:
Deep learning is a type of machine learning and artificial intelligence (AI) that imitates the way humans gain certain types of knowledge. It is an important element of data science, which includes statistics and predictive modelling.
Deep learning techniques teach computers to process data in a manner similar to the human brain. Deep learning models can recognize complex patterns in data such as pictures, text, and sounds to produce accurate predictions and insights.
 Methods Of Deep Learning:
 Learning Rate Decay
 Transfer Learning
 Training from Scratch
 Dropout
 Recurrent Neural Networks
 Autoencoders
 Deep Learning based Clustering
 LEARNING RATE DECAY:
Learning rate decay (lrDecay) is a technique used to train neural networks by gradually reducing the learning rate as training progresses. It is a common practice that helps with both optimization and generalization.
Benefits of Learning Rate Decay:
Accelerates training: A large initial learning rate helps the network escape local minima.
Helps the network converge: A decaying learning rate helps the network converge to a local minimum and avoid oscillation.
Prevents overfitting: Learning rate decay can help prevent overfitting.
Improves generalization: Learning rate decay can help improve generalization.
Speeds up convergence: Learning rate decay can help speed up convergence.
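Below is a minimal sketch of one common schedule, step decay, in plain Python; the initial rate, decay factor, and step size are hypothetical values chosen only for illustration.

```python
# Minimal sketch of step-based learning rate decay. The initial rate,
# decay factor, and step size are hypothetical illustration values.
initial_lr = 0.1      # large starting rate to escape local minima early
decay_factor = 0.5    # halve the rate at each decay step
step_size = 10        # number of epochs between decays

def decayed_lr(epoch: int) -> float:
    """Return the learning rate for a given epoch under step decay."""
    return initial_lr * (decay_factor ** (epoch // step_size))

for epoch in (0, 10, 20, 30):
    print(f"epoch {epoch:2d}: lr = {decayed_lr(epoch):.4f}")
# epoch  0: lr = 0.1000
# epoch 10: lr = 0.0500
# epoch 20: lr = 0.0250
# epoch 30: lr = 0.0125
```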
 TRANSFER LEARNING:
Transfer learning is a machine learning technique that reuses a pre-trained model to solve a new problem. It is popular in deep learning because it allows deep neural networks to be trained with comparatively little data.
 Benefits of Transfer Learning:
Reduced training time: Transfer learning can reduce training time because it uses a pre-trained model that already has some knowledge of the task.
Improved performance: Transfer learning can improve the performance of neural networks.
Reduced data requirements: Transfer learning can reduce the amount of data needed to train a model.
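As a minimal sketch (assuming PyTorch and torchvision are installed), a pretrained ResNet-18 can be reused by freezing its layers and replacing only the final classifier; the 5-class head below is a hypothetical example.

```python
# Minimal transfer-learning sketch, assuming PyTorch and torchvision.
import torch.nn as nn
from torchvision import models

# Load a model pretrained on ImageNet; its weights carry prior knowledge.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so its weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for a hypothetical 5-class problem; only this
# new layer is trained, which needs comparatively little data.
model.fc = nn.Linear(model.fc.in_features, 5)
```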
 TRAINING FROM SCRATCH:
This method requires a developer to collect a large, labeled data set and configure a network architecture that can learn the features and model. This technique is especially useful for new applications, as well as applications with many output categories.
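For contrast with transfer learning, the sketch below (PyTorch assumed) defines a small network whose weights start from random initialization, so everything must be learned from the labeled data set; the architecture and sizes are hypothetical.

```python
# Minimal training-from-scratch sketch, assuming PyTorch. No weights are
# pretrained; the hypothetical architecture learns only from labeled data.
import torch
import torch.nn as nn

model = nn.Sequential(          # randomly initialized, nothing reused
    nn.Flatten(),
    nn.Linear(28 * 28, 128),    # e.g. 28x28 grayscale images
    nn.ReLU(),
    nn.Linear(128, 10),         # e.g. 10 output categories
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
# Training loop (per batch): loss = loss_fn(model(x), y); loss.backward()
```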
 DROPOUT:
Dropout is a technique that randomly removes neurons from a neural network during training to improve the network's performance.
Dropout also improves generalization and reduces co-adaptation:
Generalization: By dropping different neurons in each training iteration, the network trains multiple sub-networks, which improves the model's ability to generalize.
Co-adaptation: Dropout reduces dependencies between neurons, which improves the network's ability to learn valuable features.
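A minimal sketch, assuming PyTorch, with a hypothetical drop probability of 0.5: each unit's output is zeroed at random during training, and dropout is disabled at inference time.

```python
# Minimal dropout sketch, assuming PyTorch. With p=0.5 (hypothetical),
# each unit's output is zeroed at random during training only.
import torch
import torch.nn as nn

layer = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Dropout(p=0.5))

x = torch.randn(1, 16)
layer.train()        # training mode: random units are dropped each pass
print(layer(x))
layer.eval()         # evaluation mode: dropout is disabled
print(layer(x))
```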
 RECURRENT NEURAL NETWORKS:
A recurrent neural network (RNN) is a deep learning model that uses feedback loops to process sequential data and produce a sequential output.
RNNs are different from other types of artificial neural networks because they use feedback loops to process data. This allows information to persist, which is similar to memory. RNNs can also handle variable-length sequences of input.
RNNs are used in many applications, including:
Natural language processing: RNNs are used for text classification, sentiment analysis, and machine translation. They can learn to contextualize words in a sentence.
Speech recognition: RNNs can handle variable-length sequences of input, making them well-suited for speech recognition.
Time series analysis: RNNs can be used to solve time series problems, such as predicting stock prices.
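A minimal sketch, assuming PyTorch, with hypothetical input and hidden sizes: the hidden state is fed back at every time step, which is how earlier inputs influence later outputs.

```python
# Minimal RNN sketch, assuming PyTorch. The hidden state is fed back at
# every time step, so earlier inputs influence later outputs.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

x = torch.randn(1, 5, 8)    # batch of 1, sequence length 5, 8 features
output, hidden = rnn(x)
print(output.shape)         # torch.Size([1, 5, 16]): one output per step
print(hidden.shape)         # torch.Size([1, 1, 16]): final hidden state
```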
 AUTOENCODERS:
Autoencoders can be used to learn a compressed representation of the input. Autoencoders are unsupervised, although they are trained using supervised learning methods.
 Benefits of Autoencoders:
• Autoencoders can learn to represent input data in compressed form. By compressing the data into a lower-dimensional latent space, they can successfully capture the most conspicuous characteristics of the input. These acquired qualities may be useful for subsequent classification, grouping, or anomaly detection tasks.
• Because autoencoders can be trained on unlabeled data, they are well suited for unsupervised learning circumstances where labeled data is rare or unavailable. Autoencoders can find underlying patterns or structures in data by learning to recreate the input data without explicit labeling.
• Autoencoders can be used for data compression by encoding the input data into a lower-dimensional form. This is beneficial for storage and transmission since it reduces the required storage space or network bandwidth while allowing accurate reconstruction of the original data.
• Moreover, autoencoders can identify data anomalies or outliers. An autoencoder learns to consistently reconstruct normal data instances by training it on normal data patterns. Anomalies or outliers that deviate greatly from the learned patterns will have increased reconstruction errors, making them detectable.
• Variational autoencoders (VAEs) are a type of autoencoder that can be used for generative modeling. VAEs can generate new data samples by sampling from a previously learned latent space distribution. This is useful for tasks such as image or text generation.
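A minimal sketch, assuming PyTorch, with hypothetical dimensions (784 inputs compressed to a 32-dimensional latent code): the model is trained to reconstruct its own input, so no labels are needed.

```python
# Minimal autoencoder sketch, assuming PyTorch. Dimensions (784 inputs,
# 32-dimensional latent space) are hypothetical illustration values.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
autoencoder = nn.Sequential(encoder, decoder)

x = torch.randn(4, 784)                 # stand-in for real input samples
loss = nn.MSELoss()(autoencoder(x), x)  # reconstruction error, no labels
print(loss.item())
```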
 DEEP LEARNING BASED CLUSTERING:
Deep learning based clustering techniques are different from traditional clustering techniques: they cluster the data points by finding complex patterns rather than using simple pre-defined metrics like intra-cluster Euclidean distance.
Benefits of deep learning based clustering include:
• Learning feature representations: Deep clustering learns feature representations of data during the clustering process. This helps to capture the inherent structure and patterns in the data.
• Alleviating the curse of dimensionality: Deep clustering algorithms can learn low-dimensional data representations that alleviate the curse of dimensionality, a major drawback of many clustering algorithms that rely on similarity measures based on distance functions.
• Learning non-linear mappings: Deep neural networks (DNNs) can learn non-linear mappings that transform data into more clustering-friendly representations.
• Gaining insights from data: Clustering can help to gain insights into data, such as the natural structure of gene expressions, gene functions, and cellular processes.
• Using clusters as new features: Clusters can be used as new features in supervised learning problems. For example, clusters can be used for automatic data labeling.
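A minimal sketch, assuming PyTorch and scikit-learn: data is first encoded into a learned low-dimensional space, and clustering runs there instead of on the raw inputs. The encoder and data below are toy stand-ins, and k=10 is a hypothetical choice.

```python
# Minimal deep-clustering sketch, assuming PyTorch and scikit-learn.
# The encoder stands in for a trained network (e.g. from an autoencoder);
# clustering runs in the learned latent space, not on the raw inputs.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

encoder = nn.Sequential(nn.Linear(784, 32))  # stand-in for a trained encoder
data = torch.randn(100, 784)                 # stand-in for real samples

with torch.no_grad():
    latent = encoder(data).numpy()           # low-dimensional representations

labels = KMeans(n_clusters=10, n_init=10).fit_predict(latent)
print(labels[:10])                           # one cluster label per sample
```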
 CONCLUSION:
Deep learning is a subset of machine learning that uses multilayered neural networks, called deep neural networks, to simulate the complex decision-making power of the human brain. Some form of deep learning powers most of the artificial intelligence (AI) applications in our lives today.
THANK YOU
