CCS355 NEURAL NETWORKS AND DEEP LEARNING L T P C 2 0 2 3
COURSE OBJECTIVES:
To understand the basics of deep neural networks.
To understand the basics of associative memory and unsupervised learning networks.
To apply CNN architectures of deep neural networks.
To analyze the key computations underlying deep learning, then use them to build and train
deep neural networks for various tasks.
To apply autoencoders and generative models to suitable applications.
UNIT I INTRODUCTION
Neural Networks-Application Scope of Neural Networks-Artificial Neural Network: An
Introduction-Evolution of Neural Networks-Basic Models of Artificial Neural Network-
Important Terminologies of ANNs-Supervised Learning Network.
UNIT II ASSOCIATIVE MEMORY AND UNSUPERVISED LEARNING
NETWORKS
Training Algorithms for Pattern Association-Autoassociative Memory Network-
Heteroassociative Memory Network-Bidirectional Associative Memory (BAM)-Hopfield
Networks-Iterative Autoassociative Memory Networks-Temporal Associative Memory
Network-Fixed Weight Competitive Nets-Kohonen Self-Organizing Feature Maps-Learning
Vector Quantization-Counterpropagation Networks-Adaptive Resonance Theory Network.
UNIT III THIRD-GENERATION NEURAL NETWORKS
Spiking Neural Networks-Convolutional Neural Networks-Deep Learning Neural Networks-
Extreme Learning Machine Model-Convolutional Neural Networks: The Convolution
Operation – Motivation – Pooling – Variants of the basic Convolution Function – Structured
Outputs – Data Types – Efficient Convolution Algorithms – Neuroscientific Basis –
Applications: Computer Vision, Image Generation, Image Compression.
UNIT IV DEEP FEEDFORWARD NETWORKS
History of Deep Learning- A Probabilistic Theory of Deep Learning- Gradient Learning –
Chain Rule and Backpropagation - Regularization: Dataset Augmentation – Noise
Robustness - Early Stopping, Bagging and Dropout - Batch Normalization - VC Dimension and
Neural Nets.
UNIT V RECURRENT NEURAL NETWORKS
Recurrent Neural Networks: Introduction – Recursive Neural Networks – Bidirectional RNNs
– Deep Recurrent Networks – Applications: Image Generation, Image Compression, Natural
Language Processing. Complete Autoencoder, Regularized Autoencoder, Stochastic
Encoders and Decoders, Contractive Encoders.
LAB EXPERIMENTS:
1. Implement simple vector addition in TensorFlow.
2. Implement a regression model in Keras.
3. Implement a perceptron in TensorFlow/Keras Environment.
4. Implement a Feed-Forward Network in TensorFlow/Keras.
5. Implement an Image Classifier using CNN in TensorFlow/Keras.
6. Improve a Deep Learning model by fine-tuning hyperparameters.
7. Implement Transfer Learning for Image Classification.
8. Use a pre-trained model in Keras for Transfer Learning.
9. Perform Sentiment Analysis using an RNN.
10. Implement an LSTM-based Autoencoder in TensorFlow/Keras.
11. Generate images using a GAN.
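Experiment 1 above can be sketched in a few lines; a minimal example assuming TensorFlow 2.x (eager execution by default):

```python
import tensorflow as tf

# Define two constant vectors.
a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])

# Element-wise vector addition; in TF 2.x this evaluates immediately.
c = tf.add(a, b)  # equivalently: c = a + b

print(c.numpy())  # [5. 7. 9.]
```

The same pattern (build tensors, apply ops, inspect with `.numpy()`) underlies the later experiments as well.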
Additional Experiments:
12. Train a Deep Learning model to classify a given image using a pre-trained model.
13. Build a Recommendation System from sales data using Deep Learning.
14. Implement Object Detection using CNN.
15. Implement a simple Reinforcement Learning algorithm for an NLP problem.
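As a further illustration of the style of these experiments, Experiment 3 (a perceptron in TensorFlow/Keras) can be sketched as below; a minimal example assuming TF 2.x, with the logical AND function as a hypothetical training task:

```python
import numpy as np
import tensorflow as tf

# Truth table for logical AND: a linearly separable toy dataset.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([0, 0, 0, 1], dtype=np.float32)

# A single Dense unit with sigmoid activation is the classic perceptron.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.5),
              loss="binary_crossentropy")

# Full-batch gradient descent on the four examples.
model.fit(X, y, epochs=500, verbose=0)

# Threshold the sigmoid output at 0.5 to get class labels.
preds = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print(preds)  # typically converges to the AND labels
```

The epoch count, learning rate, and AND dataset here are illustrative choices, not part of the syllabus; any linearly separable two-class dataset serves the same purpose.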