Deep Learning
Introduction
Fall 2024
The University of Jordan
Dr. Tamam AlSarhan
Content
I. Introduction
II. Neural Nets As Universal Approximators
III. Training a Neural Network – Part 1 (The Problem of Learning)
IV. Training a Neural Network – Part 2 (Gradient Descent, Training the Network)
V. Training a Neural Network – Part 3 (Backpropagation, Calculus of Backpropagation)
VI. Training a Neural Network – Part 4 (Loss Functions, Regularizers, Dropout, …)
VII. Convolutional Neural Networks (CNNs)
VIII. Recurrent Neural Networks (RNNs)
IX. Transformers and Attention
X. Generative Adversarial Networks (GANs)
XI. Large Language Models (LLMs)
About this course
• Introduction to deep learning
• basics of ML assumed
• mostly high-school math
• This course is largely inspired by “Deep Learning with Python” by François Chollet.
• Lots of further material available online, e.g.:
http://cs231n.stanford.edu/
http://course.fast.ai/
https://developers.google.com/machine-learning/crash-course/
www.nvidia.com/dlilabs
http://introtodeeplearning.com/
https://github.com/oxford-cs-deepnlp-2017/lectures
https://jalammar.github.io/
• Using Python, TensorFlow 2 / Keras, and PyTorch
What is deep learning?
Deep learning is a subfield of machine learning focusing on
learning data representations as successive LAYERS of
increasingly meaningful representations.
Image from https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/
Anatomy of a deep neural network
• Layers
• Input data and targets
• Loss function
• Optimizer
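These four pieces fit together in a few lines of Keras. This is a minimal sketch, not a model from the slides: the input width (784), layer sizes, class count, and the random placeholder data are all illustrative assumptions.

```python
import numpy as np
from tensorflow import keras

# Layers: two densely connected layers stacked into a model.
# (784 input features and 10 classes are illustrative choices.)
model = keras.Sequential([
    keras.layers.Input(shape=(784,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

# Loss function and optimizer are attached when the model is compiled.
model.compile(optimizer="rmsprop",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Input data and targets: random placeholders standing in for a dataset.
x = np.random.random((100, 784)).astype("float32")
y = np.random.randint(0, 10, size=(100,))

# One pass of training ties all four components together.
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
preds = model.predict(x[:5], verbose=0)   # shape (5, 10)
```

Each of the four anatomy items above appears exactly once: layers in `Sequential`, loss and optimizer in `compile`, data and targets in `fit`.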
Layers
• Data processing modules
• Many different kinds exist
• densely connected
• convolutional
• recurrent
• pooling, flattening, merging, normalization, etc.
• Input: one or more tensors; output: one or more tensors
• Usually have a state, encoded as weights
• learned, initially random
• When combined, layers form a network (a model)
Neural Networks