ResNet – Deep Residual Networks
Presented by:
Neelam Bhapkar (2441025)
Siddhi Ingale (2441026)
Shruti Jadhav (2441027)
Ruturaj Taware (2441070)
Introduction to Deep Neural Networks
Deep neural networks can learn complex features and patterns, and adding more layers should, in theory, increase accuracy.
Problem: Degradation – deeper networks sometimes perform worse than shallower ones, even on the training set.
Vanishing/Exploding Gradients – gradients become too small or too large during backpropagation (e.g., fifty layers that each scale gradients by 0.5 shrink them by a factor of 0.5^50 ≈ 10^-15). A toy demonstration follows this list.
Optimization Difficulty – more layers increase training complexity and may lead to poor convergence.
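To make the vanishing-gradient intuition concrete, here is a toy PyTorch sketch (not from the slides; the layer width, depth, and tanh activations are illustrative assumptions) that compares the gradient reaching the input of a 50-layer plain stack against the same stack with skip connections:

```python
import torch
import torch.nn as nn

def grad_norm_at_input(depth: int, residual: bool) -> float:
    """Mean gradient magnitude at the input after `depth` tanh layers."""
    torch.manual_seed(0)
    layers = [nn.Sequential(nn.Linear(32, 32), nn.Tanh()) for _ in range(depth)]
    x = torch.randn(8, 32, requires_grad=True)
    h = x
    for layer in layers:
        # With a skip connection each step is h + F(h); plain is just F(h).
        h = h + layer(h) if residual else layer(h)
    h.sum().backward()
    return x.grad.abs().mean().item()

print("plain:   ", grad_norm_at_input(50, residual=False))  # tiny gradient
print("residual:", grad_norm_at_input(50, residual=True))   # healthy gradient
```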
What is ResNet?
ResNet was introduced by Microsoft Research in 2015 (He et al., "Deep Residual Learning for Image Recognition").
ResNet stands for Residual Network.
Enables training of very deep networks (e.g., ResNet-152).
Introduces skip connections that pass the input directly to deeper layers, which helps solve degradation by letting gradients flow unimpeded.
Skip Connections Explained
Helps Gradient Flow – prevents vanishing gradients by letting gradients bypass layers: since the block computes F(x) + x, its Jacobian is ∂F/∂x + I, and the identity term keeps gradients from shrinking.
Learns Residuals – each block learns only the residual F(x) and adds the input back: output = F(x) + x (see the sketch after this list).
Identity Mapping – passes the input x directly to deeper layers unchanged.
Supports Deep Networks – enables training of 50+ layer models without degradation.
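A minimal PyTorch sketch of a basic (non-bottleneck) residual block, as used in ResNet-18/34; the class name and fixed channel count are illustrative simplifications (the real block also handles stride and channel changes with a projection shortcut):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """A basic two-convolution residual block: output = ReLU(F(x) + x)."""

    def __init__(self, channels: int):
        super().__init__()
        # F(x): two 3x3 convolutions with batch norm
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                           # skip connection: keep the input
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                   # residual addition: F(x) + x
        return self.relu(out)

# Quick shape check
block = BasicBlock(64)
y = block(torch.randn(1, 64, 56, 56))          # -> torch.Size([1, 64, 56, 56])
```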
ResNet Architecture
(Architecture diagram.)
ResNet Architecture Overview
Input image size: 224×224; the initial 7×7 conv (stride 2) produces 112×112×64.
Max pooling then reduces this to 56×56×64.
Series of residual blocks (eight basic blocks in ResNet-18, grouped into four stages):
56×56×64 → 28×28×128 → 14×14×256 → 7×7×512
Global average pooling reduces to 1×1×512.
Fully connected layer outputs 1000 class predictions (ImageNet).
Architecture type: ResNet-18 or ResNet-34. A shape walk-through follows below.
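The stage-by-stage shapes can be verified with torchvision's built-in ResNet-18 (assuming a recent torchvision where `weights=None` is accepted; older versions use `pretrained=False`):

```python
import torch
from torchvision.models import resnet18

# Untrained ResNet-18; weights=None avoids downloading pretrained parameters.
model = resnet18(weights=None)

# Trace the feature-map shapes for a 224x224 RGB input.
x = torch.randn(1, 3, 224, 224)
x = model.maxpool(model.relu(model.bn1(model.conv1(x))))  # -> [1, 64, 56, 56]
x = model.layer1(x)   # -> [1, 64, 56, 56]
x = model.layer2(x)   # -> [1, 128, 28, 28]
x = model.layer3(x)   # -> [1, 256, 14, 14]
x = model.layer4(x)   # -> [1, 512, 7, 7]
x = model.avgpool(x)  # global average pool -> [1, 512, 1, 1]
logits = model.fc(torch.flatten(x, 1))                    # -> [1, 1000]
print(logits.shape)
```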
Performance Metrics
ImageNet Top-5 Accuracy:
ResNet-50: 92.2%
ResNet-101: 93.3%
ResNet-152: 93.8%
Training Time: increases with depth, but deeper models typically generalize better.
Inference Speed: Depends on model size and
hardware used.
Advantages of ResNet
Trains Very Deep Networks
Solves the Vanishing Gradient Problem
High Accuracy
Backbone for Modern Architectures (a usage sketch follows below):
Faster R-CNN – object detection
Mask R-CNN – instance segmentation
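As an example of ResNet serving as a backbone, torchvision ships a Faster R-CNN detector built on ResNet-50 with a feature pyramid network; a minimal inference sketch (assuming a recent torchvision; `weights=None` skips the pretrained download, so outputs here are untrained):

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Faster R-CNN with a ResNet-50 + FPN backbone.
model = fasterrcnn_resnet50_fpn(weights=None)
model.eval()

# The detector takes a list of CHW float tensors and returns, per image,
# a dict with 'boxes', 'labels', and 'scores'.
images = [torch.rand(3, 480, 640)]
with torch.no_grad():
    predictions = model(images)
print(predictions[0].keys())
```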
Disadvantages of ResNet
High Computational Cost – deep models require more compute resources and longer training time.
Increased Memory Usage – more layers mean higher memory consumption, especially with large inputs or batch sizes.
Overfitting Risk – tends to overfit when applied to small or imbalanced datasets.
Comparison with Other Architectures
VGGNet vs. ResNet: VGG has a deep but plain architecture; ResNet enables much deeper models via skip connections.
AlexNet vs. ResNet: AlexNet is shallower and less accurate; ResNet outperforms it with deeper networks.
Key Point: ResNet addresses the degradation problem, making it more scalable and reliable for deep learning tasks.
Applications of ResNet
Image Classification – winner of the ImageNet 2015 competition (ILSVRC).
Object Detection – used in Faster R-CNN and Mask R-CNN.
Medical Imaging – diagnosis using X-rays, MRIs, etc.
Autonomous Vehicles – scene understanding for self-driving.
Future Directions
ResNeXt / DenseNet: successors that refine ResNet's connectivity (grouped convolutions, dense connections) for better performance.
Vision Transformers (ViT): Incorporating ResNet ideas
into attention-based models.
Edge Deployment: Research to optimize ResNet for
mobile and embedded systems.
Hybrid Models: Combining residual learning with
attention mechanisms.
Conclusion
ResNet uses skip connections to make deep networks more
stable and accurate.
It is the foundation for many successful computer
vision models, balancing performance and scalability.