
Project Based Learning 5 : Mini Project

Review 3

SUPERMARKET CUSTOMER
EMOTION DETECTION SYSTEM
Team Members :
Rishabh Patil 121B1B182
Prerna Pokharkar 121B1B196
Mandar Patil 121B1B203
Introduction :

- The Supermarket Customer Emotion Detection System enhances customer experience by analyzing shoppers' emotional responses through facial emotion recognition.
- It uses deep learning techniques such as Convolutional Neural Networks (CNNs) for emotion detection and classification in real time.
- The system processes video feeds within the supermarket to identify emotional states such as satisfaction, frustration, and curiosity.
- It helps retailers optimize customer interactions, improve service quality, and adjust marketing strategies based on customer sentiment.
- The focus is on real-time emotion detection for seamless integration into the retail environment.
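As a concrete illustration of the video-processing step described above, the sketch below reads frames from a camera with OpenCV and locates faces using the Haar Cascade model bundled with the library. The camera index, display window, and quit key are illustrative assumptions rather than details taken from these slides.

    import cv2

    # Pre-trained frontal-face Haar Cascade shipped with OpenCV
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # 0 = default webcam; a CCTV stream URL could be used instead
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Detect faces in the grayscale frame
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("Face detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
            break
    cap.release()
    cv2.destroyAllWindows()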
Objectives :

1. Real-time Emotion Detection: Develop a system that detects human emotions from live video streams.
2. Facial Expression Classification: Implement a deep learning model to classify emotions such as happiness, sadness, and anger (a minimal model sketch follows this list).
3. Accuracy Improvement: Enhance recognition accuracy through model fine-tuning, dataset augmentation, or ensemble methods.
4. Cross-domain Adaptability: Improve system robustness across different demographics, lighting conditions, and environments.
5. User Interface Development: Create a user-friendly interface for real-time emotion visualization and feedback.
6. Integration with Applications: Integrate the system into platforms such as social media, virtual reality, and educational tools.
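For objective 2, a minimal Keras/TensorFlow classifier is sketched below. It assumes FER-2013-style input (48x48 grayscale face crops and 7 emotion classes); the layer sizes and hyperparameters are illustrative and are not taken from the project itself.

    from tensorflow.keras import layers, models

    def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=7):
        # Small CNN: three conv/pool stages followed by a dense classifier head
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, padding="same", activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, padding="same", activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(128, 3, padding="same", activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(256, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model

The seven-way softmax matches the common FER-2013 label set (angry, disgust, fear, happy, sad, surprise, neutral).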
Project Overview :

1. This project implements an emotion detection system using computer vision and deep learning techniques. It integrates OpenCV for real-time webcam capture and face detection, DeepFace for emotion classification, and Keras/TensorFlow for model training. The Haar Cascade algorithm is used for initial face detection in the captured video frames. The system can identify basic human emotions such as happiness, sadness, anger, and surprise, enhancing human-computer interaction capabilities.
2. Key Features:
   - Face Detection: Utilizes Haar Cascade to detect faces in real time.
   - Emotion Classification: Deep learning models in Keras/TensorFlow analyze facial expressions.
   - Real-Time Processing: OpenCV enables live emotion detection from webcam feeds.
3. The application implemented in this project gauges customer satisfaction by monitoring customers' emotions through CCTV footage (a classification sketch follows this section).
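The sketch below shows how DeepFace can classify the emotion of a single frame or face crop produced by the Haar Cascade step; the file name is a placeholder, and the return-type handling covers both older (dict) and newer (list of dicts) DeepFace versions.

    import cv2
    from deepface import DeepFace

    frame = cv2.imread("customer_frame.jpg")  # placeholder image; in practice a webcam/CCTV frame
    # actions=["emotion"] restricts the analysis to emotion classification;
    # enforce_detection=False avoids an exception if DeepFace's own detector misses the face.
    result = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
    if isinstance(result, list):  # newer versions return one dict per detected face
        result = result[0]
    print(result["dominant_emotion"])  # e.g. "happy"
    print(result["emotion"])           # per-class confidence scores

In the live pipeline this call would run once per detected face region inside the OpenCV capture loop, with the predicted label overlaid on the video frame.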
Reference Papers :

DeepFace: Closing the Gap to Human-Level Performance in Face Verification
- Objective: Achieve human-level performance in face verification using deep learning.
- Proposed Solution: Introduced a deep neural network (9-layer DNN) with 3D face alignment to reduce pose variations and improve face recognition accuracy.
- Result: Achieved 97.35% accuracy on the LFW dataset, significantly reducing the performance gap between humans and machines in face verification.
- Advantages: Improved accuracy across varying face poses and lighting conditions; 3D face alignment reduces pose variation.
- Limitations: Requires significant computational resources due to the deep architecture; highly reliant on large-scale labeled datasets.

FaceNet: A Unified Embedding for Face Recognition and Clustering
- Objective: Develop a unified face recognition, verification, and clustering approach using embeddings.
- Proposed Solution: Introduced a deep convolutional network that learns a 128-dimensional embedding for each face, trained using triplet loss (a triplet-loss sketch follows these tables).
- Result: Achieved 99.63% accuracy on LFW and highly efficient clustering across large-scale datasets.
- Advantages: Unified solution for recognition, verification, and clustering; scalable and efficient with compact face embeddings.
- Limitations: Sensitive to triplet selection during training; embeddings can be less effective under extreme variations such as occlusions.
Reference Papers (continued) :

VGGFace: Deep Face Recognition
- Objective: Improve face recognition performance using a deeper network inspired by the VGG architecture.
- Proposed Solution: Developed a deep CNN model with a 16-layer architecture trained on a large-scale dataset of celebrity faces, optimized for recognition performance.
- Result: Achieved state-of-the-art results on the LFW dataset, with 98.95% accuracy, and excellent generalization to other face recognition datasets.
- Advantages: High accuracy with a deeper architecture; pre-trained model available, allowing transfer learning for other facial recognition tasks.
- Limitations: Computationally expensive due to the deeper network; high memory consumption during training and inference, limiting real-time or edge-device applications.
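To make the FaceNet entry concrete, here is a minimal TensorFlow sketch of the triplet loss it describes: the anchor embedding is pulled closer to a positive (same identity) than to a negative (different identity) by at least a margin. The margin value and the omission of online triplet mining are simplifying assumptions.

    import tensorflow as tf

    def triplet_loss(anchor, positive, negative, margin=0.2):
        # Squared L2 distances between anchor-positive and anchor-negative embeddings
        pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
        neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
        # Hinge: penalize triplets where the negative is not at least `margin` farther away
        return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))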
System Architecture

(System architecture diagram)

Testing Results

(Screenshots: real-time camera capture and Excel data generation)
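The Excel output noted above can be produced by logging each prediction and writing the log with pandas; the column names and file name below are illustrative assumptions, not the project's actual schema.

    from datetime import datetime
    import pandas as pd

    # Hypothetical per-detection log records
    records = [
        {"timestamp": datetime.now().isoformat(), "camera": "entrance", "emotion": "happy"},
        {"timestamp": datetime.now().isoformat(), "camera": "checkout", "emotion": "neutral"},
    ]
    # Writing .xlsx files requires the openpyxl package
    pd.DataFrame(records).to_excel("emotion_log.xlsx", index=False)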
References :

1. Taigman, Y., Yang, M., Ranzato, M., & Wolf, L. (2014). DeepFace: Closing the Gap to Human-Level Performance in Face Verification. https://doi.org/10.1109/cvpr.2014.220
2. Zhu, Y., Liang, Y., Tang, K., & Ouchi, K. (2022). FACE-NET: Spatial and Channel Attention Mechanism for Enhancement in Face Recognition. https://doi.org/10.1109/icict55905.2022.00036
3. Cao, Q., Shen, L., Xie, W., Parkhi, O. M., & Zisserman, A. (2018). VGGFace2: A Dataset for Recognising Faces across Pose and Age. https://doi.org/10.1109/fg.2018.00020
