
FACE DETECTION AND FACIAL EXPRESSION RECOGNITION SYSTEM

CHAPTER 1

INTRODUCTION
Faces play an important role in social communication. Facial biometrics are used in many applications such as security, forensics and other commercial applications. Similarly, facial expressions are the fastest channel for conveying information during communication. Six basic expressions, namely happiness, sadness, anger, fear, disgust and surprise, are easily identified across different cultures. Automatic facial expression recognition systems (AFERS) are designed to analyze facial motion automatically by computer. A robust AFER system can be applied to many areas such as mood detection, clinical psychology and pain assessment.

An AFERS has three main steps:

1. Detect a face in a given input image or video,

2. Extract facial features such as the eyes, nose and mouth from the detected face, and

3. Classify the facial expression into one of the classes such as happiness, anger, sadness, fear, disgust and surprise.

1.1 Problem Statement

Face detection is a special case of object detection. The proposed face detection system uses skin color detection and segmentation. It also involves a lighting compensation algorithm and morphological operations to retain the face region. To extract facial features, the Active Appearance Model (AAM) method is used. Finally, expressions are classified as happiness, sadness, anger, fear, disgust and surprise, initially using the Euclidean distance method and then by training an Adaptive Neuro-Fuzzy Inference System (ANFIS).

This report is divided into seven parts. The second part covers the basic terms needed to understand face recognition and facial expression recognition. The third part covers the difference between face recognition and facial expression recognition. The fourth part describes the procedure followed to recognize facial expressions. The fifth part reviews ten previous studies on expression recognition using various techniques. The sixth part concludes that the facial expression recognition rate is about 90%, calculated from the collected reviews. Finally, the seventh part discusses the future scope.

1. Face Detection: Face detection refers to determining whether a picture contains a face. For this we need to be able to define the general structure of the face. Fortunately, faces do not differ wildly from each other; we all have a nose, eyes, forehead, chin and mouth, and these constitute the general structure of the face. This is a two-class classification problem: face and non-face.

2. Face detection can be considered a specific case of object class detection. In object class detection, the task is to find the position and size of all objects in the image that belong to a given class.

Facial expression recognition

Usually, the face is a combination of bone, facial muscles and skin tissue. When these muscles contract, they deform the facial features. Facial expressions are the fastest way to convey information during communication, and recognizing them automatically can lead to a natural human-machine interface. In 1978, Ekman and Friesen reported that facial expressions act as rapid signals produced by the contraction of facial features such as the eyebrows, lips, eyes and cheeks, and that there are six basic expressions: happiness, sadness, fear, disgust, anger and surprise. Recognizing these expressions involves three steps: face detection, feature extraction and expression classification.

1.2 Problem Definition

Facial expression recognition consists of three main steps: (1) face detection and image preprocessing, (2) feature extraction, and (3) expression classification. The purpose of this paper is to understand the basic differences between face recognition and facial expression recognition, and to study the recognition rates achieved by existing facial expression recognition models. The paper is divided into seven parts. The second part covers the basic terms necessary to understand face recognition and facial expression recognition. The third part covers the distinction between face recognition and facial expression recognition. The fourth part explains the procedure followed to recognize facial expressions. The fifth part reviews ten previous studies using various expression recognition techniques. The sixth part concludes that the facial expression recognition rate is higher than 90%, calculated from the collected reviews. Finally, the future scope is discussed in the seventh part.

In this report, we present first steps toward such a primitive: a system that automatically finds faces in a video stream and encodes facial expression dynamics.


1.4 Organization of the project report

This report is organized as follows:

 Chapter 2 presents the literature survey conducted before starting the project.
 Chapter 3 explains the existing system and its limitations.
 Chapter 4 gives an overview of the AdaBoost boosting algorithm.
 Chapter 5 discusses the proposed AAM-based system.
 Chapter 6 covers the system design and details how the proposed method is developed.
 Chapter 7 discusses the system implementation details.
 Chapter 8 describes software testing and presents the experimental and comparison results.
 Chapter 9 focuses on future work and contains the concluding statement of the proposed work.

CHAPTER 2

LITERATURE SURVEY
A literature survey is generally carried out to analyze the background of the current project. It identifies defects in existing systems and guides the work toward solutions for unsolved problems. The following topics present the background of the project and a detailed study of the key concepts and existing solutions related to it, which motivated the present work.

2.1 Related Work

MB-LBP-based Adaboost algorithm with skin color segmentation for face detection:

Face detection is a complex and important problem in pattern recognition, and it is widely used. Viola and Jones made effective, real-time face detection possible by using rectangular Haar-like features learned with AdaBoost. This paper proposes a face detection algorithm based on AdaBoost, but extracts MB-LBP features instead of Haar features to train the AdaBoost classifiers. In order to reduce the false positive rate, skin color segmentation is also combined with AdaBoost.

Innovative face detection based on skin color segmentation

Recognizing faces in images is very challenging due to the diversity of faces and the uncertainty of face position. The study of face detection in color images and video sequences has attracted increasing attention. This paper proposes a novel face detection framework that achieves better detection rates: a face detection algorithm based on a skin color model in the YCgCr chromaticity space and the HSV color space. First, a Gaussian skin model is built in the Cg-Cr color space and constraints are applied to obtain face candidates. Second, the correlation coefficient between a given template and each candidate is computed. Experimental results show that the system achieves a high detection rate and a low false positive rate under variations in face color, position and illumination conditions.

Facial Expression Recognition Based on Local Binary Patterns: A Comprehensive Study

Automated facial expression analysis is an interesting and challenging issue that affects
important applications in many areas, such as human-computer interaction and data-
driven animation. Obtaining effective facial representation from the original facial image
is an important step in successful facial expression recognition. In this paper, we evaluate
the facial expression recognition based on statistical local features and local binary
patterns. Different machine learning methods were systematically checked on several
databases. A large number of experiments have shown that LBP features are effective and
efficient for facial expression recognition. We further developed Boosted-LBP to extract
the most discriminative LBP features and obtain the best recognition performance by
using the support vector machine classifier with Boosted-LBP features. In addition, we
study LBP features for low-resolution facial expression recognition, which is a key issue,
but is rarely involved in existing work. We observed in experiments that LBP features are
performed stably and robustly over a range of effective low-resolution face images and
produce promising performance in compressed low-resolution video sequences captured
in real-world environments.

NSF Facial Animation Workshop Final Report

The face is an important and complex communication channel, and a very familiar and sensitive object of human perception. The field of facial animation has evolved rapidly over the past few years because fast computer graphics workstations have made modeling and real-time animation of hundreds of thousands of polygons affordable and almost commonplace. Many applications have been developed, such as conference calls, surgery, information aid systems, games and entertainment. To address these different problems, different methods for animation control and modeling have been developed.

CHAPTER 3

EXISTING SYSTEM
This chapter describes the present system, the techniques it uses, and its flaws. It explains the need for a new, improved system and highlights the limitations of the existing system.

3.1 Existing System Description

There are three main steps in an AFERS:

1. Detect the face in the given video or input image,

2. Extract facial features such as the eyes, nose and mouth from the detected face, and

3. Classify the facial expression into classes such as happy, angry, sad, fear, disgust and surprise.

Face detection is a special case of object detection. In the proposed system, face detection is implemented by skin color detection and segmentation. It also involves a lighting compensation algorithm and morphological operations to retain the face in the input image.

3.2 Limitations of Existing System

Facial expressions play a communicative role in interpersonal relations because they can reveal the affective state, cognitive activity, personality, intention and psychological state of a person. The existing system consists of three modules. Its face detection module is based on an image segmentation technique in which the given image is converted into a binary image that is then used for face detection.

CHAPTER 4


BOOSTING ALGORITHM: ADABOOST

As a data scientist in the consumer industry, I generally find that for most predictive learning tasks, boosting algorithms are enough, at least so far. They are powerful, flexible and can be interpreted well with certain techniques. Therefore, I think it is worth reading some material and writing about boosting algorithms.

AdaBoost

AdaBoost is short for "Adaptive Boosting" and was the first practical boosting algorithm, proposed by Freund and Schapire in 1996. It focuses on classification problems and aims to combine a set of weak classifiers into a strong classifier.

For any classifier with accuracy greater than 50%, the weight is positive; the more accurate the classifier, the greater the weight. For classifiers with less than 50% accuracy, the weight is negative, which means we combine their predictions by flipping the sign. For example, we can turn a classifier with 40% accuracy into one with 60% accuracy by flipping the sign of its predictions. Therefore, even a classifier that performs worse than random guessing still contributes to the final prediction. We only want to avoid classifiers with exactly 50% accuracy, which add no information and thus contribute nothing to the final prediction.
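As an illustration of this weighting rule, here is a minimal sketch of the standard AdaBoost classifier-weight formula (this snippet is illustrative and not code from the original report):

    import math

    def classifier_weight(error):
        # AdaBoost weight for a weak classifier with the given error rate:
        # positive when error < 0.5, negative when error > 0.5 (the prediction
        # sign is effectively flipped), and undefined at exactly 0.5.
        return 0.5 * math.log((1.0 - error) / error)

    print(classifier_weight(0.40))  # accurate classifier -> positive weight
    print(classifier_weight(0.60))  # worse than chance -> negative weight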

AdaBoost as a forward stagewise additive model

This section is based on the paper Additive Logistic Regression: A Statistical View of Boosting; see the original article for more details.

In 2000, Friedman and others developed a statistical view of the AdaBoost algorithm. They interpreted AdaBoost as a stagewise estimation procedure for an additive logistic regression model.

Boosting ensemble method


Boosting is a general ensemble method that creates a strong classifier from many weak classifiers. This is done by building a model from the training data, then creating a second model that attempts to correct the errors of the first. Models are added until the training set is predicted perfectly or a maximum number of models is reached. AdaBoost was the first truly successful boosting algorithm developed for binary classification and is the best starting point for understanding boosting. Modern boosting methods build on AdaBoost, most notably stochastic gradient boosting machines.

AdaBoost data preparation

This section lists some of the heuristics for preparing the best data for AdaBoost.

• Quality data: because the ensemble method continues to attempt to correct misclassifications in the training data, you should ensure that the training data is of high quality.

• Outliers: outliers force the ensemble down a rabbit hole of working hard to correct unrealistic cases. These could be removed from the training data set.

• Noisy data: noisy data, especially noise in the output variable, can be problematic. If possible, try to isolate and clean these from the training data set.

AdaBoost classifier in Python

Understand the ensemble approach, how the AdaBoost algorithm works, and learn to build an AdaBoost model using Python.

In recent years, boosting algorithms have gained widespread popularity in data science and machine learning competitions, and most of the winners use them to achieve high accuracy. These competitions provide a global platform for learning, exploring and providing solutions to a variety of business and government problems. A boosting algorithm combines multiple low-accuracy (weak) models to create a high-accuracy (strong) model. It can be applied in a variety of areas such as credit, insurance, marketing and sales. Boosting algorithms such as AdaBoost, Gradient Boosting and XGBoost are widely used machine learning algorithms for winning data science competitions.

This chapter describes the AdaBoost ensemble boosting algorithm and covers the following topics:

Ensemble learning method

An ensemble is a composite model that combines a series of low-performing classifiers to create an improved classifier. Each individual classifier votes, and the final prediction label is decided by majority vote. Ensembles provide greater accuracy than single or base classifiers, and ensemble methods can be parallelized by assigning each base learner to a different machine. In summary, ensemble learning is a meta-algorithm that combines multiple machine learning methods into a single predictive model to improve performance. Ensemble methods can use bagging to reduce variance, boosting to reduce bias, or stacking to improve prediction.

Bagging stands for bootstrap aggregation. It combines multiple learners in a way that reduces the variance of the estimate. For example, a random forest trains M decision trees on M different random subsets of the data and votes for the final prediction. Bagging ensemble methods include random forests and extra trees.

Boosting combines a set of low-accuracy classifiers to create a high-accuracy classifier. Low-accuracy (weak) classifiers offer accuracy only slightly better than a coin flip, while high-accuracy (strong) classifiers offer near-zero error rates. A boosting algorithm tracks which models fail to make accurate predictions and is less affected by overfitting problems. The following three algorithms have gained great popularity in data science competitions:

• AdaBoost (Adaptive Boosting)


• Gradient Tree Boosting

• XGBoost

Stacking (or stacked generalization) is an ensemble learning technique that combines the predictions of multiple base classification models into a new data set. This new data is treated as input for another classifier, which solves the original problem. Stacking is often referred to as blending.

AdaBoost classifier

AdaBoost, or Adaptive Boosting, is an ensemble boosting classifier proposed by Yoav Freund and Robert Schapire in 1996. It combines multiple classifiers to increase accuracy. AdaBoost is an iterative ensemble method: it builds a strong classifier by combining multiple poorly performing classifiers, yielding a high-accuracy strong classifier. The basic idea behind AdaBoost is to set the classifier and sample weights in each iteration so that unusual observations are predicted accurately. Any machine learning algorithm that accepts weights on the training set can be used as the base classifier.

AdaBoost should satisfy two conditions:

1. The classifier should be trained interactively on various weighted training examples.

2. In each iteration, it should try to fit these examples well by minimizing the training error.

How does the AdaBoost algorithm work?

It works as follows (a sketch of the training loop follows this list):

1. Initially, AdaBoost selects a training subset at random.

2. It iteratively trains the AdaBoost machine learning model by selecting the training set based on the accuracy of the previous training.

3. It assigns higher weights to wrongly classified observations so that these observations get a higher probability of being picked in the next iteration.

4. It also assigns a weight to the trained classifier in each iteration according to its accuracy; a more accurate classifier gets a higher weight.

5. This process iterates until the complete training data fits without error or the specified maximum number of estimators is reached.

6. To classify, a "vote" is performed across all of the learning algorithms built.
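A minimal sketch of this loop, assuming labels in {-1, +1} and decision stumps as the weak learners (illustrative code, not the report's implementation):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost_train(X, y, n_rounds=50):
        # Minimal AdaBoost for labels y in {-1, +1} using depth-1 trees.
        n = len(y)
        w = np.full(n, 1.0 / n)                       # uniform sample weights
        stumps, alphas = [], []
        for _ in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X, y, sample_weight=w)          # weighted fit (step 2)
            pred = stump.predict(X)
            err = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)     # classifier weight (step 4)
            w *= np.exp(-alpha * y * pred)            # re-weight samples (step 3)
            w /= w.sum()
            stumps.append(stump)
            alphas.append(alpha)
        return stumps, alphas

    def adaboost_predict(stumps, alphas, X):
        # Step 6: weighted vote of all weak learners.
        votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
        return np.sign(votes)

    # Demo on a toy problem:
    X = np.random.randn(200, 2)
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    stumps, alphas = adaboost_train(X, y, n_rounds=20)
    print((adaboost_predict(stumps, alphas, X) == y).mean())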

Load data set

For the modeling part you can use the IRIS dataset, a very well-known multi-class classification problem. The data set comprises 4 features (sepal length, sepal width, petal length, petal width) and a target (the flower type). The data contain three classes of flower: Setosa, Versicolour and Virginica. The data set is available in the scikit-learn library, or you can download it from the UCI Machine Learning Repository.

Split data set

To understand model performance, a good strategy is to divide the data set into a training set and a test set.

Let's split the data set using the function train_test_split(). You need to pass 3 parameters: the features, the target, and the test set size.
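A sketch of the loading and splitting step with scikit-learn (the 70/30 split ratio is an arbitrary choice):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    iris = load_iris()
    X, y = iris.data, iris.target   # 4 features, 3 flower classes

    # Hold out 30% of the samples for testing (the ratio is an assumption).
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)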

Building an AdaBoost model

Let's create an AdaBoost model using scikit-learn. AdaBoost uses a decision tree classifier as the default base classifier.
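A minimal sketch of building and evaluating the model, continuing from the split above (the hyperparameter values are illustrative, not tuned):

    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.metrics import accuracy_score

    # n_estimators and learning_rate are illustrative defaults.
    model = AdaBoostClassifier(n_estimators=50, learning_rate=1.0)
    model.fit(X_train, y_train)

    y_pred = model.predict(X_test)
    print("Accuracy:", accuracy_score(y_test, y_pred))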

Advantages

AdaBoost is easy to implement. It iteratively corrects the errors of the weak classifiers and improves accuracy by combining weak learners. Many base classifiers can be used with AdaBoost. AdaBoost is not very prone to overfitting; this is observed in experimental results, though no concrete explanation is available.

CHAPTER 5

PROPOSED AAM MODEL



Image-based methods have been applied to many areas of facial computing. One of the most successful recent techniques, incorporating both shape and texture information from facial images, is the Active Appearance Model (AAM) method. Originally developed by Cootes and Taylor, it has shown great potential in a variety of face analysis technologies. As shown in Fig. 6, the face image is cropped from the detected face; the cropped image is then used for feature extraction using the AAM method.

Initially, create an active shape model for a neutral image in the database. It automatically
creates a data file that provides information about the model points on the detected face.
Then, the video input starting with a neutral expression gives a sequence of different
expressions such as happiness, sadness, anger, fear, disgust and surprise. The change in
the AAM shape model measures the distance or difference between neutral and other
facial expressions based on changes in facial expressions.

Principal Component Analysis. Content-based image retrieval (CBIR) sorts the images of an image database into several categories. Database images belonging to the same category may differ in lighting conditions, noise and so on, but they are not completely random; despite their differences, they may share some patterns. Such a pattern can be called a principal component. Principal Component Analysis (PCA) is a mathematical tool for extracting the principal components of raw image data, and these principal components can also be regarded as feature images (eigenfaces). PCA is commonly used for face recognition. The idea of using principal components to represent faces was developed by Sirovich and Kirby in 1987 and extended by Turk and Pentland in 1991 for face detection and recognition, and the eigenface method is widely considered the first working face recognition technology. PCA can be used to convert each original image in a database to its corresponding feature-image representation. An important property of PCA is that any original image from the image database can be reconstructed by combining the feature images, and even a subset of the feature images can reconstruct an approximation of the original image. PCA supports prediction, redundancy removal, feature extraction, data compression and so on, which makes its use in image retrieval apparent.
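A minimal eigenfaces-style sketch with scikit-learn's PCA (the random data stands in for a real face database, and the component count is arbitrary):

    import numpy as np
    from sklearn.decomposition import PCA

    # Stack flattened grayscale face images as rows: shape (n_images, n_pixels).
    # Random data stands in here for a real face database.
    rng = np.random.default_rng(0)
    faces = rng.random((100, 64 * 64))

    pca = PCA(n_components=20)          # keep 20 principal components
    coords = pca.fit_transform(faces)   # each face as 20 coefficients

    # Reconstruct an approximation of the first face from its components.
    approx = pca.inverse_transform(coords[:1])
    print("reconstruction error:", np.linalg.norm(faces[0] - approx[0]))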

AAM model:


After this step, a dilation operation helps restore the face area; this process can be repeated several times for good results. Since facial features have different brightness, the pixel-value variation of other skin-like regions such as hands or legs is smaller than that of the face region, so all face-region candidates whose pixel-value variation is below a threshold are removed. To improve detection speed and robustness, the symmetry of the face is checked, and candidates are removed when symmetry is verified but facial features are not detected.
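A sketch of this morphology step using OpenCV (the synthetic mask, kernel size and iteration count are assumptions for illustration):

    import cv2
    import numpy as np

    # Synthetic stand-in for a binary skin mask (white = skin candidate).
    skin_mask = np.zeros((120, 120), np.uint8)
    cv2.circle(skin_mask, (60, 60), 35, 255, -1)   # face-like blob
    skin_mask[10, 10] = 255                        # isolated noise pixel

    kernel = np.ones((5, 5), np.uint8)             # structuring element (assumed size)
    opened = cv2.erode(skin_mask, kernel)          # remove small noise specks
    restored = cv2.dilate(opened, kernel, iterations=2)  # dilation restores the face area
    print(int(np.count_nonzero(restored)))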

Module description:

Adaptive Neuro-Fuzzy Inference System (ANFIS):

Face detection is a special case of object detection. In the proposed system, skin color detection and segmentation are used to implement face detection, together with an illumination compensation algorithm and morphological operations to retain the face in the input image. To extract facial features, an active appearance model (the AAM method) is used. Finally, expressions are classified as happiness, sadness, anger, fear, disgust and surprise, initially using a simple Euclidean distance method and then by training an adaptive neuro-fuzzy inference system (ANFIS).

Active Appearance Model (AAM):

Image-based methods have been applied to many areas of facial computing. One of the most successful recent technologies is the Active Appearance Model (AAM) method, which captures both shape and texture information of the facial image. Originally developed by Cootes and Taylor, it has great potential for a variety of face analysis technologies. As shown, the face image is cropped from the detected face, and the cropped image is used to extract features by the AAM method.

Skin color detection


Skin color detection is an important topic in computer vision research. It is fair to say that most common face detection algorithms use color information, so estimating areas with skin tones is often the first critical step in such strategies. Most research on skin-color-based face detection uses the RGB, YCbCr and HSI color spaces. Color is an important feature of the human face, and using skin tone as a feature for tracking faces has several advantages. In this paper we use the YCbCr color space, in which the Y component represents luminance, the Cr component represents red chrominance, and the Cb component represents blue chrominance. Given an RGB triple, the YCbCr transform can be calculated using the standard conversion in formula (1):

    Y  = 0.299 R + 0.587 G + 0.114 B
    Cb = 128 - 0.168736 R - 0.331264 G + 0.5 B        (1)
    Cr = 128 + 0.5 R - 0.418688 G - 0.081312 B

Furthermore, it has been shown that a skin color model based on the Cb and Cr values has a stable distribution range.

In this paper, skin color is only used as preprocessing for face detection: the algorithm only needs to remove most non-face areas to reduce the computation of the next step. Accordingly, every region that satisfies the above condition is treated as a skin area and set to white, while the rest of the input is set to black, producing a binarized image. Suppose S' is the number of white pixels and S is the total number of pixels; a face area should then satisfy the condition S'/S >= threshold. Experimental results show that with a threshold of 0.3, most non-face areas are filtered out and almost all face areas pass. Through skin color segmentation we can quickly eliminate non-face areas, reducing the detection range and speeding up the subsequent AdaBoost step.
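A sketch of this segmentation and ratio test in Python with OpenCV; the Cb/Cr skin ranges used below are commonly cited values and are an assumption here, since the report's own ranges were not preserved:

    import cv2
    import numpy as np

    # Synthetic BGR test image: a skin-toned square on a dark background.
    img = np.zeros((100, 100, 3), np.uint8)
    img[30:70, 30:70] = (120, 140, 200)   # roughly skin-like BGR values

    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)  # OpenCV channel order: Y, Cr, Cb
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]

    # Commonly cited skin ranges (an assumption, not values from this report):
    # 77 <= Cb <= 127 and 133 <= Cr <= 173.
    mask = (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
    binary = np.where(mask, 255, 0).astype(np.uint8)  # skin -> white, rest -> black

    # Ratio test: a candidate region passes if white pixels cover >= 30% of it.
    s_prime = int(np.count_nonzero(binary))  # S': white pixels
    s = binary.size                          # S: all pixels
    print(s_prime / s >= 0.3)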

MB-LBP features. In this paper, Multi-scale Block Local Binary Pattern (MB-LBP) features are used instead of Haar-like features. The basic idea of MB-LBP is to replace the simple difference rules of Haar-like features with rectangular regions encoded by the local binary pattern operator. The original LBP is defined for each pixel by thresholding the 3x3 neighborhood pixel values against the center pixel value. To encode rectangles, the MB-LBP operator is defined by comparing the average intensity gc of the center rectangle with the average intensities of the neighborhood rectangles {g0, ..., g8}. In this way it yields a binary sequence.
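A minimal sketch of computing one MB-LBP code over a 3x3 block arrangement (integral-image speedups are omitted; the block size is a parameter):

    import numpy as np

    def mb_lbp_code(img, x, y, block):
        # MB-LBP code at (x, y): compare the mean of each of the 8 neighboring
        # blocks with the mean of the center block, clockwise from the top-left.
        def mean(bx, by):
            return img[by:by + block, bx:bx + block].mean()

        gc = mean(x + block, y + block)   # center rectangle
        offsets = [(0, 0), (1, 0), (2, 0), (2, 1),
                   (2, 2), (1, 2), (0, 2), (0, 1)]
        bits = [1 if mean(x + dx * block, y + dy * block) >= gc else 0
                for dx, dy in offsets]
        return sum(b << i for i, b in enumerate(bits))  # 8-bit binary sequence

    img = np.random.randint(0, 256, (60, 60)).astype(float)
    print(mb_lbp_code(img, 10, 10, block=5))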

Figure: block diagram of the proposed system (original image, face detection, feature extraction, generation and training, expression classification).

Therefore, pixels on the face are filled with white pixels, and other areas in the image are
filled with black pixels. The binary image received after the morphological operation is
further used for face detection. As shown in FIG. 5, a face area in the image is detected,
which is further used to extract facial features.

Figure: processing pipeline (face detection, feature extraction with template, expression matching, expression classification).

Minutiae detection algorithm

The key idea of this approach is to learn descriptors of minutiae neighborhoods, that is, the differences between patch structures with and without minutiae. The main steps of the proposed method for detecting minutiae in latent fingerprints are as follows:

1. Learning feature descriptors: minutia patches and non-minutia patches are extracted separately from fingerprint images. The goal is to learn minutia-patch descriptors and non-minutia-patch descriptors from these local patches using a Stacked Denoising Sparse AutoEncoder (SDSAE).

2. Training the binary classifier: minutiae extraction in a latent print is posed as a binary classification problem: is a given latent patch a minutia patch or a non-minutia patch? The descriptors learned in the previous step are used to represent labeled latent print patches (minutiae and non-minutiae). Two supervised binary classifiers are then learned (one for each descriptor) to classify between minutia and non-minutia patches.

3. Detection of minutia patches: whenever unseen latent patches are provided, the trained classifiers extract and classify minutia and non-minutia descriptors. The final label is obtained by combining the outputs of the two classifiers with a weighted-sum rule. Minutiae extraction over the entire latent print is performed by classifying each local patch as a minutia or non-minutia patch.


Feature extraction techniques:

• Discrete Cosine Transform
• Gabor filter
• Principal Component Analysis
• Linear Discriminant Analysis
CHAPTER 6


SYSTEM DESIGN

This chapter presents a blueprint that describes the tasks of the system. Design is the process of translating requirements into a software description and is one of the fundamental phases of software development. The design phase answers the questions framed by the requirements document and is the basic step in moving from the problem domain to the solution domain. The design can be the most important factor influencing the quality of the project, and it significantly affects the later stages, especially testing and maintenance. The output of this phase is the design document, which essentially describes the modules of the proposed system.

6.1 Architectural Design

This section proposes a method of recognizing human facial expressions for human-robot interaction. To this end, a Bézier curve representing the relationship between feature motion and expression change is used to extract and approximate facial features, especially the eyes and lips. For face detection, color segmentation based on a fuzzy classification scheme is employed to handle the ambiguity of color, and experimental results show that this technique can separate skin regions from non-skin regions. To determine whether a skin area is a face, largest-connectivity analysis is employed. The method can identify the facial expression category as well as the degree to which the facial expression changes. Finally, the system is demonstrated by issuing facial expression commands to a manipulator robot.

Figure: architectural design (pre-processing phase, skin color conversion, eye and mouth contour extraction with genetic operators, precise active appearance model, neuro-fuzzy inference system).

6.2 Level-0 Data Flow Diagram

Level-0 is the first-level DFD and is often referred to as a context-level diagram. It identifies the high-level modules that must meet the high-level requirements specification. This high-level diagram with a single process is called a context diagram, and it is elaborated into several interrelated processes. The Level-0 DFD for the proposed system is shown below.

Figure: Level-0 DFD (facial analysis, K-means, distance between upper and lower feature points).

6.3 Sequence Diagram


A sequence model shows the interactions of a use case. There are two forms of sequence model: scenarios and the more structured format called a sequence diagram. The text format is easy to write, but it does not clearly show the sender and receiver of each message, especially when there are more than two objects.

Figure: sequence diagram (participants: face detection, feature extraction, active appearance model, Euclidean distance, expression detection; messages: lighting compensation algorithm, morphological operations to retain the face, points located on facial features, model files created for all images in the data folder, expressions recognized as happy, sad, anger).

The sequence diagram shows the participants in an interaction and the sequence of messages between them. It demonstrates the collaboration of the system with the actors to realize all or part of a use case. Each actor and the system are represented by a vertical line called a lifeline, and each message is shown as an arrow from sender to receiver. Time proceeds from top to bottom, and the diagram below shows only the sequence of messages, not their exact timing. Each use case requires at least one sequence diagram to describe its behavior, and each sequence diagram demonstrates a specific behavioral sequence of the use case.

The sequence diagram shows the various processes of the system and the interactions between them.

6.4 FLOWCHART

Figure: flowchart of the proposed method (start; conversion from RGB via a forward linear transformation, since color allows fast processing; lighting compensation; feature extraction using the AAM method; generate a new population if the result is not accepted).

6.5 ACTIVITY DIAGRAM


Activity diagrams are another important diagram in UML that describes the dynamic aspects of a
system.

An activity diagram is basically a flow chart showing the flow from one activity to another.
Activities can be described as the operation of the system.

The control flow moves from one operation to another. This flow can be sequential, branched or concurrent, and activity diagrams handle all types of flow control using elements such as fork and join.

Purpose of the activity diagram

The basic purpose of the activity diagram is similar to that of the other four diagrams: it captures the dynamic behavior of the system. The other four diagrams are used to show the flow of messages from one object to another, whereas the activity diagram shows the flow from one activity to another.


Activities are specific operations of the system. Activity diagrams are not only used to visualize the dynamic nature of the system, but are also used to build executable systems using forward and reverse engineering techniques. The only thing missing from the activity diagram is the message part: it does not show any message flow from one activity to another. Activity diagrams are sometimes treated as flowcharts, and although they look like flowcharts, they are not: they show different kinds of flow, such as parallel, branched, concurrent and single. The activity diagram of the proposed system is shown below.

Figure: activity diagram (face detection, color detection and segmentation, active appearance model, Euclidean distance, ANFIS, expression classification).

6.6 SEQUENCE DIAGRAM


The sequence diagram has four objects (Customer, Order, SpecialOrder, and NormalOrder).

The following figure shows the message sequence of the SpecialOrder object, and the same
message sequence can be used in the case of the NormalOrder object. It is important to
understand the chronological order of the message flows. The message flow is nothing more than
a method call to the object.


The first call is sendOrder(), a method of the Order object. The next call is confirm(), a method of the SpecialOrder object. The last call is dispatch(), also a method of the SpecialOrder object. The figure mainly describes the method calls from one object to another, which is also how the system actually behaves at runtime.

Figure: sequence diagram of the proposed system (participants: face detection, feature extraction, active appearance model, Euclidean distance, expression detection) with the message sequence:

1: Lighting compensation algorithm
2: Morphological operations to retain the face
3: The points on facial features
4: Created for all the images in the data folder
5: Expressions are recognized as happy, sad, anger

6.7 USE-CASE DIAGRAM


Use case diagrams are very useful for visualizing the functional requirements of a system, which translate into design choices and development priorities. They also help identify any internal or external factors that may affect the system and should be considered. They provide a good high-level analysis from outside the system, and they specify how the system interacts with the actors without worrying about the details of how the functionality is implemented.


Figure: use-case diagram (actor: user; use cases: face detection, feature extraction, active appearance model, recognition, expression recognition, Euclidean distance).

CHAPTER 7

SYSTEM IMPLEMENTATION

Expression recognition using the Euclidean distance method

In this method, the database consists of a training image set and a test image set. For a given subject, the training and test sets include images of different expressions such as neutral, happy, sad, angry, fear, disgust and surprise. Using the AAM method, points on the facial features are located in all of these images and stored as data files consisting of the relative x-y coordinates of those anchor points. When a test image is given as input, the system finds the Euclidean distance between the points of the test image and the points of each training image.
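A minimal sketch of this matching step (the landmark arrays below are placeholders for coordinates loaded from the AAM data files, and the 58-point count is only an assumed example):

    import numpy as np

    def expression_match(test_points, training_sets):
        # Return the expression label whose landmark points are closest
        # (smallest Euclidean distance) to the test image's landmark points.
        best_label, best_dist = None, float("inf")
        for label, train_points in training_sets.items():
            dist = np.linalg.norm(test_points - train_points)  # Euclidean distance
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

    # Placeholder landmark data: 58 (x, y) points per image.
    rng = np.random.default_rng(1)
    training = {e: rng.random((58, 2)) for e in
                ["neutral", "happy", "sad", "angry", "fear", "disgust", "surprise"]}
    test = rng.random((58, 2))
    print(expression_match(test, training))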


Expression recognition using ANFIS

In order to improve the recognition rate of the system, an adaptive neuro-fuzzy inference system (ANFIS) is used to further refine the third stage. In this method, either a still image or a video input can be given for testing the expression. An automatic facial expression recognition system based on neuro-fuzzy inference is proposed to recognize human facial expressions such as happiness, fear, sadness, anger, disgust and surprise. Initially, videos displaying the different expressions are split into frames, and the sequence of selected images is stored in a database folder. Using the AAM method, the features of all images are located and stored in the form of .ASF files, and an average shape is created for all the images in the data folder.

Face recognition

It is a computer application for automatically identifying or verifying people from digital images
or video frames.

Program steps: data acquisition, input processing, face image classification and decision making.

Applications: voter verification, ATM banking, mobile passwords.



Facial expression recognition

It is a computer application for identifying the facial expressions of a person from images, video clips or live capture.

Program steps: face detection, feature extraction and expression classification.

Applications: healthcare, games.

Classification using deep learning

Deep learning is a branch of machine learning that uses various relatively deep neural networks to solve problems in fields such as NLP, computer vision and bioinformatics. Deep learning has experienced a huge and continuing research renaissance and has delivered state-of-the-art results in many applications.

Deep residual networks address part of these problems by using residual blocks, which preserve the input through shortcut connections. With a deep residual learning framework, network depths that pose real training challenges for other architectures can be explored. Choosing the depth of a deep network is difficult: if the network is too deep, errors are hard to propagate back effectively; if it is too shallow, the model may not learn enough representational power. In a deep residual network, however, deeper training is safe, gaining sufficient learning capacity without degradation problems, because in the worst case a redundant layer can simply learn an identity mapping without doing any harm.

This is achieved when the solver drives the weights of a block's ReLU layers toward zero, so that only the shortcut connection is active and the block acts as an identity map. Although this is not a theoretical proof, driving the weights to near zero may be easier for the solver than fitting a strong new representation. The working model used for image classification here is ResNet50, a 50-layer network structure, as shown in the figure. Inference with the trained deep neural network proceeds like the forward pass of a multi-layer network: the image is passed as data layer by layer until the output layer produces the classification result. At the first stage, ResNet50 applies a 7x7 convolution with stride 2, followed by a stride-2 pooling layer. It is then followed by stacks of identity blocks interleaved with stride-2 downsampling stages; each downsampling layer is also a convolutional layer, but without an identity connection. This continues for many layers. The last layer is a global average pool, which reduces the 1000 feature maps (for ImageNet data) to one value each; the resulting 1000-dimensional vector is fed directly to the softmax layer, so the network is fully convolutional, and we finally obtain the class to which the image belongs.
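A sketch of classification with a pretrained ResNet50; torchvision is an assumed dependency here, since the report does not name a framework:

    import torch
    from torchvision import models, transforms

    # Pretrained ResNet50 (torchvision assumed available; weights download
    # on first use).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.eval()

    # Standard ImageNet normalization applied to a synthetic 224x224 RGB input;
    # a real pipeline would load and resize an image here.
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])
    x = normalize(torch.rand(3, 224, 224)).unsqueeze(0)

    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)   # 1000-way softmax over classes
    print(int(probs.argmax()))                    # predicted ImageNet class index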

CASE: Computer-Aided Software Engineering (CASE) is the use of software tools to assist in the development and maintenance of software. The tools used for this purpose are known as CASE tools.

CASE tool

1. A CASE tool is a computer-based product intended to support one or more software engineering activities within a software development process.

2. Computer-aided software engineering tools are used in every phase of the development of an information system, including analysis, design and programming. For example, data dictionaries and diagramming tools help in the analysis and design phases, while application generators accelerate the programming phase.

3. CASE tools provide automated methods for designing and documenting traditional structured programming techniques. The ultimate goal of CASE is to provide a language for describing the overall system that is sufficient to generate all the necessary programs.

CLASSIFICATION OF CASE TOOLS

Existing CASE tools can be classified along 4 different dimensions:

1. Life-cycle support.

2. Integration dimension.

3. Construction dimension.

4. Knowledge-based CASE dimension.

Let us consider the meaning of these dimensions, with examples, one by one:

CASE tools based on life cycle

This dimension classifies CASE tools according to the life-cycle activities of information systems that they support. They can be classified as upper or lower CASE tools.

• Upper CASE tool: a computer-aided software engineering (CASE) tool that supports software development activities upstream of implementation. Upper CASE tools focus on the analysis phase (and sometimes also the design phase) of the software development life cycle: diagramming tools, report and form generators, and analysis tools.

• Lower CASE tool: a CASE tool that directly supports implementation (programming) and integration tasks. Lower CASE tools support the generation of database schemas, program generation, implementation, testing and configuration management.

Integration dimension

Three main levels of CASE integration have been proposed:

1. CASE frameworks.

2. ICASE tools: tools that integrate upper and lower CASE, for example making it possible to design a form and generate the compatible database at the same time. An ICASE environment is an automated system development environment that provides numerous tools for creating diagrams, forms and reports, offers analysis, report generation and code generation facilities, and seamlessly shares and integrates data across all the tools.


3. Integrated project support environment (IPSE)

Types of CASE tools

The general types of CASE tools are listed below:

1. Diagramming tools: they allow system processes, data and control structures to be represented graphically.

2. Screen and report generators: they help create a prototype of how systems look and feel, and they make it easier for the systems analyst to identify data requirements and relationships.

3. Analysis tools: they automatically check for incomplete, inconsistent or incorrect specifications in diagrams, forms and reports.

4. Central repository: it allows the integrated storage of specifications, diagrams, reports and project management information.

5. Documentation generators: they produce technical and user documentation in standard formats.

6. Code generators: they allow the automatic generation of program and database definition code directly from design documents, diagrams, forms and reports.

Functions of a CASE tool

1. Analysis: CASE analysis tools automatically check for incomplete, inconsistent or incorrect specifications in diagrams, forms and reports.

2. Design: this is where the technical design of the system is created, by designing the technical architecture (choosing among the telecommunications, hardware and software architectures the ones that best suit the organization's system and future needs) and by designing the system model (graphically creating a model from the graphical user interface, screen design and databases to the placement of objects on the screen).

3. Code generation: the CASE code generation tools allow the automatic generation of program and database definition code directly from documents, diagrams, forms and reports.

4. Documentation: the CASE documentation tools have documentation generators to produce technical and user documentation in standard forms. Each phase of the SDLC produces documentation, and the types of documentation that flow from one phase to the next vary according to the organization, the methodologies used and the type of system being built.

CASE environments

An environment is a collection of CASE tools and workbenches that support the software
process. CASE environments are classified according to the integration approach.

1. Toolkits.

2. Language-centered.

3. Integrated.

4. Fourth generation.

5. Process-centered.

Toolkits

Toolkits are loosely integrated collections of products that are easily extended by aggregating different tools and workbenches. Typically, the support provided by a toolkit is limited to programming, configuration management and project management. A toolkit is an environment extended from basic sets of operating system tools, for example the Unix Programmer's Work Bench and the VMS VAX Set. In addition, the loose integration of a toolkit requires the user to activate tools by explicit invocation or simple control mechanisms. The resulting files are unstructured and could be in different formats, so accessing a file from different tools may require explicit file-format conversion. However, since the only constraint for adding a new component is the file format, toolkits can be easily and incrementally extended.


Language-centered

The environment itself is written in the programming language for which it was developed, allowing users to reuse, customize and extend the environment. The integration of code written in different languages is a major problem for language-centered environments, and the lack of process and data integration is also a problem. The strengths of these environments include a good level of presentation and control integration. Interlisp, Smalltalk, Rational and KEE are examples of language-centered environments.

Integrated

These environments achieve presentation integration by providing uniform, standardized and consistent interfaces for the tools and the workbench. Data integration is achieved through the repository concept: they have a specialized database that manages all the information produced and accessed in the environment. Examples of integrated environments are the ICL CADES system, IBM AD/Cycle and DEC Cohesion.

Fourth generation

Fourth-generation environments were the first integrated environments. They are sets of tools and workbenches supporting the development of a specific class of program: electronic data processing and business-oriented applications. In general, they include programming tools, simple configuration management tools, document handling facilities and, sometimes, a code generator to produce code in lower-level languages. Informix 4GL and Focus fall into this category.

Process-centered

Environments in this category focus on process integration, with the other integration dimensions as starting points. A process-centered environment operates by interpreting a process model created by specialized tools. It generally consists of tools that handle two functions:

• execution of the process model, and

• production of the process model.

Examples are East, Enterprise II, Process Wise, Process Weaver and Arcadia.

Advantages and disadvantages of CASE tools

Advantages:
• Helps standardize notations and diagrams.
• Helps communication between members of the development team.
• Automatically checks the quality of the models.
• Reduces time and effort.
• Improves the reuse of models or components.

Disadvantages:
• Limits the flexibility of documentation.
• May lead to restriction to the capabilities of the tool.
• Main danger: integrity and syntactic correctness do NOT mean compliance with the requirements.
• Costs associated with the use of the tool: purchase plus training.
• Staff resistance to CASE tools.


CHAPTER 8

SOFTWARE TESTING

Software testing is the process of executing a program with the intent of finding errors.
It is an essential part of software quality assurance, providing the final review of the
system's specification, design and coding. Testing is the last chance to uncover serious
defects before the software is delivered. In unit testing, individual program components
such as functions, objects or modules are exercised to discover defects or errors. In
integration testing, these components are combined to form the complete system; at this
stage, testing concentrates on whether the assembled structure conforms to its functional
specification and does not behave in unexpected ways. Test data are the inputs devised to
exercise the system, and a test case is an input to the system together with the output
expected if the system operates according to its specification. Test cases should therefore
be chosen so that valid inputs produce the expected outputs, invalid or out-of-range inputs
produce an appropriate error message, and inputs that occur only rarely are handled as
exceptions.

8.1 Test Environment


The software was tested on the following platform:


 OS : Windows 10 Pro, 64-bit operating system
 Processor : Intel® Core™ i5-4200 CPU @ 2.30 GHz
 Memory : 4 GB RAM
 Hard drive : 4.47 GB for the installation and 51 GB or more for storage

8.2 Testing Principles


The basic criteria for successful software testing are:
 A good test case is one that has a high probability of finding an as-yet-undiscovered
error.
 A successful test is one that uncovers such an undiscovered error.
 All tests should be traceable to the customer's requirements.
 Tests should be planned well before testing begins.
 Testing should begin "in the small" and progress toward testing "in the large".
 Exhaustive testing is not possible.

8.3 Testing Strategies


Common test types include unit testing, black-box testing, white-box testing and
integration testing. Unit testing is done after an individual module is completed and
shown to be executable, and it is limited to the designer's requirements. The individual
components are tested to ensure that they work correctly; each component is tested
independently, without the other system components, by preparing valid test data for
each module and checking the results against the expected output. Unit testing is focused
on the smallest unit of the software design, the module. It is a method of testing the
functionality of the smallest testable piece of a design, isolating it from the rest of the
system by simulating all the external interfaces of the component. It is also known as
white-box, glass-box, component or module testing. The unit under test should be
exercised structurally as well as functionally, checking whether the software unit
correctly implements the detailed design of the unit. The most important goals of unit
testing are:
 Verify the functionality of the module under test.
 Verify its interfaces and boundary conditions.
 Check that the code conforms to the design, including its data structures.
 Find previously undiscovered errors.
 Ensure conformance to software standards.
 Test the lowest-level (most independent) units within their limits.
 Exercise every path through the module.

Unit testing is also known as white-box testing. White-box testing is a testing method
that considers the internal structure of a system or component. It is called white-box
testing because the complete internal structure and working of the code are visible to the
tester.
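
As a small illustration in this spirit, the sketch below unit-tests a hypothetical helper that
enforces the 32×32 minimum face size mentioned later in this report; the function name
and its contract are assumptions made for the example, not part of the actual system:

    import unittest
    import numpy as np

    def is_detectable_size(region, min_side=32):
        """Reject regions smaller than 32x32, where detection is unreliable."""
        h, w = region.shape[:2]
        return h >= min_side and w >= min_side

    class TestDetectableSize(unittest.TestCase):
        def test_accepts_minimum_size(self):
            self.assertTrue(is_detectable_size(np.zeros((32, 32))))

        def test_rejects_undersized_region(self):
            self.assertFalse(is_detectable_size(np.zeros((31, 40))))

    if __name__ == "__main__":
        unittest.main()

Each test exercises the unit in isolation, with no other framework parts involved, exactly
as described above.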

Qualitative Assessment
This section presents two fingerprint examples and their minutiae-extraction results.
Figure 6 shows minutiae extraction on a low-quality fingerprint, while Figure 7 shows
minutiae extraction on a good-quality fingerprint. Considering the minutiae extracted by
NIST and DP on the low-quality fingerprint, it is clear that dryness affects the success of
these extractors; in this case, MENet is able to detect the largest number of correct
minutiae.

The ideal outcome is a joint minimization of false and undetected minutiae, but this is
a very challenging task. Future work may add an improved MENet post-processing
stage that takes local characteristics and quality measurements into account when
thresholding detection blobs.

A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the
users and to other systems through its outputs. In output design, it is determined how the
information is to be displayed for immediate need as well as for hard-copy output. Output
is the most important and direct source of information for the user. Intelligent and
efficient output design improves the system's relationship with its users and helps them
make decisions.


1. The design of computer output should proceed in an organized, well-thought-out
manner; the right output must be developed while ensuring that each output element is
designed so that people find the system easy and effective to use. When analysts design
computer output, they should identify the specific output that is needed to meet the
requirements.

2. Select methods to present information.


3. Create documents, reports or other formats that contain information produced by the
system.
The output form of an information system should accomplish one or more of the
following objectives:
 Convey information about past activities, current status or projections of the future.
 Signal important events, opportunities, problems or warnings.
 Trigger an action.
 Confirm an action.

CHAPTER 9

RESULTS AND ANALYSIS

The author proposes five distances:


• The distance between the upper eyelid and the lower eyelid;
• The distance between the inner corner of the eye and the inner corner of the
eyebrow;
• The width of the mouth opening, i.e., the distance between the left and right
mouth corners;
• The height of the mouth opening, i.e., the distance between the upper lip and
the lower lip;
• The distance between the corner of the mouth and the corresponding outer
corner of the eye.
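
Once the landmark points are available, each of these distances reduces to a Euclidean
norm between two (x, y) coordinates. The following sketch assumes the landmarks have
already been extracted; the landmark names and coordinate values are purely illustrative:

    import numpy as np

    def dist(p, q):
        """Euclidean distance between two (x, y) landmark points."""
        return float(np.hypot(p[0] - q[0], p[1] - q[1]))

    # Hypothetical landmark positions from the feature-extraction stage.
    lm = {"upper_eyelid": (120, 90), "lower_eyelid": (120, 102),
          "eye_inner": (110, 95), "brow_inner": (108, 80),
          "mouth_left": (100, 150), "mouth_right": (140, 150),
          "lip_top": (120, 144), "lip_bottom": (120, 158),
          "mouth_corner": (100, 150), "eye_outer": (95, 95)}

    features = [
        dist(lm["upper_eyelid"], lm["lower_eyelid"]),  # eye opening
        dist(lm["eye_inner"], lm["brow_inner"]),       # eye-eyebrow distance
        dist(lm["mouth_left"], lm["mouth_right"]),     # mouth width
        dist(lm["lip_top"], lm["lip_bottom"]),         # mouth height
        dist(lm["mouth_corner"], lm["eye_outer"]),     # mouth-eye distance
    ]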

To extract these distances, the sequence of steps below is followed, which is detailed
later in this chapter:
• Extract faces using the Haar classifier in the OpenCV library;
• Rotate the face so that the line connecting the eyes is horizontal;
• For each eye, use Bezier curves to identify and approximate the exact eye contour;
• Extract three distance features for each eye and two features associated with the
mouth.
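
The first two steps can be sketched with OpenCV's bundled Haar cascades. This is a
minimal sketch under stated assumptions (the cascade file names are those shipped with
opencv-python; error handling and multi-face support are omitted):

    import cv2
    import numpy as np

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def detect_and_align(gray):
        """Detect the first face and rotate it so the eye line is horizontal."""
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        if len(eyes) < 2:
            return roi  # cannot estimate rotation without both eyes
        # Eye centers, sorted left to right.
        centers = sorted((ex + ew // 2, ey + eh // 2) for ex, ey, ew, eh in eyes[:2])
        (lx, ly), (rx, ry) = centers
        angle = np.degrees(np.arctan2(ry - ly, rx - lx))
        M = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
        return cv2.warpAffine(roi, M, (w, h))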

Using neural networks for facial analysis

The 269 extracted patterns were classified using feed-forward neural networks with
multiple hidden layers. To group facial expressions into categories, K-means clustering
is used. This kind of clustering is needed to better divide the portrait images into one of
six classes: anger, disgust, fear, happiness, neutrality and sadness. Data analysis was
performed in Matlab. For the 269 input vectors, a table defining the clustering
silhouettes for different numbers of clusters was obtained.
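
Although the original analysis was done in Matlab, the silhouette table can be reproduced
with an equivalent sketch in Python using scikit-learn; X here is only a placeholder for
the real 269 feature vectors:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    X = np.random.rand(269, 5)  # placeholder for the 269 extracted feature vectors

    # Mean silhouette for different numbers of clusters, as in the report's table.
    for k in range(2, 8):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        print(k, round(silhouette_score(X, labels), 3))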

Skin Color Detection


Skin color can be considered a good feature for detecting human faces. Because color
allows fast processing and is highly robust to geometric changes in facial patterns, skin
color has proven to be a useful and robust cue for face detection, localization and
tracking.

Image content filtering, content-aware video compression and image color-balancing
applications can also benefit from the automatic detection of skin in images.

One category of color spaces comprises the orthogonal color spaces used in television
transmission, including YUV, YIQ and YCbCr. YIQ is used for NTSC television
broadcasting, while YCbCr is used for JPEG image compression and MPEG video
compression. In the YCbCr color space, the Y component carries luminance
information, the Cr component carries red chrominance information, and the Cb
component carries blue chrominance information.


One advantage of these color spaces is that most video media are already encoded in
them. All of them separate the luminance channel (Y) from two orthogonal chrominance
channels (UV, IQ, CbCr).

Therefore, unlike RGB, the position of skin tones in the chrominance channels is not
affected by changes in illumination intensity. In the chrominance plane, skin color
typically forms a compact, roughly elliptical cluster. This makes it possible to build a
skin detector that is invariant to illumination intensity using a simple classifier.
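
A minimal sketch of such a chrominance-based skin classifier in Python/OpenCV follows;
the Cr/Cb bounds are commonly quoted example values, not thresholds taken from this
report, and would need tuning for a given dataset:

    import cv2

    def skin_mask_ycrcb(bgr):
        """Threshold the Cr/Cb chrominance channels; Y is ignored, so the
        classifier is largely invariant to illumination intensity."""
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        # Assumed bounds for the elliptical skin cluster.
        mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        # Morphological opening and closing remove speckle and fill small holes.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)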

A histogram can show the different color models of two different images. The histogram
of an image represents the relative frequency of occurrence of the various gray levels in
the image. The results obtained show that, for any type of skin, the gray-level
distributions of Cb and Cr fall within a limited range of pixel values.
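
These per-channel histograms can be computed directly with OpenCV; the sketch below
(the input file name is hypothetical) prints the range in which the Cr and Cb mass is
concentrated. Note that OpenCV orders the channels Y, Cr, Cb:

    import cv2
    import numpy as np

    bgr = cv2.imread("face.jpg")  # hypothetical input image
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    for idx, name in [(1, "Cr"), (2, "Cb")]:
        hist = cv2.calcHist([ycrcb], [idx], None, [256], [0, 256]).ravel()
        occupied = np.nonzero(hist > 0.05 * hist.max())[0]
        print(name, "bulk of pixels between", occupied[0], "and", occupied[-1])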

SCREENSHOTS

CONCLUSION AND FUTURE WORK

Automated facial expression recognition systems have a wide range of applications in
psychological research and human-computer interaction. Facial expressions play an
important role in interpersonal relations because they reveal a person's emotional state,
affective activity, personality, intention and state of mind. The proposed system consists
of three modules. The face detection module is based on an image segmentation
technique in which a given image is converted to a binary image that is then used for
face detection. Among color models such as RGB, YCbCr and HSI, YCbCr was found to
give the most robust binary image.

The first stage, face detection, was tested on 105 image samples, of which 95 were
detected correctly. According to the test results, false face detection occurs when the
image quality is low or the face is smaller than 32×32 pixels. The AAM (Active
Appearance Model) method combines the shape and texture information of the face
image and was therefore found to facilitate feature extraction.

For expression recognition, the Euclidean distance method is useful for still images but
requires considerable manual work; its recognition rate is 90-95%. ANFIS (Adaptive
Neuro-Fuzzy Inference System) was therefore used as a further improvement, since it
copes with the ambiguity of real-time or noisy images. In this system, both still images
and video can be given as input and tested for the different expressions. In addition, the
system works accurately on person-independent databases. The accuracy of facial
expression recognition varies with the number of training samples; with a large number
of training samples, the recognition rate approaches 100%. The neuro-fuzzy approach to
facial expression recognition is suitable for real-time applications such as human
sentiment analysis, human-computer interaction, surveillance, online conferencing and
entertainment.
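
As a sketch of the Euclidean-distance stage described above, classification can be written
as a nearest-template rule over the five distance features; the template vectors below are
placeholders that would in practice come from training images:

    import numpy as np

    EXPRESSIONS = ["happy", "sad", "anger", "fear", "disgust", "surprise"]

    def classify(features, templates):
        """Return the expression whose mean feature vector is nearest in L2."""
        d = [np.linalg.norm(np.asarray(features) - t) for t in templates]
        return EXPRESSIONS[int(np.argmin(d))]

    # Placeholder per-expression mean vectors (five distance features each).
    templates = [np.random.rand(5) for _ in EXPRESSIONS]
    print(classify([0.4, 0.2, 0.6, 0.3, 0.5], templates))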

The proposed work can be extended by recognizing additional expressions beyond the
six universal ones (anger, fear, disgust, joy, surprise, sadness). Classifying other facial
expressions may require the extraction and tracking of additional facial points and
corresponding features. The system can also be improved by using a larger training set
covering a wider range of poses and low-quality images.

BIBLIOGRAPHY

1. P. Ekman, T. Huang, T. Sejnowski, J. Hager, "Final Report to NSF of the
Planning Workshop on Facial Expression Understanding", 1992.

2. Ewa Piatkowska, "Facial Expression Recognition System", Master's Thesis
Technical Report.


3. Kai-biao Ge, Jing Wen, Bin Fang, "AdaBoost Algorithm Based on MB-LBP
Features with Skin Color Segmentation for Face Detection", Proceedings of the
2011 International Conference on Wavelet Analysis and Pattern Recognition,
Guilin, 10-13 July 2011.

4. Kamarul Hawari, Bin Ghazali, Jie Ma, Rui Xiao, "An Innovative Face Detection
based on Skin Color Segmentation", International Journal of Computer
Applications (0975-8887), Volume 34- No.2, November 2011.

5. G.J. Edwards, T.F. Cootes, and C.J. Taylor, "Face recognition using active
appearance models", Proceedings of the European Conference on Computer
Vision, 1998.

6. Aliaa A. A. Youssif and Wesam A. A. Asker, "Automatic Facial Expression


Recognition System Based on Geometric and Appearance Features", Computer &
Information Journal, 2011, Volume 4, Issue 2, Page 115.

7. V. Gomathi, Dr. K. Ramar, and A. Santhiyaku Jeevakumar, "A Neuro Fuzzy
approach for Facial Expression Recognition using LBP Histograms",
International Journal of Computer Theory and Engineering, Vol. 2, No. 2,
April 2010.
8. Jizheng, Xia, Lijang, Yuli, Angelo; “Facial expression recognition considering
differences in facial structure and texture”, IET Computer Vision 2013.

9. Jiawei, Congting, Hongyun, Zilu; “Facial expression recognition based on


completed local binary pattern and sparse representation”, Ninth International
Conference on Natural Computation (ICNC) 2013.

10. Jizheng, Xia, Yuli, Angolo; “Facial expression recognition based on t-SNE and
adaboost M2”, IEEE International Conference on Green Computing and
Communications and IEEE Internet of Things and IEEE Cyber, Physical and
Social Computing 2013.


11. J.J. Lee, Md. Zia Uddin, T.S. Kim; “Spatiotemporal human facial expression
recognition using fisher independent component analysis and hidden markov
model”, 30th Annual International IEEE EMBS Conference 2008.

12. Weifeng, Caifeng, Yanjiang; “facial expression recognition based on


discriminative distance learning”, 21st International Conference on Pattern
Recognition (ICPR 2012).

13. Ying, Zhang; “facial expression recognition based on NMF and SVM”,
International Forum on Information Technology and Applications 2009.

14. Anagha, Dr. Kulkarni; "Facial detection and facial expression recognition
system", International Conference on Electronics and Communication Systems
(ICECS 2014).

15. Claude C. Chibelushi, Fabrice Bourel; “Facial Expression Recognition: A Brief


Tutorial Overview”, 2002.

16. G. Hemalatha, C.P. Sumathi; "A Study of Techniques for Facial Detection and
Expression Classification", International Journal of Computer Science &
Engineering Survey (IJCSES), 2014.

17. Deepti, Archana, Dr. Jagathy; “Facial expression recognition using ANN”, IOSR
Journal of Computer Engineering 2013.

18. Banu, Danciu, Boboc, Moga, Balan; “A novel approach for face expression
recognition”, IEEE 10th Jubilee International Symposium on Intelligent Systems
and Informatics 2012.


19. Wang Zhen, Ying Zilu; “Facial expression recognition based on adaptive local
binary pattern and sparse representation”, 2012 IEEE.

