Face Detection
CHAPTER 1
INTRODUCTION
Faces play an important role in social communication. Facial biometrics is used in many applications such as security, forensics, and other commercial applications. Similarly, facial expressions are the fastest way to communicate information of any kind. Six basic expressions, happiness, sadness, anger, fear, disgust, and surprise, are easily identified across different cultures. An Automatic Facial Expression Recognition System (AFERS) is designed as a human-computer interface. A powerful AFER system can be applied to many areas such as mood detection, clinical psychology, and pain assessment.
The objectives of the system are:
1. Detect the face in the given input image or video frame.
2. Extract facial features such as the eyes, nose, and mouth from the detected face.
3. Classify facial expressions into different classes such as happiness, anger, sadness, fear, disgust, and surprise.
Face detection is a special case of object detection. The proposed face detection system uses skin color detection and segmentation. It also involves an illumination compensation algorithm and morphological operations to retain the face region. To extract facial features, an Active Appearance Model (AAM) is used. Finally, expressions are classified as happiness, sadness, anger, fear, disgust, and surprise, initially using the Euclidean distance method and then by training an Artificial Neuro-Fuzzy Inference System (ANFIS).
1. Face detection: Face detection refers to determining whether a picture contains a face. For this, we need to define the general structure of the face. Fortunately, faces do not vary greatly from each other; we all have a nose, eyes, forehead, chin, and mouth, and together these constitute the general structure of the face. This makes face detection a two-class classification problem (face versus non-face).
2. Face detection can be considered a specific case of object class detection. In object class detection, the task is to find the position and size of all objects in the image that belong to a given class.
Usually, the face is a combination of bones, facial muscles, and skin tissue. When the muscles contract, they deform the facial features. Facial expressions are the fastest means of communication. Ekman and Friesen reported that facial expressions act as rapid signals produced by the contraction of facial features such as the eyebrows, lips, eyes, and cheeks, and they identified six basic expressions: happiness, sadness, fear, disgust, anger, and surprise. Recognizing these expressions involves three steps: face detection, feature extraction, and classification of the expression into one of the facial expression classes.
Facial expression recognition consists of three main steps: (1) face detection and image
preprocessing, (2) feature extraction, and (3) expression classification. The purpose of
this paper is to understand the basic differences between face recognition and facial expression recognition and to review existing models. The paper is divided into six parts, the second part of which
includes the basic terms necessary to understand facial recognition and facial expression
recognition. The third part of the paper includes the distinction between face recognition
and facial expression recognition. The fourth part explains the procedure followed to
identify facial expressions. The fifth section includes a review of ten previous studies on expression recognition using various techniques. The sixth part concludes that the facial expression recognition rate is about 90%, calculated from the collected reviews. Finally, the future scope is discussed in the seventh section.
In this article, we present the first steps toward such a system: one that automatically finds faces in the video stream and encodes facial expression dynamics.
Chapter 2 includes the Literature survey, which details the survey conducted
before starting the project.
Chapter 3 explains the Existing system and also gives its limitations.
Chapter 4 gives an overview of boosting algorithms.
Chapter 5 discusses the proposed system based on the AAM model.
Chapter 6 covers System design and gives details of how the proposed method is
structured.
Chapter 7 discusses the System implementation details.
Chapter 8 covers Software testing and the experimental results.
Chapter 9 focuses on future work and contains the concluding statement of the
proposed work.
CHAPTER 2
LITERATURE SURVEY
A literature survey is generally carried out with the objective of analyzing the background of the present project. It finds defects in existing systems and helps in working out solutions for unsolved issues. The following topics present the background of the project and a detailed study of the key concepts and existing solutions related to it, which motivated this work.
MB-LBP-based Adaboost algorithm with skin color segmentation for face detection:
Face detection is a complex and important problem in pattern recognition, and it is also widely used. Effective, real-time face detection became possible with Viola and Jones's rectangular Haar feature method learned by AdaBoost. This paper proposes a face detection algorithm based on AdaBoost, but extracts MB-LBP features instead of Haar features to train the AdaBoost classifiers. In order to reduce the false positive rate, skin color segmentation is combined with AdaBoost.
Recognizing faces from images is very challenging due to the diversity of faces and the
uncertainty of face position. Face detection in color images and video sequences has attracted increasing attention. In this paper, we propose a novel face
detection framework that can achieve better detection rates. The face detection algorithm is based on a skin color model in the YCgCr chromaticity space and the HSV color space. First, we build
a skin Gaussian model in the Cg-Cr color space and then use some constraints to get the
face candidates. Second, the calculation of the correlation coefficient is performed
between the given template and the candidate. The experimental results show that our
system achieves high detection rate and low false positive rate in various face variations
of color, position and different illumination conditions.
Automated facial expression analysis is an interesting and challenging issue that affects
important applications in many areas, such as human-computer interaction and data-
driven animation. Obtaining effective facial representation from the original facial image
is an important step in successful facial expression recognition. In this paper, we evaluate
the facial expression recognition based on statistical local features and local binary
patterns. Different machine learning methods were systematically checked on several
databases. A large number of experiments have shown that LBP features are effective and
efficient for facial expression recognition. We further developed Boosted-LBP to extract
the most discriminative LBP features and obtain the best recognition performance by
using the support vector machine classifier with Boosted-LBP features. In addition, we
study LBP features for low-resolution facial expression recognition, which is a key issue,
but is rarely addressed in existing work. We observed in experiments that LBP features perform stably and robustly over a useful range of low-resolution face images and
produce promising performance in compressed low-resolution video sequences captured
in real-world environments.
CHAPTER 3
EXISTING SYSTEM
The present system is described and the flaws of the existing system are also discussed.
The techniques used in the present system are also discussed. It describes the necessity of
a new improved system and the limitations of existing system are also highlighted.
The objectives of the system are:
1. To detect the face in the given input image.
2. To extract facial features such as the eyes, nose, and mouth from the detected face.
3. To classify facial expressions into different classes such as Happy, Angry, Sad, Fear, Disgust, and Surprise.
In the proposed system, face detection is implemented by skin color detection and segmentation. Facial expressions reveal the affective state, cognitive activity, personality, intention, and psychological state of a person. The proposed system consists of three modules. The face detection module is based on an image segmentation technique in which the given image is converted into a binary image that is further used for face detection.
CHAPTER 4
As a data scientist in the consumer industry, I generally find that for most predictive learning tasks, boosting algorithms are enough, at least so far. They are powerful and flexible, and their results can be interpreted well with some techniques. Therefore, it is worth reading some material and writing about boosting algorithms.
AdaBoost
AdaBoost is short for "adaptive boosting" and is the first practical boosting algorithm, proposed by Freund and Schapire in 1996. It focuses on classification problems and aims to convert a set of weak classifiers into a strong classifier.
For any classifier with an accuracy greater than 50%, the weight is positive. The more accurate the classifier, the greater the weight. For classifiers with less than 50% accuracy, the weight is negative, which means we combine their predictions by flipping the sign. For example, we can turn a classifier with 40% accuracy into one with 60% accuracy by flipping the sign of its predictions. Therefore, even a classifier that performs worse than random guessing still contributes to the final prediction. We just do not want any classifier with exactly 50% accuracy, since it adds no information and therefore contributes nothing to the final prediction.
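In the standard AdaBoost formulation, this intuition is captured by the classifier weight formula (a textbook result, stated here for completeness rather than taken from this report). For a weak classifier m with weighted error rate $\epsilon_m$, the weight is

$\alpha_m = \frac{1}{2} \ln\left( \frac{1 - \epsilon_m}{\epsilon_m} \right)$

so $\alpha_m > 0$ when $\epsilon_m < 0.5$, $\alpha_m < 0$ when $\epsilon_m > 0.5$ (which is equivalent to flipping the predictions), and $\alpha_m = 0$ exactly at $\epsilon_m = 0.5$, where the classifier carries no information.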
This section is based on the paper "Additive Logistic Regression: A Statistical View of Boosting"; see the original article for more details. In 2000, Friedman et al. developed a statistical view of the AdaBoost algorithm, interpreting AdaBoost as a stagewise estimation procedure for additive logistic regression models.
Boosting is a general ensemble method that creates a strong classifier from many weak classifiers. This is done by building a model from the training data and then creating a second model that tries to correct the errors of the first. Models are added until the training set is predicted perfectly or a maximum number of models is reached. AdaBoost was the first truly successful boosting algorithm developed for binary classification, and it is the best starting point for understanding boosting. Modern boosting methods build on AdaBoost, most notably stochastic gradient boosting machines.
This section lists some of the heuristics for preparing the best data for AdaBoost.
• Quality data: Because the ensemble method keeps trying to correct misclassifications in the training data, you need to make sure the training data is of high quality.
• Outliers: Outliers force the ensemble down a rabbit hole of correcting unrealistic cases. These can be removed from the training data set.
• Noisy data: Noisy data, especially noise in the output variable, can cause problems. If possible, try to isolate and clean it from the training data set.
This chapter explains the ensemble approach, the AdaBoost algorithm, and how to build an AdaBoost model in Python. In recent years, boosting algorithms have gained widespread popularity in data science and machine learning competitions; most winners of these competitions use boosting algorithms to achieve high accuracy. These data science competitions provide a global platform for learning, exploring, and providing solutions to a variety of business and government problems. The Boosting
algorithm combines multiple low-precision (or weak) models to create high-precision (or
strong) models. It can be used in a variety of areas such as credit, insurance, marketing
and sales. Boosting algorithms such as AdaBoost, Gradient Boosting, and XGBoost are widely used machine learning algorithms for winning data science competitions.
This section introduces the AdaBoost ensemble boosting algorithm, starting from the more general idea of ensembles. Bagging stands for bootstrap aggregation. It combines multiple learners in a way that reduces the variance of the estimate. For example, a random forest trains M decision trees on M different random subsets of the data and votes on the final prediction. Ensemble methods based on bagging include random forests and extra trees.
AdaBoost classifier
It works as follows:
1. Initially, AdaBoost selects a training subset randomly.
2. It iteratively trains the AdaBoost machine learning model by selecting the training set based on the accuracy of the previous training.
3. It assigns higher weights to wrongly classified observations so that they have a greater probability of being selected in the next iteration.
4. It also assigns a weight to the trained classifier in each iteration based on the accuracy of the classifier: a more accurate classifier gets a higher weight.
5. This process iterates until the complete training data fits without error or the specified maximum number of estimators is reached.
In the modeling part, you can use the IRIS dataset, which is a very well-known multi-class classification problem. The dataset includes four features (sepal length, sepal width, petal length, petal width) and a target (flower type). The data contain three types of flowers: Setosa, Versicolour, and Virginica. The dataset is available in the scikit-learn library, or you can download it from the UCI Machine Learning Repository.
To understand model performance, it is a good strategy to divide the data set into training
and test sets.
Let's split the dataset using the function train_test_split(). You need to pass it three parameters: the features, the target, and the test set size. Let's then create an AdaBoost model using scikit-learn. AdaBoost uses a decision tree classifier as the default base classifier.
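The workflow just described can be sketched with scikit-learn as follows; this is a minimal illustration, and choices such as the 70/30 split and 50 estimators are assumptions rather than values fixed by this report.

# Minimal AdaBoost example on the IRIS dataset (a sketch, not the project code).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score

# Load the IRIS dataset: 4 features, 3 flower classes.
iris = load_iris()
X, y = iris.data, iris.target

# Split into training and test sets (70/30 is an illustrative choice).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# AdaBoost with its default base estimator, a decision tree classifier.
model = AdaBoostClassifier(n_estimators=50, learning_rate=1.0)
model.fit(X_train, y_train)

# Evaluate on the held-out test set.
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))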
CHAPTER 5
Image based methods have been applied to many areas of facial computing. One of the
most successful recent technologies, including shape and texture information from facial
images, is the Active Appearance Model (AAM) method. Originally developed by Cootes
and Taylor, it has shown great potential in a variety of facial recognition technologies. As
shown in FIG. 6, the face image is cropped from the detected face. The cropped image is
then used for feature extraction using the AAM method.
Initially, create an active shape model for a neutral image in the database. It automatically
creates a data file that provides information about the model points on the detected face.
Then, the video input starting with a neutral expression gives a sequence of different
expressions such as happiness, sadness, anger, fear, disgust and surprise. The change in
the AAM shape model measures the distance or difference between neutral and other
facial expressions based on changes in facial expressions.
In content-based image retrieval (CBIR), images from an image database are divided into several categories. Database images belonging to the same category may differ in lighting conditions, noise, and so on, but they are not completely random; despite their differences, they share common patterns. These patterns are called principal components. Principal Component Analysis (PCA) is a mathematical tool for extracting the principal components of raw image data. These principal components can also be viewed as feature images, and PCA is commonly used for face recognition. The idea of using principal components to represent faces was developed by Sirovich and Kirby in 1987 and extended by Turk and Pentland in 1991 for face detection and recognition. The resulting eigenface method is widely considered the first working face recognition technology. PCA can be used to convert each original image in a database into its corresponding feature-space representation. An important property of PCA is that any original image from the database can be reconstructed by combining the feature images; even a subset of the feature images can reconstruct an approximation of the original image. PCA supports prediction, redundancy removal, feature extraction, data compression, and so on. Image retrieval using PCA therefore follows naturally.
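As a hedged illustration of this idea, the following sketch applies scikit-learn's PCA to a face database and reconstructs an approximation of one image from the feature images; the Olivetti dataset and the choice of 50 components are assumptions made for the example.

# Sketch: PCA feature images (eigenfaces) and image reconstruction.
import numpy as np
from sklearn.datasets import fetch_olivetti_faces  # illustrative face database
from sklearn.decomposition import PCA

faces = fetch_olivetti_faces()          # 400 images of 64x64 pixels
X = faces.data                          # each row is a flattened face image

pca = PCA(n_components=50)              # keep 50 principal components
X_proj = pca.fit_transform(X)           # project the faces onto the components

# Each principal component is itself a "feature image" (eigenface).
eigenfaces = pca.components_.reshape((50, 64, 64))

# Reconstruct an approximation of the first face from the 50 components.
X_rec = pca.inverse_transform(X_proj[:1]).reshape(64, 64)
print("Reconstruction error:", np.mean((X[0].reshape(64, 64) - X_rec) ** 2))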
AAM model:
After this step, a dilation operation helps to restore the face area; it can be applied several times to get good results. Because facial features differ in brightness, the pixel value variation of other skin-like regions such as hands or legs is smaller than that of the face region, so all face region candidates whose pixel value variation is smaller than a threshold are removed. To improve detection speed and robustness, the symmetry of the face is checked, and a candidate is removed when symmetry is verified but the facial features are not detected.
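A minimal OpenCV sketch of this clean-up step, assuming a binary skin mask as input; the kernel size, file name, and iteration count are illustrative assumptions.

# Sketch: morphological clean-up and dilation of a binary skin mask.
import cv2
import numpy as np

mask = cv2.imread("skin_mask.png", cv2.IMREAD_GRAYSCALE)  # assumed binary input
kernel = np.ones((3, 3), np.uint8)

# Opening removes small noise; dilation then restores the face region.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.dilate(mask, kernel, iterations=2)  # can be repeated for better results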
Module description:
Face detection is a special case of object detection. In the proposed system, skin detection
and segmentation are used to implement face detection. It also involves illumination
compensation algorithms and morphological operations to maintain the face of the input
image. In order to extract facial features, an active appearance model, the AAM method,
is used. Finally, expressions are considered to be happiness, sadness, anger, fear, disgust,
and surprise, initially by using a simple Euclidean distance method and then by training
an artificial neural fuzzy inference system (ANFIS).
Image based methods have been applied to many areas of facial computing. One of the
most recent successful technologies is the Active Appearance Model (AAM) method,
which contains information about the shape and texture of the facial image. Originally
developed by Cootes and Taylor, it has great potential for a variety of facial recognition
technologies. As shown, the face image is cropped from the detected face. The cropped image is then used to extract features by the AAM method.
Skin color detection is an important topic in computer vision research. It is fair to say that
the most common face detection algorithm uses color information, so estimating areas
with skin tones is often the first critical step in this strategy. Most of the research on face
detection based on skin color is based on RGB, YCbCr and HSI color space. Color is an
important feature of the human face. The use of skin tones as a feature of tracking faces
has several advantages. In this paper, we use the YCbCr color space. In the YCbCr color
space, the Y component represents luminance information; the Cr component represents
red chrominance information; and the Cb component represents blue chrominance
information. Given an RGB triple, the YCbCr transform can be calculated using the standard ITU-R BT.601 conversion, formula (1):

Y = 0.299 R + 0.587 G + 0.114 B
Cb = 128 - 0.1687 R - 0.3313 G + 0.5 B      (1)
Cr = 128 + 0.5 R - 0.4187 G - 0.0813 B

Furthermore, it has been shown that a skin color model based on the Cb and Cr values has a stable range of distribution; a commonly cited range is used in the sketch below.
In this paper, only skin color preprocessing is used for face detection; the algorithm only needs to remove most non-face areas to reduce the computation in the next step. Pixels that satisfy the above conditions are therefore treated as skin and set to white, while all other pixels are set to black, producing a binarized image. Suppose S' is the number of white pixels and S is the number of all pixels in a candidate region. A face area should then satisfy the condition S'/S >= threshold. Experimental results show that with a threshold of 0.3, most non-face areas are filtered out while almost all face areas pass. Through skin color segmentation, we can quickly eliminate non-face areas, reducing the search range and speeding up detection before applying the AdaBoost algorithm.
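A minimal sketch of this segmentation and candidate test follows. The report does not state its exact Cb/Cr bounds, so the commonly cited range 77 <= Cb <= 127, 133 <= Cr <= 173 is assumed, and the input file name is hypothetical.

# Sketch: YCbCr skin segmentation and the S'/S >= 0.3 candidate test.
import cv2
import numpy as np

img = cv2.imread("input.jpg")
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)  # OpenCV orders channels Y, Cr, Cb
_, cr, cb = cv2.split(ycrcb)

# Commonly cited skin range (assumption; the report omits its exact values).
skin = (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
binary = np.where(skin, 255, 0).astype(np.uint8)  # skin white, everything else black

def is_face_candidate(region, threshold=0.3):
    """Accept a region if the ratio of white (skin) pixels S'/S >= threshold."""
    s_prime = np.count_nonzero(region)
    s = region.size
    return s_prime / s >= threshold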
MB-LBP features: In this work, we use Multi-scale Block Local Binary Pattern (MB-LBP) features instead of Haar-like features. The basic idea of MB-LBP is to replace the simple difference rule of Haar-like features by encoding rectangular regions with a local binary pattern operator. The original LBP is defined for each pixel by thresholding the 3x3 neighborhood pixel values against the center pixel value. To encode rectangles instead, the MB-LBP operator compares the average intensity gc of the center rectangle with the average intensities of the neighborhood rectangles {g0, ..., g8}. In this way, it produces a binary sequence.
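The encoding can be sketched as follows for one block position, assuming a 3x3 grid of s x s rectangles with the eight surrounding rectangles thresholded against the center; the function name and parameters are illustrative.

# Sketch: MB-LBP code for a 3x3 grid of s x s rectangles anchored at (x, y).
import numpy as np

def mb_lbp(image, x, y, s):
    """Compare the mean intensity of the center rectangle with its 8 neighbors."""
    means = [image[y + r * s : y + (r + 1) * s,
                   x + c * s : x + (c + 1) * s].mean()
             for r in range(3) for c in range(3)]
    gc = means[4]                       # center rectangle
    neighbors = means[:4] + means[5:]   # the 8 surrounding rectangles
    # Threshold each neighbor against the center to get an 8-bit binary code.
    return sum((1 << i) for i, g in enumerate(neighbors) if g >= gc)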
MB-LBP-based Adaboost algorithm with skin color segmentation for face detection
Face detection is a complex and important problem in pattern recognition, and it has a
wide range of applications. From the work of Viola and Jones, effective and real-time
face detection can be achieved by using Haar's rectangular feature method and AdaBoost
learning. In this paper, we implement an AdaBoost-based face detection algorithm, but extract MB-LBP features instead of Haar features to train the AdaBoost classifier. In order to reduce the false positive rate, we also combine skin color segmentation with AdaBoost.
Due to the diversity of faces and the uncertainty of face position, detecting faces in images is difficult. Face detection in color images and video sequences has attracted more and more attention. In this document, we
propose a new face detection framework that can achieve better detection rates. The new
face detection algorithm is based on the skin color model in the YCgCr chromaticity
space and the HSV color space. First, we construct a Gaussian skin model in the Cg-Cr
color space and then use some constraints to get a face candidate. Second, calculate the
correlation coefficient between the given template and the candidate. Experimental results
show that our system achieves high detection rates and low false positives in various
facial variations in color, position and variable lighting conditions.
[Figure: system pipeline, consisting of face detection, feature extraction, and expression classification]
Therefore, pixels on the face are filled with white pixels, and other areas in the image are
filled with black pixels. The binary image received after the morphological operation is
further used for face detection. As shown in FIG. 5, a face area in the image is detected,
which is further used to extract facial features.
[Figure: expression matching pipeline, consisting of face detection, feature extraction with template, expression matching, and expression classification]
The key idea of this research is to learn descriptors of minutiae neighborhoods, that is, the differences between patch structures with and without minutiae. The main steps of the proposed method for detecting minutiae in latent fingerprints are as follows:
1. Learning the descriptors: descriptors are learned that capture the difference between minutiae and non-minutiae neighborhoods.
2. Training the binary classifier: Minutiae extraction in the latent print is posed as a binary classification problem: whether a given latent patch is a minutiae patch or a non-minutiae patch. The descriptors learned in the previous step are used to represent labeled latent print patches (minutiae and non-minutiae). Two binary supervised classifiers (one per descriptor) are then trained to classify minutiae and non-minutiae patches.
3. Detection of minutiae patches: Whenever an unseen latent print is provided, the trained classifiers extract descriptors and classify minutiae and non-minutiae patches. The final label is obtained by combining the outputs of the two classifiers using a weighted-sum rule. Minutiae extraction over the entire latent print is performed by classifying each local patch as a minutiae or non-minutiae patch.
[Figure: feature extraction techniques, including the Discrete Cosine Transform, Gabor Filter, Principal Component Analysis, and Linear Discriminant Analysis]
CHAPTER 6
SYSTEM DESIGN
The purpose of this chapter is to present the design that describes the required tasks. Design is the process of translating requirements into a software description and is one of the fundamental phases of software development. The design answers the questions raised by the requirements document, and this phase is the basic step in moving from the problem domain to the solution domain. The design can be the most important factor affecting the quality of the project, and it significantly affects the later stages, especially testing and maintenance. The output of this phase is the design document, which basically describes the modules of the proposed system.
[Flowchart: pre-processing phase with skin color conversion, eye and mouth contour extraction, application of genetic operators, the Active Appearance Model, and the Neuro-Fuzzy Inference System]
Level-0 is the first-level DFD and is often referred to as a context-level diagram. It identifies the high-level modules that must meet the high-level requirement specifications. This high-level diagram with a single process is called a context diagram, which is then expanded into many interrelated processes. The level-0 DFD for the proposed system is shown.
[Figure: facial analysis using K-Means and the distance between upper and lower facial features]
A sequence diagram shows the participants in an interaction and the sequence of messages between them. It demonstrates how the system collaborates with the actors to carry out all or part of a use case. Each actor, as well as the system itself, is represented by a vertical line called a lifeline, and each message by an arrow from the sender to the receiver. Time proceeds from top to bottom, and the diagram below shows only the sequence of messages, not their exact timing. Each use case requires at least one sequence diagram to describe its behavior, and each sequence diagram demonstrates a specific sequence of behavior of the use case.
The sequence diagram shows the various processes of a simple genetic algorithm and the interactions between them, including selection, crossover, and mutation.
6.4 FLOWCHART
[Flowchart: START, transformation from RGB via a forward linear transformation, lighting compensation, and generation of a new population]
An activity diagram is basically a flow chart showing the flow from one activity to another.
Activities can be described as the operation of the system.
The control flow leads from one operation to another. The stream can be sequential, branched or
concurrent. Activity diagrams handle all types of flow control by using different elements such as
fork, join, etc.
The basic purpose of the activity diagram is similar to the other four diagrams. It captures the
dynamic behavior of the system. The other four graphs are used to display the flow of messages
from one object to another, but the activity graph is used to display the flow of messages from one
activity to another.
Activities are specific operations of the system. Activity diagrams are not only used to visualize
the dynamic nature of the system, but are also used to build executable systems using forward and
reverse engineering techniques. The only thing missing from the activity diagram is the message
part.
It does not display any message flow from one activity to another. Activity diagrams are sometimes treated as flowcharts; although they look like flowcharts, they are not: they show different kinds of flow, such as parallel, branched, concurrent, and single.
The purpose of the activity diagram is to capture this dynamic flow from one activity to the next, as the following diagram illustrates.
[Activity diagram: face detection, the Active Appearance Model, Euclidean distance and ANFIS, and expression classification]
The following figure shows the message sequence of the SpecialOrder object, and the same
message sequence can be used in the case of the NormalOrder object. It is important to
understand the chronological order of the message flows. The message flow is nothing more than
a method call to the object.
The first call is sendOrder(), which is a method of the Order object. The next call is confirm(),
which is the method of the SpecialOrder object. The last call is Dispatch(), which is the method of
the SpecialOrder object. The figure mainly describes the method calls from one object to another, which is also what actually happens when the system is running.
They also help identify any internal or external factors that may affect the system and should be
considered. They provide a good high-level analysis from outside the system. The use case diagram
specifies how the system interacts with the actor without having to worry about how to implement
the details of the function.
[Use case diagram: face detection, feature extraction, and expression recognition using Euclidean distance]
CHAPTER 7
SYSTEM IMPLEMENTATION
Distance method: In this method, the database consists of a training dataset and a test image dataset. For a specific subject, the training and test datasets include images of different expressions such as neutral, happy, sad, angry, fear, disgust, and surprise. Using the AAM method, points on the facial features are located in all of these images and stored as data files. Each data file consists of the relative x-y coordinates of those anchor points. When a test image is given as input, the system finds the Euclidean distance between the points of the test image and the points of each training image.
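A hedged sketch of this nearest-neighbor matching over the stored landmark coordinates follows; the data layout (a label plus an (N, 2) array per training image) is an assumption about the data files, not their documented format.

# Sketch: classify a test expression by Euclidean distance over AAM landmarks.
import numpy as np

def euclidean_distance(points_a, points_b):
    """Total distance between two (N, 2) arrays of x-y landmark coordinates."""
    return np.sqrt(((points_a - points_b) ** 2).sum(axis=1)).sum()

def classify(test_points, training_set):
    """training_set: list of (label, landmark array) pairs from the data files.
    Returns the expression label of the closest training image."""
    distances = [(euclidean_distance(test_points, pts), label)
                 for label, pts in training_set]
    return min(distances)[1]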
Initially, create an active shape model for a neutral image in the database. It automatically creates
a data file that provides information about the model points on the detected face. Then, the video
input starting with a neutral expression gives a sequence of different expressions such as
happiness, sadness, anger, fear, disgust and surprise. The change in the AAM shape model
measures the distance or difference between neutral and other facial expressions based on changes
in facial expressions.
In order to improve the recognition rate of the system, the artificial neuro-fuzzy inference system (ANFIS) was used to further refine the third stage. In this method, either a still image or a video input can be given for testing the expression. An automatic facial expression recognition system based on neuro-fuzzy techniques is proposed to recognize human facial expressions such as happiness, fear, sadness, anger, disgust, and surprise. Initially, videos displaying different expressions are split into frames, and the sequence of selected images is stored in a database folder. Using the AAM method, the features of all images are located and stored as .ASF files. An average shape is then created for all the images in the data folder.
Face recognition
It is a computer application for automatically identifying or verifying people from digital images or video frames.
Program steps: data acquisition, input processing, face image classification, and decision making.
Facial expression recognition
It is a computer application for identifying a person's facial expressions from images or video clips.
Program steps: face detection, feature extraction, and expression classification. Applications: healthcare, games.
Deep learning is a branch of machine learning that uses relatively deep neural networks to solve problems in different fields such as NLP, computer vision, and bioinformatics. Deep learning has undergone a huge and continuing research renaissance and has delivered state-of-the-art results in many applications.
A deep residual network addresses some of these problems by using residual blocks that preserve the input through shortcut connections. With deep residual learning, networks can be explored at depths that would otherwise present serious training challenges. Choosing the depth of a deep network is difficult: if the network is too deep, it is hard to propagate errors back effectively; if it is too shallow, it may not learn enough representational power. In a deep residual network, however, it is safe to train deeper models and gain sufficient learning capacity without the degradation problem, because in the worst case redundant layers can simply learn an identity mapping without doing any harm.
This is accomplished when the solver drives the weights of a block's layers toward zero, leaving only the shortcut connection active so that the block acts as an identity map. Although this is not a theoretical proof, driving weights toward zero may be easier for the solver than fitting a strong mapping. The ResNet-50 working model for image classification uses the 50-layer network structure shown in the figure. Classification with a trained deep neural network is similar to the forward pass of a multi-layer feed-forward network: the image is passed layer by layer until the output layer produces the result. At the first stage, ResNet-50 applies a 7x7 convolution with stride 2, followed by a pooling layer with stride 2. This is followed by stages of residual (identity) blocks, each stage ending in a downsampling by 2; the downsampling layer is also a convolutional layer, but without an identity connection. The network continues in this way for many layers. The last layer is a global average pool, which takes the mean of each final feature map. The result is fed to a fully connected layer and a softmax, producing a 1000-dimensional vector (for ImageNet data) that gives the classification of the image into one of the classes.
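As a hedged sketch, classification with a pre-trained ResNet-50 can be run through the Keras applications API as follows; the input file name is hypothetical and this is not the project's own training code.

# Sketch: ImageNet classification with a pre-trained ResNet-50 (Keras).
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = ResNet50(weights="imagenet")            # the 50-layer residual network

img = image.load_img("face.jpg", target_size=(224, 224))  # assumed input file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)                        # 1000-dimensional softmax output
print(decode_predictions(preds, top=3)[0])      # top-3 ImageNet classes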
CASE tool
2. Computer-aided software engineering tools are those that are used in each and every
phase of the development of an information system, including analysis, design and
programming. For example, data dictionaries and layout tools help in the analysis and
design phases, while application generators accelerate the programming phase.
3. CASE tools provide automated methods for designing and documenting traditional
structured programming techniques. The ultimate goal of CASE is to provide a language
to describe the general system that is sufficient to generate all the necessary programs.
CASE tools can be classified along three dimensions:
1. Life-cycle support dimension
2. Integration dimension
3. Construction dimension
Let's take the meaning of these dimensions together with their examples one by one:
This dimension classifies CASE tools according to the life-cycle activities they support. They can be classified as upper or lower CASE tools.
• Upper CASE tool: a computer-aided software engineering (CASE) tool that supports software development activities upstream from implementation. Upper CASE tools focus on the analysis phase (and sometimes the design phase) of the software development life cycle (diagramming tools, report and form generators, and analysis tools).
• Lower CASE tool: a CASE tool that directly supports implementation (programming) and integration tasks. Lower CASE tools support the generation of database schemas, program generation, implementation, testing, and configuration management.
Integration dimension
1. Framework of cases
2. ICASE tools: tools that integrate upper and lower CASE; for example, they make it possible to design a form and build the database to support it at the same time. An ICASE tool is an automated system development environment that provides numerous tools to create diagrams, forms, and reports. It also offers analysis, report generation, and code generation functions, and seamlessly shares and integrates data across all the tools.
Types of CASE tools The general types of CASE tools are listed below:
1. Layout tools: they allow the system process, the data and the control structures to be
represented graphically.
2. Screen and report generators: they help create a prototype of how systems will look and feel, and make it easier for the systems analyst to identify data requirements and relationships.
6. Code Generators: allow the automatic generation of program and database definition
codes directly from the design documents, diagrams, forms and reports.
2. Design: This is where the technical design of the system is created, by designing the technical architecture, choosing among the telecommunications, hardware, and software architectures that best fit the organization's system and future needs, and designing the system model: graphically creating a model from the graphical user interface, screen and database design, down to the placement of objects on the screen.
3. The CASE code generation tool has code generators that allow automatic generation of
database and program definition codes directly from documents, diagrams, forms and
reports.
4. The CASE documentation tool has documentation generators to produce technical and
user documentation in standard forms. Each phase of the SDLC produces documentation.
The types of documentation that flow from one phase to another vary according to the
organization, the methodologies used and the type of system being built.
CASE environments
An environment is a collection of CASE tools and workbenches that support the software
process. CASE environments are classified according to the integration approach.
1. Tool kits
2. Focused on language.
3. Integrated
4. Fourth generation
Tool kits
The toolkits are integrated collections of products that are easily extended by adding
different tools and workbenches. Normally, the support provided by a toolkit is limited to
programming, configuration management and project management. And the toolkit itself
is an extended environment from basic sets of operating system tools, for example, the
Unix programmer workbench and the VMS VAX set. In addition, the flexible integration
of toolkits requires the user to activate the tools by explicit invocation or simple control
mechanisms. The resulting files are not structured and could have a different format,
therefore, the access of a file from different tools may require an explicit conversion of
the file format. However, since the only restriction to add a new component is the format
of the files, the toolkits can be expanded easily and incrementally.
Language-centered
The environment itself is written in the programming language for which it was developed, which allows users to reuse, customize, and extend the environment. Integrating code written in different languages is a major problem for language-centered environments, and the lack of process and data integration is also a problem. The strengths of these environments include a good level of control and presentation integration. Interlisp, Smalltalk, Rational, and KEE are examples of language-centered environments.
Integrated
Fourth generation
The fourth-generation environments were the first integrated environments. They are sets of tools and workbenches that support the development of a specific class of program: electronic data processing and business-oriented applications. In general, they include programming tools, simple configuration management tools, document handling facilities and, sometimes, a code generator to produce code in lower-level languages. Informix 4GL and Focus fall into this category.
Process-centered
The environments in this category focus on process integration, with the other integration dimensions as starting points. A process-centered environment works by interpreting a process model created with specialized tools. They generally consist of tools that handle two functions:
• Process-model production.
• Process-model execution.
Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia.
Advantages and corresponding risks of CASE tools:
• Helps communication between the members of the development team; risk: may restrict work to the capabilities of the tool.
• Automatically checks the quality of the models; main danger: integrity and syntactic correctness do NOT mean compliance with the requirements.
• Reduces time and effort; cost: purchase of the tool plus training.
CHAPTER 8
SOFTWARE TESTING
Software testing is the process of executing a program with the intent of finding errors. It is an essential part of software quality assurance, providing a final review of the specifications, design, and coding. Testing is the last chance to uncover major defects before the software is delivered. Unit testing exercises specific program components to discover defects or errors; these components may be functions, objects, or modules. In integration testing, these components are composed to form the complete system; at this stage, testing should establish that the system complies with its functional requirements and does not behave in unexpected ways. Test data are the inputs designed for testing the system, and a test case is a combination of test data and the expected output the system should produce if it operates according to its specification. The expected behavior of the system under different combinations of conditions is given by the requirements. Test cases are therefore selected so that inputs and outputs lie on the expected lines, that invalid or out-of-range inputs produce an appropriate message, and that inputs that occur only occasionally are treated as exceptions.
Unit testing is also known as white box testing. White box testing is a method that examines the internal structure of a system or component. It is called white box testing because the complete internal structure and working of the code are visible to the tester.
Qualitative Assessment
This section contains two fingerprint examples and minutiae extraction results. Figure 6 shows minutiae extraction on a poor-quality fingerprint, while Figure 7 shows minutiae extraction on a good-quality fingerprint. Considering the minutiae extracted by NIST and DP on the poor-quality fingerprint, it is clear that dryness affects the success of these extractors. In this case, MENet is able to detect the largest number of correct minutiae. The ideal outcome is a joint minimization of spurious and undetected minutiae, but this is a very challenging task. Future work may include an improved MENet post-processing procedure that takes local certainty and quality measurements into account when determining blob thresholds.
A quality output is one that meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be presented for immediate need as well as in hard-copy form. It is the most important and direct source of information for the user. Intelligent and efficient output design improves the system's ability to help users make decisions.
1. The design of computer output should proceed in an organized and thoughtful manner; the correct output must be developed while ensuring that each output element is designed so that people find the system easy and effective to use. When analysts design computer output, they should identify the specific output needed to meet the requirements.
Face detection is a special case of object detection. In the proposed system, face detection
is implemented through the detection and segmentation of skin color. It also involves a
lighting compensation algorithm and morphological operations to retain the face of the
input image. To extract the facial features, the Active Appearance Model is used, that is,
the AAM method. Finally, the expressions are recognized as Happy, Sad, Anger, Fear,
Disgust and Surprise, initially using the simple method of the Euclidean Distance and
then training the Neuro-Fuzzy Artificial Inference System (ANFIS).
CHAPTER 9
• The distance between the corner of the mouth and the corresponding outer corner of the eye.
To extract these distances, the following sequence of steps is used, which will be detailed later in this article:
• Extract faces using the Haar classifier in the OpenCV library;
• Rotate the face so that the line connecting the eyes is always level;
• For each eye, use Bezier curves to identify and approximate the exact eye contour;
• Extract three features (distances) for each eye and two features associated with the mouth.
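A minimal sketch of the first step, face extraction with OpenCV's Haar cascade, follows; the cascade file ships with OpenCV, while the detection parameters and input file name are illustrative choices.

# Sketch: face extraction with the OpenCV Haar cascade classifier.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("input.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces; scaleFactor and minNeighbors are illustrative choices.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(32, 32))
for (x, y, w, h) in faces:
    face = img[y:y + h, x:x + w]  # cropped face used in the later steps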
The 269 patterns found were classified using feed-forward neural networks with multiple hidden layers. To group the facial expressions into categories, K-means clustering is used. This type of clustering is needed to better divide the portrait pictures into one of six types: anger, disgust, fear, happiness, neutrality, and sadness. Data analysis was performed in Matlab. For the 269 input vectors, a table was obtained that gives the clustering silhouettes for different numbers of clusters.
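Although the analysis here was performed in Matlab, the same silhouette comparison can be sketched in Python with scikit-learn; the feature file and its shape are assumptions standing in for the 269 extracted vectors.

# Sketch: choose the number of K-means clusters by silhouette score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Assumed input: the 269 extracted feature vectors, one row per image.
X = np.load("features.npy")  # shape (269, n_features), hypothetical file

for k in range(2, 8):        # candidate numbers of clusters
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, silhouette_score(X, labels))  # higher means better separation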
One advantage of using these color spaces is that most video media have been encoded
using these color spaces.
All of these color spaces separate the illumination channel (Y) from the two orthogonal
chrominance channels (UV, IQ, CbCr).
Therefore, unlike RGB, the position of the skin tone in the chroma channel is not affected
by changing the intensity of the illumination. In the chrominance channel, the skin color
is usually located in a compact, elliptically shaped cluster. This makes it possible to build a skin detector that is invariant to illumination intensity and uses a simple classifier.
Histograms of different color models were computed for two different images. The histogram of an image represents the relative frequency of occurrence of the various gray levels in the image. The results obtained show that for any type of skin, the gray-level distributions of Cb and Cr fall within a narrow range of pixel values.
SCREENSHOTS
The first stage, face detection, was tested on 105 image samples, and correct detection was obtained for 95 of them. According to the test results, false face detection occurs when the image quality is low or the face size is smaller than 32x32 pixels. The AAM (Active Appearance Model) method combines the shape and texture information of the face image, and was therefore found to facilitate feature extraction.
For expression recognition, the Euclidean distance method is useful for still images but requires a lot of manual work; its recognition rate is 90-95%. ANFIS (Artificial Neuro-Fuzzy Inference System) has been used as a further improvement because the distance method is ambiguous on real-time images. In the ANFIS system, both still images and video can be given as input and tested for the different expressions. In addition, the system works accurately on person-independent databases. The accuracy of facial expression recognition varies with the number of training samples; for a large number of training samples, the recognition rate approaches 100%. The neuro-fuzzy method for facial expression recognition is suitable for real-time applications such as human sentiment analysis, human-computer interaction, surveillance, online conferencing, and entertainment.
The proposed work can be further extended by increasing the number of recognized expressions beyond the six universal ones (anger, fear, disgust, joy, surprise, sadness). Classifying other facial expressions may require extracting and tracking additional facial points and corresponding features. The system can also be improved by using a wider training set covering a greater range of poses and low-quality images.
BIBLIOGRAPHY
3. Kai-Biao Ge, Jing Wen, Bin Fang, "Adaboost Algorithm Based on MB-LBP Features with Skin Color Segmentation for Face Detection", Proceedings of the 2011 International Conference on Wavelet Analysis and Pattern Recognition, Guilin, July 10-13, 2011.
4. Kamarul Hawari, Bin Ghazali, Jie Ma, Rui Xiao, "An Innovative Face Detection Based on Skin Color Segmentation", International Journal of Computer Applications (0975-8887), Volume 34, No. 2, November 2011.
5. G.J. Edwards, T.F. Cootes, and C.J. Taylor, "Face Recognition Using Active Appearance Models", Proceedings of the European Conference on Computer Vision, 1998.
7. V. Gomathi, Dr. K. Ramar, and A. Santhiyaku Jeevakumar, "A Neuro Fuzzy Approach for Facial Expression Recognition Using LBP Histograms", International Journal of Computer Theory and Engineering, Vol. 2, No. 2, April 2010.
8. Jizheng, Xia, Lijang, Yuli, Angelo, "Facial Expression Recognition Considering Differences in Facial Structure and Texture", IET Computer Vision, 2013.
10. Jizheng, Xia, Yuli, Angolo, "Facial Expression Recognition Based on t-SNE and AdaboostM2", IEEE International Conference on Green Computing and Communications / IEEE Internet of Things / IEEE Cyber, Physical and Social Computing, 2013.
11. J.J. Lee, Md. Zia Uddin, T.S. Kim, "Spatiotemporal Human Facial Expression Recognition Using Fisher Independent Component Analysis and Hidden Markov Model", 30th Annual International IEEE EMBS Conference, 2008.
13. Ying, Zhang, "Facial Expression Recognition Based on NMF and SVM", International Forum on Information Technology and Applications, 2009.
14. Anagha, Dr. Kulkarni, "Facial Detection and Facial Expression Recognition System", International Conference on Electronics and Communication Systems (ICECS), 2014.
16. G. Hemalatha, C.P. Sumathi, "A Study of Techniques for Facial Detection and Expression Classification", International Journal of Computer Science & Engineering Survey (IJCSES), 2014.
17. Deepti, Archana, Dr. Jagathy, "Facial Expression Recognition Using ANN", IOSR Journal of Computer Engineering, 2013.
18. Banu, Danciu, Boboc, Moga, Balan, "A Novel Approach for Face Expression Recognition", IEEE 10th Jubilee International Symposium on Intelligent Systems and Informatics, 2012.
19. Wang Zhen, Ying Zilu, "Facial Expression Recognition Based on Adaptive Local Binary Pattern and Sparse Representation", IEEE, 2012.