
International Journal of Engineering and Advanced Technology (IJEAT)
ISSN: 2249-8958

Transformation of Facial Expression into Corresponding Emoticons

Ankur Ankit, Dhananjay Narayan, Alok Kumar

Revised Manuscript Received on December 22, 2018.
Ankur Ankit, Department of Computer Science, SRM Institute of Science and Technology, Chennai, India.
Dhananjay Narayan, Department of Computer Science, SRM Institute of Science and Technology, Chennai, India.
Alok Kumar, Department of Computer Science, SRM Institute of Science and Technology, Chennai, India.
Published By: Blue Eyes Intelligence Engineering & Sciences Publication. Retrieval Number: XXXXXXX

Abstract: There are many different ways to express and communicate our feelings. The two broad categories of communication are verbal and non-verbal. Facial expressions are a powerful means of communication involving the exchange of wordless cues, and they have attracted much research attention in computer vision and artificial intelligence. Many studies have addressed the grouping of these expressions, chiefly to infer human sentiment. In this project, an API is employed to fetch images from any camera-based application in real time. A HAAR cascade classifier extracts features from the fetched images, and a Support Vector Machine (SVM) classifies those features into the corresponding expressions. The expressions are then converted to their equivalent emojis, which are superimposed over the actual facial expression as a mask. This project can be used to study the facial expressions a machine can understand, and as a filter in social media apps such as Facebook, Instagram and Snapchat.

Index Terms: Emotion Recognition, Face Detection API, SVM, HAAR, OpenCV, Emoji, Computer Vision

I. INTRODUCTION

Communication is the act of exchanging information between two persons or groups. The person sending the information is referred to as the sender, while the person receiving it is the receiver. Non-verbal communication involves the exchange of wordless cues.

"Nonverbal communication is ubiquitous." [1] It is present in every communication process, comprising about 93% of human communication, of which 55% consists of human gestures and actions [2]. Facial expressions such as laughing, crying and staring, body gestures such as pointing and crossed legs, and hand gestures such as thumb signals are all forms of non-verbal communication. By looking at someone's facial expression, we can comprehend that person's feelings. These non-verbal signs carry insight and meaning that verbal communication does not provide.

A major part of non-verbal communication involves the facial emotions exhibited by a person. Emotions represent the mental state together with facial expressions, actions or other physical changes. They are associated with the current mood but differ from it: emotions are temporary feelings about an issue, while a mood is a generalized sentiment that usually lasts longer. There are primarily seven different emotions expressed by humans [3]: Happiness, Sadness, Anger, Surprise, Disgust, Fear and Neutral. All other emotions are derived from these.

Fig. 1. Classification of Emotions

In this paper, we look into the detection of faces in real-time images using readily available APIs. After the faces are detected, HAAR cascade classification extracts the features of the images for processing, and the emotions are then classified with an SVM. Finally, each emotion is transformed into its matching emoticon, which is superimposed on the face.

II. EXISTING SYSTEM

Current systems are mostly based on neural networks, which require large datasets for computation. These networks are mathematically complex to design, and their training and testing consume a lot of time. Though they give quite efficient results on static images, their real-time performance is low.

For facial feature extraction, machine learning algorithms such as Viola-Jones and HOG are used, but they are not as efficient as HAAR-like features. Viola-Jones is slow at processing images, while HOG trades speed for quality: it collects noisy information such as background, blur, rotation changes and lighting from the entire image before generating its histogram.

III. PROPOSED SYSTEM

The idea of the proposed system is to employ an API that detects the face, after which the image is processed using a HAAR cascade for facial feature extraction. An SVM classifier is then used to categorize the emotions into seven distinct types. Using the HAAR module of the OpenCV package, the corresponding emojis of the emotions are superimposed over the subjects' faces. In the camera module of any leading social networking app, the use of such APIs can reduce the processing time for face detection, since these apps have built-in face detection algorithms that detect faces smoothly, after which the emoticons can be applied over the faces as filters.
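The steps above can be sketched in a few lines of OpenCV. This is an illustrative sketch only: the paper publishes no code, so the emoji file names, the trained SVM (`svm_model`), and the 48x48 crop size are assumptions; the cascade file is the stock frontal-face one shipped with OpenCV.

```python
# Sketch of the proposed pipeline: detect faces with a Haar cascade,
# classify each face crop with a (pre-trained, assumed) SVM, and paste
# the matching emoji over the face rectangle.

EMOTIONS = ["Happiness", "Sadness", "Anger", "Surprise",
            "Disgust", "Fear", "Neutral"]

def emoji_for(label: str) -> str:
    """Map one of the seven emotion labels to an emoji image file.
    The file names are hypothetical, used only for this sketch."""
    files = {e: e.lower() + ".png" for e in EMOTIONS}
    return files[label]

def annotate_frame(frame, svm_model):
    """Detect faces in a BGR frame, classify each, and overlay emojis."""
    import cv2  # imported here so emoji_for() works without OpenCV installed
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        # Flatten the resized face crop into one feature vector for the SVM.
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).flatten()
        label = EMOTIONS[int(svm_model.predict([face])[0])]
        emoji = cv2.imread(emoji_for(label))
        frame[y:y + h, x:x + w] = cv2.resize(emoji, (w, h))
    return frame
```

In a camera application, `annotate_frame` would be called on each frame returned by the capture API before display.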

Fig. 2. Transformation of facial expression into corresponding emoticon

IV. ARCHITECTURE

Fig. 3. Proposed System Architecture

A. API IMPLEMENTATION

An API acts as an interface between an operating system, an application and the user [4]. An API's design plays a significant role in its usage [5]. An API is designed to hide the background details of its modules from users who do not know their complexity; it thus presents a user-friendly interface [5].

A camera-based API can be used that automatically detects the face of the subject(s) regardless of the background and sends the image to the model for processing, after which the emoji is superimposed over the face.

B. HAAR CASCADE

The image supplied by the API is provided to the HAAR cascade, which has been trained on a given dataset. For the development of a working model, we will use two datasets: Cohn-Kanade (CK+) [6][7] and the Japanese Female Facial Expression database (JAFFE) [8]. HAAR-like features detect faces from different angles with high accuracy [9]. The cascade extracts facial features such as the eyes, eyebrows and mouth expression from the face obtained through the API. These results are then delivered to the Support Vector Machine (SVM).

Fig. 4. HAAR-Like feature for face detection

C. SUPPORT VECTOR MACHINE (SVM)

A Support Vector Machine is a supervised machine learning algorithm used for both classification and regression problems. The SVM is used in many pattern analysis tasks as a binary classifier that differentiates between the expression classes. It works by finding an optimal hyperplane that separates the data points of one class from those of the other [10].

Fig. 5. SVM Classifier

The image features passed to the SVM after HAAR classification are compared against the trained datasets, and the images are categorized into the corresponding emotion variant. After this, the corresponding emoticon is superimposed over the image. The result is transferred back to the API, which displays the new image with the superimposed emoticon.
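The hyperplane rule described above can be illustrated with a toy decision function. The weights below are made up purely for illustration; a real model would be fit on CK+/JAFFE feature vectors (for example with scikit-learn's `sklearn.svm.SVC`), and the binary rule is extended to the seven emotions in the usual one-vs-rest manner.

```python
# Toy illustration of the SVM decision rule: a trained linear SVM keeps a
# weight vector w and bias b, and assigns a feature vector x to whichever
# side of the hyperplane w.x + b = 0 it falls on.

def decision(w, b, x):
    """Signed value of w.x + b: positive means class +1, negative class -1."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(w, b, x):
    """Binary SVM prediction from the sign of the decision value."""
    return 1 if decision(w, b, x) >= 0 else -1

def classify_multiclass(models, x):
    """One-vs-rest over several emotions: models maps label -> (w, b);
    pick the label whose hyperplane gives the largest decision value."""
    return max(models, key=lambda lbl: decision(*models[lbl], x))
```

For example, with two hypothetical binary models `{"Happiness": (w1, b1), "Sadness": (w2, b2)}`, `classify_multiclass` returns whichever emotion's hyperplane places the feature vector furthest on its positive side.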

V. RESULTS

Our proposed model detects a face using the API, and feature extraction is done through the HAAR cascade. Emotions are classified from the extracted features through the SVM. The emojis are then superimposed over the faces according


to the matching emotion exhibited by the subject. The final output is as shown in Figure 6.

Fig. 6. Emotion indicated by the emoticon
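The superimposition shown in Figure 6 amounts to resizing the emoji to the detected face rectangle and blending it over that region. The per-pixel sketch below uses the standard alpha-blend rule (out = alpha*foreground + (1-alpha)*background); treating pixels as plain lists keeps the sketch independent of any particular imaging library.

```python
# Sketch of the emoji superimposition step: blend an emoji image over the
# face region of the frame. alpha=1.0 reproduces the opaque mask described
# in this paper; smaller values would give a translucent filter effect.

def blend_pixel(fg, bg, alpha):
    """Alpha-blend one channel value: alpha=1 shows only the emoji (fg)."""
    return round(alpha * fg + (1 - alpha) * bg)

def overlay(face_region, emoji, alpha=1.0):
    """face_region and emoji: equal-sized 2-D lists of grayscale values.
    Returns the blended region to be written back into the frame."""
    return [[blend_pixel(e, f, alpha)
             for e, f in zip(erow, frow)]
            for erow, frow in zip(emoji, face_region)]
```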

VI. CONCLUSION

In this paper, computer vision has been used to recognize facial emotions and convert them into their corresponding emoticons. The subject's face is detected using any camera-based API. The features of the detected face's expression are extracted using a HAAR cascade, which supplies the extracted features for classification into seven emotions by a Support Vector Machine (SVM) that exhibits good accuracy compared with other existing algorithms. This proposed model can be used by leading social networking platforms such as Facebook, Instagram and Snapchat in their camera-based applications involving various effects and filters.

There are many existing face-detecting neural networks with good efficiency, but their implementation may be difficult in some cases. Through our approach of using APIs instead of neural networks, we can make the implementation convenient.

REFERENCES

1. Burgoon, J., Guerrero, L., Floyd, K., Nonverbal Communication, Taylor & Francis, 2010, p. 3.
2. Carton, J.S., Kessler, E.A., Pape, C.L., "Nonverbal decoding skills and relationship well-being in adults," J. Nonverbal Behav. 23(1), 91–100 (1999).
3. Izard, C.E., Human Emotions, Springer, New York (2013).
4. Lewine, Donald A., POSIX Programmer's Guide, O'Reilly & Associates, Inc., 1991, p. 1. Retrieved 2 August 2016.
5. Clarke, Steven, "Measuring API Usability," Dr. Dobb's, 2004. Retrieved 29 July 2016.
6. P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression," in Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, pp. 94–101, June 2010.
7. T. Kanade, J. F. Cohn, and Y. Tian, "Comprehensive database for facial expression analysis," in Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on, pp. 46–53, 2000.
8. M. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, "Coding facial expressions with Gabor wavelets," in Automatic Face and Gesture Recognition, 1998. Proceedings. Third IEEE International Conference on, pp. 200–205, Apr 1998.
9. Rekha N. and M. Z. Kurian, "Face Detection in Real Time Based on HOG," International Journal of Advanced Research in Computer Engineering and Technology (IJARCET), Volume 3, Issue 4, April 2014.
10. M. Dumas, "Emotion Expression Recognition using Support Vector Machine," Department of Computer Science, University of California, San Diego, La Jolla, CA.

