Report File of Project
On
Face Recognition System Using ML
Submitted in partial fulfillment of the
requirements for the award of the degree of
MCA (Master of Computer Application)
Session 2023-25
in
[Computer Science]
By
[Mohit Gangwar (23SCSE2030741)]
[Sanju Kumar (23SCSE2030713)]
[Arjun Singh Mawri (23SCSE2030740)]
ACKNOWLEDGEMENT
We would like to express our deep gratitude to our project guide, Mr. Ashok Kumar Yadav, Associate Professor, Department of SCSE, for his guidance, unsurpassed knowledge and immense encouragement. We are grateful to him for providing us with the facilities required for the completion of the project work.
We would also like to thank our parents, friends and classmates for their encouragement throughout the project period. Last but not least, we thank everyone who supported us, directly or indirectly, in completing this project successfully.
PROJECT STUDENTS
Mohit Gangwar(23SCSE2030741)
Sanju Kumar(23SCSE2030713)
Arjun Singh Mawri(23SCSE2030740)
CANDIDATE’S DECLARATION
We hereby certify that the work being presented in the project entitled “FACE RECOGNITION BASED ATTENDANCE SYSTEM”, in partial fulfillment of the requirements for the award of the MCA (Master of Computer Application) degree and submitted to the School of Computer Application and Technology, Galgotias University, Greater Noida, is an original work carried out during the period September 2024 to November 2024 under the supervision of Mr. Ashok Kumar Yadav, Department of Computer Science and Engineering / School of Computer Application and Technology, Galgotias University, Greater Noida.
The matter presented in this project has not been submitted by us for the award of any other degree of this or any other university.
This is to certify that the above statement made by the candidates is correct to the best of my knowledge.
1. Abstract
2. Chapter 1 (Introduction)
3. Chapter 2 (Literature Review)
4. Chapter 3 (Model Implementation And Analysis)
5. Chapter 4 (Code Implementation)
6. Chapter 5 (Work Plan)
7. Chapter 6 (Performance Analysis)
8. Conclusion
9. References
ABSTRACT
CHAPTER-1
INTRODUCTION
1.1 Project Objective:
1.2 Background:
Nowadays, face recognition systems are prevalent due to their simplicity and excellent performance. For instance, airport protection systems and the FBI use face recognition for criminal investigations, tracking suspects, missing children and drug activities (Robert Silk, 2017). Apart from that, Facebook, a popular social networking website, implements face recognition to allow users to tag their friends in photos for entertainment purposes (Sidney Fussell, 2018). Furthermore, Intel allows users to use face recognition to access their online accounts (Reichert, C., 2017), and Apple allows users to unlock the iPhone X using face recognition (deAgonia, M., 2017).
Traditionally, attendance is recorded by calling student names or checking the respective identification cards. These methods not only disturb the teaching process but also cause distraction for students during exam sessions. Apart from calling names, an attendance sheet is often passed around the classroom during lecture sessions. A lecture class, especially one with a large number of students, may find it difficult to have the attendance sheet passed around the whole class. Thus, a face recognition attendance system is proposed to replace the manual signing of student attendance, which is burdensome and distracts students who must stop to sign. Furthermore, a face recognition based automated student attendance system is able to overcome the problem of fraudulent signing, and lecturers do not have to count the students several times to confirm their presence.
The paper by Zhao, W. et al. (2003) lists the difficulties of facial identification; one of them is distinguishing between known and unknown images. In addition, the paper by Pooja G.R. et al. (2010) found that the training process for a face recognition student attendance system is slow and time-consuming. Furthermore, the paper by Priyanka Wagh et al. (2015) mentions that different lighting conditions and head poses are problems that can degrade the performance of a face recognition based student attendance system.
Hence, there is a need to develop a student attendance system that operates in real time, meaning the identification process must be completed within defined time constraints to prevent omissions. The features extracted from facial images, which represent the identity of the students, have to be consistent under changes in background, illumination, pose and expression. High accuracy and fast computation time are the evaluation criteria for the system's performance.
1.5 Flow chart
1.6 Scope of the project:
CHAPTER-2
LITERATURE REVIEW
2.1 Student Attendance System:
● Image processing for autonomous machine application
● Description/Feature Selection – extracts the description of image objects suitable for further computer processing.
● Recognition and Interpretation – assigns a label to an object based on the information provided by its descriptors; interpretation assigns meaning to a set of labelled objects.
● Knowledge Base – supports efficient processing as well as inter-module cooperation.
Face Detection
Face detection is the process of identifying and locating all faces present in a single image or video, regardless of their position, scale, orientation, age and expression. Furthermore, detection should be performed irrespective of extraneous illumination conditions and of the image or video content [5].
A face, despite variations in pose and other factors, needs to be identified based on acquired images [6].
Face Detection
A face detector has to tell whether an image of arbitrary size contains a human face and, if so, where it is. Face detection can be performed based on several cues: skin color (for faces in color images and videos), motion (for faces in videos), facial/head shape, facial appearance, or a combination of these parameters. Most face detection algorithms are appearance-based and do not use other cues. An input image is scanned at all possible locations and scales by a sub-window, and face detection is posed as classifying the pattern in the sub-window as either a face or a non-face. The face/non-face classifier is learned from face and non-face training examples using statistical learning methods [9]. Most modern algorithms are based on the Viola-Jones object detection framework, which uses Haar cascades.
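As an illustration of this Haar-cascade approach, the following is a minimal OpenCV sketch; the file names and parameter values are assumptions for illustration, not the project code.

import cv2

# Load the pre-trained frontal-face Haar cascade bundled with the OpenCV Python package.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("sample.jpg")                     # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)       # detection operates on grayscale

# Scan the image at multiple scales; each hit is a face bounding box (x, y, w, h).
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", image)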
Face detection methods – advantages and disadvantages:

Method: Viola-Jones Algorithm
Advantages: 1. High detection speed. 2. High accuracy.
Disadvantages: 1. Long training time. 2. Limited head pose. 3. Not able to detect dark faces.

Method: Local Binary Pattern Histogram (LBPH)
Advantages: 1. Simple computation. 2. High tolerance against monotonic illumination changes.
Disadvantages: 1. Only used for binary and grey images. 2. Overall performance is inaccurate compared to the Viola-Jones algorithm.
Figure 2.2: Haar Feature
The values of the integral image at the remaining locations are cumulative. For instance, the value at location 2 is the sum of A and B (A + B), the value at location 3 is the sum of A and C (A + C), and the value at location 4 is the sum of all the regions (A + B + C + D). Therefore, the sum within region D can be computed with only additions and subtractions of the corner values, 4 + 1 − (2 + 3), which eliminates rectangles A, B and C.
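The same corner-lookup trick can be expressed directly in code; the following is a small NumPy sketch with made-up values, for illustration only.

import numpy as np

img = np.random.randint(0, 256, size=(6, 8)).astype(np.int64)   # toy grayscale image

# Integral image: entry (r, c) holds the sum of all pixels above and to the left of (r, c).
integral = img.cumsum(axis=0).cumsum(axis=1)

def region_sum(integral, top, left, bottom, right):
    # Sum of img[top:bottom+1, left:right+1] from four corner look-ups.
    total = integral[bottom, right]
    if top > 0:
        total -= integral[top - 1, right]
    if left > 0:
        total -= integral[bottom, left - 1]
    if top > 0 and left > 0:
        total += integral[top - 1, left - 1]
    return total

# Matches a direct summation over the same region.
assert region_sum(integral, 2, 3, 4, 6) == img[2:5, 3:7].sum()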
The local binary pattern (LBP) operator was first described in 1994 and has since been found to be a powerful feature for texture classification. It has further been determined that when LBP is combined with the histogram of oriented gradients (HOG) descriptor, detection performance improves considerably on some datasets. Using LBP combined with histograms, we can represent face images with a simple data vector.
● Each 3x3 window of the image can be represented as a 3x3 matrix containing the intensity of each pixel (0~255).
● Then, we take the central value of the matrix to be used as the threshold.
● This value is used to define the new values of the 8 neighbors.
● For each neighbor of the central value (threshold), we set a new binary value: 1 for values equal to or higher than the threshold, and 0 for values lower than the threshold.
● Now the matrix contains only binary values (ignoring the central value). We concatenate each binary value from each position of the matrix, line by line, into a new binary value (e.g. 10001101). Note: some authors use other approaches to concatenate the binary values (e.g. clockwise direction), but the final result is the same.
● Then, we convert this binary value to a decimal value and set it as the central value of the matrix, which is actually a pixel of the original image.
● At the end of this procedure (the LBP procedure), we have a new image which better represents the characteristics of the original image (a short sketch of this per-pixel computation follows this list).
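Below is a minimal Python sketch of the per-pixel LBP computation described above, for a single 3x3 neighborhood; it is illustrative only and not the project's own code.

import numpy as np

def lbp_value(window_3x3):
    # Return the LBP code (0-255) for the center pixel of a 3x3 window.
    center = window_3x3[1, 1]
    # Fixed clockwise order of the 8 neighbors, starting at the top-left corner.
    neighbors = [window_3x3[0, 0], window_3x3[0, 1], window_3x3[0, 2],
                 window_3x3[1, 2], window_3x3[2, 2], window_3x3[2, 1],
                 window_3x3[2, 0], window_3x3[1, 0]]
    # Threshold each neighbor against the center and read the 8 bits as one number.
    bits = ''.join('1' if value >= center else '0' for value in neighbors)
    return int(bits, 2)

window = np.array([[12, 200, 33],
                   [60,  90, 91],
                   [ 7,  90, 255]])
print(lbp_value(window))   # the decimal value written back at the center pixel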
When a radius larger than 1 is used, neighbors that fall between pixels are estimated by bilinear interpolation from the four nearest pixels to obtain the value of the new data point.
Based on the image above, we can extract the histogram of each region as
follows:
● So, to find the image that matches the input image, we just need to compare the two histograms and return the image with the closest histogram.
● We can use various approaches to compare the histograms (calculate the distance between two histograms), for example: Euclidean distance, chi-square, absolute value, etc. In this example, we can use the well-known Euclidean distance, D = sqrt( Σ (hist1_i − hist2_i)² ).
● So the algorithm output is the ID of the image with the closest histogram. The algorithm should also return the calculated distance, which can be used as a ‘confidence’ measurement.
● We can then use a threshold and the ‘confidence’ to automatically estimate whether the algorithm has correctly recognized the image. We can assume that the algorithm has successfully recognized the image if the confidence is lower than the defined threshold (a small sketch of this comparison appears after this list).
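As a minimal sketch of this matching step (the IDs, histogram contents and threshold value are assumptions, not values from the project):

import numpy as np

def euclidean_distance(hist_a, hist_b):
    # D = sqrt( sum_i (hist_a[i] - hist_b[i])^2 )
    return float(np.sqrt(np.sum((hist_a - hist_b) ** 2)))

def match(input_hist, database, threshold=50.0):
    # database: dict mapping a student ID to its stored (concatenated) histogram.
    best_id, best_distance = None, float("inf")
    for student_id, stored_hist in database.items():
        distance = euclidean_distance(input_hist, stored_hist)
        if distance < best_distance:
            best_id, best_distance = student_id, distance
    # The distance doubles as the 'confidence': lower means a closer match,
    # and the match is accepted only if it is below the chosen threshold.
    if best_distance < threshold:
        return best_id, best_distance
    return None, best_distance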
CHAPTER-3
MODEL IMPLEMENTATION
AND ANALYSIS
3.1 INTRODUCTION:
Face detection involves separating image windows into two classes: one containing faces and the other containing the background (clutter). It is difficult because, although commonalities exist between faces, they can vary considerably in terms of age, skin color and facial expression. The problem is further complicated by differing lighting conditions, image qualities and geometries, as well as the possibility of partial occlusion and disguise. An ideal face detector would therefore be able to detect the presence of any face under any set of lighting conditions, against any background. The face detection task can be broken down into two steps. The first step is a classification task that takes some arbitrary image as input and outputs a binary value of yes or no, indicating whether any faces are present in the image. The second step is the face localization task, which takes an image as input and outputs the location of any face or faces within that image as a bounding box (x, y, width, height). After taking the picture, the system compares it against the pictures in its database and returns the most closely related result. We use the NVIDIA Jetson Nano Developer Kit, a Logitech C270 HD webcam and the OpenCV platform, and the coding is done in the Python language.
3.2 Model Implementation:
The main component used in the implementation is the open source computer vision library (OpenCV). One of OpenCV's goals is to provide a simple-to-use computer vision infrastructure that helps people build fairly sophisticated vision applications quickly. The OpenCV library contains over 500 functions that span many areas of vision, and it is the primary technology behind the face recognition. The user stands in front of the camera, keeping a minimum distance of 50 cm, and his image is taken as input. The frontal face is extracted from the image, converted to grayscale and stored. The Principal Component Analysis (PCA) algorithm is performed on the images and the eigenvalues are stored in an XML file. When a user requests recognition, the frontal face is extracted from the video frame captured through the camera. The eigenvalues are re-calculated for the test face and matched against the stored data to find the closest neighbour.
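A condensed sketch of this PCA (eigenface) flow using the face module from opencv-contrib is shown below; the file names, image size and labels are assumptions for illustration, not the project's own code.

import cv2
import numpy as np

SIZE = (50, 50)   # all faces resized to a common grayscale size

def train_eigenfaces(face_images, labels, model_path="eigenfaces.xml"):
    # face_images: list of grayscale face arrays; labels: matching integer student IDs.
    faces = [cv2.resize(face, SIZE) for face in face_images]
    recognizer = cv2.face.EigenFaceRecognizer_create()
    recognizer.train(faces, np.array(labels))
    recognizer.write(model_path)           # PCA decomposition stored as an XML/YAML file

def recognize_eigenfaces(gray_face, model_path="eigenfaces.xml"):
    recognizer = cv2.face.EigenFaceRecognizer_create()
    recognizer.read(model_path)
    label, distance = recognizer.predict(cv2.resize(gray_face, SIZE))
    return label, distance                 # lower distance means a closer neighbour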
● Histograms: computing, equalization, and object localization with back
projection algorithm
● Segmentation: thresholding, distance transform, foreground/background
detection, watershed segmentation
Step: Install OpenCV system-level dependencies and other development dependencies
Let's now install the OpenCV dependencies on our system, beginning with the tools needed to build and compile OpenCV with parallelism.
Lastly, we'll install Video4Linux (V4L) so that we can work with USB webcams, along with a library for FireWire cameras.
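The exact package lists from the original setup are not reproduced in this report; on an Ubuntu-based JetPack image a typical (assumed) set of commands would be:
$ sudo apt-get update
$ sudo apt-get install build-essential cmake git pkg-config
$ sudo apt-get install libtbb2 libtbb-dev
$ sudo apt-get install libv4l-dev v4l-utils
$ sudo apt-get install libdc1394-22-dev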
Step #6: Set up Python virtual environments on your Jetson Nano
Figure 3.9: Each Python virtual environment you create on your NVIDIA Jetson Nano is separate and
independent from the others.
I can’t stress this enough: Python virtual environments are a best practice when both
developing and deploying Python software projects.
Virtual environments allow for isolated installs of different Python packages. When you
use them, you could have one version of a Python library in one environment and another
version in a separate, sequestered environment.
In the remainder of this tutorial, we’ll create one such virtual environment; however, you
can create multiple environments for your needs after you complete Step #6. Be sure
to read the RealPython guide on virtual environments if you aren’t familiar with them.
First, we’ll install the de facto Python package management tool, pip:
$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py
$ rm get-pip.py
And then we’ll install my favorite tools for managing virtual
environments, virtualenv and virtualenvwrapper:
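The install command itself is not shown in the extracted text; with pip in place it would typically be:
$ sudo pip install virtualenv virtualenvwrapper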
The virtualenvwrapper tool is not fully installed until you add information to your bash
profile. Go ahead and open up your ~/.bashrc with the nano editor:
$ nano ~/.bashrc
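The lines to append to ~/.bashrc are not reproduced in the extracted text; the standard virtualenvwrapper configuration (the exact paths may differ on your system) is typically:
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh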
Save and exit the file using the keyboard shortcuts shown at the bottom of the nano
editor, and then load the bash profile to finish the virtualenvwrapper installation:
$ source ~/.bashrc
Figure 3.10: Terminal output from the virtualenvwrapper setup installation indicates that there are no
errors. We now have a virtual environment management system in place so we can create computer
vision and deep learning virtual environments on our NVIDIA Jetson Nano.
This step is dead simple once you’ve installed virtualenv and virtualenvwrapper in the
previous step. The virtualenvwrapper tool provides the following commands to work with
virtual environments:
● mkvirtualenv : Create a Python virtual environment
● lsvirtualenv : List the virtual environments on your system
● rmvirtualenv : Remove a virtual environment
● workon : Activate a Python virtual environment
● deactivate : Exit the virtual environment, taking you back to your system environment
Assuming Step #6 went smoothly, let’s create a Python virtual environment on our
Nano:
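The exact command is not shown in the extracted text; with virtualenvwrapper it would typically be:
$ mkvirtualenv py3cv4 -p python3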
I’ve named the virtual environment py3cv4, indicating that we will use Python 3 and OpenCV 4. You can name yours whatever you’d like depending on your project and software needs, or even your own creativity. When your environment is ready, your bash prompt will be preceded by (py3cv4). If your prompt is not preceded by your virtual environment name, at any time you can use the workon command as follows:
$ workon py3cv4
Figure 3.11: Ensure that your bash prompt begins with your virtual environment name for the remainder of
this tutorial on configuring your NVIDIA Jetson Nano for deep learning and computer vision.
For the remaining steps, you must be “in” the py3cv4 virtual environment.
3.3.2.1 Webcam:
Specifications:
• The Logitech C270 Web Camera (960-000694) is supported by the NVIDIA Jetson Nano Developer Kit.
• The C270 HD Webcam gives you sharp, smooth conference calls (720p/30fps) in a widescreen format. Automatic light correction shows you in lifelike, natural colors.
• It is suitable for use with the NVIDIA Jetson Nano and NVIDIA Jetson Xavier NX Developer Kits.
Face Detection:
Start capturing images through the web camera on the client side.
Begin:
● Calculate the eigenvalues of the captured face image and compare them with the eigenvalues of the existing faces in the database.
● If the eigenvalues do not match any existing ones, save the new face image information to the face database (XML file).
● If the eigenvalues match an existing entry, the recognition step is performed.
End
Face Recognition:
Using the PCA algorithm, the following steps are followed for face recognition:
Begin:
● Find the face information of the matched face image in the database.
● Update the log table with the corresponding face image and system time, which completes the attendance for an individual student.
End
This section presents the results of the experiment conducted to capture the face as a grayscale image of 50x50 pixels.
Figure 3.13 : Dataset sample
CHAPTER-4
CODE IMPLEMENTATION
4.1 Code Implementation:
All our code is written in the Python language. First, here is our project directory structure and its files.
Note: The names inside square brackets [“folder name”] indicate it is a folder.
[Attendance] => Contains all the attendance sheets saved after taking attendance.
[ImagesUnknown] => Unknown images are placed inside this folder to avoid false positives.
[EmployeeDetails] => Here we place the EmployeeDetails.csv file, used while recognizing faces.
[TrainingImage] => After capturing the dataset of a student, all his/her images are stored here.
4.1.1 main.py
All the work is done here: detect the face, recognize the faces and take attendance.
4.1.2 Capture_Image.py
This Capture_Image.py will collect the dataset of a student and add his/her name to the StudentsDetails.csv file.
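The original listing is included as a screenshot and is not reproduced here; a minimal sketch of what this step does (the folder, file and column names are assumptions) could look like:

import csv
import os
import cv2

def capture_dataset(enrollment_id, name, samples=50):
    os.makedirs("TrainingImage", exist_ok=True)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)
    count = 0
    while count < samples:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            count += 1
            # Save the grayscale face crop with the name and ID encoded in the file name.
            cv2.imwrite(f"TrainingImage/{name}.{enrollment_id}.{count}.jpg",
                        gray[y:y + h, x:x + w])
    cam.release()
    # Record the student's details for later lookup during recognition.
    with open("StudentsDetails.csv", "a", newline="") as details:
        csv.writer(details).writerow([enrollment_id, name])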
4.1.3 checkcamera.py
This checkcamera.py will check whether the camera is correctly connected or not and, if connected, whether a face is being detected or not.
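A minimal sketch of such a check (the original file is a screenshot; the details here are assumptions):

import cv2

def check_camera():
    cam = cv2.VideoCapture(0)
    if not cam.isOpened():
        print("Camera not connected or not accessible.")
        return
    ok, frame = cam.read()
    cam.release()
    if not ok:
        print("Camera connected, but no frame could be read.")
        return
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.3, 5)
    print("Face detected." if len(faces) > 0 else "Camera works, but no face was detected.")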
4.1.4 Train_Image.py
All the images in the TrainingImage folder are accessed here and a model is created using this Train_Image.py file.
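The actual listing is a screenshot; a minimal training sketch using OpenCV's LBPH recognizer from opencv-contrib (the folder layout and file-name convention are assumptions) might look like:

import os
import cv2
import numpy as np
from PIL import Image

def train_images(folder="TrainingImage", model_path="TrainingImageLabel/model.yml"):
    faces, labels = [], []
    for file_name in os.listdir(folder):
        image = Image.open(os.path.join(folder, file_name)).convert("L")   # grayscale face
        faces.append(np.array(image, dtype="uint8"))
        labels.append(int(file_name.split(".")[1]))      # assumed "<name>.<id>.<n>.jpg" naming
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train(faces, np.array(labels))
    os.makedirs(os.path.dirname(model_path), exist_ok=True)
    recognizer.write(model_path)                         # trained model saved for recognition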
4.1.5 Recognize.py
When this Recognize.py file is executed, the camera is opened; it recognizes the students listed in the Students.csv file, marks attendance automatically for those present, and saves it in the Attendance folder with the date and time.
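The listing itself is a screenshot; a minimal sketch of the recognition-and-attendance step (the paths, frame count and distance threshold are assumptions) could look like:

import csv
import os
import cv2
from datetime import datetime

def mark_attendance(model_path="TrainingImageLabel/model.yml",
                    details_csv="StudentsDetails.csv", threshold=60.0):
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read(model_path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Map enrollment IDs to student names from the details file.
    with open(details_csv) as details:
        names = {int(row[0]): row[1] for row in csv.reader(details) if row}

    present = {}
    cam = cv2.VideoCapture(0)
    for _ in range(100):                                   # scan a fixed number of frames
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            student_id, distance = recognizer.predict(gray[y:y + h, x:x + w])
            if distance < threshold and student_id in names:
                present[student_id] = names[student_id]    # lower distance = better match
    cam.release()

    os.makedirs("Attendance", exist_ok=True)
    now = datetime.now()
    with open(f"Attendance/Attendance_{now.strftime('%Y-%m-%d_%H-%M-%S')}.csv",
              "w", newline="") as sheet:
        writer = csv.writer(sheet)
        writer.writerow(["Enrollment", "Name", "Date", "Time"])
        for student_id, name in present.items():
            writer.writerow([student_id, name,
                             now.strftime("%Y-%m-%d"), now.strftime("%H:%M:%S")])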
4.1.6 requirements.txt
This file lists all the required packages to be installed before executing the code:
opencv-contrib-python
numpy
pandas
Pillow
pytest-shutil
python-csv
yagmail
4.2 Sample Images:
CHAPTER-5
WORK PLAN
5.1 Introduction:
A project work plan allows you to outline the requirements of a project, the project planning steps, the goals, and the team members involved. Within each goal, you outline the necessary key action steps, the requirements, and who is involved in each action step.
Month      Activity                                                          Status
August     Selection of project area and study of the related work.         Completed
October    Study of project-related works like face recognition and         Completed
           detection techniques.
October    Study of image processing in Python and Open Computer Vision.    Completed
The Financial Plan identifies the project finance needed to meet specific objectives. It defines all of the various types of expenses that the project will incur (equipment, materials and administration costs), along with an estimate of the value of each expense. The Financial Plan also summarizes the total expense to be incurred across the project, and this total becomes the project budget. As part of the financial planning exercise, a schedule is provided which states the amount of money needed during each stage of the project.
Components:
SD Card
Hardware Accessories
CHAPTER-6
PERFORMANCE ANALYSIS
6.1 Introduction:
6.2 Analysis:
6.3 Flow Chart:
CONCLUSION
Face recognition systems are part of facial image processing applications, and their significance as a research area has been increasing recently. Implementations of such systems include crime prevention, video surveillance, person verification and similar security activities, and a face recognition system can also be deployed in universities. The Face Recognition Based Attendance System has been envisioned for the purpose of reducing the errors that occur in the traditional (manual) attendance-taking process. The aim is to automate attendance and build a system that is useful to an organization such as an institute. It provides an efficient and accurate method of taking attendance that can replace the old manual methods, and the method is secure, reliable and available for use. The proposed algorithm is capable of detecting multiple faces, and the performance of the system shows acceptably good results.
REFERENCES
[1]. A Brief History of Facial Recognition, NEC, New Zealand, 26 May 2020. [Online]. Available: https://www.nec.co.nz/market-leadership/publications-media/a-brief-history-of-facial-recognition/
[2]. Face Detection, TechTarget Network, Corinne Bernstein, Feb. 2020. [Online]. Available: https://searchenterpriseai.techtarget.com/definition/face-detection
[3]. Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features", Accepted Conference on Computer Vision and Pattern Recognition, 2001.
[4]. Face Detection with Haar Cascade, Towards Data Science, Girija Shankar Behera, India, Dec 24, 2020. [Online]. Available: https://towardsdatascience.com/face-detection-with-haar-cascade-727f68dafd08
[5]. Face Recognition: Understanding LBPH Algorithm, Towards Data Science, Kelvin Salton do Prado, Nov 11, 2017. [Online]. Available: https://towardsdatascience.com/face-recognition-how-lbph-works-90ec258c3d6b
[6]. What is Facial Recognition and How Sinister is it, The Guardian, Ian Sample, July 2019. [Online]. Available: https://www.theguardian.com/technology/2019/jul/29/what-is-facial-recognition-and-how-sinister-is-it
[7]. Kushsairy Kadir, Mohd Khairi Kamaruddin, Haidawati Nasir, Sairul I Safie, Zulkifli Abdul Kadir Bakti, "A comparative study between LBP and Haar-like features for Face Detection using OpenCV", 4th International Conference on Engineering Technology and Technopreneurship (ICE2T), DOI: 10.1109/ICE2T.2014.7006273, 12 January 2015.
[8]. Senthamizh Selvi R., D. Sivakumar, Sandhya J.S., Siva Sowmiya S., Ramya S., Kanaga Suba Raja S., "Face Recognition Using Haar-Cascade Classifier for Criminal Identification", International Journal of Recent Technology and Engineering (IJRTE), vol. 7, issue 6S5, ISSN: 2277-3878, April 2019.
[9]. Robinson-Riegler, G., & Robinson-Riegler, B. (2008). Cognitive Psychology: Applying the Science of the Mind. Boston: Pearson/Allyn and Bacon.
[10]. Margaret Rouse, What is Facial Recognition? - Definition from WhatIs.com, 2012. [Online]. Available: http://whatis.techtarget.com/definition/facial-recognition
[11]. Robert Silk, Biometrics: Facial recognition tech coming to an airport near you, Travel Weekly, 2017. [Online]. Available: https://www.travelweekly.com/Travel-News/Airline-News/Biometrics-Facial-recognition-tech-coming-airport-near-you
[12]. Sidney Fussell, Facebook's New Face Recognition Features: What We Do (and Don't) Know, 2018. [Online]. Available: https://gizmodo.com/facebooks-new-face-recognition-features-what-we-do-an-1823359911
[13]. Reichert, C., Intel demos 5G facial-recognition payment technology, ZDNet, 2017. [Online]. Available: https://www.zdnet.com/article/intel-demos-5g-facial-recognition-payment-technology/#:~:text=Such%20%22pay%20via%20face%20identification,and%20artificial%20intelligence%20(AI). [Accessed 25 Mar. 2018].
[14]. Mayank Kumar Rusia, Dushyant Kumar Singh, Mohd. Aquib Ansari, "Human Face Identification using LBP and Haar-like Features for Real Time Attendance Monitoring", 2019 Fifth International Conference on Image Information Processing (ICIIP), Shimla, India, DOI: 10.1109/ICIIP47207.2019.8985867, 10 February 2020.