
SIDDAGANGA INSTITUTE OF TECHNOLOGY, TUMAKURU-572103

(An Autonomous Institute under Visvesvaraya Technological University, Belagavi)

Project Report on

“Safety Helmet Detection In Industrial Site”

submitted in partial fulfillment of the requirements for the completion of


VII semester of
BACHELOR OF ENGINEERING
in
COMPUTER SCIENCE & ENGINEERING

Submitted by

Arjun Sharma (1SI21AD010)


Mansi Singh (1SI21AD033)
Sohan Reddy (1SI21AD047)
Anoop RN (1SI22AD400)

under the guidance of


Dr. Shreenath K N
Associate Professor
Department of CSE
SIT, Tumakuru-03

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


2024-25
SIDDAGANGA INSTITUTE OF TECHNOLOGY, TUMAKURU-572103
(An Autonomous Institute under Visvesvaraya Technological University, Belagavi)
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

CERTIFICATE

This is to certify that the project work entitled “Safety Helmet Detection In Industrial
Site” is a bonafide work carried out by Arjun Sharma (1SI21AD010), Mansi Singh
(1SI21AD033), Sohan Reddy (1SI21AD047) and Anoop RN (1SI22AD400) in
partial fulfillment for the completion of VII semester of Bachelor of Engineering in
Artificial Intelligence & Data Science from Siddaganga Institute of Technology,
an autonomous institute under Visvesvaraya Technological University, Belagavi,
during the academic year 2024-25. It is certified that all corrections/suggestions
indicated for internal assessment have been incorporated in the report deposited in
the department library. The project report has been approved as it satisfies the
academic requirements in respect of the project work prescribed for the completion
of VII semester of the Bachelor of Engineering degree.

Dr. Shreenath K N, Associate Professor, Dept. of CSE, SIT, Tumakuru-03
Dr. N R Sunitha, Head of the Department, Dept. of CSE, SIT, Tumakuru-03
Dr. S V Dinesh, Principal, SIT, Tumakuru-03

External viva:
Names of the Examiners Signature with date
1.
2.
ACKNOWLEDGEMENT

We offer our humble pranams at the lotus feet of His Holiness, Dr. Sree Sree
Sivakumara Swamigalu, Founder President, and His Holiness, Sree Sree Siddalinga
Swamigalu, President, Sree Siddaganga Education Society, Sree Siddaganga Math,
for bestowing their blessings upon us.

We deem it a privilege to thank the late Dr. M N Channabasappa, Director, SIT,
Tumakuru, Dr. Shivakumaraiah, CEO, SIT, Tumakuru, and Dr. S V Dinesh,
Principal, SIT, Tumakuru, for fostering an excellent academic environment in this
institution, which made this endeavor fruitful.

We would like to express our sincere gratitude to Dr. N R Sunitha, Professor and Head,
Department of CSE, SIT, Tumakuru for her encouragement and valuable suggestions.

We thank our guide Dr. Shreenath K N, Associate Professor, Department of Computer
Science & Engineering, SIT, Tumakuru, for the valuable guidance, advice, invaluable
support, and assistance in providing the necessary resources, which significantly
contributed to the successful completion of our project.

Arjun Sharma (1SI21AD010)


Mansi Singh (1SI21AD033)
Sohan Reddy (1SI21AD047)
Anoop RN (1SI22AD400)
Course Outcomes

After successful completion of the major project, graduates will be able to:


CO1: To identify a problem through literature survey and knowledge of contemporary
engineering technology.
CO2: To consolidate the literature search to identify issues/gaps and formulate the
engineering problem.
CO3: To prepare a project schedule for the identified design methodology, engage in
budget analysis, and share responsibility among every member of the team.
CO4: To provide a sustainable engineering solution considering health, safety, legal,
and cultural issues, and also demonstrate concern for the environment.
CO5: To identify and apply the mathematical, science, engineering, and management
concepts necessary to implement the identified engineering problem.
CO6: To select the engineering tools/components required to implement the proposed
solution for the identified engineering problem.
CO7: To analyze, design, and implement an optimal design solution, interpret results
of experiments, and draw valid conclusions.
CO8: To demonstrate effective written communication through the project report, the
one-page poster presentation, the preparation of a video about the project, and the
four-page IEEE/Springer paper format of the work.
CO9: To engage in effective oral communication through PowerPoint presentation and
demonstration of the project work.
CO10: To demonstrate compliance with the prescribed standards/safety norms and
abide by the norms of professional ethics.
CO11: To perform in the team, contribute to the team, and mentor/lead the team.
Safety Helmet Detection In Industrial Site 2024-25

CO-PO Mapping

PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PSO1 PSO2 PSO3
CO-1 3 3
CO-2 3 3
CO-3 3 3
CO-4 3 3
CO-5 3 3 3 3
CO-6 3 3 3
CO-7 3 3 3
CO-8 3 3 3
CO-9 3 3
CO-10 3 3
CO-11 3 3 3
Average 3 3 3 3 3 3 3 3 3 3 3 3 3 3

Attainment level: 1: Slight (low), 2: Moderate (medium), 3: Substantial (high).


PO1: Engineering knowledge.
PO2: Problem analysis.
PO3: Design of solutions.
PO4: Conduct investigations of complex problems.
PO5: Engineering tool usage.
PO6: Engineer and the world.
PO7: Ethics.
PO8: Individual and collaborative work.
PO9: Communication.
PO10: Project management and finance.
PO11: Life-long learning.
PSO1: Computer-based systems development.
PSO2: Software development.
PSO3: Computer communications and Internet applications.

Dept.of AI&DS, S.I.T.,Tumakuru-03 2


Abstract
Ensuring worker safety is a key priority in industrial and construction environments.
Non-compliance with safety measures, such as not wearing helmets, poses significant
risks. This project addresses this issue by implementing a real-time monitoring system
using Artificial Intelligence (AI) and Computer Vision. The primary aim is to minimize
safety violations and create a safer workplace through automated detection. The system
combines facial recognition technology and object detection algorithms to identify workers
and check for helmet compliance. By processing live video feeds or recorded footage, the
solution accurately detects individuals who are not wearing helmets. This data is used
to generate alerts, thereby enabling timely intervention and ensuring adherence to safety
standards.
The system is developed using Python and integrates the YOLOv8 object detection model
for helmet detection with a facial recognition library to identify workers. A database
stores pre-encoded facial data, which is matched against live detections. Non-compliance
alerts are sent via email and audio notifications, and images of the violations are saved
for further review. The design ensures scalability and adaptability to various industrial
sites.
This project contributes significantly to industrial safety by introducing an automated
approach to compliance monitoring. Unlike manual methods prone to oversight, this
AI-driven solution offers consistent accuracy and reliability. By integrating real-time
alert systems and robust identification mechanisms, the system not only reduces the
risk of workplace injuries but also promotes a culture of safety and accountability
across industrial operations. Beyond immediate safety improvements, the system provides
valuable data-driven insights. Analyzing aggregated, anonymized data on non-compliance
trends helps identify high-risk areas or periods, enabling targeted interventions like
additional training or revised protocols. Its logging capabilities create a detailed
audit trail, aiding incident investigations and ensuring accountability. This shifts
safety management from a reactive approach to a proactive, continuously improving system.

Contents
Abstract i

List of Figures ii

List of Tables iii

1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Objective of the Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Organisation of the Report . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 Literature Survey 5
2.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

3 System Overview 10
3.1 AI-Based Helmet Detection and Facial Recognition for Workplace Safety . 10
3.1.1 Accurate system work flow diagram . . . . . . . . . . . . . . . . . . 12
3.2 Mathematical Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

4 System Architecture and High Level Design 17


4.1 Component 1: Input Module . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.1.1 More Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Component 2: Processing Module . . . . . . . . . . . . . . . . . . . . . . . 18
4.2.1 Key Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3 Component 3: Alert System . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3.1 Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.4 Software Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.4.1 Programming Language . . . . . . . . . . . . . . . . . . . . . . . . 19
4.4.2 Development Libraries . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.4.3 Integrated Development Environment (IDE) . . . . . . . . . . . . . 20
4.4.4 Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.4.5 Frameworks and Dependencies . . . . . . . . . . . . . . . . . . . . . 21
4.4.6 Storage Requirements . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.5 Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.5.1 Helmet Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.5.2 Facial Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.5.3 Alert Notification System . . . . . . . . . . . . . . . . . . . . . . . 22
4.5.4 Violation Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.5.5 Video Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.6 Non-Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.6.1 Performance Requirements . . . . . . . . . . . . . . . . . . . . . . . 22
4.6.2 Scalability Requirements . . . . . . . . . . . . . . . . . . . . . . . . 23
4.6.3 Reliability, Usability, Security, and Environmental Requirements . . 23
4.7 Additional Features and Enhancements . . . . . . . . . . . . . . . . . . . . 24

5 System Architecture And Low Level Design 26


5.1 Object Detection Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.1.1 Algorithm Description . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.1.2 Algorithm Description . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.1.3 Pseudocode for the Safety Monitoring System . . . . . . . . . . . . 29
5.2 Context Flow Diagram (CFD) . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.3 Data Flow Diagram (DFD) . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.4 Comparison between CFD and DFD . . . . . . . . . . . . . . . . . . . . . 31
5.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

6 Results 33
6.1 Test Set up Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
6.1.1 Input Components . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
6.1.2 Output Components . . . . . . . . . . . . . . . . . . . . . . . . . . 34
6.1.3 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
6.1.4 Data Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.1.5 Evaluation Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.2 Test Procedures and Test Cases . . . . . . . . . . . . . . . . . . . . . . . . 35
6.2.1 Testing Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35



6.3 Snapshots and Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.4 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.4.1 Accuracy Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.4.2 Traceability to Requirements . . . . . . . . . . . . . . . . . . . . . 37
6.4.3 Graph Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.4.4 Testcase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

7 Conclusion 39
7.1 Scope for future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Bibliography 42

Appendices 43

A Project Management and Budget Estimation Details 44


A.1 Project Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
A.2 Budget Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
A.3 Dataset Details / Input Details . . . . . . . . . . . . . . . . . . . . . . . . 45
A.3.1 Dataset Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
A.3.2 Input Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

B Configuration Details 48
B.1 GitHub Repository Link . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
B.2 Repository Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
B.3 Configuration Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
B.3.1 Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
B.3.2 Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

C Specifications and Standards 50


C.1 Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
C.1.1 Hardware Requirements . . . . . . . . . . . . . . . . . . . . . . . . 50
C.1.2 Software Requirements . . . . . . . . . . . . . . . . . . . . . . . . . 50
C.2 Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
C.3 Performance Benchmarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51



D Self Assessment of the Project 52
List of Figures
3.1 Accurate System Workflow Diagram . . . . . . . . . . . . . . . . . . . . . . 12

5.1 Context Flow Diagram (CFD) of the safety monitoring system. . . . . . . 31


5.2 Data Flow Diagram (DFD) of the Safety Monitoring System . . . . . . . . 32

6.1 A red bounding box is drawn around a worker’s face when no helmet is detected. . 33
6.2 Terminal output showing the violator’s ID and whether the mail was sent. . . . . 34
6.3 Graph illustrating accuracy and processing speed trends across test cases. . 36
6.4 Accuracy and processing speed trends across multiple test cases. . . . . . . 37

A.1 User Interface for UNSW NB 15 3.csv -Confusion Matrix. . . . . . . . . . . 44


A.2 User Interface for UNSW NB 15 3.csv -Confusion Matrix. . . . . . . . . . . 45

List of Tables
6.1 Helmet Detection Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . 38

Chapter 1
Introduction
This chapter introduces the project, highlighting the need for AI-based helmet detection
in industrial safety. The increasing need for workplace safety, particularly in industrial
and construction settings, calls for innovative and efficient monitoring solutions.
Traditional approaches to ensuring safety compliance, such as manual checks, are often
unreliable, time-consuming, and ineffective in large-scale operations. Despite regulatory
mandates, non-compliance with safety measures, such as the failure to wear protective
helmets, continues to contribute to workplace injuries and fatalities.
This project aims to address these challenges by developing an automated system that
leverages the power of Artificial Intelligence (AI) and Computer Vision. The proposed
system integrates face recognition technology with object detection algorithms to identify
workers and monitor their compliance with helmet usage requirements in real time. By
replacing manual supervision with AI-driven monitoring, the system ensures better safety
compliance while reducing human intervention and errors. The scope of this project
extends to the development and implementation of a robust and scalable system capable
of analyzing live or recorded video streams. Alerts are generated automatically for any
detected violations, ensuring timely corrective actions. The methodology combines
pre-trained models, advanced recognition algorithms, and notification systems, making the
solution adaptable to diverse workplace environments.
This chapter introduces the background and significance of the problem, defines the
project’s scope, and highlights the innovative approach adopted to enhance workplace
safety through technology. The alarming frequency of workplace accidents, particularly
in industries like construction and manufacturing, underscores the urgent need for
innovative safety solutions. Traditional safety protocols, often reliant on manual
observation and sporadic checks, struggle to maintain consistent compliance in dynamic
and complex work environments. This project addresses this critical gap by leveraging
the transformative potential of Artificial Intelligence and Computer Vision to develop
an automated, real-time helmet detection system. To further elaborate on the technical
aspects, the

YOLO model employed in this project is specifically trained on a dataset of images and
videos featuring workers in various industrial settings, both with and without helmets.
This training process fine-tunes the model’s ability to accurately identify helmets under
different lighting conditions, angles, and occlusions. The face recognition component
utilizes a separate pre-trained model, which is then adapted to recognize the faces of
workers within the specific environment. This involves capturing and storing facial
embeddings of authorized personnel, which are then compared against faces detected in
the video feed. The integration of these two technologies allows for a comprehensive
understanding of not just the presence or absence of a helmet, but also the identity of
the individual involved in any violation.
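The embedding comparison described above can be sketched in a few lines. This is a minimal, illustrative version: the worker IDs, the toy 3-dimensional vectors, and the 0.6 tolerance (the default used by the popular face_recognition library) are assumptions for demonstration, since real facial embeddings are typically 128-dimensional.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two embedding vectors of equal length.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify_worker(detected, database, tolerance=0.6):
    """Return the ID whose stored embedding is nearest to the detected
    embedding, or None if no stored face is within the tolerance."""
    best_id, best_dist = None, tolerance
    for worker_id, stored in database.items():
        d = euclidean(detected, stored)
        if d < best_dist:
            best_id, best_dist = worker_id, d
    return best_id

# Toy database of pre-encoded facial data keyed by USN (hypothetical values).
db = {"1SI21AD010": [0.1, 0.2, 0.3], "1SI21AD047": [0.9, 0.8, 0.7]}
print(identify_worker([0.12, 0.21, 0.29], db))  # prints 1SI21AD010
print(identify_worker([5.0, 5.0, 5.0], db))     # prints None (unknown face)
```

In a real deployment the embeddings would come from the face recognition model itself, and the matching step is what libraries expose as a compare-faces call with a tolerance parameter.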

The system architecture comprises several key modules working in tandem. The video
input, either from live cameras or recorded footage, is fed into the YOLO object detection
module. This module identifies potential helmets and faces within each frame. The face
recognition module then processes the detected faces, comparing them against the stored
database. The results are then passed to a logic module that determines compliance
based on the presence of a helmet and the identification of the individual. If a violation
is detected, an alert is triggered. This alert can take various forms, such as an email
notification to safety personnel, a visual alarm on a monitoring dashboard, or even a
direct message to the individual’s communication device.
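The decision step of the logic module can be illustrated with a short, self-contained sketch: a face with no sufficiently overlapping helmet box is flagged as a violation. The class labels ("helmet", "face"), the (x1, y1, x2, y2) box format, and the IoU threshold are illustrative assumptions, not the report's exact criterion.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def find_violations(detections, iou_threshold=0.1):
    """Return the boxes of faces with no overlapping helmet detection."""
    helmets = [box for label, box in detections if label == "helmet"]
    faces = [box for label, box in detections if label == "face"]
    return [f for f in faces
            if not any(iou(f, h) >= iou_threshold for h in helmets)]

# One frame's detections: the second face has no helmet near it.
frame = [("helmet", (10, 0, 50, 30)), ("face", (12, 20, 48, 60)),
         ("face", (100, 20, 140, 60))]
print(find_violations(frame))  # prints [(100, 20, 140, 60)]
```

Each returned box would then be passed to the identification and alert stages described above.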

The project emphasizes scalability and robustness. The system is designed to handle
multiple camera feeds concurrently, making it suitable for large-scale deployments.
The use of optimized algorithms and hardware acceleration ensures real-time performance,
minimizing latency between a violation occurring and an alert being generated.
Furthermore, the system incorporates error handling and fault tolerance mechanisms to
ensure continuous operation even in challenging environments. The collected data,
including violation instances and worker compliance rates, can be aggregated and
analyzed to identify trends and improve overall safety practices. This data-driven
approach facilitates proactive interventions and contributes to a safer and more
productive work environment.
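Concurrent multi-feed handling can be sketched with a producer/consumer design: one reader thread per camera pushes frames into a shared queue that a processing loop drains. This is an assumption about the design, not the report's implementation; real feeds would come from cv2.VideoCapture, whereas here each feed is simulated as a list of frame IDs.

```python
import queue
import threading

def reader(feed_id, frames, out_q):
    # Producer: tag each frame with its source camera and enqueue it.
    for frame in frames:
        out_q.put((feed_id, frame))

def process_all(feeds):
    """Drain every simulated feed through one shared queue."""
    out_q = queue.Queue()
    threads = [threading.Thread(target=reader, args=(fid, frames, out_q))
               for fid, frames in feeds.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    processed = []
    while not out_q.empty():
        # Detection and recognition would run on each frame here.
        processed.append(out_q.get())
    return processed

feeds = {"cam1": ["f1", "f2"], "cam2": ["f3"]}
result = process_all(feeds)
print(len(result))  # prints 3
```

Because queue.Queue is thread-safe, readers never interleave partial frames; the same structure extends to a long-running consumer thread for live streams.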

1.1 Motivation
In today’s industrial and construction sectors, ensuring the safety of workers is paramount.
Despite strict regulations, non-compliance with safety measures such as wearing helmets

is still prevalent, leading to preventable injuries and fatalities. Manual safety monitoring
methods are often inefficient, prone to errors, and unable to scale with the increasing size of
workforces. This project stems from the need to address these limitations by automating
safety monitoring systems. The motivation lies in leveraging Artificial Intelligence (AI)
and Computer Vision technologies to create an efficient, real-time, and scalable solution
that enhances compliance and minimizes risks.
Worker safety is crucial in industrial and construction sectors. Despite strict
regulations, non-compliance with essential safety measures like helmet usage persists,
leading to preventable injuries and fatalities. Traditional manual monitoring methods
are inefficient, error-prone, and struggle to scale with growing workforces.
Additionally, they can be inconsistent and subjective, resulting in unreliable safety
assessments. By automating safety monitoring, the system shifts from reactive measures
to a proactive, data-driven approach, fostering a safer and more productive work
environment. The proposed system will provide real-time alerts for violations, maintain
compliance records, and integrate seamlessly with existing workplace infrastructure. By
enabling continuous, unbiased monitoring, it ensures a more effective and reliable
safety management system, ultimately reducing workplace accidents and improving overall
operational efficiency.

1.2 Objective of the Project


1. Develop an automated system that utilizes facial recognition and object detection to
accurately detect helmet usage and identify workers in live or recorded video feeds.

2. Implement a system capable of detecting non-compliance in various environmental
conditions and generating real-time alerts for timely intervention and corrective
action.

3. Ensure the solution is cost-effective, scalable, and adaptable to different industrial
environments, integrating seamlessly with existing infrastructure while providing
user-friendly monitoring and analysis tools.

1.3 Organisation of the Report


Chapter 1 provides an overview of the project, covering the motivation, problem
statement, objectives, and a brief introduction to the proposed solution and methodology.
Chapter 2 presents a literature review, summarizing existing research and related works
on helmet detection, face recognition, and safety monitoring systems. It also explores
the historical context of workplace safety and the evolution of safety regulations.
Chapter 3 explains the system overview, including the architecture, design methodology,
block diagram, and the interaction of different system components. It also discusses the
chosen hardware and software platforms.
Chapter 4 details the system’s high-level design, including the architecture, functional
and non-functional requirements, and key components such as YOLO-based object detection,
face recognition, and the alert mechanism.
Chapter 5 focuses on software architecture and low-level design, presenting algorithms,
pseudocode, flowcharts, data flow diagrams (DFD), and context flow diagrams (CFD) to
illustrate system operations. Implementation details, including programming languages
and libraries, are also discussed.
Chapter 6 presents the results, including the test setup, test cases, expected versus
actual results, snapshots of detected violations, compliance reports, and system alerts.
It also evaluates performance metrics such as accuracy, processing speed, and robustness,
along with comparisons to existing solutions.
Chapter 7 summarizes the project, highlighting key contributions, achievements, and
potential future enhancements. It also discusses the broader impact of the research on
workplace safety.
The Bibliography includes all references, from research articles and technical papers to
online resources, consulted during the project development.
The Appendices provide supporting materials such as dataset details, input
configurations, code snippets, and additional information that complements the main
report.
This structured organization ensures a clear understanding of the project, from its
conceptualization and research foundation to implementation, evaluation, and future
directions.



Chapter 2
Literature Survey
This chapter gives an overview of the studies regarding helmet detection, recognition, and
AI-enabled systems. It explores the evolution of these technologies and their application
in industrial safety, highlighting the increasing reliance on AI for robust and efficient
solutions.
Li, Zhang, and Wang have researched deep learning techniques for enhancing helmet
detection in industrial environments [1]. Their work focuses on the critical need for
accurate and reliable helmet detection to improve workplace safety and reduce the risk
of head injuries. They likely explored various deep learning architectures, potentially
including advancements like YOLOv5 for real-time object detection. This research
contributes to the development of more robust and effective safety monitoring systems,
ultimately aiming to create safer industrial workplaces. Their research may have also
investigated the impact of different training datasets and data augmentation techniques
on the model’s performance.
Kim, Park, and Lee have proposed a deep learning-based method combining YOLO and
ResNet to enhance helmet detection, particularly in challenging low-light conditions [2].
This research addresses the difficulty of maintaining accurate object detection when image
visibility is poor, a common problem in industrial settings. A key contribution might be
demonstrating the effectiveness of their combined architecture for robust helmet detection
regardless of lighting or achieving superior performance. This work is significant for
ensuring safety in a wider range of industrial environments. Future research directions
could explore the generalizability of this approach to other low-visibility scenarios, such
as fog or heavy rain.
Singh, Gupta, and Sharma have proposed a hybrid model integrating IoT and AI for
real-time monitoring of helmet violations [3]. This research likely involves a system
where IoT devices (e.g., cameras, sensors) capture images/videos of workers. The AI
model might use object detection and potentially other techniques to identify workers
not wearing helmets. A key contribution might be a working prototype demonstrating the
feasibility of real-time safety monitoring. This real-time capability is crucial for
immediate intervention and accident prevention. The scalability and reliability of the
IoT infrastructure for deploying such a system in large-scale industrial settings would
be a crucial consideration.

Zhang, Li, and Zhao have presented a comparative study of YOLOv5 and Faster R-CNN for
helmet detection tasks [4]. Their research analyzes the performance of these two popular
object detection models in the specific context of helmet detection. The comparison
likely considers factors such as accuracy, speed, and computational requirements. This
study provides valuable information for researchers and practitioners in selecting the
most suitable model for their needs. Further research could explore how transfer
learning can be effectively utilized to improve the performance of these models with
limited training data.

Gao, Ma, Wu, and Yu have proposed a facial recognition system integrated with RFID
technology for improved worker authentication accuracy in industrial applications [5].
This research aims to enhance security and access control in workplaces by combining
facial recognition with RFID. The integration of these technologies could also be explored
for timekeeping, attendance tracking, and other workforce management applications. The
privacy implications of collecting and storing biometric data like facial recognition need
careful consideration and robust security measures.

Chen and Zhao created systems utilizing federated learning for facial recognition in
industrial workplaces, addressing ethical and privacy concerns [6]. Their work focuses
on the balance between security and individual privacy. Federated learning allows for
training models on decentralized data sources without directly sharing sensitive
information, a key aspect of their proposed systems. This approach is crucial for
responsible AI implementation in industrial settings. Their research likely explores
the specific challenges of implementing federated learning in resource-constrained
industrial environments and may propose solutions to optimize communication and
computation efficiency.

Li and Sun (2021) have proposed algorithms to reduce bias in facial recognition models,
particularly focusing on minimizing false positives in specific demographic groups [7].
This research tackles the important ethical issue of fairness in facial recognition
technology. Their algorithms likely aim to create more equitable and inclusive systems.
A key contribution is a new algorithm that significantly reduces bias in facial
recognition or a comprehensive analysis of bias in existing facial recognition models.
This work is essential for ensuring that these systems do not perpetuate or amplify
societal biases. Their research might delve into the different types of bias that can
exist in facial recognition systems and evaluate the effectiveness of their proposed
algorithms in mitigating these biases.

Wang and Qu proposed a study on the impact of illumination and pose variations on the
performance of facial recognition systems [8]. This research analyzes how changes in
lighting conditions and head pose affect the accuracy of facial recognition.
Understanding these impacts is crucial for designing robust systems. A key contribution
might be a detailed analysis of the effects of illumination and pose on different facial
recognition algorithms or recommendations for improving system robustness to these
variations. The findings can inform the development of techniques to mitigate these
challenges in real-world deployments. Their study could also investigate the use of
data augmentation techniques or image preprocessing methods to improve the robustness
of facial recognition systems to illumination and pose variations.

Patel, Kumar, and Ahmed (2022) created a system integrating AI and IoT to reduce
safety violations by leveraging real-time data and intelligent analysis [9]. This system
likely uses sensors and other IoT devices to collect data on workplace conditions and
worker behavior. AI algorithms are then used to analyze this data and identify potential
safety hazards or violations. This research contributes to the development of proactive
safety management systems. Their work may explore the challenges of integrating data
from diverse IoT devices and the development of efficient algorithms for real-time data
processing and analysis.

Alvarez and Perez used predictive analytics techniques to anticipate safety violations
using historical data and environmental parameters [10]. By analyzing past incidents
and environmental factors, the authors aim to predict future safety violations. This
information can be used to implement targeted interventions and prevent accidents. This
research highlights the potential of data-driven approaches to safety management. Their
research could investigate the use of machine learning models to predict safety violations
and evaluate the accuracy and reliability of these predictions. They might also explore the
ethical implications of using predictive analytics in safety management and the importance
of ensuring fairness and transparency in these systems.

Li, Zhang, and Yang proposed ensemble models for safety compliance detection in smart factories
[11]. This research aims to improve the accuracy and robustness of safety monitoring
in complex, automated industrial environments. A key contribution might be demon-
strating the effectiveness of ensemble learning for detecting safety violations in smart
factories. Specific ensemble methods (e.g., bagging, boosting) and datasets used should
be mentioned, along with an analysis of their performance compared to single-model ap-
proaches. Further research could explore the optimization of ensemble architectures for
real-time processing in industrial settings.

Gao and Xu proposed hyperparameter optimization in YOLO models for enhanced safety
compliance detection in noisy environments [12]. This research focuses on tuning the pa-
rameters of YOLO models to improve their performance in challenging conditions. They
likely explore different optimization algorithms and evaluate their impact on detection
accuracy and speed. A key contribution might be identifying optimal hyperparameter
settings for robust safety compliance detection. Future research could investigate the
transferability of these optimal hyperparameters to different industrial settings and ex-
plore adaptive hyperparameter tuning methods.

Huang and Zhang reviewed scalability and robustness challenges in deploying AI
models in large-scale industrial environments [13]. This research examines the practical
difficulties of implementing AI-based safety systems in real-world industrial settings. They
likely discuss issues related to data management, model training, and system integration.
Future work could involve developing best practices for AI deployment in this domain.
Specific AI models and industrial use cases discussed should be mentioned, along with
potential solutions to the identified challenges. They might also explore the role of edge
computing in addressing scalability challenges.

Smith and Lee discussed ethical considerations in the use of AI for workplace
safety, focusing on privacy and worker consent [14]. This research explores the
ethical implications of using AI to monitor worker behaviour and safety. A key contri-
bution might be raising awareness of these ethical concerns and proposing guidelines for
responsible AI implementation. Future work could involve developing ethical frameworks
for AI in workplace safety. Specific ethical principles and legal considerations should be
mentioned, along with an analysis of potential biases in AI-driven safety systems.

Johnson and Williams proposed integrating augmented reality (AR) with AI for immersive
safety training in industrial environments [15]. This research explores the potential of
AR to create more engaging and effective safety training programs. They likely discuss
how AR can be used to simulate hazardous situations and provide workers with hands-
on experience in a safe environment. A key contribution might be demonstrating the
benefits of AR-based training compared to traditional methods. They might also explore
the development of personalized AR training programs tailored to individual worker needs
and the integration of haptic feedback for a more immersive experience.

2.1 Summary
The literature survey highlights advancements in helmet detection, safety monitoring, and
AI-enabled systems in industrial environments. Key studies explore deep learning tech-
niques like YOLOv5 and ResNet for robust helmet detection, including in low-light con-
ditions, and the integration of IoT for real-time monitoring of safety violations. Research
also compares object detection models like YOLOv5 and Faster R-CNN and investigates
facial recognition combined with RFID for enhanced worker authentication. Ethical con-
siderations in AI, including privacy and bias reduction, are also addressed, with federated
learning and predictive analytics offering solutions for data security and safety violation
prevention. Additionally, studies discuss hyperparameter optimization, ensemble models
for improved accuracy, and augmented reality for immersive safety training. These con-
tributions collectively enhance industrial safety, improve detection accuracy, and ensure
ethical AI deployment in workplace environments.



Chapter 3
System Overview
3.1 AI-Based Helmet Detection and Facial Recognition for Workplace Safety
This chapter reviews AI-based helmet detection and facial recognition techniques used in
workplace safety. It analyzes deep learning models like YOLO and ResNet for accuracy,
speed, and real-time performance. It explores the evolution of these techniques, high-
lighting their strengths and limitations in the context of industrial safety applications.
The chapter may also discuss the importance of dataset quality and diversity for training
robust and reliable models. This system integrates several components, each serving a
critical role:

1. Input Data Acquisition: The system begins with data acquisition through video
streams or image feeds captured using surveillance cameras. These inputs serve as
the foundation for further processing and analysis. Different types of cameras, such
as RGB, thermal, or depth cameras, could be utilized depending on the specific
application requirements and environmental conditions. The placement and config-
uration of these cameras are crucial for ensuring adequate coverage and capturing
high-quality images.

2. Preprocessing: Captured frames are resized, filtered, and normalized to ensure
optimal performance for subsequent detection algorithms. This stage is essential to
maintain the system’s speed and accuracy under varying conditions. Preprocessing
techniques might include noise reduction, image enhancement, and color correction
to improve the clarity and consistency of the input data. The specific preprocessing
steps employed may depend on the characteristics of the input images and the
requirements of the chosen detection algorithms.

3. YOLOv8 Helmet Detection: A custom-trained YOLOv8 object detection model
is employed to detect the presence of helmets in the video frames. This model identifies
helmets and workers in real-time, drawing bounding boxes around detected
objects. YOLOv8’s architecture and training process could be discussed in more
detail, including the choice of loss functions and optimization algorithms. The per-
formance of the YOLOv8 model could be evaluated using metrics such as precision,
recall, and mean average precision (mAP).

4. Facial Recognition: Once a worker is detected, the system uses a facial recognition
algorithm to match the detected face with a pre-encoded database of workers. This
step enables the system to uniquely identify individuals and associate compliance
results with specific IDs. Different facial recognition algorithms, such as FaceNet or
ArcFace, could be employed, and their performance compared in terms of accuracy
and computational efficiency. The database of worker faces needs to be securely
managed and regularly updated to ensure accuracy and privacy.

5. Compliance Check: The system cross-verifies the detected worker with the pres-
ence of a helmet. If a worker is identified without a helmet, the system flags a
safety violation. The criteria for determining a safety violation could be config-
urable, allowing for flexibility in defining compliance rules. The system might also
incorporate mechanisms for handling ambiguous cases, such as partially obscured
faces or helmets.

6. Violation Logging: For every violation, the system captures and stores the corre-
sponding frame along with relevant metadata, such as the worker ID and timestamp.
This feature is essential for maintaining a compliance log for review. The stored
data could be used for generating reports, analyzing trends in safety violations, and
identifying areas for improvement in safety protocols. The system should also in-
clude mechanisms for data privacy and security, ensuring that sensitive information
is protected from unauthorized access. Furthermore, the system might incorporate
notification mechanisms to alert supervisors or safety officers in real-time about
detected violations.

7. Alert Notification: An alert mechanism is integrated to send real-time notifications
via email and audio to designated personnel. The email includes the worker’s
ID and an image of the violation for immediate action. The alert system could be
configurable, allowing users to customize notification preferences, such as the fre-
quency and types of alerts. Different notification channels, like SMS or mobile app
push notifications, could also be integrated for increased accessibility. The system
might also include an escalation mechanism, where alerts are escalated to higher-
level personnel if the initial alert is not acknowledged within a certain timeframe.

8. Modular System Design: The system is modular, ensuring easy maintenance and
scalability. Each component is connected seamlessly to process inputs, generate
outputs, and respond to non-compliance events effectively. This modular design
allows for independent updates and upgrades to individual components without
affecting the rest of the system. The system can be scaled to accommodate larger
workforces and more complex environments by adding more cameras and processing
units. The modularity also facilitates integration with other safety management
systems and platforms.

3.1.1 Accurate System Workflow Diagram

Figure 3.1: Accurate System Workflow Diagram.

Figure 3.1 comprises the following components:

1. Input Data Acquisition: The system begins with the acquisition of input data,
typically video streams or image frames from cameras installed in the workspace.
Different types of cameras, such as RGB, thermal, or depth cameras, could be
used depending on the specific application requirements. The cameras should be
strategically placed to ensure optimal coverage of the work area and minimize blind
spots. The system may also support different video resolutions and frame rates,
allowing for flexibility in balancing performance and resource utilization. This data
is fed into the system for further analysis. The data acquisition module could include
pre-processing steps, such as image format conversion or compression, to optimize
data transfer and storage.

2. Preprocessing: The raw input data undergoes preprocessing steps, including re-
sizing, noise reduction, and format conversion. Various image processing techniques,
like filtering, normalization, and color correction, could be applied to enhance the
quality of the input data and improve the accuracy of subsequent analysis. The
specific preprocessing steps used may depend on the characteristics of the input im-
ages and the requirements of the chosen detection algorithms. These steps ensure
compatibility with the YOLOv8 object detection model and optimize processing
efficiency. The preprocessing module could be implemented using hardware accel-
eration to improve performance and reduce latency.

3. YOLOv8-Helmet Detection: The preprocessed data is passed to the YOLOv8
model, a state-of-the-art object detection algorithm. YOLOv8’s architecture and
training process could be described in more detail, including the choice of loss func-
tions and optimization algorithms. The performance of the YOLOv8 model could
be evaluated using metrics such as precision, recall, and mean average precision
(mAP). YOLOv8 identifies the presence of helmets and determines whether a worker
is compliant with safety regulations. The model could be trained to detect different
types of helmets or other safety equipment, depending on the specific needs of the
application.

4. Facial Recognition: If a helmet violation is detected, the system employs facial
recognition to identify the worker. Different facial recognition algorithms, such
as FaceNet or ArcFace, could be used, and their performance compared in terms
of accuracy and computational efficiency. The database of worker faces needs to
be securely managed and regularly updated to ensure accuracy and privacy. This
step cross-references the detected face with a pre-stored database of worker facial
features. The facial recognition module could include liveness detection techniques
to prevent spoofing attacks.

5. Compliance Check: The system integrates outputs from the helmet detection
and facial recognition modules. The compliance check module could implement
logic to handle cases where the facial recognition module fails to identify a worker,
such as when the face is partially obscured. It checks for compliance by confirming
the absence of a helmet for an identified worker. The system could be configured
to trigger different actions based on the severity of the violation, such as issuing a
warning or notifying a supervisor.

6. Violation Logging: If non-compliance is confirmed, the system logs the violation.
This logging process is crucial for record-keeping, analysis, and potential disciplinary
actions. The system should ensure that the logging is accurate, reliable, and tamper-
proof to maintain the integrity of the safety records. Consideration should be given
to data privacy regulations and secure storage of sensitive information. The log
includes details such as the worker’s identity, time, date, and image evidence of the
violation. Additional information, such as the location of the violation, the type
of violation, and any environmental factors that may have contributed, could be
included for a more comprehensive record. The system might also allow for manual
annotations or notes to be added to the log by safety personnel.

7. Alert Notification: Simultaneously, an alert is generated and sent to the concerned
authorities or supervisors. The alert should be immediate and attention-
grabbing to ensure prompt response. Different alert levels could be implemented
based on the severity of the violation, allowing for prioritized attention to the most
critical issues. Notifications can include email and audio alerts with attached im-
ages of the violation for immediate action. Other notification methods, such as SMS
messages, mobile app push notifications, or alerts through a dedicated safety man-
agement platform, could be integrated for increased accessibility and redundancy.

The alert system should be configurable, allowing users to customize notification
preferences and escalation procedures.

This workflow ensures seamless monitoring,
efficient detection, and prompt intervention for safety violations in the workplace,
making it a robust system for enhancing compliance and safety standards. By au-
tomating the detection and notification process, the system reduces the reliance on
manual observation and improves the speed and consistency of responses to safety
violations. This proactive approach contributes to a safer work environment and a
stronger safety culture. Furthermore, the data collected by the system can be used
to identify trends, assess risks, and implement targeted interventions to prevent
future violations.

3.2 Mathematical Formulation


This section formalizes the core computations implemented in the system’s code logic.

The face-matching step computes the Euclidean distance between two face encoding
vectors:

d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}

where x and y are encoding vectors of length n. This distance is a measure of the
dissimilarity between the two faces: a smaller distance indicates a higher similarity.
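This distance computation can be sketched in Python with NumPy. The 4-dimensional vectors below are purely illustrative; real face encodings are typically much longer (for example, 128-dimensional in common face-recognition libraries).

```python
import numpy as np

def face_distance(known_encoding: np.ndarray, candidate_encoding: np.ndarray) -> float:
    """Euclidean distance between two face encoding vectors.

    A smaller distance indicates the two faces are more similar.
    """
    return float(np.linalg.norm(known_encoding - candidate_encoding))

# Illustrative 4-D vectors; real face encodings are typically 128-D.
a = np.array([3.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 0.0, 0.0])
print(face_distance(a, b))  # 3.0
```

A match is then declared when this distance falls below a tuned tolerance.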

The overlap between the bounding boxes of a detected person and a helmet is measured
with the Intersection over Union (IoU):

IoU = Area of Overlap / Area of Union

The following equation describes the overlap detection logic:

    Wearing Helmet = True,  if IoU ≥ threshold
                     False, otherwise                                  (3.1)

where:

- IoU = Intersection over Union of the bounding boxes, computed as the area of
  intersection between the two boxes divided by the area of their union.
- threshold = a predefined value used to classify significant overlap. This threshold
  is a hyperparameter tuned for the specific application and dataset; a typical value
  is 0.5 or higher.

Equation (3.1) determines whether a worker is wearing a helmet: if the IoU is greater
than or equal to the threshold, the worker is considered to be wearing a helmet.
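The IoU computation and the threshold test of Equation (3.1) can be sketched directly in Python. Boxes are given as (x1, y1, x2, y2) pixel coordinates, and the 0.5 default follows the typical value mentioned above; both conventions are assumptions about the implementation.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero when boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def wearing_helmet(person_box, helmet_box, threshold=0.5):
    """Equation (3.1): True when the two boxes overlap significantly."""
    return iou(person_box, helmet_box) >= threshold
```

Identical boxes give an IoU of 1.0, disjoint boxes give 0.0, and partial overlaps fall in between.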

Another important equation governs alert generation:

    Alert = True,  if (Wearing Helmet = False) AND (Worker ID = Unknown)
            False, otherwise                                           (3.2)

where:

- Wearing Helmet = the helmet detection status (True or False), as determined by
  Equation (3.1).
- Worker ID = the identity returned by the facial recognition module, or Unknown
  when no match is found in the database.

Equation (3.2) ensures that an alert is generated only if the detected worker is not
wearing a helmet and is also an unrecognized person.
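Equation (3.2) maps directly onto a small predicate. The "Unknown" string sentinel for an unmatched face is an assumed convention, not necessarily the project's exact representation.

```python
def should_alert(is_wearing_helmet: bool, worker_id: str) -> bool:
    """Equation (3.2): alert only when the person has no helmet AND the
    facial recognition module returned no known identity ("Unknown")."""
    return (not is_wearing_helmet) and worker_id == "Unknown"

print(should_alert(False, "Unknown"))  # True
print(should_alert(False, "W-102"))    # False
print(should_alert(True, "Unknown"))   # False
```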



Chapter 4
System Architecture and High-Level Design
This chapter details the architecture and design of the proposed safety monitoring sys-
tem, illustrating its structure and functionalities. The system is divided into modular
components, each contributing to the project’s objectives. The architecture defines the
technical requirements, system functionalities, and constraints to ensure an efficient and
scalable solution. This modularity promotes maintainability, flexibility, and potential
future expansion of the system.

4.1 Component 1: Input Module


The Input Module is responsible for capturing video data, either from live camera feeds or
pre-recorded footage. This module acts as the entry point of the system and supplies the
video frames to subsequent components for processing. The choice of input method (live
or recorded) can be configured based on the specific needs of the deployment environment.
The module is designed to handle various video sources and formats.

4.1.1 More Details


• The module uses OpenCV (Open Source Computer Vision Library) to interface with
cameras for capturing live video streams.

• Supports input from USB cameras, IP cameras, or pre-recorded video files, ensuring
compatibility with various deployment settings.

• Converts video frames into a format suitable for processing, such as RGB (Red,
Green, Blue) arrays.

• Ensures smooth frame capture at real-time speeds to maintain efficiency.

• Potential additions:

– Error handling for camera connection issues or file reading errors.

– Buffering or queuing mechanisms to handle temporary fluctuations in input
data rates.

– Camera calibration and image rectification to improve accuracy.

4.2 Component 2: Processing Module


The Processing Module is the central component responsible for detecting workers, rec-
ognizing faces, and identifying helmet compliance. It integrates YOLO (You Only Look
Once) for object detection and a facial recognition library for worker identification.

4.2.1 Key Features


• YOLO Integration: Detects helmets in each frame using bounding boxes and
classification with high accuracy.

• Facial Recognition Library: Integrates a facial recognition library (e.g., FaceNet,
ArcFace) to identify workers.

• Compliance Logic: Combines results from YOLO (helmet detection) and facial
recognition to determine if a worker is wearing a helmet.

• Overlap Analysis: Checks if a detected helmet overlaps with a worker’s bounding
box to confirm compliance.

• Confidence Filtering: Implements algorithms to calculate confidence scores and
discard low-confidence detections.
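The compliance logic above can be sketched without any detection libraries. In practice the boxes and confidence scores would come from the YOLOv8 and facial recognition modules; the box format, confidence threshold, and simple rectangle-overlap test below are illustrative assumptions.

```python
CONF_THRESHOLD = 0.5  # assumed minimum confidence; lower detections are discarded

def overlaps(box_a, box_b):
    """True when two (x1, y1, x2, y2) boxes intersect at all."""
    return not (box_a[2] <= box_b[0] or box_b[2] <= box_a[0]
                or box_a[3] <= box_b[1] or box_b[3] <= box_a[1])

def check_compliance(workers, helmets):
    """workers: [(worker_id, box, conf)]; helmets: [(box, conf)].

    Returns {worker_id: True/False}, where True means a confident helmet
    detection overlaps the worker's bounding box.
    """
    helmets = [h for h in helmets if h[1] >= CONF_THRESHOLD]
    status = {}
    for worker_id, wbox, wconf in workers:
        if wconf < CONF_THRESHOLD:
            continue  # discard low-confidence worker detections
        status[worker_id] = any(overlaps(wbox, hbox) for hbox, _ in helmets)
    return status

result = check_compliance(
    workers=[("W-101", (10, 10, 60, 160), 0.9), ("W-102", (200, 20, 250, 170), 0.8)],
    helmets=[((15, 5, 55, 40), 0.85)],
)
print(result)  # {'W-101': True, 'W-102': False}
```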

4.3 Component 3: Alert System


The Alert System generates notifications (email and audio) when a violation is detected.
This component ensures real-time communication with supervisors.

4.3.1 Functionality
• Sends email and audio alerts to managers using the SMTP (Simple Mail Transfer
Protocol) protocol. Email alerts can provide detailed information about the viola-
tion, including the worker’s ID, timestamp, and an image of the violation. Audio
alerts can be used to notify personnel in the immediate vicinity of the violation. The
system could be configured to send alerts to different individuals or groups based
on the type or severity of the violation.

• Saves images of violations in a designated directory for review and documentation.


Storing images of violations provides valuable evidence for incident investigations
and safety training purposes. The images should be stored securely and organized
in a way that makes it easy to retrieve them. Consideration should be given to data
privacy regulations and the retention period for these images.

• Logs worker IDs and timestamps for detected violations in a database. Logging
violations in a database allows for tracking trends, identifying high-risk areas, and
generating reports on safety compliance. The database should be designed to store
the data efficiently and securely. The system could provide tools for querying and
analyzing the data to gain insights into safety performance.

• Ensures alerts are triggered only when non-compliance is confirmed. This prevents
false alarms and ensures that alerts are only sent when a genuine safety violation
has occurred. The system should be robust to handle situations where there might
be temporary fluctuations in the input data.
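The email path can be sketched with Python's standard smtplib and email modules. The host, addresses, and password below are placeholders; a real deployment would load them from configuration rather than hard-code them.

```python
import smtplib
from email.message import EmailMessage

def build_alert(worker_id, timestamp, image_bytes,
                sender="alerts@example.com", recipient="supervisor@example.com"):
    """Compose the violation email with the captured frame attached."""
    msg = EmailMessage()
    msg["Subject"] = f"Helmet violation: worker {worker_id} at {timestamp}"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"Worker {worker_id} was detected without a helmet at {timestamp}.")
    msg.add_attachment(image_bytes, maintype="image", subtype="jpeg",
                       filename="violation.jpg")
    return msg

def send_alert(msg, host="smtp.example.com", port=587, password="app-password"):
    """Deliver the alert over SMTP with STARTTLS encryption."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()  # encrypt the connection before logging in
        server.login(msg["From"], password)
        server.send_message(msg)
```

Separating message construction from delivery makes the alert content easy to test without a live mail server.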

4.4 Software Requirements


This section outlines the software tools and frameworks utilized for the implementation
of the helmet detection and facial recognition system.

4.4.1 Programming Language


Python is selected as the primary programming language due to its simplicity, vast library
support, and compatibility with AI frameworks. It facilitates rapid development and
testing. Python’s extensive ecosystem of libraries and frameworks makes it well-suited
for computer vision and machine learning applications. Its ease of use and readability
also contribute to faster development cycles.

4.4.2 Development Libraries


• OpenCV: Used for image and video processing, enabling frame capture, resizing,
and annotation with bounding boxes. OpenCV provides a wide range of functions
for image and video manipulation, making it a powerful tool for computer vision
tasks. It supports various image and video formats and offers optimized algorithms
for real-time processing.

• Face Recognition: Provides pre-built methods for face detection and encoding,
simplifying the identification process. This library likely leverages deep learning
models for facial recognition and provides easy-to-use APIs for face detection, en-
coding, and comparison. The specific face recognition library used (e.g., FaceNet,
ArcFace) should be mentioned, along with a justification for its selection. Consid-
eration should be given to the accuracy, speed, and resource requirements of the
chosen library.

• NumPy: Handles numerical operations and data manipulation, particularly for
image matrices. NumPy is a fundamental library for scientific computing in Python.
Its efficient array operations are essential for image processing tasks, as images are
represented as numerical matrices. NumPy allows for fast manipulation of pixel
data and efficient implementation of image processing algorithms.

• JSON: Stores facial encodings in a lightweight format for easy retrieval and updates.
JSON (JavaScript Object Notation) is a widely used format for data interchange.
Storing facial encodings in JSON format allows for easy storage, retrieval, and up-
dating of the facial recognition database. It’s a human-readable format that can be
easily parsed by the system.

• smtplib: Manages email notifications for safety alerts, ensuring quick communication
with supervisors. smtplib is a Python library for sending emails using the
Simple Mail Transfer Protocol (SMTP). It provides the functionality to connect to
an email server and send email notifications, including attachments (e.g., images of
violations). Using smtplib simplifies the implementation of the alert system.

4.4.3 Integrated Development Environment (IDE)


The project is developed using tools such as Visual Studio Code and Jupyter Notebook,
offering debugging support, syntax highlighting, and seamless integration with Python
libraries. These IDEs provide a productive environment for coding, testing, and debug-
ging. Visual Studio Code is a powerful and versatile code editor with excellent support
for Python development. Jupyter Notebooks offer an interactive environment for data
analysis and visualization.

4.4.4 Operating System


The system is tested on Windows 10 and Ubuntu Linux to ensure cross-platform compat-
ibility.

4.4.5 Frameworks and Dependencies

The YOLOv8 framework, pre-trained on the COCO dataset, is integrated using
PyTorch, which optimizes performance for object detection tasks. PyTorch is a
popular deep learning framework that provides tools for building and training neural
networks. Using PyTorch to integrate YOLOv8 allows for efficient training and
inference. The COCO dataset is a large-scale dataset commonly used for object
detection tasks. Pre-training on COCO provides a good starting point for the
helmet detection task and reduces the amount of task-specific training data required.

4.4.6 Storage Requirements


Images and logs of violations are stored in an organized directory structure, and the
facial encodings are saved in JSON format, making it scalable for future expansions. A
well-organized directory structure is essential for managing the large amounts of data
generated by the system. Storing facial encodings in JSON format allows for efficient
retrieval and updating of the database. The storage requirements will depend on factors
such as the number of cameras, the frequency of violations, and the retention period for
the data. The system should be designed to handle large amounts of data and provide
mechanisms for archiving or deleting old data.

4.5 Functional Requirements


The functional requirements specify the operations that the system must perform to
achieve its objectives effectively. These requirements are categorized as follows:

4.5.1 Helmet Detection


• Detect helmets in the video feed using the YOLOv8 object detection model.

• Classify individuals based on whether they are wearing helmets or not.

• Draw bounding boxes around detected helmets and faces for visualization.

4.5.2 Facial Recognition


• Identify workers by matching their faces with pre-stored encodings.

• Maintain a database of authorized workers with facial data encoded in JSON.

• Label detected individuals with their assigned IDs for traceability.

4.5.3 Alert Notification System


• Send email and audio alerts to supervisors when a worker is detected without a
helmet.

• Attach images of violations in the email notifications.

• Maintain logs of alerts with timestamps for auditing purposes.

4.5.4 Violation Logging


• Save images of non-compliant workers in a dedicated directory.

• Append metadata, such as worker ID and timestamp, to ensure traceability.

4.5.5 Video Processing


• Process live camera feeds or pre-recorded video files.

• Perform real-time detection and recognition without significant delays.

• Display results on-screen for immediate monitoring.

4.6 Non-Functional Requirements


4.6.1 Performance Requirements
• The system should process video frames at a minimum of 15 FPS (Frames Per
Second) for near real-time detection. This ensures that the system can keep up
with the video feed and detect violations quickly. Higher frame rates might be
desirable for some applications, but this will require more processing power. The
system should be optimized to achieve the desired frame rate while maintaining
accuracy.

• Detection accuracy must exceed 90%, with minimal false positives and negatives.
High accuracy is crucial for ensuring that the system is reliable and does not generate
too many false alarms or miss actual violations. The accuracy should be measured
on a representative dataset of images and videos. The system should be designed to
minimize both false positives (incorrectly identifying a helmet or worker) and false
negatives (failing to detect a helmet or worker).

• Response time for generating email alerts must be under 10 seconds. This ensures
that supervisors are notified of violations promptly. The response time should be
measured from the moment a violation is detected to the time the email alert is
sent. Factors that can affect response time include network latency and email server
performance.
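One simple way to verify the 15 FPS requirement is to time the processing loop. Here `process_frame` is a hypothetical stand-in for the actual detection pipeline, not a function from the project.

```python
import time

def measure_fps(process_frame, frames, min_fps=15.0):
    """Run process_frame over a batch of frames and report throughput.

    Returns (fps, meets_requirement).
    """
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    fps = len(frames) / elapsed if elapsed > 0 else float("inf")
    return fps, fps >= min_fps

# Example with a trivial stand-in workload for the detection pipeline:
fps, ok = measure_fps(lambda f: sum(f), [[1, 2, 3]] * 100)
```

The same helper can be re-run after any model or preprocessing change to confirm the requirement still holds.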

4.6.2 Scalability Requirements


• The system should handle multiple video feeds simultaneously without degradation
in performance. This is important for monitoring larger areas or multiple work
sites. The system should be able to scale its processing capacity to handle the
increased load. This might involve using multiple processing units or cloud-based
infrastructure.

• Support for adding new worker IDs and facial encodings dynamically without restart-
ing the system. This allows for easy updates to the worker database without inter-
rupting the monitoring process. The system should provide a user-friendly interface
for adding, updating, and deleting worker information. The database should be
designed to handle a large number of workers.
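A minimal sketch of such dynamic registration is shown below. The per-worker schema (a name plus an encoding list keyed by worker ID) is an assumption for illustration; the project's actual face_encodings.json layout is not shown in the report.

```python
import json
from pathlib import Path

def register_worker(worker_id, name, encoding, path):
    """Add or update one worker's encoding in the JSON store.

    No restart is needed: the matcher simply re-reads the file
    on its next lookup.
    """
    data = json.loads(path.read_text()) if path.exists() else {}
    data[worker_id] = {"name": name, "encoding": encoding}
    path.write_text(json.dumps(data, indent=2))
    return data

db_path = Path("demo_encodings.json")
db = register_worker("W123", "A. Worker", [0.11, -0.42, 0.07], db_path)
print("W123" in db)  # True
```

For a large workforce, a proper database (or at least file locking around the write) would replace the flat JSON file.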

4.6.3 Reliability, Usability, Security, and Environmental Requirements
• The system should ensure stable operation over prolonged periods without crashes or
interruptions. Reliability is crucial for ensuring that the system is always available
to monitor safety compliance. The system should be thoroughly tested to identify
and fix any potential issues that could lead to crashes or interruptions. Redundancy
mechanisms could be implemented to ensure that the system continues to operate
even if there is a hardware or software failure.

• Provide a user-friendly interface for monitoring and reviewing violations. The inter-
face should be intuitive and easy to use, even for individuals who are not technical
experts. It should provide clear and concise information about detected violations,
including images and relevant metadata. The interface should be accessible from
different devices, such as desktop computers, laptops, and mobile devices.

• Ensure encrypted storage for face encoding data to protect worker identities. This
is crucial for protecting the privacy of workers. The encryption should be strong
and the encryption keys should be securely managed. Access to the face encoding
data should be restricted to authorized personnel only.

• The system should work reliably in different lighting conditions, ensuring accuracy
during both day and night. This requires the system to be robust to variations in
illumination, including changes in ambient light, shadows, and glare. The system
might employ techniques such as image enhancement and normalization to mitigate
the effects of varying lighting conditions. Testing should be conducted under a wide
range of lighting scenarios to ensure reliable performance.

4.7 Additional Features and Enhancements


The system is designed to allow future upgrades, such as:

• Integration with IoT (Internet of Things) devices for automated alarms. This could
involve connecting the system to other sensors or devices in the workplace, such as
environmental sensors or access control systems. This integration could enable more
sophisticated and automated responses to safety violations. For example, an alarm
could be triggered automatically if a worker without a helmet enters a restricted
area.

• Expansion to detect additional safety gear, such as gloves and safety vests. This
would broaden the scope of the system to monitor compliance with a wider range
of safety regulations. This would require training the object detection model to
recognize these additional items of safety gear. The system architecture should be
flexible enough to accommodate these new detection capabilities.

• Use of cloud storage for maintaining logs and accessing data remotely. Cloud storage
offers scalability, reliability, and accessibility. Storing logs and data in the cloud

would allow for remote monitoring and analysis of safety performance. It would also
facilitate data sharing and collaboration among authorized personnel. Appropriate
security measures should be implemented to protect the data stored in the cloud.

• The defined architecture ensures scalability, real-time efficiency, and security in implementation. These are important considerations for deploying the system in real-world industrial environments. The architecture is designed to handle large amounts of data, process video feeds in real-time, and protect sensitive information. The modularity of the architecture allows for future expansion and integration with other systems.



Chapter 5
System Architecture and Low-Level Design
This chapter details the software architecture of the safety monitoring system, emphasiz-
ing its modularity, scalability, and real-time performance. It outlines both the high-level
design, providing a general overview of the system’s components and their interactions,
and the low-level design, delving into the specific algorithms and implementation details.
This chapter includes descriptions tailored to the uploaded code and the requirements
provided, offering a comprehensive understanding of the system’s inner workings. It also
highlights key design decisions and trade-offs considered during the development process.

5.1 Object Detection Algorithm


The system leverages the YOLO (You Only Look Once) model for object detection,
specifically the YOLOv8 version, known for its speed and accuracy. This is coupled with
a facial recognition component to reliably identify workers. The combined algorithm is
designed to ensure compliance with workplace safety standards by automating helmet
detection and generating real-time alerts for violations. The algorithm is optimized for
efficient processing to minimize latency and ensure timely intervention.

5.1.1 Algorithm Description


1. System Initialization:

• Load the YOLOv8 model for helmet and object detection.

• Load stored worker face encodings from the database file face_encodings.json.

• Set up directories for saving non-compliance images.

2. Capture Input:

• Access the video source (live camera or pre-recorded file).

• Process frames sequentially.

3. Object Detection Using YOLO:

• Pass the captured frame to YOLOv8 to detect objects (workers and helmets).

• Extract bounding box coordinates for identified helmets and workers.

4. Facial Recognition:

• For each detected worker, extract facial features within the bounding box.
This involves cropping the face region from the image using the bounding box
coordinates and then applying a facial recognition model to generate a unique
encoding representing the face. Techniques like landmark detection and face
alignment might be used as pre-processing steps before feature extraction.

• Match the extracted features with the stored encodings to identify the worker.
The extracted facial feature vector is compared to the stored encodings in the
face.encodings.json file. A similarity score is calculated, and if it exceeds a
certain threshold, the worker is identified. Different distance metrics, such as
Euclidean distance or cosine similarity, can be used for the comparison.
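The Euclidean-distance matching described above can be sketched as follows. The 0.6 threshold is the value commonly used with 128-dimensional dlib face encodings; the project's actual threshold, and the 3-dimensional toy encodings below, are illustrative assumptions.

```python
import math

def match_face(encoding, stored_encodings, threshold=0.6):
    """Return the worker ID whose stored encoding is closest, or None.

    A candidate only matches if its Euclidean distance is below the
    threshold; otherwise the face is treated as unknown.
    """
    best_id, best_dist = None, threshold
    for worker_id, ref in stored_encodings.items():
        dist = math.dist(encoding, ref)
        if dist < best_dist:
            best_id, best_dist = worker_id, dist
    return best_id

stored = {"W1": [0.10, 0.20, 0.30], "W2": [0.90, 0.80, 0.70]}
print(match_face([0.12, 0.19, 0.31], stored))  # W1
print(match_face([5.0, 5.0, 5.0], stored))     # None (unknown face)
```

Returning None for distances above the threshold is what lets the system log unidentified workers as 'Unknown', as the test cases in Chapter 6 require.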

5. Helmet Compliance Check:

• Verify if each identified worker has an overlapping helmet detection. This step
checks if the bounding box of a detected helmet overlaps sufficiently with the
bounding box of a detected worker. The Intersection over Union (IoU) is a
common metric used to quantify the overlap. A minimum IoU threshold is
typically set to determine if the overlap is significant enough to consider the
worker compliant.

• If no helmet is detected, classify the worker as non-compliant. If the overlap check fails, or if no helmet is detected for a recognized worker, the worker is flagged as non-compliant.
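The IoU-based overlap test described above can be sketched as follows. The 0.2 threshold and the box coordinates are illustrative values, not taken from the project code; since a helmet box is much smaller than a full-body worker box, the usable IoU threshold is necessarily low.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def is_compliant(worker_box, helmet_boxes, min_iou=0.2):
    """True if any detected helmet overlaps the worker box enough."""
    return any(iou(worker_box, h) >= min_iou for h in helmet_boxes)

worker = (0, 0, 100, 200)                        # full-body worker box
print(is_compliant(worker, [(0, 0, 100, 60)]))   # True: helmet over head region
print(is_compliant(worker, []))                  # False: no helmet detected
```

An alternative to worker-vs-helmet IoU is checking what fraction of the helmet box lies inside the upper part of the worker box, which is less sensitive to the worker's height in the frame.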

6. Generate Alert:

• Save an image of the violation with worker ID and timestamp. A snapshot of the frame showing the non-compliant worker is saved, along with the worker’s ID (if identified) and the current timestamp. This image serves as evidence of the violation and could be stored in a designated directory for review and documentation.

• Trigger an email and audio alert to the supervisor. An alert, containing infor-
mation about the violation (worker ID, timestamp, and potentially the image),
is sent to the supervisor via email and an audio alert is triggered locally. The
email could be sent using the SMTP protocol, and the audio alert could be
played using a system sound or a dedicated audio device.
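Saving the snapshot under a name that carries the worker ID and timestamp keeps each piece of evidence self-describing. The naming scheme below is illustrative; the report only states that the image is saved with the worker's ID and a timestamp.

```python
from datetime import datetime
from pathlib import Path

def violation_image_path(worker_id, directory="violations"):
    """Build a timestamped file path for a violation snapshot."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return Path(directory) / f"{worker_id}_{stamp}.jpg"

path = violation_image_path("W123")
print(path.name)  # e.g. W123_20250115_104200.jpg
```

Embedding the metadata in the filename also lets the log directory be sorted and searched by worker or by time without opening the images.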

5.1.2 YOLOv8 Detection Steps


1. Load YOLOv8 Pre-trained Model

• Load a pre-trained YOLOv8 model for detecting helmets and workers. Using a pre-trained model significantly reduces the amount of training data required for the specific task. The model is typically pre-trained on a large dataset like COCO (Common Objects in Context).

2. Pre-process the Input Image:

• Resize the input frame to the model’s expected input size (e.g., 640x640).
YOLOv8, like most deep learning models, requires input images to be of a
specific size. Resizing ensures compatibility.

• Normalize pixel values to be between 0 and 1. Normalization is a common technique used to improve the performance and stability of deep learning models. It involves scaling the pixel values to a specific range.
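These two steps can be sketched with NumPy alone. A real pipeline would typically call cv2.resize with interpolation; the nearest-neighbour index selection below is a stand-in used so the example stays self-contained.

```python
import numpy as np

def preprocess(frame, size=(640, 640)):
    """Nearest-neighbour resize to `size`, then scale pixels to [0, 1]."""
    h, w = frame.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    resized = frame[rows][:, cols]
    return resized.astype(np.float32) / 255.0

frame = np.random.randint(0, 256, (480, 720, 3), dtype=np.uint8)
out = preprocess(frame)
print(out.shape)  # (640, 640, 3)
```

Note that YOLOv8's own loader also letterboxes the image to preserve aspect ratio before resizing; that refinement is omitted here.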

3. Forward Pass (Inference):

• Pass the pre-processed frame through the YOLOv8 model to predict bound-
ing boxes for workers and helmets. The YOLOv8 model processes the input
image and outputs bounding boxes around detected objects, along with their
associated class probabilities.

4. Post-processing:

• Apply Non-Maximum Suppression (NMS) to remove redundant bounding boxes
based on confidence scores and Intersection over Union (IoU). NMS is a tech-
nique used to filter out overlapping bounding boxes that predict the same
object. It selects the bounding box with the highest confidence score and
suppresses other overlapping boxes.
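A minimal greedy NMS, as described above, can be written as follows. In practice the Ultralytics library performs this step internally; the sketch and its toy boxes are for illustration only.

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy Non-Maximum Suppression; returns indices of kept boxes."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter else 0.0

    # Visit boxes from highest to lowest confidence, discarding any box
    # that overlaps an already-kept box too much.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed
```

NMS is applied per class, so a helmet box never suppresses a worker box even when the two overlap heavily.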

5. Classify Detected Objects:

• Classify the detected objects into “worker” and “helmet” classes based on the
predicted class labels. The YOLOv8 model assigns class labels to each detected
object. These labels are used to distinguish between workers and helmets. A
confidence score is also associated with each classification.

5.1.3 Pseudocode for the Safety Monitoring System


# Step 1: System Initialization
YOLO_model = load_model('yolov8')
face_encodings = load_face_encodings('face_encodings.json')
violation_images_directory = setup_violation_directory()

# Step 2: Capture Input
video_source = open_video_source('camera_or_video_file')

# Step 3: Object Detection Using YOLO
while video_source.is_open():
    frame = capture_frame(video_source)
    detected_objects = YOLO_model.detect_objects(frame)
    for worker in detected_objects['workers']:
        worker_face = extract_face_features(worker)
        worker_id = match_face(worker_face, face_encodings)

        # Step 4: Helmet Compliance Check
        if not check_helmet_compliance(worker, detected_objects['helmets']):
            # Step 5: Generate Alert
            violation_image = capture_violation_image(worker)
            save_image(violation_images_directory, violation_image)
            timestamp = get_current_timestamp()
            generate_alert(worker_id)
            send_email_alert(worker_id, timestamp, violation_image)

5.2 Context Flow Diagram (CFD)


Figure 5.1 describes the Context Flow Diagram (CFD) for the safety monitoring system.
It visualizes the interaction between various modules in the system, showing the flow of
data and control. The CFD provides a high-level overview of the system’s architecture.

5.3 Data Flow Diagram (DFD)


Figure 5.2 offers a detailed visualization of the flow of data between different components
in the safety monitoring system.
The key components and data flow described in the DFD are as follows:

• Video Source: Provides input video frames for processing by the system.

• YOLO Detection: Detects objects such as helmets and workers, generating bounding box data.

• Detected Data: Stores temporary information about detected objects in the video frames.

• Face Recognition: Matches detected faces with stored encodings in the worker database for identification.

• Worker Database: Stores facial encodings and worker identification details for matching.

• Compliance Verification: Checks whether detected workers are compliant with safety requirements by wearing helmets.


Figure 5.1: Context Flow Diagram (CFD) of the safety monitoring system.

• Alert System: Generates real-time alerts via email and logs violations for further
review.

• Violation Database: Maintains records of violations, including timestamps and captured images.

The DFD illustrates the internal data flow and processing steps necessary for effective
monitoring, compliance verification, and alert generation within the system.

5.4 Comparison between CFD and DFD


• CFD: Focuses on high-level interactions between components, providing a bird’s-eye
view of system communication and module relationships.

• DFD: Details the flow of data between components, focusing on the internal workings and data transformations within the system.

• Usage: The CFD explains system-level interactions, while the DFD is used to understand internal logic and data processing workflows.

Figure 5.2: Data Flow Diagram (DFD) of the Safety Monitoring System

5.5 Conclusion
The proposed safety monitoring system effectively utilizes AI and computer vision tech-
nologies to automate helmet compliance checks in industrial environments. With YOLOv8
for object detection and face recognition for worker identification, the system ensures high
accuracy and scalability. Real-time alerts and image logging enhance workplace safety by
enabling prompt actions. Future improvements may include multi-camera support and
expanded safety gear detection to increase coverage and reliability. The software architecture ensures optimized performance and reliable real-time monitoring.



Chapter 6
Results
This chapter presents the results obtained during the implementation and testing of the
safety monitoring system. It includes details of the test setup environment, the test
procedure, test cases, and actual versus expected results. Fig 6.1 and Fig 6.2 below show
result images from the project. Fig 6.1 shows how a red bounding box is drawn when a
worker is not wearing a safety helmet; no box appears when there is no violation.

Figure 6.1: A red bounding box is drawn around the face of a worker not wearing a helmet.

After the detection, the model saves the worker’s ID. Each worker has a unique ID
stored in the database. When an ID is recorded for a violation, a mail is sent to the
worker and an announcement is made to alert them. Fig 6.2 shows the alerts and
messages that are sent. Whenever a worker is not wearing a helmet, a message is shown
in the terminal mentioning the ID of the violator, along with whether the mail was sent
to the authority.


Figure 6.2: Terminal output showing the violator’s ID and whether the mail was sent.

6.1 Test Setup Environment


The testing environment was designed to simulate real-world conditions to ensure the
system’s efficiency in industrial safety monitoring scenarios.

6.1.1 Input Components


• Video streams from cameras in industrial setups.

• Frame resolutions tested at 720p and 1080p.

6.1.2 Output Components


• Logs of detected violations.

• Real-time alerts with saved images of violations.

6.1.3 Interfaces
• User interface for viewing live detections.

• Alert system for notifying supervisors.


6.1.4 Data Storage


• Encoded facial data stored in face_encodings.json.

• Violation logs saved locally with timestamps.

6.1.5 Evaluation Metrics


• Detection accuracy.

• Processing speed (frames per second).

• Robustness under poor lighting and occlusions.

6.2 Test Procedures and Test Cases


The testing process was conducted in phases, covering various scenarios to ensure system robustness. The results include expected vs. actual outcomes and traceability to requirements.

6.2.1 Testing Procedure


Preconditions:

• YOLO model loaded successfully.

• Facial encodings accessible from the database.

• Video source properly configured.

Execution Steps:

1. Process video frames sequentially.

2. Detect objects (helmets, workers) using the YOLO model.

3. Match detected worker faces with the database for identification.

4. Verify compliance by checking helmet presence with bounding box overlap.

Postconditions:

• Violations logged with details (worker ID, timestamp).

• Alerts generated for non-compliance.

• Stored images accessible for review.


6.3 Snapshots and Graphs


The results are summarized in Figure 6.3 to highlight key trends.
Graphs showing accuracy and processing speed across test cases validate the system’s
performance.

Figure 6.3: Graph illustrating accuracy and processing speed trends across test cases.

6.4 Analysis
This section analyzes results based on test cases, providing detailed explanations of tables
and graphs. The Fig 6.4 shows the trends across multiple test cases.

6.4.1 Accuracy Analysis


Normal Conditions:

• Accuracy: 100% - All workers wearing helmets were identified without errors.

• Processing Speed: 25 fps - Ensures real-time processing.

• Remarks: Meets functional requirements perfectly.

Poor Lighting:

• Accuracy: 90% - Slight drop due to reduced image clarity.

• Processing Speed: 22 fps - Handles challenging conditions effectively.

• Remarks: Robust performance under difficult conditions.

Occlusion Cases:

• Accuracy: 85% - Partial visibility affected detection accuracy.

• Processing Speed: 20 fps - Computational complexity slightly impacted speed.

• Remarks: Acceptable performance aligning with robustness requirements.

Figure 6.4: Accuracy and processing speed trends across multiple test cases.

6.4.2 Traceability to Requirements


• Helmet Detection: Accurately detects compliance as specified in functional require-
ments.

• Violation Alerts: Generates precise alerts with minimal false positives.

• Robustness: Handles low-light and occlusion scenarios effectively.


6.4.3 Graph Analysis


Key trends include:

• Accuracy decreases slightly as conditions become more challenging but stays above 85%.

• Processing speed remains within real-time limits, ensuring usability.

6.4.4 Test Cases
Table 6.1 illustrates the various test cases the model was tested on, and the results are
analysed. Given below are the summary and test cases on the basis of which the project
is evaluated.

Table 6.1: Helmet Detection Test Cases

Test Scenario | Precondition | Expected Result | Actual Result | Post Condition
Worker wearing helmet | System initialized | Worker identified as ‘Compliant’ | Worker correctly identified | Compliance log updated
Worker not wearing helmet | System initialized | Worker flagged as ‘Non-compliant’ | Worker flagged correctly | Alert generated (email/audio)
Facial recognition with helmet | Worker registered in database | Worker identified, compliance recorded | Face detected and matched | No alert generated
Facial recognition without helmet | Worker registered in database | Worker identified, alert sent | Face matched, alert sent | Alert logged with ID & timestamp
Low-light detection | Dim lighting conditions | Helmet detection works with some difficulty | Reduced accuracy | Detection log updated
Worker not in database | Unknown worker in frame | Worker identified as ‘Unknown’ | System logged as ‘Unknown’ | Alert sent without ID
Alert notification system | Worker without helmet detected | Alert sent to supervisor, message logged | Email sent successfully | Alert timestamped
Multiple workers in same space | Two or more workers in frame | All workers detected, compliance checked | All detected, minor overlap issues | Logs updated with multiple IDs



Chapter 7
Conclusion
The project titled “Industrial Worker Safety Monitoring System Using YOLO and Face
Recognition” addresses the need for automated safety monitoring in industrial environ-
ments. It was successfully implemented and tested under various conditions to ensure its
effectiveness in detecting helmet compliance and identifying workers in real-time.
The primary objective was to develop a robust system that automates worker safety
monitoring, replacing error-prone manual methods. The safety helmet detection system,
as outlined in the Context Flow Diagram (CFD), presents a promising solution for en-
hancing workplace safety in industrial settings. By leveraging advanced computer vision
techniques like YOLO object detection and facial recognition, the system effectively mon-
itors worker compliance with safety regulations.
The real-time alerts and logging capabilities enable prompt intervention and accountabil-
ity, thereby reducing the risk of accidents and injuries. Moreover, the system’s ability to
generate valuable safety trend data can inform targeted interventions and drive continuous
improvements in workplace safety practices.
Overall, the implementation of this safety helmet detection system demonstrates a proac-
tive approach to workplace safety management. By combining technological innovation
with a focus on data-driven insights, the system significantly contributes to creating a
safer and more secure working environment for all employees.

7.1 Scope for future work


Although the project successfully met its objectives, there are areas for improvement and
extensions that could enhance its capabilities:

• Improving Accuracy in Extreme Conditions: Implement advanced image enhancement techniques to handle low lighting and occlusions, and explore alternative classification heads.

• Integration with IoT: Connect the system to IoT-enabled devices, such as smart helmets or wearable sensors, for extended monitoring capabilities.

• Cloud-Based Deployment: Deploy the system on cloud platforms for centralized monitoring across multiple industrial sites.

• Additional Safety Gear Detection: Extend the detection system to monitor other
safety equipment like gloves, goggles, and harnesses.

• Advanced Alert Mechanisms: Integrate with mobile applications or SMS services for faster notifications to supervisors.

• Machine Learning Insights: Use collected data to analyze trends and generate pre-
dictive insights for proactive safety measures.

• Scalability Enhancements: Optimize the system to handle larger industrial setups with multiple camera inputs and complex environments.



Bibliography
[1] H. Liu, W. Zhang, and L. Wang. Real-time helmet detection using yolov8 in industrial
environments. Journal of AI in Industry, 25(4):250–260, 2024.

[2] J. Kim, S. Park, and Y. Lee. Deep learning-based helmet detection using yolo and
resnet. International Journal of Computer Vision, 48(7):1234–1245, 2022.

[3] A. Singh, R. Gupta, and P. Sharma. Hybrid model for helmet violation detec-
tion with IoT and AI integration. IEEE Transactions on Industrial Informatics,
17(5):1240–1248, 2021.

[4] T. Zhang, F. Lu, and L. Zhao. A comparative study of yolov5 and faster r-cnn for
helmet detection. Journal of Machine Learning in Industry, 39(2):187–194, 2020.

[5] X. Gao, M. Wu, and Y. Li. Facial recognition system for worker identification using
RFID integration. Journal of AI and Security, 28(1):45–59, 2023.

[6] J. Chen and H. Zhao. Privacy-preserving facial recognition for worker authentication
in industrial settings. Journal of Privacy and Security in AI, 6(8):315–329, 2022.

[7] T. Li and Z. Sun. Bias mitigation in facial recognition systems for industrial envi-
ronments. Journal of AI Ethics, 3(2):78–90, 2021.

[8] J. Wang and Q. Liu. Impact of illumination and pose variations on facial recognition
systems in dynamic industrial settings. Computer Vision and Pattern Recognition,
58(4):240–255, 2020.

[9] R. Patel, S. Kumar, and F. Ahmed. AI-powered safety monitoring system integrating
IoT in industrial environments. IEEE Transactions on Automation, 56(2):115–127,
2022.

[10] C. Alvarez and M. Perez. Predictive analytics for safety violation prevention in in-
dustrial workplaces. Safety Science, 123:108–120, 2021.

[11] B. LL, J. Zhang, and P. Yang. Ensemble models for safety compliance detection
systems in smart factories. Journal of AI Applications in Manufacturing, 41(7):1680–1695,
2023.

[12] W. Gao and J. Xu. Hyperparameter optimization in YOLO models for enhanced
safety compliance detection in noisy environments. Journal of Computer Vision,
35(3):122–134, 2023.

[13] Y. Huang and L. Zhang. Scalability and robustness challenges of deploying AI mod-
els in large-scale industrial environments. Journal of AI and Industrial Automation,
17(9):215–230, 2022.

[14] T. Smith and P. Lee. Ethical considerations in the use of AI for workplace safety:
Privacy and worker consent. AI and Society, 36(6):215–228, 2021.

[15] D. Johnson and K. Williams. Integrating augmented reality with AI for immersive
safety training in industrial environments. Journal of Workplace Safety Technologies,
8(1):50–60, 2023.



Appendices

Appendix A
Project Management and Budget Estimation Details
A.1 Project Schedule
Phase 1: Project Initialization (Duration: 2 weeks)
Activities: Setting up tools, software, and hardware required for the project.
Phase 2: Data Collection and Preprocessing (Duration: 3 weeks)
Activities: Gathering relevant datasets, cleaning, and preparing data for model training.
Phase 3: Model Development and Testing (Duration: 4 weeks)
Activities: Designing and training AI models, followed by validation to ensure accuracy
and reliability.
Phase 4: System Integration (Duration: 2 weeks)
Activities: Integrating the trained model into the hardware system for real-world use.
Phase 5: Final Testing and Presentation (Duration: 2 weeks)

Figure A.1: Project schedule of the safety helmet detection project.


A.2 Budget Details

Figure A.2: Cost estimation of the safety helmet detection project.

Figure A.2 illustrates the cost estimation of the project.


The cost estimation for this safety helmet detection project ranges from ₹4,50,000 to ₹9,00,000. Major expenses include hardware (CPUs, GPUs, cameras) at ₹1,50,000 to ₹1,80,000, personnel costs at ₹2,40,000 to ₹4,90,000, and data preparation (dataset creation, labelling) at ₹40,000 to ₹85,000. Software licenses, cloud storage, testing and deployment, and miscellaneous costs also contribute to the overall budget.

A.3 Dataset Details / Input Details


A.3.1 Dataset Details

Dataset Source

The dataset used in this project is a combination of publicly available datasets and custom-
collected images to simulate industrial safety environments.

Data Composition

• Total Images: 10,000+

• Categories:

– Workers wearing helmets.

– Workers not wearing helmets.

– Workers in partially occluded scenarios.

– Images under different lighting conditions (bright light, low light).

• Labelling: Each image was annotated with bounding boxes for helmets and workers.
Labels included: Helmet, No Helmet, and Worker.

Augmentation Techniques

To improve model robustness, the following augmentation techniques were applied:

• Rotation: Simulating camera angle variations.

• Brightness Adjustment: Testing under varying lighting conditions.

• Occlusion Simulation: Adding artificial occlusions.

• Scaling and Flipping: Enhancing diversity in positions and perspectives.

Dataset Splitting

• Training Set: 70% for model training.

• Validation Set: 20% for hyperparameter tuning.

• Test Set: 10% for final evaluation.
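The 70/20/10 split above can be reproduced with a seeded shuffle; the sketch below splits a list of image identifiers (the seed value is an arbitrary choice for reproducibility):

```python
import random

def split_dataset(items, train=0.7, val=0.2, seed=42):
    """Shuffle and split into train/validation/test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(1000))
print(len(train_set), len(val_set), len(test_set))  # 700 200 100
```

Fixing the seed keeps the split reproducible across training runs, so validation and test images never leak into the training set between experiments.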

Face Recognition Dataset

• Face Images: 2,000+ face images of workers.

• Format: JPEG/PNG.

• Encoding: Pre-processed with the face recognition library.


A.3.2 Input Details

Input Components

• Video Feeds:

– Live video streams from industrial cameras.

– Pre-recorded videos for testing.

• Image Resolution:

– 1080p for high-quality processing.

– 720p for robustness testing under limited resources.

Input Formats

• Videos: .mp4, .avi.

• Images: .jpg, .png.

Preprocessing

• Scaling: Resizing images for optimized processing.

• Noise Reduction: Filters applied to reduce noise under poor lighting.

• Frame Extraction: Splitting video inputs into frames for analysis.

Metadata

Worker details stored in JSON format (face_encodings.json) include:

• Worker ID.

• Name.

• Encoded facial features.



Appendix B
Configuration Details
B.1 GitHub Repository Link
The project’s code and related files are hosted at: https://github.com/sohanreddyk/SafetyHelmetDetec

B.2 Repository Structure


• Root Directory: Contains main scripts (main.py, encode_faces.py).

• face_recognition/: Stores encodings and scripts for recognition.

• models/: Contains YOLOv8 model weights (yolov8s_custom.pt).

• data/: Sample test videos and images for validation.

• logs/: Logs of violations.

• output/: Images of violations captured during testing.

B.3 Configuration Steps


B.3.1 Installation
• Clone repository:

git clone https://github.com/sohanreddyk/Safteyhelmetdetection

• Navigate to directory:

cd Safteyhelmetdetection

• Install dependencies:

pip install -r requirements.txt


B.3.2 Execution
• Encode faces:

python encode_faces.py

• Start system:

python main.py



Appendix C
Specifications and Standards
C.1 Specifications
C.1.1 Hardware Requirements
• Processor: Intel Core i7, 10th Gen or higher.

• RAM: 16 GB.

• GPU: NVIDIA GTX 1660 or equivalent.

• Storage: 512 GB SSD.

C.1.2 Software Requirements


• OS: Windows 10/Linux.

• Language: Python 3.8+.

• Frameworks:

– YOLOv8 (Ultralytics).

– Face Recognition (dlib).

– OpenCV for image processing.

– smtplib for alerts.

C.2 Standards
• ISO 45001: Occupational health and safety standards.

• IEEE 1012-2016: Verification and validation of software systems.

• COCO Standards: Bounding box annotations.

• GDPR Compliance: Secure storage and data privacy for face encodings.


C.3 Performance Benchmarks


• Processing Speed: 25 fps in real-time operation.

• Accuracy:

– 100% in ideal conditions.

– 90% in low-light environments.

– 85% under occlusions.



Appendix D
Self Assessment of the Project
Level Description:
1: Poor 2: Average 3: Good 4: Very Good 5: Excellent

PO1: Engineering Knowledge.
Knowledge of mathematics, engineering fundamentals and an engineering specialization to the solution of complex engineering problems.
Contribution: Effectively implemented the mathematical concepts (vectors, probabilities, etc.) underlying YOLOv8 and applied engineering fundamentals (YOLOv8, OpenCV, Python libraries).
Level: 4

PO2: Problem Analysis.
Identify, formulate, review research literature, and analyze complex engineering problems, reaching substantiated conclusions with consideration for sustainable development.
Contribution: Effectively identified and analyzed the safety-compliance problem through a comprehensive literature survey and applied first-principles approaches.
Level: 5


PO3: Design/Development of Solutions.
Design creative solutions for complex engineering problems and design/develop systems, components or processes to meet identified needs with consideration for public health and safety, whole-life cost, net-zero carbon, culture, society and the environment as required.
Contribution: Designed a robust solution combining YOLOv8 and facial recognition for real-time safety monitoring, considering scalability and environmental factors.
Level: 5

PO4: Conduct Investigations of Complex Problems.
Conduct investigations of complex engineering problems using research-based knowledge, including design of experiments, modelling, and analysis and interpretation of data, to provide valid conclusions.
Contribution: Utilized research methods for testing and validation under varied conditions, leading to reliable conclusions about system performance.
Level: 3


PO5: Modern Tool Usage.
Create, select and apply appropriate techniques, resources and modern engineering and IT tools, including prediction and modelling, recognizing their limitations, to solve complex engineering problems.
Contribution: Successfully implemented modern tools such as YOLOv8, Python libraries, and OpenCV for accurate object detection and facial recognition.
Level: 5

PO6: The Engineer and the World.
Analyze and evaluate societal and environmental aspects while solving complex engineering problems for their impact on sustainability, with reference to the economy, health, safety, legal framework, culture and environment.
Contribution: Addressed societal issues by enhancing workplace safety, demonstrating the ethical responsibility of engineering practice.
Level: 5

PO7: Ethics.
Apply ethical principles and commit to professional ethics, human values, diversity and inclusion; adhere to national and international laws.
Contribution: Followed ethical practices, including data privacy and compliance with safety norms, ensuring worker confidentiality.
Level: 4


PO8: Individual and Team Work.
Function effectively as an individual, and as a member or leader in diverse and multi-disciplinary teams.
Contribution: Collaborated effectively within a team, dividing responsibilities and integrating individual contributions into the project.
Level: 5

PO9: Communication.
Communicate effectively and inclusively within the engineering community and society at large, such as being able to comprehend and write effective reports and design documentation, and make effective presentations considering cultural, language, and learning differences.
Contribution: Presented the project through well-structured reports, graphical analysis, and detailed documentation.
Level: 4


PO10: Project Management and Finance.
Apply knowledge and understanding of engineering management principles and economic decision-making, and apply these to one's own work as a member and leader of a team, and to manage projects in multidisciplinary environments.
Contribution: Managed project timelines, budget considerations, and resource allocation effectively during development and testing.
Level: 4

PO11: Life-long Learning.
Recognize the need for, and have the preparation and ability for, (i) independent and life-long learning, (ii) adaptability to new and emerging technologies, and (iii) critical thinking in the broadest context of technological change.
Contribution: Applied advanced concepts in signal processing, AI, and embedded systems to deliver real-time safety monitoring solutions.
Level: 4


PSO1: Computer-Based Systems Development.
Ability to apply the basic knowledge of database systems, computing, operating systems, digital circuits, microcontrollers, and computer organization and architecture in the design of computer-based systems.
Contribution: Applied advanced concepts in signal processing and AI, together with database knowledge to store workers' data, and embedded systems for real-time safety monitoring solutions.
Level: 4

PSO2: Software Development.
Ability to specify, design and develop projects, application software and system software using the knowledge of data structures, analysis and design of algorithms, programming languages, software engineering practices and open-source tools.
Contribution: Effectively developed a safety helmet detection application using YOLOv8, requiring software design and programming in Python.
Level: 5


PSO3: Computer Communications and Internet Applications.
Ability to design and develop network protocols and internet applications by incorporating the knowledge of computer networks and communication protocols.
Contribution: The system incorporates network communication through email notifications and announcements to alert individuals about helmet violations.
Level: 3

Sustainable Development Goals (SDG) addressed
Level Description:
1: Poor 2: Good 3: Excellent

The project contributes to the following goals:

• Clean Water and Sanitation: Level 1
• Decent Work and Economic Growth: Level 3
• Industry, Innovation and Infrastructure: Level 1

The remaining goals (No Poverty; Zero Hunger; Good Health and Well-being; Quality Education; Gender Equality; Affordable and Clean Energy; Reduced Inequalities; Sustainable Cities and Communities; Responsible Consumption and Production; Climate Action; Life Below Water; Life on Land; Peace, Justice and Strong Institutions; Partnerships for the Goals) are not addressed by the project.
