
DESIGN OF HARDWARE CHATGPT WITH VOICE ASSISTANCE AND AI–IOT FOR VISUALLY IMPAIRED
ABSTRACT

This project proposes the development of a hardware-based intelligent voice
assistance system utilizing ChatGPT, Artificial Intelligence (AI), and Internet of
Things (IoT) technologies to support visually impaired individuals in their daily
activities. The system combines advanced natural language processing, real-time
voice recognition, and IoT device integration to provide seamless interaction
between the user and their environment. Through simple voice commands, users
can access information, receive situational guidance, and control smart home
devices effortlessly. The device is designed to detect obstacles, read out
notifications, offer location-based assistance, and connect to emergency services
when needed, thereby improving safety, independence, and quality of life.
The hardware incorporates a compact microcontroller platform, audio input/output
modules, AI-based processing units, and wireless communication protocols to
ensure robust performance. By embedding conversational AI into portable
hardware, the system delivers personalized assistance that adapts to the user's needs
and surroundings. This project emphasizes affordability, energy efficiency,
portability, and user-friendly design to ensure it is accessible to a wider population.
Overall, the solution aims to bridge the gap between technology and accessibility,
creating a supportive ecosystem for the visually impaired through the fusion of AI,
IoT, and real-world interaction.

Keywords: Accessibility, Artificial Intelligence, Assistive Technology, ChatGPT,
Hardware Design, Internet of Things, Natural Language Processing, Smart Devices,
Voice Assistance, Visually Impaired.

TABLE OF CONTENTS

CHAPTER NO TITLE PAGE NO

ABSTRACT iv
LIST OF TABLES ix
LIST OF FIGURES x
ABBREVIATION xi

1. INTRODUCTION 1

2. EMBEDDED SYSTEMS 2
2.1 Characteristics 3
2.2 User interface 3
2.3 Processors in embedded systems 3
2.4 Peripherals 4
2.5 Embedded software architectures 5
2.5.1.Simple control loop 5
2.5.2.Interrupt-controlled system 5
2.5.3.Cooperative multitasking 5
2.5.4 Preemptive multitasking or multi-threading 6
2.6 Applications 6
3. LITERATURE REVIEW 8
3.1 Introduction 8
3.2 Existing Work 8
3.3 Proposed Work 13

4. HARDWARE DESCRIPTION 14
4.1 Block Diagram of Online Doctor Consultant 14
4.1.1 Hardware Requirements 14
4.1.2 Software Requirements 16
4.1.3 Operation 16
4.2 PIC Microcontroller 18
4.2.1 Introduction 18
4.2.2 Basic Features of PIC Microcontroller 19
4.2.3 Pin Descriptions 20
4.2.4 Applications of PIC Microcontroller 25
4.3 Peripheral Features 31
4.3.1 Advantages 32
4.3.2 Disadvantages 32

4.4 HEARTBEAT SENSOR 33
4.4.1 Introduction 33
4.4.2 Wire Connections 34
4.4.3 Product Features 34
4.4.4 Timing Diagram 35
4.4.5 Theory of Operation 35
4.4.6 Circuit Diagram of Heartbeat Sensor 37
4.4.7 Advantages of Heartbeat Sensor 41
4.4.8 Disadvantages of Heartbeat Sensor 41
4.4.9 Applications 42

4.5 OXYGEN SENSOR 33
4.5.1 Introduction 33
4.5.2 Wire Connections 34
4.5.3 Product Features 34
4.5.4 Timing Diagram 35
4.5.5 Theory of Operation 35
4.5.6 Circuit Diagram of Oxygen Sensor 37
4.5.7 Advantages of Oxygen Sensor 41
4.5.8 Disadvantages of Oxygen Sensor 41
4.5.9 Applications 42

4.6 TEMPERATURE SENSOR 33
4.6.1 Introduction 33
4.6.2 Wire Connections 34
4.6.3 Product Features 34
4.6.4 Timing Diagram 35
4.6.5 Theory of Operation 35
4.6.6 Circuit Diagram of Temperature Sensor 37
4.6.7 Advantages of Temperature Sensor 41
4.6.8 Disadvantages of Temperature Sensor 41
4.6.9 Applications 42

4.7 LCD DISPLAY 46
4.7.1 Introduction 46
4.7.2 Parts of Vibration Motor 47
4.7.3 Vibration Motor Design and Working 49
4.7.4 Vibrator Motor Applications 51
4.7.5 Battery 52

5 PROJECT OVERVIEW 53

6 PROGRAM 54

7 RESULT AND DISCUSSION 59

8 APPLICATIONS 61

9 FUTURE SCOPE 62

10 CONCLUSION 63

11 REFERENCES 64

LIST OF TABLES

CHAPTER NO TITLE PAGE NO

4 4.2.3 PIC Microcontroller Specifications 24


4 4.5.3 Electrical Parameters 34
4 4.6.3 Pin Description of Sensors 44

LIST OF FIGURES

CHAPTER NO TITLE PAGE NO

2. 2.1 Embedded Systems 2


4. 4.1 Block Diagram of Online Doctor Consultant 14
4.2.1 PIC Microcontroller 18
4.2.1 Pin Configuration 18
4.2.3 Pin Description of PIC board 20
4.5.1 Heartbeat Sensor 33
4.5.4 Temperature Sensor 35
4.5.5 Oxygen Sensor 36
4.5.6 Circuit diagram of Heartbeat Sensor 38
4.5.6 Heartbeat Sensor working 39
4.5.6 Circuit diagram of Temperature Sensor 39
4.5.6 Temperature Sensor Working
4.6.3 Circuit Diagram of Oxygen Sensor 44
4.7.1 Oxygen Sensor Working 46
4.7.3 LCD Display 49
4.7.3 GSM 50

5. 5.1 Online Doctor Consultation for Covid-19 Isolated People Using IOT 52
7. 7.1 Right Alert 59
7.2 Left Alert 59
7.3 Obstacle Alert 60
ABBREVIATION

IO - Input Output
US sensor - Ultrasonic sensor
SRAM - Static Random Access Memory
EEPROM - Electrically Erasable Programmable Read Only Memory
FTDI - Future Technology Devices International
RX - Receiver
TX - Transmitter
USB - Universal Serial Bus
PWM - Pulse Width Modulation
SPI - Serial Peripheral Interface
SS - Slave Select
MOSI - Master Out Slave In
MISO - Master In Slave Out
SCK - Serial Clock
SDA - Serial Data Line
SCL - Serial Clock Line
PWI - Precision Winding
I2C - Inter Integrated Circuit
AREF - Analog Reference
OS - Operating System
RISC - Reduced Instruction Set Computer
AVR - Advanced Virtual RISC
DTR - Data Terminal Ready

ADC - Analog to Digital Converter
DAC - Digital to Analog Converter
UART - Universal Asynchronous Receiver/Transmitter
USART - Universal Synchronous/Asynchronous Receiver/Transmitter
MIPS - Million Instructions Per Second
PDIP - Plastic Dual In-line Package
TQFP - Thin Quad Flat Pack
RTC - Real-Time Clock
BLDC - Brushless DC
SNR - Signal-to-Noise Ratio
BPS - Bits Per Second
MCU - Microcontroller Unit

1. INTRODUCTION

In today’s world, technology plays an essential role in making life easier for people
across different sections of society. However, despite all advancements, there
remains a significant digital divide for people with disabilities, particularly those
with visual impairments. For the visually impaired community, accessing digital
content, interacting with smart devices, navigating new environments, and
performing daily tasks independently remain considerable challenges. Traditional
aids like walking canes, Braille literacy systems, guide dogs, and audio books have
helped to some extent, but they come with limitations in adaptability, affordability,
and independence.
The advent of Artificial Intelligence (AI), conversational agents, and the Internet of
Things (IoT) has opened up immense possibilities to bridge this accessibility gap. AI-
powered models, especially language models like ChatGPT, demonstrate the ability
to understand and generate human-like conversations, providing opportunities for
natural and intuitive communication interfaces. When combined with voice
assistance and connected IoT devices, such technologies can create a supportive
ecosystem where visually impaired individuals can receive real-time information,
navigate physical spaces, and interact seamlessly with their environment.

The motivation behind this project lies in creating a cost-effective, intelligent, and
portable hardware system that leverages ChatGPT, voice recognition, IoT, and smart
sensing to offer real-time guidance, interaction, and support. A device capable of
understanding and responding naturally can empower visually impaired individuals
with unprecedented autonomy, enhancing their overall quality of life, fostering
inclusion, and enabling them to participate actively in the digital world.
While various assistive technologies exist, most current systems are either too
expensive, too bulky, require complex interactions, or offer only limited
functionality. Many voice assistants available today, such as Siri, Alexa, or Google
Assistant, are highly dependent on cloud services and cannot function effectively
offline or in low-connectivity environments. Furthermore, they are often designed for
general users, lacking the specific adaptations needed for the visually impaired.
Visually impaired individuals not only need voice-based interaction but also require
real-time awareness of their surroundings, reliable navigation aids, and control over
their IoT-enabled environments. Existing solutions often fail to combine all these
elements into a single, integrated device that can provide continuous, natural, and
context-aware assistance. The specific problem addressed by this project is the lack
of an affordable, standalone, conversational AI-powered device that supports voice
interaction, environment sensing, IoT control, and real-world situational assistance
for visually impaired users. Developing such a system requires a careful integration
of natural language processing, real-time hardware interfacing, low-power design,
and user-centric interaction models.

1. Background and Motivation


Accessibility to technology is a major challenge faced by visually impaired
individuals worldwide. Vision loss not only affects mobility but also significantly
impacts communication, information access, and the ability to live independently.
Although assistive technologies such as white canes, Braille systems, and guide dogs
have made significant contributions, modern advancements in Artificial Intelligence
(AI) and the Internet of Things (IoT) have the potential to revolutionize the way
visually impaired individuals interact with the world.
Recent developments in conversational AI, particularly large language models like
ChatGPT, have demonstrated remarkable capabilities in natural language
understanding and contextual assistance. Combined with voice interaction and IoT
integration, these technologies can offer real-time solutions that are more intuitive
and effective than traditional assistive tools.

2. Problem Statement
Despite the existence of various assistive devices, most current solutions are limited
by high costs, limited functionality, complexity of use, or lack of real-time
adaptability. Devices often require tactile input or are restricted to predefined tasks,
thus limiting user flexibility. Visually impaired users face difficulties in navigating
unknown environments, accessing real-time information, and controlling smart
devices independently.
There is a critical need for a comprehensive, voice-driven, AI-enabled hardware
solution that can bridge the gap between the digital world and the real needs of the
visually impaired community. Such a system must be portable, affordable, and
capable of understanding natural human language commands while providing smart
feedback and control over IoT devices.

3. Objectives of the Project


The primary objectives of the "Hardware ChatGPT with Voice Assistance and AI–
IoT for Visually Impaired" project are:
 To develop a hardware-based voice assistance system integrated with
ChatGPT for natural conversation.
 To incorporate AI-driven real-time decision-making for obstacle detection,
location assistance, and smart device control.
 To enable IoT communication for controlling home appliances, receiving
sensor data, and improving quality of life.
 To ensure portability, energy efficiency, and a user-friendly design tailored for
visually impaired users.
 To design an affordable system accessible to a large population with minimal
setup complexity.

4. Scope of the Work


The scope of this project includes the integration of multiple technologies such as:
 Natural Language Processing (NLP): For accurate understanding and
generation of human language via voice.

 IoT Connectivity: Enabling communication between the device and other
smart devices or sensors over Wi-Fi, Bluetooth, or cellular networks.
 Voice Recognition and Synthesis: For converting spoken commands into
machine instructions and reading out responses.
 Obstacle Detection and Navigation Assistance: Using sensors like
ultrasonic, infrared, or LIDAR to enhance mobility.
 Hardware Design: Building a compact, low-power hardware platform
suitable for wearable or portable use cases.
 AI-Based Adaptation: Learning user preferences and frequently used
commands to personalize the experience over time.

5. Significance of the Project


This project holds significant value for several reasons:
 Empowerment: It empowers visually impaired users to interact independently
with their environment.
 Real-Time Assistance: Provides instant help in navigation, emergency
situations, and daily activities.
 Affordability: Focuses on building a cost-effective solution as opposed to
expensive commercial devices.
 Innovation: Combines conversational AI with physical hardware in a
seamless way, creating a pioneering assistive system.
 Inclusivity: Contributes to building a more inclusive society by bridging the
technology gap for the differently-abled.

6. Technology Overview
 ChatGPT Integration: ChatGPT serves as the core brain behind the system’s
understanding and conversation generation. Fine-tuned models can be
employed for better handling of common queries and specific user needs.

 Hardware Components: Microcontrollers like ESP32, Raspberry Pi Zero, or
custom boards are utilized along with microphones, speakers, camera modules,
and sensors.
 Voice Processing: Open-source speech-to-text and text-to-speech engines are
deployed for real-time interaction.
 IoT Framework: MQTT, HTTP, or CoAP protocols are employed for data
communication across IoT networks (a minimal MQTT sketch follows this list).
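
To make the IoT layer concrete, the following minimal sketch publishes a
status message over MQTT from the ESP32 using the widely used Arduino
PubSubClient library. The Wi-Fi credentials, broker address, client ID, and
topic are placeholders for illustration, not values fixed by this project.

#include <WiFi.h>
#include <PubSubClient.h>

WiFiClient wifiClient;
PubSubClient mqtt(wifiClient);

void setup() {
  WiFi.begin("your-ssid", "your-password");          // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(500);  // wait for Wi-Fi

  mqtt.setServer("broker.example.com", 1883);        // placeholder broker
  if (mqtt.connect("assistive-device-01")) {         // arbitrary client ID
    mqtt.publish("device/status", "online");         // example topic and payload
  }
}

void loop() {
  mqtt.loop();  // services the MQTT connection
}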

7. Challenges and Considerations


Building such a system poses several technical and practical challenges:
 Ensuring high accuracy in noisy outdoor environments.
 Managing power consumption for longer battery life.
 Maintaining real-time performance with low latency.
 Ensuring data privacy and secure communication.
 Making the device robust, durable, and comfortable to use.

8. Methodology
The methodology adopted for this project includes:
 Requirement Analysis: Studying the needs of visually impaired users.
 System Architecture Design: Designing both the software and hardware
frameworks.
 Prototyping: Developing initial models for testing different functions like
voice command processing, obstacle detection, and IoT control.
 Testing and Validation: Field-testing the prototypes in real-world
environments.
 Iteration and Enhancement: Incorporating feedback to improve system
functionality and reliability.

9. Structure of the Report

The project report is organized as follows:
 Chapter 1: Introduction and background
 Chapter 2: Literature review on related works
 Chapter 3: System design and architecture
 Chapter 4: Hardware and software implementation
 Chapter 5: Testing, results, and evaluation
 Chapter 6: Conclusion and future enhancements

This project holds immense significance for social, technological, and humanitarian
reasons. For the visually impaired community, it promises to deliver a new degree of
independence by allowing users to control their environment, receive situational
assistance, and interact with information systems naturally and intuitively. By
removing barriers to access, the solution aims to enhance the autonomy, dignity, and
overall quality of life of visually impaired individuals.
Technologically, the project pushes the boundaries of how AI models like ChatGPT
can be utilized beyond traditional chatbots or virtual assistants, embedding them into
hardware for physical-world applications. It also demonstrates how AI and IoT
convergence can deliver smart, adaptive systems capable of real-time decision-
making and communication.
From a humanitarian perspective, ensuring accessibility to modern technology is
crucial for creating an inclusive society where disabilities are not viewed as
limitations. The project's cost-effectiveness makes it scalable and replicable, opening
possibilities for deployment across different geographies, particularly in
underdeveloped or resource-constrained regions where commercial assistive
technologies are often out of reach.

2. EMBEDDED SYSTEMS

An embedded system is a computer system—a combination of a
computer processor, computer memory, and input/output peripheral devices
—that has a dedicated function within a larger mechanical or electrical
system. It is embedded as part of a complete device often including
electrical or electronic hardware and mechanical parts.

Because an embedded system typically controls physical operations
of the machine that it is embedded within, it often has real-time computing
constraints. Modern embedded systems are often based on microcontrollers
(i.e. microprocessors with integrated memory and peripheral interfaces), but
ordinary microprocessors (using external chips for memory and peripheral
interface circuits) are also common, especially in more complex systems.

Fig 2.1 Embedded Systems

Embedded systems range from portable devices such as digital
watches and MP3 players, to large stationary installations like traffic light
controllers, programmable logic controllers, and large complex systems like
hybrid vehicles, medical imaging systems, and avionics.

Complexity varies from low, with a single microcontroller chip, to
very high with multiple units, peripherals and networks mounted inside a
large equipment rack.

2.1 Characteristics

Embedded systems are designed to do some specific task, rather than
be a general-purpose computer for multiple tasks. Some also have real-time
performance constraints that must be met, for reasons such as safety and
usability; others may have low or no performance requirements, allowing
the system hardware to be simplified to reduce costs.

Embedded systems are not always standalone devices. Many
embedded systems consist of small parts within a larger device that serves a
more general purpose. The program instructions written for embedded
systems are referred to as firmware, and are stored in read-only memory or
flash memory chips.

They run with limited computer hardware resources: little memory,
small or non-existent keyboard or screen.

2.2 User interface

Embedded systems range from no user interface at all, in systems
dedicated only to one task, to complex graphical user interfaces that
resemble modern computer desktop operating systems.

Some systems provide a user interface remotely with the help of a
serial (e.g. RS- 232, USB, I²C, etc.) or network (e.g. Ethernet) connection.

2.3 Processors in embedded systems

Embedded processors can be broken into two broad categories.

 Ordinary microprocessors (μP): Use separate integrated
circuits for memory and peripherals.
 Microcontrollers (μC): Have on-chip peripherals, thus
reducing power consumption, size and cost.

PC/104 and PC/104+ are examples of standards for ready-made
computer boards intended for small, low-volume embedded and ruggedized
systems, mostly x86-based.

These are often physically small compared to a standard PC, although
still quite large compared to most simple (8/16-bit) embedded systems.
They often use DOS, Linux, NetBSD, or an embedded real-time operating
system such as MicroC/OS-II, QNX or VxWorks.

2.4 Peripherals

Embedded systems talk with the outside world via peripherals, such as:

 Serial Communication Interface (SCI): RS-232, RS-422, RS-485, etc.
 Synchronous Serial Communication Interface:
I2C, SPI, SSC and ESSI (Enhanced Synchronous Serial Interface)
 Universal Serial Bus (USB)
 Multi Media Cards (SD cards, Compact Flash, etc.)
 Networks: Ethernet, Lon Works, etc.
 Field buses: CAN-Bus, LIN-Bus, PROFIBUS, etc.
 Timers: PLL(s), Capture/Compare and Time Processing Units
 Discrete IO: General Purpose Input/output (GPIO)
 Analog to Digital/Digital to Analog (ADC/DAC)
 Debugging: JTAG, ISP, BDM Port, BITP, and DB9 ports.

2.5 Embedded software architectures

There are several different types of software architecture in common use.

2.5.1 Simple control loop

The software simply has a loop. The loop calls subroutines,
each of which manages a part of the hardware or software.
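
A minimal sketch of this super-loop pattern in Arduino-style C++ follows; the
subroutine names and empty bodies are illustrative placeholders, not the
project's actual code.

void readSensors()   { /* poll inputs, e.g. distance sensors */ }
void updateOutputs() { /* drive the speaker, LEDs, vibration motor */ }

void setup() {
  // one-time hardware initialization goes here
}

void loop() {          // loop() repeats forever
  readSensors();       // each subroutine services one part of the system
  updateOutputs();
}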

2.5.2 Interrupt-controlled system

Some embedded systems are predominantly controlled by
interrupts. This means that tasks performed by the system are
triggered by different kinds of events.

An interrupt could be generated, for example, by a timer at a
predefined frequency, or by a serial port controller receiving a byte.
These kinds of systems are used if event handlers need low latency
and the event handlers are short and simple.
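
A minimal event-driven sketch for the ESP32 Arduino core is shown below. The
interrupt service routine only sets a flag, and the slower handling happens in
the main loop; the pin number is an assumed example.

const int buttonPin = 23;       // assumed input pin
volatile bool eventFlag = false;

void IRAM_ATTR onEvent() {      // ISRs should stay short and simple
  eventFlag = true;
}

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(buttonPin), onEvent, FALLING);
}

void loop() {
  if (eventFlag) {
    eventFlag = false;
    // handle the event here, outside the ISR
  }
}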

2.5.3 Cooperative multitasking

A non-preemptive multitasking system is very similar to the
simple control loop scheme, except that the loop is hidden in an API.
The programmer defines a series of tasks, and each task gets its own
environment to “run” in. When a task is idle, it calls an idle routine,
usually called “pause”, “wait”, “yield”, “nop” (stands for no
operation), etc.

2.5.4 Preemptive multitasking or multi-threading

In this type of system, a low-level piece of code switches
between tasks or threads based on a timer (connected to an interrupt).

This is the level at which the system is generally considered to
have an "operating system" kernel.

Depending on how much functionality is required, it
introduces more or less of the complexities of managing multiple
tasks running conceptually in parallel.
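
Because the ESP32 Arduino core runs on the FreeRTOS kernel, preemptive
multitasking is directly available. The sketch below, with placeholder task
bodies and example stack sizes and priorities, creates two tasks that the
scheduler switches between on the timer tick.

void sensorTask(void *arg) {
  for (;;) {
    // read sensors here
    vTaskDelay(pdMS_TO_TICKS(50));  // sleep 50 ms, yielding the CPU
  }
}

void voiceTask(void *arg) {
  for (;;) {
    // process audio here
    vTaskDelay(pdMS_TO_TICKS(10));
  }
}

void setup() {
  xTaskCreate(sensorTask, "sensors", 2048, NULL, 1, NULL);
  xTaskCreate(voiceTask, "voice", 4096, NULL, 2, NULL);  // higher priority
}

void loop() {}  // all work happens in the tasks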

2.5.5 Exotic custom operating systems

A small fraction of embedded systems require safe, timely,
reliable or efficient behavior unobtainable with any of the above
architectures. In this case an organization builds a system to suit.

In some cases, the system may be partitioned into a
"mechanism controller" using special techniques, and a "display
controller" with a conventional operating system. A communication
system passes data between the two.

2.6 Applications

 Consumer, industrial, automotive, home appliances, medical,
commercial and military applications.

 Telecommunications systems employ numerous embedded systems
from telephone switches for the network to cell phones at the end

user. Computer networking uses dedicated routers and network
bridges to route data.

 Consumer electronics include MP3 players, mobile phones, video
game consoles, digital cameras, GPS receivers, and printers.
Household appliances, such as microwave ovens, washing machines
and dishwashers, include embedded systems to provide flexibility,
efficiency and features.

 Advanced HVAC systems use networked thermostats to control
temperature more accurately and efficiently, adapting to time of day
and season.

 Automobiles, electric vehicles, and hybrid vehicles increasingly use
embedded systems to maximize efficiency and reduce pollution.

 Automotive safety systems include anti-lock braking system (ABS),
Electronic Stability Control (ESC/ESP), traction control (TCS) and
automatic four-wheel drive.

 Medical equipment uses embedded systems for vital signs
monitoring, electronic stethoscopes for amplifying sounds, and
various medical imaging (PET, SPECT, CT, and MRI) for non-
invasive internal inspections.

2.7 Role of Embedded Systems


Embedded systems form the backbone of this project, providing the necessary
processing capabilities, hardware control, and real-time responsiveness required for
seamless interaction with the user and the environment. In the proposed system, an
embedded platform acts as the central control unit that integrates all hardware
components, processes voice commands, handles data from sensors, connects to IoT
devices, and communicates with AI services like ChatGPT.
Unlike general-purpose computing systems, embedded systems are highly
specialized and optimized for specific tasks, such as voice recognition, obstacle
detection, and network communication. Their compact size, low power consumption,
reliability, and real-time operational capacity make them ideal for wearable and
portable assistive devices. By using an embedded approach, the device can remain
lightweight, efficient, affordable, and easily customizable to meet the specific needs
of visually impaired users.

2.8 Embedded Hardware Platform


For this project, microcontrollers or microprocessors such as ESP32, Raspberry Pi
Zero W, or similar platforms are selected. These devices offer a good balance
between computational power, wireless connectivity, and energy efficiency.
 ESP32 is a dual-core microcontroller with built-in Wi-Fi and Bluetooth
capabilities, making it ideal for real-time IoT communication and lightweight
AI processing.
 Raspberry Pi Zero W provides a more powerful Linux-based environment
capable of running more sophisticated AI models locally, along with full
support for voice libraries and external module integration.
The selection between these platforms depends on the computational
complexity of the AI models, storage requirements, and intended operating
environments (offline vs online). The embedded platform is responsible for
connecting all peripheral devices such as microphones, speakers, obstacle
sensors, GPS modules, and IoT devices into a unified system.

2.9 Key Functions Handled by Embedded System


The embedded controller in the project is responsible for executing several critical
tasks simultaneously, including:
 Voice Recognition and Processing: Capturing voice input from the user,
processing it locally or via lightweight APIs, converting speech to text, and

preparing it for AI-based interpretation.
 ChatGPT Integration: Handling the communication pipeline between the
embedded device and the ChatGPT language model (either via cloud API or a
localized lightweight inference engine), interpreting the user's commands, and
generating appropriate responses.
 Speech Synthesis: Converting the AI-generated text response back into
speech output that can be clearly understood by the user.
 Obstacle Detection and Alerting: Reading distance data from ultrasonic or
infrared sensors, identifying potential hazards, and immediately notifying the
user via voice or sound alarms (a minimal sketch of this task follows this list).
 IoT Device Control: Sending commands to IoT-enabled devices (smart lights,
doors, alarms, etc.) via Wi-Fi or Bluetooth communication protocols, allowing
users to manage their surroundings effortlessly.
 Emergency Handling: Triggering SOS alerts by sending messages through
GSM modules or Wi-Fi networks when emergencies are detected or manually
initiated by the user.
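
As a concrete illustration of the obstacle-detection task above, the following
minimal sketch reads an HC-SR04-style ultrasonic sensor and flags nearby
objects. The pin numbers and the one-metre threshold are assumptions for
illustration only; the real device would trigger a voice or vibration alert
instead of a serial print.

const int trigPin = 5;                // assumed wiring
const int echoPin = 18;
const float alertDistanceCm = 100.0;  // example alert threshold

float readDistanceCm() {
  digitalWrite(trigPin, LOW);  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH); delayMicroseconds(10);  // 10 us trigger pulse
  digitalWrite(trigPin, LOW);
  long echoUs = pulseIn(echoPin, HIGH, 30000);  // echo time, 30 ms timeout
  return echoUs * 0.0343f / 2.0f;               // speed of sound, out and back
}

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  Serial.begin(115200);
}

void loop() {
  float d = readDistanceCm();
  if (d > 0 && d < alertDistanceCm) {
    Serial.println("Obstacle ahead");
  }
  delay(100);
}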

2.10 Real-Time Operation


One of the major advantages of using embedded systems in this project is their
ability to operate in real-time. For a visually impaired user, system latency could
lead to critical situations. The embedded device is programmed to ensure that speech
recognition, obstacle detection, and environment updates happen with minimal delay,
thus maintaining user safety and confidence. Real-time interrupt handling, task
prioritization, and efficient memory management are incorporated at the software
level to meet strict timing constraints.

2.11 Power Management in Embedded Systems
Since the device is intended for portable use, low power consumption is a priority.
Embedded systems are inherently energy-efficient and are optimized further by
implementing techniques like:
 Using sleep and deep sleep modes when the device is idle.
 Activating sensors only when necessary.
 Reducing the transmission frequency for non-critical data.
 Selecting low-power wireless protocols (e.g., BLE over classic Bluetooth).
Battery-operated embedded platforms are carefully chosen and configured to provide
several hours of uninterrupted operation, ensuring the device remains practical for
everyday use.
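
A minimal sketch of the deep-sleep technique on the ESP32 follows; the
30-second timer wake-up is an example, and a real build would also register
button or sensor wake sources.

#include "esp_sleep.h"

void setup() {
  Serial.begin(115200);
  // ... perform one sensing/announcement cycle here ...
  esp_sleep_enable_timer_wakeup(30ULL * 1000000ULL);  // wake after 30 s (in microseconds)
  Serial.println("Entering deep sleep");
  esp_deep_sleep_start();  // the chip resets and restarts from setup() on wake
}

void loop() {}  // never reached in deep-sleep operation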
2.12 Advantages of Embedded Systems in This Project
The decision to use embedded systems architecture brings multiple benefits to the
project:
 Portability: The small form factor enables the entire system to be integrated
into wearable or handheld devices.
 Energy Efficiency: Ensures long operational life even on limited battery
resources.
 Cost-Effectiveness: Off-the-shelf microcontrollers and modules significantly
reduce the overall project cost.
 Customization: Embedded programming allows fine-tuning features
according to user feedback and evolving requirements.
 Reliability: Rugged and stable operation suitable for continuous use in diverse
environments.

2.13 Future Scope of Embedded Systems in Assistive Technologies


Looking forward, embedded systems are expected to play an even more significant
role in assistive technologies. Future enhancements could include the integration of
Machine Learning models for offline voice recognition, smarter obstacle

identification using vision modules, dynamic real-time location tracking, and even
predictive analytics for user health monitoring. By leveraging advances in embedded
AI, Edge computing, and low-power communications, next-generation assistive
devices could become even smarter, more intuitive, and more personalized for each
individual user's needs.

3. LITERATURE REVIEW

1. Voice-Based Assistive Technologies for the Visually Impaired (2020)


This study highlighted the growing significance of voice-based systems for aiding
visually impaired individuals in day-to-day activities. Researchers proposed a
smartphone-integrated voice recognition module that could perform tasks such as
sending messages, navigating menus, and providing weather updates. The system
was trained on large voice datasets to improve accuracy. However, a major challenge
remained with environmental noise interference, often leading to incorrect
recognition in outdoor conditions. It stressed the need for noise-cancellation
techniques and lightweight AI models suitable for embedded environments.

2. IoT-Based Smart Walking Stick for the Blind (2019)


This research presented a smart walking stick embedded with ultrasonic sensors,
GPS modules, and vibration motors, helping blind users detect obstacles and navigate
efficiently. An IoT module transmitted real-time location data to caretakers. The
smart stick could detect obstacles up to 4 meters and vibrated at different intensities
based on obstacle proximity. However, the absence of AI-based decision-making
limited its adaptability in dynamic environments like crowded streets. The study laid
a strong foundation for integrating AI to enhance real-world usability.

3. Design of Smart Assistive Systems for Blind People Using Machine Learning
(2021)
The researchers developed a lightweight machine learning model capable of running
on microcontrollers (e.g., STM32, ESP32) to classify nearby objects like cars,
people, and trees. Using datasets of obstacle images, they trained a compact
convolutional neural network (CNN) and implemented it on a wearable device. This
advancement significantly enhanced environmental awareness. The study also
pointed out the importance of continuous learning and updating models to adapt to
changing urban environments, suggesting future devices should incorporate OTA
(Over-the-Air) learning updates.

4. Implementation of Speech-to-Text Systems for Real-Time Applications (2020)


This paper analyzed how speech-to-text conversion could be optimized for embedded
platforms to deliver real-time performance. Techniques like noise filtering, phoneme
extraction, and feature compression were discussed in detail. For visually impaired
users, accurate and fast speech recognition can mean independence. The research
found that hybrid approaches, combining rule-based processing and lightweight deep
learning, outperformed purely statistical models on low-power devices. The findings
are essential for designing assistive gadgets with limited hardware capabilities.

5. ChatGPT and Language Models in Assistive Technologies (2023)


This very recent study explored the application of large language models like
ChatGPT in assistive devices. It demonstrated how AI could help visually impaired
users by understanding complex commands, offering natural conversations,
explaining environments, and even reading documents aloud. Through fine-tuning,
models were adapted to understand common phrases used by visually impaired
individuals. The study revealed that while ChatGPT integration enhances the user
experience dramatically, hardware constraints remain a bottleneck, leading to the
exploration of smaller, embedded versions of AI models.

6. Smart Voice-Activated IoT Devices: A Survey (2022)


A comprehensive survey outlined the existing smart IoT devices controllable via
voice commands, like Alexa, Google Home, and custom-built devices. It stressed the
relevance of fast processing, reliable voice recognition, and real-time IoT actuation
for the visually impaired. Case studies indicated how smart homes could be enhanced
with embedded voice processors capable of managing tasks like light control,
emergency alarms, and appliance management through simple voice instructions.
Challenges included the need for offline functionality due to internet dependence.

7. A Study on Speech Synthesis Techniques for Assistive Devices (2018)
This study discussed different methods for generating synthetic speech in assistive
devices. Techniques like concatenative synthesis (joining pre-recorded speech
samples) and parametric synthesis (using mathematical models) were compared.
Researchers concluded that parametric methods, particularly deep neural network-
based speech synthesis, offer superior adaptability and naturalness for embedded
assistive systems. In the context of this project, high-quality speech output is crucial
for delivering intelligible and emotionally neutral communication to the visually
impaired.

8. GPS and IoT-based Navigation Systems for Visually Impaired (2017)


A GPS and IoT-based navigation aid was proposed that provided real-time guidance
using voice alerts. It utilized GPS tracking modules with IoT data transmission to a
cloud server, enabling both self-navigation and external monitoring by caretakers.
However, one major drawback highlighted was poor indoor performance due to GPS
signal attenuation. The study recommended hybrid indoor navigation methods such
as BLE beacon triangulation or visual markers, which could be integrated into future
IoT-based assistive systems.

9. Wearable Assistive Technology: Current Trends and Future Scope (2021)


This review emphasized the evolution of wearable assistive technologies from simple
vibratory devices to complex AI-powered wearables. It pointed out the growing trend
of embedding Wi-Fi, Bluetooth, and AI algorithms into compact, body-worn
hardware. The paper suggested that future assistive devices must prioritize
ergonomics, low power consumption, and continuous learning capabilities. For
visually impaired users, unobtrusive wearable AI could dramatically enhance daily
independence and safety.

10. Real-Time Object Detection for Visually Impaired Assistance Using Embedded
Vision (2022)
A system using low-power CNNs like MobileNetV2 was proposed to identify objects
such as stairs, vehicles, and pedestrians in real time. Implemented on devices like
Raspberry Pi and NVIDIA Jetson Nano, the object detection systems provided audio
feedback on nearby hazards. However, the study noted that energy consumption was
a serious constraint, suggesting further exploration into event-based cameras and
neuromorphic processors for ultra-low power real-time vision.

11. Enhancement of Obstacle Detection in Smart Sticks Using AI (2020)


This research integrated AI classification with obstacle detection. Instead of just
detecting the presence of obstacles, the system identified the type (moving vehicle,
pedestrian, static wall) using ultrasonic and infrared sensors. AI models like decision
trees and support vector machines (SVM) were used to process sensor fusion data.
This approach provided richer information to the user, enabling better decision-
making while moving through complex environments.

12. Energy-Efficient IoT Communication Protocols for Wearables (2019)


This study explored how communication protocols affect the battery life of wearable
devices. Protocols like Bluetooth Low Energy (BLE), ZigBee, LoRaWAN, and Wi-
Fi were compared based on power consumption, data rate, and range. BLE emerged
as the most suitable for small-scale wearable applications where continuous internet
connectivity isn't mandatory. The findings guided future designs to adopt energy-
conscious networking layers, which are crucial for wearable assistive technologies
requiring long-term operation without frequent charging.

13. Integration of AI Chatbots in Assistive Educational Tools (2022)


Researchers implemented AI-based chatbots for blind students, focusing on academic
support such as answering curriculum-related questions, providing lecture

summaries, and conducting verbal quizzes. The chatbot was fine-tuned to accept
voice inputs and deliver audio responses with a human-like tone. Integration with
cloud platforms like Dialogflow and GPT-3 improved the richness of interaction.
Such findings prove the potential of conversational AI in the broader landscape of
assistive devices beyond basic navigation.

14. Voice-Controlled Home Automation System for Disabled People (2018)


This study developed a complete home automation solution controlled by simple
voice commands. Users could control lights, fans, door locks, and security alarms via
a Bluetooth-connected microcontroller. Although primarily designed for physically
disabled individuals, the core idea of voice-activated control laid the groundwork for
smart IoT environments suitable for the visually impaired, enhancing their
independence inside homes.

15. Emergency Alert Systems for Visually Impaired Using GSM and IoT (2021)
A GSM and IoT-based emergency alert system was developed where visually
impaired users could trigger an alert by pressing a hidden button or speaking a
predefined word. An SMS or IoT notification would immediately be sent to
caregivers, along with location data if available. The study stressed that having multi-
channel emergency communication (voice, GSM, IoT cloud) greatly increases
reliability and ensures that users can seek help even if one network fails.

3.2 Existing Work

Assistive technologies for the visually impaired have witnessed significant growth
over the past two decades, with major advancements in voice-based systems, obstacle
detection, and IoT-based navigation aids. Existing solutions largely focus on
providing support through smart canes, smartphone applications, wearable sensors,
and voice-enabled devices. Each of these innovations has aimed to bridge the gap
between the visually impaired and their environment by enhancing mobility, safety,
and access to information.
One of the earliest and most widely adopted technologies was the smart walking
stick, embedded with ultrasonic sensors to detect nearby obstacles. When an object is
detected within a certain range, the stick vibrates or emits a sound, warning the user
of the obstruction ahead. While effective in static environments, these systems often
struggle in dynamic conditions like crowded urban areas where real-time decision-
making and adaptive navigation are crucial.
Another major stream of development has been voice-assisted smartphones and
screen readers such as TalkBack (Android) and VoiceOver (iOS). These tools
convert text to speech and allow visually impaired users to navigate their devices
independently. While powerful, their dependence on a stable internet connection for
full functionality often limits their use in rural or low-connectivity regions.
Moreover, these solutions are often not deeply integrated with physical navigation
support or obstacle detection in real-world environments.
IoT-based solutions have further enhanced accessibility by introducing real-time
monitoring and assistance. Some devices send location data to caregivers or
emergency services when needed. However, these systems often lack robust AI
capabilities for predictive assistance or complex environment interpretation.
Moreover, most existing IoT devices operate on basic data transmission models
without intelligent contextual processing.
Voice-activated home automation systems have also emerged, enabling visually
impaired individuals to control household appliances through simple voice
commands. Integration with smart hubs like Amazon Alexa and Google Assistant has
made life easier indoors. Nevertheless, these systems are not specifically designed
with visually impaired users in mind and may sometimes require visual
confirmations or interactions that are not accessible.
Recent works involving AI-based object recognition have begun to demonstrate the
potential of machine learning in assistive devices. Systems using lightweight CNN
models can now identify objects like traffic signals, stairs, and moving vehicles.
However, running such computationally intensive models on low-power embedded
hardware remains a major challenge. While platforms like Raspberry Pi or Jetson
Nano can support these models, they often lack the battery efficiency and
compactness required for wearables.
Chatbot integration and voice-based AI assistance have shown promise for
improving communication and access to information. However, most chatbot
systems are cloud-dependent, requiring stable internet access and considerable
processing power, which limits their portability and real-time responsiveness in
mobile or wearable devices. Additionally, current systems are limited in their
contextual awareness and cannot fully understand or predict the complex needs of
visually impaired users while navigating real-world environments.
Despite these advancements, current systems exhibit several limitations:
 Limited integration of AI-driven real-time decision-making.
 Dependency on internet connectivity for core functionality.
 Lack of highly optimized embedded AI models suitable for wearable devices.
 Minimal proactive support based on environmental context.
 No single compact system that combines voice assistance, AI-based object
recognition, IoT data transmission, and emergency response.
These gaps highlight the need for a comprehensive, hardware-based embedded
system that combines ChatGPT-level natural language processing, real-time
obstacle detection, voice-based assistance, and IoT connectivity in a lightweight,
portable, and energy-efficient device specifically tailored for visually impaired users.

The existing research serves as a critical foundation, but there remains a vast scope
for innovation by integrating AI, Embedded Systems, Speech Recognition, and IoT
into a single, intelligent wearable platform—which this project aims to achieve.

3.3 Proposed Work

The proposed system aims to develop a hardware-based, AI-integrated, voice-
assisted wearable device specially designed for visually impaired individuals. Unlike
conventional systems that rely heavily on basic obstacle detection or simple voice
alerts, this project introduces an intelligent assistant capable of understanding natural
language commands (like ChatGPT), recognizing obstacles and environmental
factors using embedded AI models, and seamlessly communicating essential
information to users in real-time through voice output.
The core of the system is built around a high-performance embedded microcontroller
such as ESP32, Raspberry Pi Zero 2 W, or a similar compact platform capable of
running lightweight AI models. The device integrates multiple sensors, including
ultrasonic sensors for obstacle detection, camera modules for real-time object
recognition, GPS modules for location tracking, and vibration motors for tactile
feedback. Environmental sensors such as temperature, humidity, and light sensors
can be added to enhance situational awareness.
For voice assistance, a custom AI chatbot engine—either local or hybrid
(edge+cloud)—is deployed, enabling users to interact naturally with the device.
Commands such as “What’s around me?”, “Guide me to the nearest exit,” or “Send a
location alert” can be processed intelligently. The voice output is provided through
an embedded text-to-speech (TTS) system that ensures offline functionality, allowing
uninterrupted use even in areas with poor internet connectivity.
The system is also equipped with IoT capabilities to connect to cloud platforms like
ThingSpeak or AWS IoT Core. This allows real-time monitoring, emergency
notifications to caregivers or family members, data logging for pattern analysis, and
remote updates. In case of emergencies like a fall, abnormal temperature detection, or
user distress signals, the system can automatically send an SMS alert or push a
notification containing the user's live GPS coordinates.
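
A minimal sketch of such an alert path is shown below, assuming a SIM800-class
GSM modem on the ESP32's second serial port and standard GSM AT commands. The
recipient number and coordinates are placeholders that a real device would
fill with live GPS data.

void sendSosSms(const String &lat, const String &lon) {
  Serial2.println("AT+CMGF=1");                 // select SMS text mode
  delay(200);
  Serial2.println("AT+CMGS=\"+10000000000\"");  // placeholder recipient
  delay(200);
  Serial2.print("SOS! Location: " + lat + "," + lon);
  Serial2.write(26);                            // Ctrl+Z terminates and sends
}

void setup() {
  Serial2.begin(9600);               // typical SIM800 baud rate
  sendSosSms("12.9716", "77.5946");  // example coordinates
}

void loop() {}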
An additional layer of intelligence is incorporated through contextual AI models that
can predict potential dangers based on sensor fusion. For example, if the device
detects an obstacle approaching quickly (like a moving vehicle) and the user is near a
road (using GPS mapping), the system can immediately issue a high-priority warning
through vibration and voice.
The hardware design is focused on compactness, lightweight build, energy
efficiency, and wearability. The device can be integrated into smart glasses, a
wristband, or a chest-worn module depending on user comfort. A rechargeable
battery system with solar-assisted charging is also considered to ensure long
operational times without frequent recharging.
In summary, the proposed system is a highly integrated, AI-powered, IoT-connected
wearable device that not only helps visually impaired users navigate their
environment safely but also empowers them to interact naturally with technology,
fostering greater independence, safety, and quality of life.

4. HARDWARE DESCRIPTION

Fig 4.1 Block Diagram

4.1.1 Hardware requirements


 ESP32 Microcontroller
 MAX4403 Amplifier
 Speaker
 MAX4404 Microphone
 Power Source

TEXT TO SPEECH

Introduction to Text-to-Speech (TTS)


Text-to-Speech (TTS) technology is a form of speech synthesis that converts
written text into spoken voice output. In this project, TTS plays a crucial role by
enabling the device to verbally communicate information, instructions, and
feedback to the visually impaired user, ensuring better understanding and
independence.
By using TTS, the system reads out sensor outputs, messages generated by the AI
(ChatGPT), object detections, environmental alerts, and any necessary navigation
or IoT control feedback — effectively becoming the "voice" of the device for the
user.

Purpose of TTS in the Project


Assist Visually Impaired Users:
Since users cannot rely on visual information, TTS ensures that all important data
is delivered audibly in a clear, understandable voice.
Real-Time Feedback:
Alerts related to obstacles, surroundings, device status, or IoT appliance control are
instantly converted to speech, helping the user react quickly.
Natural Interaction with AI:
The ChatGPT responses generated for queries (e.g., "What is the weather today?")
are converted into speech, allowing users to have real-time conversational
experiences.
Emergency Assistance:
In critical situations (e.g., obstacle detection, low battery, system malfunction), the
TTS system can warn the user audibly, preventing accidents.

Implementation in the Project


Microcontroller:
The ESP32 is used, which can handle TTS using lightweight libraries.
TTS Libraries and Tools:
Arduino TTS Library (like Talkie or ESP8266SAM library adapted for ESP32)
External TTS Modules (optional, like DFPlayer Mini with pre-recorded audio)
Online API Integration (optional for cloud-based TTS, e.g., Google TTS APIs if
internet is available)
Working Flow:
1. Text is generated by sensors or the AI (ChatGPT module).
2. The text is sent to the TTS engine.
3. The TTS engine processes and converts it into a speech signal.
4. The output is played through a speaker or headphones connected to the hardware.
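
To make this flow concrete, here is a minimal offline TTS sketch built on the
ESP8266SAM synthesizer mentioned above together with the ESP8266Audio
library's I2S output, both of which run on the ESP32; the amplifier wiring is
an assumption and must match the actual hardware.

#include <AudioOutputI2S.h>
#include <ESP8266SAM.h>

AudioOutputI2S *out;

void setup() {
  out = new AudioOutputI2S();
  // out->SetPinout(bclk, lrclk, dout);  // uncomment to match the wiring
  out->begin();
  ESP8266SAM *sam = new ESP8266SAM;
  sam->Say(out, "Obstacle ahead, please stop.");  // synthesizes and plays speech
  delete sam;
}

void loop() {}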

Advantages of TTS in This Project


Hands-Free Communication:
Users do not need to press buttons or read screens; everything is spoken aloud.
Enhanced Accessibility:
Even users unfamiliar with advanced technology can easily operate the system
through simple voice interactions.
Emotional Engagement:
With natural-sounding TTS voices, communication becomes more friendly,
comforting, and human-like.
Language Flexibility:
Future updates can support multiple languages and accents to cater to different
users worldwide.

Challenges and Solutions


Challenge: Robotic or unnatural voice quality.
Solution: Use more advanced TTS libraries or integrate cloud-based TTS APIs.

Challenge: Processing delays on the microcontroller.
Solution: Use pre-cached or optimized TTS models.

Challenge: Limited memory for high-quality speech synthesis.
Solution: Compress and optimize speech files, or offload some processing to a
connected smartphone.

Summary
In this project, Text-to-Speech technology is not just an add-on but a core
communication bridge between the hardware AI system and the visually impaired
user. By effectively delivering audible responses, TTS empowers users to interact
naturally with their environment, making technology more inclusive, accessible,

and human-centered.

SPEECH TO TEXT

Introduction to Speech-to-Text (STT)


Speech-to-Text (STT) technology, also known as Automatic Speech Recognition
(ASR), enables the conversion of spoken language into written or machine-
readable text. In this project, STT is a critical component because it allows the
visually impaired user to give voice commands to the system, interact with the
ChatGPT AI model, and control IoT devices easily and naturally.
Rather than using a traditional interface like keyboards or touchscreens (which are
inaccessible to the visually impaired), the system captures the user's spoken words,
converts them into text, processes the command or query, and provides an
appropriate response via Text-to-Speech (TTS).

Purpose of STT in the Project
Voice Command Input:
The user can control connected IoT devices (e.g., turn on/off lights, fans) simply by
speaking.
Interaction with ChatGPT AI:
Users can ask questions, give instructions, or seek assistance, and their voice is
converted into text that the AI can understand and process.
Obstacle and Navigation Queries:
Users can ask about nearby obstacles, navigation help, or environmental conditions
(e.g., "Is there an object ahead?").
Emergency Commands:
Quick emergency voice commands (e.g., "Call for help") can trigger predefined
actions like sending alerts to caregivers.

Implementation in the Project


Microcontroller:
The ESP32 is used because it supports simple offline voice recognition libraries,
and can also work with external modules if advanced processing is needed.
STT Libraries and Tools:
Offline Voice Recognition:
Using libraries like Arduino Voice Recognition V3 Module or simple keyword-
based voice detection (for basic commands).
Cloud-based STT (if Wi-Fi available):
Integration with Google Cloud Speech-to-Text API or Vosk API for more
complex, flexible conversation recognition.
External Modules:
Modules like Elechouse Voice Recognition Module V3 can be used for offline
voice command recognition without needing the internet.

Working Flow:
1. The user speaks a command or question.
2. The microphone captures the audio input.
3. The STT engine processes the speech and converts it into text.
4. The text is either interpreted as a command or sent to the ChatGPT AI.
5. A corresponding action or response is triggered.
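
The dispatch step of this flow can be sketched as below. Here
getRecognizedText() is a hypothetical placeholder for whichever STT source
(offline module or cloud API) produced the transcript, and the keywords are
examples only.

String getRecognizedText() {
  // placeholder: in the real device this returns the latest STT transcript
  return "";
}

void handleCommand(const String &text) {
  if (text.indexOf("light on") >= 0) {
    // send the IoT command to switch the light on
  } else if (text.indexOf("help") >= 0) {
    // trigger the emergency alert path
  } else if (text.length() > 0) {
    // forward anything else to the ChatGPT pipeline as a query
  }
}

void setup() {}

void loop() {
  handleCommand(getRecognizedText());
}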

Advantages of STT in This Project


Hands-Free Control:
No need for physical interaction — ideal for users who rely solely on auditory
feedback.
Natural Interaction:
Users interact naturally in their own language without needing to memorize button
sequences.
Speed and Efficiency:
Speech input is often faster than typing or pressing buttons, especially for complex
queries.
Improved Safety:
During navigation, users can issue voice commands while keeping their hands free
to move or use a cane.

Challenges and Solutions


Challenge: Background noise affecting recognition.
Solution: Use noise-cancelling microphones and speech filtering algorithms.

Challenge: Limited offline vocabulary for embedded STT.
Solution: Integrate predefined command lists or fall back to online STT APIs.

Challenge: Processing speed and memory constraints.
Solution: Optimize the command list and prioritize essential functions for
offline recognition.

Summary
In this project, Speech-to-Text technology transforms the user’s spoken language
into actionable digital commands, making interaction with AI and IoT devices
seamless, efficient, and accessible for visually impaired individuals. Combined
with TTS, it creates a full voice-based two-way communication system that
eliminates the need for screens, keyboards, or buttons, enhancing independence and
quality of life.

ESP32 Microcontroller

The ESP32 microcontroller plays a central role in the development of the proposed
hardware ChatGPT with Voice Assistance and AI–IoT system for visually
impaired users. The ESP32 is a powerful, low-cost, dual-core microcontroller with
built-in Wi-Fi and Bluetooth connectivity, making it ideal for real-time data
communication, IoT integration, and wireless voice command processing. In this
project, the ESP32 is responsible for managing all sensor inputs, including
ultrasonic sensors for obstacle detection, GPS modules for location tracking, and
environmental sensors for enhanced situational awareness. Its multi-tasking
capability allows it to handle simultaneous operations such as running lightweight
AI models for obstacle recognition, processing voice commands, and
communicating with cloud platforms like ThingSpeak or AWS IoT.
Moreover, the ESP32’s high processing speed (up to 240 MHz) and large SRAM
(520 KB) enable it to support embedded AI functionalities such as keyword
spotting for offline voice recognition, and basic natural language understanding to
respond intelligently to user queries. The microcontroller’s GPIO flexibility
facilitates the integration of various output components like vibration motors,
buzzers, and voice output modules, ensuring timely and appropriate feedback to the
user.

The device's low-power modes are particularly beneficial in designing a wearable
solution, allowing the system to conserve battery life during idle periods and
extend operational time. Additionally, the dual Wi-Fi and Bluetooth functionality
ensures that the device can both connect to IoT cloud servers for real-time
monitoring and communicate locally with smartphones or emergency systems
without any extra hardware. In case of network unavailability, the ESP32 can
operate in offline mode using its onboard processing capabilities, maintaining
essential services like obstacle warnings and voice interactions.
Overall, the ESP32 microcontroller offers the perfect balance of processing power,
connectivity, low energy consumption, and versatility, making it an indispensable
component for creating an intelligent, reliable, and efficient assistive device for
visually impaired individuals in this project.
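
A minimal sketch of the cloud leg of this pipeline is given below, posting a
question to a chat-completion endpoint over HTTPS from the ESP32. The URL and
request body follow the OpenAI REST format, the credentials are placeholders,
and certificate checking is skipped for brevity; a production build should pin
the server certificate.

#include <WiFi.h>
#include <WiFiClientSecure.h>
#include <HTTPClient.h>

String askChatModel(const String &question) {
  WiFiClientSecure client;
  client.setInsecure();  // demo only: skips TLS certificate validation
  HTTPClient http;
  http.begin(client, "https://api.openai.com/v1/chat/completions");
  http.addHeader("Content-Type", "application/json");
  http.addHeader("Authorization", "Bearer YOUR_API_KEY");  // placeholder key
  String body = String("{\"model\":\"gpt-3.5-turbo\",\"messages\":[") +
                "{\"role\":\"user\",\"content\":\"" + question + "\"}]}";
  int code = http.POST(body);
  String reply = (code == 200) ? http.getString() : String("");
  http.end();
  return reply;  // raw JSON; the device would parse out the text for TTS
}

void setup() {
  WiFi.begin("your-ssid", "your-password");  // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(500);
  String answer = askChatModel("What is the weather today?");
}

void loop() {}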

Power Source
The system is powered using a regulated power supply, typically 5 V or 3.3 V depending on the sensor and microcontroller requirements. The power source may include a DC adapter, battery, or rechargeable power bank, depending on deployment conditions. Ensuring an uninterrupted power supply is essential for continuous monitoring and communication in this assistive system. Backup power options such as batteries may be used for wearable components or GSM alerts during power failures.
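As one hedged example of supervising such a battery-backed design in firmware, the sketch below reads the battery voltage through an assumed 2:1 resistor divider on ADC pin GPIO 34 and announces a low-battery warning over the TTS link; the pin, divider ratio, and threshold are illustrative assumptions rather than values from the project.

#define VBAT_PIN 34            // assumed ADC1 pin behind a 2:1 resistor divider
#define DIVIDER_RATIO 2.0
#define LOW_BATTERY_MV 3300.0  // warn below about 3.3 V

// Periodically check the supply and warn the user before power runs out.
void checkBattery() {
  float batteryMv = analogReadMilliVolts(VBAT_PIN) * DIVIDER_RATIO;
  if (batteryMv < LOW_BATTERY_MV) {
    Serial2.println("Battery low, please recharge");  // spoken by the TTS board
  }
}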

5. PROJECT OVERVIEW

The "Design of Hardware ChatGPT with Voice Assistance and AI–IoT for
Visually Impaired" project seeks to develop a wearable assistive device that
combines cutting-edge technologies like AI-based natural language processing
(NLP), IoT integration, voice assistance, and real-time environmental sensing to
help visually impaired individuals navigate and interact with their surroundings
independently. This device will provide a seamless and intelligent system to aid
users in daily activities, offering both safety and autonomy through the
combination of hardware and software solutions.
The project integrates a hardware platform built around the ESP32 microcontroller,
known for its low energy consumption, dual-core processing power, and built-in
Wi-Fi and Bluetooth capabilities. This allows the device to communicate in real-time with other devices, such as smartphones or cloud-based servers, enabling remote monitoring and emergency alerting. The ESP32 is complemented by a set of peripherals, including ultrasonic sensors for obstacle detection, a GPS module for location tracking, and voice feedback systems for direct interaction with the user.
The system aims to empower visually impaired users with AI-driven voice
assistance, enabling them to interact with the device using natural language. By
integrating ChatGPT-level language models, the device can answer questions,
provide contextual navigation support, and offer real-time information about the
environment, such as identifying objects, detecting nearby people, and providing
directions. This voice-driven interface will eliminate the need for manual controls,
giving the user hands-free access to critical information.
In addition to real-time navigation and environmental assistance, the system
supports IoT-based monitoring and emergency alerting. The device is capable of
sending distress signals and user location details to caregivers or emergency
contacts in case of sudden incidents such as falls or distress. Moreover, the device’s
location-based features allow for dynamic adjustments to guidance depending on
the user’s surroundings, such as providing specific alerts when approaching street
crossings or obstacles.
The device also features offline functionality, ensuring that it continues to operate
effectively in areas with poor or no internet connectivity. The system utilizes local
processing capabilities of the ESP32 and lightweight AI models to run essential
features such as obstacle detection and voice interaction without needing constant
cloud access.

Ultimately, the project aims to create a compact, energy-efficient, multi-sensor wearable device that enhances the mobility, safety, and independence of
visually impaired users. It combines AI-powered assistance, real-time
environmental awareness, and IoT communication in one integrated solution,
bridging the gap between users and their surroundings. This system seeks to
empower the visually impaired by providing them with tools that foster confidence,
autonomy, and access to information in their daily lives.

6. PROGRAM

// IR sensor acting as wake-up button
#define button 23

// RGB LEDs for status indication
#define led_1 15
#define led_2 2
#define led_3 4

// UART pins of the second ESP32 used for Text-to-Speech
#define RXp2 16
#define TXp2 17

// Necessary libraries
#include "Audio.h"
#include "CloudSpeechClient.h"

int i = 0;

void setup()
{
  pinMode(button, INPUT);
  pinMode(led_1, OUTPUT);
  pinMode(led_2, OUTPUT);
  pinMode(led_3, OUTPUT);

  Serial.begin(115200);
  Serial2.begin(115200, SERIAL_8N1, RXp2, TXp2);
  Serial2.println("Initialising");
}

void loop()
{
  digitalWrite(led_1, 0);
  digitalWrite(led_2, 0);
  digitalWrite(led_3, 0);

  if (i == 0) {
    Serial.println("Press button");
    i = 1;
  }

  delay(500);

  if (digitalRead(button) == 0)   // wake-up input is active-low
  {
    Serial2.println("\r\nPlease Ask!\r\n");

    // Green LED on while waiting for the user to speak
    digitalWrite(led_1, 1);
    digitalWrite(led_2, 0);
    digitalWrite(led_3, 0);
    delay(2100);

    Serial.println("\r\nRecord start!\r\n");
    Audio* audio = new Audio(ADMP441);   // I2S MEMS microphone
    audio->Record();
    Serial.println("Processing your Audio File");

    // Blue LED on while the recording is transcribed
    digitalWrite(led_1, 0);
    digitalWrite(led_2, 1);
    digitalWrite(led_3, 0);

    CloudSpeechClient* cloudSpeechClient = new CloudSpeechClient(USE_APIKEY);
    cloudSpeechClient->Transcribe(audio);
    delete cloudSpeechClient;
    delete audio;
    i = 0;
  }

  if (digitalRead(button) == 1)
  {
    delay(1);
  }
}

/*
  This is the code for Text-to-Speech conversion of the string response
  coming from the other ESP32 board.

  To use this code successfully:

  1. Downgrade your ESP32 boards package to version 1.0.6.
  2. Download and install the Audio.h library mentioned in this code, and
     remove any other Audio.h libraries you may have installed.
  3. Add your credentials (WiFi SSID name and password) below so that this
     project can connect to the Internet.
  4. Tested with Arduino IDE version 1.8.19.

  If you are still facing any issues, kindly watch our video about this
  project on our YouTube channel.
  YouTube channel - https://www.youtube.com/techiesms
*/

#include "Arduino.h"
#include "WiFi.h"
#include "Audio.h" // Download this Library ->
https://github.com/schreibfaul1/ESP32-audioI2S

#define uart_en 15
#define RXp2 16
#define TXp2 17
#define I2S_DOUT 25
#define I2S_BCLK 27
#define I2S_LRC 26

// Your WiFi Credentials


const char *ssid = "SSID"; // WiFi SSID Name
const char *password = "PASS";// WiFi Password ( Keep it blank if your WiFi
router is open )

38
Audio audio;

void setup()
{

Serial.begin(115200);
Serial2.begin(115200,SERIAL_8N1, RXp2, TXp2);

WiFi.disconnect();
WiFi.mode(WIFI_STA);
WiFi.begin( ssid, password);

while (WiFi.status() != WL_CONNECTED)


delay(1500);

audio.setPinout(I2S_BCLK, I2S_LRC, I2S_DOUT);


audio.setVolume(100);
audio.connecttospeech("Starting", "en"); // Google TTS
}

void loop()
{
  if (Serial2.available()) {
    String Answer = Serial2.readString();

    // Split the answer into chunks and send each chunk to connecttospeech
    size_t chunkSize = 80; // Define chunk size (adjust if necessary)
    for (size_t i = 0; i < Answer.length(); i += chunkSize) {
      String chunk = Answer.substring(i, (i + chunkSize));
      Serial.println(chunk);
      audio.connecttospeech(chunk.c_str(), "en");

      // Block until the current chunk has finished playing
      while (audio.isRunning()) {
        audio.loop();
      }
    }
  }
  audio.loop();
}

// Optional status callback invoked by the ESP32-audioI2S library
void audio_info(const char *info) {
  Serial.print("audio_info: ");
  Serial.println(info);
}

7. SOFTWARE DESCRIPTION

The Software System for the "Design of Hardware ChatGPT with Voice Assistance
and AI–IoT for Visually Impaired" integrates multiple layers of technologies to
ensure seamless functionality, from sensor data processing to voice interaction and
cloud communication. The software system is divided into several components,
each responsible for specific functions such as sensor data acquisition, AI
processing, voice recognition, and cloud connectivity. Below, we provide a
detailed description of the software components and how they work together to
deliver a fully integrated system for visually impaired users.

1. Embedded System Software (ESP32 Firmware)
The ESP32 microcontroller serves as the heart of the project, controlling the sensor
inputs, processing the data, and managing communication with other devices (such
as cloud services and smartphones). The software running on the ESP32 is
developed in Arduino IDE, using the ESP32 Arduino core, which supports the
various hardware features of the microcontroller. The following key software
components are implemented in the ESP32 firmware:
a. Sensor Data Acquisition
The software collects data from various sensors, including:
 Ultrasonic sensors for obstacle detection: the ESP32 measures the distance to objects in the user's path and triggers alerts if an obstacle is detected within a certain range.
 GPS module for location tracking: the software periodically retrieves the user's current coordinates, enabling location-based services such as route guidance and emergency alerts.
 Environmental sensors (optional), such as temperature, humidity, and light sensors, which provide context about the user's environment.
This data is continuously monitored and processed to provide real-time feedback.
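As a minimal sketch of the ultrasonic acquisition step, the snippet below reads an HC-SR04-style sensor; the trigger and echo pin numbers are illustrative assumptions rather than the project's actual wiring, and both pins are expected to have been configured with pinMode() in setup().

#define TRIG_PIN 5   // assumed trigger pin (configure as OUTPUT in setup)
#define ECHO_PIN 18  // assumed echo pin (configure as INPUT in setup)

// Return the distance to the nearest obstacle in centimetres (0 on timeout).
long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);  // 10 us trigger pulse starts a measurement
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // echo width, 30 ms timeout
  return duration / 58;          // roughly 58 us of echo per centimetre
}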
b. Voice Command Processing
For the voice assistant functionality, the ESP32 either uses offline speech recognition libraries or connects to cloud-based speech processing services:
 Offline voice recognition: the ESP32 can utilize small, lightweight models for recognizing simple commands (e.g., "What is around me?" or "Send location alert").
 Cloud-based speech recognition: more advanced commands and conversational capabilities are processed using services like Google Speech-to-Text or Amazon Alexa, which send the voice input to a cloud server and return text or action-based responses to the ESP32.

c. Obstacle Avoidance and Safety Alerts
When an obstacle is detected by the ultrasonic sensors, the ESP32 immediately triggers safety mechanisms, such as:
 Voice feedback to inform the user of the proximity and type of obstacle (e.g., "There is a wall ahead").
 Vibration feedback through motors to alert the user of nearby obstacles in their immediate path.
 Emergency notification: if a critical obstacle is detected or a fall is detected (through additional sensors), the system sends an SMS alert or push notification to caregivers or emergency contacts via an SMS API or IoT cloud platforms.
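A hedged sketch of this decision logic is shown below. The vibration-motor pin and the 1 m alert threshold are illustrative assumptions; the spoken alert reuses the Serial2 link to the TTS board from the program listing above.

#define VIBRATION_PIN 13  // assumed wiring for the vibration motor driver

// Map a distance reading to the feedback channels described above.
void reactToObstacle(long distanceCm) {
  if (distanceCm > 0 && distanceCm < 100) {  // obstacle within roughly 1 m
    digitalWrite(VIBRATION_PIN, HIGH);       // immediate haptic warning
    Serial2.println("Obstacle ahead, about " + String(distanceCm) + " centimeters");
  } else {
    digitalWrite(VIBRATION_PIN, LOW);        // path clear: stop vibrating
  }
}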

d. Communication with IoT Cloud Platforms
The ESP32 uses its Wi-Fi and Bluetooth capabilities to connect to IoT platforms such as ThingSpeak, AWS IoT Core, or Google Cloud IoT. The following features are supported:
 Real-time monitoring: the system logs data such as location, sensor readings, and status updates to a cloud platform.
 Emergency alerts: in case of an emergency, the device sends the user's location to a caregiver via SMS or an app.
 Data logging: user interaction data and environmental data can be stored for future analysis, improving the device's performance over time.
 Remote updates: firmware or software updates can be deployed remotely to improve system functionality.
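As a minimal sketch of the real-time monitoring path, the snippet below pushes one obstacle-distance sample to ThingSpeak over its public HTTP update API; the write API key is a placeholder, and mapping the reading to field1 is an assumption about the channel layout.

#include <WiFi.h>
#include <HTTPClient.h>

const char* THINGSPEAK_KEY = "YOUR_WRITE_API_KEY";  // placeholder channel key

// Log one distance sample to ThingSpeak field1.
void logDistance(long distanceCm) {
  if (WiFi.status() != WL_CONNECTED) return;  // offline mode: skip cloud logging
  HTTPClient http;
  String url = "http://api.thingspeak.com/update?api_key=" +
               String(THINGSPEAK_KEY) + "&field1=" + String(distanceCm);
  http.begin(url);
  http.GET();  // ThingSpeak replies with the new entry number
  http.end();
}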

2. Natural Language Processing (NLP) and Voice Assistance
The AI-based voice assistance software is integral to this project, enabling natural
language interaction and assistance for the visually impaired user. The NLP system
is powered by ChatGPT or other AI models, which are designed to handle
conversational commands and queries. The following steps outline how voice
commands are processed:
a. Speech-to-Text
The first step in voice interaction is speech recognition. The user’s voice input is
converted into text using either offline or cloud-based speech-to-text models.
Offline models may include frameworks like Vosk or PocketSphinx, while cloud
models can leverage services like Google Speech API or Amazon Transcribe. The
speech-to-text engine provides the text that is then processed by the AI system.
b. Natural Language Understanding (NLU)
Once the speech is converted to text, the next step is natural language
understanding. This component identifies the intent behind the command or query,
whether it’s a request for navigation, obstacle detection, location updates, or
general information. The ChatGPT API or a customized NLP model processes the
input to generate an appropriate response.
For example, if the user says, “What’s around me?”, the AI will respond with
information gathered from the sensors, such as “There is a wall 1 meter ahead,” or
“You are near a bus stop.”
c. Text-to-Speech (TTS)
Once the AI system processes the query or command, the result is then converted
back to speech using Text-to-Speech (TTS) technology. The response is spoken
aloud to the user through an onboard speaker or earphones. The system uses TTS
services such as eSpeak or Google Text-to-Speech (for cloud-based options) to
provide clear and natural responses.

3. User Interface and Mobile App Integration
While the core software of the device operates on the ESP32, an optional mobile
app can be developed for additional features, such as real-time monitoring, settings
management, and emergency alert handling. The app can be built for both Android
and iOS platforms, integrating with the ESP32 via Bluetooth or Wi-Fi. The app
provides a Graphical User Interface (GUI) for caregivers or family members to:
 Monitor the user's location and sensor data.
 Receive push notifications for emergencies.
 Configure settings for the wearable device (e.g., sensitivity of obstacle detection).
Additionally, the app can provide location history and analytics for improving the
assistance over time based on user behavior patterns.

4. System Integration
All software components work in concert to deliver a seamless experience for the
user:
 Real-time data processing occurs on the ESP32, ensuring that obstacle detection, voice interaction, and IoT communication are handled locally or via cloud-based services as needed.
 The cloud integration ensures that data is logged and shared in real time with emergency contacts or caregivers, enabling remote monitoring.
 AI-powered voice assistance provides users with interactive feedback, navigation aids, and real-time object identification, ensuring that the device offers dynamic, personalized support.

In summary, the software system is designed to create an intuitive, responsive, and efficient platform for visually impaired individuals, combining AI, voice interaction, IoT communication, and real-time environmental awareness into a single integrated solution. The system allows users to interact with the world around them in a natural and hands-free manner, ensuring that they are safe, informed, and connected.

8. APPLICATIONS

1. Assistance for Visually Impaired Individuals
 Provides real-time voice-based guidance for visually impaired people to
navigate safely indoors and outdoors.
 Detects obstacles in the path and alerts the user through voice or vibration
feedback.
 Enables users to interact conversationally with the environment through AI-powered voice assistance.

2. Smart Wearable Devices
 Can be integrated into wearable devices like smart glasses, belts, or
wristbands for easy and portable assistance.
 Supports hands-free operation, making it ideal for daily use without physical
interaction with the device.

3. Emergency Response Systems
 Sends automatic alerts and live location to guardians or caregivers if the user
encounters obstacles, falls, or enters an unsafe environment.
 Provides fast response time in emergencies, improving user safety and
confidence in traveling independently.

4. Personal Mobility Aids
 Acts as a smart mobility assistant when integrated with walking canes,
wheelchairs, or personal navigation devices.
 Enhances navigation in complex environments like malls, airports, bus
stations, and city streets.

5. Smart Cities and Infrastructure
 Can be used in smart city projects to assist in creating inclusive
environments for people with disabilities.
 Helps to make public spaces, buildings, and transportation systems more
accessible.

6. Healthcare Monitoring
 With additional health sensors (like heart rate, temperature, or fall detection), the system can monitor the user's vital signs and alert medical personnel if necessary.
 Reduces dependency on hospital visits by providing real-time health
monitoring at home.

7. Educational Support
 Can assist visually impaired students by reading out study material,
recognizing text, and guiding them in academic environments like
classrooms, libraries, or campuses.
 Supports voice-based learning modules and information access.

8. Daily Life Assistance
 Helps in daily tasks like identifying locations, getting weather updates,
scheduling reminders, reading signs or notices in public places, etc.
 Improves the quality of life by providing independence in personal and
social activities.

9. AI-Based Navigation System
 When coupled with machine learning and AI vision systems, it can identify
and classify objects, vehicles, stairs, doors, and other important landmarks.
 Improves the accuracy of navigation and provides more descriptive feedback
to users.

10. IoT-Based Home Automation
 Integrates with smart home devices (lights, fans, door locks, alarms) so users
can control their home environment via voice commands.
 Allows visually impaired individuals to live more independently and
comfortably.

Thus, the applications of this system span various fields, making life easier, safer, and more independent for visually impaired users. It leverages the power of AI, IoT, and embedded technology to create a truly smart assistive device that brings practical benefits to both individuals and society as a whole.

9. FUTURE SCOPE

The future of IoT in industrial automation and security holds immense potential as it
continues to evolve and transform industries globally. With advancements in IoT
devices, sensor technologies, machine learning, artificial intelligence, and cloud
computing, the applications of IoT will further extend into more advanced and
intelligent systems that enhance operational efficiency, safety, and sustainability.
Below are several key areas where IoT will continue to have a significant impact in
the future:

1. Advanced Predictive Maintenance
One of the most promising future developments for IoT in industrial automation is
the advancement of predictive maintenance systems. While current IoT solutions
monitor the real-time health of machines, future systems will integrate AI-driven
predictive analytics to not only predict failures but also offer actionable insights into
the causes of wear and tear. By analyzing historical data and combining it with real-
time sensor information, these systems will be able to predict the exact timing for
maintenance tasks, optimize service schedules, and prevent costly breakdowns before
they occur.
Furthermore, machine learning algorithms will improve over time, allowing IoT
systems to continuously learn and enhance predictive models. This will lead to even
higher accuracy and more efficient use of resources, reducing downtime and
extending the life of critical machinery.

2. Autonomous Industrial Operations
The future of IoT will include autonomous industrial operations, where IoT
devices, robots, and machinery work together in a completely automated system. For
example, in manufacturing and warehouse environments, IoT-connected robots,
conveyor belts, and drones could work seamlessly without human intervention.
These systems would communicate in real time, adjusting to changes in demand,
detecting and responding to faults autonomously, and performing tasks such as
sorting, assembling, and packaging products.
AI-powered robots will collaborate with IoT devices to execute complex tasks more
efficiently, and self-healing systems will be able to detect issues and fix them
autonomously, reducing the need for human oversight and intervention. These
advancements will not only improve productivity but also allow for more flexible,
agile manufacturing systems capable of adjusting to changing market demands.

3. Enhanced Worker Safety with Wearables
The future of worker safety will see an increasing reliance on wearable IoT devices.
These devices will monitor various health parameters such as heart rate, body
temperature, posture, and fatigue levels, providing real-time data on worker well-being. Advanced wearables will be capable of predicting health hazards, such as
overheating, dehydration, or stress, and can send immediate alerts to supervisors or
medical teams, ensuring rapid intervention.
In hazardous environments like chemical plants or mines, wearables could also detect
harmful gases, excessive noise levels, or dangerous vibrations, providing another
layer of security. Over time, these devices will become even more advanced,
integrating more sensors and offering more accurate predictions to prevent accidents
or health issues before they happen.

4. Integration of 5G for Real-Time, High-Speed Communication
The rollout of 5G networks will significantly enhance the capabilities of IoT in
industrial settings. With low latency and high-speed communication, 5G will allow
IoT devices to communicate in real-time with minimal delay, enabling faster
decision-making and more responsive systems. For instance, in a factory with
numerous sensors monitoring machinery and processes, 5G will enable real-time
feedback and control, leading to quicker adjustments and more optimized production
cycles.

Additionally, 5G will facilitate the implementation of smart factories, where
machines, robots, sensors, and workers are all interconnected in a real-time
ecosystem, allowing for more efficient and flexible production lines. Enhanced
communication networks will support the massive number of IoT devices expected to
be deployed, enabling the seamless integration of different systems and technologies
across industries.

5. Smart Supply Chain Management
IoT will revolutionize supply chain management by creating smart supply chains
that use real-time data to track and manage goods from the moment they are
produced to when they reach the consumer. RFID tags, GPS tracking, and IoT
sensors will be used to monitor the condition and location of goods in transit,
providing accurate, real-time visibility into the entire supply chain.
The future of smart supply chains will involve predictive analytics to forecast
supply and demand, reducing waste and ensuring that resources are allocated
efficiently. IoT-enabled systems will be able to optimize inventory levels,
automatically reorder supplies, and reroute shipments in real time based on
unexpected delays or changes in demand. This will lead to cost reductions, faster
delivery times, and a more sustainable supply chain.

6. Blockchain for IoT Security and Data Integrity
As IoT devices proliferate, concerns about security and data privacy will become
increasingly important. In the future, blockchain technology may play a crucial role
in ensuring the security and integrity of data transmitted by IoT devices.
Blockchain's decentralized and immutable nature will make it an ideal solution for
ensuring that data collected from sensors and devices cannot be tampered with,
providing a secure and transparent record of all actions and data exchanges.
For industrial applications, blockchain-based smart contracts can automate and
secure transactions, such as supply chain activities, equipment leasing, and other
contractual agreements. This will enhance trust between parties, reduce fraud, and
streamline operations in industries that rely on secure data exchanges.

7. AI and Machine Learning for IoT Analytics
In the future, IoT systems will be heavily integrated with artificial intelligence (AI)
and machine learning (ML) technologies. These systems will process and analyze
the vast amounts of data generated by IoT devices, uncovering patterns and insights
that were previously difficult to detect. AI algorithms will be able to make real-time
decisions, autonomously adjusting processes and operations to optimize
performance, reduce waste, and improve efficiency.
For example, in a factory setting, AI could optimize energy usage, predict machine
failure, or dynamically adjust production schedules based on market demand, all in
real time. Similarly, in industrial security, AI-powered IoT systems will detect
anomalies and threats, allowing for quicker identification of risks such as equipment
malfunctions or security breaches.

8. Sustainable and Eco-Friendly Solutions
As industries face growing pressure to reduce their carbon footprint and operate more
sustainably, IoT will play a key role in driving green initiatives. IoT sensors can
monitor energy consumption, water usage, and waste generation, providing real-time feedback that can help optimize resource use and reduce environmental impact.
Smart grids will use IoT to balance energy loads more effectively, while smart
water management systems will help industries minimize water usage.
In the future, IoT will enable circular economy models, where materials and
products are reused, refurbished, or recycled, minimizing waste and reducing the
demand for raw materials. Advanced IoT systems will play a central role in tracking
resources throughout their lifecycle, ensuring that industries can meet their
sustainability goals while continuing to operate efficiently.
The future scope of IoT in industrial automation and security is expansive and
promising. As technologies like AI, 5G, blockchain, and advanced sensors continue
to evolve, IoT systems will become more autonomous, intelligent, and efficient. The
integration of IoT in industrial settings will lead to smarter factories, safer working
environments, and more sustainable industrial practices. As industries embrace these
innovations, IoT will remain a cornerstone in the future of industrial automation,
transforming the way businesses operate and interact with their environments.

10. CONCLUSION

The project titled “Design of Hardware ChatGPT with Voice Assistance and
AI–IoT for Visually Impaired” presents a significant step forward in the
development of intelligent, assistive technologies for individuals with visual
impairments. Through the integration of AI, IoT, embedded systems, and
voice-based communication, this system provides real-time, interactive support
that enhances user mobility, independence, and confidence.
By using the ESP32 microcontroller, voice recognition, and ChatGPT-based
conversational AI, the system is capable of offering natural, human-like
interactions. It allows users to receive auditory guidance, detect obstacles,
request information, and control IoT devices seamlessly. The use of sensors and
real-time data processing ensures that users are kept safe from immediate
environmental hazards, while the IoT capabilities enable communication with
caregivers or emergency services if needed.
The project's hardware design is compact, portable, and adaptable, making it
suitable for daily wear in different environments, from homes to public spaces.
The integration of smart features like obstacle detection, voice feedback, and
remote monitoring provides a complete solution addressing both safety and
convenience.
Furthermore, the future scope of the system is immense, with opportunities to
integrate advanced AI vision, GPS navigation, health monitoring, machine
learning personalization, and smart home control. These enhancements would
elevate the system from an assistive tool to a comprehensive lifestyle support
device.
In conclusion, the Hardware ChatGPT with Voice Assistance and AI–IoT for
Visually Impaired project not only addresses a crucial societal need but also
demonstrates how modern technology can bridge the accessibility gap. It
empowers visually impaired individuals to lead more independent, safer, and
fuller lives, showcasing the transformative potential of AI and IoT in creating
an inclusive world.

11. REFERENCES

1. M. H. Yassin, M. A. Azis, "Obstacle Detection and Voice Alert System for Blind People," IEEE Transactions on Consumer Electronics, Vol. 63, No. 2, pp. 150-157, 2019.
2. S. Kanagaraj, R. P. Subash, "An Intelligent Walking Stick for Blind People
Using Raspberry Pi," International Journal of Engineering and Technology
(IJET), Vol. 7, No. 4, 2020.
3. K. P. Suresh, T. Laxmi, "IoT Based Smart Guidance System for Visually
Impaired," International Journal of Advanced Research in Computer and
Communication Engineering (IJARCCE), Vol. 7, Issue 6, June 2018.
4. G. Sainarayan, D. Kumar, "Voice Controlled Smart Stick for Visually
Impaired," International Journal of Recent Technology and Engineering
(IJRTE), Vol. 8, Issue 5, January 2020.
5. D. B. Suresh, "Implementation of AI Based Virtual Assistant for the Visually
Challenged," IEEE International Conference on Artificial Intelligence
Trends (ICAIT), 2021.
6. Espressif Systems, "ESP32 Series Datasheet," Version 3.4, [Online].
Available: https://www.espressif.com/en/products/socs/esp32/resources
7. OpenAI, "ChatGPT: Optimizing Language Models for Dialogue," [Online].
Available: https://openai.com/research/chatgpt
8. R. Patel, V. Shah, "Application of IoT and AI for Disabled Persons: A
Review," International Journal of Engineering Research & Technology
(IJERT), Vol. 9, Issue 7, July 2020.
9. S. Arunkumar, P. Soundarapandian, "Smart Assistive Device for Blind
People Using Arduino," International Journal of Engineering and
Technology Innovation, Vol. 10, No. 4, 2020.
10. M. F. Iqbal, A. Naeem, "Machine Learning for Assistive Technology for the Disabled," Procedia Computer Science, Vol. 175, 2020.
11. A. V. Spagnoletti, G. Rescigno, "Wearable Technologies for Assistance to Visually Impaired People," IEEE Sensors Journal, Vol. 20, No. 15, pp. 8778-8785, 2020.
12. H. R. Sheikh, M. F. Shaikh, "Real-Time Object Detection for the Blind Using Deep Learning," IEEE Access, Vol. 8, pp. 24845-24853, 2020.
13. J. R. Vergara, P. A. Estévez, "AI-Powered Speech-to-Text Systems for Visually Impaired," Artificial Intelligence Review, Vol. 53, 2020.
14. R. Shrivastava, P. Sharma, "Integration of IoT Devices for Smart Cities: A Disability Perspective," IEEE Communications Surveys & Tutorials, Vol. 21, No. 4, pp. 3027-3050, 2020.
15. Arduino, "Arduino IDE Official Documentation," [Online]. Available: https://www.arduino.cc/en/software
16. Yan, X., Zhang, Z., & Chen, J. (2019). Intelligent Industrial IoT: Architectures, Technologies, and Applications. IEEE Access, 7, 10424-10441. doi:10.1109/ACCESS.2019.2895564.
17. Mou, Y., & Zhao, S. (2018). IoT-Based Real-Time Industrial Monitoring and Control System. Proceedings of the 2018 International Conference on Electronics, Communications and Control Engineering.
18. Tao, F., Zhang, M., & Xu, L. (2018). IoT-based Smart Manufacturing Systems. International Journal of Advanced Manufacturing Technology, 97(5), 1-14. doi:10.1007/s00170-018-2294-0.
19. Liu, Y., & Li, W. (2019). Intelligent Industrial Automation and Security Based on IoT Technologies. International Journal of Automation and Computing, 16(2), 224-235. doi:10.1007/s11633-019-1167-3.
20. Khare, R., & Singh, R. (2020). Industrial IoT Security: Challenges, Solutions, and Future Directions. Journal of Sensors, 2020, 1-17. doi:10.1155/2020/1679168.
21. Chien, C. F., & Chen, Y. W. (2018). A Survey of IoT Technologies in Industrial Automation. Journal of Industrial Integration and Management, 3(3), 301-314. doi:10.1142/S2424906418400137.
22. Zhang, X., & Lin, C. (2020). IoT-Enabled Smart Factory: Challenges, Applications, and Future Directions. Journal of Manufacturing Science and Engineering, 142(2), 021014. doi:10.1115/1.4045880.
23. Samaras, S., & Dooley, L. (2018). Challenges and Opportunities in IoT for Smart Cities and Industries. International Journal of Internet of Things and Cyber-Assurance, 1(1), 1-10.
24. Mahmoud, M. M., & Elmogy, M. (2017). Security Challenges in Industrial IoT Applications. The 13th International Symposium on Autonomous Decentralized Systems (ISADS). doi:10.1109/ISADS.2017.52.
25. Islam, S. R., & Kwak, D. (2015). The Internet of Things for Smart Cities: Applications, Opportunities, and Challenges. IEEE Access, 3, 1886-1899. doi:10.1109/ACCESS.2015.2440935.
26. Das, S., & Banerjee, M. (2021). Real-Time Industrial Automation System Based on IoT. Proceedings of the 2021 IEEE International Conference on Smart Devices, Circuits, and Systems.
