EMOTION DETECTION OF AUTISTIC CHILDREN USING

IMAGE PROCESSING

ABSTRACT
Facial emotion detection is an approach to detecting human emotions through facial expressions. Autism Spectrum Disorder is a complex neurobehavioral disorder. Autistic people show restricted, repetitive behavior and are reluctant to engage in social communication, and people with this syndrome have problems with emotion recognition. We therefore detect the emotions of autistic children from their facial expressions. The system works on four emotions: sad, happy, neutral, and angry. Emotion detection of the autistic child is performed with image processing, and the system also provides a way to manage the child's mood.
CHAPTER 1

1.1 INTRODUCTION

Detection of emotion has been a difficult area for researchers for a very long time. We express our feelings with facial expressions. When we interact with others, our expressions convey essential signs such as our level of interest, our willingness to participate in speaking, and our readiness to respond. This helps in improving social communication.

However, there are problems in detecting the emotions of people with autism. Autism spectrum disorder is a neurodevelopmental disorder characterized by difficulties with social communication and interaction. The core of these social difficulties is the identification of one's own feelings and the feelings of other people. As indicated by the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, autism spectrum disorder is characterized by a dyad of deficits:
1) Impairments in social interaction and communication
2) Restricted, repetitive patterns of behavior, interests or activities.
Autism spectrum disorder was described in the 1940s by Leo Kanner and Hans Asperger. According to Kanner, autism is "an example of congenital autistic disturbances of affective contact". It is clearly established that emotion-processing challenges are part of autism spectrum disorder.
The rest of the article is organized as follows.
In section 2, the emotion detection system is introduced. In section 3, the local
binary pattern is introduced and how it is used in feature extraction. In section 4,
techniques for classifying emotions are introduced. In Section 5, local binary
pattern, support vector machine and neural network are implemented. Section 6
contains the concluding remarks.
1.2 EXISTING SYSTEM

The existing system only detects the emotion of autistic children, i.e., whether they are sad, happy, neutral or angry. Merely knowing their mood is of little use on its own.

1.2.1 DISADVANTAGES OF EXISTING SYSTEM


 Only the autistic children's emotions are detected.

 There is no mechanism to change the child's mood.

1.3 PROPOSED SYSTEM


The proposed system detects the autistic children's emotions, for example happy, sad, neutral, anger, surprise, hatred and fear. Image processing is a rapidly evolving field within computer engineering; its development has been influenced by technological advances in digital imaging, personal computer processors and mass storage devices. We also add an audio signal through a voice board: if the child is angry, melodic music is played in the background. Colored LEDs, which have proven helpful for autistic children, are used as well.

1.3.1 ADVANTAGES OF PROPOSED SYSTEM


 By adding an audio signal and a voice board with speakers, the child can be gently calmed with music.

 The colored LED lights also help to handle the autistic children's emotions. For example, if the child is angry, a blue LED light helps them recover from that mood.


CHAPTER 2

2.1 BLOCK DIAGRAM


2.2 HARDWARE REQUIREMENT

• ARDUINO UNO
• LCD

• LED

• UART

• PC

• VOICE BOARD

• SPEAKERS

2.3 SOFTWARE REQUIREMENTS

• ARDUINO IDE

• EMBEDDED C
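As the hardware list above indicates (PC, UART, Arduino UNO), the PC that runs the MATLAB image-processing code hands the detected emotion to the Arduino over a serial (UART) link so the board can drive the LCD, LED and voice board. The following is only a minimal sketch of that hand-over on the MATLAB side; the port name "COM3", the 9600 baud rate and the plain-text message format are illustrative assumptions, not details taken from this report.

% Send the detected emotion label to the Arduino UNO over UART (sketch only).
device = serialport("COM3", 9600);   % hypothetical port name and baud rate
configureTerminator(device, "LF");   % newline-terminated messages

emotion = "ANGRY";                   % label produced by the emotion classifier
writeline(device, emotion);          % the Arduino sketch parses this and reacts

clear device                         % release the serial port

On the Arduino side, the Embedded C sketch would read the same line from its hardware serial port and switch the LED colour and voice-board message accordingly.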
CHAPTER 3
MODULE DESCRIPTION
HARDWARE REQUIREMENT
3.1 ARDUINO UNO

The Arduino UNO is based on an AVR (Advanced Virtual RISC) microcontroller. It is an 8-bit device and has 32 KB of internal built-in flash memory. The microcontroller has many other characteristics as well. It has 1 KB of Electrically Erasable Programmable Read Only Memory (EEPROM), which means that even if the electric supply to the microcontroller is removed, it can still store data and provide results once power is restored. Moreover, it has 2 KB of Static Random Access Memory (SRAM). Other characteristics are explained later. The Arduino has several features which make it one of the most popular boards on today's market.

These features include the advanced RISC architecture, good performance, low power consumption, a real-time counter with a separate oscillator, 6 PWM pins, a programmable serial USART, a programming lock for software security, and throughput up to 20 MIPS. The Arduino UNO uses the ATmega328; further details about it are given later in this section.

 ATmega328 is an 8-bit, 28-pin AVR microcontroller manufactured by Microchip. It follows the RISC architecture and has a flash-type program memory of 32 KB.

 It has an EEPROM of 1 KB and an SRAM of 2 KB.

 It has 8 ADC channels; the analog inputs are multiplexed onto Port C (PC0 - PC5), with two additional channels available in the 32-pin packages.

 It has 3 built-in timers: two 8-bit timers and one 16-bit timer.

 The Arduino UNO is based on the ATmega328 microcontroller; the ATmega328 is the heart of the UNO board.

 It operates over a supply range of 1.8 V to 5.5 V, but 5 V is normally used as the standard.

 Its notable features include cost efficiency, low power dissipation, a programming lock for security purposes, and a real-time counter with a separate oscillator.

 It is normally used in embedded-systems applications, such as robotics, automation and security systems, all of which can be designed using this microcontroller.

 The following table shows the complete features of the ATmega328:

ATmega328 Features

No. of Pins: 28
CPU: 8-bit AVR (RISC)
Operating Voltage: 1.8 to 5.5 V
Program Memory: 32 KB
Program Memory Type: Flash
SRAM: 2048 bytes
EEPROM: 1024 bytes
ADC: 10-bit
Number of ADC Channels: 8
PWM Pins: 6
Comparator: 1
Packages (4): 28-pin PDIP, 32-lead TQFP, 28-pad QFN/MLF, 32-pad QFN/MLF
Oscillator: up to 20 MHz
Timers (3): 8-bit x 2, 16-bit x 1
Enhanced Power-on Reset: Yes
Power-up Timer: Yes
I/O Pins: 23
Manufacturer: Microchip
SPI: Yes
I2C: Yes
Watchdog Timer: Yes
Brown-out Detect (BOD): Yes
Reset: Yes
USI (Universal Serial Interface): Yes
Operating Temperature: -40 °C to +85 °C
3.1.1 Pin diagram

ATmega-328 is an AVR microcontroller having twenty-eight (28) pins in total.

 The pin-out diagram shows the configuration of the pins of an electronic device, so before working on any engineering project you should first read the component's pin-out.

 The Arduino pin-out diagram is shown in the figure given below.

 The functions associated with the pins must be known in order to use the device appropriately.

 The Arduino pins are divided into different ports, which are described in detail below.

VCC is the digital supply voltage pin. AVCC is the supply voltage pin for the analog-to-digital converter. GND denotes ground (0 V).

Port B consists of the pins PB0 to PB7. It is an 8-bit bidirectional I/O port with internal pull-up resistors.

Port C consists of the pins PC0 to PC6. Pins PC0 to PC5 serve as analog inputs to the analog-to-digital converter; when the ADC is not used, they act as general-purpose bidirectional I/O pins. The output buffers of Port C have symmetrical drive characteristics with both high sink and source capability.

Port D consists of the pins PD0 to PD7. It is also an 8-bit bidirectional I/O port with internal pull-up resistors.

3.1.2 ARDUINO Architecture

 The architecture of a device presents all the information about that particular device.

 The Arduino architecture is shown in the figure given below.

3.1.3 ARDUINO Memory

 The Arduino has three types of memory: flash, SRAM and EEPROM.

 The capacity of each memory is explained in detail below.

Flash memory has a capacity of 32 KB (a 15-bit address space). It is programmable read-only memory (ROM) and is non-volatile. SRAM stands for Static Random Access Memory; it is a volatile memory, i.e. its data is lost once the power supply is removed. EEPROM stands for Electrically Erasable Programmable Read Only Memory; it retains its data for the long term.

3.1.4 ARDUINO Registers

 The Arduino's ATmega328 has thirty-two (32) general-purpose (GP) registers.

 All of these registers are part of the Static Random Access Memory (SRAM) data space.

 ATmega328 Packages

 Different versions of the same device are denoted by different packages of that device.

 Each package has different dimensions, in order to differentiate them easily.

 The Arduino packages are given in the table shown in the figure below.
3.1.5 ARDUINO Block Diagram

 Block diagram shows the internal circuitry and the flow of the program of
any device.

 Arduino block diagram is shown in the figure given below.

3.1.6 ARDUINO Features

 A device is selected for a task on the basis of its features, i.e. whether its features can produce the desired results or not.

 Some of the main features of the AVR microcontroller ATmega328 are shown in the table given in the figure below.
3.1.7 Applications

 A complete package including the Arduino can be used in several different real-life applications.
 It can be used in embedded-systems projects.
 It can also be used in robotics.
 Quad-copters and even small aeroplanes can be designed with it.
 Power monitoring and management systems can also be built using this device.
 Home security systems can also be designed around the Arduino.

3.1.8 How to start working on Atmega328

 If you want to start working with this microcontroller, the easiest way is to do so through the Arduino.
 The benefit of using the Arduino is that you get to use all of its built-in libraries, which make the work a lot easier.
 After designing your project around the Arduino, design the basic circuit of the Arduino board, which is quite simple and is discussed above.
 Be careful while using its pins; the ATmega328 and Arduino pins are discussed above.
 Before working on hardware, you should first design a Proteus simulation of the project.
 Download the Arduino library for Proteus and then design your project in it.
 Once you have confirmed that everything is correct, build the circuit on a Vero board or a PCB (Printed Circuit Board).

3.2 MATLAB

MATLAB (an abbreviation of "matrix laboratory") is a proprietary multi-paradigm programming language and numeric computing environment developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages.

Although MATLAB is intended primarily for numeric computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing abilities. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems.

As of 2020, MATLAB has more than 4 million users worldwide. MATLAB users
come from various backgrounds of engineering, science, and economics.
3.2.1 Origins

MATLAB was invented by mathematician and computer programmer Cleve Moler. The idea for MATLAB was based on his 1960s PhD thesis. Moler became a math professor at the University of New Mexico and started developing MATLAB for his students as a hobby. He developed MATLAB's initial linear algebra programming in 1967 with his one-time thesis advisor, George Forsythe. This was followed by Fortran code for linear equations in 1971.

The first early version of MATLAB was completed in the late 1970s. The
software was disclosed to the public for the first time in February 1979 at
the Naval Postgraduate School in California. Early versions of MATLAB were
simple matrix calculators with 71 pre-built functions. At the time, MATLAB was
distributed for free to universities. Moler would leave copies at universities he
visited and the software developed a strong following in the math departments of
university campuses.

In the 1980s, Cleve Moler met John N. Little. They decided to reprogram MATLAB in C and market it for the IBM desktops that were replacing mainframe computers at the time. John Little and programmer Steve Bangert re-programmed MATLAB in C, created the MATLAB programming language, and developed features for toolboxes.

3.2.2 Commercial development

MATLAB was first released as a commercial product in 1984 at the Automatic Control Conference in Las Vegas. MathWorks, Inc. was founded to develop the software, and the MATLAB programming language was released.[23] The first MATLAB sale was the following year, when Nick Trefethen from the Massachusetts Institute of Technology bought ten copies.

By the end of the 1980s, several hundred copies of MATLAB had been sold to universities for student use. The software was popularized largely thanks to toolboxes created by experts in various fields for performing specialized mathematical tasks. Many of the toolboxes were developed by Stanford students who used MATLAB in academia and then brought the software with them to the private sector.

Over time, MATLAB was re-written for early operating systems created by Digital Equipment Corporation, VAX, Sun Microsystems, and for Unix PCs. Version 3 was released in 1987. The first MATLAB compiler was developed by Stephen C. Johnson in the 1990s.

In 2000, MathWorks added a Fortran-based library for linear algebra in MATLAB 6, replacing the software's original LINPACK and EISPACK subroutines that were in C. MATLAB's Parallel Computing Toolbox was released at the 2004 Supercomputing Conference, and support for graphics processing units (GPUs) was added to it in 2010.

3.2.3 Recent history

Some especially large changes to the software were made with version 8 in
2012. The user interface was reworked and Simulink's functionality was
expanded. By 2016, MATLAB had introduced several technical and user interface
improvements, including the MATLAB Live Editor notebook, and other features.

3.2.4 Syntax

The MATLAB application is built around the MATLAB programming language. Common usage of the MATLAB application involves using the "Command Window" as an interactive mathematical shell or executing text files containing MATLAB code.
3.2.5 Variables

Variables are defined using the assignment operator, =. MATLAB is a weakly typed programming language because types are implicitly converted. It is an inferred typed language because variables can be assigned without declaring their type, except if they are to be treated as symbolic objects, and their type can change. For example:

>> x = 17
x=
17

>> x = 'hat'
x=
hat

>> x = [3*4, pi/2]


x=
12.0000 1.5708

>> y = 3*sin(x)
y=
-1.6097 3.0000

3.2.6 Vectors and matrices

A simple array is defined using the colon syntax: initial : increment : terminator. For instance:

>> array = 1:2:9
array =
     1     3     5     7     9
Defines a variable named  array  (or assigns a new value to an existing

variable with the name  array ) which is an array consisting of the values 1, 3, 5, 7,
and 9. That is, the array starts at 1 (the initial value), increments with each step
from the previous value by 2 (the increment value), and stops once it reaches (or is
about to exceed) 9 (the terminator value).

The increment value can actually be left out of this syntax (along with one of
the colons), to use a default value of 1.

>> ari = 1:5
ari =
     1     2     3     4     5

Assigns to the variable named  ari  an array with the values 1, 2, 3, 4, and 5,
since the default value of 1 is used as the increment.

Indexing is one-based,[35] which is the usual convention for matrices in mathematics, unlike zero-based indexing commonly used in other programming languages such as C, C++, and Java.

Matrices can be defined by separating the elements of a row with blank space or comma and using a semicolon to terminate each row. The list of elements should be surrounded by square brackets []. Parentheses () are used to access elements and subarrays (they are also used to denote a function argument list).

>> A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]


A=
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1

>> A(2,3)
ans =
11

Sets of indices can be specified by expressions such as 2:4, which evaluates to [2, 3, 4]. For example, a submatrix taken from rows 2 through 4 and columns 3 through 4 can be written as:

>> A(2:4,3:4)
ans =
11 8
7 12
14 1

A square identity matrix of size n can be generated using the function eye, and matrices of any size with zeros or ones can be generated with the functions zeros and ones, respectively.

>> eye(3,3)
ans =
     1     0     0
     0     1     0
     0     0     1

>> zeros(2,3)
ans =
     0     0     0
     0     0     0

>> ones(2,3)
ans =
     1     1     1
     1     1     1

Transposing a vector or a matrix is done either by the function transpose or by adding dot-prime after the matrix (without the dot, prime will perform conjugate transpose for complex arrays):

>> A = [1 ; 2], B = A.', C = transpose(A)


A=
1
2
B=
1 2
C=
1 2

>> D = [0 3 ; 1 5], D.'


D=
0 3
1 5
ans =
0 1
3 5

Most functions accept arrays as input and operate element-wise on each element. For example, mod(2*J,n) will multiply every element in J by 2, and then reduce each element modulo n. MATLAB does include standard for and while loops, but (as in other similar applications such as R), using the vectorized notation is encouraged and is often faster to execute.
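As a small illustration of the element-wise, vectorized style described above (the matrix and modulus here are arbitrary):

% Element-wise operation versus an explicit loop (illustrative values only).
J = [1 2 3; 4 5 6];          % arbitrary matrix
n = 4;

V = mod(2*J, n);             % vectorized: doubles every element, then takes mod n

L = zeros(size(J));          % the same result with explicit loops
for r = 1:size(J,1)
    for c = 1:size(J,2)
        L(r,c) = mod(2*J(r,c), n);
    end
end

isequal(V, L)                % returns logical 1 (true)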

The following code, excerpted from the function magic.m, creates a magic square M for odd values of n (the MATLAB function meshgrid is used here to generate square matrices I and J containing 1:n):
[J,I] = meshgrid(1:n);
A = mod(I + J - (n + 3) / 2, n);
B = mod(I + 2 * J - 2, n);
M = n * A + B + 1;

3.2.7 Structures

MATLAB supports structure data types. Since all variables in MATLAB are arrays, a more adequate name is "structure array", where each element of the array has the same field names. In addition, MATLAB supports dynamic field names (field look-ups by name, field manipulations, etc.).
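A short sketch of a structure array with a dynamic field name (the field names and values are purely illustrative):

% Structure array: every element shares the same field names.
child(1).name    = 'A';
child(1).emotion = 'happy';
child(2).name    = 'B';
child(2).emotion = 'angry';

% Dynamic field name: the field to read is chosen at run time.
field = 'emotion';
disp(child(2).(field))       % prints 'angry'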

3.2.8 Functions

When creating a MATLAB function, the name of the file should match the
name of the first function in the file. Valid function names begin with an alphabetic
character, and can contain letters, numbers, or underscores. Variables and
functions are case sensitive.
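For instance, a file named classify_score.m (a hypothetical helper, not part of this project's code) could contain the following; note that the file name matches the name of the first function:

function label = classify_score(score)
% CLASSIFY_SCORE  Map a numeric score to an emotion label (illustrative only).
    if score > 0.5
        label = 'happy';
    else
        label = 'sad';
    end
end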

3.2.9 Function handles

MATLAB supports elements of lambda calculus by introducing function handles, or function references, which are implemented either in .m files or anonymous/nested functions.
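A brief sketch of both kinds of function handle mentioned above:

sq = @(x) x.^2;              % anonymous function handle
h  = @sin;                   % handle to an existing function

sq(3)                        % returns 9
h(pi/2)                      % returns 1
cellfun(sq, {1, 2, 3})       % apply the handle over a cell array: [1 4 9]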

3.2.10 Classes and object-oriented programming

MATLAB supports object-oriented programming including classes, inheritance, virtual dispatch, packages, pass-by-value semantics, and pass-by-reference semantics.
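A minimal value-class sketch (the class name and members are hypothetical, for illustration only); saved as EmotionResult.m it could be used as r = EmotionResult('happy', 0.92); r.isHappy():

classdef EmotionResult
    % EMOTIONRESULT  Small value class holding one classification result.
    properties
        Label = 'neutral'    % emotion name
        Score = 0            % classifier confidence
    end
    methods
        function obj = EmotionResult(label, score)
            if nargin > 0
                obj.Label = label;
                obj.Score = score;
            end
        end
        function tf = isHappy(obj)
            tf = strcmp(obj.Label, 'happy');
        end
    end
end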

3.3 LCD

An LCD (Liquid Crystal Display) screen is an electronic display module that finds a wide range of applications. A 16x2 LCD display is a very basic module and is very commonly used in various devices and circuits. These modules are preferred over seven-segment and other multi-segment LEDs because LCDs are economical, easily programmable, and have no limitation in displaying special and even custom characters (unlike seven-segment displays), animations and so on.

A 16x2 LCD means it can display 16 characters per line and there are 2 such
lines. In this LCD each character is displayed in 5x7 pixel matrix. This LCD has
two registers, namely, Command and Data.

The command register stores the command instructions given to the LCD. A command is an instruction given to the LCD to do a predefined task such as initializing it, clearing the screen, setting the cursor position, controlling the display, etc. The data register stores the data to be displayed on the LCD; the data is the ASCII value of the character to be displayed.

S.No  Pin        Pin Name         Pin Type           Pin Description                              Pin Connection

1     Pin 1      Ground           Source pin         Ground pin of the LCD                        Connected to the ground of the MCU / power source
2     Pin 2      VCC              Source pin         Supply voltage pin of the LCD                Connected to the supply pin of the power source
3     Pin 3      V0/VEE           Control pin        Adjusts the contrast of the LCD              Connected to a variable POT that can source 0-5 V
4     Pin 4      Register Select  Control pin        Toggles between command/data register        Connected to an MCU pin (0 = command mode, 1 = data mode)
5     Pin 5      Read/Write       Control pin        Toggles between read and write operation     Connected to an MCU pin (0 = write, 1 = read)
6     Pin 6      Enable           Control pin        Must be held high to perform a read/write    Connected to the MCU and always held high
7     Pins 7-14  Data bits (0-7)  Data/command pins  Used to send commands or data to the LCD     In 4-wire mode only four of the data pins are connected to the MCU; in 8-wire mode all 8 pins are connected
8     Pin 15     LED positive     LED pin            Backlight anode (illuminates the LCD)        Connected to +5 V
9     Pin 16     LED negative     LED pin            Backlight cathode (illuminates the LCD)      Connected to ground
These black circles consist of an interface IC and its associated components that help us use this LCD with the MCU. Because our LCD is a 16x2 dot-matrix LCD, it has 16x2 = 32 characters in total, and each character is made of 5x8 pixel dots. A single character with all its pixels enabled is shown in the picture below.

So each character has 5x8 = 40 pixels, and for 32 characters there are 32x40 = 1280 pixels in total. Further, the LCD must also be told the position of each pixel. It would be a hectic task to handle all of this with the MCU alone, hence an interface IC such as the HD44780 is used, mounted on the LCD module itself.

The function of this IC is to get the commands and data from the MCU and process them to display meaningful information on our LCD screen. Let's discuss the different modes and options available in our LCD that have to be controlled by our control pins.
3.3.1 4-bit and 8-bit Mode of LCD:

The LCD can work in two different modes, namely the 4-bit mode and the
8-bit mode. In 4 bit mode we send the data nibble by nibble, first upper nibble
and then lower nibble.

For those of you who don’t know what a nibble is: a nibble is a group of
four bits, so the lower four bits (D0-D3) of a byte form the lower nibble while the
upper four bits (D4-D7) of a byte form the higher nibble. This enables us to send
8 bit data.

In 8-bit mode, on the other hand, we can send the 8-bit data directly in one stroke since we use all 8 data lines. 8-bit mode is faster and simpler than 4-bit mode, but its major drawback is that it needs 8 data lines connected to the microcontroller.

This would make us run out of I/O pins on our MCU, so 4-bit mode is widely used. No control pins are used to set these modes; it is only the way of programming that changes.

3.3.2 Read and Write Mode of LCD:

As said, the LCD itself contains an interface IC, and the MCU can either read from or write to this interface IC. Most of the time we will just be writing to the IC, since reading makes things more complex and such scenarios are rare.

Information such as the position of the cursor and status-completion interrupts can be read if required, but that is beyond the scope of this description.

The interface IC present in most LCDs is the HD44780U; in order to program our LCD we should study the complete datasheet of this IC.
 3.3.3 LCD Commands:

There are some preset command instructions for the LCD, which we need to send to the LCD through the microcontroller. Some important command instructions are given below:

Hex Code   Command to LCD Instruction Register

0F         LCD ON, cursor ON
01         Clear display screen
02         Return home
04         Decrement cursor (shift cursor to left)
06         Increment cursor (shift cursor to right)
05         Shift display right
07         Shift display left
0E         Display ON, cursor blinking
80         Force cursor to beginning of first line
C0         Force cursor to beginning of second line
38         2 lines and 5x7 matrix
83         Cursor line 1, position 3
3C         Activate second line
08         Display OFF, cursor OFF
C1         Jump to second line, position 1
C2         Jump to second line, position 2
0C         Display ON, cursor OFF

3.4 IMAGE PROCESSING IN USER SAFETY


Image processing forms a core research area within the engineering and computer science disciplines. Image processing basically includes three steps: importing the image with an optical scanner or by digital photography; analyzing and manipulating the image, which includes data compression, image enhancement and spotting patterns that are not visible to human eyes (as in satellite photographs); and output, the last stage, in which the result can be an altered image or a report based on the image analysis.

Driver Drowsiness System: a driver drowsiness system is used to detect drowsiness, which is one of the main causes of accidents. Safe driving is a major concern of societies all over the world. Thousands of people are killed or seriously injured each year due to drivers falling asleep at the wheel, so it is essential to develop a real-time safety system for preventing drowsiness-related road accidents.

There are many methods for detecting driver drowsiness, since driver fatigue is a significant factor in a large number of vehicle accidents. These methods include measurements of physiological features such as EEG, heart rate, pulse rate, eyelid movement, gaze and head movement, as well as vehicle behaviour such as lane deviations and steering movements. After long hours of driving, or in the absence of an alert mental state, the driver's eyelids become heavy due to fatigue. The driver's attention starts to lose focus, which creates a risk of accidents.

These are typical reactions to fatigue, which is very dangerous. Recent statistics estimate that annually 1,200 deaths and 76,000 injuries can be attributed to fatigue-related crashes. Such accidents can be reduced by developing technologies for detecting or preventing drowsiness. Drowsiness and fatigue detection involves analysing a sequence of images of a face.
The analysis of face images is a popular research area with applications such as face recognition, virtual tools, and human-identification security systems. The requirements for an effective drowsy-driver detection system are as follows: a non-intrusive monitoring system that does not distract the driver; a real-time monitoring system, to ensure accuracy in detecting drowsiness; and a system that works in both daytime and nighttime conditions.
Techniques for Detecting Drowsy Drivers: the techniques for detecting driver drowsiness can be classified into different categories, namely sensing of physiological characteristics, sensing of driver operation, sensing of vehicle response, and monitoring the response of the driver. Nowadays many road accidents occur, and driver drowsiness detection is a car safety technology which helps prevent accidents when the driver is getting drowsy.

There are many causes of road accidents, and drowsiness is among the most serious. Driver drowsiness can be detected by analysing the driving behaviour, and the driver can be warned when the risk of falling asleep is high. Sleepiness and driving is a dangerous combination. Most people are aware of the dangers of drinking and driving, but many do not realize that drowsy driving can be just as fatal: like alcohol, sleepiness slows reaction time, decreases awareness, impairs judgment and increases the risk of crashing. There are many underlying causes of sleepiness, fatigue and drowsy driving.

These include sleep loss from restricted, interrupted or fragmented sleep; chronic sleep debt; circadian factors associated with driving patterns or work schedules; undiagnosed or untreated sleep disorders; time spent on a task; the use of sedating medications; and the consumption of alcohol when already tired. These factors have cumulative effects, and a combination of any of them can greatly increase one's risk of a fatigue-related crash.

Signs of Drowsy Driving

There are many signs of the driver’s drowsiness: Driver may be yawn
frequently. Driver is unable to keep eyes open. Driver catches him nodding off and
has trouble keeping head up. The thoughts of the person wander and take focus off
from the road. The driver can't remember driving the last few miles. Driver is
impatient, in a hurry, and grouchy. The person ends up too close to cars in front of
you. The person misses road signs or drive past your turn. Drift into the other lane
or onto the shoulder of the road.

3.5 Segmentation of face

This is the very first module, in which the face is segmented from the input image. Whatever video is recorded by the camera is first fragmented into frames and then into images; each image is given as input for segmenting the face.

The partial segmentation of the image by selecting an appropriate threshold is based on dividing the image into background and foreground classes. Thresholding is primarily concerned with selecting an appropriate threshold according to the image histogram. That is, a threshold on the brightness intensity is taken as the basis of the division: brightness intensities greater than the threshold are set to 1 and those less than the threshold are set to 0.

The purpose of face detection is to minimize the error rate in identifying facial expressions. The important part here is to measure the position of the eyes, the mouth and the head.
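As a rough sketch of this thresholding step (assuming the Image Processing Toolbox is available; the file name is a placeholder), Otsu's method can pick the threshold from the histogram and split the image into foreground and background:

I = imread('face_frame.jpg');      % placeholder file name
if size(I,3) == 3
    I = rgb2gray(I);               % work on brightness intensity only
end

level = graythresh(I);             % Otsu threshold chosen from the histogram
BW    = imbinarize(I, level);      % 1 = foreground (bright), 0 = background

imshow(BW)                         % inspect the foreground/background split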
3.5.1 Histogram
A histogram is a graphical representation of the distribution of data. There are two types of histograms:
 Image histogram
 Color histogram
An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. Image histograms are present on many modern digital cameras. The horizontal axis of the graph represents the tonal variations, while the vertical axis represents the number of pixels with that particular tone. In the field of computer vision, image histograms are a useful tool for thresholding; the threshold value obtained from them can be used for edge detection, image segmentation and co-occurrence matrices.

Representing a histogram of an image in MATLAB:

A = imread('sample.jpg');                  % read the image (grayscale or RGB)
if size(A,3) == 3, A = rgb2gray(A); end    % convert RGB to grayscale first
imhist(A);                                 % plot the image histogram (Image Processing Toolbox)

Here imread() reads a grayscale or color image from the file specified by the string filename. If the file is not in the current folder, or in a folder on the MATLAB path, specify the full pathname. Basically, we convert the image to a histogram only to obtain the threshold values needed to separate the foreground and background classes.

The figure below shows an image in the RGB color space and its histogram. We find the histogram of the image in order to find its threshold value; using this threshold value we classify the pixels into foreground and background.

Digital image and its histogram

3.5.2 YCbCr Color Space


Initially the camera records the video of the person who is driving, so all the images in that recorded video are in the RGB color space and include the driver's face along with the surrounding area in the vehicle. The purpose of face detection is to minimize the error rate in identifying facial expressions; the important part is to measure the position of the eyes, the mouth and the head.

YCbCr, Y′CbCr, or Y Pb/Cb Pr/Cr, also written as YCBCR or Y′CBCR, is a family of color spaces used as a part of the color image pipeline in video and digital photography systems. Y′ is the luma component and Cb and Cr are the blue-difference and red-difference chroma components. Y′ (with prime) is distinguished from Y, which is luminance, meaning that light intensity is nonlinearly encoded based on gamma-corrected RGB primaries.

Y′CbCr is not an absolute color space; rather, it is a way of encoding RGB information. The actual color displayed depends on the actual RGB primaries used to display the signal. Therefore a value expressed as Y′CbCr is predictable only if standard RGB primary chromaticities are used.

The first step in the face detection algorithm is to use skin segmentation to reject as much non-face region as possible, based on skin color, by converting the RGB picture to the YCbCr space or to the HSV space. The YCbCr space separates the image into a luminosity component and color components.

The main advantage of converting the image to the YCbCr domain is that the influence of luminosity can be removed during image processing. In the RGB domain, each component of the picture (red, green and blue) has a different brightness, whereas in the YCbCr domain all information about the brightness is carried by the Y component, since the Cb (blue-difference) and Cr (red-difference) components are independent of the luminosity.

There are many ways of deciding whether a pixel is part of the skin or not. Background and faces can be distinguished by applying maximum and minimum threshold values to both the Cb and Cr components.
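A small sketch of this Cb/Cr thresholding (the numeric bounds are illustrative placeholders to be tuned against a skin-image database, not values taken from this project):

RGB   = imread('frame.jpg');           % placeholder file name
ycbcr = rgb2ycbcr(RGB);                % convert to the YCbCr color space
Cb    = ycbcr(:,:,2);
Cr    = ycbcr(:,:,3);

% Hypothetical skin bounds on Cb and Cr.
skinMask = Cb >= 77 & Cb <= 127 & Cr >= 133 & Cr <= 173;

imshow(skinMask)                       % white pixels are candidate skin regions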

Considering that there is no standard color-image database for assessing faces in images, we set up a color-image database with 500 skin images. Some sample images are shown in the following figure.

Sample skin images

3.5.3 Algorithm for converting an RGB image to a YCbCr image

The formulas used to convert an RGB pixel to a YCbCr pixel are as follows:

Y  =  0.299R + 0.587G + 0.114B
Cb = -0.169R - 0.331G + 0.500B
Cr =  0.500R - 0.419G - 0.081B

Step 1: Read an input image
RGB = imread('sample.jpg');

Step 2: Convert the RGB image to a YCbCr image
YCBCR = rgb2ycbcr(RGB)
Face detection process
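To make the two steps concrete, here is a hedged sketch that applies the conversion both through rgb2ycbcr and with the per-pixel formulas given above (MATLAB's rgb2ycbcr follows the ITU-R BT.601 convention, which also adds offsets and slightly different scaling, so the two results will not match exactly; the file name is a placeholder):

RGB = im2double(imread('sample.jpg'));     % normalize values to the range [0, 1]
R = RGB(:,:,1);  G = RGB(:,:,2);  B = RGB(:,:,3);

% Per-pixel formulas from above (no offsets applied).
Y  =  0.299*R + 0.587*G + 0.114*B;
Cb = -0.169*R - 0.331*G + 0.500*B;
Cr =  0.500*R - 0.419*G - 0.081*B;

% Built-in conversion for comparison.
YCBCR = rgb2ycbcr(RGB);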
3.5.4 Detection of Eye Condition
An important factor which helps detect driver fatigue is the state of the eyes, i.e. whether they are open or closed. In a state of fatigue, the eyelid muscles subconsciously accelerate the process of going to sleep. Using this property, determining whether the eyes are open or closed is done by relying on the difference in brightness intensity of the pupil in the image and on its symmetry.

Locating the position of the eye in the frame taken from the driver's face is difficult. The position of the eye can be identified by drawing on geometric properties and symmetry. Edging is concerned with locating the areas or pixels where the brightness intensity has changed considerably. One of the effective operators for edge detection is the Sobel operator.

The position of the driver's eye is determined by using an appropriate threshold. The eye regions are separated using edge detection and, in accordance with the symmetrical properties of the eye, the gravity center of the eye is determined; finally the pupil is identified. If the eyes are open, this is treated as the normal state and the alarm is not set off. If the eyes are closed, this is treated as the fatigue state and the alarm is set on.
Edge detection is the process of localizing pixel-intensity transitions. It is used in object recognition, target tracking, segmentation, etc., and is therefore one of the most important parts of image processing. Several edge detection methods exist, such as Sobel, which have been proposed for detecting transitions in images. Early methods determined the best gradient operator to detect sharp intensity variations. A commonly used approach is to apply derivative operators to the image, computing the gradient in several directions and combining the results; the gradient magnitude and orientation are estimated using two differentiation masks.

In this work the Sobel edge detection method is considered; because of its simplicity and common use, it is preferred over other methods here. The Sobel edge detector uses two masks, one vertical and one horizontal. These masks are generally 3x3 matrices; in particular, MATLAB's edge.m uses 3x3 masks. In this work the Sobel masks are extended to 5x5 dimensions, and a MATLAB function called Sobel5x5 is developed using these new matrices.
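The report does not list the Sobel5x5 code itself; the following is only a sketch of how such 5x5 Sobel-style filtering could be implemented, assuming the common construction of the extended kernels as the outer product of a binomial smoothing vector and a central-difference derivative vector:

I = im2double(rgb2gray(imread('frame.jpg')));   % placeholder file name

smooth5 = [1 4 6 4 1];               % binomial smoothing vector
deriv5  = [-1 -2 0 2 1];             % central-difference derivative vector

gh5 = smooth5' * deriv5;             % 5x5 horizontal-gradient mask
gv5 = deriv5'  * smooth5;            % 5x5 vertical-gradient mask

Gh = conv2(I, gh5, 'same');          % horizontal edge response
Gv = conv2(I, gv5, 'same');          % vertical edge response
G  = sqrt(Gh.^2 + Gv.^2);            % gradient magnitude

BW = G > 0.5 * max(G(:));            % illustrative threshold, not the project's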

3.5.5 Algorithm for Sobel edge detection

Step 1: Accept the input image.
imread(input image)
Reads an RGB color image from the file specified by the string filename. If the file is not in the current folder, or in a folder on the MATLAB path, specify the full pathname.

Step 2: Specify the Sobel method.
BW = edge(I,'sobel')
The Sobel method takes the intensity image I as its input and returns a binary image BW of the same size, with 1's where the function finds edges in I and 0's elsewhere.

Step 3: Read the BW image.
imread(BW)
Reads a grayscale image from the file specified by the string filename. If the file is not in the current folder or in a folder on the MATLAB path, specify the full pathname.

Step 4: Specify the threshold for the Sobel method.
BW = edge(I,'sobel',thresh)
Specifies the threshold for the Sobel method; edge ignores all edges that are not stronger than thresh. If you do not specify thresh, or if thresh is empty ([]), edge chooses the value automatically. Threshold values range from 365 to 535 in this work.

Step 5: Specify the direction of detection for the Sobel method.
BW = edge(I,'sobel',thresh,direction)
Specifies the direction of detection for the Sobel method. direction is a string specifying whether to look for 'horizontal' or 'vertical' edges (i.e. gh or gv) or 'both' (the default).

Step 6: Return the threshold value.
[BW,thresh] = edge(I,'sobel',...)
Returns the threshold value that was used. An optional string input is also accepted: the string 'nothinning' speeds up the operation of the algorithm by skipping the additional edge-thinning stage; by default, or when 'thinning' is specified, the algorithm applies edge thinning.

Step 7: Two masks are used to obtain the maximum edge response at the horizontal and vertical level, i.e. gh and gv. Each output pixel is the weighted sum of the input pixel and its neighbours under the mask. For the 3x3 case, with input image A, mask M and output image B:

Mask along the horizontal direction (gh):
B22 = (A11*M11)+(A12*M12)+(A13*M13)+(A21*M21)+(A22*M22)+(A23*M23)+(A31*M31)+(A32*M32)+(A33*M33)

Mask along the vertical direction (gv), obtained with the transposed mask:
B22 = (A11*M11)+(A12*M21)+(A13*M31)+(A21*M12)+(A22*M22)+(A23*M32)+(A31*M13)+(A32*M23)+(A33*M33)

Step 8: Return the vertical and horizontal edge responses to the Sobel operators.
[BW,thresh,gv,gh] = edge(I,'sobel',...)
Returns the threshold value along with the vertical and horizontal edge responses for the eye region, from which the status of the eyes is determined.

Step 9: Template matching is done based on the eye status obtained from Step 8.

Step 10: Finally, if fatigue is detected, an alarm is raised to the user.
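A compact sketch tying Steps 1 to 6 together (the file name, the scaled threshold and the direction are illustrative; edge belongs to the Image Processing Toolbox):

I = imread('driver_frame.jpg');        % Step 1: placeholder input image
if size(I,3) == 3
    I = rgb2gray(I);                   % edge expects an intensity image
end

BW1 = edge(I, 'sobel');                            % Step 2: default Sobel
[BW2, thresh] = edge(I, 'sobel');                  % Step 6: returned threshold
BW3 = edge(I, 'sobel', 1.5*thresh, 'horizontal');  % Steps 4-5: custom threshold and direction

imshowpair(BW1, BW3, 'montage')        % compare the default and tuned results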

3.6 LIGHT EMITTING DIODE (LED)

A light-emitting diode (LED) is an electronic light source. The first LED was built in the 1920s by Oleg Vladimirovich Losev, a radio technician who noticed that diodes used in radio receivers emitted light when current was passed through them. The LED was introduced as a practical electronic component in 1962. All early devices emitted low-intensity red light, but modern LEDs are available across the visible, ultraviolet and infrared wavelengths, with very high brightness.

LEDs are based on the semiconductor diode. When the diode is forward biased (switched on), electrons are able to recombine with holes and energy is released in the form of light. This effect is called electroluminescence, and the color of the light is determined by the energy gap of the semiconductor. The LED is usually small in area (less than 1 mm²), with integrated optical components to shape its radiation pattern and assist in reflection.[2]

LEDs present many advantages over traditional light sources, including lower energy consumption, longer lifetime, improved robustness, smaller size and faster switching. However, they are relatively expensive and require more precise current and heat management than traditional light sources.

Applications of LEDs are diverse. They are used as low-energy indicators but also as replacements for traditional light sources in general lighting and automotive lighting. The compact size of LEDs has allowed new text and video displays and sensors to be developed, while their high switching rates are useful in communications technology.
3.7 VOICE BOARD:
The APR9600 experimental board is an assembled PCB board consisting of
an APR9600 IC, an electret microphone, support components and necessary
switches to allow users to explore all functions of the APR9600 chip. The
oscillation resistor is chosen so that the total recording period is 60 seconds with a
sampling rate of 4.2 kHz.

3.7.1 Features
• Single-chip, high-quality voice recording & playback solution
- No external ICs required
- Minimum external components
• Non-volatile Flash memory technology
- No battery backup required
• User-Selectable messaging options
- Random access of multiple fixed-duration messages
- Sequential access of multiple variable-duration messages
• User-friendly, easy-to-use operation
- Programming & development systems not required
- Level-activated recording & edge-activated play back switches
• Low power consumption
- Operating current: 25 mA typical
- Standby current: 1 uA typical
- Automatic power-down
• Chip Enable pin for simple message expansion

3.7.2 Pin functions of the IC :


During sound recording, sound is picked up by the microphone. A
microphone pre-amplifier amplifies the voltage signal from the microphone.

An AGC circuit is included in the pre-amplifier, the extent of which is controlled by an external capacitor and resistor. If the voltage level of a sound signal is around 100 mV peak-to-peak, the signal can be fed directly into the IC through the ANA IN pin (pin 20).

The sound signal passes through a filter and a sample-and-hold circuit. The analogue voltage is then written into non-volatile flash analogue RAM. The device comes in a 28-pin DIP package. The supply voltage is between 4.5 V and 6.5 V. During recording and replaying the current consumption is 25 mA; in idle mode the current drops to 1 µA.

The APR9600 device offers true single-chip voice recording, non-volatile storage, and playback capability for 40 to 60 seconds. The device supports both random and sequential access of multiple messages. Sample rates are user-selectable, allowing designers to customize their design for unique quality and storage-time needs.

Integrated output amplifier, microphone amplifier, and AGC circuits greatly simplify system design. The device is ideal for use in portable voice recorders,
toys, and many other consumer and industrial applications.

APLUS achieves these high levels of storage capability by using its proprietary analog/multilevel storage technology, implemented in an advanced flash non-volatile memory process in which each memory cell can store 256 voltage levels. This technology enables the APR9600 device to reproduce voice signals in their natural form; it eliminates the need for encoding and compression, which often introduce distortion.

3.7.3 FUNCTIONAL DESCRIPTION:


The APR9600 block diagram is included in order to give understanding of
the APR9600 internal architecture. At the left hand side of the diagram are the
analog inputs. A differential Microphone amplifier, including integrated AGC, is
included on-chip for applications requiring its use.
The amplified microphone signal is fed into the device by connecting the
Ana_Out pin to the Ana_In pin through an external DC blocking capacitor.
Recording can be fed directly into the Ana_In pin through a DC blocking
capacitor, however, the connection between Ana_In and Ana_Out is still required
for playback. The next block encountered by the input signal is
the internal anti-aliasing filter.
The filter automatically adjusts its response according to the sampling
frequency selected so Shannon’s Sampling Theorem is satisfied. After anti-aliasing
filtering is accomplished the signal is ready to be clocked into the memory array.
This storage is accomplished through a combination of the Sample and Hold
circuit and the Analog Write/Read circuit.
These circuits are clocked by either the Internal Oscillator or an external
clock source. When playback is desired the previously stored recording is retrieved
from memory, low pass filtered, and amplified as shown on the right hand side of
the diagram. The signal can be heard by connecting a speaker to the SP+ and SP-
pins.
Chip-wide management is accomplished through the device control block
shown in the upper right hand corner. Message management is controlled through
the message control block represented in the lower center of the block diagram.

3.7.4 Message Management General Description:

Playback and record operations are managed by on chip circuitry. There are
several available messaging modes depending upon desired operation. These
message modes determine message management style, message length, and
external parts count. Therefore, the designer must select the appropriate operating
mode before beginning the design.
Operating modes do not affect voice quality, for information on factors
affecting quality refer to the Sampling Rate & Voice Quality section.

The device supports three message management modes (defined by the MSEL1, MSEL2 and /M8_Option pins shown in Figures 1 and 2):

• Random access mode with 2, 4, or 8 fixed-duration messages


• Tape mode, with multiple variable-duration messages, provides
two options:

- Auto rewind
- Normal

Modes cannot be mixed. Switching of modes after the device has recorded
an initial message is not recommended. If modes are switched after an initial
recording has been made some unpredictable message fragments from the previous
mode may remain present, and be audible on playback, in the new mode.
These fragments will disappear after a record operation in the newly selected
mode.

Table 1 defines the decoding necessary to choose the desired mode. An important feature of the APR9600 message management capabilities is the ability
to audibly prompt the user to changes in the device’s status through the use of
“beeps” superimposed on the device’s output. This feature is enabled by asserting a
logic high level on the BE pin.
3.7.5 Random Access Mode:
Random access mode supports 2, 4, or 8 message segments of fixed duration. As suggested, recording or playback can be done randomly in any of the selected messages. The length of each message segment is the total recording length available (as defined by the selected sampling rate) divided by the total number of segments enabled (as decoded in Table 1). Random access mode provides easy indexing into the message segments.

3.7.6 Functional Description of Recording in Random Access Mode:


On power up, the device is ready to record or play back, in any of the
enabled message segments. To record, /CE must be set low to enable the device
and /RE must be set low to enable recording. You initiate recording by applying a
low level on the message trigger pin that represents the message segment you
intend to use.

The message trigger pins are labeled /M1_Message - /M8_Option on pins 1-9 (excluding pin 7) for message segments 1-8 respectively.
Note: Message trigger pins /M1_Message, /M2_Next, /M7_END, and
/M8_Option, have expanded names to represent the different functionality that
these pins assume in the other modes.

In random access mode these pins should be considered purely message trigger pins with the same functionality as /M3, /M4, /M5, and /M6. For a more
thorough explanation of the functionality of device pins in different modes please
refer to the pin description table that appears later in this document.

When actual recording begins the device responds with a single beep (if the
BE pin is high to enable the beep tone) at the speaker outputs to indicate that it has
started recording.

Recording continues as long as the message pin stays low. The rising edge
of the same message trigger pin during record stops the recording operation
(indicated with a single beep).

If the message trigger pin is held low beyond the end of the maximum
allocated duration, recording stops automatically (indicated with two beeps),
regardless of the state of the message trigger pin.

The chip then enters low-power mode until the message trigger pin returns high. After the message trigger pin returns to high, the chip enters standby mode.

3.7.7 Tape Mode:


Tape mode manages messages sequentially much like traditional cassette
tape recorders. Within tape mode two options exist, auto rewind and normal. Auto
rewind mode configures the device to automatically rewind to the beginning of the
message immediately following recording or playback of the message. In tape
mode, using either option, messages must be recorded or played back sequentially,
much like a traditional cassette tape recorder.

3.7.8 Function Description Recording in Tape Mode :

On power up, the device is ready to record or play back, starting at the first
address in the memory array. To record, /CE must be set low to enable the device
and /RE must be set low to enable recording. A falling edge of the /M1_Message
pin initiates voice recording (indicated by one beep). A subsequent rising edge of
the /M1_Message pin during recording stops the recording (also indicated by one
beep).
If the /M1_Message pin is held low beyond the end of the available memory, recording will stop automatically (indicated by two beeps). The device will then assert a logic low on the /M7_END pin for a duration equal to 1600 cycles of the sample clock, regardless of the state of the /M1_Message pin.

The device returns to standby mode when the /M1_Message pin goes high
again. After recording is finished the device will automatically rewind to the
beginning of the most recently recorded message and wait for the next user input.
The auto rewind function is convenient because it allows the user to immediately
playback and review the message without the need to rewind.
However, caution must be practiced because a subsequent record operation
will overwrite the last recorded message unless the user remembers to pulse the
/M2_Next pin in order to increment the device past the current message.

A subsequent falling edge on the /M1_Message pin starts a new record operation, overwriting the previously existing message. You can preserve the
previously recorded message by using the /M2_Next input to initiate recording in
the next available message segment. To perform this function,the /M2_Next pin
must be pulled low for at least 400 cycles of the sample clock.

The auto rewind mode allows the user to record over the previous message simply by initiating a record sequence without first toggling the /M2_Next pin. To record over any other message, however, requires a different sequence: you must pulse the /CE pin low once to rewind the device to the beginning of the voice memory.

The /M2_Next pin must then be pulsed low for the specified number of
times to move to the start of the message you wish to overwrite. Upon arriving at
the desired message a record sequence can be initiated to overwrite the previously
recorded material.

After you overwrite the message it becomes the last available message and
all previously recorded messages following this message become inaccessible. If
during a record operation all the available memory is used the device will stop
recording automatically, (double beep) and set the /M7_END pin low for a
duration equal to 1600 cycles of the sample clock.
Playback can be initiated on this last message, but pulsing the /M2_Next pin
will put the device into an “overflow state”. Once the device enters an overflow
state any subsequent pulsing of /M1_Message or /M2_Next will only result in a
double beep and setting of the /M7_END pin low for a duration equal to 400 cycles
of the sample clock.

To proceed from this state the user must rewind the device to the beginning
of the memory array. This can be accomplished by toggling the /CE pin low or
cycling power. All inputs, except the /CE pin, are ignored during recording.

3.7.9 Function Description of Playback in Tape Mode Using the Auto Rewind Option:

On power-up, the device is ready to record or play back, starting at the first address in the memory array. Before you can begin playback, the /CE input must be set to low to enable the device and /RE must be set to high to disable recording and enable playback.

The first high to low going pulse of the /M1_Message pin initiates
playback from the beginning of the current message, on power up the first message
is the current message.

When the /M1_Message pin pulses low the second time, playback of the
current message stops immediately. When the /M1_Message pin pulses low a third
time, playback of the current message starts again from its beginning. If you hold
the /M1_Message pin low continuously the same message will play continuously
in a looping fashion.
A 1,530 ms period of silence is inserted during looping as an indicator to the user of the transition between the beginning and end of the message. Note that in auto rewind mode the device always rewinds to the beginning of the current message.

To listen to a subsequent message, the device must be fast-forwarded past the current message to the next message. This is accomplished by toggling the /M2_Next pin from high to low. The pulse must be low for at least 400 cycles of the sampling clock.

After the device is incremented to the desired message the user can initiate
playback of the message with the playback sequence described above. A special
case exists when the /M2_Next pin goes low during playback.

Playback of the current message will stop, the device will beep,
advance to the next message and initiate playback of the next message.
(Note that if /M2_Next goes low when not in playback mode, the device will
prepare to play the next message, but will not actually initiate playback).

If the /CE pin goes low during playback, playback of the current message
will stop, the device will beep, reset to the beginning of the first message, and wait
for a subsequent playback command.

When you reach the end of the memory array, any subsequent pulsing of /M1_Message or /M2_Next will only result in a double beep. To proceed from this state, the user must rewind the device to the beginning of the memory array. This can be accomplished by toggling the /CE pin low or cycling power.
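
The playback sequence above can be driven from a microcontroller in the same way. The following is a minimal, hedged Arduino-style sketch of that sequence; the pin numbers and pulse widths are illustrative assumptions (60 ms is simply chosen to be comfortably longer than 400 cycles of a typical sample clock) and should be verified against the datasheet.

// Hedged sketch of the tape-mode playback sequence: enable the device,
// play/stop with /M1_Message, and fast-forward with /M2_Next.

const int CE_PIN = 2;   // /CE, held low to keep the device enabled
const int RE_PIN = 5;   // /RE, held high to disable recording and enable playback
const int M1_PIN = 6;   // /M1_Message
const int M2_PIN = 7;   // /M2_Next

void pulseLow(int pin, unsigned int ms) {
  digitalWrite(pin, LOW);
  delay(ms);
  digitalWrite(pin, HIGH);
}

void setup() {
  pinMode(CE_PIN, OUTPUT); digitalWrite(CE_PIN, LOW);    // enable the device
  pinMode(RE_PIN, OUTPUT); digitalWrite(RE_PIN, HIGH);   // playback mode
  pinMode(M1_PIN, OUTPUT); digitalWrite(M1_PIN, HIGH);
  pinMode(M2_PIN, OUTPUT); digitalWrite(M2_PIN, HIGH);

  pulseLow(M1_PIN, 15);   // first pulse: play the current message from its start
  delay(3000);            // let it play for a while
  pulseLow(M1_PIN, 15);   // second pulse: stop playback immediately
  pulseLow(M2_PIN, 60);   // fast-forward: select the next message
  pulseLow(M1_PIN, 15);   // play the newly selected message
}

void loop() { }
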
3.8 SPEAKER

A loudspeaker (or "speaker") is an electroacoustic transducer that converts an electrical signal into sound. The speaker moves in accordance with the variations of an electrical signal and causes sound waves to propagate through a medium such as air or water.

Loudspeakers (and other electroacoustic transducers) are the most variable elements in a modern audio system and are usually responsible for most distortion and audible differences when comparing sound systems.

Driver design

(Figure: cutaway view of a dynamic loudspeaker; a traditional stamped loudspeaker frame is clearly visible.)

The most common type of driver uses a lightweight diaphragm, or cone,
connected to a rigid basket, or frame, via a flexible suspension that constrains a
coil of fine wire to move axially through a cylindrical magnetic gap. When an
electrical signal is applied to the voice coil, a magnetic field is created by the
electric current in the voice coil, making it an electromagnet. The coil and the
driver's magnetic system interact, generating a mechanical force that causes the
coil (and thus, the attached cone) to move back and forth, thereby reproducing
sound under the control of the applied electrical signal coming from the amplifier.
The following is a description of the individual components of this type of
loudspeaker.

The diaphragm is usually manufactured with a cone- or dome-shaped
profile. A variety of different materials may be used, but the most common are
paper, plastic, and metal. The ideal material would be stiff, to prevent uncontrolled
cone motions; light, to minimize starting force requirements; and well-damped, to
reduce vibrations from continuing after the signal has stopped. In practice, all three
of these criteria cannot be met simultaneously using existing materials; thus, driver
design involves trade-offs. For example, paper is light and typically well-damped,
but not stiff; metal can be made stiff and light, but it is not usually well-damped;
plastic can be light, but typically, the stiffer it is made, the less well-damped it is.
As a result, many cones are made of some sort of composite material. This
can be a matrix of fibers, including Kevlar or fiberglass; a layered or bonded
sandwich construction; or simply a coating applied to stiffen or damp a cone.

The basket, or frame, must be designed for rigidity to avoid deformation,
which would change the magnetic conditions in the magnet gap and could even
cause the voice coil to rub against the walls of the gap. Baskets are typically cast or
stamped metal, although molded plastic baskets are becoming common, especially
for inexpensive drivers. The frame also plays a considerable role in conducting
heat away from the coil.

The suspension system keeps the coil centered in the gap and provides a
restoring force that causes the speaker cone to return to a neutral position after
moving. A typical suspension system consists of two parts: the "spider", which
connects the diaphragm or voice coil to the frame and provides the majority of the
restoring force, and the "surround", which helps center the coil/cone assembly and
allows free piston-like motion aligned with the magnetic gap. The spider is usually
made of a corrugated fabric disk, generally with a coating of a material intended to
improve mechanical properties.

The name comes from the shape of early suspensions, which were two
concentric rings of Bakelite material, joined by six or eight curved "legs".
Variations of this topology included adding a felt disc to provide a barrier to
particles that might otherwise cause the voice coil to rub. A German company,
Rulik, still offers a spider made of wood. The surround can be a roll of rubber or
foam, or a ring of corrugated fabric (often coated), attached to the outer
circumference of the cone and to the frame.
The choice of suspension materials affects driver life, especially in the case
of foam surrounds, which are susceptible to aging and environmental damage.

The wire in a voice coil is usually made of copper, though aluminium—and,
rarely, silver—may be used. Voice-coil wire cross sections can be circular,
rectangular, or hexagonal, giving varying amounts of wire volume coverage in the
magnetic gap space. The coil is oriented co-axially inside the gap; it moves back
and forth within a small circular volume (a hole, slot, or groove) in the magnetic
structure. The gap establishes a concentrated magnetic field between the two poles
of a permanent magnet; the outside of the gap being one pole, and the center post
(called the pole piece) being the other. The pole piece and backplate are often a
single piece, called the poleplate or yoke.

Modern driver magnets are almost always permanent and made of ceramic,
ferrite, Alnico, or, more recently, neodymium magnet. A trend in design—due to
increases in transportation costs and a desire for smaller, lighter devices (as in
many home theater multi-speaker installations)—is the use of neodymium magnets
instead of ferrite types. Very few manufacturers use electrically powered field
coils, as was common in the earliest designs. The size and type of magnet and
details of the magnetic circuit differ, depending on design goals.

For instance, the shape of the pole piece affects the magnetic interaction
between the voice coil and the magnetic field, and is sometimes used to modify a
driver's behavior. A "shorting ring", or Faraday loop, may be included as a thin
copper cap fitted over the pole tip or as a heavy ring situated within the magnet-
pole cavity. The benefits of this are reduced impedance at high frequencies,
providing extended treble output, reduced harmonic distortion, and a reduction in
the inductance modulation that typically accompanies large voice coil excursions.
On the other hand, the copper cap requires a wider voice-coil gap, with
increased magnetic reluctance; this reduces available flux, requiring a slightly
larger magnet for equivalent performance.

3.8.1 Driver design

Driver design—including the particular way two or more drivers are combined in an enclosure to make a speaker system—is both an art and a science. Adjusting a design to improve performance is done using magnetic, acoustic, mechanical, electrical, and material science theory; high-precision measurements; and the observations of experienced listeners.

Designers can use an anechoic chamber to ensure the speaker can be
measured independently of room effects, or any of several electronic techniques
which can, to some extent, replace such chambers. Some developers eschew
anechoic chambers in favor of specific standardized room setups intended to
simulate real-life listening conditions.

A few of the issues speaker and driver designers must confront are
distortion, lobing, phase effects, off-axis response, and crossover complications.

The fabrication of finished loudspeaker systems has become segmented,
depending largely on price, shipping costs, and weight limitations. High-end
speaker systems, which are heavier (and often larger) than economic shipping
allows outside local regions, are usually made in their target market area and can
cost $140,000 or more per pair.

The lowest-priced speaker systems and most drivers are manufactured in
China or other low-cost manufacturing locations.
3.8.2 Driver types

An audio engineering rule of thumb is that individual electrodynamic drivers
provide quality performance over at most about three octaves. Multiple drivers
(e.g., subwoofers, woofers, mid-range drivers, and tweeters) are generally used in a
complete loudspeaker system to provide performance beyond three octaves.

A full-range driver is designed to have the widest frequency response
possible, despite the rule of thumb cited above. These drivers are small, typically 3
to 8 inches (7.6 to 20 cm) in diameter to permit reasonable high frequency
response, and carefully designed to give low-distortion output at low frequencies,
though with reduced maximum output level. Full-range (or more accurately, wide-
range) drivers are most commonly heard in public address systems and in
televisions, although some models are suitable for hi-fi listening.

In hi-fi speaker systems, the use of wide-range drive units can avoid
undesirable interaction between multiple drivers caused by non-coincident driver
location or crossover network issues. Fans of wide-range driver hi-fi speaker
systems claim a coherence of sound, said to be due to the single source and a
resulting lack of interference, and likely also to the lack of crossover components.
Detractors typically cite wide-range drivers' limited frequency response and
modest output abilities, together with their requirement for large, elaborate,
expensive enclosures—such as transmission lines, or horns—to approach optimum
performance.

Full-range drivers often employ an additional cone called a whizzer: a small,
light cone attached to the joint between the voice coil and the primary cone. The
whizzer cone extends the high-frequency response of the driver and broadens its
high frequency directivity, which would otherwise be greatly narrowed due to the
outer diameter cone material failing to keep up with the central voice coil at higher
frequencies. The main cone in a whizzer design is manufactured so as to flex more
in the outer diameter than in the center.

The result is that the main cone delivers low frequencies and the whizzer
cone contributes most of the higher frequencies. Since the whizzer cone is smaller
than the main diaphragm, output dispersion at high frequencies is improved
relative to an equivalent single larger diaphragm.

Limited-range drivers are typically used in computers, toys, and clock
radios. These drivers are less elaborate and less expensive than wide-range drivers,
and they may be severely compromised to fit into very small mounting locations.
In these applications, sound quality is a low priority.

The human ear is remarkably tolerant of poor sound quality, and the
distortion inherent in limited-range drivers may enhance their output at high
frequencies, increasing clarity when listening to spoken word material.

A subwoofer is a woofer driver used only for the lowest part of the audio
spectrum: typically below 120 Hz. Because the intended range of frequencies in
these is limited, subwoofer system design is usually simpler in many respects than
for conventional loudspeakers, often consisting of a single speaker enclosed in a
suitable box or enclosure.

To accurately reproduce very low bass notes without unwanted resonances
(typically from cabinet panels), subwoofer systems must be solidly constructed and
properly braced; good ones are typically extraordinarily heavy.

Many subwoofer systems include power amplifiers and electronic sub-filters, with additional controls relevant to low-frequency reproduction. These variants are known as "active subwoofers"; "passive" subwoofers require external amplification.

A woofer is a driver that reproduces low frequencies. Some loudspeaker
systems use a woofer for the lowest frequencies, making it possible to avoid using
a subwoofer. Additionally, some loudspeakers use the woofer to handle middle
frequencies, eliminating the mid-range driver. This can be accomplished with the
selection of a tweeter that responds low enough combined with a woofer that
responds high enough that the two drivers add coherently in the middle
frequencies.

A mid-range speaker is a loudspeaker driver that reproduces middle
frequencies. Mid-range drivers can be made of paper or composite materials, or
they can be compression drivers. If the mid-range driver is cone-shaped, it can be
mounted on the front baffle of a loudspeaker enclosure, or it can be mounted at the
throat of a horn for added output level and control of radiation pattern. If it is a
compression driver, it is invariably mated to a horn.

A tweeter is a high-frequency driver that typically reproduces the highest
frequency band of a loudspeaker.
3.9 UART

A universal asynchronous receiver/transmitter is a type of "asynchronous
receiver/transmitter", a piece of computer hardware that translates data between
parallel and serial forms. UARTs are commonly used in conjunction with other
communication standards such as EIA RS-232.

A UART is usually an individual (or part of an) integrated circuit used for
serial communications over a computer or peripheral device serial port. UARTs are
now commonly included in microcontrollers. A dual UART or DUART combines
two UARTs into a single chip. Many modern ICs now come with a UART that can
also communicate synchronously; these devices are called USARTs.

The Universal Asynchronous Receiver/Transmitter (UART) controller is the
key component of the serial communications subsystem of a computer. The UART
takes bytes of data and transmits the individual bits in a sequential fashion. At the
destination, a second UART re-assembles the bits into complete bytes. Serial
transmission of digital information (bits) through a single wire or other medium is
much more cost effective than parallel transmission through multiple wires. A
UART is used to convert the transmitted information between its sequential and
parallel form at each end of the link. Each UART contains a shift register which is
the fundamental method of conversion between serial and parallel forms.
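
To make the shift-register idea concrete, the following is a hedged, software-only sketch of a UART transmitter: one byte is framed with a start bit and a stop bit and shifted out one bit at a time, least significant bit first. The pin number and the 9600-baud bit time are assumptions for illustration; in practice the microcontroller's built-in hardware UART is normally used instead.

// Hedged illustration of serial framing: start bit, 8 data bits (LSB first),
// stop bit. TX_PIN and the bit period are assumed values for the example.

const int TX_PIN = 8;
const unsigned int BIT_US = 104;   // one bit period at 9600 baud (approx.)

void softUartSendByte(unsigned char data) {
  digitalWrite(TX_PIN, LOW);                 // start bit
  delayMicroseconds(BIT_US);
  for (int i = 0; i < 8; i++) {              // shift out the 8 data bits
    digitalWrite(TX_PIN, (data >> i) & 1);
    delayMicroseconds(BIT_US);
  }
  digitalWrite(TX_PIN, HIGH);                // stop bit
  delayMicroseconds(BIT_US);
}

void setup() {
  pinMode(TX_PIN, OUTPUT);
  digitalWrite(TX_PIN, HIGH);                // the idle line is high
  softUartSendByte('A');                     // transmit one character
}

void loop() { }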

3.9.1 MAX232:

The MAX232 is an integrated circuit that converts signals from an RS-232
serial port to signals suitable for use in TTL compatible digital logic circuits. The
MAX232 is a dual driver/receiver and typically converts the RX, TX, CTS and
RTS signals.
The drivers provide RS-232 voltage level outputs (approx. ± 7.5 V) from a
single + 5 V supply via on-chip charge pumps and external capacitors. This makes
it useful for implementing RS-232 in devices that otherwise do not need any
voltages outside the 0 V to + 5 V range, as power supply design does not need to
be made more complicated just for driving the RS-232 in this case.

The receivers reduce RS-232 inputs (which may be as high as ± 25 V), to
standard 5 V TTL levels. These receivers have a typical threshold of 1.3 V, and a
typical hysteresis of 0.5 V.

The later MAX232A is backwards compatible with the original MAX232
but may operate at higher baud rates and can use smaller external capacitors –
0.1 μF in place of the 1.0 μF capacitors used with the original device.
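
As a hedged illustration of how the microcontroller side of such a link is typically driven, the short Arduino-style sketch below transmits text through the hardware UART at TTL levels; a MAX232 (or a USB-to-serial bridge) would then shift these levels for the RS-232 side. The baud rate and the message string are arbitrary choices made for the example.

void setup() {
  Serial.begin(9600);                 // configure the hardware UART for 9600 baud (default 8N1)
}

void loop() {
  Serial.println("Emotion: HAPPY");   // example status string sent out over the TX pin
  delay(1000);                        // send once per second
}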

3.9.2 Pin Diagram:

(Figure: MAX232 pin diagram.)

CHAPTER 4

SOFTWARE ANALYSIS

4.1 EMBEDDED C

Embedded C is a set of language extensions for the C programming language, defined by the C Standards Committee to address commonality issues that exist between C extensions for different embedded systems. Historically, embedded C programming required nonstandard extensions to the C language in order to support exotic features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations.

In 2008, the C Standards Committee extended the C language to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing.

Embedded C uses most of the syntax and semantics of standard C, e.g., the main() function, variable definition, datatype declaration, conditional statements (if, switch, case), loops (while, for), functions, arrays and strings, structures and unions, bit operations, macros, etc.
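
As a brief, hedged illustration of these constructs in an embedded setting, the fragment below toggles one bit of a hypothetical memory-mapped output port inside an endless loop; the register address and the bit position are assumptions made for the example only.

/* Hedged example only: LED_PORT is a hypothetical memory-mapped register. */
#include <stdint.h>

#define LED_PORT  (*(volatile uint8_t *)0x0040u)   /* assumed register address */
#define LED_MASK  (1u << 3)                        /* assumed bit position */

static void short_delay(void)
{
    for (volatile uint16_t i = 0; i < 50000u; i++) {
        /* busy-wait; a real design would use a hardware timer */
    }
}

int main(void)
{
    while (1) {                     /* embedded programs usually loop forever */
        LED_PORT ^= LED_MASK;       /* bit operation toggles the LED */
        short_delay();
    }
    return 0;
}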

A Technical Report was published in 2004 and a second revision in 2006.

4.1.1 NECESSITY
During the infancy of microprocessor-based systems, programs were developed using assemblers and fused into EPROMs. There was no mechanism to find out what the program was doing. LEDs, switches, etc. were used to check for correct execution of the program. Some ‘very fortunate’ developers had In-Circuit Emulators (ICEs), but they were too costly and not very reliable either. As time progressed, the use of processor-specific assembly as the only programming language declined, and embedded systems moved on to C as the embedded programming language of choice. C is the most widely used programming language for embedded processors and controllers. Assembly is also used, but mainly to implement those portions of the code where very high timing accuracy, code size efficiency, etc. are prime requirements.
As assembly language programs are specific to a processor, assembly language did not offer portability across systems. To overcome this disadvantage, several high-level languages, including C, came up. Some other languages like PL/M, Modula-2 and Pascal also appeared but could not find wide acceptance. Amongst those, C got wide acceptance not only for embedded systems but also for desktop applications. Even though C might have lost its sheen as a mainstream language for general-purpose applications, it still has a strong hold in embedded programming.
Due to the wide acceptance of C in embedded systems, various kinds of support tools like compilers and cross-compilers, ICEs, etc. came up, and all this facilitated the development of embedded systems using C. Assembly language seems to be an obvious choice for programming embedded devices.
However, the use of assembly language is restricted to developing efficient code in terms of size and speed. Assembly code also leads to higher software development costs, and code portability is lost. Developing small programs is not much of a problem, but large programs and projects become increasingly difficult to manage in assembly language. Finding good assembly programmers has also become difficult nowadays. Hence, high-level languages are preferred for embedded systems programming.
4.1.2 ADVANTAGES
 It is small and simpler to learn, understand, program and debug.
 Compared to assembly language, C code written is more reliable and
scalable, more portable between different platforms.
 C compilers are available for almost all embedded devices in use today, and
there is a large pool of experienced C programmers.
 Unlike assembly, C has advantage of processor-independence and is not
specific to any particular microprocessor/microcontroller or any system.
This makes it convenient for a user to develop programs that can run on
most of the systems.
 As C combines functionality of assembly language and features of high level
languages, C is treated as a ‘middle-level computer language’ or ‘high level
assembly language’.
 It is fairly efficient.
 It supports access to I/O and provides ease of management of large
embedded projects.
 Java is also used in many embedded systems but Java programs require the
Java Virtual Machine (JVM), which consumes a lot of resources. Hence it is
not used for smaller embedded devices.
 Other high-level programming languages like Pascal and FORTRAN also provide some of these advantages.

4.1.3 EMBEDDED SYSTEMS PROGRAMMING

Embedded systems programming is different from developing applications on a desktop computer. Key characteristics of an embedded system, when compared with PCs, are as follows:
 Embedded devices have resource constraints (limited ROM, limited RAM, limited stack space, less processing power).
 Components used in embedded systems and PCs are different; embedded systems typically use smaller, less power-consuming components.
 Embedded systems are more tied to the hardware.

Two salient features of embedded programming are code speed and code size. Code speed is governed by the processing power and timing constraints, whereas code size is governed by the available program memory and the programming language used. The goal of embedded system programming is to get maximum features in minimum space and minimum time.

Embedded systems are programmed using different types of languages:

 Machine Code
 Low level language, i.e., assembly
 High level language like C, C++, Java, Ada, etc.
 Application level language like Visual Basic, scripts, Access, etc.

Assembly language maps mnemonic words to the binary machine codes that the processor uses to encode its instructions. Assembly language seems to be an obvious choice for programming embedded devices. However, the use of assembly language is restricted to developing efficient code in terms of size and speed. Assembly code also leads to higher software development costs, and code portability is lost. Developing small programs is not much of a problem, but large programs and projects become increasingly difficult to manage in assembly language. Finding good assembly programmers has also become difficult nowadays. Hence, high-level languages are preferred for embedded systems programming.

The use of C in embedded systems is driven by the following advantages:

 It is small and reasonably simpler to learn, understand, program and debug.


 C Compilers are available for almost all embedded devices in use today, and
there is a large pool of experienced C programmers.
 Unlike assembly, C has advantage of processor-independence and is not
specific to any particular microprocessor/ microcontroller or any system. This
makes it convenient for a user to develop programs that can run on most of the
systems.
 As C combines functionality of assembly language and features of high level
languages, C is treated as a ‘middle-level computer language’ or ‘high level
assembly language’
 It is fairly efficient
 It supports access to I/O and provides ease of management of large embedded
projects.

Many of these advantages are offered by other languages as well, but what sets C apart from others like Pascal, FORTRAN, etc. is the fact that it is a middle-level language; it provides direct hardware control without sacrificing the benefits of high-level languages. Compared with other high-level languages, C offers more flexibility because it is a relatively small, structured language and it supports low-level bit-wise data manipulation.

Compared to assembly language, C code is more reliable and scalable, and more portable between different platforms (with some changes). Moreover, programs developed in C are much easier to understand, maintain and debug. Also, as they can be developed more quickly, programs written in C offer better productivity. C is based on the philosophy that ‘programmers know what they are doing’; only the intentions have to be stated explicitly. It is easier to write good code in C and convert it into efficient assembly code (using a high-quality compiler) than to write efficient code in assembly itself. The benefits of assembly language programming over C are negligible when compared with the ease with which C programs are developed.

C++, an object-oriented language, is not apt for developing efficient programs in resource-constrained environments like embedded devices. Virtual functions and exception handling are specific features of C++ that are not efficient in terms of space and speed in embedded systems. Sometimes C++ is used with only a very few of its features, very much like C.

Ada, also an object-oriented language, is different from C++. Originally designed by the U.S. DoD, it did not gain popularity despite being accepted as an international standard twice (Ada83 and Ada95). However, the Ada language has many features that would simplify embedded software development.

 
Java is another language used for embedded systems programming. It primarily finds usage in high-end mobile phones, as it offers portability across systems and is also useful for browsing applications. Java programs require the Java Virtual Machine (JVM), which consumes a lot of resources. Hence it is not used for smaller embedded devices.

Dynamic C and B# are proprietary languages that are also used in embedded applications. Efficient embedded C programs must be kept small and efficient; they must be optimized for code speed and code size. A good understanding of the processor architecture, embedded C programming, and the debugging tools facilitates this.

4.1.4 DIFFERENCE BETWEEN C AND EMBEDDED C

Though C and embedded C appear different and are used in different contexts, they have more similarities than differences. Most of the constructs are the same; the difference lies in their applications.

C is used for desktop computers, while embedded C is used for microcontroller-based applications. Accordingly, C has the luxury of using the resources of a desktop PC, such as memory and an operating system. While programming on desktop systems, we need not bother about memory. However, embedded C has to work with the limited resources (RAM, ROM, I/Os) of an embedded processor. Thus, the program code must fit into the available program memory; if the code exceeds the limit, the system is likely to crash.
Compilers for C (ANSI C) typically generate OS-dependent executables. Embedded C requires compilers that create files to be downloaded to the microcontrollers/microprocessors on which the code needs to run. Embedded compilers give access to all hardware resources, which is not provided by compilers for desktop computer applications.

Embedded systems often have real-time constraints, which are usually not present in desktop computer applications. Embedded systems also often do not have a console, which is available in the case of desktop applications.

So, what is basically different when programming in embedded C is the mindset: for embedded applications, we need to use the resources optimally, make the program code efficient, and satisfy real-time constraints, if any. All this is done using the basic constructs, syntax, and function libraries of C.

4.2 PROTEUS:

Proteus (PROcessor for TExt Easy to USe) is a fully functional, procedural programming language created in 1998 by Simone Zanella. Proteus incorporates many functions derived from several other languages: C, BASIC, Assembly, Clipper/dBase; it is especially versatile in dealing with strings, having hundreds of dedicated functions; this makes it one of the richest languages for text manipulation.

Proteus owes its name to a Greek god of the sea (Proteus), who took care of
Neptune's crowd and gave responses; he was renowned for being able to transform
himself, assuming different shapes. Transforming data from one form to another is
the main usage of this language.
4.2.1 INTRODUCTION
Proteus was initially created as a multiplatform (DOS, Windows, Unix)
system utility, to manipulate text and binary files and to create CGI scripts. The
language was later focused on Windows, by adding hundreds of specialized
functions for: network and serial communication, database interrogation, system
service creation, console applications, keyboard emulation, ISAPI scripting
(for IIS). Most of these additional functions are only available in the Windows
flavor of the interpreter, even though a Linux version is still available.

Proteus was designed to be practical (easy to use, efficient, complete), readable and
consistent.
 Its strongest points are:
 powerful string manipulation;
 comprehensibility of Proteus scripts;
 availability of advanced data structures: arrays, queues (single or
double), stacks, bit maps, sets, AVL trees.
 The language can be extended by adding user functions written in Proteus
or DLLs created in C/C++.

4.2.2 LANGUAGE FEATURES


At first sight, Proteus may appear similar to Basic because of its straight syntax,
but similarities are limited to the surface:
 Proteus has a fully functional, procedural approach;
 variables are untyped, do not need to be declared, can be local or public, and can be passed by value or by reference;
 all the typical control structures are available (if-then-else; for-next; while-
loop; repeat-until; switch-case);
 new functions can be defined and used as native functions.

Data types supported by Proteus are only three: integer numbers, floating
point numbers and strings. Access to advanced data structures (files, arrays,
queues, stacks, AVL trees, sets and so on) takes place by using handles, i.e. integer
numbers returned by item creation functions.

Type declaration is unnecessary: the variable type is determined by the function applied – Proteus converts every variable on the fly when needed and holds previous data renderings, to avoid performance degradation caused by repeated conversions.
There is no need to add parenthesis in expressions to determine the
evaluation order, because the language is fully functional (there are no operators).

Proteus includes hundreds of functions for:


 accessing file system;
 sorting data;
 manipulating dates and strings;
 interacting with the user (console functions)
 Calculating logical and mathematical expressions.
 Proteus supports associative arrays (called sets) and AVL trees, which are
very useful and powerful to quickly sort and lookup values.
Two types of regular expressions are supported:
 extended (Unix like);
 basic (Dos like, having just the wildcards "?" and "*").

Both types of expressions can be used to parse and compare data.


The functional approach and the extensive library of built-in functions make it possible to write very short but powerful scripts; to keep them comprehensible, medium-length keywords were adopted.
The user, besides writing new high-level functions in Proteus, can add new
functions in C/C++ by following the guidelines and using the templates available
in the software development kit; the new functions can be invoked exactly the
same way as the predefined ones, passing expressions by value or variables by
reference.
Proteus is an interpreted language: programs are loaded into memory, pre-
compiled and run; since the number of built-in functions is large, execution speed
is usually very good and often comparable to that of compiled programs.
One of the most interesting features of Proteus is the possibility of running scripts
as services or ISAPI scripts.
Running a Proteus script as a service, started as soon as the operating system
has finished loading, gives many advantages:
 No user needs to login to start the script;
 A service can be run with different privileges so that it cannot be stopped by
a user.
This is very useful to protect critical processes in industrial environments
(data collection, device monitoring), or to avoid that the operator inadvertently
closes a utility (keyboard emulation).
The ISAPI version of Proteus can be used to create scripts run through
Internet Information Services and is equipped with specific functions to cooperate
with the web server.
For intellectual property protection Proteus provides:
 Script encryption;
 Digital signature of the scripts, by using the development key (which is
unique);
 The option to enable or disable the execution of a script (or part of it) by
using the key of the customer.
Proteus is appreciated because it is relatively easy to write short, powerful and comprehensible scripts; the large number of built-in functions, together with the examples in the manual, keeps the learning curve low.
The development environment includes a source code editor with syntax
highlighting and a context-sensitive guide. Proteus does not need to be installed:
the interpreter is a single executable (below 400 Kb) that does not require
additional DLLs to be run on recent Windows systems.

4.2.3 SYNOPSIS AND LICENSING


The main features of this language are:
 fully functional, procedural language;
 multi-language support: Proteus is available in several languages (keywords
and messages);
 no data types: all variables can be used as integer numbers, floating point
numbers or strings; variables are interpreted according to the functions being
applied – Proteus keeps different representations of their values between
calls, to decrease execution time in case of frequent conversions between
one type and the other;
 no pre-allocated structures: all data used by Proteus are dynamically
allocated at execution time; there are no limits on: recursion, maximum data
size, number of variables, etc.;
 no operators: Proteus is a completely functional language – there are no
operators; thus, there is no ambiguity when evaluating expressions and
parenthesis are not needed;
 large library of predefined functions: Proteus is not a toy-language, it comes
with hundreds of library functions ready to be used for working on strings,
dates, numbers, for sorting, searching and so on;
 advanced data access (DAO), pipes, Windows sockets, serial ports: in the
Windows version, Proteus includes hundreds of system calls which are
operating system-specific;
 clear and comprehensible syntax: the names of the library functions resemble those of corresponding functions in C, Clipper/Flagship and Assembly; by using medium-length keywords, Proteus programs are very easy to understand;
 native support for high-level data structures: arrays, queues (single or
double), stacks, bit maps, sets, AVL trees are already available in Proteus
and do not require additional code or libraries to be used;
 ISAPI DLL and Windows Service versions: Proteus is available as a
Windows service or as an ISAPI DLL (for using together with Microsoft
Internet Information Server);
 user libraries: it is possible to write user defined functions (UDF) in separate
files, and include them (even conditionally and recursively) inside new
programs; UDFs can be referenced before or after the definition; it is also
possible to write external functions in Visual C++ and invoke them from a
Proteus script;
 native support for Ms-Dos/Windows, Macintosh and Unix text files (all
versions);
 three models for dates (English, American, Japanese), with functions to
check them and to do calculations according to gregorian calendar;
 epoch setting for 2-digit-year dates;
 support for time in 12 and 24 hour format;
 support for simple (Dos-like) and extended (Unix-like) regular expressions,
in all versions;
 intellectual property protection, by using digital signature and cryptography;
 extensive library of functions to write interactive console programs.
 Proteus is available in a demo version (script execution limited to three minutes) and a registered version, protected by a USB dongle. At the moment, it is available as a Windows or Ubuntu package and is distributed by SZP.

4.2.4 EXAMPLE PROGRAMS


Hello World
The following example prints out "Hello world!".
CONSOLELN "Hello World!"
Extract two fields
The following example reads the standard input (CSV format, separator ";") and
prints out the first two fields separated by "|":
CONSOLELN TOKEN(L, 1, ";") "|" TOKEN(L, 2, ";")
Proteus scripts by default work on an input file and write to an output file; the
predefined identifier L gets the value of every line in input. The function TOKEN
returns the requested item of the string; the third parameter represents the
delimiter. String concatenation is implicit.
The same program can be written in this way:
H = TOKNEW(L, ";")
CONSOLELN TOKGET(H, 1) "|" TOKGET(H, 2)
TOKFREE(H)
In this case, we used another function (TOKNEW), which builds the list of the tokens in the line, together with TOKGET to retrieve them; this is more efficient if we need to access several items in the string.
4.3 ARDUINO IDE

Arduino is an open-source prototyping platform based on easy-to-use hardware and software. It consists of a circuit board that can be programmed (referred to as a microcontroller) and ready-made software called the Arduino IDE (Integrated Development Environment), which is used to write and upload computer code to the physical board.

4.3.1 The key features are:

 Arduino boards are able to read analog or digital input signals from different sensors and turn them into an output, such as activating a motor, turning an LED on/off, connecting to the cloud, and many other actions.
 You can control your board functions by sending a set of instructions to the
microcontroller on the board via Arduino IDE (referred to as uploading
software).
 Unlike most previous programmable circuit boards, Arduino does not need
an extra piece of hardware (called a programmer) in order to load a new
code onto the board. You can simply use a USB cable.
 Additionally, the Arduino IDE uses a simplified version of C++, making it
easier to learn to program.
 Finally, Arduino provides a standard form factor that breaks the functions of
the micro-controller into a more accessible package.

4.3.2 Board Types

Various kinds of Arduino boards are available depending on the different microcontrollers used. However, all Arduino boards have one thing in common: they are programmed through the Arduino IDE.

The differences are based on the number of inputs and outputs (the number of sensors, LEDs, and buttons you can use on a single board), speed, operating voltage, form factor, etc. Some boards are designed to be embedded and have no programming interface (hardware), which you would need to buy separately. Some can run directly from a 3.7 V battery, while others need at least 5 V.

A variety of Arduino boards is available; several of them, for example, are based on the ATMEGA328 microcontroller.


4.3.3 ARDUINO INSTALLATION

After learning about the main parts of the Arduino UNO board, we are ready
to learn how to set up the Arduino IDE. Once we learn this, we will be ready to
upload our program on the Arduino board.

In this section, we will learn in easy steps, how to set up the Arduino IDE on
our computer and prepare the board to receive the program via USB cable.

Step 1: First you must have your Arduino board (you can choose your favorite board) and a USB cable. If you use an Arduino UNO, Arduino Duemilanove, Arduino Mega 2560, or Diecimila, you will need a standard USB cable (A plug to B plug), the kind you would connect to a USB printer, as shown in the following image.

In case you use Arduino Nano, you will need an A to Mini-B cable instead
as shown in the following image.
Step 2: Download Arduino IDE Software.

You can get different versions of the Arduino IDE from the Download page on the official Arduino website. You must select the version that is compatible with your operating system (Windows, macOS, or Linux). After your file download is complete, unzip the file.

Step 3: Power up your board.

The Arduino Uno, Mega, Duemilanove and Arduino Nano automatically draw power from either the USB connection to the computer or an external power supply. If you are using an Arduino Diecimila, you have to make sure that the board is configured to draw power from the USB connection. The power source is selected with a jumper, a small piece of plastic that fits onto two of the three pins between the USB and power jacks. Check that it is on the two pins closest to the USB port.

Connect the Arduino board to your computer using the USB cable. The
green power LED (labeled PWR) should glow.

Step 4: Launch Arduino IDE.

After your Arduino IDE software is downloaded, you need to unzip the
folder. Inside the folder, you can find the application icon with an infinity label
(application.exe). Double-click the icon to start the IDE.

Step 5: Open your first project.

Once the software starts, you have two options:

 Create a new project.


 Open an existing project example.
To create a new project, select File --> New

To open an existing project example, select File -> Example -> Basics -> Blink.

Here, we are selecting just one of the examples with the name Blink.
It turns the LED on and off with some time delay. You can select any other
example from the list.
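
For reference, the Blink sketch supplied with the Arduino IDE looks essentially like the code below (on an Arduino UNO, LED_BUILTIN refers to the on-board LED on pin 13); the two delay values set how long the LED stays on and off.

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);     // configure the on-board LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // turn the LED on
  delay(1000);                      // wait for one second
  digitalWrite(LED_BUILTIN, LOW);   // turn the LED off
  delay(1000);                      // wait for one second
}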

Step 6: Select your Arduino board.

To avoid any error while uploading your program to the board, you must
select the correct Arduino board name, which matches with the board
connected to your computer.

Go to Tools -> Board and select your board.


CHAPTER 5

LITERATURE SURVEY
TITLE: Facial Expression Recognition System for Autistic Children in Virtual
Reality Environment.
AUTHOR: N., U.
YEAR: 2018
Autism Spectrum Disorder (ASD) may be viewed as a neurodevelopmental disability that can affect the social interaction, language, or behavioral skills of children. Most autistic children show symptoms of withdrawal from social interaction and a lack of emotional empathy towards others. The underlying causes of ASD are still not well understood, but an alarming number of children are diagnosed with and suffer from this disorder. Among the fundamental social impairments in ASD are challenges in appropriately recognizing and responding to nonverbal cues and communication. However, existing tools lack the capability to operate in conjunction with real-world scenarios. So a new intervention paradigm is proposed: a portable facial expression recognition system that recognizes virtual reality (VR) based facial expressions in a synchronous manner and aims to reduce the dependency of an autistic child by enhancing expression-based accessing and controlling processes in this modern environment.

TITLE: SmileMaze: A Tutoring System in Real-Time Facial Expression Perception and Production in Children with Autism Spectrum Disorder.
AUTHOR: Cockburn, J., Bartlett, M., & Tanaka, J.
YEAR: 2017
Children with Autism Spectrum Disorders (ASD) are impaired in their
ability to produce and perceive dynamic facial expressions. The goal of SmileMaze
is to improve the expression production skills of children with ASD in a dynamic
and engaging game format. The Computer Expression Recognition Toolbox
(CERT) is the heart of the SmileMaze game. CERT automatically detects frontal
faces in a standard web-cam video stream and codes each frame in real-time with
respect to 37 continuous action dimensions. In the following we discuss how the
inclusion of real-time expression recognition can not only improve the efficacy of
an existing intervention program, Let’s Face It!, but it allows us to investigate
critical questions that could not be explored otherwise.

TITLE: An emotion recognition system based on autistic facial expression using SIFT descriptor.
AUTHOR: Kaur, R.
YEAR: 2016
Nowadays, emotion recognition is frequently performed with face recognition systems, but problems occur when classifying the face of an autistic person with the usual feature extraction techniques, because an emotion detection system needs a more appropriate feature set of the extracted face. An emotion recognition system is proposed based on autistic facial expressions using the SIFT descriptor with a genetic algorithm (GA). In the proposed work, a back propagation neural network (BPNN) is used for the classification of emotion using the feature set extracted by the SIFT descriptor. The SIFT descriptor is used to extract the key points from the face; if the facial expression is different, the feature set will vary. So, different types of facial expression can easily be distinguished, and after that the SIFT key points can be optimized using the genetic algorithm. Using the proposed module, an accuracy of around 95% was obtained, and the proposed work was implemented using the Image Processing Toolbox of the MATLAB software.
TITLE: Emotion Recognition System for Autism Children using Non-verbal
Communication.
AUTHOR: Santhoshkumar, R., & Geetha, M.
YEAR: 2015
Human beings can express their emotions through various ways, such as
facial expression, bodily expressions, prosody, or language. Autism Spectrum
Disorder (ASD) is a lifelong neuro developmental disorder, characterized by
varying levels of deficit in social and communication skills. This paper aims to predict basic emotions of children with autism spectrum disorder (ASD) using body movements, since it is difficult to recognize emotion from the facial expressions of ASD children. The authors propose body movement patterns to detect the type of emotion of ASD children. In this paper, 12-dimensional body movement features (angle, distance, velocity and acceleration) from the head, left hand and right hand are proposed to predict the emotion from the children's body movements. The dataset for this experiment consists of recorded videos of autistic children (5-11 years, n=10). The extracted features are given to the Support Vector Machine (SVM) and the Random Forest (RF) classifier to predict the children's emotions.

TITLE: Emotion Recognition and Classification in Speech using Artificial Neural Networks.
AUTHOR: Shaw, A., Vardhan, R., & Saxena, S.
YEAR: 2015
Automatic emotion detection in speech is a recent research area in the field of human-machine interaction and speech processing. The aim of this paper is to enable a very natural interaction between human and machine. This dissertation proposes an approach to recognize the user's emotional state by analysing the human speech signal. To achieve good feature extraction from the signal, the proposed technique applies a high-pass filter before the feature extraction process. The high-pass filter is used to reduce noise: it passes only the high frequencies and attenuates the lower frequencies. This paper uses a neural network as a classifier to classify the different emotional states, such as happy, sad, and angry, from an emotional speech database.

CONCLUSION

In this study, emotion detection of autistic children from facial expressions is examined by utilizing two techniques, namely the support vector machine and the neural network. I started by describing the importance of this recognition system and the inspirations that encouraged us to consider this field. Moreover, I described autism spectrum disorder, emotion recognition, and the need for emotion recognition for autistic children. The experiments achieved different performances: the overall accuracy of 90% was achieved with the local binary pattern + support vector machine method, while with the local binary pattern + neural network method the accuracy achieved was 70%.

Future work
In this study, high performance is achieved by using the proposed method. However, I have documented some information that may lead to improvements of the proposed system and prove its quality.
