
HEMVATI NANDAN BAHUGUNA GARHWAL UNIVERSITY

(A Central University)

Srinagar Garhwal, Uttarakhand

School of Engineering and Technology

Session (2022 - 2023)

ASSIGNMENT OF

“EMBEDDED SYSTEM”

Submitted To:
Mr. Ashish Semwal
Dept. of Computer Science and Engineering

Submitted by:
Mohammad Aneesh
Roll no: 19134501023
BTech (CSE) 7th Sem
UNIT – 1
Introduction to Embedded System:

System
A system is an arrangement in which all of its units work together according to a set of rules.
It can also be defined as a way of working, organizing, or doing one or many tasks according to a
fixed plan. For example, a watch is a time-displaying system. Its components follow a set of rules to
show time. If one of its parts fails, the watch will stop working. So we can say that, in a system, all
subcomponents depend on each other.

Embedded System
As its name suggests, Embedded means something that is attached to another thing. An embedded
system can be thought of as a computer hardware system having software embedded in it. An
embedded system can be an independent system, or it can be a part of a large system. An embedded
system is a microcontroller or microprocessor-based system which is designed to perform a specific
task. For example, a fire alarm is an embedded system; it will sense only smoke.
An embedded system has three components −
• It has hardware.
• It has application software.
• It has a Real-Time Operating System (RTOS) that supervises the application software
and provides mechanisms to let the processor run processes as per a schedule,
following a plan to control latencies. The RTOS defines the way the system works: it
sets the rules during execution of the application program. A small-scale embedded
system may not have an RTOS.
So we can define an embedded system as a microcontroller-based, software-driven, reliable, real-
time control system.

Characteristics of an Embedded System


• Single-functioned − An embedded system usually performs a specialized operation
and does the same repeatedly. For example: A pager always functions as a pager.
• Tightly constrained − All computing systems have constraints on design metrics, but
those on an embedded system can be especially tight. A design metric is a measure of
an implementation's features, such as its cost, size, power, and performance. The
system must be small enough to fit on a single chip, must perform fast enough to
process data in real time, and must consume minimum power to extend battery life.
• Reactive and Real-time − Many embedded systems must continually react to changes
in the system's environment and must compute certain results in real time without
delay. Consider a car cruise controller: it continually monitors and reacts to speed
and brake sensors. It must compute acceleration or deceleration repeatedly within a
limited time; a delayed computation can result in failure to control the car.
• Microprocessor-based − It must be microprocessor- or microcontroller-based.
• Memory − It must have memory, as its software is usually embedded in ROM. It
usually does not need secondary memory.
• Connected − It must have connected peripherals to connect input and output devices.
• HW-SW systems − Software is used for more features and flexibility. Hardware is
used for performance and security.
Advantages
• Easily Customizable
• Low power consumption
• Low cost
• Enhanced performance

Disadvantages
• High development effort
• Larger time to market

Basic Structure of an Embedded System


The following illustration shows the basic structure of an embedded system −

• Sensor − It measures a physical quantity and converts it into an electrical signal which
can be read by an observer or by an electronic instrument such as an A-D converter. A
sensor stores the measured quantity in memory.
• A-D Converter − An analog-to-digital converter converts the analog signal sent by
the sensor into a digital signal.
• Processor & ASICs − Processors process the data to measure the output and store it
to the memory.
• D-A Converter − A digital-to-analog converter converts the digital data fed by the
processor into an analog signal.
• Actuator − An actuator compares the output given by the D-A converter with the actual
(expected) output stored in it and produces the approved output.
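As a rough sketch of this sensor-to-actuator chain in C (the 10-bit resolution, the 0–5 V range, and all function names are invented for illustration, not taken from any real device):

```c
#include <stdint.h>

/* Hypothetical 10-bit ADC: maps 0.0-5.0 V onto codes 0-1023. */
static uint16_t adc_convert(double volts) {
    if (volts < 0.0) volts = 0.0;
    if (volts > 5.0) volts = 5.0;
    return (uint16_t)(volts / 5.0 * 1023.0 + 0.5);
}

/* Processing step: placeholder for the real control algorithm. */
static uint16_t process(uint16_t sample) {
    return sample / 2;               /* e.g. halve the commanded output */
}

/* Hypothetical 10-bit DAC: maps codes 0-1023 back to 0.0-5.0 V. */
static double dac_convert(uint16_t code) {
    return code / 1023.0 * 5.0;
}

/* One pass through the chain: sensor voltage in, actuator voltage out. */
double control_cycle(double sensor_volts) {
    uint16_t raw = adc_convert(sensor_volts);  /* A-D converter */
    uint16_t out = process(raw);               /* processor     */
    return dac_convert(out);                   /* D-A converter -> actuator */
}
```

Here `process()` stands in for whatever algorithm the processor runs; a real system would read the ADC and drive the DAC through hardware registers rather than function arguments.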

Introduction to embedded processors:


The processor is the heart of an embedded system. It is the basic unit that takes inputs and produces an
output after processing the data. For an embedded system designer, knowledge of both
microprocessors and microcontrollers is necessary.

Processors in a System
A processor has two essential units −

• Program Flow Control Unit (CU)


• Execution Unit (EU)
The CU includes a fetch unit for fetching instructions from the memory. The EU has circuits that
implement the instructions pertaining to data transfer operation and data conversion from one form
to another.
The EU includes the Arithmetic and Logical Unit (ALU) and also the circuits that execute
instructions for a program control task such as interrupt, or jump to another set of instructions.
A processor runs fetch-execute cycles, executing the instructions in the same sequence as they are
fetched from memory.
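The fetch-execute cycle can be sketched as a toy interpreter: a program counter fetches the next opcode, and a switch plays the role of the execution unit (the three opcodes here are invented for illustration; no real instruction set is implied):

```c
#include <stdint.h>
#include <stddef.h>

/* Toy opcodes (invented): ADD immediate, SUB immediate, HALT. */
enum { OP_HALT = 0x00, OP_ADD = 0x01, OP_SUB = 0x02 };

int run(const uint8_t *program, size_t len) {
    size_t pc  = 0;   /* program counter, maintained by the control unit */
    int    acc = 0;   /* accumulator, operated on by the execution unit  */
    while (pc < len) {
        uint8_t op = program[pc++];            /* fetch                */
        switch (op) {                          /* decode and execute   */
        case OP_ADD:  acc += program[pc++]; break;
        case OP_SUB:  acc -= program[pc++]; break;
        case OP_HALT: return acc;
        }
    }
    return acc;
}
```

Note how the program counter always advances in fetch order, matching the sequential fetch-execute behaviour described above.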

Types of Processors
Processors can be of the following categories −
• General Purpose Processor (GPP)
o Microprocessor
o Microcontroller
o Embedded Processor
o Digital Signal Processor
o Media Processor
• Application Specific System Processor (ASSP)
• Application Specific Instruction Processors (ASIPs)
• GPP core(s) or ASIP core(s) on either an Application Specific Integrated Circuit
(ASIC) or a Very Large Scale Integration (VLSI) circuit.

Microprocessor
A microprocessor is a single VLSI chip having a CPU. In addition, it may also have other units such
as caches, a floating-point arithmetic unit, and pipelining units that help in faster
processing of instructions.
Earlier-generation microprocessors' fetch-and-execute cycle was guided by a clock frequency on the
order of ~1 MHz. Processors now operate at clock frequencies of 2 GHz and above.
Microcontroller
A microcontroller is a single-chip VLSI unit (also called microcomputer) which, although having
limited computational capabilities, possesses enhanced input/output capability and a number of on-
chip functional units.

(Block diagram: CPU, RAM, ROM, I/O ports, timer, and serial COM port integrated on a single chip)

Microcontrollers are particularly used in embedded systems for real-time control applications with
on-chip program memory and devices.

Microprocessor vs Microcontroller
Microprocessor:
• Multitasking in nature; can perform multiple tasks at a time. For example, on a computer we can play music while writing text in a text editor.
• RAM, ROM, I/O ports, and timers can be added externally and can vary in number.
• Designers can decide the number of memory or I/O ports needed.
• The external support of memory and I/O ports makes a microprocessor-based system heavier and costlier.
• External devices require more space, and their power consumption is higher.

Microcontroller:
• Single-task oriented. For example, a washing machine is designed for washing clothes only.
• RAM, ROM, I/O ports, and timers cannot be added externally; these components are embedded together on a chip and are fixed in number.
• The fixed amount of memory and I/O makes a microcontroller ideal for a limited but specific task.
• Microcontrollers are lightweight and cheaper than microprocessors.
• A microcontroller-based system consumes less power and takes less space.
Embedded software architectures
An embedded software architecture divides the software into multiple layers. The
important layers in embedded software are

• Application layer
• Middleware layer
• Firmware layer

Application layer: The application layer is mostly written in high-level languages like Java, C++, or C#
with rich GUI support. The application layer calls the middleware API in response to an action by the user
or an event.

Middleware layer: The middleware layer is mostly written in C or C++ with no rich GUI support. The
middleware software maintains the state machine of the device and is responsible for handling requests
from the upper layer and the lower-level layer. The middleware exposes a set of API functions which
the application must call to use the services offered by the middleware; conversely, the
middleware can send data to the application layer via an IPC mechanism.

Firmware layer: The firmware layer is almost always written in C. The firmware is responsible for talking
to the chipset, either configuring registers or reading from the chipset registers. The firmware exposes
a set of APIs that the middleware can call to perform specific tasks.

Architectures used for processor/controller design: Harvard or Von Neumann
Harvard architecture:

• It has separate buses for instruction as well as data fetching.


• Easier to pipeline, so high performance can be achieved.
• Comparatively high cost.
• Since data memory and program memory are stored in physically different locations, there is no
chance of accidental corruption of program memory.

Von-Neumann architecture:

• It shares a single common bus for instruction and data fetching.


• Low performance as compared to Harvard architecture.
• It is cheaper.
• Accidental corruption of program memory may occur if data memory and program memory
are stored physically in the same chip.

RISC and CISC are the two common Instruction Set Architectures:
RISC:
• Reduced Instruction Set Computing.
• It contains a smaller number of instructions.
• Supports instruction pipelining, giving increased execution speed.
• Orthogonal instruction set.
• Operations are performed on registers only; the only memory operations are load and store.
• A larger number of registers are available.
• The programmer needs to write more code to execute a task, since the instructions are simpler.
• Instructions are of a single, fixed length.
• Less silicon usage and lower pin count.
• Typically used with the Harvard architecture.

CISC:
• Complex Instruction Set Computing.
• It contains a larger number of instructions.
• Instruction pipelining is typically not a feature.
• Non-orthogonal instruction set.
• Operations are performed either on registers or on memory, depending on the instruction.
• The number of general-purpose registers is very limited.
• Instructions are like macros in the C language: a programmer can achieve the desired
functionality with a single instruction, which provides the effect of several simpler RISC
instructions.
• Instructions are of variable length.
• More silicon usage, since additional decoder logic is required to implement the complex
instruction decoding.
• Can use the Harvard or Von Neumann architecture.
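The load/store distinction can be illustrated with one C statement and the hypothetical instruction sequences a compiler might emit for it (the mnemonics are illustrative only, not from any real ISA):

```c
/* A RISC machine must bring memory operands into registers before operating
   on them; a CISC machine may operate on memory directly. The comments show
   hypothetical instruction sequences for `*dst += *src;`. */
void add_to_memory(int *dst, const int *src) {
    /* RISC (load/store, fixed-length):    CISC (memory operands, variable-length):
         LOAD  r1, [src]                      ADD [dst], [src]   ; one instruction
         LOAD  r2, [dst]
         ADD   r2, r2, r1
         STORE [dst], r2                                                           */
    *dst += *src;
}
```

Four short fixed-length instructions on the RISC side versus one longer instruction on the CISC side — exactly the trade-off the two lists above describe.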

Multitasking
Multitasking is the method of running several programs or processes at the same time. Most modern
OS support multitasking for maximum processor utilization. There are mainly two types of
multitasking which are Preemptive and Cooperative multitasking.

Cooperative multitasking: Cooperative multitasking is also referred to as non-preemptive multitasking.
The operating system never initiates a context switch from one executing process to another during
cooperative multitasking. A context switch occurs when processes voluntarily yield control regularly,
or when they are inactive or logically halted, to enable many applications to run concurrently.
Additionally, all processes must cooperate for the scheduling method to work.

Preemptive multitasking: The operating system may initiate a context switch from the currently
running process to another during preemptive multitasking. In other words, the OS can stop the
execution of the current process and reallocate the processor to another. The operating system uses a
set of criteria to determine how long a process should run before another process is given access to the
processor.
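A cooperative scheduler can be sketched as a loop that only regains control when a task voluntarily returns (task names and their behaviour are invented for illustration):

```c
#include <stdbool.h>

#define NTASKS 2
int ticks[NTASKS];                 /* how many steps each task has run */

/* Each task does one step of work, then yields by returning.
   It returns false once it has no more work to do. */
static bool task_blink(void)  { ticks[0]++; return ticks[0] < 3; }
static bool task_sample(void) { ticks[1]++; return ticks[1] < 3; }

void run_cooperative(void) {
    bool (*tasks[NTASKS])(void) = { task_blink, task_sample };
    bool alive[NTASKS] = { true, true };
    bool any = true;
    while (any) {                  /* round-robin: control passes only on return */
        any = false;
        for (int i = 0; i < NTASKS; i++)
            if (alive[i]) { alive[i] = tasks[i](); any = any || alive[i]; }
    }
}
```

If `task_blink` looped forever instead of returning, `task_sample` would never run again — exactly the weakness of cooperative multitasking against uncooperative programs.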
Key differences between Preemptive and Cooperative Multitasking

• Definition − The preemptive OS may begin context switching from one running process to another; the cooperative OS never starts context switching from one executing process to another.

• Control − Preemptive multitasking interrupts programs and grants control to processes outside the application's control; cooperative multitasking never unexpectedly interrupts a process.

• Ideal for − Preemptive multitasking is ideal for multiple users; cooperative multitasking is ideal for a single user.

• Cores − Preemptive multitasking may use multiple cores; cooperative multitasking may use a single core.

• Malicious programs − Under preemptive multitasking, a malicious application that initiates an indefinite loop only affects itself and has no effect on other programs or threads; under cooperative multitasking, a malicious application may block the entire system by busy waiting or performing an indefinite loop and refusing to give up control.

• Applications − Preemptive multitasking forces applications to share the processor, whether they want to or not; cooperative multitasking requires all applications to collaborate, and if one program doesn't cooperate, it may use all of the processor resources.

• Examples − Preemptive: UNIX, Windows 95, and Windows NT. Cooperative: Macintosh OS versions 8.0–9.2.2 and Windows 3.x.

Micro Kernel
A microkernel is a type of operating system architecture that, like other architectures, is used for file
management, memory management, and the scheduling of processes.
The difference in this type of architecture is that separate address spaces are allotted for
different purposes like file sharing, scheduling, and other kernel services.
Because each of these services has its own address space, the size of the kernel, and hence of the
operating system, is reduced.
The basic idea of microkernel design is to achieve high reliability by splitting the operating system
into small, well-defined modules. Only the microkernel itself runs in kernel mode; the other modules
run as user-space processes.

Advantages
• It is small and isolated, leading to better functioning
• It is more secure due to the separation of address spaces
• New features can be added without recompiling the kernel
• The architecture is more flexible, and different services can coexist in the system
• Fewer system crashes compared to a monolithic system
Disadvantages
• It is expensive compared to the monolithic system architecture.
• Function calls are needed when drivers are implemented as processes.
• Performance of a microkernel system can suffer because of the overhead of message
passing between services.

The architecture of the microkernel is given below −

Exokernel
Exokernel is a type of operating system developed at the Massachusetts Institute of Technology that
seeks to provide application-level management of hardware resources. The exokernel architecture is
designed to separate resource protection from management to facilitate application-specific
customization.
Conventional operating systems always have an impact on the performance, functionality and scope
of applications that are built on them because the OS is positioned between the applications and the
physical hardware. The exokernel operating system attempts to address this problem by eliminating
the notion that an operating system must provide abstractions upon which to build applications. The
idea is to impose as few abstractions as possible on the developers and to provide them with the
liberty to use abstractions as and when needed. The exokernel architecture is built such that a small
kernel moves all hardware abstractions into untrusted libraries known as library operating systems.
The main goal of an exokernel is to ensure that there is no forced abstraction.
Advantages
• Improved performance of applications
• More efficient use of hardware resources through precise resource allocation and
revocation
• Easier development and testing of new operating systems
• Each user-space application is allowed to apply its own optimized memory management

Disadvantages
• Reduced consistency
• Complex design of exokernel interfaces

Monolithic Kernel
The monolithic kernel manages the system's resources between the system application and the
system hardware. Unlike the microkernel, user and kernel services are run in the same address space.
It increases the kernel size and also increases the size of the OS.
The monolithic kernel offers CPU scheduling, device management, file management, memory
management, process management, and other OS services via the system calls. All of these
components, including file management and memory management, are located within the kernel.
The user and kernel services use the same address space, resulting in a fast-executing operating
system. One drawback of this kernel is that if any one process or service of the system fails, the
complete system crashes. The entire operating system must be modified to add a new service to a
monolithic kernel.

Advantages
• The monolithic kernel runs quickly because memory management, file management,
process scheduling, etc., all execute in the same address space.
• All of the components may interact directly with each other and with the kernel.
• It is a single large process that executes entirely within a single address space.
• Its structure is easy and simple. The kernel contains all of the components required for
processing.

Disadvantages
• If the user needs to add a new service, the complete operating system must be
modified.
• It is not easy to port code written for a monolithic operating system.
• If any of the services fails, the entire system fails.
UNIT – 2
Embedded Hardware Architecture – 32 Bit Microcontrollers

Brief History of 8051


The first microprocessor, the 4004, was invented by Intel Corporation. The 8085 and 8086 microprocessors
were also invented by Intel. In 1981, Intel introduced an 8-bit microcontroller called the 8051. It was
referred to as a system on a chip because it had 128 bytes of RAM, 4K bytes of on-chip ROM, two timers,
one serial port, and four 8-bit-wide ports, all on a single chip. When it became widely popular, Intel
allowed other manufacturers to make and market different flavors of the 8051 that were code-compatible
with it. This means that if you write your program for one flavor of the 8051, it will run on other flavors
too, regardless of the manufacturer. This has led to several versions with different speeds and
amounts of on-chip RAM.

8051 Flavors / Members


• 8052 microcontroller − 8052 has all the standard features of the 8051 microcontroller
as well as an extra 128 bytes of RAM and an extra timer. It also has 8K bytes of on-
chip program ROM instead of 4K bytes.
• 8031 microcontroller − It is another member of the 8051 family. This chip is often
referred to as a ROM-less 8051, since it has 0K bytes of on-chip ROM. You must add
external ROM to it, containing the program to be fetched and executed; this program
can be as large as 64K bytes. In the process of adding external ROM to the 8031, two
of its four ports are lost. To solve this problem, we can add external
I/O to the 8031.
Comparison between 8051 Family Members
The following table compares the features available in 8051, 8052, and 8031.

Feature 8051 8052 8031

ROM (bytes) 4K 8K 0K

RAM (bytes) 128 256 128

Timers 2 3 2

I/O pins 32 32 32

Serial port 1 1 1

Interrupt sources 6 8 6

Features of 8051 Microcontroller


An 8051 microcontroller comes bundled with the following features −

• 4K bytes of on-chip program memory (ROM)


• 128 bytes on-chip data memory (RAM)
• Four register banks
• 128 user defined software flags
• 8-bit bidirectional data bus
• 16-bit unidirectional address bus
• 32 general purpose registers each of 8-bit
• 16-bit timers (usually 2, but may have more or fewer)
• Three internal and two external interrupts
• Four 8-bit ports (short models have two 8-bit ports)
• 16-bit program counter and data pointer
• 8051 may also have a few special features such as UARTs, ADC, Op-amp, etc.

Block Diagram of 8051 Microcontroller


The following illustration shows the block diagram of an 8051 microcontroller −

Registers:
The most widely used registers of the 8051 are A (accumulator), B, R0–R7, DPTR (data pointer),
and PC (program counter). All these registers are 8 bits wide, except DPTR and PC, which are 16 bits.

Storage Registers in 8051:


We will discuss the following types of storage registers here −

• Accumulator
• R register
• B register
• Data Pointer (DPTR)
• Program Counter (PC)
• Stack Pointer (SP)
Accumulator
The accumulator, register A, is used for all arithmetic and logic operations. If the accumulator is not
present, then every result of each calculation (addition, multiplication, shift, etc.) is to be stored into
the main memory. Access to main memory is slower than access to a register like the accumulator
because the technology used for the large main memory is slower (but cheaper) than that used for a
register.
The "R" Registers
The "R" registers are a set of eight registers, namely R0 through R7. These registers function as
auxiliary or temporary storage registers in many operations. Consider the example of adding 10
and 20: store the value 10 in the accumulator and the value 20 in, say, register R4. To perform
the addition, execute the following command −
ADD A, R4
After executing this instruction, the accumulator will contain the value 30. Thus the "R" registers are
very important auxiliary, or helper, registers. The accumulator alone would not be very useful if it
were not for these "R" registers. The "R" registers are meant for the temporary storage of values.
Let us take another example. We will add the values in R1 and R2 together and then subtract the
values of R3 and R4 from the result.
MOV A,R3 ;Move the value of R3 into the accumulator
ADD A,R4 ;Add the value of R4
MOV R5,A ;Store the resulting value temporarily in R5
MOV A,R1 ;Move the value of R1 into the accumulator
ADD A,R2 ;Add the value of R2
SUBB A,R5 ;Subtract the value of R5 (which now contains R3 + R4)
As you can see, we used R5 to temporarily hold the sum of R3 and R4. Of course, this is not the
most efficient way to calculate (R1 + R2) – (R3 + R4), but it does illustrate the use of the "R" registers
as a way to store values temporarily.
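The same computation can be modelled in C terms, with the accumulator and R registers represented as variables (a sketch only; it assumes the borrow flag is clear before SUBB, as in the sequence above):

```c
#include <stdint.h>

/* C model of the 8051 sequence above: one accumulator, eight R registers. */
uint8_t A;            /* accumulator */
uint8_t R[8];         /* R0..R7      */

uint8_t sum_diff(void) {
    A = R[3];          /* MOV  A,R3 */
    A += R[4];         /* ADD  A,R4 */
    R[5] = A;          /* MOV  R5,A  -- R5 temporarily holds R3 + R4 */
    A = R[1];          /* MOV  A,R1 */
    A += R[2];         /* ADD  A,R2 */
    A -= R[5];         /* SUBB A,R5 (borrow assumed clear) */
    return A;          /* (R1 + R2) - (R3 + R4) */
}
```

Each C statement corresponds to one instruction, which makes the accumulator-centric data flow easy to see: every arithmetic step passes through A.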
The "B" Register
The "B" register is very similar to the Accumulator in the sense that it may hold an 8-bit (1-byte)
value. The "B" register is used only by two 8051 instructions: MUL AB and DIV AB. To quickly
and easily multiply or divide A by another number, you may store the other number in "B" and make
use of these two instructions. Apart from using MUL and DIV instructions, the "B" register is often
used as yet another temporary storage register, much like a ninth R register.
The Data Pointer
The Data Pointer (DPTR) is the 8051’s only user-accessible 16-bit (2-byte) register. The
Accumulator, R0–R7 registers and B register are 1-byte value registers. DPTR is meant for pointing
to data. It is used by the 8051 to access external memory using the address indicated by DPTR. DPTR
is the only 16-bit register available and is often used to store 2-byte values.
The Program Counter
The Program Counter (PC) is a 2-byte address which tells the 8051 where the next instruction to
execute can be found in the memory. PC starts at 0000h when the 8051 initializes and is incremented
every time after an instruction is executed. PC is not always incremented by 1. Some instructions
may require 2 or 3 bytes; in such cases, the PC will be incremented by 2 or 3.
Branch, jump, and interrupt operations load the Program Counter with an address other than the
next sequential location. Activating a power-on reset causes all values in the register to be lost.
It means the value of the PC is 0 upon reset, forcing the CPU to fetch the first opcode from ROM
location 0000. We must therefore place the first byte of opcode at ROM location 0000, because that
is where the CPU expects to find the first instruction.
The Stack Pointer (SP)
The Stack Pointer, like all registers except DPTR and PC, may hold an 8-bit (1-byte) value. The
Stack Pointer tells the location from where the next value is to be removed from the stack. When a
value is pushed onto the stack, the value of SP is incremented and then the value is stored at the
resulting memory location. When a value is popped off the stack, the value is returned from the
memory location indicated by SP, and then the value of SP is decremented.
This order of operation is important. SP will be initialized to 07h when the 8051 is initialized. If a
value is pushed onto the stack at the same time, the value will be stored in the internal RAM address
08h because the 8051 will first increment the value of SP (from 07h to 08h) and then will store the
pushed value at that memory address (08h). SP is modified directly by the 8051 by six instructions:
PUSH, POP, ACALL, LCALL, RET, and RETI.
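The push/pop order can be modelled in C (a sketch, not cycle-accurate; 16-bit values pushed by calls are handled byte by byte on a real 8051):

```c
#include <stdint.h>

/* Model of the 8051 stack discipline: PUSH pre-increments SP and then stores;
   POP reads and then decrements. SP resets to 07h, so the first pushed byte
   lands at internal RAM address 08h. */
uint8_t iram[256];    /* internal RAM */
uint8_t SP = 0x07;    /* reset value of the stack pointer */

void push(uint8_t v) {
    SP++;             /* increment first...            */
    iram[SP] = v;     /* ...then store at the new SP   */
}

uint8_t pop(void) {
    uint8_t v = iram[SP];  /* read the value at SP...  */
    SP--;                  /* ...then decrement        */
    return v;
}
```

Pushing one byte right after reset leaves SP at 08h with the value stored there, matching the order of operations described above.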

Memory:
Memory is an important part of an Embedded system. It stores the control algorithm or firmware of
an Embedded system.
• Some processors/controllers contain built-in memory; this is referred to as on-chip memory.
• Others do not contain any memory inside the chip and require external memory to be
connected to the controller/processor to store the control algorithm. This is called off-chip
memory.
ROM (Read only Memory)
The program memory or code storage memory of an embedded system stores the program
instructions. The code memory retains its contents even after the power to it is turned off. It is
generally known as non-volatile storage memory. Depending on the fabrication, erasing, and
programming techniques, ROMs are classified into the following types −
• MROM
• PROM
• EPROM
• EEPROM
• FLASH
MROM:
• Masked ROM (MROM) is a one-time programmable device. Masked ROM
makes use of hardwired technology for storing data. The device is factory-programmed
by a masking and metallisation process at the time of production itself, according to the data
provided by the end user.
• The primary advantage of MROM is its low cost for high-volume production; it is the least
expensive type of solid-state memory.
• The limitation of MROM-based firmware storage is the inability to modify the device
firmware, which rules out firmware upgrades. Since the MROM bit storage is permanent, it is not
possible to alter the bit information.
Programmable Read Only Memory (PROM) / (OTP):
Unlike masked ROM, One-Time Programmable (OTP) memory or PROM is not pre-
programmed by the manufacturer; the end user is responsible for programming these devices. This
memory has nichrome or polysilicon wires arranged in a matrix, which can be functionally
viewed as fuses. It is programmed by a PROM programmer, which selectively burns the fuses
according to the bit pattern to be stored. Fuses which are not blown represent logic "1",
whereas fuses which are blown represent logic "0"; the default state is logic "1". OTP is
widely used for the commercial production of embedded systems whose prototyped versions are proven
and whose code is finalized. It is a low-cost solution for commercial production, but OTPs cannot be
reprogrammed.
Erasable Programmable Read Only Memory (EPROM):
EPROM gives the flexibility to reprogram the same chip. EPROM stores bit information by charging the
floating gate of an FET. Bit information is stored by using an EPROM programmer, which applies
a high voltage to charge the floating gate. The EPROM package contains a quartz crystal window for
erasing the stored information: if the window is exposed to ultraviolet rays for a fixed duration, the
entire memory is erased. Even though the EPROM chip is flexible in terms of re-programmability, it
needs to be taken out of the circuit board and put in a UV eraser device for 20 to 30 minutes, so erasing
is a tedious and time-consuming process.

Electrically Erasable Programmable Read Only Memory (EEPROM):


As the name indicates, the information contained in EEPROM memory can be altered using
electrical signals at the register level. EEPROMs can be erased and reprogrammed in-circuit. These chips
include a chip-erase mode, in which they can be erased in a few milliseconds. This provides
greater flexibility for system design. The only limitation is that their capacity is limited (a few
kilobytes) compared with standard ROM.
FLASH:
FLASH is the latest ROM technology and is the most popular ROM technology used in today's
embedded designs. FLASH memory is a variation of EEPROM technology. It combines the re-
programmability of EEPROM with the high capacity of standard ROMs. FLASH memory is organised
as sectors (blocks) or pages. FLASH memory stores information in an array of floating-gate MOSFET
transistors. The erasing of memory can be done at sector level or page level without affecting
the other sectors or pages. Each sector/page should be erased before re-programming. The typical
erase endurance of FLASH is 1000 cycles. The W27C512 from WINBOND is an example of 64KB
FLASH memory.
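The erase-before-reprogram rule can be modelled in C: erasing sets every bit of a sector to 1, and programming can only clear bits, so writing over already-programmed data corrupts it unless the sector is erased first (the sector size here is an illustrative value, not taken from any real part):

```c
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 64                       /* illustrative sector size */
uint8_t flash[SECTOR_SIZE];

/* Erase: every byte of the sector becomes 0xFF (all bits set to 1). */
void flash_erase_sector(void) {
    memset(flash, 0xFF, SECTOR_SIZE);
}

/* Program: a write can only change 1 bits to 0, modelled as a bitwise AND. */
void flash_write(int addr, uint8_t v) {
    flash[addr] &= v;
}
```

Writing 0x5A into a freshly erased byte gives 0x5A, but writing 0xA5 over it afterwards yields 0x00 rather than 0xA5 — which is why each sector must be erased before re-programming.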
Read-Write Memory / Random Access Memory (RAM):
• RAM is the data memory or working memory of the controller/processor; the
controller/processor can read from it and write to it. RAM is volatile, meaning that when the
power is turned off, all the contents are destroyed. RAM is a direct-access memory, meaning
we can access a desired memory location directly without traversing the entire memory
to reach the desired position (i.e., random access of memory locations).
• RAM generally falls into three categories: Static RAM (SRAM), dynamic RAM (DRAM) and
non-volatile RAM (NVRAM).

Static RAM (SRAM):


• Static RAM stores data in the form of voltage. SRAM cells are made up of flip-flops. Static RAM
is the fastest form of RAM available. Static RAM is realised using six transistors (MOSFETs):
four of the transistors are used for building the latch (flip-flop) part of the
memory cell and two for controlling the access. SRAM is fast in operation due to its
switching-based (flip-flop) storage, which needs no refresh. In its simplest representation, an SRAM cell can be
visualised as shown in the figure.
• This implementation, in its simpler form, can be visualised as two cross-coupled inverters
with read/write control through transistors. The four transistors in the middle form the
cross-coupled inverters. Access to the memory cell is controlled by the Word Line, which
controls the access transistors (MOSFETs) Q5 and Q6. The access transistors
control the connection to the bit lines B and B\.
• To write a value to the memory cell, apply the desired value to the bit control lines (for
writing '1', make B = 1 and B\ = 0; for writing '0', make B = 0 and B\ = 1) and assert the Word
Line (make the Word Line high). This operation latches the written bit in the flip-flop. For a read
operation, precharge both bit lines B and B\ to 1 and set the Word Line to 1.
• The major limitations of SRAM are low capacity and high cost.

Dynamic RAM (DRAM):


• Dynamic RAM stores data in the form of charge. DRAM cells are made up of MOS transistor gates.
The advantages of DRAM are its high density and low cost compared to SRAM. The disadvantage is
that, since the information is stored as charge, it leaks away with time; to prevent this, DRAM
cells need to be refreshed periodically. Special circuits called DRAM controllers are used for
the refresh operation, which is performed periodically at millisecond intervals.
• The MOSFET acts as the gate for the incoming and outgoing data whereas the capacitor acts
as the bit storage unit.

NVRAM:
Non-volatile RAM is a random-access memory with battery backup. It contains static RAM-based
memory and a miniature battery for supplying the memory in the absence of external power. The
memory and battery are packed together in a single package. NVRAM is used for the non-volatile
storage of results of operations or for setting up flags, etc. The life of NVRAM is expected
to be around 10 years.

ASSEMBLY LANGUAGE BASED DEVELOPMENT:


‘Assembly language' is the human-readable notation of ‘machine language', whereas ‘machine
language' is a processor-understandable language. Processors deal only with binaries (0s and 1s);
machine language is a binary representation consisting of 1s and 0s. Machine language is made
readable by using specific symbols called 'mnemonics'. Hence assembly language can be considered
an interface between processor and programmer.
Assembly language and machine language are processor/controller dependent, and an assembly
program written for one processor/controller family will not work with others. Assembly language
programming is the task of writing processor-specific machine code in mnemonic form; an assembler
converts the mnemonics into the actual processor instructions and associated data.
The general format of an assembly language instruction is an opcode followed by operands. The
opcode tells the processor/controller what to do, and the operands provide the data and information
required to perform the action specified by the opcode. We will analyse each of them with an 8051
ASM instruction as an example.
MOV A, #30 — this instruction mnemonic moves the decimal value 30 into the 8051 Accumulator
register. Here MOV A is the opcode and 30 is the operand (a single operand). The same instruction
written in machine language looks like 01110100 00011110,
where the first 8-bit binary value 01110100 represents the opcode MOV A,# and the second 8-bit
binary value 00011110 represents the operand 30.
Similar to C, an assembly language program can also be split across multiple source files called
modules. Each module is represented by a ‘.asm’ or ‘.src’ file, similar to ‘.c’ files in C
programming. This approach is known as ‘Modular Programming’. Conversion of the assembly language
program to machine language is carried out by a sequence of operations.

ADVANTAGES OF ASSEMBLY LANGUAGE BASED DEVELOPMENT:


Assembly language based development is the most common technique adopted from the beginning
of embedded technology development. The major advantages of assembly language based
development are listed below.
• Efficient Code Memory and Data Memory Usage (Memory Optimisation):
Since the developer is well versed with the target processor architecture and memory organisation,
optimised code can be written for performing operations. This leads to less utilisation of code
memory and efficient utilisation of data memory.
• High Performance:
Optimised code not only improves the code memory usage but also improves the total system
performance. Through effective assembly coding, optimum performance can be achieved for a target
application.
• Low Level Hardware Access:
Most of the code for low level programming like accessing external device specific registers from
the operating system kernel, device drivers, and low level interrupt routines, etc. are making use of
direct assembly coding since low level device specific operation support is not commonly available
with most of the high-level language cross compilers.
• Code Reverse Engineering:
Reverse engineering is the process of understanding the technology behind a product by extracting
the information from a finished product. Reverse engineering is performed by 'hackers' to reveal
the technology behind 'proprietary products'.

DRAW BACKS OF ASSEMBLY LANGUAGE BASED DEVELOPMENT:


The major limitations of assembly language based development are:
• High development time: assembly code is processor specific and lengthy, so writing and debugging it takes much longer than high-level code.
• Developer dependency: the code is readable only to developers familiar with the target processor's instruction set.
• Non-portability: code written for one processor family cannot be reused on another; it must be rewritten from scratch.
I/O operations and interrupt structure:
The I/O structure consists of programmed I/O, interrupt-driven I/O, DMA, the CPU, memory and
external devices, all connected with the help of peripheral I/O buses and general I/O buses.
The different types of I/O present inside the system are described below −

Programmed I/O
In programmed I/O, before input is written, the device must be ready to take the data;
otherwise the program must wait until the device or buffer is free and can take the input.
Once the input is taken, the program checks whether the output device or output buffer is free
before the data is printed. This polling is repeated for every transfer of data.
I/O Interrupts
To initiate any I/O operation, the CPU first loads the appropriate registers of the device
controller. The device controller then checks the contents of these registers to determine what operation to perform.
There are two possibilities if I / O operations want to be executed. These are as follows −
•Synchronous I / O − The control is returned to the user process after the I/O process
is completed.
• Asynchronous I/O − The control is returned to the user process without waiting for
the I/O process to finish. Here, I/O process and the user process run simultaneously.
DMA Structure
Direct Memory Access (DMA) is a method of handling I / O. Here the device controller directly
communicates with memory without CPU involvement.
After setting the resources of I/O devices like buffers, pointers, and counters, the device controller
transfers blocks of data directly to storage without CPU intervention.
DMA is generally used for high speed I / O devices.
ARM Bus technology:

AMBA stands for Advanced Microcontroller Bus Architecture. AMBA specification specifies an on
chip communication standard. This is used to design embedded microcontrollers with high
performance.

Embedded systems with ARM:

An ARM processor controls the embedded device. An ARM processor comprises a core (the execution
engine that processes instructions and manipulates data) plus the extensions that interface it with a bus.
Controllers coordinate important functional blocks of the system. Two commonly found controllers
are interrupt and memory controllers.
Peripherals provide all the input-output capability external to the chip and are responsible for the
uniqueness of the embedded device.

Serial bus protocols:


• I²C (Inter-Integrated Circuit), pronounced I-squared-C, is a multi-master, multi-slave,
single-ended, serial computer bus invented by Philips Semiconductor (now NXP
Semiconductors). It is typically used for attaching lower-speed peripheral ICs to processors
and microcontrollers. ICs network with one another over the common synchronous serial I2C bus.
• I2C bus communication uses only two lines (the serial data line SDA and the serial clock
line SCL), which simplifies the connections and provides a common way (protocol) of connecting
different or same types of I/O devices using synchronous serial communication.
The CAN Bus
• CAN bus (Controller Area Network) is a vehicle bus standard designed to allow
microcontrollers and devices to communicate with each other within a vehicle without a
host computer.
• CAN bus is a message-based protocol, designed specifically for automotive applications but
now also used in other areas such as aerospace, maritime, industrial automation and medical
equipment.
• Development of the CAN bus started in 1983 at Robert Bosch GmbH. The protocol was
officially released in 1986 at the Society of Automotive Engineers (SAE) congress in
Detroit, Michigan. The first CAN controller chips, produced by Intel and Philips, came on
the market in 1987.
The USB Bus
• USB was designed to standardize the connection of computer peripherals (including
keyboards, pointing devices, digital cameras, printers, portable media players, disk drives
and network adapters) to personal computers, both to communicate and to supply electric
power.
• It has become commonplace on other devices, such as smartphones, PDAs and video game
consoles.
• USB has effectively replaced a variety of earlier interfaces, such as serial and parallel ports,
as well as separate power chargers for portable devices.
• Variations like USB 1.X, USB 2.X, USB 3.X

Parallel bus protocols:


• A parallel bus enables a host computer or system to communicate 32 or 64 bits simultaneously
with other devices or systems, for example a network interface card (NIC) or a graphics card.

The PCI Bus


PCI stands for Peripheral Component Interconnect.
It is a standard expansion bus that was common in computers from roughly 1993 to 2007. It was
for a long time the standard bus for expansion cards in computers, such as sound cards, network
cards, etc. It is a parallel bus that, in its most common form, runs at a clock speed of 33 or
66 MHz and can be either 32 or 64 bits wide. It has since been replaced by PCI Express, which
is a serial bus as opposed to the parallel PCI. A PCI port, or more precisely a PCI slot, is
simply the connector used to attach a card to the bus; when empty, it sits there and does nothing.
Types of PCI:
These are the various PCI configurations:
• 32-bit PCI at 33 MHz transfers up to 133 MB/s.
• 64-bit PCI at 33 MHz transfers up to 266 MB/s.
• 32-bit PCI at 66 MHz transfers up to 266 MB/s.
• 64-bit PCI at 66 MHz transfers up to 533 MB/s.
Advantages of PCI :
• A maximum of five components can be connected to a PCI bus, and each of them can also
be replaced by a fixed device on the motherboard.
• There can be multiple PCI buses on the same computer.
• Later revisions (PCI-X) raised the bus clock from 33 MHz to 133 MHz, with transfer
rates up to about 1 GB/s.
• PCI devices operate at a maximum of 5 volts, and a pin can carry more than one signal.
Disadvantages of PCI :
• A PCI graphics card cannot access system memory directly.
• PCI does not support pipelining.

GPIB bus
• GPIB (General Purpose Interface Bus) was developed as an interface between computers and
measuring instruments. It is mainly used to connect PCs and measuring instruments. GPIB
was created as HP-IB, an in-house standard developed by Hewlett Packard, which was
approved by the IEEE (Institute of Electrical and Electronics Engineers) and became an
international standard. Many current measuring instruments support the GPIB interface as
standard, and it is widely used in measurement systems using PCs and measuring
instruments.
• GPIB standards include IEEE-488 and the higher-level protocol IEEE-488.2, which is
currently mainstream. In addition to the transfer methods specified in IEEE-488, IEEE-488.2
features syntax for text data and numeric expressions, and commands and queries that can be
used by all instruments. IEEE-488.2-compatible instruments can communicate with other
IEEE-488.2-compliant devices and with IEEE-488 devices within the scope prescribed in
IEEE-488.
Advantages of GPIB:

• GPIB employs a bus interface, and piggyback connectors make connecting and configuring
devices easy. It is possible to use a single PC interface even if more devices are connected to
the system in the future.
• Handshake communication ensures highly reliable data transfer.
• As the standard bus of the measuring instrument industry, the GPIB interface is employed by
many measuring instruments, allowing users to control a variety of measuring instruments
by mastering a single protocol.
• Devices with different communication speeds can be connected. (*The whole system will be
limited to the speed of the slowest device.)
Unit-3
Software Development
If traditional desktop software is written for computers, embedded software is integrated into non-computer
hardware to control its functions. The hardware is represented by various monitoring devices, machines,
sensors, wearables and practically every piece of modern electronics. Embedded technology, together with
networks and information technologies, constitutes the Internet of Things systems and is widely used in
medicine, manufacturing, appliances, the automotive industry, transportation and aviation.

Embedded Programming in C and C++


C and C++ are old programming languages but are still used for embedded programming in a hardware
chip. C programming language is the core of other programming languages. There are some advantages
to using C and C++. They are as follows.

1. Low-level programming language helps to access components easily.


2. Due to a lack of automatic memory management, compiled programs use little memory.
3. The speed of low-level programming languages is excellent.

Embedded programs allow hardware to monitor external events and control external devices. Both
hardware and software are important in embedded systems.

Embedded Software Development tools


To create software, the following basic components are needed:

• Operating systems (Windows, Ubuntu Linux, ThreadX, Nucleus RTOS)


• Languages (C, C++, Python, JavaScript, etc.)
• Tools (IDE, PDK, SDK, compiler toolchains, hardware and software debuggers (e.g., ST-Link, Segger))

Types of software development tools:


1) Editor: A text editor is the first tool you need to begin creating an embedded system. It is used to
write source code in programming languages C and C++ and save this code as a text file.
A good example of a text editor is Geany. This is a small and lightweight environment that uses the
GTK+ toolkit. Geany supports C, Java, PHP, HTML, Python, Perl, Pascal and other types of files.

2) Compiler: Source code is written in a high-level programming language. A compiler is a tool for
transforming the code into a low-level machine language code — the one that a machine can
understand.
Keil C51 is a popular compiler that creates apps for 8051 microcontrollers and translates source code
written in the C language.

3) Assembler: The function of this tool is to convert human-written assembly code into machine
language. Unlike a compiler, which can do so directly, an assembler first converts source code into
object code, which is then turned into machine language.
GNU Assembler (GAS) is widely used for Linux operating systems and can be found in the
Macintosh tools package.

4) Debugger: This is a critical tool for testing. It goes through the code and eliminates bugs and errors,
notifying places where they occur. Precisely, debuggers pinpoint the lines where issues are found, so
programmers can address them quickly.
A good debugger tool is IDA Pro, which works on the Linux, Windows and Mac OS X operating systems.
It has both free and commercial versions and is highly popular among developers.

5) Linker: Traditionally, code is written into small pieces and modules. A linker is a tool that combines
all these pieces together, creating a single executable program. GNU ld is one of the linker tools.

6) Emulator: An emulator is a replication of the target system with identical functionality and
components. This tool is needed to simulate software performance and to see how the code will work
in the real-time environment. Using emulators, programmers can change values in order to reach the
ideal performance of the code. Once the code is fully checked, it can be embedded in the device.

7) Integrated Development Environment (IDE): Talking about the list of embedded software
development tools, we cannot but mention integrated development environments. All the above-
mentioned tools are needed for creating your embedded software. But it would be extremely
inconvenient to use them separately, adding another layer of complexity to the project.
Hence, to simplify the development process, it is highly recommended to use integrated
environments. IDE is software that provides a set of necessary tools in one package.

Examples include PyCharm, WebStorm, Qt Creator, Visual Studio, Arduino, Eclipse, etc.

Program Modeling concepts in Single and Multiprocessor Systems


• Two models of programming language are procedural and object-oriented programming (OOP).
Examples of procedure-oriented languages are ALP (assembly language programming) and C.
• The C language provides for functions; a C program starts from main(), and other functions
are called from main. A function can call another function, function calls can be nested, and
there can be multiple function calls within a function.
• Programming elements in C are preprocessor directives, modifiers, conditional statements and loops,
pointers, function calls, multiple functions, function pointers, function queues and ISRs. A program
uses data of various types and with various data structures: arrays, queues, stacks, lists and trees.
• Examples of OOP languages are C++ and Java. C++ supports object-oriented as well as procedural
programming; Java is purely object-oriented. An object is a reusable software unit, and using these
units reusable software components are built. Large complex software can be built from such
software components.

Software Development process


The software development process is also known as the Software Development Life Cycle (SDLC).
It is a comprehensive set of rules, practices and steps that enable you to turn an idea for a software
product into an actual product.

1) Planning and Research: Preparation is key in software development. Before diving into a new
project, you should know precisely what the project will be, why you are undertaking it and
what you wish to achieve. The first step in the development process is all about planning and research.
At this stage, you should determine the following aspects:

• Scope of project
• Timeline
• Resources it will require
• Estimated costs

2) Feasibility/requirement Analysis: Requirement Analysis is the most important and necessary


stage in SDLC. The senior members of the team perform it with inputs from all the stakeholders
and domain experts or SMEs in the industry. Planning for the quality assurance requirements and
identifications of the risks associated with the projects is also done at this stage. We also document
the software requirements and get them accepted by the project stakeholders. This is accomplished
through the “SRS” – software requirement specification document.

3) Designing the software: The next phase brings together all the knowledge of requirements and
analysis into the design of the software project. This phase builds on the products of the previous
two, i.e. customer input and requirement gathering.

4) Developing the project: In this phase of the SDLC, the actual development begins, and the
program is built. The design is implemented by writing code. Developers have to follow the coding
guidelines described by their management, and programming tools like compilers, interpreters,
debuggers, etc. are used to develop and implement the code.

5) Testing: After the code is generated, it is tested against the requirements to make sure that the
products are solving the needs addressed and gathered during the requirements stage. During
this stage, unit testing, integration testing, system testing, acceptance testing are done.

6) Deployment: Once the software is certified and no bugs or errors are reported, it is deployed.
Then, based on the assessment, the software may be released as it is or with suggested enhancements
in a later release. After the software is deployed, its maintenance begins.

7) Maintenance: Once the client starts using the developed system, real issues come up and
need to be solved from time to time. This procedure, where care is taken of the developed
product, is known as maintenance.

HARDWARE AND SOFTWARE CO-DESIGN:


• Distributed embedded systems can be organized in many ways depending upon the needs of the
application and cost constraints. One good way to understand possible architectures is to consider
the different types of interconnection networks that can be used.
• A point-to-point link establishes a connection between exactly two PEs. Point-to-point links are
simple to design precisely because they deal with only two components; we do not have to worry
about other PEs interfering with communication on the link.
• Figure 5.1 shows a simple example of a distributed embedded system built from point-to-point
links. The input signal is sampled by the input device and passed to the first digital filter, F1, over
a point-to-point link. The results of that filter are sent through a second point-to-point link to
filter F2. The results in turn are sent to the output device over a third point-to-point link.
• A digital filtering system requires that its outputs arrive at strict intervals, which means that the
filters must process their inputs in a timely fashion. Using point-to-point connections allows both
F1 and F2 to receive a new sample and send a new output at the same time without worrying about
collisions on the communications network.
• It is possible to build a full-duplex, point-to-point connection that can be used for
simultaneous communication in both directions between the two PEs. (A half-duplex connection
allows for only one-way communication.)
• A bus is a more general form of network since it allows multiple devices to be connected
to it. Like a microprocessor bus, PEs connected to the bus have addresses. Communications on
the bus generally take the form of packets, as illustrated in Figure 5.2. A packet contains
an address for the destination and the data to be delivered.
Unit - 4

Real Time Operating Systems

Tasking Models
A real-time operating system (RTOS) serves real-time applications that process data without any
buffering delay. In an RTOS, processing time requirements are measured in tenths of seconds or
shorter increments of time. It is a time-bound system with defined, fixed time constraints. In
this type of system, processing must be done within the specified constraints; otherwise, the
system will fail.

1. Periodic Task

In periodic tasks, jobs are released at regular intervals. A periodic task repeats itself after a
fixed time interval and is denoted by a four-tuple: Ti = (Φi, Pi, ei, Di), where Φi is the phase,
Pi the period, ei the execution time and Di the relative deadline.

2. Dynamic Tasks

It is a sequential program that is invoked by the occurrence of an event. An event may be generated
by the processes external to the system or by processes internal to the system. Dynamically arriving
tasks can be categorized on their criticality and knowledge about their occurrence times.

1. Aperiodic Tasks: In this type of task, jobs are released at arbitrary time intervals.
Aperiodic tasks have soft deadlines or no deadlines.

2. Sporadic Tasks: They are like periodic tasks, i.e., they repeat at random instants. The
only difference is that sporadic tasks have hard deadlines. A sporadic task is denoted by a
three-tuple: Ti = (ei, gi, Di), where gi is the minimum separation between two consecutive instances.

3. Critical tasks: Critical tasks are those whose timely executions are critical. If deadlines
are missed, catastrophes occur.

For example, life-support systems and the stability control of aircraft. If necessary, critical
tasks are executed at a higher frequency than is strictly required.
4. Non-critical Tasks: Non-critical tasks are real-time tasks. As the name implies, they are
not critical to the application. However, they deal with time-varying data and hence
are useless if not completed within their deadlines. The goal of scheduling these tasks is to
maximise the percentage of jobs successfully executed within their deadlines.

Task States

In typical designs, a task has three states:

● Running (executing on the CPU);


● Ready (ready to be executed);
● Blocked (waiting for an event, I/O for example).

Round-Robin Algorithm:

Round Robin scheduling algorithm is one of the most popular scheduling algorithms which can be
implemented in most of the operating systems. This is the preemptive version of first come first
serve scheduling. The Algorithm focuses on Time Sharing. In this algorithm, every process gets
executed in a cyclic way. A certain time slice is defined in the system which is called time
quantum. Each process present in the ready queue is assigned the CPU for that time quantum; if
the execution of the process completes during that time, the process terminates, otherwise the
process goes back to the ready queue and waits for its next turn to complete its execution.

Advantages

1. It is easy to implement because it does not depend on burst time.

2. It doesn't suffer from the problem of starvation or the convoy effect.

3. All jobs get a fair allocation of CPU.


Disadvantages

1. The higher the time quantum, the higher the response time in the system.

2. The lower the time quantum, the higher the context switching overhead in the system.

3. Deciding a perfect time quantum is really a very difficult task in the system.

FIFO

FIFO which is also known as First In First Out is one of the types of page replacement

algorithm. The FIFO algorithm is used in the paging method for memory management in
an operating system that decides which existing page needs to be replaced in the queue.
FIFO algorithm replaces the oldest (First) page which has been present for the longest
time in the main memory.

How Does FIFO Page Replacement Work?


1. The OS maintains a list of all pages residing in memory.
2. A new page is brought in from secondary memory.
3. The new page requests a frame in main memory.
4. On a page fault, the head of the list, i.e. the oldest page, is removed.
5. The new page is added at the tail of the list.

Page Fault: A page fault occurs when a page requested by a program running on
the CPU is not present in main memory but is in the address space of that program.
A page fault generally raises an alert for the OS.
Preemptive Priority Scheduling

In Preemptive Priority Scheduling, at the time of arrival of a process in the ready queue, its Priority
is compared with the priority of the other processes present in the ready queue as well as with the
one being executed by the CPU at that point of time. The one with the highest priority
among all the available processes will be given the CPU next.

Rate-monotonic scheduling

Rate monotonic scheduling is a priority algorithm that belongs to the static priority
scheduling category of real-time operating systems. It is preemptive in nature. The
priority is decided according to the cycle time (period) of the processes involved: the
shorter the period, the higher the priority. Thus, if a process with the
highest priority starts execution, it will preempt the other running processes. The priority
of a process is inversely proportional to the period it will run for.
A set of n processes can be scheduled only if they satisfy the following equation:

U = Σ (Ci / Ti) ≤ n(2^(1/n) − 1)
Example:
An example to understand the working of Rate monotonic scheduling algorithm.

Process   Execution Time (C)   Time Period (T)

P1        3                    20

P2        2                    5

P3        2                    10

Utilisation bound: n(2^(1/n) - 1) = 3(2^(1/3) - 1) = 0.7798

U = 3/20 + 2/5 + 2/10 = 0.75

Since U = 0.75 is less than the bound 0.7798, the task set is schedulable
under rate monotonic scheduling.

Priority Inversion
Priority inversion is an operating system scenario in which a higher priority process is
preempted by a lower priority process. This implies the inversion of the priorities of the two
processes.
Problems due to Priority Inversion
Some of the problems that occur due to priority inversion are given as follows −

● A system malfunction may occur if a high priority process is not provided the
resources it requires.
● Priority inversion may also force the implementation of corrective measures. These
may include the resetting of the entire system.
● The performance of the system can be reduced due to priority inversion. This
may happen because it is imperative for higher priority tasks to execute promptly.

Priority Ceiling

Priority Ceiling Protocol is a job task synchronization protocol in a real-time system


that is better than Priority inheritance protocol in many ways. Real-Time Systems are
multitasking systems that involve the use of semaphore variables, signals, and events for
job synchronization.
In Priority ceiling protocol an assumption is made that all the jobs in the system have a
fixed priority. It does not fall into a deadlock state.

The basic properties of Priority Ceiling Protocols are:


1. Each of the resources in the system is assigned a priority ceiling.
2. The assigned priority ceiling is determined by the highest priority among all
the jobs which may acquire the resource.
3. It makes use of more than one resource or semaphore variable, thus eliminating
chain blocking.
4. A job is assigned a lock on a resource if no other job has acquired lock on that
resource.
5. A job J can acquire a lock only if the job’s priority is strictly greater than the
priority ceilings of all the locks held by other jobs.
6. If a high priority job has been blocked by a resource, then the job holding that
resource gets the priority of the high priority task.

Deadlock

Every process needs some resources to complete its execution, and resources are granted in
a sequential order:

1. The process requests a resource.

2. The OS grants the resource if it is available; otherwise the process waits.

3. The process uses the resource and releases it on completion.

A deadlock is a situation in which each process waits for a resource that is assigned to some
other process. In this situation, none of the processes gets executed, since the resource each
one needs is held by another process which is itself waiting for some other resource to
be released.

Let us assume that there are three processes P1, P2 and P3, and three different resources R1,
R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.

After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it can't
complete without R2. P2 in turn demands R3, which is being used by P3, and stops its
execution because it can't continue without R3. P3 then demands R1, which is being used by P1,
and therefore P3 also stops its execution.

In this scenario, a cycle is formed among the three processes. None of the processes is
progressing; they are all waiting. The computer becomes unresponsive since all the processes
are blocked.
Necessary conditions for Deadlocks

1. Mutual Exclusion

A resource can be shared only in a mutually exclusive manner; two processes cannot use the
same resource at the same time.

2. Hold and Wait

A process waits for some resources while holding another resource at the same time.

3. No Preemption

A resource, once allocated, cannot be forcibly taken away from a process; it is released only
when the holding process completes.

4. Circular Wait

All the processes must be waiting for the resources in a cyclic manner, so that the last process is
waiting for the resource which is being held by the first process.

Strategies for handling Deadlock

1. Deadlock Ignorance

Deadlock ignorance is the most widely used approach among all the mechanisms. It is employed
by many operating systems, mainly for end-user systems.
2. Deadlock prevention

Deadlock happens only when mutual exclusion, hold and wait, no preemption and circular wait
hold simultaneously. If it is possible to violate one of the four conditions at all times, then
deadlock can never occur in the system.

3. Deadlock avoidance

In deadlock avoidance, the operating system checks whether the system is in a safe state or an
unsafe state at every step it performs. Processing continues as long as the system remains in
a safe state; once the system would move to an unsafe state, the OS has to backtrack one step.

4. Deadlock detection and recovery

This approach lets processes fall into deadlock and then periodically checks whether a deadlock
has occurred in the system. If one has, it applies a recovery method, such as preempting
resources or terminating a process, to rid the system of the deadlock.

Process synchronization:

When two or more processes cooperate with each other, their order of execution must be
preserved; otherwise there can be conflicts in their execution and incorrect outputs can be
produced.

Race Condition

A race condition typically occurs when two or more threads read, write and possibly make
decisions based on memory that they are accessing concurrently, so the outcome depends on
the timing of their execution.
Critical Section

The regions of a program that access shared resources and may cause race conditions are
called critical sections. To avoid race conditions among processes, we must ensure that only
one process at a time executes within the critical section.

IPC:

"Inter-process communication is used for exchanging useful information between numerous
threads in one or more processes (or programs)."


Shared memory

A shared memory system is the fundamental model of inter-process communication. In a shared
memory system, the cooperating processes communicate with each other by establishing a
shared memory region in their address spaces.

Shared memory is the fastest form of inter-process communication, because once the region is
established, data is exchanged without kernel intervention.

If a process wants to initiate communication and has some data to share, it establishes the
shared memory region in its address space. Another process that wants to communicate must
then attach itself to the initiating process's shared address space, after which it can read the
shared data.

Let us see the working condition of the shared memory system step by step.

Working
In a shared memory system, the cooperating processes communicate to exchange data with
each other. To do so, they establish a shared region in their memory and share data by reading
and writing in that shared segment.

Let us discuss it by considering two processes.


Consider two cooperating processes, P1 and P2. Both processes have their own distinct
address spaces. Now assume P1 wants to share some data with P2.

So, P1 and P2 will have to perform the following steps −

Step 1 − Process P1 has some data to share with process P2. First P1 takes initiative and
establishes a shared memory region in its own address space and stores the data or information to
be shared in its shared memory region.

Step 2 − Now, P2 requires the information stored in the shared segment of P1. So, process P2
needs to attach itself to the shared address space of P1. Now, P2 can read out the data from there.

Step 3 − The two processes can exchange information by reading and writing data in the shared
segment of the process.

Memory Locking

Locking memory is one of the most important issues for real-time applications. In a real-time
environment, a process must be able to guarantee continuous memory residence to reduce
latency and to prevent paging and swapping.

This section describes the memory locking mechanisms that are available to real-time applications
in SunOS.

Under SunOS, the memory residency of a process is determined by its current state, the total
available physical memory, the number of active processes, and the processes' demand for
memory. This is appropriate in a time-share environment but is often unacceptable for a
real-time process, which must guarantee memory residence to reduce its memory access and
dispatch latency.
Real-time memory locking in SunOS is provided by a set of library routines. These routines allow
a process running with superuser privileges to lock specified portions of its virtual address space
into physical memory. Pages locked in this manner are exempt from paging until the pages are
unlocked or the process exits.

The operating system has a system-wide limit on the number of pages that can be locked at any
time. This limit is a tunable parameter whose default value is calculated at boot time, based on
the number of page frames minus a percentage, currently set at ten percent.

What is Main Memory

Main memory is central to the operation of a modern computer. It is a large array of words or
bytes, ranging in size from hundreds of thousands to billions of bytes. Main memory is a
repository of rapidly available information shared by the CPU and I/O devices; it is where
programs and data are kept while the processor is actively using them. Because main memory
is closely coupled to the processor, moving instructions and data into and out of the processor
is extremely fast. Main memory is also known as RAM (Random Access Memory). This memory
is volatile: RAM loses its data when a power interruption occurs.
What is Memory Management:

In a multiprogramming computer, the operating system resides in a part of memory and the
rest is used by multiple processes. The task of subdividing the memory among different
processes is called memory management. Memory management is a method in the
operating system to manage operations between main memory and disk during process
execution. The main aim of memory management is to achieve efficient utilization of
memory.
Why Memory Management is required:

● To allocate and de-allocate memory before and after process execution.
● To keep track of the memory space used by processes.
● To minimize fragmentation issues.
● To ensure proper utilization of main memory.
● To maintain data integrity while a process is executing.

Semaphore

The semaphore was proposed by Dijkstra in 1965 as a significant technique for managing
concurrent processes using a simple integer value, known as a semaphore. A semaphore is
simply an integer variable that is shared between threads. This variable is used to solve the
critical section problem and to achieve process synchronization in a multiprocessing
environment.

Semaphores are of two types:

1. Binary Semaphore –
This is also known as mutex lock. It can have only two values – 0 and 1. Its
value is initialized to 1. It is used to implement the solution of critical section
problems with multiple processes.
2. Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control access to a
resource that has multiple instances.
Message queues:

Message queues are kernel objects used to pass content to a task. Messages are typically void
pointers to a storage area containing the actual message. However, the pointer can point to
anything, even a function for the receiving task to execute. The meaning of the message is thus
application dependent.

Mailboxes

A previous section covered semaphores which can be employed to synchronize tasks, thus
providing a mechanism allowing orderly inter-task communication using global data.

However, communication via global data is hard to keep track of and error-prone, since a
task might "forget" the required semaphore operation before accessing the data. Moreover,
no protocol has yet been introduced for a controlled exchange of data.

Mailboxes serve to close this gap. A mailbox is a data buffer that can store a fixed number
of messages of a fixed size.
Tasks can store messages in a mailbox. If the mailbox is full, the task is blocked until space
becomes available. Of course, tasks can also retrieve messages from mailboxes. In this
case, the task is blocked if no message is available in the mailbox. Any number of tasks can
use the same mailbox for storing and retrieving messages.

Pipe

Conceptually, a pipe is a connection between two processes, such that the standard output
from one process becomes the standard input of the other process. In UNIX Operating
System, Pipes are useful for communication between related processes (inter-process
communication).

● A pipe is one-way communication only, i.e. we can use a pipe such that one
process writes to the pipe and the other process reads from it. A pipe is an
area of main memory that is treated as a "virtual file".
● The pipe can be used by the creating process, as well as all its child processes,
for reading and writing. One process can write to this “virtual file”, or pipe and
another related process can read from it.
● If a process tries to read before something is written to the pipe, the process is
suspended until something is written.
● The pipe system call finds the first two available positions in the process’s
open file table and allocates them for the read and write ends of the pipe.
Virtual socket

A virtual socket and a virtual CPU are "constructs" presented upstream to the tightly isolated
software container which we call a virtual machine. When an operating system runs inside it, it
detects the hardware (layout) within the virtual machine. The VMkernel schedules a Virtual
Machine Monitor (VMM) for every vCPU.
UNIT 5

Study of Micro C/OS-II or Vx Works

RTOS System Level Functions


A real-time operating system (RTOS) is a multitasking operating system for applications
with hard or soft real-time constraints. A real-time constraint is a constraint on the
occurrence of an event and on the system's expected response and latency to that event. An
RTOS also provides the perfection, correctness, protection and security features of any OS
kernel when performing multiple tasks. An RTOS responds to inputs immediately, i.e. in real
time, and each task is completed within a specified time limit.
The main reasons for using an RTOS are the effective use of the drivers available with the
RTOS, and that we can focus on developing the application code rather than creating and
maintaining a scheduling system. An RTOS also supports multi-threading with synchronization
mechanisms. The developed application code can be ported to other CPUs. Resource
allocation and management are also handled properly, and we can add new features
without affecting the high-priority functions or tasks.
Functions of RTOS
The important functions done by RTOS are task management, scheduling, resource allocation
and interrupt handling.

1. Task management:

In real-time applications, a process is called a task; it takes execution time and occupies
memory. Task management is the process of managing tasks through their life cycle. A task
can be in different states: Pended, Ready, Delayed, Suspended, and Run.
1.1 Task/Process States:
Each task/process will be in one of the following states: pended, ready,
suspended, delayed or run. The scheduler runs a task that is in the ready
state.
1.2 Typical Task Operations:
The important task operations are creating and deleting tasks, controlling, task
scheduling and obtaining task information.
2. Scheduling in RTOS:

In order to schedule tasks, information about them must be known. This information
includes the number of tasks, resource requirements, execution times and deadlines.

3. Resource Allocation in RTOS:

Resource allocation is necessary for any application to run on the system. When an
application is running, it requires the OS to allocate certain resources for it to be able to run.

4. Interrupts handling in RTOS:

An interrupt is a signal from a device attached to a computer or from a program within the
computer. It stops the main program and responds to the event that caused the interrupt.

RTOS task control functions include:


Task Context
When a task is switched out, its execution context, represented by the program counter,
stack, and register contents, is saved by the OS in a data structure known as a task control
block, to ensure that the task can resume when rescheduled. The context must be restored
from the task control block when the task state is set to running again.
Task Scheduling and Dispatch
Task scheduling and dispatch ensure that every task accesses the CPU as well as other system
resources effectively to ensure timely and successful completion of system computations.
This is carried out efficiently from a resource utilization point of view with correct
synchronization, with data and code protection for single tasks processing against incorrect
interference. Hence, different task scheduling and dispatch models must be used because the
appropriateness of OS models depends on the application features in use.
Coroutines
This is a cooperative multitasking model whose tasks are distributed over routines
referred to as coroutines. The tasks exchange program control mutually, as opposed to
relinquishing it to the RTOS: every task transfers control to another scheduled task once its
data and control state is saved. However, the scheduling responsibility, i.e. determining which
task gets control of the processor at a specified time, rests with the programmer as
opposed to the OS.
Interrupts
In most cases, task scheduling and dispatch must be responsive to timing and external
signals. However, it should not be assumed that running tasks can transfer control to
dispatchers on their own. When such scenarios occur, the interrupt capabilities available on
all processors are used to facilitate task switching. Different tasks in the system are either
switched by software or hardware interrupts. Hardware interrupts occur periodically from
clocks or asynchronously through external devices.

Memory Allocation Related Functions

● When a process is created, the memory manager allocates the memory addresses
(blocks) to it by mapping the process address space.
● Threads of a process share the memory space of the process.

Memory Managing Strategy for a system

● Fixed-blocks allocation
● Dynamic -blocks Allocation
● Dynamic Page-Allocation
● Dynamic Data memory Allocation
● Dynamic address-relocation
● Multiprocessor Memory Allocation
● Memory Protection to OS functions

Memory allocation in RTOSes

● An RTOS may disable support for dynamic block allocation, MMU support for
dynamic page allocation, and dynamic binding, as these increase the latency of
servicing the tasks and ISRs.
● An RTOS may not support memory protection of the OS functions, as this also increases
the latency of servicing the tasks and ISRs.
● User functions can then run in kernel space like kernel functions.
● An RTOS may provide for disabling memory protection among the tasks,
as this protection increases the memory requirement for each task.

Memory manager functions


(i) use of memory address space by a process,
(ii) specific mechanisms to share the memory space and
(iii) specific mechanisms to restrict sharing of a given memory space
(iv) optimization of the access periods of a memory by using a hierarchy of memories
(caches, primary memory and external secondary magnetic and optical memories).
Remember that the access periods increase in the following order: caches, primary memory,
and then external secondary magnetic or optical memories.

Semaphore Related Function


A semaphore is a variable which manages access to a shared resource so that no more tasks
than permitted use the same shared resource simultaneously.
Let us understand this by analogy. Assume that you want to use a dormitory when you visit
some place. That dormitory has eight beds that are shared by travellers, so it has eight keys.
When no one is using the dormitory, the number of free keys is eight. When someone arrives,
the manager hands over one key, and the number of free keys drops to seven. As people keep
coming in, all the beds get used, and eventually there are no free keys (zero keys). Now when
you arrive, the manager tells you that there is no bed available, notes down your contact
information and asks you to wait in the lounge until someone leaves. So you wait in the lounge,
doing nothing. When someone leaves, the manager calls you and gives you the key, so the
number of free keys is again zero.
In this analogy, the number of keys is the value of the semaphore, the manager is the kernel,
and the contact information of the waiting customers is the wait queue. Now, if we have eight
printers in a system, at most eight processes can use printers. Initially, when no process is
using a printer, the number of free printers is eight, and so is the value of the semaphore. The
manager is now the kernel, or the semaphore management module. When some process
requests access to a printer, the kernel checks whether the semaphore value is a positive
nonzero value. If it is, the kernel grants access and decreases the semaphore value by one, so
the value becomes seven. When all printers are busy, the value is zero. If any further process
requests access, it is denied, because the semaphore is unavailable due to its value being zero.
A process that is denied access goes to sleep, and the kernel adds it to the wait queue. When
some process that has been using a printer stops using it and releases the semaphore, the
kernel increments the semaphore value. On incrementing, the kernel checks whether some
process is waiting for this semaphore. If so, it wakes up that process, which starts using the
printer and decreases the semaphore value by one, so the value again remains zero.

Mailbox functions at OS
• Some OSes provide both the mailbox and queue IPC functions.
• When the IPC functions for a mailbox are not provided by an OS, the OS employs a
queue for the same purpose.
• A mailbox of a task can receive messages from other tasks and has a distinct ID.
• A mailbox (for a message) is an IPC mechanism through which a message at the OS can be
received by only one single destined task.
• Two or more tasks cannot take a message from the same mailbox.
• A task, on an OS function call, puts (posts and sends) into the mailbox only a
pointer to the mailbox message.
• A mailbox message may also include a header to identify the message-type specification.
• The OS provides for inserting and deleting a message via the mailbox message pointer.
Deleting means the message pointer points to Null.
• Each mailbox for a message needs initialization (creation) before using the functions in the
scheduler, with the message pointer pointing to Null.
• There may be a provision for multiple mailboxes for the multiple types or destinations of
messages. Each mailbox has an ID.
• Each mailbox usually has one message pointer only, which can point to a message.

Mailbox Related Functions at the OS


1. OSMBoxCreate creates a mailbox and initializes the mailbox contents with a NULL pointer
at *msg.
2. OSMBoxPost sends a message at *msg, which then no longer points to Null.
• An ISR can also post into a mailbox for a task.
3. OSMBoxWait (Pend) waits for *msg to become non-Null; the message is read when not Null,
after which *msg again points to Null.
• Time-out and error handling functions can be provided as Pend function arguments.
• An ISR is not permitted to wait for a message in a mailbox; only a task can wait.
4. OSMBoxAccept reads the message at *msg after checking for its presence (no wait).
It deletes (reads) the mailbox message when read, and *msg again points to Null.
• An ISR can also accept a mailbox message for a task.
5. OSMBoxQuery queries the mailbox *msg.
6. OSMBoxDelete deletes the mailbox.

Queue Related Functions


• Some OSes provide both the mailbox and queue IPC functions.
• Every OS provides queue IPC functions.
• When the IPC functions for a mailbox are not provided by an OS, the OS employs a queue
for the same purpose.
• The OS provides for inserting and deleting the message pointers or messages.
• Each queue for a message needs initialization (creation) before using the functions in the
scheduler for the message queue.
• There may be a provision for multiple queues for the multiple types or destinations of
messages. Each queue has an ID.
• Each queue either has a user-definable size (an upper limit on the number of bytes) or a
fixed pre-defined size assigned by the scheduler.
• When an RTOS call inserts into the queue, the number of bytes inserted is as per the
pointed type.
• For example, for an integer or float variable as a pointer, four bytes will be inserted
per call. If the pointer is to an array of 8 integers, then 32 bytes will be inserted into the
queue.
• When a queue becomes full, there may be a need for error handling and user code for
blocking the task(s). There may not be self-blocking.
Queue Related Functions at the OS
• OSQCreate ─ to create a queue and initialize the queue message blocks, with front and back
queue-top pointers, *QFRONT and *QBACK, respectively.
• OSQPost ─ to post a message to the message block as per the queue back pointer,
*QBACK. (Used by ISRs and tasks.)
• OSQPend ─ to wait for a queue message at the queue; reads and deletes the message when
received. (Wait; used by tasks.)
• OSQAccept ─ to read at the present queue front pointer after checking for the presence of a
message; after the read, the queue front pointer increments. (No wait; used by ISRs and tasks.)
• OSQFlush ─ to read the queue from front to back and delete the queue block, as it is not
needed after the flush; the queue front and back then point to QTop, the pointer to the start of
the queue. (Used by ISRs and tasks.)
• OSQQuery ─ to query the queue message block; the message is read but not deleted. The
function returns the pointer to the message queue, *QFRONT, if there are messages in the
queue, or else Null. It returns a pointer to the queue data structure, which contains *QFRONT,
the number of queued messages, the size of the queue, and a table of tasks waiting for
messages from the queue. (Query is used by tasks.)
• OSQPostFront ─ to send a message as per the queue front pointer, *QFRONT. Use of this
function is made when a message is urgent or is of higher priority than all the previously posted
messages in the queue. (Used in ISRs and tasks.)
• OSQDelete ─ to delete a queue.
