Embedded System
(A Central University)
ASSIGNMENT OF
“EMBEDDED SYSTEM”
System
A system is an arrangement in which all its units work together according to a set of rules.
It can also be defined as a way of working, organizing or doing one or many tasks according to a
fixed plan. For example, a watch is a time displaying system. Its components follow a set of rules to
show time. If one of its parts fails, the watch will stop working. So, we can say, in a system, all its
subcomponents depend on each other.
Embedded System
As its name suggests, Embedded means something that is attached to another thing. An embedded
system can be thought of as a computer hardware system having software embedded in it. An
embedded system can be an independent system, or it can be a part of a large system. An embedded
system is a microcontroller or microprocessor-based system which is designed to perform a specific
task. For example, a fire alarm is an embedded system; it will sense only smoke.
An embedded system has three components −
• It has hardware.
• It has application software.
• It has a Real-Time Operating System (RTOS) that supervises the application software
and provides a mechanism to let the processor run a process as per its schedule,
following a plan to control latencies. The RTOS defines the way the system works. It
sets the rules during the execution of the application program. A small-scale embedded
system may not have an RTOS.
So we can define an embedded system as a microcontroller-based, software-driven, reliable, real-
time control system.
Disadvantages
• High development effort
• Larger time to market
• Sensor − It measures the physical quantity and converts it to an electrical signal which
can be read by an observer or by any electronic instrument like an A2D converter. A
sensor stores the measured quantity to the memory.
• A-D Converter − An analog-to-digital converter converts the analog signal sent by
the sensor into a digital signal.
• Processor & ASICs − Processors process the data to measure the output and store it
to the memory.
• D-A Converter − A digital-to-analog converter converts the digital data fed by the
processor to analog data.
• Actuator − An actuator compares the output given by the D-A Converter to the actual
(expected) output stored in it and stores the approved output.
Processors in a System
A processor has two essential units −
• Control Unit − fetches the instructions from memory.
• Execution Unit − executes the instructions (arithmetic and logical operations).
Types of Processors
Processors can be of the following categories −
• General Purpose Processor (GPP)
o Microprocessor
o Microcontroller
o Embedded Processor
o Digital Signal Processor
o Media Processor
• Application Specific System Processor (ASSP)
• Application Specific Instruction Processors (ASIPs)
• GPP core(s) or ASIP core(s) on either an Application Specific Integrated Circuit
(ASIC) or a Very Large Scale Integration (VLSI) circuit.
Microprocessor
A microprocessor is a single VLSI chip having a CPU. In addition, it may also have other units such
as caches, a floating-point processing arithmetic unit, and pipelining units that help in faster
processing of instructions.
Earlier generation microprocessors' fetch-and-execute cycle was guided by a clock frequency of the
order of ~1 MHz. Processors now operate at clock frequencies of 2 GHz and above.
Microcontroller
A microcontroller is a single-chip VLSI unit (also called microcomputer) which, although having
limited computational capabilities, possesses enhanced input/output capability and a number of on-
chip functional units.
Microcontrollers are particularly used in embedded systems for real-time control applications with
on-chip program memory and devices.
Microprocessor vs Microcontroller
Microprocessor:
• Multitasking in nature; can perform multiple tasks at a time. For example, on a computer we
can play music while writing text in a text editor.
• RAM, ROM, I/O ports, and timers can be added externally and can vary in number.
• Designers can decide the number of memory or I/O ports needed.
• External support of memory and I/O ports makes a microprocessor-based system heavier
and costlier.
• External devices require more space, and their power consumption is higher.
Microcontroller:
• Single-task oriented. For example, a washing machine is designed for washing clothes only.
• RAM, ROM, I/O ports, and timers cannot be added externally. These components are
embedded together on a chip and are fixed in number.
• The fixed amount of memory and I/O makes a microcontroller ideal for a limited but
specific task.
• Microcontrollers are lightweight and cheaper than a microprocessor-based system.
• A microcontroller-based system consumes less power and takes less space.
Embedded software architectures
An embedded software architecture is a piece of software that is divided into multiple layers. The
important layers in embedded software are
• Application layer
• Middleware layer
• Firmware layer
Application layer: The application layer is mostly written in high-level languages like Java, C++, or
C# with rich GUI support. The application layer calls the middleware API in response to an action
by the user or an event.
Middleware layer: The middleware layer is mostly written in C or C++ with no rich GUI support.
The middleware software maintains the state machine of the device and is responsible for handling
requests from the upper layer and the lower-level layer. The middleware exposes a set of API
functions which the application must call to use the services offered by the middleware; conversely,
the middleware can send data to the application layer via an IPC mechanism.
Firmware layer: The firmware layer is almost always written in C. The firmware is responsible for
talking to the chipset, either configuring registers or reading from the chipset registers. The firmware
exposes a set of APIs that the middleware can call to perform specific tasks.
RISC and CISC are the two common Instruction Set Architectures:
RISC:
• Reduced Instruction Set Computing.
• It contains a smaller number of instructions.
• Instruction pipelining and increased execution speed.
• Orthogonal instruction set.
• Operations are performed on registers only; the only memory operations are load and store.
• A larger number of registers is available.
• The programmer needs to write more code to execute a task, since the instructions are simpler.
• Instructions are of a single, fixed length.
• Less silicon usage and pin count.
• Usually implemented with a Harvard architecture.
CISC:
• Complex Instruction Set Computing.
• It contains a greater number of instructions.
• The instruction pipelining feature does not exist.
• Non-orthogonal instruction set.
• Operations are performed either on registers or memory, depending on the instruction.
• The number of general-purpose registers is very limited.
• Instructions are like macros in the C language. A programmer can achieve the desired
functionality with a single instruction that would require several simpler instructions in
RISC.
• Instructions are of variable length.
• More silicon usage, since additional decoder logic is required to implement the complex
instruction decoding.
• Can be Harvard or Von Neumann architecture.
Multitasking
Multitasking is the method of running several programs or processes at the same time. Most modern
OS support multitasking for maximum processor utilization. There are mainly two types of
multitasking which are Preemptive and Cooperative multitasking.
Preemptive multitasking: The operating system may start a context switch from one process that is
currently running to another during preemptive multitasking. In other words, the OS can stop the
execution of the running process and reallocate the processor to another. The operating system uses
a set of criteria to determine how long a process should run before granting the CPU to another
process.
Cooperative multitasking: In cooperative multitasking, the OS never initiates a context switch; a
running process keeps the processor until it voluntarily yields control, so all applications must
cooperate for the scheme to work.
Key differences between Preemptive and Cooperative Multitasking
• Definition − In preemptive multitasking, the OS may begin context switching from one
running process to another. In cooperative multitasking, the OS never starts context
switching from one executing process to another.
• Control − Preemptive multitasking interrupts programs and grants control to processes that
are not under the control of the application. Cooperative multitasking never unexpectedly
interrupts a process.
• Malicious programs − Under preemptive multitasking, a malicious application that initiates
an indefinite loop only affects itself and has no effect on other programs or threads. Under
cooperative multitasking, a malicious application may block the entire system by busy
waiting or performing an indefinite loop and refusing to give up control.
• Applications − Preemptive multitasking forces apps to share the processor, whether they
want to or not. In cooperative multitasking, all applications must collaborate for it to work;
if one program doesn't cooperate, it may use all of the processor resources.
• Examples − Preemptive: UNIX, Windows 95, and Windows NT. Cooperative: Macintosh
OS versions 8.0–9.2.2 and Windows 3.x.
Micro Kernel
A microkernel is a type of operating system architecture which, like the other architectures, provides
file management, memory management, and scheduling of processes.
The difference in this type of architecture is that services such as file sharing, scheduling, and other
kernel services each have their own address space allotted to them.
Because all these services have their own address space, the size of the kernel and of the operating
system is reduced.
The basic idea of microkernel design is to achieve high reliability by splitting the operating system
up into small, well-defined modules. Only the microkernel itself runs in kernel mode; the other
modules run as user-mode processes.
Advantages
• It is small and isolated, so it can function better.
• It is more secure due to the separation of address spaces.
• New features can be added without recompiling the kernel.
• The architecture is more flexible, and different services can coexist in the system.
• Fewer system crashes as compared to a monolithic system.
Disadvantages
• It is expensive as compared to the monolithic system architecture.
• Inter-process communication is needed when drivers are implemented as processes, which
is slower than direct function calls.
• Performance of a microkernel system can vary and may sometimes cause problems.
Exokernel
Exokernel is a type of operating system developed at the Massachusetts Institute of Technology that
seeks to provide application-level management of hardware resources. The exokernel architecture is
designed to separate resource protection from management to facilitate application-specific
customization.
Conventional operating systems always have an impact on the performance, functionality and scope
of applications that are built on them because the OS is positioned between the applications and the
physical hardware. The exokernel operating system attempts to address this problem by eliminating
the notion that an operating system must provide abstractions upon which to build applications. The
idea is to impose as few abstractions as possible on the developers and to provide them with the
liberty to use abstractions as and when needed. The exokernel architecture is built such that a small
kernel moves all hardware abstractions into untrusted libraries known as library operating systems.
The main goal of an exokernel is to ensure that there is no forced abstraction.
Advantages
• Improved performance of applications
• More efficient use of hardware resources through precise resource allocation and
revocation
• Easier development and testing of new operating systems
• Each user-space application is allowed to apply its own optimized memory management
Disadvantages
• Reduced consistency
• Complex design of exokernel interfaces
Monolithic Kernel
The monolithic kernel manages the system's resources between the system application and the
system hardware. Unlike the microkernel, user and kernel services are run in the same address space.
It increases the kernel size and also increases the size of the OS.
The monolithic kernel offers CPU scheduling, device management, file management, memory
management, process management, and other OS services via the system calls. All of these
components, including file management and memory management, are located within the kernel.
The user and kernel services share the same address space, resulting in a fast-executing operating
system. One drawback of this kernel is that if any one process or service of the system fails, the
complete system crashes. The entire operating system must be modified to add a new service to a
monolithic kernel.
Advantages
• The monolithic kernel runs quickly because memory management, file management,
process scheduling, etc. are performed within the kernel itself.
• All of the components may interact directly with each other and with the kernel.
• It is a single large process that executes completely within a single address space.
• Its structure is easy and simple. The kernel contains all of the components required for
processing.
Disadvantages
• If the user needs to add a new service, the complete operating system has to be modified.
• It isn't easy to port code written for a monolithic operating system.
• If any of the services fails, the entire system fails.
UNIT – 2
Embedded Hardware Architecture – 32 Bit Microcontrollers
Feature             8051    8052    8031
ROM (bytes)         4K      8K      0K
Timers              2       3       2
I/O pins            32      32      32
Serial ports        1       1       1
Interrupt sources   6       8       6
Registers:
The most widely used registers of the 8051 are A (accumulator), B, R0-R7, DPTR (data pointer),
and PC (program counter). All these registers are of 8-bits, except DPTR and PC.
• Accumulator
• "R" registers (R0–R7)
• B register
• Data Pointer (DPTR)
• Program Counter (PC)
• Stack Pointer (SP)
Accumulator
The accumulator, register A, is used for all arithmetic and logic operations. If the accumulator is not
present, then every result of each calculation (addition, multiplication, shift, etc.) is to be stored into
the main memory. Access to main memory is slower than access to a register like the accumulator
because the technology used for the large main memory is slower (but cheaper) than that used for a
register.
The "R" Registers
The "R" registers are a set of eight registers, namely R0 through R7. These registers function as
auxiliary or temporary storage registers in many operations. Consider the example of adding 10 and
20: store the value 10 in the accumulator and the value 20 in, say, register R4. To perform the
addition, execute the following instruction −
ADD A, R4
After executing this instruction, the accumulator will contain the value 30. Thus the "R" registers
are very important auxiliary or helper registers. The accumulator alone would not be very useful if
it were not for these "R" registers. The "R" registers are meant for the temporary storage of values.
Let us take another example. We will add the values in R1 and R2 together and then subtract the
values of R3 and R4 from the result.
MOV A,R3 ;Move the value of R3 into the accumulator
ADD A,R4 ;Add the value of R4
MOV R5,A ;Store the resulting value temporarily in R5
MOV A,R1 ;Move the value of R1 into the accumulator
ADD A,R2 ;Add the value of R2
SUBB A,R5 ;Subtract the value of R5 (which now contains R3 + R4)
As you can see, we used R5 to temporarily hold the sum of R3 and R4. Of course, this is not the
most efficient way to calculate (R1 + R2) – (R3 + R4), but it does illustrate the use of the "R" registers
as a way to store values temporarily.
The "B" Register
The "B" register is very similar to the Accumulator in the sense that it may hold an 8-bit (1-byte)
value. The "B" register is used only by two 8051 instructions: MUL AB and DIV AB. To quickly
and easily multiply or divide A by another number, you may store the other number in "B" and make
use of these two instructions. Apart from using MUL and DIV instructions, the "B" register is often
used as yet another temporary storage register, much like a ninth R register.
The Data Pointer
The Data Pointer (DPTR) is the 8051’s only user-accessible 16-bit (2-byte) register. The
Accumulator, R0–R7 registers and B register are 1-byte value registers. DPTR is meant for pointing
to data. It is used by the 8051 to access external memory using the address indicated by DPTR. DPTR
is the only 16-bit register available and is often used to store 2-byte values.
The Program Counter
The Program Counter (PC) is a 2-byte address which tells the 8051 where the next instruction to
execute can be found in the memory. PC starts at 0000h when the 8051 initializes and is incremented
every time after an instruction is executed. PC is not always incremented by 1. Some instructions
may require 2 or 3 bytes; in such cases, the PC will be incremented by 2 or 3.
Branch, jump, and interrupt operations load the Program Counter with an address other than the
next sequential location. Activating a power-on reset will cause all values in the registers to be lost.
It means the value of the PC is 0 upon reset, forcing the CPU to fetch the first opcode from ROM
location 0000. This means we must place the first byte of opcode in ROM location 0000, because
that is where the CPU expects to find the first instruction.
The Stack Pointer (SP)
The Stack Pointer, like all registers except DPTR and PC, may hold an 8-bit (1-byte) value. The
Stack Pointer tells the location from where the next value is to be removed from the stack. When a
value is pushed onto the stack, the value of SP is incremented and then the value is stored at the
resulting memory location. When a value is popped off the stack, the value is returned from the
memory location indicated by SP, and then the value of SP is decremented.
This order of operation is important. SP is initialized to 07h when the 8051 is reset. If a value is
then pushed onto the stack, it will be stored in internal RAM address 08h, because the 8051 will
first increment the value of SP (from 07h to 08h) and then store the pushed value at that memory
address (08h). SP is modified directly by the 8051 by six instructions: PUSH, POP, ACALL,
LCALL, RET, and RETI.
Memory:
Memory is an important part of an Embedded system. It stores the control algorithm or firmware of
an Embedded system.
• Some processors/controllers contain built-in memory; this is referred to as on-chip memory.
• Others do not contain any memory inside the chip and require external memory to be
connected to the controller/processor to store the control algorithm. This is called off-chip
memory.
ROM (Read only Memory)
The program memory or code storage memory of an embedded system stores the program
instructions. The code memory retains its contents even after the power to it is turned off. It is
generally known as non-volatile storage memory. Depending on the fabrication, erasing, and
programming techniques, they are classified into the following types −
• MROM
• PROM
• EPROM
• EEPROM
• FLASH
MROM:
• Masked ROM (MROM) is a one-time programmable device. Masked ROM makes use of
hardwired technology for storing data. The device is factory programmed by a masking
and metallisation process at the time of production itself, according to the data provided by
the end user.
• The primary advantage of this is low cost for high volume production. They are the least
expensive type of solid-state memory.
• The limitation with MROM based firmware storage is the inability to modify the device
firmware against firmware upgrades. Since the MROM is permanent in bit storage, it is not
possible to alter the bit information.
Programmable Read Only Memory (PROM) / (OTP):
Unlike Masked ROM Memory, One Time Programmable Memory (OTP) or PROM is not pre-
programmed by the manufacturer. The end user is responsible for programming these devices. This
memory has nichrome or polysilicon wires arranged in a matrix. These wires can be functionally
viewed as fuses. It is programmed by a PROM programmer which selectively burns the fuses
according to the bit pattern to be stored. Fuses which are not blown/burned represent logic "1"
whereas fuses which are blown/burned represent logic "0". The default state is logic "1". OTP is
widely used for the commercial production of embedded systems whose prototyped versions are proven
and the code is finalized. It is a low-cost solution for commercial production. OTPs cannot be
reprogrammed.
Erasable Programmable Read Only Memory (EPROM):
EPROM gives the flexibility to reprogram the same chip. EPROM stores bit information by charging
the floating gate of an FET. Bit information is stored by using an EPROM programmer, which
applies a high voltage to charge the floating gate. EPROM contains a quartz crystal window for
erasing the stored information: if the window is exposed to ultraviolet rays for a fixed duration, the
entire memory is erased. Even though the EPROM chip is flexible in terms of re-programmability,
it needs to be taken out of the circuit board and put in a UV eraser device for 20 to 30 minutes, so it
is a tedious and time-consuming process.
NVRAM:
Non-volatile RAM is random-access memory with battery backup. It contains static RAM-based
memory and a miniature battery for supplying power to the memory in the absence of an external
power supply. The memory and battery are packed together in a single package. NVRAM is used
for the non-volatile storage of results of operations or for setting up flags, etc. The life of an NVRAM
is expected to be around 10 years.
Programmed I/O
In programmed I/O, when we want to write input, the device must be ready to take the data;
otherwise the program has to wait (poll) until the device or its buffer becomes free, and only then
can the input be transferred.
Once the input has been taken, the program likewise checks whether the output device or output
buffer is free before the data is written out. This polling is repeated for every transfer of data,
keeping the CPU busy throughout.
I/O Interrupts
To initiate any I / O operation, the CPU first loads the registers to the device controller. Then the
device controller checks the contents of the registers to determine what operation to perform.
There are two possibilities if I / O operations want to be executed. These are as follows −
• Synchronous I/O − Control is returned to the user process only after the I/O process
is completed.
• Asynchronous I/O − The control is returned to the user process without waiting for
the I/O process to finish. Here, I/O process and the user process run simultaneously.
DMA Structure
Direct Memory Access (DMA) is a method of handling I / O. Here the device controller directly
communicates with memory without CPU involvement.
After setting the resources of I/O devices like buffers, pointers, and counters, the device controller
transfers blocks of data directly to storage without CPU intervention.
DMA is generally used for high speed I / O devices.
ARM Bus technology:
AMBA stands for Advanced Microcontroller Bus Architecture. The AMBA specification defines an
on-chip communication standard, used to design embedded microcontrollers with high
performance.
An ARM processor controls the embedded device. An ARM processor comprises a core (the
execution engine that processes instructions and manipulates data) plus extensions that interface it
with a bus.
Controllers coordinate important functional blocks of the system. Two commonly found controllers
are interrupt and memory controllers.
Peripherals provide all the input-output capability external to the chip and are responsible for the
uniqueness of the embedded device.
GPIB bus
• GPIB (General Purpose Interface Bus) was developed as an interface between computers and
measuring instruments. It is mainly used to connect PCs and measuring instruments. GPIB
was created as HP-IB, an in-house standard developed by Hewlett Packard, which was
approved by the IEEE (Institute of Electrical and Electronics Engineers) and became an
international standard. Many current measuring instruments support the GPIB interface as
standard, and it is widely used in measurement systems using PCs and measuring
instruments.
• GPIB standards include IEEE-488 and the higher-level protocol IEEE-488.2, which is
currently mainstream. In addition to the transfer methods specified in IEEE-488, IEEE-488.2
features syntax for text data and numeric expressions, and commands and queries that can be
used by all instruments. IEEE-488.2-compatible instruments can communicate with other
IEEE-488.2-compliant devices and with IEEE-488 devices within the scope prescribed in
IEEE-488.
Advantages of GPIB:
• GPIB employs a bus interface, and piggyback connectors make connecting and configuring
devices easy. It is possible to use a single PC interface even if more devices are connected to
the system in the future.
• Handshake communication ensures highly reliable data transfer.
• As the standard bus of the measuring instrument industry, the GPIB interface is employed by
many measuring instruments, allowing users to control a variety of measuring instruments
by mastering a single protocol.
• Devices with different communication speeds can be connected. (*The whole system will be
limited to the speed of the slowest device.)
Unit-3
Software Development
While traditional desktop software is written for general-purpose computers, embedded software is
integrated into non-computer hardware to control its functions. The hardware is represented by various monitoring devices, machines,
sensors, wearables and practically every piece of modern electronics. Embedded technology, together with
networks and information technologies, constitutes the Internet of Things systems and is widely used in
medicine, manufacturing, appliances, the automotive industry, transportation and aviation.
Embedded programs allow hardware to monitor external events and control external devices. Both
hardware and software are important in embedded systems.
2) Compiler: Source code is written in a high-level programming language. A compiler is a tool for
transforming the code into a low-level machine language code — the one that a machine can
understand.
Keil C51 is a popular compiler that creates apps for 8051 microcontrollers and translates source code
written in the C language.
3) Assembler: This tool converts human-written assembly code into machine language. Unlike a
compiler, which can translate a high-level language directly, an assembler first converts the source
code into object code, which is then turned into machine code.
GNU Assembler (GAS) is widely used for Linux operating systems and can be found in the
Macintosh tools package.
4) Debugger: This is a critical tool for testing. It steps through the code and helps eliminate bugs
and errors by notifying the places where they occur. Precisely, debuggers pinpoint the lines where
issues are found, so programmers can address them quickly.
A good debugger tool is IDA Pro that works on Linux, Windows and Mac OS X operating systems. It
has both free and commercial versions and is highly popular among developers.
5) Linker: Traditionally, code is written into small pieces and modules. A linker is a tool that combines
all these pieces together, creating a single executable program. GNU ld is one of the linker tools.
6) Emulator: An emulator is a replication of the target system with identical functionality and
components. This tool is needed to simulate software performance and to see how the code will work
in the real-time environment. Using emulators, programmers can change values in order to reach the
ideal performance of the code. Once the code is fully checked, it can be embedded in the device.
7) Integrated Development Environment (IDE): Talking about the list of embedded software
development tools, we cannot but mention integrated development environments. All the above-
mentioned tools are needed for creating your embedded software. But it would be extremely
inconvenient to use them separately, adding another layer of complexity to the project.
Hence, to simplify the development process, it is highly recommended to use integrated
environments. IDE is software that provides a set of necessary tools in one package.
Examples include PyCharm, WebStorm, Qt Creator, Visual Studio, Arduino, Eclipse, etc.
1) Planning and Research: Preparation is key in software development. Before diving into a new
project, you should know precisely what that project will be, why you will be undertaking it and
what you wish to achieve. The first step in development process is all about planning and research.
At this stage, you should determine the following aspects:
• Scope of project
• Timeline
• Resources it will require
• Estimated costs
3) Designing the software: The next phase brings together all the knowledge of the requirements
and analysis into the design of the software project. This phase builds on the outputs of the previous
two, such as inputs from the customer and the gathered requirements.
4) Developing the project: In this phase of the SDLC, the actual development begins and the product
is built. The implementation of the design begins with writing code. Developers have to follow the
coding guidelines described by their management, and programming tools like compilers,
interpreters, debuggers, etc. are used to develop and implement the code.
5) Testing: After the code is generated, it is tested against the requirements to make sure that the
products are solving the needs addressed and gathered during the requirements stage. During
this stage, unit testing, integration testing, system testing, acceptance testing are done.
6) Deployment: Once the software is certified and no bugs or errors are reported, it is deployed.
Then, based on the assessment, the software may be released as it is or with suggested enhancements
in the targeted market segment. After the software is deployed, its maintenance begins.
7) Maintenance: Once the client starts using the developed system, the real issues come up and
requirements have to be addressed from time to time. This procedure, where care is taken of the
developed product, is known as maintenance.
Tasking Models
A real-time operating system (RTOS) serves real-time applications that process data without any
buffering delay. In an RTOS, the processing time requirement is measured in tenths of seconds or
shorter increments of time. It is a time-bound system with defined, fixed time constraints. In this
type of system, processing must be done inside the specified constraints; otherwise, the system
will fail.
1. Periodic Task
In periodic tasks, jobs are released at regular intervals. A periodic task repeats itself after a fixed
time interval. A periodic task is denoted by four tuples: Ti = (Φi, Pi, ei, Di), where Φi is the phase
(release time of the first job), Pi the period, ei the execution time, and Di the relative deadline.
2. Dynamic Tasks
It is a sequential program that is invoked by the occurrence of an event. An event may be generated
by the processes external to the system or by processes internal to the system. Dynamically arriving
tasks can be categorized on their criticality and knowledge about their occurrence times.
1. Aperiodic Tasks: In this type of task, jobs are released at arbitrary time intervals.
Aperiodic tasks have soft deadlines or no deadlines.
2. Sporadic Tasks: They are like aperiodic tasks, i.e., they recur at random instants. The
only difference is that sporadic tasks have hard deadlines. Three tuples denote a sporadic
task: Ti = (ei, gi, Di), where gi is the minimum separation between consecutive instances.
3. Critical tasks: Critical tasks are those whose timely executions are critical. If deadlines
are missed, catastrophes occur.
For example, life-support systems and the stability control of aircraft. Critical tasks may need to be
executed at a higher frequency than strictly necessary, to ensure their deadlines are met.
4. Non-critical Tasks: Non-critical tasks are real-time tasks. As the name implies, they are
not critical to the application. However, they deal with time-varying data, and hence
they are useless if not completed within a deadline. The goal of scheduling these tasks is to
maximize the percentage of jobs successfully executed within their deadlines.
Task States
Round-Robin Algorithm:
The Round Robin scheduling algorithm is one of the most popular scheduling algorithms and can
be implemented in most operating systems. It is the preemptive version of first-come-first-served
scheduling. The algorithm focuses on time sharing: every process gets executed in a cyclic way. A
certain time slice, called the time quantum, is defined in the system. Each process present in the
ready queue is assigned the CPU for that time quantum; if the execution of the process completes
during that time, the process terminates, otherwise the process goes back to the ready queue and
waits for its next turn to complete its execution.
Advantages
1. It is easy to implement, because it does not depend on the burst times of the processes.
Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context-switching overhead in the system.
3. Deciding a good time quantum is a genuinely difficult task.
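The cyclic behaviour described above can be sketched as a small simulation (an illustrative Python sketch; the function name and process set are made up for the example):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling; return the finish time of each process."""
    remaining = dict(burst_times)         # process -> remaining burst time
    ready = deque(burst_times)            # cyclic ready queue
    time, finish = 0, {}
    while ready:
        p = ready.popleft()
        run = min(quantum, remaining[p])  # run for at most one time quantum
        time += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = time              # process terminates
        else:
            ready.append(p)               # goes back to the ready queue
    return finish

# Three processes with burst times 5, 3 and 8, time quantum = 2
print(round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=2))  # {'P1': 12, 'P2': 9, 'P3': 16}
```

Raising the quantum makes the schedule approach plain first-come-first-serve, which is the response-time trade-off noted in the disadvantages.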
FIFO
FIFO (First In First Out) is one of the page replacement algorithms. It is used in the paging
method of memory management, where the operating system decides which existing page in the
queue should be replaced. The FIFO algorithm replaces the oldest page, i.e., the one that has been
present in main memory for the longest time.
Page Fault: A page fault occurs when a page requested by a program running on the CPU is
mapped in that program's address space but is not currently present in main memory. A page
fault generally raises a trap (alert) to the OS.
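As a sketch, FIFO replacement and page-fault counting can be simulated in a few lines of Python (the reference string and function name are illustrative):

```python
from collections import deque

def fifo_page_faults(reference_string, n_frames):
    """Count page faults under FIFO page replacement."""
    frames = deque()              # the oldest resident page sits at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1           # page fault: page not in main memory
            if len(frames) == n_frames:
                frames.popleft()  # evict the page resident the longest
            frames.append(page)
    return faults

# 7 references, 3 frames: pages 1, 3, 0 load, then 5, 6, 3 each evict the oldest
print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6
```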
Preemptive Priority Scheduling
In Preemptive Priority Scheduling, when a process arrives in the ready queue, its priority is
compared with the priorities of the other processes in the ready queue as well as with that of the
process currently being executed by the CPU. The one with the highest priority among all
available processes is given the CPU next.
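A minimal sketch of this policy, simulated one time unit at a time (assuming, for illustration, that a lower number means a higher priority; names are made up):

```python
def preemptive_priority(procs):
    """procs: (name, arrival, burst, priority) tuples; lower number = higher priority.
    Returns the completion time of each process."""
    remaining = {name: burst for name, _, burst, _ in procs}
    finish, time = {}, 0
    while remaining:
        # processes that have arrived and are not yet finished
        ready = [(prio, name) for name, arr, _, prio in procs
                 if arr <= time and name in remaining]
        if not ready:
            time += 1
            continue
        _, current = min(ready)   # highest-priority ready process gets the CPU
        remaining[current] -= 1   # run one time unit; may be preempted next tick
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = time
    return finish

# P2 arrives at t=1 with a higher priority and preempts P1
print(preemptive_priority([("P1", 0, 4, 2), ("P2", 1, 2, 1)]))  # {'P2': 3, 'P1': 6}
```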
Rate-monotonic scheduling
Rate-monotonic scheduling (RMS) is a static-priority scheduling algorithm for real-time
operating systems. It is preemptive in nature. Priorities are assigned according to the cycle
times (periods) of the processes involved: the process with the shortest period gets the highest
priority. Thus, if a process with a higher priority becomes ready, it preempts the currently
running process. The priority of a process is inversely proportional to its period.
A set of n processes is guaranteed to be schedulable under RMS if it satisfies the following
(Liu–Layland) bound, where ei is the execution time and Pi the period of process i:
Σ (ei / Pi) ≤ n (2^(1/n) − 1)
Example:
An example to understand the working of the rate-monotonic scheduling algorithm:

Process   Execution time (ei)   Period (Pi)
P1        3                     20
P2        2                     5
P3        2                     10
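This task set can be checked against the utilization bound; a minimal Python sketch (the function name is illustrative):

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) rate-monotonic test:
    compare total CPU utilization with the n(2^(1/n) - 1) bound."""
    n = len(tasks)
    utilization = sum(e / p for e, p in tasks)   # sum of ei / Pi
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

# (ei, Pi) pairs for P1, P2, P3 from the table above
u, b, ok = rm_schedulable([(3, 20), (2, 5), (2, 10)])
print(f"U = {u:.2f}, bound = {b:.3f}, schedulable: {ok}")  # U = 0.75, bound = 0.780, schedulable: True
```

Since the total utilization 0.75 does not exceed the bound of about 0.78 for three processes, the set is guaranteed schedulable under RMS.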
Priority Inversion
Priority inversion is an operating system scenario in which a higher-priority process is
preempted by a lower-priority process. This implies an inversion of the priorities of the two
processes.
Problems due to Priority Inversion
Some of the problems that occur due to priority inversion are given as follows −
● A system malfunction may occur if a high-priority process is not provided the
resources it requires.
● Priority inversion may also lead to corrective measures being taken, such as a
reset of the entire system.
● The performance of the system can be reduced, because higher-priority tasks are
expected to execute promptly.
Priority Ceiling
In the priority ceiling protocol, each shared resource is assigned a ceiling priority equal to the
highest priority of any task that may lock it. A task that acquires the resource temporarily runs
at this ceiling priority, so no task that might need the resource can preempt it while it is held;
this bounds the duration of priority inversion.
Deadlock
Every process needs some resources to complete its execution, and resources are granted in a
sequential order.
A deadlock is a situation in which each process waits for a resource that is assigned to another
process. In this situation, none of the processes can execute, since the resource each one needs
is held by another process that is itself waiting for some resource to be released.
Let us assume that there are three processes P1, P2 and P3. There are three different resources R1,
R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.
After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it
cannot complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also
stops its execution. P3 then demands R1, which is being used by P1, and therefore P3 also
stops its execution.
In this scenario, a cycle is formed among the three processes. None of the processes is
progressing; they are all waiting. The computer becomes unresponsive since all the processes
are blocked.
Necessary conditions for Deadlocks
1. Mutual Exclusion: A resource can be used only in a mutually exclusive manner; two
processes cannot use the same resource at the same time.
2. Hold and Wait: A process holds some resources while waiting for other resources at the
same time.
3. No Preemption: A resource cannot be forcibly taken away from a process; it is released
only voluntarily, once the process has finished with it.
4. Circular Wait: All the processes are waiting for resources in a cyclic manner, so that the
last process is waiting for a resource held by the first process.
1. Deadlock Ignorance
Deadlock ignorance is the most widely used approach among all the mechanisms. It is used
by many operating systems, mainly for end-user systems.
2. Deadlock prevention
Deadlock happens only when mutual exclusion, hold and wait, no preemption, and circular wait
hold simultaneously. If at least one of these four conditions can be prevented from ever holding,
then deadlock can never occur in the system.
3. Deadlock avoidance
In deadlock avoidance, the operating system checks whether the system is in a safe state or an
unsafe state at every step it performs. The process continues as long as the system remains in a
safe state; if an allocation would move the system into an unsafe state, the OS backtracks one
step and denies it.
4. Deadlock detection and recovery
This approach lets the processes fall into deadlock and then periodically checks whether a
deadlock has occurred in the system. If it has, the OS applies recovery methods to the system
to get rid of the deadlock.
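The safe-state check used in deadlock avoidance is commonly implemented as a Banker's-algorithm-style safety test; a minimal Python sketch (the matrix values are made up for the example):

```python
def is_safe(available, allocation, need):
    """Return True if some completion order lets every process finish,
    i.e. the current allocation state is safe."""
    work = list(available)               # resources currently free
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # process i can run to completion and release its allocation
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)

# Two resource types; each of two processes holds one unit and needs one more
print(is_safe([1, 0], [[0, 1], [1, 0]], [[1, 0], [0, 1]]))  # True: P0 can finish first
print(is_safe([0, 0], [[0, 1], [1, 0]], [[1, 0], [0, 1]]))  # False: circular wait
```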
Process synchronization:
When two or more processes cooperate with each other, their order of execution must be
preserved; otherwise there can be conflicts in their execution, and incorrect outputs can be
produced.
Race Condition
A race condition typically occurs when two or more threads read, write, and possibly make
decisions based on memory that they are accessing concurrently.
Critical Section
The regions of a program that access shared resources and may cause race conditions are
called critical sections. To avoid race conditions among processes, we must ensure that only
one process at a time executes within its critical section.
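Mutual exclusion over a critical section can be illustrated with Python threads incrementing a shared counter; the lock ensures only one thread performs the read-modify-write at a time (an illustrative sketch):

```python
import threading

counter = 0
lock = threading.Lock()              # guards the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:                   # only one thread at a time in here
            counter += 1             # read-modify-write on shared data

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000; without the lock, updates could be lost to a race
```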
IPC:
Inter-process communication (IPC) allows cooperating processes to exchange data and to
synchronize their actions with each other.
Shared memory
Shared memory is the fundamental model of inter-process communication. In a shared-memory
system, cooperating processes communicate with each other by establishing a shared memory
region within their address spaces.
Let us see the working of the shared memory system step by step.
Working
In a shared memory system, the cooperating processes establish a shared region in memory in
order to exchange data with each other. The processes then share data by reading and writing
in this shared segment.
Step 1 − Process P1 has some data to share with process P2. First P1 takes initiative and
establishes a shared memory region in its own address space and stores the data or information to
be shared in its shared memory region.
Step 2 − Now, P2 requires the information stored in the shared segment of P1. So, process P2
needs to attach itself to the shared address space of P1. Now, P2 can read out the data from there.
Step 3 − The two processes can now exchange information by reading and writing data in the
shared segment.
Memory Locking
Locking memory is one of the most important issues for real-time applications. In a real-time
environment, a process must be able to guarantee continuous memory residence to reduce latency
and to prevent paging and swapping.
This section describes the memory locking mechanisms that are available to real-time applications
in SunOS.
Under SunOS, the memory residency of a process is determined by its current state, the total
available physical memory, the number of active processes, and the processes' demand for
memory. This residency is appropriate in a time-sharing environment, but it is often unacceptable
for a real-time process. In a real-time environment, a process must guarantee memory residence
to reduce its memory access and dispatch latency.
Real-time memory locking in SunOS is provided by a set of library routines. These routines allow
a process running with superuser privileges to lock specified portions of its virtual address space
into physical memory. Pages locked in this manner are exempt from paging until the pages are
unlocked or the process exits.
The operating system has a system-wide limit on the number of pages that can be locked at any
time. This limit is a tunable parameter whose default value is calculated at boot time. The default
value is based on the number of page frames minus another percentage, currently set at ten percent.
Main Memory
The main memory is central to the operation of a modern computer. Main memory is a large
array of words or bytes, ranging in size from hundreds of thousands to billions. It is a repository
of rapidly available information shared by the CPU and I/O devices, and it is the place where
programs and data are kept while the processor is actively using them. Main memory is closely
associated with the processor, so moving instructions and data into and out of the processor is
extremely fast. Main memory is also known as RAM (Random Access Memory). This memory
is volatile: RAM loses its data when a power interruption occurs.
What is Memory Management:
In a multiprogramming computer, the operating system resides in a part of memory and the
rest is used by multiple processes. The task of subdividing the memory among different
processes is called memory management. Memory management is a method in the
operating system to manage operations between main memory and disk during process
execution. The main aim of memory management is to achieve efficient utilization of
memory.
Why Memory Management is required:
● Memory must be allocated to processes before execution and reclaimed when they terminate.
● The OS must keep track of which parts of memory are in use and by which process.
● The memory of one process must be protected from access by another process.
● Fragmentation should be minimized so that memory is utilized efficiently.
Semaphore
A semaphore is an integer variable that, apart from initialization, is accessed only through two
atomic operations, wait (P) and signal (V). Semaphores are of two types:
1. Binary Semaphore –
This is also known as mutex lock. It can have only two values – 0 and 1. Its
value is initialized to 1. It is used to implement the solution of critical section
problems with multiple processes.
2. Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control access to a
resource that has multiple instances.
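A counting semaphore guarding a resource with two instances can be sketched with Python's threading module (the timings and names are illustrative):

```python
import threading
import time

pool = threading.Semaphore(2)   # counting semaphore: 2 resource instances
guard = threading.Lock()
in_use = 0
peak = 0

def worker():
    global in_use, peak
    with pool:                  # wait (P): blocks when both instances are taken
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.05)        # hold the resource briefly
        with guard:
            in_use -= 1
    # signal (V) happened when the 'with pool' block exited

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 2, even with 5 competing threads
```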
Message queues:
Message queues are kernel objects used to pass content to a task. Messages are typically void
pointers to a storage area containing the actual message. However, the pointer can point to
anything, even a function for the receiving task to execute. The meaning of the message is thus
application dependent.
Mailboxes
A previous section covered semaphores which can be employed to synchronize tasks, thus
providing a mechanism allowing orderly inter-task communication using global data.
However, communication via global data is hard to keep track of and error-prone, since a
task might "forget" the required semaphore operation before accessing the data. Moreover,
no protocol has yet been introduced for a controlled exchange of data.
Mailboxes serve to close this gap. A mailbox is a data buffer that can store a fixed number
of messages of a fixed size.
Tasks can store messages in a mailbox. If the mailbox is full, the task is blocked until space
becomes available. Of course, tasks can also retrieve messages from mailboxes. In this
case, the task is blocked if no message is available in the mailbox. Any number of tasks can
use the same mailbox for storing and retrieving messages.
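The blocking behaviour of a fixed-capacity mailbox can be approximated with Python's queue.Queue as a stand-in (a real RTOS mailbox API differs; the message strings are made up):

```python
import queue
import threading

mailbox = queue.Queue(maxsize=3)   # fixed number of message slots

def producer():
    for msg in ["temp=21", "temp=22", "temp=23"]:
        mailbox.put(msg)           # blocks if the mailbox is full

def consumer(out):
    for _ in range(3):
        out.append(mailbox.get())  # blocks if no message is available

received = []
t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer, args=(received,))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(received)  # ['temp=21', 'temp=22', 'temp=23']
```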
Pipe
Conceptually, a pipe is a connection between two processes, such that the standard output
from one process becomes the standard input of the other process. In UNIX Operating
System, Pipes are useful for communication between related processes (inter-process
communication).
● A pipe is one-way communication only, i.e., one process writes to the pipe and
the other process reads from it. Opening a pipe creates an area of main memory
that is treated as a "virtual file".
● The pipe can be used by the creating process, as well as all its child processes,
for reading and writing. One process can write to this “virtual file”, or pipe and
another related process can read from it.
● If a process tries to read before something is written to the pipe, the process is
suspended until something is written.
● The pipe system call finds the first two available positions in the process’s
open file table and allocates them for the read and write ends of the pipe.
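The read/write behaviour above can be demonstrated with the os.pipe system-call wrapper in Python (a single process is used for brevity; with fork, a child would inherit both ends):

```python
import os

read_fd, write_fd = os.pipe()    # one-way channel: write end -> read end

os.write(write_fd, b"hello from writer")  # one process writes to the pipe
os.close(write_fd)               # closing the write end lets the reader see EOF

data = os.read(read_fd, 1024)    # the other process reads; blocks if pipe is empty
os.close(read_fd)
print(data)  # b'hello from writer'
```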
Virtual socket
Virtual sockets and virtual CPUs (vCPUs) are "constructs" presented upstream to the tightly
isolated software container that we call a virtual machine. When an operating system runs, it
detects the hardware layout within the virtual machine. The VMkernel schedules a Virtual
Machine Monitor (VMM) for every vCPU.
UNIT 5
1. Task management:
In real-time applications, a process is called a task; it takes execution time and occupies
memory. Task management is the process of managing tasks through their life cycle. A task
can be in different states: pended, ready, delayed, suspended, and run.
1.1 Task/Process States:
Each task/process will be in one of these states: pended, ready, suspended, delayed, or run.
The scheduler dispatches the task that is in the ready state.
1.2 Typical Task Operations:
The important task operations are creating and deleting tasks, controlling task scheduling,
and obtaining task information.
2. Scheduling in RTOS:
In order to schedule tasks, information about each task must be known: the number of tasks,
their resource requirements, execution times, and deadlines.
Resource allocation is necessary for any application to run on the system. When an
application is running, it requires the OS to allocate certain resources for it to be able to run.
● When a process is created, the memory manager allocates memory addresses
(blocks) to it by mapping the process address space.
● Threads of a process share the memory space of the process.
Common memory-allocation strategies include:
● Fixed-blocks allocation
● Dynamic-blocks allocation
● Dynamic page allocation
● Dynamic data-memory allocation
● Dynamic address relocation
● Multiprocessor memory allocation
● Memory protection for OS functions
● An RTOS may disable support for dynamic block allocation, MMU support for
dynamic page allocation, and dynamic binding, as these increase the latency of
servicing the tasks and ISRs.
● An RTOS may not support memory protection of the OS functions, as this also
increases the latency of servicing the tasks and ISRs.
● User functions can then run in kernel space like kernel functions.
● An RTOS may allow memory protection among the tasks to be disabled, as such
protection increases the memory requirement for each task.
Mailbox functions at OS
• Some OSes provide both mailbox and queue IPC functions.
• When mailbox IPC functions are not provided by an OS, the OS employs a queue
for the same purpose.
• A mailbox of a task can receive messages from other tasks and has a distinct ID.
• A mailbox (for messages) is an IPC mechanism in which a message can be received
only by the single task for which it is destined.
• Two or more tasks cannot take messages from the same mailbox.
• On an OS function call, a task puts (posts/sends) into the mailbox only a pointer
to the mailbox message.
• A mailbox message may also include a header to identify the message-type
specification.
• The OS provides functions for inserting and deleting the mailbox message pointer;
deleting means the message pointer points to Null.
• Each mailbox for a message needs initialization (creation) before the scheduler's
functions can use it, with the message pointer initially pointing to Null.
• There may be a provision for multiple mailboxes for multiple types or destinations
of messages; each mailbox has an ID.
• Each mailbox usually has one message pointer only, which can point to one message.