ES Notes

The document provides an overview of operating system basics, including types of operating systems, kernel architecture, and task management. It differentiates between general-purpose operating systems (GPOS) and real-time operating systems (RTOS), detailing their functionalities and characteristics. Additionally, it covers memory management, task scheduling, synchronization, and the importance of timing and interrupts in real-time systems.

Concepts:

Operating system basics, types of operating systems, tasks, processes and threads, multiprocessing and multitasking, task scheduling: non-preemptive and preemptive scheduling; task communication: shared memory, message passing, Remote Procedure Call and sockets; task synchronization: task communication/synchronization issues, task synchronization techniques.
Operating System Basics:

• The Operating System acts as a bridge between the user applications/tasks and the underlying system resources through a set of system functionalities and services
• The OS manages the system resources and makes them available to the user applications/tasks on a need basis

The primary functions of an Operating System are:
• Make the system convenient to use
• Organize and manage the system resources efficiently and correctly
Figure 1: The Architecture of the Operating System (User Applications interact through the Application Programming Interface (API) with kernel services such as Memory Management, Process Management, Time Management, File System Management and I/O System Management, which access the underlying hardware through the Device Driver interface)
The Kernel:

• The kernel is the core of the operating system
• It is responsible for managing the system resources and the communication among the hardware and other system services
• The kernel acts as the abstraction layer between system resources and user applications
• The kernel contains a set of system libraries and services
• For a general purpose OS, the kernel contains different services like:
  • Process Management
  • Primary Memory Management
  • File System Management
  • I/O System (Device) Management
  • Secondary Storage Management
  • Protection
  • Time Management
  • Interrupt Handling

Kernel Space and User Space:

• The program code corresponding to the kernel applications/services is kept in a contiguous area (OS dependent) of primary (working) memory and is protected from unauthorized access by user programs/applications
• The memory space at which the kernel code is located is known as 'Kernel Space'
• All user applications are loaded to a specific area of primary memory and this memory area is referred to as 'User Space'
• The partitioning of memory into kernel and user space is purely Operating System dependent
• An operating system with virtual memory support loads the user applications into their corresponding virtual memory space with a demand paging technique
• Most operating systems keep the kernel application code in main memory, and it is not swapped out into secondary memory

Monolithic Kernel:

• All kernel services run in the kernel space
• All kernel modules run within the same memory space under a single kernel thread
• The tight internal integration of kernel modules in the monolithic kernel architecture allows effective utilization of the low-level features of the underlying system
• The major drawback of the monolithic kernel is that any error or failure in any one of the kernel modules leads to the crashing of the entire kernel application
• LINUX, SOLARIS and MS-DOS kernels are examples of monolithic kernels

Figure 2: The Monolithic Kernel Model (applications in user space; a monolithic kernel with all operating system services running in kernel space)
Microkernel:

• The microkernel design incorporates only the essential set of Operating System services into the kernel
• The rest of the Operating System services are implemented in programs known as 'Servers', which run in user space
• The kernel design is highly modular and provides OS-neutral abstraction
• Memory management, process management, timer systems and interrupt handlers are examples of essential services, which form the part of the microkernel

Figure 3: The Microkernel Model (applications and Servers (kernel services running in user space) sit above a microkernel with essential services like memory management, process management, timer system etc.)

• QNX and Minix 3 kernels are examples of microkernels.

Benefits of Microkernel:
1. Robustness: If a problem is encountered in any service running as a 'server', that server can be reconfigured and re-started without the need for re-starting the entire OS.
2. Configurability: Any service which runs as a 'server' application can be changed without the need to restart the whole system.
Types of Operating Systems:
Depending on the type of kernel and kernel services, the purpose and type of computing system where the OS is deployed, and the responsiveness to applications, Operating Systems are classified into:

1. General Purpose Operating System (GPOS)
2. Real Time Operating System (RTOS)

1. General Purpose Operating System (GPOS):

• Operating Systems which are deployed in general computing systems
• The kernel is more generalized and contains all the services required to execute generic applications
• Need not be deterministic in execution behavior
• May inject random delays into application software and thus cause slow responsiveness of an application at unexpected times
• Usually deployed in computing systems where deterministic behavior is not an important criterion
• A Personal Computer/Desktop system is a typical example of a system where a GPOS is deployed
• Windows XP, MS-DOS etc. are examples of General Purpose Operating Systems

2. Real Time Operating System (RTOS):

• Operating Systems which are deployed in embedded systems demanding real-time response
• Deterministic in execution behavior; consumes only a known amount of time for kernel applications
• Implements scheduling policies for always executing the highest priority task/application
• Implements policies and rules concerning time-critical allocation of a system's resources
• Windows CE, QNX, VxWorks, MicroC/OS-II etc. are examples of Real Time Operating Systems (RTOS)

The Real Time Kernel: The kernel of a Real Time Operating System is referred to as the Real Time kernel. In contrast to the conventional OS kernel, the Real Time kernel is highly specialized and contains only the minimal set of services required for running the user applications/tasks. The basic functions of a Real Time kernel are:
a) Task/Process management
b) Task/Process scheduling
c) Task/Process synchronization
d) Error/Exception handling
e) Memory Management
f) Interrupt handling
g) Time management

• Task/Process Management: Deals with setting up the memory space for the tasks, loading the task's code into the memory space, allocating system resources, setting up a Task Control Block (TCB) for the task, and task/process termination/deletion. A Task Control Block (TCB) is used for holding the information corresponding to a task. A TCB usually contains the following set of information:

  • Task ID: Task identification number
  • Task State: The current state of the task (e.g. State = 'Ready' for a task which is ready to execute)
  • Task Type: Indicates the type of the task. The task can be a hard real time, soft real time or background task
  • Task Priority: Task priority (e.g. Task priority = 1 for a task with priority = 1)
  • Task Context Pointer: Pointer used for context saving
  • Task Memory Pointers: Pointers to the code memory, data memory and stack memory of the task
  • Task System Resource Pointers: Pointers to system resources (semaphores, mutexes etc.) used by the task
  • Task Pointers: Pointers to other TCBs (TCBs for preceding, next and waiting tasks)
  • Other Parameters: Other relevant task parameters

The parameters and implementation of the TCB are kernel dependent. The TCB parameters vary across different kernels, based on the task management implementation.
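The TCB fields listed above can be sketched as a data structure. This is only an illustrative model: the field names, types and the state enumeration are assumptions, not the layout of any particular kernel.

```python
from dataclasses import dataclass, field
from enum import Enum

class TaskState(Enum):
    CREATED = 0
    READY = 1
    RUNNING = 2
    BLOCKED = 3
    COMPLETED = 4

@dataclass
class TCB:
    task_id: int                      # Task identification number
    state: TaskState = TaskState.CREATED
    task_type: str = "background"     # 'hard', 'soft' or 'background'
    priority: int = 0
    context_ptr: int = 0              # address used for context saving
    code_ptr: int = 0                 # start of code memory
    data_ptr: int = 0                 # start of data memory
    stack_ptr: int = 0                # start of stack memory
    resources: list = field(default_factory=list)  # semaphores, mutexes etc.
    next_tcb: "TCB | None" = None     # link to the next TCB in a queue

# A hypothetical task that is ready to execute:
tcb = TCB(task_id=1, state=TaskState.READY, task_type="hard", priority=1)
```

A real kernel would hold such records in kernel space and link them into the ready and wait queues through the TCB pointers.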

• Task/Process Scheduling: Deals with sharing the CPU among various tasks/processes. A kernel application called the 'Scheduler' handles the task scheduling. The Scheduler is an algorithm implementation which performs efficient and optimal scheduling of tasks to provide deterministic behavior.

• Task/Process Synchronization: Deals with synchronizing the concurrent access of a resource which is shared across multiple tasks, and the communication between various tasks.

• Error/Exception Handling: Deals with registering and handling the errors occurred/exceptions raised during the execution of tasks. Insufficient memory, timeouts, deadlocks, deadline misses, bus errors, divide by zero, unknown instruction execution etc. are examples of errors/exceptions. Errors/exceptions can happen at the kernel level services or at the task level. Deadlock is an example of a kernel level exception, whereas timeout is an example of a task level exception. The OS kernel gives the information about the error in the form of a system call (API).
Memory Management:

• The memory management function of an RTOS kernel is slightly different compared to that of General Purpose Operating Systems
• The memory allocation time increases depending on the size of the block of memory to be allocated and the state of the allocated memory block (an initialized memory block consumes more allocation time than an uninitialized memory block)
• Since predictable timing and deterministic behavior are the primary focus for an RTOS, the RTOS achieves this by compromising the effectiveness of memory allocation
• An RTOS generally uses a 'block' based memory allocation technique, instead of the usual dynamic memory allocation techniques used by a GPOS
• The RTOS kernel uses blocks of fixed-size dynamic memory, and a block is allocated to a task on a need basis. The blocks are stored in a 'Free Buffer Queue'
• Most RTOS kernels allow tasks to access any of the memory blocks without any memory protection, to achieve predictable timing and avoid timing overheads
• RTOS kernels assume that the whole design is proven correct and protection is unnecessary. Some commercial RTOS kernels allow memory protection as an option, and the kernel enters a fail-safe mode when an illegal memory access occurs
• A few RTOS kernels implement the Virtual Memory concept for memory allocation if the system supports secondary memory storage (like HDD and FLASH memory)
• In 'block' based memory allocation, a block of fixed memory is always allocated to a task on a need basis and is taken as a unit. Hence, there will not be any memory fragmentation issues
• The memory allocation can be implemented as constant-time functions, thereby consuming a fixed amount of time for memory allocation. This leaves the deterministic behavior of the RTOS kernel untouched
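The block-based allocation described above can be sketched as a free buffer queue of fixed-size blocks. This is an illustrative model, not a real kernel allocator: block size, pool size and the method names are arbitrary choices for the sketch.

```python
from collections import deque

class BlockPool:
    """Fixed-size block allocator: allocation and release are O(1),
    so allocation time is constant and there is no fragmentation."""

    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        # The 'Free Buffer Queue': each block is modelled as a bytearray.
        self.free_queue = deque(bytearray(block_size) for _ in range(num_blocks))

    def alloc(self):
        if not self.free_queue:
            return None          # pool exhausted; a real RTOS may block or fail
        return self.free_queue.popleft()   # O(1), deterministic

    def free(self, block):
        self.free_queue.append(block)      # O(1), deterministic

pool = BlockPool(block_size=64, num_blocks=4)
b1 = pool.alloc()
b2 = pool.alloc()
pool.free(b1)        # the returned block goes back to the free queue
```

Because every request is served with one queue operation on a fixed-size block, the allocation cost does not depend on heap state, which is the property the RTOS needs.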

Interrupt Handling:

• Interrupts inform the processor that an external device or an associated task requires the immediate attention of the CPU
• Interrupts can be either Synchronous or Asynchronous
• Interrupts which occur in sync with the currently executing task are known as Synchronous interrupts. Usually, software interrupts fall under the Synchronous interrupt category. Divide by zero, memory segmentation error etc. are examples of Synchronous interrupts
• For synchronous interrupts, the interrupt handler runs in the same context as the interrupting task
• Asynchronous interrupts are interrupts which occur at any point of execution of any task, and are not in sync with the currently executing task
• The interrupts generated by external devices connected to the processor/controller (by asserting the interrupt line of the processor/controller to which the interrupt line of the device is connected), timer overflow interrupts, serial data reception/transmission interrupts etc. are examples of asynchronous interrupts
• For asynchronous interrupts, the interrupt handler is usually written as a separate task (depending on the OS kernel implementation) and it runs in a different context. Hence, a context switch happens while handling asynchronous interrupts
• Priority levels can be assigned to the interrupts, and each interrupt can be enabled or disabled individually
• Most RTOS kernels implement a 'Nested Interrupts' architecture. Interrupt nesting allows the pre-emption (interruption) of an Interrupt Service Routine (ISR) servicing an interrupt by a higher priority interrupt
Time Management:

• Accurate time management is essential for providing a precise time reference for all applications
• The time reference to the kernel is provided by a high-resolution Real Time Clock (RTC) hardware chip (hardware timer)
• The hardware timer is programmed to interrupt the processor/controller at a fixed rate. This timer interrupt is referred to as the 'Timer tick'
• The 'Timer tick' is taken as the timing reference by the kernel. The 'Timer tick' interval may vary depending on the hardware timer. Usually the 'Timer tick' varies in the microseconds range
• The time parameters for tasks are expressed as multiples of the 'Timer tick'
• The System time is updated based on the 'Timer tick'
• If the System time register is 32 bits wide and the 'Timer tick' interval is 1 microsecond, the System time register will reset in

2^32 × 10^-6 s = 2^32 × 10^-6 / (24 × 60 × 60) days ≈ 0.0497 days ≈ 1.19 hours

If the 'Timer tick' interval is 1 millisecond, the System time register will reset in

2^32 × 10^-3 s = 2^32 × 10^-3 / (24 × 60 × 60) days ≈ 49.7 days ≈ 50 days
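The rollover arithmetic above can be checked directly. This short calculation assumes a free-running 32-bit tick counter that wraps when it overflows:

```python
SECONDS_PER_DAY = 24 * 60 * 60

def rollover_days(tick_interval_s, register_bits=32):
    """Time (in days) before a free-running tick counter wraps around."""
    return (2 ** register_bits) * tick_interval_s / SECONDS_PER_DAY

us_rollover = rollover_days(1e-6)   # 1 microsecond tick
ms_rollover = rollover_days(1e-3)   # 1 millisecond tick

print(round(us_rollover * 24, 2))   # 1.19 hours
print(round(ms_rollover, 1))        # 49.7 days
```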

The 'Timer tick' interrupt is handled by the 'Timer Interrupt' handler of the kernel. The 'Timer tick' interrupt can be utilized for implementing the following actions:

• Save the current context (context of the currently executing task)
• Increment the System time register by one. Generate a timing error and reset the System time register if the timer tick count is greater than the maximum range available for the System time register
• Update the timers implemented in the kernel (increment or decrement the timer registers for each timer depending on the count direction setting for each register: increment registers with count direction setting = 'count up' and decrement registers with count direction setting = 'count down')
• Activate the periodic tasks which are in the idle state
• Invoke the scheduler and schedule the tasks again based on the scheduling algorithm
• Delete all the terminated tasks and their associated data structures (TCBs)
• Load the context for the first task in the ready queue. Due to the re-scheduling, the ready task might have changed to a new one from the task which was pre-empted by the 'Timer Interrupt' task
Hard Real-time System:

• A Real Time Operating System which strictly adheres to the timing constraints for a task
• A Hard Real Time system must meet the deadlines for a task without any slippage
• Missing any deadline may produce catastrophic results for Hard Real Time Systems, including permanent data loss and irrecoverable damage to the system/users
• Emphasizes the principle 'A late answer is a wrong answer'
• Air bag control systems and Anti-lock Brake Systems (ABS) of vehicles are typical examples of Hard Real Time Systems
• As a rule of thumb, Hard Real Time Systems do not implement the virtual memory model for handling memory. This eliminates the delay in swapping the code corresponding to a task in and out of primary memory
• The presence of a Human In The Loop (HITL) for tasks introduces unexpected delays in task execution. Most Hard Real Time Systems are automatic and do not contain a 'human in the loop'

Soft Real-time System:

• Real Time Operating Systems that do not guarantee meeting deadlines, but offer a best effort to meet the deadline
• Missing deadlines for tasks is acceptable if the frequency of deadline misses is within the compliance limit of the Quality of Service (QoS)
• A Soft Real Time system emphasizes the principle 'A late answer is an acceptable answer, but it could have been done a bit faster'
• Soft Real Time systems most often have a 'human in the loop (HITL)'
• An Automatic Teller Machine (ATM) is a typical example of a Soft Real Time System. If the ATM takes a few seconds more than the ideal operation time, nothing fatal happens
• An audio-video playback system is another example of a Soft Real Time system. No potential damage arises if a sample comes late by a fraction of a second for playback

Tasks, Processes & Threads:

• In the Operating System context, a task is defined as a program in execution together with the related information maintained by the Operating System for the program
• A task is also known as a 'Job' in the operating system context
• A program, or part of it, in execution is also called a 'Process'
• The terms 'Task', 'Job' and 'Process' refer to the same entity in the Operating System context and most often they are used interchangeably
• A process requires various system resources, like the CPU for executing the process, memory for storing the code corresponding to the process and the associated variables, I/O devices for information exchange etc.
The Structure of a Process:

• The concept of a 'Process' leads to concurrent execution (pseudo parallelism) of tasks and thereby the efficient utilization of the CPU and other system resources
• Concurrent execution is achieved through the sharing of the CPU among the processes
• A process mimics a processor in properties and holds a set of registers, process status, a Program Counter (PC) to point to the next executable instruction of the process, a stack for holding the local variables associated with the process, and the code corresponding to the process
• A process, which inherits all the properties of the CPU, can be considered as a virtual processor, awaiting its turn to have its properties switched into the physical processor

Figure 4: Structure of a Process (Stack (Stack Pointer), Working Registers, Status Registers, Program Counter (PC), and the Code Memory corresponding to the Process)

• When the process gets its turn, its registers and Program Counter register become mapped to the physical registers of the CPU

Memory Organization of Processes:

• The memory occupied by a process is segregated into three regions, namely Stack memory, Data memory and Code memory
• The 'Stack' memory holds all temporary data such as variables local to the process; the stack memory grows downwards
• The Data memory holds all global data for the process; the data memory grows upwards
• The Code memory contains the program code (instructions) corresponding to the process

Figure 5: Memory organization of a Process (Stack memory at the top growing downwards, Data memory growing upwards, Code memory at the bottom)

• On loading a process into the main memory, a specific area of memory is allocated for the process
• The stack memory usually starts at the highest memory address of the memory area allocated for the process (depending on the OS kernel implementation)

Process States & State Transition:

• The creation of a process to its termination is not a single step operation
• The process traverses through a series of states during its transition from the newly created state to the terminated state
• The cycle through which a process changes its state from 'newly created' to 'execution completed' is known as the 'Process Life Cycle'. The various states through which a process traverses during a Process Life Cycle indicate the current status of the process with respect to time and also provide information on what it is allowed to do next

• Created State: The state at which a process is being created is referred to as the 'Created State'. The Operating System recognizes a process in the 'Created State', but no resources are allocated to the process
• Ready State: The state where a process is incepted into memory and awaiting processor time for execution is known as the 'Ready State'. At this stage, the process is placed in the 'Ready list' queue maintained by the OS
• Running State: The state wherein the source code instructions corresponding to the process are being executed is called the 'Running State'. The Running state is the state at which process execution happens
• Blocked State/Wait State: Refers to a state where a running process is temporarily suspended from execution and does not have immediate access to resources. The blocked state might be invoked by various conditions, like the process entering a wait state for an event to occur (e.g. waiting for user inputs such as keyboard input) or waiting to get access to a shared resource like a semaphore, mutex etc.
• Completed State: A state where the process completes its execution

Figure 6: Process states and state transition (Created, incepted into memory, moves to Ready; a Ready process scheduled for execution moves to Running; an interrupted Running process returns to Ready; a Running process waiting for an event or resource moves to Blocked and back to Ready when it can proceed; on execution completion, Running moves to Completed)

• The transition of a process from one state to another is known as 'State transition'
• When a process changes its state from Ready to Running, or from Running to Blocked or Terminated, or from Blocked to Running, the CPU allocation for the process may also change
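The life cycle above can be modelled as a small state machine. The allowed-transition table below is an illustrative sketch built from the states and transitions named in the text, not any kernel's actual implementation:

```python
from enum import Enum, auto

class State(Enum):
    CREATED = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    COMPLETED = auto()

# Allowed state transitions, following the process life cycle diagram.
TRANSITIONS = {
    State.CREATED: {State.READY},                  # incepted into memory
    State.READY: {State.RUNNING},                  # scheduled for execution
    State.RUNNING: {State.READY,                   # interrupted (pre-empted)
                    State.BLOCKED,                 # waits for event/resource
                    State.COMPLETED},              # execution completion
    State.BLOCKED: {State.READY},                  # event occurred, ready again
    State.COMPLETED: set(),                        # terminal state
}

def transition(current, target):
    """Return the new state, or raise if the transition is illegal."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

s = transition(State.CREATED, State.READY)
s = transition(s, State.RUNNING)
s = transition(s, State.BLOCKED)
```

Attempting, say, `transition(State.COMPLETED, State.RUNNING)` raises an error, mirroring the fact that a completed process cannot re-enter the life cycle.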

Threads:

• A thread is the primitive that can execute code
• A thread is a single sequential flow of control within a process
• A 'Thread' is also known as a lightweight process
• A process can have many threads of execution
• Different threads, which are part of a process, share the same address space; meaning they share the data memory, code memory and heap memory area
• Threads maintain their own thread status (CPU register values), Program Counter (PC) and stack

Figure 7: Memory organization of a process and its associated threads

The Concept of Multithreading:

Use of multiple threads to execute a process brings the following advantages:

• Better memory utilization: Multiple threads of the same process share the address space for data memory. This also reduces the complexity of inter-thread communication, since variables can be shared across the threads
• Speeds up the execution of the process: Since the process is split into different threads, when one thread enters a wait state the CPU can be utilized by other threads of the process that do not require the event which the other thread is waiting for
• Efficient CPU utilization: The CPU is engaged all the time

Figure 8: Process with multiple threads (the task/process comprises shared Code Memory and Data Memory, with separate stack and registers for Thread 1, Thread 2 and Thread 3). The Win32 example code shown in the figure:

    void main(void)
    {
        /* Create child thread 1 */
        CreateThread(NULL, 1000,
                     (LPTHREAD_START_ROUTINE) ChildThread1,
                     NULL, 0, &dwThreadID);

        /* Create child thread 2 */
        CreateThread(NULL, 1000,
                     (LPTHREAD_START_ROUTINE) ChildThread2,
                     NULL, 0, &dwThreadID);
    }

    int ChildThread1(void)
    {
        /* Do something */
    }

    int ChildThread2(void)
    {
        /* Do something */
    }
Thread V/s Process:

Thread | Process
A thread is a single unit of execution and is part of a process. | A process is a program in execution and contains one or more threads.
A thread does not have its own data memory and heap memory. It shares the data memory and heap memory with the other threads of the same process. | A process has its own code memory, data memory and stack memory.
A thread cannot live independently; it lives within the process. | A process contains at least one thread.
There can be multiple threads in a process. The first thread (main thread) calls the main function and occupies the start of the stack memory of the process. | Threads within a process share the code, data and heap memory. Each thread holds a separate memory area for its stack (shares the total stack memory of the process).
Threads are very inexpensive to create. | Processes are very expensive to create and involve a lot of OS overhead.
Context switching is inexpensive and fast. | Context switching is complex, involves a lot of OS overhead and is comparatively slower.
If a thread expires, its stack is reclaimed by the process. | If a process dies, the resources allocated to it are reclaimed by the OS and all the associated threads of the process also die.

Advantages of Threads:

1. Better memory utilization: Multiple threads of the same process share the address space for data memory. This also reduces the complexity of inter-thread communication, since variables can be shared across the threads.
2. Efficient CPU utilization: The CPU is engaged all the time.
3. Speeds up the execution of the process: The process is split into different threads; when one thread enters a wait state, the CPU can be utilized by other threads of the process that do not require the event which the other thread is waiting for.
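The shared data memory of threads can be demonstrated with a short sketch: two threads of the same process update one shared variable. The thread count, increment count and lock usage here are illustrative choices, not part of the notes.

```python
import threading

counter = 0                 # global data: shared by all threads of the process
lock = threading.Lock()     # protects the shared variable during updates

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:          # synchronized access to the shared data memory
            counter += 1

# Two threads of the same process operating on the same variable.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 2000: both threads updated the same shared data
```

Without the lock, the two threads could interleave their read-modify-write sequences and lose updates, which is exactly the kind of shared-resource problem that the task synchronization techniques in this unit address.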

Multiprocessing & Multitasking:

• The ability to execute multiple processes simultaneously is referred to as multiprocessing
• Systems which are capable of performing multiprocessing are known as multiprocessor systems
• Multiprocessor systems possess multiple CPUs and can execute multiple processes simultaneously
• The ability of the Operating System to have multiple programs in memory, which are ready for execution, is referred to as multiprogramming
• Multitasking refers to the ability of an operating system to hold multiple processes in memory and switch the processor (CPU) from executing one process to another process
• Multitasking involves 'Context switching', 'Context saving' and 'Context retrieval'
• Context switching refers to the switching of execution context from one task to another
• When a task/process switch happens, the current context of execution should be saved (Context saving), in order to retrieve it at a later point of time when the CPU resumes executing the process which is currently interrupted due to the execution switch
• During context switching, the context of the task to be executed is retrieved from the saved context list. This is known as Context retrieval
Multitasking – Context Switching:

Figure 9: Context Switching (Process 1 runs until execution switches to Process 2: the current context is saved into PCB0, the OS performs other operations related to 'Context Switching', and the context for Process 2 is reloaded from its PCB; the reverse sequence, saving into PCB1 and reloading Process 1's context, occurs on the switch back. The delay in the execution of each process is caused by the 'Context Switching', during which the other process waits in the 'Ready' queue)

• Multiprogramming: The ability of the Operating System to have multiple programs in memory, which are ready for execution, is referred to as multiprogramming.

Types of Multitasking:

Depending on how the task/process execution switching act is implemented, multitasking can be classified into:

• Co-operative Multitasking: Co-operative multitasking is the most primitive form of multitasking, in which a task/process gets a chance to execute only when the currently executing task/process voluntarily relinquishes the CPU. In this method, any task/process can avail the CPU for as much time as it wants. Since this type of implementation involves the tasks being at the mercy of each other for getting CPU time, it is known as co-operative multitasking. If the currently executing task is non-cooperative, the other tasks may have to wait a long time to get the CPU.

• Preemptive Multitasking: Preemptive multitasking ensures that every task/process gets a chance to execute. When, and for how much time, a process gets to execute depends on the implementation of the preemptive scheduling. As the name indicates, in preemptive multitasking the currently running task/process is preempted to give a chance to other tasks/processes to execute. The preemption of a task may be based on time slots or on task/process priority.

• Non-preemptive Multitasking: The process/task which is currently given the CPU time is allowed to execute until it terminates (enters the 'Completed' state) or enters the 'Blocked/Wait' state, waiting for an I/O. Co-operative and non-preemptive multitasking differ in their behavior with respect to the 'Blocked/Wait' state. In co-operative multitasking, the currently executing process/task need not relinquish the CPU when it enters the 'Blocked/Wait' state, waiting for an I/O, a shared resource access or an event to occur, whereas in non-preemptive multitasking the currently executing task relinquishes the CPU when it waits for an I/O.
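The co-operative model above can be illustrated with a minimal round-robin of generator-based tasks. The task bodies and yield points are invented for illustration; the key property shown is that each task runs until it voluntarily yields the CPU.

```python
from collections import deque

def task(name, steps, log):
    """A co-operative task: it runs one step, then voluntarily yields the CPU."""
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield            # voluntary relinquishing of the CPU

log = []
ready_queue = deque([task("A", 2, log), task("B", 2, log)])

# Co-operative scheduler: resume each task until it yields or completes.
while ready_queue:
    t = ready_queue.popleft()
    try:
        next(t)                 # run the task until it yields
        ready_queue.append(t)   # task yielded: back to the ready queue
    except StopIteration:
        pass                    # task completed

print(log)   # ['A:0', 'B:0', 'A:1', 'B:1']
```

If `task` never executed its `yield`, the other task would starve, which is exactly the drawback of a non-cooperative task described above; a preemptive kernel removes this dependence by forcing the switch itself.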

Task Scheduling:

• In a multitasking system, there should be some mechanism in place to share the CPU among the different tasks and to decide which process/task is to be executed at a given point of time
• Determining which task/process is to be executed at a given point of time is known as task/process scheduling
• Task scheduling forms the basis of multitasking
• Scheduling policies form the guidelines for determining which task is to be executed when
• The scheduling policies are implemented in an algorithm and it is run by the kernel as a service
• The kernel service/application which implements the scheduling algorithm is known as the 'Scheduler'
• The task scheduling policy can be pre-emptive, non-preemptive or co-operative
• Depending on the scheduling policy, the process scheduling decision may take place when a process switches its state to
  • 'Ready' state from 'Running' state
  • 'Blocked/Wait' state from 'Running' state
  • 'Ready' state from 'Blocked/Wait' state
  • 'Completed' state
Task Scheduling – Scheduler Selection:

The selection of a scheduling criterion/algorithm should consider:

• CPU Utilization: The scheduling algorithm should always keep the CPU utilization high. CPU utilization is a direct measure of what percentage of the CPU is being utilized.
• Throughput: This gives an indication of the number of processes executed per unit of time. The throughput for a good scheduler should always be high.
• Turnaround Time: The amount of time taken by a process to complete its execution. It includes the time spent by the process waiting for main memory, the time spent in the ready queue, the time spent completing I/O operations, and the time spent in execution. The turnaround time should be minimal for a good scheduling algorithm.
• Waiting Time: The amount of time spent by a process in the 'Ready' queue waiting to get CPU time for execution. The waiting time should be minimal for a good scheduling algorithm.
• Response Time: The time elapsed between the submission of a process and its first response. For a good scheduling algorithm, the response time should be as low as possible.

Task Scheduling – Queues:

To summarize, a good scheduling algorithm has high CPU utilization, minimum Turn Around Time (TAT), maximum throughput and least response time.

The various queues maintained by the OS in association with CPU scheduling are:
• Job Queue: The job queue contains all the processes in the system
• Ready Queue: Contains all the processes which are ready for execution and waiting for the CPU to get their turn for execution. The Ready queue is empty when there is no process ready for running
• Device Queue: Contains the set of processes which are waiting for an I/O device
Task Scheduling – Task transition through various Queues:

Figure 10: Process transition through various queues (an admitted process moves from the Job Queue to the Ready Queue; the Scheduler moves a process from the Ready Queue to the CPU, where it either runs to completion, is preempted back to the Ready Queue, or moves to the Device Queue on a resource request; the Device Manager moves an I/O-completed process back to the Ready Queue)


Non-preemptive scheduling – First Come First Served (FCFS)/First In First
Out (FIFO) Scheduling:

• Allocates CPU time to the processes based on the order in which they enter the
‘Ready’ queue
• The first entered process is serviced first
• It is the same as any real-world application where queue systems are used; e.g.
ticketing
Drawbacks:

• Favors monopoly of processes. A process which does not contain any I/O
operation continues its execution until it finishes its task
• In general, FCFS favors CPU bound processes; I/O bound processes may
have to wait until the completion of a CPU bound process, if the currently
executing process is a CPU bound process. This leads to poor device
utilization.
• The average waiting time is not minimal for the FCFS scheduling algorithm

EXAMPLE: Three processes with process IDs P1, P2, P3 and estimated
completion times 10, 5, 7 milliseconds respectively enter the ready queue together
in the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT)
for each process and the average waiting time and average Turn Around Time
(assuming there is no I/O waiting for the processes).

Solution: The sequence of execution of the processes by the CPU is represented as

P1 P2 P3
0 10 15 22
10 5 7
Assuming the CPU is readily available at the time of arrival of P1, P1 starts
executing without any waiting in the ‘Ready’ queue. Hence the waiting time for P1
is zero.

Waiting Time for P1 = 0 ms (P1 starts executing first)

Waiting Time for P2 = 10 ms (P2 starts executing after completing P1)

Waiting Time for P3 = 15 ms (P3 starts executing after completing P1 and P2)

Average waiting time = (Waiting time for all processes) / No. of processes

= (Waiting time for (P1 + P2 + P3)) / 3

= (0 + 10 + 15) / 3 = 25/3 = 8.33 milliseconds

Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue +
Execution Time)

Turn Around Time (TAT) for P2 = 15 ms (-Do-)

Turn Around Time (TAT) for P3 = 22 ms (-Do-)

Average Turn Around Time = (Turn Around Time for all processes) / No. of
processes

= (Turn Around Time for (P1 + P2 + P3)) / 3

= (10 + 15 + 22) / 3 = 47/3

= 15.66 milliseconds
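As a quick check of the arithmetic above, the FCFS waiting and turnaround times can be computed programmatically. This is an illustrative sketch (the function name `fcfs` and the tuple layout are my own, not from the text); it assumes all processes arrive together at t = 0, as in the example.

```python
# FCFS sketch: processes are served strictly in arrival order.
# Assumes all processes arrive together at t = 0 (as in the example above).
def fcfs(processes):
    """processes: list of (pid, burst_ms) in arrival order.
    Returns {pid: (waiting_ms, turnaround_ms)}."""
    t, stats = 0, {}
    for pid, burst in processes:
        # Waiting time is the start time; TAT is the completion time.
        stats[pid] = (t, t + burst)
        t += burst
    return stats

stats = fcfs([("P1", 10), ("P2", 5), ("P3", 7)])
avg_wait = sum(w for w, _ in stats.values()) / len(stats)
avg_tat = sum(tat for _, tat in stats.values()) / len(stats)
print(stats)      # {'P1': (0, 10), 'P2': (10, 15), 'P3': (15, 22)}
print(avg_wait)   # 25/3 = 8.33...
print(avg_tat)    # 47/3 = 15.66...
```

The same helper can re-check any of the non-preemptive examples by changing the service order of the tuples.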

Non-preemptive scheduling – Last Come First Served (LCFS)/Last In First
Out (LIFO) Scheduling:

• Allocates CPU time to the processes based on the order in which they enter
the ‘Ready’ queue
• The last entered process is serviced first
• LCFS scheduling is also known as Last In First Out (LIFO), since the
process which is put last into the ‘Ready’ queue is serviced first

Drawbacks:

• Favors monopoly of processes. A process which does not contain any I/O
operation continues its execution until it finishes its task

• In general, LCFS favors CPU bound processes; I/O bound processes may
have to wait until the completion of a CPU bound process, if the currently
executing process is a CPU bound process. This leads to poor device
utilization.

• The average waiting time is not minimal for the LCFS scheduling algorithm

EXAMPLE: Three processes with process IDs P1, P2, P3 and estimated completion
times 10, 5, 7 milliseconds respectively enter the ready queue together in the order
P1, P2, P3 (assume only P1 is present in the ‘Ready’ queue when the scheduler
picks it up, and P2, P3 entered the ‘Ready’ queue after that). Now a new process P4
with estimated completion time 6 ms enters the ‘Ready’ queue after 5 ms of
scheduling P1. Calculate the waiting time and Turn Around Time (TAT) for each
process and the average waiting time and average Turn Around Time. Assume all
the processes contain only CPU operations and no I/O operations are involved.

Solution: Initially there is only P1 available in the Ready queue and the scheduling
sequence will be P1, P3, P2. P4 enters the queue during the execution of P1 and
becomes the last process to enter the ‘Ready’ queue. Now the order of execution
changes to P1, P4, P3, P2 as given below.
P1 P4 P3 P2

0 10 16 23 28

10 6 7 5

The waiting time for all the processes is given as

Waiting Time for P1 = 0 ms (P1 starts executing first)

Waiting Time for P4 = 5 ms (P4 starts executing after completing P1. But P4
arrived after 5 ms of execution of P1. Hence its waiting time = Execution start time –
Arrival time = 10 – 5 = 5)

Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)

Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)

Average waiting time = (Waiting time for all processes) / No. of processes

= (Waiting time for (P1 + P4 + P3 + P2)) / 4

= (0 + 5 + 16 + 23) / 4 = 44/4

= 11 milliseconds

Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P4 = 11 ms (Time spent in Ready Queue +
Execution Time = (Execution Start Time – Arrival Time) + Estimated Execution
Time = (10 – 5) + 6 = 5 + 6)

Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)

Average Turn Around Time = (Turn Around Time for all processes) / No. of processes

= (Turn Around Time for (P1 + P4 + P3 + P2)) / 4

= (10 + 11 + 23 + 28) / 4 = 72/4

= 18 milliseconds
Non-preemptive scheduling – Shortest Job First (SJF) Scheduling:

• Allocates CPU time to the processes based on the execution completion time
for tasks

• The average waiting time for a given set of processes is minimal in SJF
scheduling

• Optimal compared to other non-preemptive scheduling algorithms like FCFS

Drawbacks:

• A process whose estimated execution completion time is high may not get a
chance to execute if more and more processes with lower estimated execution
times enter the ‘Ready’ queue before the process with the longest estimated
execution time starts its execution

• May lead to the ‘Starvation’ of processes with high estimated completion
times

• It is difficult to know in advance the next shortest process in the ‘Ready’
queue for scheduling, since new processes with different estimated execution
times keep entering the ‘Ready’ queue at any point of time.
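The claim that SJF minimizes the average waiting time can be illustrated with the burst set from the FCFS example (10, 5, 7 ms, all arriving together). A hedged sketch; the helper name `avg_waiting` is my own, and since all processes arrive together, SJF reduces to sorting by burst time.

```python
# Compare average waiting time for FCFS order vs SJF order.
# Assumes all processes arrive together, so SJF is just "sort by burst".
def avg_waiting(bursts_in_service_order):
    """Mean waiting time when bursts run back-to-back in the given order."""
    t, waits = 0, []
    for burst in bursts_in_service_order:
        waits.append(t)   # each process waits until the CPU reaches it
        t += burst
    return sum(waits) / len(waits)

bursts = [10, 5, 7]                   # arrival order P1, P2, P3
print(avg_waiting(bursts))            # FCFS: (0 + 10 + 15)/3 = 8.33...
print(avg_waiting(sorted(bursts)))    # SJF:  (0 + 5 + 12)/3  = 5.66...
```

Running the shortest burst first pushes the long waits onto the fewest processes, which is why the SJF average (5.66 ms) beats the FCFS average (8.33 ms) for the same workload.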

Non-preemptive scheduling – Priority based Scheduling

• A priority, which is unique or same, is associated with each task

• The priority of a task is expressed in different ways, like a priority number,
the time required to complete the execution, etc.

• In number based priority assignment the priority is a number ranging from 0
to the maximum priority supported by the OS. The maximum level of
priority is OS dependent.

• Windows CE supports 256 levels of priority (priority numbers 0 to 255, with
0 being the highest priority)

• The priority is assigned to the task on creating it. It can also be changed
dynamically (if the Operating System supports this feature)

• The non-preemptive priority based scheduler sorts the ‘Ready’ queue based
on priority and picks the process with the highest level of priority for
execution

EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated completion
times 10, 5, 7 milliseconds and priorities 0, 3, 2 (0 = highest priority, 3 = lowest
priority) respectively enter the ready queue together. Calculate the waiting time
and Turn Around Time (TAT) for each process and the average waiting time and
average Turn Around Time (assuming there is no I/O waiting for the processes)
under the priority based scheduling algorithm.

Solution: The scheduler sorts the ‘Ready’ queue based on priority and schedules
the process with the highest priority (P1 with priority number 0) first, then the next
highest priority process (P3 with priority number 2), and so on. The order in
which the processes are scheduled for execution is represented as

P1 P3 P2

0 10 17 22
10 7 5

The waiting time for all the processes is given as

Waiting Time for P1 = 0 ms (P1 starts executing first)

Waiting Time for P3 = 10 ms (P3 starts executing after completing P1)

Waiting Time for P2 = 17 ms (P2 starts executing after completing P1 and P3)

Average waiting time = (Waiting time for all processes) / No. of processes

= (Waiting time for (P1 + P3 + P2)) / 3

= (0 + 10 + 17) / 3 = 27/3

= 9 milliseconds

Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P3 = 17 ms (-Do-)

Turn Around Time (TAT) for P2 = 22 ms (-Do-)

Average Turn Around Time = (Turn Around Time for all processes) / No. of processes

= (Turn Around Time for (P1 + P3 + P2)) / 3

= (10 + 17 + 22) / 3 = 49/3

= 16.33 milliseconds

Drawbacks:

• Similar to the SJF scheduling algorithm, the non-preemptive priority based
algorithm also possesses the drawback of ‘Starvation’, where a process whose
priority is low may not get a chance to execute if more and more processes
with higher priorities enter the ‘Ready’ queue before the process with lower
priority starts its execution.

• ‘Starvation’ can be effectively tackled in priority based non-preemptive
scheduling by dynamically raising the priority of the low priority
task/process which is under starvation (waiting in the ready queue for a
long time to get CPU time)

• The technique of gradually raising the priority of processes which are
waiting in the ‘Ready’ queue as time progresses, in order to prevent
‘Starvation’, is known as ‘Aging’.
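The ‘Aging’ idea can be sketched as follows. This is purely illustrative (the function, the aging rate, and the process names are my own assumptions, not from the text): on every scheduling pass, each process left waiting has its priority number decremented, i.e. its priority raised, so a starving process eventually reaches the front of the queue.

```python
# Aging sketch: on every scheduling pass, each process still waiting has its
# priority number decremented (0 = highest priority), preventing starvation.
AGING_STEP = 1   # assumed aging rate: one priority level per pass

def pick_with_aging(ready):
    """ready: list of {'pid', 'prio'} dicts. Picks the highest priority
    process and ages everything left behind. Mutates the list in place."""
    ready.sort(key=lambda p: p["prio"])          # stable sort, 0 first
    chosen = ready.pop(0)
    for p in ready:                              # the rest keep waiting
        p["prio"] = max(0, p["prio"] - AGING_STEP)
    return chosen

# A low priority process competing with a steady stream of high priority work:
ready = [{"pid": "LOW", "prio": 3}]
picked = []
for i in range(5):
    ready.append({"pid": f"H{i}", "prio": 1})    # new high priority arrival
    picked.append(pick_with_aging(ready)["pid"])
print(picked)   # "LOW" gets scheduled after a few passes thanks to aging
```

Without the aging step, "LOW" would lose every comparison against the fresh priority-1 arrivals and never run; with it, its priority number reaches 1 after two passes and it wins the tie.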
Preemptive scheduling:
• Employed in systems which implement the preemptive multitasking model

• Every task in the ‘Ready’ queue gets a chance to execute. When and how
often each process gets a chance to execute (gets the CPU time) depends on
the type of preemptive scheduling algorithm used for scheduling the
processes

• The scheduler can preempt (temporarily stop) the currently executing
task/process and select another task from the ‘Ready’ queue for execution

• When to preempt a task, and which task is to be picked up from the ‘Ready’
queue for execution after preempting the current task, is purely dependent on
the scheduling algorithm

• A task which is preempted by the scheduler is moved to the ‘Ready’ queue.
The act of moving a ‘Running’ process/task into the ‘Ready’ queue by the
scheduler, without the process requesting it, is known as ‘Preemption’

• Time-based preemption and priority-based preemption are the two important
approaches adopted in preemptive scheduling

Preemptive scheduling – Preemptive SJF Scheduling/Shortest Remaining
Time (SRT):

• The non-preemptive SJF scheduling algorithm sorts the ‘Ready’ queue only
after the current process completes execution or enters a wait state, whereas
the preemptive SJF scheduling algorithm sorts the ‘Ready’ queue when a
new process enters it and checks whether the execution time of the new
process is shorter than the remaining estimated execution time of the
currently executing process

• If the execution time of the new process is less, the currently executing
process is preempted and the new process is scheduled for execution

• It always compares the execution completion time (i.e. the remaining
execution time) of a new process entering the ‘Ready’ queue with the
remaining time for completion of the currently executing process, and
schedules the process with the shortest remaining time for execution.

EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated completion
times 10, 5, 7 milliseconds respectively enter the ready queue together. A new
process P4 with estimated completion time 2 ms enters the ‘Ready’ queue after
2 ms. Calculate the waiting time and Turn Around Time (TAT) for each process
and the average waiting time and average Turn Around Time. Assume all the
processes contain only CPU operations and no I/O operations are involved.

Solution: At the beginning, there are only three processes (P1, P2 and P3) available
in the ‘Ready’ queue and the SRT scheduler picks up the process with the shortest
remaining time for execution completion (in this example P2, with remaining time
5 ms) for scheduling. Now process P4 with estimated execution completion time
2 ms enters the ‘Ready’ queue after 2 ms of start of execution of P2. The processes
are re-scheduled for execution in the following order

P2 P4 P2 P3 P1

0 2 4 7 14 24
2 2 3 7 10

The waiting time for all the processes is given as

Waiting Time for P2 = 0 ms + (4 – 2) ms = 2 ms (P2 starts executing first, is
preempted by P4, and has to wait till the completion of P4
to get the next CPU slot)

Waiting Time for P4 = 0 ms (P4 starts executing by preempting P2, since the
execution time for completion of P4 (2 ms) is less than the
remaining time for execution completion of P2 (here, 3 ms))

Waiting Time for P3 = 7 ms (P3 starts executing after completing P4 and P2)

Waiting Time for P1 = 14 ms (P1 starts executing after completing P4, P2 and P3)

Average waiting time = (Waiting time for all the processes) / No. of processes

= (Waiting time for (P4 + P2 + P3 + P1)) / 4

= (0 + 2 + 7 + 14) / 4 = 23/4

= 5.75 milliseconds

Turn Around Time (TAT) for P2 = 7 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P4 = 2 ms (Time spent in Ready Queue +
Execution Time = (Execution Start Time – Arrival Time) + Estimated Execution
Time = (2 – 2) + 2)

Turn Around Time (TAT) for P3 = 14 ms (Time spent in Ready Queue +
Execution Time)

Turn Around Time (TAT) for P1 = 24 ms (Time spent in Ready Queue +
Execution Time)

Average Turn Around Time = (Turn Around Time for all the processes) / No. of processes

= (Turn Around Time for (P2 + P4 + P3 + P1)) / 4

= (7 + 2 + 14 + 24) / 4 = 47/4

= 11.75 milliseconds
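The SRT schedule above can be reproduced with a simple 1-ms-step simulation. This is an illustrative sketch (function name and tuple layout are assumptions); ties in remaining time are broken by list order, which matches the example.

```python
# SRT (preemptive SJF) sketch: at every millisecond, run the ready process
# with the shortest remaining execution time.
def srt(processes):
    """processes: list of (pid, burst_ms, arrival_ms).
    Returns {pid: finish_time_ms}."""
    remaining = {pid: burst for pid, burst, _ in processes}
    arrival = {pid: arr for pid, _, arr in processes}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= t]
        if not ready:                 # CPU idle until the next arrival
            t += 1
            continue
        cur = min(ready, key=lambda p: remaining[p])
        remaining[cur] -= 1           # run the chosen process for 1 ms
        t += 1
        if remaining[cur] == 0:
            del remaining[cur]
            finish[cur] = t
    return finish

procs = [("P1", 10, 0), ("P2", 5, 0), ("P3", 7, 0), ("P4", 2, 2)]
finish = srt(procs)
tat = {pid: finish[pid] - arr for pid, _, arr in procs}
print(finish)   # P4 finishes at 4, P2 at 7, P3 at 14, P1 at 24
print(tat)      # TAT: P4 = 2, P2 = 7, P3 = 14, P1 = 24 -> average 11.75 ms
```

The simulation confirms the hand calculation: P4 preempts P2 at t = 2 because its 2 ms burst is shorter than P2's remaining 3 ms.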
Preemptive scheduling – Round Robin (RR) Scheduling:

• Each process in the ‘Ready’ queue is executed for a pre-defined time slot.

• The execution starts with picking up the first process in the ‘Ready’ queue.
It is executed for a pre-defined time.

[Figure 11: Round Robin Scheduling — execution switches cyclically between Process 1, Process 2, Process 3 and Process 4.]

• When the pre-defined time elapses or the process completes (before the pre-
defined time slice), the next process in the ‘Ready’ queue is selected for
execution.

• This is repeated for all the processes in the ‘Ready’ queue

• Once each process in the ‘Ready’ queue is executed for the pre-defined time
period, the scheduler comes back and picks the first process in the ‘Ready’
queue again for execution.

• Round Robin scheduling is similar to FCFS scheduling; the only
difference is that a time slice based preemption is added to switch the
execution between the processes in the ‘Ready’ queue

EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion times 6, 4, 2 milliseconds respectively enter the ready queue together
in the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT)
for each process and the average waiting time and average Turn Around Time
(assuming there is no I/O waiting for the processes) under the RR algorithm with
time slice = 2 ms.

Solution: The scheduler sorts the ‘Ready’ queue based on the FCFS policy, picks
up the first process P1 from the ‘Ready’ queue and executes it for the time slice of
2 ms. When the time slice expires, P1 is preempted and P2 is scheduled for
execution. The time slice expires after 2 ms of execution of P2. Now P2 is
preempted and P3 is picked up for execution. P3 completes its execution within the
time slice and the scheduler picks P1 again for execution for the next time slice.
This procedure is repeated till all the processes are serviced. The order in which the
processes are scheduled for execution is represented as

P1 P2 P3 P1 P2 P1

0 2 4 6 8 10 12
2 2 2 2 2 2
The waiting time for all the processes is given as

Waiting Time for P1 = 0 + (6 – 2) + (10 – 8) = 0 + 4 + 2 = 6 ms (P1 starts executing
first, waits for two time slices to get execution back, and again
waits one time slice for getting CPU time)

Waiting Time for P2 = (2 – 0) + (8 – 4) = 2 + 4 = 6 ms (P2 starts executing after P1
executes for one time slice and waits for two time
slices to get the CPU time)

Waiting Time for P3 = (4 – 0) = 4 ms (P3 starts executing after completing the first
time slices for P1 and P2, and completes its execution in a single time slice.)

Average waiting time = (Waiting time for all the processes) / No. of processes

= (Waiting time for (P1 + P2 + P3)) / 3

= (6 + 6 + 4) / 3 = 16/3

= 5.33 milliseconds

Turn Around Time (TAT) for P1 = 12 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P2 = 10 ms (-Do-)

Turn Around Time (TAT) for P3 = 6 ms (-Do-)

Average Turn Around Time = (Turn Around Time for all the processes) / No. of processes

= (Turn Around Time for (P1 + P2 + P3)) / 3

= (12 + 10 + 6) / 3 = 28/3

= 9.33 milliseconds.
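The Round Robin schedule can be sketched with a FIFO queue: a preempted process goes to the back and waits for its next slice. This is an illustrative sketch reproducing the example (bursts 6, 4, 2 ms; time slice 2 ms; all processes arrive together); the function name is my own.

```python
from collections import deque

# Round Robin sketch: run each process for at most one time slice, then
# rotate it to the back of the queue if it still has work left.
def round_robin(processes, time_slice):
    """processes: list of (pid, burst_ms) in 'Ready' queue order.
    Returns {pid: completion_time_ms}."""
    q = deque(processes)
    t, finish = 0, {}
    while q:
        pid, rem = q.popleft()
        run = min(time_slice, rem)
        t += run
        if rem > run:
            q.append((pid, rem - run))   # preempted: back of the queue
        else:
            finish[pid] = t              # done within this slice
    return finish

finish = round_robin([("P1", 6), ("P2", 4), ("P3", 2)], time_slice=2)
print(finish)                                  # P3 at 6, P2 at 10, P1 at 12
avg_tat = sum(finish.values()) / len(finish)   # all arrive at t = 0
print(avg_tat)                                 # 28/3 = 9.33...
```

Since every process arrives at t = 0, the completion times are exactly the TATs computed by hand above.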
Preemptive scheduling – Priority based Scheduling
• Same as non-preemptive priority based scheduling, except for the
switching of execution between tasks

• In preemptive priority based scheduling, any high priority process entering
the ‘Ready’ queue is immediately scheduled for execution, whereas in
non-preemptive scheduling any high priority process entering the ‘Ready’
queue is scheduled only after the currently executing process completes its
execution or voluntarily releases the CPU

• The priority of a task/process in preemptive priority based scheduling is
indicated in the same way as in the mechanisms adopted for non-
preemptive priority based scheduling.

EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion times 10, 5, 7 milliseconds and priorities 1, 3, 2 (0 = highest priority,
3 = lowest priority) respectively enter the ready queue together. A new process P4
with estimated completion time 6 ms and priority 0 enters the ‘Ready’ queue after
5 ms of start of execution of P1. Calculate the waiting time and Turn Around Time
(TAT) for each process and the average waiting time and average Turn Around
Time. Assume all the processes contain only CPU operations and no I/O
operations are involved.

Solution: At the beginning, there are only three processes (P1, P2 and P3) available
in the ‘Ready’ queue and the scheduler picks up the process with the highest
priority (in this example P1, with priority 1) for scheduling. Now process P4 with
estimated execution completion time 6 ms and priority 0 enters the ‘Ready’ queue
after 5 ms of start of execution of P1. The processes are re-scheduled for execution
in the following order

P1 P4 P1 P3 P2

0 5 11 16 23 28
5 6 5 7 5
The waiting time for all the processes is given as

Waiting Time for P1 = 0 + (11 – 5) = 0 + 6 = 6 ms (P1 starts executing first, gets
preempted by P4 after 5 ms, and again gets the CPU time
after completion of P4)

Waiting Time for P4 = 0 ms (P4 starts executing immediately on entering the
‘Ready’ queue, by preempting P1)

Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)

Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)

Average waiting time = (Waiting time for all the processes) / No. of processes

= (Waiting time for (P1 + P4 + P3 + P2)) / 4

= (6 + 0 + 16 + 23) / 4 = 45/4

= 11.25 milliseconds

Turn Around Time (TAT) for P1 = 16 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P4 = 6 ms (Time spent in Ready Queue + Execution Time
= (Execution Start Time – Arrival Time) + Estimated Execution Time = (5 – 5) + 6 = 0 + 6)

Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)

Average Turn Around Time = (Turn Around Time for all the processes) / No. of processes

= (Turn Around Time for (P1 + P4 + P3 + P2)) / 4

= (16 + 6 + 23 + 28) / 4 = 73/4

= 18.25 milliseconds
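The same 1-ms-step style of simulation reproduces the preemptive priority schedule: at every tick the ready process with the lowest priority number runs, so P4 preempts P1 the moment it arrives. Names and tuple layout are illustrative assumptions, not from the text.

```python
# Preemptive priority sketch (lower number = higher priority): at each
# millisecond, run the highest priority process that has arrived.
def preemptive_priority(processes):
    """processes: list of (pid, burst_ms, priority, arrival_ms).
    Returns {pid: finish_time_ms}."""
    rem = {pid: b for pid, b, _, _ in processes}
    prio = {pid: p for pid, _, p, _ in processes}
    arr = {pid: a for pid, _, _, a in processes}
    finish, t = {}, 0
    while rem:
        ready = [p for p in rem if arr[p] <= t]
        if not ready:                            # CPU idle until an arrival
            t += 1
            continue
        cur = min(ready, key=lambda p: prio[p])  # highest priority wins
        rem[cur] -= 1
        t += 1
        if rem[cur] == 0:
            del rem[cur]
            finish[cur] = t
    return finish

procs = [("P1", 10, 1, 0), ("P2", 5, 3, 0), ("P3", 7, 2, 0), ("P4", 6, 0, 5)]
finish = preemptive_priority(procs)
print(finish)   # P4 finishes at 11, P1 at 16, P3 at 23, P2 at 28
```

Subtracting each arrival time from its finish time gives the TATs computed by hand (6, 16, 23 and 28 ms, averaging 18.25 ms).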
How to choose an RTOS:
The decision of an RTOS for an embedded design is very critical.

A lot of factors need to be analyzed carefully before making a decision on the
selection of an RTOS.

These factors can be either

1. Functional or

2. Non-functional requirements.

1. Functional Requirements:

1. Processor support:

It is not necessary that all RTOSs support all kinds of processor architectures.

It is essential to ensure processor support by the RTOS.

2. Memory Requirements:

• The RTOS requires ROM memory for holding the OS files, and it is
normally stored in a non-volatile memory like FLASH.

• The OS also requires working memory (RAM) for loading the OS services.

• Since embedded systems are memory constrained, it is essential to evaluate
the minimal RAM and ROM requirements for the OS under consideration.

3. Real-Time Capabilities:

It is not mandatory that the OS for all embedded systems be Real-Time, and
not all embedded OSs are ‘Real-Time’ in behavior.

The task/process scheduling policies play an important role in the Real-Time
behavior of an OS.
4. Kernel and Interrupt Latency:

The kernel of the OS may disable interrupts while executing certain services,
and this may lead to interrupt latency.

For an embedded system whose response requirements are high, this latency
should be minimal.

5. Interprocess Communication (IPC) and Task Synchronization: The
implementation of IPC and synchronization is OS kernel dependent.

6. Modularization Support:

Most OSs provide a bunch of features.

It is very useful if the OS supports modularization, wherein the developer can
choose the essential modules and re-compile the OS image for functioning.

7. Support for Networking and Communication:

The OS kernel may provide stack implementations and driver support for a
bunch of communication interfaces and networking.

Ensure that the OS under consideration provides support for all the
interfaces required by the embedded product.

8. Development Language Support:

Certain OSs include the runtime libraries required for running applications
written in languages like Java and C++.

The OS may include these components as built-in components; if not, check
the availability of the same from a third party.
2. Non-Functional Requirements:

1. Custom Developed or Off the Shelf:

It is possible to go for the complete development of an OS suiting the
embedded system needs, or to use an off-the-shelf, readily available OS.

It may be possible to build the required features by customizing an open
source OS.

The decision on which to select is purely dependent on the development cost,
licensing fees for the OS, development time and availability of skilled
resources.

2. Cost:

The total cost of developing or buying the OS and maintaining it, in terms of
commercial product and custom build needs, has to be evaluated before
taking a decision on the selection of the OS.

3. Development and Debugging Tools Availability:

The availability of development and debugging tools is a critical decision-
making factor in the selection of an OS for embedded design.

Certain OSs may be superior in performance, but the availability of tools for
supporting the development may be limited.

4. Ease of Use:

How easy it is to use a commercial RTOS is another important feature that
needs to be considered in the RTOS selection.

5. After Sales:

For a commercial embedded RTOS, after-sales support in the form of e-mail,
on-call services etc. for bug fixes, critical patch updates and support for
production issues should be analyzed thoroughly.
Device Drivers:
• A device driver is a piece of software that acts as a bridge between the
operating system and the hardware

• The user applications talk to the OS kernel for all necessary information
exchange, including communication with the hardware peripherals

[Figure: Driver architecture — User Level Applications/Tasks (App 1, App 2, App 3) sit on top of the Operating System Services (Kernel), which sit on top of the Device Drivers, which in turn sit on top of the Hardware.]

• The architecture of the OS kernel will not allow direct device access from the
user application

• All device related access should flow through the OS kernel, and the OS
kernel routes it to the concerned hardware peripheral

• The OS provides interfaces in the form of Application Programming Interfaces
(APIs) for accessing the hardware

• The device driver abstracts the hardware from user applications

• Device drivers are responsible for initiating and managing the
communication with the hardware peripherals

• Drivers which come as part of the operating system image are known as
‘built-in drivers’ or ‘onboard’ drivers, e.g. the NAND FLASH driver

• Drivers which need to be installed on the fly for communicating with add-on
devices are known as ‘installable drivers’

• For installable drivers, the driver is loaded on a need basis when the device
is present and unloaded when the device is removed/detached

• The ‘Device Manager’ service of the OS kernel is responsible for loading and
unloading the driver, managing the driver, etc.

• The underlying implementation of a device driver is OS kernel dependent

• How the driver communicates with the kernel is dependent on the OS
structure and implementation

• Device drivers can run in either user space or kernel space

• Device drivers which run in user space are known as user mode drivers, and
drivers which run in kernel space are known as kernel mode drivers

• User mode drivers are safer than kernel mode drivers

• If an error or exception occurs in a user mode driver, it won’t affect the
services of the kernel

• If an exception occurs in a kernel mode driver, it may lead to a kernel crash

• The way a device driver is written and how the interrupts are handled in it
are operating system and target hardware specific.

• The device driver implements the following:

• Device (Hardware) Initialization and Interrupt configuration
• Interrupt handling and processing

• Client interfacing (interfacing with user applications)

• The basic interrupt configuration involves the following:

• Set the interrupt type (Edge Triggered (Rising/Falling) or Level Triggered
(Low or High)), enable the interrupts and set the interrupt priorities.

• The processor identifies an interrupt through its IRQ.

• IRQs are generated by the Interrupt Controller.

• Register an Interrupt Service Routine (ISR) with an Interrupt Request (IRQ).

• When an interrupt occurs, depending on its priority, it is serviced and the
corresponding ISR is invoked.

• The processing part of an interrupt is handled in an ISR.

• The whole interrupt processing can be done by the ISR itself or by invoking
an Interrupt Service Thread (IST).

• The IST performs interrupt processing on behalf of the ISR.

• It is always advisable to use an IST for interrupt processing, to keep the ISR
compact and short.
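The ISR/IST split can be illustrated with the following pattern sketch (in Python, purely for illustration; a real driver would implement this in C with OS-specific primitives, and all names here are my own): the "ISR" does the bare minimum of queueing the event, and a separate Interrupt Service Thread drains the queue and performs the actual processing.

```python
import queue
import threading

# ISR/IST pattern sketch: the "ISR" only records the event and returns,
# while the Interrupt Service Thread does the heavier processing.
events = queue.Queue()
processed = []

def isr(irq_number):
    """Kept deliberately short, as advised: hand off the event and return."""
    events.put(irq_number)

def ist():
    """Runs in its own thread and performs the interrupt processing."""
    while True:
        irq = events.get()
        if irq is None:               # sentinel: shut the thread down
            break
        processed.append(f"handled IRQ {irq}")

worker = threading.Thread(target=ist)
worker.start()
for n in (3, 7):                      # simulate two interrupts firing
    isr(n)
events.put(None)                      # tell the IST to exit
worker.join()
print(processed)
```

The key property the pattern demonstrates is that `isr()` returns almost immediately, so further interrupts are not blocked while earlier ones are still being processed by the IST.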
