ES Notes
Operating system basics, types of operating systems, tasks, processes and threads, multiprocessing and multitasking, task scheduling: non-preemptive and preemptive scheduling; task communication: shared memory, message passing, Remote Procedure Call and sockets; task synchronization: task communication/synchronization issues, task synchronization techniques.
Operating System Basics:
The Operating System acts as a bridge between the user applications/tasks and the underlying system resources through a set of system functionalities and services.
The OS manages the system resources and makes them available to the user applications/tasks on a need basis.
The primary functions of an Operating System are:
Make the system convenient to use
Organize and manage the system resources efficiently and correctly
Figure 1: The Architecture of the Operating System (user applications access kernel services such as memory management, process management, time management, file system management and I/O system management through the Application Programming Interface (API); the kernel talks to the underlying hardware through the device driver interface).
The Kernel:
The kernel is the core of the operating system.
It is responsible for managing the system resources and the communication among the hardware and other system services.
The kernel acts as the abstraction layer between system resources and user applications.
The kernel contains a set of system libraries and services.
For a general purpose OS, the kernel contains different services like:
Process Management
Primary Memory Management
File System Management
I/O System (Device) Management
Secondary Storage Management
Protection
Time Management
Interrupt Handling
Kernel Space and User Space:
The memory space in which the kernel code is located is known as ‘Kernel Space’.
All user applications are loaded into a specific area of primary memory, and this memory area is referred to as ‘User Space’.
The partitioning of memory into kernel and user space is purely Operating System dependent.
Most operating systems keep the kernel application code in main memory; it is not swapped out into secondary memory.
Monolithic Kernel:
All kernel services run in the kernel space.
All kernel modules run within the same memory space under a single kernel thread.
The tight internal integration of kernel modules in the monolithic kernel architecture allows effective utilization of the low-level features of the underlying system.
The major drawback of a monolithic kernel is that any error or failure in any one of the kernel modules leads to the crashing of the entire kernel application.
LINUX, SOLARIS and MS-DOS kernels are examples of monolithic kernels.
Figure 2: The Monolithic Kernel Model (applications run in user space; the monolithic kernel, with all operating system services, runs in kernel space).
Microkernel:
The microkernel design incorporates only the essential set of Operating System services into the kernel.
The rest of the Operating System services are implemented in programs known as ‘servers’, which run in user space.
Memory management, process management, timer systems and interrupt handlers are examples of the essential services which form part of the microkernel.
Figure 3: The Microkernel Model (the microkernel contains only essential services such as memory management, process management and the timer system).
QNX and Minix 3 kernels are examples of microkernels.
Benefits of Microkernel:
1. Robustness: If a problem is encountered in any of the services running as a ‘server’, that server can be reconfigured and restarted without the need for restarting the entire OS.
2. Configurability: Any service which runs as a ‘server’ application can be changed without the need to restart the whole system.
Types of Operating Systems:
Depending on the type of kernel and kernel services, the purpose and type of computing system where the OS is deployed, and the responsiveness to applications, Operating Systems are classified into:
1. General Purpose Operating System (GPOS)
2. Real Time Operating System (RTOS)
1. General Purpose Operating System (GPOS):
Operating Systems which are deployed in general computing systems.
The kernel is more generalized and contains all the services required to execute generic applications.
Need not be deterministic in execution behavior.
May inject random delays into application software and thus cause slow responsiveness of an application at unexpected times.
Usually deployed in computing systems where deterministic behavior is not an important criterion.
A Personal Computer/Desktop system is a typical example of a system where a GPOS is deployed.
Windows XP, MS-DOS etc. are examples of General Purpose Operating Systems.
2. Real Time Operating System (RTOS):
Operating Systems which are deployed in embedded systems demanding real-time response.
Deterministic in execution behavior; consumes only a known amount of time for kernel applications.
Implements policies and rules concerning time-critical allocation of a system’s resources.
Windows CE, QNX, VxWorks, MicroC/OS-II etc. are examples of Real Time Operating Systems (RTOS).
The Real Time Kernel: The kernel of a Real Time Operating System is referred to as the Real Time kernel. In contrast to the conventional OS kernel, the Real Time kernel is highly specialized and contains only the minimal set of services required for running the user applications/tasks. The basic functions of a Real Time kernel are:
a) Task/Process management
b) Task/Process scheduling
c) Task/Process synchronization
d) Error/Exception handling
e) Memory management
f) Interrupt handling
g) Time management
Task Control Block (TCB): The Task Control Block (TCB) is used for holding the information corresponding to a task. A TCB usually contains the following set of information (an illustrative C-structure sketch is given after this list):
Task ID: Task identification number.
Task State: The current state of the task (e.g. State = ‘Ready’ for a task which is ready to execute).
Task Type: Indicates the type of the task; the task can be a hard real-time, soft real-time or background task.
Task Priority: Task priority (e.g. Task priority = 1 for a task with priority = 1).
Task Context Pointer: Pointer used for context saving.
Task Memory Pointers: Pointers to the code memory, data memory and stack memory of the task.
Task System Resource Pointers: Pointers to system resources (semaphores, mutexes etc.) used by the task.
Task Pointers: Pointers to other TCBs (TCBs of the preceding, next and waiting tasks).
Other Parameters: Other relevant task parameters.
The parameters and implementation of the TCB are kernel dependent. The TCB parameters vary across different kernels, based on the task management implementation.
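The listed fields can be pictured as a C structure. The sketch below is only illustrative; the field names, types and the task_state_t enumeration are assumptions, since the actual layout is kernel dependent.

/* Illustrative sketch of a Task Control Block (TCB).
   Field names and types are assumptions; real kernels define their own layout. */
typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED, TASK_COMPLETED } task_state_t;

typedef struct tcb {
    unsigned int  task_id;        /* Task identification number                    */
    task_state_t  task_state;     /* Current state (e.g. TASK_READY)               */
    unsigned int  task_type;      /* Hard real-time / soft real-time / background  */
    unsigned int  task_priority;  /* Priority value assigned to the task           */
    void         *context_ptr;    /* Pointer used for context saving               */
    void         *code_mem_ptr;   /* Pointer to the task's code memory             */
    void         *data_mem_ptr;   /* Pointer to the task's data memory             */
    void         *stack_mem_ptr;  /* Pointer to the task's stack memory            */
    void         *resource_ptrs;  /* Pointers to semaphores, mutexes etc.          */
    struct tcb   *prev, *next;    /* Links to preceding/next TCBs                  */
} tcb_t;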
Memory Management:
The memory allocation time increases depending on the size of the block of memory to be allocated and the state of the allocated memory block (an initialized memory block consumes more allocation time than an uninitialized memory block).
The RTOS kernel uses fixed-size blocks of dynamic memory, and a block is allocated to a task on a need basis. The blocks are stored in a ‘Free Buffer Queue’ (a sketch follows below).
Most RTOS kernels allow tasks to access any of the memory blocks without any memory protection, to achieve predictable timing and avoid timing overheads.
RTOS kernels assume that the whole design is proven correct and protection is unnecessary. Some commercial RTOS kernels offer memory protection as an option, and the kernel enters a fail-safe mode when an illegal memory access occurs.
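A minimal sketch of how such fixed-size block allocation from a free buffer pool might look. The names rtos_alloc_block/rtos_free_block, the block size and the LIFO free list are assumptions for illustration, not a real kernel API.

/* Illustrative fixed-size block allocator backed by a pool of free blocks.
   Allocation and release are O(1), which keeps the timing predictable. */
#include <stddef.h>

#define BLOCK_SIZE   64          /* size of each fixed block in bytes (assumed) */
#define BLOCK_COUNT  32          /* number of blocks in the pool (assumed)      */

static unsigned char pool[BLOCK_COUNT][BLOCK_SIZE];
static void *free_list[BLOCK_COUNT];   /* list of pointers to free blocks       */
static int   free_top = -1;

void pool_init(void)                   /* put every block on the free list      */
{
    for (int i = 0; i < BLOCK_COUNT; i++)
        free_list[++free_top] = pool[i];
}

void *rtos_alloc_block(void)           /* pop one free block, no searching      */
{
    return (free_top >= 0) ? free_list[free_top--] : NULL;
}

void rtos_free_block(void *blk)        /* push the block back onto the list     */
{
    if (blk != NULL && free_top < BLOCK_COUNT - 1)
        free_list[++free_top] = blk;
}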
Interrupt Handling:
Interrupts can be either synchronous or asynchronous.
Interrupts which occur in sync with the currently executing task are known as synchronous interrupts. Usually software interrupts fall under the synchronous interrupt category. Divide-by-zero and memory segmentation errors are examples of synchronous interrupts.
For synchronous interrupts, the interrupt handler runs in the same context as the interrupting task.
Priority levels can be assigned to the interrupts, and each interrupt can be enabled or disabled individually.
Time Management:
The ‘Timer tick’ is taken as the timing reference by the kernel. The ‘Timer tick’ interval may vary depending on the hardware timer. Usually the ‘Timer tick’ interval is in the microseconds range.
The time parameters for tasks are expressed as multiples of the ‘Timer tick’.
The System time is updated based on the ‘Timer tick’.
If the System time register is 32 bits wide and the ‘Timer tick’ interval is 1 microsecond, the System time register will reset in 2^32 × 10^-6 seconds ≈ 4295 seconds ≈ 0.0497 days ≈ 1.19 hours.
If the ‘Timer tick’ interval is 1 millisecond, the System time register will reset in 2^32 × 10^-3 seconds ≈ 49.7 days.
The ‘Timer tick’ interrupt is handled by the ‘Timer Interrupt’ handler of the kernel. The ‘Timer tick’ interrupt can be utilized for implementing the following actions (a sketch of such a handler is given after this list):
Save the current context (context of the currently executing task).
Increment the System time register by one. Generate a timing error and reset the System time register if the timer tick count is greater than the maximum range available for the System time register.
Activate the periodic tasks which are in the idle state.
Invoke the scheduler and schedule the tasks again based on the scheduling algorithm.
Delete all the terminated tasks and their associated data structures (TCBs).
Load the context for the first task in the ready queue. Due to the re-scheduling, the ready task might be changed to a new one from the task which was pre-empted by the ‘Timer Interrupt’.
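A minimal sketch of a timer tick handler tying these steps together. Every function and variable name here is a placeholder assumption declared only so the sketch compiles; real RTOS kernels expose their own APIs.

/* Placeholder kernel services (assumptions, declared so the sketch compiles). */
extern void  save_context(void *task);
extern void  increment_system_time(void);
extern void  activate_periodic_tasks(void);
extern void  schedule(void);
extern void  delete_terminated_tasks(void);
extern void *ready_queue_head(void);
extern void  load_context(void *task);
extern void *current_task;

/* Illustrative timer tick interrupt handler performing the listed actions. */
void timer_tick_isr(void)
{
    save_context(current_task);        /* 1. save context of the running task   */
    increment_system_time();           /* 2. bump system time; handle overflow  */
    activate_periodic_tasks();         /* 3. wake idle periodic tasks           */
    schedule();                        /* 4. re-run the scheduling algorithm    */
    delete_terminated_tasks();         /* 5. reclaim TCBs of finished tasks     */
    load_context(ready_queue_head());  /* 6. resume the first ready task        */
}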
Hard Real-time System:
A hard real-time system must meet the deadlines for a task without any slippage.
Missing any deadline may produce catastrophic results for hard real-time systems, including permanent data loss and irrecoverable damage to the system/users.
Emphasizes the principle ‘A late answer is a wrong answer’.
Air bag control systems and Anti-lock Braking Systems (ABS) of vehicles are typical examples of hard real-time systems.
As a rule of thumb, hard real-time systems do not implement the virtual memory model for handling memory. This eliminates the delay in swapping the code corresponding to a task in and out of primary memory.
The presence of a Human In The Loop (HITL) for tasks introduces unexpected delays in task execution. Most hard real-time systems are fully automatic and do not contain a ‘human in the loop’.
Soft Real-time System:
Real Time Operating Systems that do not guarantee meeting deadlines, but offer their best effort to meet the deadline.
Missing deadlines for tasks is acceptable if the frequency of deadline misses is within the compliance limit of the Quality of Service (QoS).
A soft real-time system emphasizes the principle ‘A late answer is an acceptable answer, but it could have been done a bit faster’.
Soft real-time systems most often have a ‘human in the loop (HITL)’.
An Automatic Teller Machine (ATM) is a typical example of a soft real-time system. If the ATM takes a few seconds more than the ideal operation time, nothing fatal happens.
When the process gets its turn, its registers and program counter register become mapped to the physical registers of the CPU.
Memory organization of Processes:
The memory occupied by the process is segregated into three regions, namely stack memory, data memory and code memory.
The ‘Stack’ memory holds all temporary data such as variables local to the process. Stack memory grows downwards.
Data memory holds all global data for the process. Data memory grows upwards.
Figure 5: Memory organization of a Process (stack memory at the top growing downwards, data memory growing upwards, code memory at the bottom).
On loading a process into main memory, a specific area of memory is allocated for the process.
The stack memory usually starts at the highest memory address of the memory area allocated for the process (depending on the OS kernel implementation).
The creation of a process to its termination is not a single-step operation.
The process traverses through a series of states during its transition from the newly created state to the terminated state.
The cycle through which a process changes its state from ‘newly created’ to ‘execution completed’ is known as the ‘Process Life Cycle’. The various states through which a process traverses during the Process Life Cycle indicate the current status of the process with respect to time and also provide information on what it is allowed to do next.
Ready State: The state where a process is incepted into the memory and awaiting processor time for execution is known as the ‘Ready State’. At this stage, the process is placed in the ‘Ready list’ queue maintained by the OS.
Blocked State: The state where the process is blocked due to various conditions, e.g. the process enters a wait state for an event to occur (such as waiting for user inputs from the keyboard) or waits for getting access to a shared resource.
Figure 6: Process state transition diagram (a Ready process is scheduled for execution and enters the Running state; a Running process may be interrupted back to Ready or move to the Blocked state).
Completed State: A state where the process completes its execution.
The transition of a process from one state to another is known as ‘state transition’.
When a process changes its state from Ready to Running, from Running to Blocked or Terminated, or from Blocked to Running, the CPU allocation for the process may also change.
Threads:
A thread is the primitive that can execute code.
A thread is a single sequential flow of control within a process.
A ‘thread’ is also known as a lightweight process.
A process can have many threads of execution.
Different threads, which are part of a process, share the same address space; meaning they share the data memory, code memory and heap memory area.
Figure 7: Memory organization of a process and its associated threads (each thread, Thread 1, Thread 2 and Thread 3, has its own stack and registers, while the code, data and heap memory are shared).
//Create child thread 1
CreateThread(NULL, 1000, (LPTHREAD_START_ROUTINE) ChildThread1, NULL, 0, &dwThreadID);
//Create child thread 2
CreateThread(NULL, 1000, (LPTHREAD_START_ROUTINE) ChildThread2, NULL, 0, &dwThreadID);
Figure 8: Process with multi-threads
Since the process is split into threads, when one thread enters a wait state the CPU can be utilized by other threads of the process that do not require the event the waiting thread is blocked on. This speeds up the execution of the process.
Efficient CPU utilization: the CPU is engaged all the time.
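For context, the CreateThread() calls above could sit in a small self-contained Win32 program such as the sketch below; the child thread bodies and the 1000-byte stack size are illustrative assumptions.

/* Illustrative Win32 multithreading sketch (thread bodies are assumptions). */
#include <windows.h>
#include <stdio.h>

DWORD WINAPI ChildThread1(LPVOID arg) { printf("Child thread 1 running\n"); return 0; }
DWORD WINAPI ChildThread2(LPVOID arg) { printf("Child thread 2 running\n"); return 0; }

int main(void)
{
    DWORD dwThreadID;
    HANDLE h1, h2;

    /* Create child thread 1 with a 1000-byte stack */
    h1 = CreateThread(NULL, 1000, (LPTHREAD_START_ROUTINE)ChildThread1, NULL, 0, &dwThreadID);
    /* Create child thread 2 */
    h2 = CreateThread(NULL, 1000, (LPTHREAD_START_ROUTINE)ChildThread2, NULL, 0, &dwThreadID);

    /* Wait for both child threads to finish before the process exits */
    WaitForSingleObject(h1, INFINITE);
    WaitForSingleObject(h2, INFINITE);
    CloseHandle(h1);
    CloseHandle(h2);
    return 0;
}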
Thread V/s Process:
1. Thread: A thread is a single unit of execution and is part of a process.
   Process: A process is a program in execution and contains one or more threads.
2. Thread: A thread does not have its own data memory and heap memory. It shares the data memory and heap memory with other threads of the same process.
   Process: A process has its own code memory, data memory and stack memory.
3. Thread: There can be multiple threads in a process. The first thread (main thread) calls the main function and occupies the start of the stack memory of the process.
   Process: Threads within a process share the code, data and heap memory; each thread holds a separate memory area for its stack (sharing the total stack memory of the process).
4. Thread: Context switching between threads is inexpensive and fast.
   Process: Context switching between processes is complex, involves a lot of OS overhead and is comparatively slower.
Advantages of Threads:
1. Better memory utilization: Multiple threads of the same process share the address space for data memory. This also reduces the complexity of inter-thread communication, since variables can be shared across the threads.
2. Efficient CPU utilization: The CPU is engaged all the time.
3. Speeds up the execution of the process: The process is split into different threads; when one thread enters a wait state, the CPU can be utilized by other threads of the process that do not require the event the waiting thread is blocked on.
Multiprocessing & Multitasking:
The ability to execute multiple processes simultaneously is referred to as multiprocessing.
Systems which are capable of performing multiprocessing are known as multiprocessor systems.
Figure 9: Context Switching (when execution switches from one process to another, the kernel saves the current context into the PCB of the outgoing process, performs the OS operations related to ‘context switching’, and reloads the context of the incoming process from its PCB; the time spent in context switching appears as a delay in the execution of the processes).
Multiprogramming: The ability of the Operating System to have multiple programs in memory, which are ready for execution, is referred to as multiprogramming.
Types of Multitasking:
Depending on how the task/process execution switching is implemented, multitasking is classified into:
• Co-operative Multitasking: Co-operative multitasking is the most primitive form of multitasking, in which a task/process gets a chance to execute only when the currently executing task/process voluntarily relinquishes the CPU. In this method, any task/process can hold the CPU for as much time as it wants. Since this type of implementation relies on the tasks being at each other’s mercy for getting CPU time, it is known as co-operative multitasking. If the currently executing task is non-cooperative, the other tasks may have to wait for a long time to get the CPU. A minimal sketch of this idea is given after this paragraph.
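The following is only an illustrative sketch of co-operative multitasking on a single CPU: each task runs until it voluntarily returns control, and a simple loop hands the CPU to the next task. The task functions and the loop structure are assumptions, not part of any particular OS.

/* Illustrative co-operative multitasking loop: each task runs until it
   voluntarily returns (relinquishes the CPU). Task bodies are assumptions. */
#include <stdio.h>

static void task_a(void) { printf("Task A did a slice of work\n"); }  /* returning = yielding */
static void task_b(void) { printf("Task B did a slice of work\n"); }
static void task_c(void) { printf("Task C did a slice of work\n"); }

int main(void)
{
    void (*tasks[])(void) = { task_a, task_b, task_c };
    const int n = sizeof(tasks) / sizeof(tasks[0]);

    for (int round = 0; round < 3; round++)    /* a few scheduling rounds */
        for (int i = 0; i < n; i++)
            tasks[i]();   /* a non-cooperative task that never returned would starve the rest */
    return 0;
}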
Task Scheduling:
In a multitasking system, there should be some mechanism in place to share the CPU among the different tasks and to decide which process/task is to be executed at a given point of time.
Task Scheduling - Queues:
To summarize, a good scheduling algorithm has high CPU utilization, minimum Turn Around Time (TAT), maximum throughput and least response time.
The various queues maintained by the OS in association with CPU scheduling are:
• Job Queue: The job queue contains all the processes in the system.
• Ready Queue: Contains all the processes which are ready for execution and waiting for the CPU to get their turn for execution. The Ready queue is empty when there is no process ready for running.
• Device Queue: Contains the set of processes which are waiting for an I/O device.
Task Scheduling – Task transition through various Queues:
Figure: A process moves from the Job Queue to the ‘Ready’ queue; the scheduler picks a process from the Ready Queue and runs it to completion; a process waiting for I/O is moved to the ‘Device Queue’ handled by the Device Manager, and an I/O-completed process is moved back to the ‘Ready’ queue.
Non-preemptive scheduling – First Come First Served (FCFS) Scheduling:
Drawbacks:
Favors monopoly of a process. A process which does not contain any I/O operation continues its execution until it finishes its task.
In general, FCFS favors CPU-bound processes; I/O-bound processes may have to wait until the completion of a CPU-bound process if the currently executing process is CPU bound. This leads to poor device utilization.
The average waiting time is not minimal for the FCFS scheduling algorithm.
EXAMPLE: Three processes with process IDs P1, P2, P3 and estimated completion times of 10, 5 and 7 milliseconds respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT) for each process, and the average waiting time and average Turn Around Time (assuming there is no I/O waiting for the processes).
Solution: The sequence of execution of the processes by the CPU is represented as
P1: 0-10 ms, P2: 10-15 ms, P3: 15-22 ms
Assuming the CPU is readily available at the time of arrival of P1, P1 starts executing without any waiting in the ‘Ready’ queue. Hence the waiting time for P1 is zero.
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P2 = 10 ms (P2 starts executing after completing P1)
Waiting Time for P3 = 15 ms (P3 starts executing after completing P1 and P2)
Average waiting time = (Waiting time for all processes) / No. of processes
= (0 + 10 + 15) / 3 = 25/3 = 8.33 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 15 ms (-Do-)
Turn Around Time (TAT) for P3 = 22 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1 + P2 + P3)) / 3
= (10 + 15 + 22) / 3 = 47/3
= 15.66 milliseconds
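The same calculation can be reproduced with a few lines of C; this is just an illustrative sketch of the arithmetic for non-preemptive FCFS (the array layout and output format are assumptions).

/* Illustrative FCFS waiting-time / turnaround-time calculation for the example above.
   Assumes all processes arrive together at t = 0, in the order given. */
#include <stdio.h>

int main(void)
{
    int burst[] = { 10, 5, 7 };                 /* P1, P2, P3 completion times (ms) */
    int n = 3, t = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int wait = t;                           /* waiting time = time already elapsed */
        int tat  = t + burst[i];                /* TAT = waiting time + execution time */
        printf("P%d: waiting = %d ms, TAT = %d ms\n", i + 1, wait, tat);
        total_wait += wait;
        total_tat  += tat;
        t += burst[i];                          /* CPU stays busy until this process ends */
    }
    printf("Average waiting time = %.2f ms\n", total_wait / n);   /* 8.33  */
    printf("Average TAT          = %.2f ms\n", total_tat / n);    /* 15.66 */
    return 0;
}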
Non-preemptive scheduling – Last Come First Served (LCFS) / Last In First Out (LIFO) Scheduling:
Allocates CPU time to the processes based on the order in which they entered the ‘Ready’ queue.
The last entered process is serviced first.
LCFS scheduling is also known as Last In First Out (LIFO) scheduling, where the process which is put last into the ‘Ready’ queue is serviced first.
Drawbacks:
Favors monopoly of a process. A process which does not contain any I/O operation continues its execution until it finishes its task.
The average waiting time is not minimal for the LCFS scheduling algorithm.
EXAMPLE: Three processes with process IDs P1, P2, P3 and estimated completion times of 10, 5 and 7 milliseconds respectively enter the ready queue together in the order P1, P2, P3 (assume only P1 is present in the ‘Ready’ queue when the scheduler picks it up, and P2, P3 enter the ‘Ready’ queue after that). Now a new process P4 with estimated completion time 6 ms enters the ‘Ready’ queue after 5 ms of scheduling P1. Calculate the waiting time and Turn Around Time (TAT) for each process, and the average waiting time and average Turn Around Time (assuming there is no I/O waiting for the processes). Assume all the processes contain only CPU operation and no I/O operations are involved.
Solution: The order of execution is
P1: 0-10 ms, P4: 10-16 ms, P3: 16-23 ms, P2: 23-28 ms
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P4 = 5 ms (P4 starts executing after completing P1. But P4 arrived after 5 ms of execution of P1. Hence its waiting time = Execution start time – Arrival time = 10 - 5 = 5)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all processes) / No. of processes
= (0 + 5 + 16 + 23) / 4 = 44/4
= 11 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 11 ms (Time spent in Ready Queue + Execution Time = (Execution Start Time – Arrival Time) + Estimated Execution Time = (10 - 5) + 6 = 5 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1 + P4 + P3 + P2)) / 4
= (10 + 11 + 23 + 28) / 4 = 72/4
= 18 milliseconds
Non-preemptive scheduling – Shortest Job First (SJF) Scheduling:
Allocates CPU time to the processes based on the execution completion time for tasks.
The average waiting time for a given set of processes is minimal in SJF scheduling.
Optimal compared to other non-preemptive scheduling algorithms like FCFS.
Drawbacks:
A process whose estimated execution completion time is high may not get a chance to execute if more and more processes with lower estimated execution times enter the ‘Ready’ queue before the process with the longest estimated execution time starts its execution.
It is difficult to know in advance the next shortest process in the ‘Ready’ queue for scheduling, since new processes with different estimated execution times keep entering the ‘Ready’ queue at any point of time (a sketch of the SJF selection step is given after this list).
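As a small illustration of the SJF selection step, the sketch below picks the ready process with the smallest estimated completion time. The arrays and the helper function are assumptions for illustration only.

/* Illustrative SJF selection: pick the ready process with the smallest
   estimated completion time. Data layout is an assumption. */
#include <stdio.h>

/* Returns the index of the shortest job among the 'ready' processes, or -1. */
static int pick_shortest_job(const int burst[], const int ready[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (ready[i] && (best < 0 || burst[i] < burst[best]))
            best = i;
    return best;
}

int main(void)
{
    int burst[] = { 10, 5, 7 };   /* estimated completion times of P1, P2, P3 (ms) */
    int ready[] = { 1, 1, 1 };    /* all three are in the 'Ready' queue            */

    int next = pick_shortest_job(burst, ready, 3);
    printf("SJF schedules P%d next (%d ms)\n", next + 1, burst[next]);   /* P2 */
    return 0;
}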
Non-preemptive scheduling – Priority-based Scheduling:
A priority, which may be unique or shared, is associated with each task.
Windows CE supports 256 levels of priority (priority numbers 0 to 255, with 0 being the highest priority).
The priority is assigned to the task on creation. It can also be changed dynamically (if the Operating System supports this feature).
The non-preemptive priority-based scheduler sorts the ‘Ready’ queue based on priority and picks the process with the highest priority for execution.
EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated completion times of 10, 5 and 7 milliseconds and priorities 0, 3, 2 (0 being the highest priority and 3 the lowest) respectively enter the ready queue together. Calculate the waiting time and Turn Around Time (TAT) for each process, and the average waiting time and average Turn Around Time (assuming there is no I/O waiting for the processes), using the priority-based scheduling algorithm.
Solution: The scheduler sorts the ‘Ready’ queue based on priority and schedules the process with the highest priority (P1 with priority number 0) first, then the next highest-priority process (P3 with priority number 2), and so on. The order in which the processes are scheduled for execution is
P1: 0-10 ms, P3: 10-17 ms, P2: 17-22 ms
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P3 = 10 ms (P3 starts executing after completing P1)
Waiting Time for P2 = 17 ms (P2 starts executing after completing P1 and P3)
Average waiting time = (Waiting time for all processes) / No. of processes
= (0 + 10 + 17) / 3 = 27/3
= 9 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P3 = 17 ms (-Do-)
Turn Around Time (TAT) for P2 = 22 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1 + P3 + P2)) / 3
= (10 + 17 + 22) / 3 = 49/3
= 16.33 milliseconds
Preemptive Scheduling:
Every task in the ‘Ready’ queue gets a chance to execute. When and how often each process gets a chance to execute (gets the CPU time) depends on the type of preemptive scheduling algorithm used for scheduling the processes.
When to pre-empt a task, and which task is to be picked up from the ‘Ready’ queue for execution after preempting the current task, is purely dependent on the scheduling algorithm.
Preemptive scheduling – Shortest Remaining Time (SRT) / preemptive SJF Scheduling:
The non-preemptive SJF scheduling algorithm sorts the ‘Ready’ queue only after the current process completes execution or enters a wait state, whereas the preemptive SJF scheduling algorithm sorts the ‘Ready’ queue when a new process enters it and checks whether the execution time of the new process is shorter than the remaining estimated execution time of the currently executing process.
If the execution time of the new process is less, the currently executing process is preempted and the new process is scheduled for execution.
It always compares the execution completion time (i.e. the remaining execution time) of a new process entering the ‘Ready’ queue with the remaining time for completion of the currently executing process, and schedules the process with the shortest remaining time for execution.
EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated completion times of 10, 5 and 7 milliseconds respectively enter the ready queue together. A new process P4 with estimated completion time 2 ms enters the ‘Ready’ queue after 2 ms. Assume all the processes contain only CPU operation and no I/O operations are involved.
Solution: At the beginning, only three processes (P1, P2 and P3) are available in the ‘Ready’ queue, and the SRT scheduler picks the process with the shortest remaining time for execution completion (in this example P2, with remaining time 5 ms) for scheduling. Process P4, with estimated execution completion time 2 ms, enters the ‘Ready’ queue 2 ms after the start of execution of P2. The processes are re-scheduled for execution in the following order:
P2: 0-2 ms, P4: 2-4 ms, P2: 4-7 ms, P3: 7-14 ms, P1: 14-24 ms
The waiting times for all the processes are:
Waiting Time for P2 = 0 + (4 - 2) = 2 ms (P2 starts executing first, is preempted by P4, and waits 2 ms to get the CPU back)
Waiting Time for P4 = 0 ms (P4 starts executing immediately on entering the ‘Ready’ queue, preempting P2)
Waiting Time for P3 = 7 ms (P3 starts executing after P2 and P4 complete)
Waiting Time for P1 = 14 ms (P1 starts executing after P2, P4 and P3 complete)
Average waiting time = (Waiting time for all processes) / No. of processes
= (2 + 0 + 7 + 14) / 4 = 23/4
= 5.75 milliseconds
Preemptive scheduling – Round Robin (RR) Scheduling:
Figure 11: Round Robin Scheduling (execution switches between the processes in the ‘Ready’ queue at the end of each time slice).
When the pre-defined time slice elapses, or the process completes before the pre-defined time slice, the next process in the ‘Ready’ queue is selected for execution.
This is repeated for all the processes in the ‘Ready’ queue.
Once each process in the ‘Ready’ queue has been executed for the pre-defined time period, the scheduler comes back and picks the first process in the ‘Ready’ queue again for execution.
Round Robin scheduling is similar to FCFS scheduling; the only difference is that a time-slice-based preemption is added to switch execution between the processes in the ‘Ready’ queue.
EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated completion times of 6, 4 and 2 milliseconds respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT) for each process, and the average waiting time and average Turn Around Time (assuming there is no I/O waiting for the processes), with the RR algorithm and a time slice of 2 ms.
Solution: The scheduler sorts the ‘Ready’ queue based on the FCFS policy, picks the first process P1 from the ‘Ready’ queue and executes it for the time slice of 2 ms. When the time slice expires, P1 is preempted and P2 is scheduled for execution. The time slice expires after 2 ms of execution of P2. Now P2 is preempted and P3 is picked up for execution. P3 completes its execution within the time slice, and the scheduler picks P1 again for execution for the next time slice. This procedure is repeated till all the processes are serviced. The order in which the processes are scheduled for execution is
P1: 0-2 ms, P2: 2-4 ms, P3: 4-6 ms, P1: 6-8 ms, P2: 8-10 ms, P1: 10-12 ms
The waiting times for all the processes are:
Waiting Time for P1 = 0 + (6 - 2) + (10 - 8) = 0 + 4 + 2 = 6 ms (P1 starts executing first, waits two time slices to get execution back, and again one time slice to get CPU time)
Waiting Time for P2 = (2 - 0) + (8 - 4) = 2 + 4 = 6 ms (P2 starts executing after P1 executes for one time slice, and waits two time slices to get the CPU time)
Waiting Time for P3 = (4 - 0) = 4 ms (P3 starts executing after completing the first time slices of P1 and P2 and completes its execution in a single time slice)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1 + P2 + P3)) / 3
= (6 + 6 + 4) / 3 = 16/3
= 5.33 milliseconds
Turn Around Time (TAT) for P1 = 12 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 10 ms (-Do-)
Turn Around Time (TAT) for P3 = 6 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1 + P2 + P3)) / 3
= (12 + 10 + 6) / 3 = 28/3
= 9.33 milliseconds
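The Round Robin timeline above can also be generated programmatically; the following is an illustrative simulation of RR with a 2 ms time slice (array layout and output format are assumptions).

/* Illustrative Round Robin simulation for the example above (time slice = 2 ms).
   All processes arrive at t = 0 in the order P1, P2, P3. */
#include <stdio.h>

int main(void)
{
    int remaining[] = { 6, 4, 2 };          /* remaining burst of P1, P2, P3 (ms) */
    int n = 3, slice = 2, t = 0, left = 3;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;                    /* already finished, skip           */
            int run = remaining[i] < slice ? remaining[i] : slice;
            printf("t=%2d ms: P%d runs for %d ms\n", t, i + 1, run);
            t += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {
                printf("t=%2d ms: P%d completes (TAT = %d ms)\n", t, i + 1, t);
                left--;                      /* TAT = completion time, arrival is 0 */
            }
        }
    }
    return 0;
}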
Preemptive scheduling – Priority-based Scheduling:
Same as the non-preemptive priority-based scheduling, except for the switching of execution between tasks.
EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated completion times of 10, 5 and 7 milliseconds and priorities 1, 3, 2 (0 being the highest priority and 3 the lowest) respectively enter the ready queue together. A new process P4 with estimated completion time 6 ms and priority 0 enters the ‘Ready’ queue after 5 ms of the start of execution of P1. Assume all the processes contain only CPU operation and no I/O operations are involved.
Solution: At the beginning, only three processes (P1, P2 and P3) are available in the ‘Ready’ queue, and the scheduler picks the process with the highest priority (in this example P1, with priority 1) for scheduling. Process P4, with estimated execution completion time 6 ms and priority 0, enters the ‘Ready’ queue after 5 ms of the start of execution of P1. The processes are re-scheduled for execution in the following order:
P1: 0-5 ms, P4: 5-11 ms, P1: 11-16 ms, P3: 16-23 ms, P2: 23-28 ms
The waiting times for all the processes are:
Waiting Time for P1 = 0 + (11 - 5) = 0 + 6 = 6 ms (P1 starts executing first, is preempted by P4 after 5 ms, and gets the CPU again after the completion of P4)
Waiting Time for P4 = 0 ms (P4 starts executing immediately on entering the ‘Ready’ queue, preempting P1)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1 + P4 + P3 + P2)) / 4
= (6 + 0 + 16 + 23) / 4 = 45/4
= 11.25 milliseconds
Turn Around Time (TAT) for P1 = 16 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 6 ms (Time spent in Ready Queue + Execution Time = (Execution Start Time – Arrival Time) + Estimated Execution Time = (5 - 5) + 6 = 0 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1 + P4 + P3 + P2)) / 4
= (16 + 6 + 23 + 28) / 4 = 73/4
= 18.25 milliseconds
How to choose an RTOS:
The decision on an RTOS for an embedded design is very critical.
A lot of factors need to be analyzed carefully before making a decision on the selection of an RTOS.
These factors can be either:
1. Functional requirements
2. Non-functional requirements
1. Functional Requirements:
1. Processor support:
It is not necessary that all RTOSs support all kinds of processor architectures.
It is essential to ensure processor support by the RTOS.
2. Memory requirements:
The RTOS requires ROM memory for holding the OS files, and it is normally stored in a non-volatile memory like FLASH.
The OS also requires working memory (RAM) for loading the OS services.
Since embedded systems are memory constrained, it is essential to evaluate the minimal RAM and ROM requirements for the OS under consideration.
3. Real-time capabilities:
It is not mandatory that the OS for all embedded systems needs to be real-time, and not all embedded OSs are ‘real-time’ in behavior.
The task/process scheduling policies play an important role in the real-time behavior of an OS.
4. Kernel and interrupt latency:
The kernel of the OS may disable interrupts while executing certain services, and this may lead to interrupt latency.
For an embedded system whose response requirements are high, this latency should be minimal.
5. Inter-Process Communication (IPC) and task synchronization:
The implementation of IPC and synchronization is OS kernel dependent.
6. Modularization support:
Most OSs provide a bundle of features; modularization support determines whether only the modules required by the application can be selected and included.
7. Support for networking and communication:
The OS kernel may provide stack implementations and driver support for a bunch of communication interfaces and networking.
Ensure that the OS under consideration provides support for all the interfaces required by the embedded product.
8. Development language support:
Certain OSs include the runtime libraries required for running applications written in languages like JAVA and C++.
The OS may include these components as built-in components; if not, check the availability of the same from a third party.
2. Non-Functional Requirements:
1. Custom developed or off the shelf
2. Cost
3. Development and debugging tools availability
4. Ease of use
5. After sales:
For a commercial embedded RTOS, after-sales support in the form of e-mail, on-call services etc. for bug fixes, critical patch updates and support for production issues should be analyzed thoroughly.
Device Drivers:
• A device driver is a piece of software that acts as a bridge between the operating system and the hardware.
• The user applications talk to the OS kernel for all necessary information exchange, including communication with the hardware peripherals.
Figure: Device driver architecture (user-level applications/tasks App 1, App 2, App 3 sit on top of the Operating System services, which sit on top of the device drivers, which in turn interact with the hardware).
• The architecture of the OS kernel will not allow direct device access from the user application.
• All device-related access should flow through the OS kernel, and the OS kernel routes it to the concerned hardware peripheral.
• The OS provides interfaces in the form of Application Programming Interfaces (APIs) for accessing the hardware.
• The device driver abstracts the hardware from user applications.
• Device drivers are responsible for initiating and managing the communication with the hardware peripherals.
• Drivers which come as part of the Operating System image are known as ‘built-in drivers’ or ‘onboard drivers’, e.g. a NAND FLASH driver.
• Drivers which need to be installed on the fly for communicating with add-on devices are known as ‘installable drivers’.
• For installable drivers, the driver is loaded on a need basis when the device is present, and it is unloaded when the device is removed/detached.
• The underlying implementation of a device driver is OS kernel dependent.
• How the driver communicates with the kernel is dependent on the OS structure and implementation.
• Device drivers can run in either user space or kernel space.
• User-mode drivers are safer than kernel-mode drivers.
• If an error or exception occurs in a user-mode driver, it won’t affect the services of the kernel.
• The way a device driver is written and how the interrupts are handled in it are operating system and target hardware specific.
• The device driver implements the following:
• Device (hardware) initialization and interrupt configuration
• Interrupt handling and processing
• Client interfacing (interfacing with user applications)
• The basic interrupt configuration involves the following:
• Set the interrupt type (edge triggered (rising/falling) or level triggered (low or high)), enable the interrupts and set the interrupt priorities.
• The processor identifies an interrupt through an IRQ.
• IRQs are generated by the interrupt controller.
• Register an Interrupt Service Routine (ISR) with an Interrupt Request (IRQ).
• When an interrupt occurs, depending on its priority, it is serviced and the corresponding ISR is invoked.
• The processing part of an interrupt is handled in an ISR.
• The whole interrupt processing can be done by the ISR itself or by invoking an Interrupt Service Thread (IST).
• The IST performs interrupt processing on behalf of the ISR.
• It is always advisable to use an IST for interrupt processing, to keep the ISR compact and short. A sketch of this split is given below.
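A minimal sketch of the ISR/IST split described above: the ISR only acknowledges the interrupt and signals a thread, which then does the heavy processing. The register name, the semaphore calls and the thread function are assumptions for illustration, not a specific OS's driver interface.

/* Illustrative ISR/IST split for a device driver.
   All names (DEVICE_IRQ_STATUS, sem_*, irq_sem) are placeholder assumptions. */
#include <stdint.h>

extern volatile uint32_t DEVICE_IRQ_STATUS;   /* hypothetical device status register */
extern void  sem_post_from_isr(void *sem);    /* hypothetical RTOS semaphore calls   */
extern void  sem_wait(void *sem);
extern void *irq_sem;

static volatile uint32_t last_status;

/* ISR: kept compact and short - read/clear the hardware status and signal the IST. */
void device_isr(void)
{
    last_status = DEVICE_IRQ_STATUS;   /* capture the cause of the interrupt     */
    DEVICE_IRQ_STATUS = 0;             /* acknowledge/clear the interrupt        */
    sem_post_from_isr(irq_sem);        /* wake the Interrupt Service Thread      */
}

/* IST: runs as a normal (high-priority) thread and does the actual processing. */
void device_ist(void *arg)
{
    (void)arg;
    for (;;) {
        sem_wait(irq_sem);             /* block until the ISR signals an event   */
        /* process 'last_status': move data, notify waiting client tasks, etc.   */
    }
}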