
Introduction to Real-Time Systems

A real-time operating system (RTOS) is an operating system intended to serve real-time applications that process data as it comes in, mostly without buffering delay. In an RTOS, processing time requirements are measured in tenths-of-second (or shorter) increments of time. It is a time-bound system with fixed time constraints: processing must be completed within the specified constraints, or the system will fail.

Usage of RTOS

Here are important reasons for using RTOS:

 It offers priority-based scheduling, which allows you to separate analytical (time-critical) processing from non-critical processing.
 The real-time OS provides API functions that allow cleaner and smaller application code.
 Abstracting timing dependencies and the task-based design results in fewer interdependencies between modules.
 RTOS offers modular task-based development, which allows modular task-based testing.
 The task-based API encourages modular development, as a task will typically have a clearly defined role. It allows designers/teams to work independently on their parts of the project.
 An RTOS is event-driven, so no processing time is wasted on events that have not occurred.

Components of RTOS

Here are the important components of an RTOS:

Fig:1.1: Components of RTOS

The Scheduler: This component of the RTOS determines the order in which tasks are executed, generally based on priority.

Symmetric Multiprocessing (SMP): The ability of the RTOS to handle a number of different tasks on multiple processors at once, so that parallel processing can be done.

Function Library: An important element of the RTOS that acts as an interface connecting the kernel and the application code. The application sends requests to the kernel via the function library so that the application can obtain the desired results.

Memory Management: This element allocates memory to every program and is one of the most important elements of the RTOS.

Fast dispatch latency: The interval between the OS recognizing that a task has terminated and the moment the thread at the head of the ready queue actually starts executing. An RTOS keeps this interval short.

User-defined data objects and classes: RTOS systems make use of programming languages like C or C++, whose data objects should be organized according to the application's operation.

Types of RTOS

Three types of RTOS systems are:

Hard Real Time:

In a hard RTOS, deadlines are handled strictly: a given task must start executing at its specified scheduled time and must be completed within the assigned time duration.

Example: Medical critical-care systems, aircraft systems, etc.

Firm Real Time:

This type of RTOS also needs to follow deadlines. However, missing a deadline may not have a big impact, but it can cause undesired effects, such as a large reduction in the quality of a product.

Example: Various types of multimedia applications.

Soft Real Time:

A soft real-time RTOS accepts some delays from the operating system. In this type of RTOS, a deadline is assigned to a specific job, but a small delay is acceptable. So deadlines are handled softly by this type of RTOS.

Example: Online transaction systems and livestock price quotation systems.

Factors for selecting an RTOS

Here are essential factors that you need to consider when selecting an RTOS:

Performance: Performance is the most important factor to consider while selecting an RTOS.

Middleware: If there is no middleware support in the real-time operating system, the integration of processes becomes time-consuming.

Error-free: RTOS systems are designed to be error-free, so there is little chance of getting an error while performing a task.

Embedded system usage: RTOS programs are small in size, so RTOS is widely used for embedded systems.

Maximum consumption: Maximum utilization of the system and its resources can be achieved with the help of an RTOS.

Task shifting: The time taken to shift between tasks is very small.

Unique features: A good RTOS should be capable and should have extra features, such as how it operates to execute a command, efficient protection of system memory, etc.

24/7 performance: An RTOS is ideal for applications that must run 24/7.

Here are important differences between a GPOS and an RTOS:

Table:1.1: Difference between GPOS and RTOS

General-Purpose Operating System (GPOS) | Real-Time Operating System (RTOS)
It is used for desktop PCs and laptops. | It is applied only to embedded applications.
Process-based scheduling. | Time-based scheduling is used, such as round-robin scheduling.
Interrupt latency is not considered as important as in an RTOS. | Interrupt latency is minimal, measured in a few microseconds.
No priority inversion mechanism is present in the system. | A priority inversion mechanism is present and cannot be modified by the system.
The kernel's operation may or may not be preemptible. | The kernel's operation is preemptible.
Priority inversion remains unnoticed, so there are no predictability guarantees. | Priority inversion is handled, so timing behavior is predictable.

Applications of Real Time Operating System

Real-time systems are used in:

 Airline reservation systems.
 Air traffic control systems.
 Systems that provide immediate updating.
 Any system that provides up-to-the-minute information on stock prices.
 Defense application systems like RADAR.
 Networked multimedia systems
 Command and control systems
 Internet telephony
 Anti-lock brake systems
 Heart pacemakers

Disadvantages of RTOS

Here are the drawbacks/cons of using an RTOS:

 An RTOS can run only a limited number of tasks together; it concentrates on a few critical applications so that errors can be avoided.
 Because an RTOS concentrates on a few tasks, it is really hard for these systems to do multitasking.
 Specific drivers are required for the RTOS so that it can offer fast response times to interrupt signals, which helps maintain its speed.
 Plenty of resources are used by an RTOS, which makes the system expensive.
 Tasks with low priority may need to wait a long time, as the RTOS maintains the accuracy of the programs that are under execution.
 Minimal switching of tasks is done in real-time operating systems.
 It uses complex algorithms, which are difficult to understand.
 An RTOS uses a lot of resources, which is sometimes not suitable for the system.

RTOS Architecture – Kernel

RTOS Architecture
For simpler applications, an RTOS is usually just a kernel, but as complexity increases, various modules such as networking protocol stacks, debugging facilities, and device I/O are included in addition to the kernel. The general architecture of an RTOS is shown in fig 1.2 below.

Fig:1.2: RTOS Architecture

Kernel

The RTOS kernel acts as an abstraction layer between the hardware and the applications. There are three broad categories of kernels.

· Monolithic kernel

Monolithic kernels are part of Unix-like operating systems such as Linux and FreeBSD. A monolithic kernel is one single program that contains all of the code necessary to perform every kernel-related task. It runs all basic system services (i.e. process and memory management, interrupt handling, I/O communication, file system, etc.) and provides powerful abstractions of the underlying hardware. The amount of context switching and messaging involved is greatly reduced, which makes it run faster than a microkernel.
· Microkernel

A microkernel runs only basic process communication (messaging) and I/O control. It normally provides only minimal services such as memory protection, inter-process communication, and process management. Other functions, such as running hardware processes, are not handled directly by microkernels. Thus, microkernels provide a smaller set of simple hardware abstractions. A microkernel is more stable than a monolithic kernel, as the kernel is unaffected even if a server (e.g., the file system) fails. Microkernels are part of operating systems such as AIX, BeOS, Mach, Mac OS X, MINIX, and QNX.

· Hybrid Kernel

Hybrid kernels are extensions of microkernels with some properties of monolithic kernels. They are similar to microkernels, except that they include additional code in kernel space so that such code can run more swiftly than it would in user space. Hybrid kernels are part of operating systems such as Microsoft Windows NT, 2000 and XP, and DragonFly BSD.
· Exokernel

Exokernels provide efficient control over hardware. An exokernel runs only services that protect the resources (i.e. tracking ownership, guarding usage, revoking access to resources, etc.) by providing a low-level interface for library operating systems and leaving resource management to the application.

Six types of common services are shown in the figure below and explained in subsequent sections.

Fig:1.3: Representation of Common Services Offered By a RTOS System

Architecture – Task Management

Task Management
In an RTOS, the application is decomposed into small, schedulable, sequential program units known as tasks. A task is the basic unit of execution and is governed by three time-critical properties: release time, deadline, and execution time. Release time refers to the point in time from which the task can be executed. Deadline is the point in time by which the task must complete. Execution time denotes the time the task takes to execute.
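These timing properties map naturally onto a small C structure. The following is a minimal illustrative sketch; the type and field names are assumptions for exposition, not any particular RTOS's API:

    #include <stdint.h>

    /* Illustrative task timing descriptor (hypothetical names). */
    typedef struct {
        uint32_t release_time;  /* earliest time the task may start (ticks) */
        uint32_t deadline;      /* time by which the task must complete (ticks) */
        uint32_t exec_time;     /* worst-case execution time, WCET (ticks) */
        uint32_t period;        /* period, for periodic tasks (ticks) */
        void (*entry)(void);    /* the task routine itself */
    } task_t;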

Fig:1.4: Use of RTOS for Time Management Application

Each task may exist in one of the following states:

 Dormant: Task does not require computer time.
 Ready: Task is ready to run and is waiting for processor time.
 Active: Task is running.
 Suspended: Task is put on hold temporarily.
 Pending: Task is waiting for a resource.

Fig:1.5: Representation of Different Time Management Tasks Done by an RTOS

During the execution of an application program, individual tasks are continuously changing from one state to another. However, only one task is in the running state (i.e. has CPU control) at any point of the execution. In the process of transferring CPU control from one task to another, the context of the to-be-suspended task is saved while the context of the to-be-executed task is retrieved; this process is referred to as context switching. A task object is defined by the following set of components:

Task Control Block (TCB): A task uses its TCB to remember its context. TCBs are data structures residing in RAM, accessible only by the RTOS.

Task Stack: Task stacks reside in RAM and are accessible via a stack pointer.

Task Routine: The program code of the task, residing in ROM.
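A TCB can be sketched in C as follows. The exact fields vary between kernels, so the layout below is an illustrative assumption rather than any real RTOS's definition:

    #include <stdint.h>

    /* Illustrative task control block (fields are assumptions, not a real RTOS API). */
    typedef enum { DORMANT, READY, ACTIVE, SUSPENDED, PENDING } task_state_t;

    typedef struct tcb {
        uint32_t    *stack_ptr;  /* saved stack pointer; register context lives on the task stack */
        task_state_t state;      /* current position in the task state machine */
        uint8_t      priority;   /* scheduling priority */
        struct tcb  *next;       /* link for the ready queue */
    } tcb_t;

During a context switch the kernel saves the CPU registers of the outgoing task on its stack, stores the stack pointer in its TCB, then loads the stack pointer from the incoming task's TCB and restores its registers.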

Scheduler: The scheduler keeps a record of the state of each task, selects from among the tasks that are ready to execute, and allocates the CPU to one of them. Various scheduling algorithms are used in RTOS.

Polled Loop: Sequentially determines whether a specific task requires service.

Fig:1.6: Process Flow of a Scheduler

Polled System with Interrupts: In addition to polling, it takes care of critical tasks via interrupts.

Fig:1.7: A Figure Illustrating Polled Systems with Interrupts

Round Robin: Sequences from task to task, each task getting a slice of time.

Fig:1.8: Round Robin Sequences From Task to Task

Hybrid System: Responds to time-critical interrupts, with a round-robin system working in the background.

Interrupt Driven: The system continuously waits for interrupts.

Non-preemptive scheduling or Cooperative Multitasking: The highest-priority task executes for some time, then relinquishes control and re-enters the ready state.

Fig:1.9: Non-Preemptive Scheduling or Cooperative Multitasking

Preemptive scheduling or Priority Multitasking: The current task is immediately suspended, and control is given to the task of the highest priority at all times.

Fig:1.10: Preemptive Scheduling or Priority Multitasking

Dispatcher: The dispatcher gives control of the CPU to the task selected by the scheduler, performing the context switch and changing the flow of execution.
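As a rough sketch of how a preemptive priority scheduler selects the next task, the helper below scans a ready list built from the hypothetical tcb_t shown earlier and returns the highest-priority ready task; a real kernel would typically use priority-sorted queues or bitmaps instead of a linear scan:

    /* Pick the highest-priority READY task from a linked ready list
       (illustrative; tcb_t as sketched in the Task Control Block section above). */
    tcb_t *schedule(tcb_t *ready_list) {
        tcb_t *best = 0;
        for (tcb_t *t = ready_list; t != 0; t = t->next) {
            if (t->state == READY && (best == 0 || t->priority > best->priority))
                best = t;  /* convention assumed here: higher number = higher priority */
        }
        return best;  /* the dispatcher then context-switches to this task */
    }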

Synchronization and communication:

Task synchronization and inter-task communication serve to pass information amongst tasks.

Task Synchronization

Synchronization is essential for tasks to share mutually exclusive resources (devices, buffers, etc.) and/or to coordinate multiple concurrent tasks (e.g. task A needs a result from task B, so task A can only run once task B has produced it).
Task synchronization is achieved using two types of mechanisms:

Event Objects

Event objects are used when task synchronization is required without resource sharing. They allow one or more tasks to keep waiting for a specified event to occur. An event object can exist in either a triggered or non-triggered state; the triggered state indicates that the waiting task may resume.

Semaphores

A semaphore has an associated resource count and a wait queue. The resource count indicates the availability of the resource, and the wait queue manages the tasks waiting for resources from the semaphore. A semaphore functions like a key that defines whether a task has access to the resource: a task gets access to the resource when it acquires the semaphore.

There are three types of semaphore:

 Binary semaphores
 Counting semaphores
 Mutual exclusion (mutex) semaphores

Semaphore (mutex) functionality is represented pictorially in the following figure.

Fig:1.11: Architecture of Semaphore Functionality
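As a hedged illustration using the standard POSIX semaphore API (many RTOSes provide a similar interface), a binary semaphore guarding a shared counter looks like this:

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t lock;          /* binary semaphore guarding the resource */
    static int shared_counter;  /* the shared resource */

    static void *worker(void *arg) {
        (void)arg;
        sem_wait(&lock);        /* acquire: blocks if another task holds it */
        shared_counter++;       /* critical section */
        sem_post(&lock);        /* release: wakes one waiting task, if any */
        return 0;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&lock, 0, 1);  /* initial count 1 = binary semaphore */
        pthread_create(&t1, 0, worker, 0);
        pthread_create(&t2, 0, worker, 0);
        pthread_join(t1, 0);
        pthread_join(t2, 0);
        printf("counter = %d\n", shared_counter);
        sem_destroy(&lock);
        return 0;
    }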

Inter task communication

Inter-task communication involves sharing of data among tasks through shared memory, transmission of data, etc. Inter-task communication is performed using the following mechanisms.

Message queues

A message queue is an object used for inter-task communication, through which tasks send or receive messages placed in shared memory. The queue may follow 1) First In First Out (FIFO), 2) Last In First Out (LIFO), or 3) priority (PRI) sequence. Usually, a message queue comprises an associated queue control block (QCB), a name, a unique ID, memory buffers, a queue length, a maximum message length, and one or more task waiting lists. A message queue with a length of 1 is commonly known as a mailbox.

Fig:1.12: Flow of a Message Queue in a Mailbox
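POSIX message queues expose this model directly. The sketch below is illustrative; the queue name "/demo_q" and the size limits are arbitrary example values (on Linux, link with -lrt):

    #include <mqueue.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 8, .mq_msgsize = 64 };
        /* Create (or open) a named queue; "/demo_q" is an example name. */
        mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char *msg = "sensor reading";
        mq_send(q, msg, strlen(msg) + 1, 1);     /* priority 1 */

        char buf[64];                            /* must be >= mq_msgsize */
        unsigned prio;
        mq_receive(q, buf, sizeof buf, &prio);   /* highest-priority message first */
        printf("got \"%s\" (prio %u)\n", buf, prio);

        mq_close(q);
        mq_unlink("/demo_q");
        return 0;
    }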


Pipes
A pipe is an object that provides a simple communication channel used for unstructured data exchange among tasks. A pipe does not store multiple discrete messages but a stream of bytes. Also, data flow from a pipe cannot be prioritized.

Remote procedure call (RPC)

RPC permits distributed computing, where a task can invoke the execution of another task on a remote computer.

Memory Management
Two types of memory management are provided in an RTOS: stack and heap. Stack management is used during context switching for TCBs. Memory other than that used for program code, program data, and the system stack is called heap memory, and it is used for dynamic allocation of data space for tasks. Management of this memory is called heap management.

Timer Management

Tasks need to be performed after scheduled durations. To keep track of these delays, timers (relative and absolute) are provided by the RTOS.

Interrupt and event handling

An RTOS provides various functions for interrupt and event handling, viz., defining interrupt handlers, creation and deletion of ISRs, referencing the state of an ISR, enabling and disabling of interrupts, etc. It also restricts interrupts from occurring when modifying a data structure, minimizes interrupt latencies caused by disabling interrupts while the RTOS performs critical operations, and minimizes interrupt response times.

Device I/O Management

An RTOS generally provides a large number of APIs to support diverse hardware device drivers.

Features of RTOS

Here are important features of an RTOS:

 Occupies very little memory
 Consumes fewer resources
 Response times are highly predictable
 Operates in unpredictable environments
 The kernel saves the state of the interrupted task and then determines which task it should run next.
 The kernel restores the state of that task and passes control of the CPU to it.

POSIX (Portable Operating System Interface)

POSIX (Portable Operating System Interface) is a set of standard operating system interfaces based on the Unix operating system. The need for standardization arose because enterprises using computers wanted to be able to develop programs that could be moved among different manufacturers' computer systems without having to be recoded. Unix was selected as the basis for a standard system interface partly because it was "manufacturer-neutral." However, several major versions of Unix existed, so there was a need to develop a common-denominator system.

Informally, each standard in the POSIX set is identified by a decimal following "POSIX." Thus, POSIX.1 is the standard for an application program interface in the C language. POSIX.2 is the standard shell and utility interface (that is to say, the user's command interface with the operating system). These are the two main interfaces, but additional interfaces, such as POSIX.4 for thread management, have been developed or are being developed. The POSIX interfaces were developed under the auspices of the Institute of Electrical and Electronics Engineers (IEEE).
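For flavor, a minimal program using POSIX threads (the threading interface standardized within the POSIX family) simply spawns a thread and waits for it:

    #include <pthread.h>
    #include <stdio.h>

    static void *hello(void *arg) {
        printf("hello from thread %s\n", (const char *)arg);
        return 0;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, 0, hello, "T1");  /* spawn one worker thread */
        pthread_join(tid, 0);                  /* wait for it to finish */
        return 0;
    }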

POSIX.1 and POSIX.2 interfaces are included in a somewhat larger interface known as the X/Open Programming Guide (also known as the "Single UNIX Specification" and "UNIX 03"). The Open Group, an industry standards group, owns the UNIX trademark and can thus "brand" operating systems that conform to the interface as "UNIX" systems. IBM's OS/390 is an example of an operating system that includes a branded UNIX interface.

Terms used in RTOS

Here are essential terms used in RTOS:

 Task – A set of related jobs that jointly provide some system functionality.
 Job – A job is a small piece of work that can be assigned to a processor and that may or may not require resources.
 Release time of a job – The time at which the job becomes ready for execution.
 Execution time of a job – The time taken by the job to finish its execution.
 Deadline of a job – The time by which a job should finish its execution.
 Processors – Also known as active resources; they are important for the execution of a job.
 Relative deadline – The maximum allowable response time of a job is called its relative deadline.
 Response time of a job – The length of time from the release time of the job to the instant it finishes.
 Absolute deadline – The relative deadline plus the release time of the job.

Interrupt latency

The time-related performance measurements are probably of most concern to developers using an RTOS. A key characteristic of a real-time system is its timely response to external events, and an embedded system is typically notified of an event by means of an interrupt, so interrupt latency is critical. Two definitions are in use:

 System: the total delay between the interrupt signal being asserted and the start of the interrupt service routine execution.
 OS: the time between the start of the CPU interrupt sequence and the initiation of the ISR. This is really the operating-system overhead, but many people refer to it as the latency, which is why some vendors claim zero interrupt latency.

Fig:2.1: Interrupt latency


Measurement

Interrupt latency is the sum of two distinct times:

T_IL = T_H + T_OS

where:

T_H is the hardware-dependent time, which depends on the interrupt controller on the board as well as on the type of the interrupt;

T_OS is the OS-induced overhead.

Ideally, quoted figures should include the best- and worst-case scenarios; the worst case occurs when the kernel disables interrupts. Measuring a time interval like interrupt latency with any accuracy requires a suitable instrument, and the best tool to use is an oscilloscope. One approach is to use one pin on a GPIO interface to generate the interrupt and monitor that pin on the 'scope. At the start of the interrupt service routine, another pin, which is also being monitored, is toggled. The interval between the two signals may then be read easily from the instrument.
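The ISR side of this measurement can be sketched as below. The register address and pin number are purely hypothetical placeholders for whatever memory-mapped GPIO a given board provides; only the idea of toggling a probe pin first thing in the ISR carries over:

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO output register (address is a placeholder). */
    #define GPIO_OUT  (*(volatile uint32_t *)0x40020000u)
    #define PIN_PROBE (1u << 5)   /* the pin watched on the oscilloscope */

    void timer_isr(void) {
        GPIO_OUT |= PIN_PROBE;    /* set probe pin FIRST: marks ISR entry on the scope */

        /* ... actual interrupt handling work goes here ... */

        GPIO_OUT &= ~PIN_PROBE;   /* clear the probe pin before returning */
    }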

Importance

Many embedded systems are real time, and it is in those applications, along with fault-tolerant systems, that knowledge of interrupt latency is important. If the requirement is to maximize bandwidth on a particular interface, the latency on that specific interrupt needs to be measured. To give an idea of the numbers involved, the majority of systems exhibit no problems even when subjected to interrupt latencies of tens of microseconds.


Scheduling latency

A key part of the functionality of an RTOS is its ability to support a multi-threading execution environment. Being real time, the efficiency with which threads or tasks are scheduled is of some importance; the scheduler is at the core of an RTOS, so it is reasonable that a user might be interested in its performance. It is hard to get a clear picture of performance, as there is wide variation in the techniques employed to make measurements and in the interpretation of the results.

There are really two separate measurements to consider:

 The context switch time
 The time overhead that the RTOS introduces when scheduling a task

Timing kernel services

An RTOS is likely to have a great many system service API (application program interface) calls, probably numbering into the hundreds. To assess timing, it is not useful to try to analyze the timing of every single call. It makes more sense to focus on the frequently used services.

For most RTOSes there are four key categories of service call:

 Threading services
 Synchronization services

 Inter-process communication services
 Memory services

All RTOS vendors provide performance data for their products, some of
which is more comprehensive than others. This information may be very useful, but
can also be misleading if interpreted incorrectly. It is important to understand the
techniques used to make measurements and the terminology used to describe the
results. There are also trade-offs – generally size against speed – and these, too, need
to be thoroughly understood. Without this understanding, a fair comparison is not
possible. If timing is critical to your application, it is strongly recommended that you perform your own measurements. This ensures that the hardware and software environment is correct and that the figures are directly relevant to your application.

Real Time Scheduling algorithms

In real-time operating systems (RTOS), most tasks are periodic in nature. Periodic data mostly comes from sensors, servo control, and real-time monitoring systems, and these periodic tasks utilize most of the processor's computation power. A real-time control system consists of many concurrent periodic tasks with individual timing constraints. These timing constraints include the release time (r_i), worst-case execution time (C_i), period (T_i), and deadline (D_i) for each individual task T_i. Real-time embedded systems have time constraints linked with the output of the system. Scheduling algorithms are used to determine which task is going to execute when more than one task is available in the ready queue.

The operating system must guarantee that each task is activated at its proper rate and meets its deadline. To ensure this, periodic scheduling algorithms are used. There are two basic types of scheduling algorithms:

Fig:2.2:Classification of scheduling algorithm

Offline Scheduling Algorithm

An offline scheduling algorithm selects tasks to execute with reference to a predetermined schedule, which repeats itself after a specific interval of time. For example, if we have three tasks Ta, Tb and Tc, then Ta will always execute first, then Tb, and after that Tc.
Online Scheduling Algorithm

In online scheduling, a task executes with respect to its priority, which is determined in real time according to a specific rule, and the priorities of tasks may change during execution. Online scheduling algorithms come in two types. They are more flexible because they can change the priority of tasks at run time according to the utilization of the tasks.

Fixed priority algorithms

In fixed-priority scheduling, if the k-th job of a task T1 has higher priority than the k-th job of task T2 according to some specified scheduling event, then every job of T1 will always execute before the corresponding job of T2, i.e. on the next occurrence the priority does not change. More formally, if job J(1,k) of task T1 has higher priority than J(2,k) of task T2, then J(1,k+1) will always have higher priority than J(2,k+1). One of the best examples of a fixed-priority algorithm is the rate monotonic scheduling algorithm.

Dynamic priority algorithms

In a dynamic priority algorithm, different jobs of a task may have different priorities on each occurrence; a job's priority may be higher or lower than those of the other tasks. One example of a dynamic priority algorithm is the earliest deadline first algorithm.

Processor utilization factor (U)

For a given task set of n periodic tasks, the processor utilization factor U is the fraction of time that is spent executing the task set. If S_i is a task from the task set, then C_i/T_i is the fraction of time the processor spends executing S_i. The processor utilization factor is therefore

U = (C_1/T_1) + (C_2/T_2) + ... + (C_n/T_n)
Conversely, if the processor utilization of a task set of n periodic tasks is greater than one, then that task set will not be schedulable by any algorithm. The processor utilization factor describes the load on a single processor; U = 1 means 100% processor utilization. The following scheduling algorithms are discussed in detail.
Rate Monotonic (RM) Scheduling Algorithm

The rate monotonic scheduling algorithm is a simple rule that assigns priorities to tasks according to their periods: the task with the smallest period has the highest priority, and the task with the longest period has the lowest priority for execution. As the period of a task does not change, its priority does not change over time either; therefore rate monotonic is a fixed-priority algorithm. The priorities are decided before the start of execution and do not change afterwards.

The rate monotonic scheduling algorithm works on the principle of preemption. Preemption occurs on a given processor when a higher-priority task blocks a lower-priority task from executing. This blocking occurs due to the priority levels of the different tasks in a given task set. Rate monotonic is a preemptive algorithm, which means that if a task with a shorter period arrives during execution, it gains a higher priority and can block or preempt currently running tasks. In RM, priorities are assigned according to period: the priority of a task is inversely proportional to its period.

The task with the lowest period has the highest priority, and the task with the highest period has the lowest priority. A task set is schedulable under the rate monotonic scheduling algorithm if it satisfies the following (sufficient) condition:

U = (C_1/T_1) + ... + (C_n/T_n) ≤ n(2^(1/n) - 1)
Example of Rate Monotonic (RM) Scheduling Algorithm

For example, we have a task set that consists of three tasks as follows.

Table 2.1 Rate Monotonic (RM) Scheduling Algorithm

Tasks | Release time (r_i) | Execution time (C_i) | Deadline (D_i) | Period (T_i)
T1    | 0                  | 0.5                  | 3              | 3
T2    | 0                  | 1                    | 4              | 4
T3    | 0                  | 2                    | 6              | 6

Task set utilization: U = 0.5/3 + 1/4 + 2/6 = 0.167 + 0.25 + 0.333 = 0.75

As the processor utilization is less than 1 (100%), the task set is schedulable; it also satisfies the rate monotonic bound above, since 0.75 ≤ 3(2^(1/3) - 1) ≈ 0.78.
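This check can be coded as a hedged sketch in C (pow comes from the standard math library; link with -lm). Note that the bound is only sufficient: failing it is inconclusive, not a proof of unschedulability:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double C[] = {0.5, 1.0, 2.0}, T[] = {3.0, 4.0, 6.0};  /* Table 2.1 */
        int n = 3;
        double U = 0.0;
        for (int i = 0; i < n; i++)
            U += C[i] / T[i];
        double bound = n * (pow(2.0, 1.0 / n) - 1.0);  /* Liu-Layland bound */
        printf("U = %.3f, bound = %.3f -> %s\n", U, bound,
               U <= bound ? "schedulable under RM" : "test inconclusive");
        return 0;
    }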

Fig:2.3: RM scheduling of Task set in Table 2.1

The task set given in the above table is RM-scheduled in the given figure. The explanation is as follows:
1. According to the RM scheduling algorithm, the task with the shorter period has higher priority, so T1 has the highest priority, T2 has intermediate priority, and T3 has the lowest priority. At t=0 all the tasks are released. T1 has the highest priority, so it executes first, till t=0.5.
2. At t=0.5 task T2 has higher priority than T3, so it executes for one time unit, till t=1.5. After its completion only one task remains in the system, T3, so it starts its execution and executes till t=3.
3. At t=3 T1 is released; as it has higher priority than T3, it preempts (blocks) T3 and executes till t=3.5. After that the remaining part of T3 executes.
4. At t=4 T2 is released and completes its execution, as there is no other task running in the system at this time.
5. At t=6 both T1 and T3 are released at the same time, but T1 has higher priority due to its shorter period, so it preempts T3 and executes till t=6.5; after that T3 starts running and executes till t=8.
6. At t=8 T2, with higher priority than T3, is released, so it preempts T3 and starts its execution.
7. At t=9 T1 is released again; it preempts T3 and executes first, and at t=9.5 T3 executes its remaining part. The execution continues in the same manner.

Advantages
 It is easy to implement.
 If any static priority assignment algorithm can meet the deadlines, then rate monotonic scheduling can also do so: it is optimal among fixed-priority algorithms.
 It takes the tasks' periods into account, unlike time-sharing algorithms such as round robin, which neglect the scheduling needs of the processes.

Disadvantages
 It is very difficult to support aperiodic and sporadic tasks under RMA.
 RMA is not optimal when task periods and deadlines differ.

Least laxity scheduling

Least Laxity First (LLF) is a job-level dynamic-priority scheduling algorithm. Every instant is a scheduling event, because the laxity of each task changes at every instant of time. The task which has the least laxity at an instant has the highest priority at that instant.

Introduction to Least Laxity First (LLF) scheduling Algorithm

More formally, the priority of a task is inversely proportional to its run-time laxity, where the laxity of a task measures its urgency to execute. Mathematically it is described as

L_i = D_i - (t + C_i')

Here D_i is the deadline of the task, C_i' is its remaining worst-case execution time (WCET), t is the current instant of time, and L_i is the laxity of the task. Laxity is thus the time remaining before the deadline after completing the remaining WCET. Using this equation, the laxity of each task is calculated at every instant of time, and then priorities are assigned. One important thing to note is that the laxity of the running task does not change, whereas the laxity of every other task decreases by one after each time unit.
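A sketch of the laxity computation and task selection, assuming integer time ticks and simple arrays of task parameters (all names here are illustrative):

    /* Return index of the ready task with least laxity at time t (illustrative). */
    static int pick_llf(int n, const int deadline[], const int remaining[],
                        const int ready[], int t) {
        int best = -1, best_lax = 0;
        for (int i = 0; i < n; i++) {
            if (!ready[i] || remaining[i] == 0) continue;
            int lax = deadline[i] - (t + remaining[i]);  /* L_i = D_i - (t + C_i') */
            if (best < 0 || lax < best_lax) {
                best = i;
                best_lax = lax;
            }
        }
        return best;  /* -1 means no ready task */
    }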
Example of Least Laxity first scheduling Algorithm

An example of LLF is given below for a task set.

Table 2.2 Least Laxity First scheduling Algorithm

Task | Release time (r_i) | Execution time (C_i) | Deadline (D_i) | Period (T_i)
T1   | 0                  | 2                    | 6              | 6
T2   | 0                  | 2                    | 8              | 8
T3   | 0                  | 3                    | 10             | 10

Fig:2.4: LLF scheduling algorithm

1. At t=0 the laxities of the tasks are calculated using the equation above:
L1 = 6-(0+2) = 4
L2 = 8-(0+2) = 6
L3 = 10-(0+3) = 7
As task T1 has the least laxity, it executes with the highest priority. Similarly, at t=1 the laxities are recalculated: T1 still has laxity 4 (it is running), while T2 has 5 and T3 has 6, so T1 continues to execute due to least laxity.
2. At t=2, T1 is out of the system, so we now compare the laxities of T2 and T3:
L2 = 8-(2+2) = 4
L3 = 10-(2+3) = 5
Clearly T2 starts execution due to its smaller laxity. At t=3, T2 has laxity 4 and T3 also has laxity 4; ties are broken randomly, so T2 continues to execute. At t=4 no task remains in the system except T3, so it executes till t=6. At t=6, T1 re-enters the system (its second job has absolute deadline 12), so the laxities are calculated again:
L1 = 12-(6+2) = 4
L3 = 10-(6+1) = 3
So T3 continues its execution.


3. At t=8, T2 re-enters the system while T1 is the running task. At this instant the laxities are calculated:
L1 = 12-(8+1) = 3
L2 = 16-(8+2) = 6
So T1 completes its execution. After that T2 starts running, and at t=10 the laxity comparison gives T2 higher priority than T3, so it completes its execution:
L2 = 16-(10+1) = 5
L3 = 20-(10+3) = 7
At t=11 only T3 is in the system, so it starts its execution.


4. At t=12, T1 re-enters the system, and by the laxity comparison at t=12 T1 wins the priority and starts its execution, preempting T3. T1 completes its execution, and after that, at t=14, T3, now the only task, runs its remaining part. In this way, this task set executes under the LLF algorithm.

LLF is an optimal algorithm: if a task set passes the utilization test, it is surely schedulable by LLF. Another advantage of LLF is that it gives some advance knowledge about which task is going to miss its deadline. On the other hand, it also has disadvantages. One is its enormous computation demand, since every time instant is a scheduling event; it also gives poor performance when more than one task has the least laxity.

Cyclic executives:
 Scheduling tables
 Frames
 Frame size constraints
 Generating schedules
 Non-independent tasks
 Pros and cons

Cyclic Scheduling
This is an important way to sequence tasks in a real-time system. Cyclic scheduling is static: the schedule is computed offline and stored in a table, and task scheduling is non-preemptive. Non-periodic work can be run during time slots not used by periodic tasks, which gives non-periodic work an implicit low priority; usually non-periodic work must be scheduled preemptively. The scheduling table executes completely in one hyperperiod H and then repeats. H is the least common multiple of all task periods, giving N quanta per hyperperiod. Multiple tables can support multiple system modes; e.g., an aircraft might support takeoff, cruising, landing, and taxiing modes. Mode switches are permitted only at hyperperiod boundaries; otherwise, it is hard to meet deadlines.

Frames:
The hyperperiod is divided into frames, and timing is enforced only at frame boundaries. Consider a system with four tasks: each task is executed as a function call and must fit within a single frame, and multiple tasks may be executed in one frame. If the frame size is f, the number of frames per hyperperiod is F = H/f.

Frame Size Constraints:

1. Tasks must fit into frames, so f ≥ C_i for all tasks. Justification: non-preemptive tasks should finish executing within a single frame.
2. f must evenly divide H; equivalently, f must evenly divide the period of some task i. Justification: keep the table size small.
3. There should be a complete frame between the release and the deadline of every task. Justification: we want to detect a missed deadline by the time the deadline arrives.
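A minimal frame-based cyclic executive can be sketched as below. The schedule table, the tasks, and the wait_for_frame_tick() timer primitive are illustrative assumptions, not a specific kernel API:

    #define F 4                 /* frames per hyperperiod (example value) */

    typedef void (*task_fn)(void);

    static void taskA(void) { /* periodic work */ }
    static void taskB(void) { /* periodic work */ }

    /* Static schedule table, computed offline: which tasks run in each frame. */
    static task_fn schedule_tbl[F][2] = {
        { taskA, 0     },
        { taskB, 0     },
        { taskA, taskB },
        { 0,     0     },   /* spare slot for non-periodic work */
    };

    static void wait_for_frame_tick(void) { /* block until the frame timer fires (platform-specific) */ }

    void cyclic_executive(void) {
        for (;;) {
            for (int frame = 0; frame < F; frame++) {
                wait_for_frame_tick();          /* timing enforced at frame boundaries */
                for (int s = 0; s < 2; s++)     /* run every task slotted in this frame */
                    if (schedule_tbl[frame][s])
                        schedule_tbl[frame][s]();
            }
            /* one hyperperiod H completed; the table simply repeats */
        }
    }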

The cyclic executive is one of the major software architectures for embedded systems. Historically, cyclic executives have dominated safety-critical systems, where simplicity and predictability win. However, there are significant drawbacks; for example, finding a schedule might require significant offline computation.

Drawbacks:

• Difficult to incorporate sporadic processes.
• Difficult to incorporate processes with long periods.
• Problematic for tasks with dependencies – think of an OS without synchronization.
• Time-consuming to construct the cyclic executive, or to add a new task to the task set.
• "Manual" scheduler construction.
• Cannot deal with any runtime changes.
• Denies the advantages of concurrent programming.

Sharing critical resources among tasks requires a different set of rules compared to the rules used for sharing resources such as the CPU among tasks; we discussed in the last chapter how a resource such as the CPU can be shared among tasks. Priority inversion is an operating-system scenario in which a higher-priority process is preempted by a lower-priority process, implying an inversion of the priorities of the two processes.

Problems due to Priority Inversion

Some of the problems that occur due to priority inversion are given as follows −

 A system malfunction may occur if a high-priority process is not provided the resources it requires.
 Priority inversion may also lead to the implementation of corrective measures, which may include resetting the entire system.
 The performance of the system can be reduced due to priority inversion. This may happen because it is imperative for higher-priority tasks to execute promptly.
 System responsiveness decreases, as high-priority tasks may have strict time constraints or real-time response guarantees.
 Sometimes there is no harm caused by priority inversion, as the late execution of the high-priority process goes unnoticed by the system.
Solutions of Priority Inversion
Some of the solutions to handle priority inversion are given as follows −

 Priority Ceiling
Each resource is assigned a priority equal to the highest priority of any task that may attempt to claim it. This helps in avoiding priority inversion.

 Disabling Interrupts
There are only two priorities in this case, i.e. interrupts disabled and preemptible. Priority inversion is impossible, as there is no third option.

 Priority Inheritance
This solution temporarily elevates the priority of the executing low-priority task to that of the highest-priority task that needs the resource. This means that medium-priority tasks cannot intervene and cause priority inversion.

 No blocking
Priority inversion can be avoided by avoiding blocking, since it is the low-priority task's blocking of the high-priority task that causes the problem.

 Random boosting
The priority of ready tasks can be randomly boosted until they exit the critical section.
Difference between Priority Inversion and Priority Inheritance

Both of these concepts come under priority scheduling in operating systems. In one line, priority inversion is a problem, while priority inheritance is a solution. Literally, priority inversion means that the priorities of tasks get inverted, and priority inheritance means that the priority of a task gets inherited. Both of these phenomena happen in priority scheduling. Basically, in priority inversion, a higher-priority task (H) ends up waiting for a middle-priority task (M) when H shares a critical section with a lower-priority task (L) and L is already in the critical section. Effectively, H waiting for M results in inverted priority, i.e. priority inversion. One of the solutions to this problem is priority inheritance.

In priority inheritance, when L is in the critical section, L inherits the priority of H at the time when H starts waiting for the critical section. By doing so, M does not interrupt L, and H does not wait for M to finish. Note that the inheriting of priority is temporary, i.e. L goes back to its old priority when it comes out of the critical section.

Priority Inheritance Protocol (PIP)

Priority Inheritance Protocol (PIP) is a technique used for sharing critical resources among different tasks. It allows the sharing of critical resources among different tasks without the occurrence of unbounded priority inversion.

Basic Concept of PIP :

The basic concept of PIP is that when a task goes through priority inversion, the priority of the lower-priority task which holds the critical resource is increased by the priority inheritance mechanism. This allows that task to use the critical resource as early as possible without being preempted, avoiding unbounded priority inversion.

Working of PIP :

When several tasks are waiting for the same critical resource, the task which currently holds the critical resource is given the highest priority among all the tasks waiting for that resource. Once the lower-priority task holding the critical resource has been given the highest priority, intermediate-priority tasks cannot preempt it. This helps avoid unbounded priority inversion. When the task that was given the highest priority finishes its job and releases the critical resource, it gets back its original priority value (which may be lower or equal). If a task holds multiple critical resources, then after releasing one critical resource it cannot go back to its original priority value; in this case it inherits the highest priority among all tasks waiting for the same critical resources. A hedged example of enabling this behavior via POSIX mutexes is sketched below.
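Many POSIX systems expose priority inheritance through mutex attributes. The sketch below uses the standard calls; availability depends on the platform supporting the priority inheritance option:

    #include <pthread.h>

    /* Create a mutex whose owner inherits the priority of the highest-priority
       waiter, as in PIP. Returns 0 on success. */
    int make_pip_mutex(pthread_mutex_t *m) {
        pthread_mutexattr_t attr;
        int rc = pthread_mutexattr_init(&attr);
        if (rc) return rc;
        /* PTHREAD_PRIO_INHERIT selects the priority inheritance protocol. */
        rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        if (rc == 0)
            rc = pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }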

Advantages of PIP :

The Priority Inheritance Protocol has the following advantages:

 It allows tasks of different priorities to share critical resources.
 The most prominent advantage of the Priority Inheritance Protocol is that it avoids unbounded priority inversion.

Disadvantages of PIP:

The Priority Inheritance Protocol has two major problems which may occur:

Deadlock:

There is a possibility of deadlock under the priority inheritance protocol. For example, consider two tasks T1 and T2, where T1 has higher priority than T2. T2 starts running first and holds the critical resource CR2. After that, T1 arrives and preempts T2. T1 holds critical resource CR1 and also tries to acquire CR2, which is held by T2. Now T1 blocks, and T2 inherits the priority of T1 according to PIP. T2 resumes execution and now tries to acquire CR1, which is held by T1. Thus, both T1 and T2 are deadlocked.

Chain Blocking:

When a task goes through priority inversion each time it needs a resource, the process is called chain blocking. For example, consider two tasks T1 and T2, where T1 has higher priority than T2. T2 holds the critical resources CR1 and CR2. T1 arrives and requests CR1; T2 undergoes priority inheritance according to PIP. Now T1 requests CR2, and again T2 goes through priority inheritance according to PIP. Hence, multiple priority inversions to acquire critical resources lead to chain blocking.

Priority Ceiling Protocol

In real-time computing, the priority ceiling protocol is a synchronization protocol for shared resources that avoids unbounded priority inversion and mutual deadlock due to wrong nesting of critical sections. In this protocol each resource is assigned a priority ceiling, which is a priority equal to the highest priority of any task which may lock the resource. The protocol works by temporarily raising the priorities of tasks in certain situations, so it requires a scheduler that supports dynamic priority scheduling. It is a job/task synchronization protocol for real-time systems that is better than the priority inheritance protocol in many ways. Real-time systems are multitasking systems that involve the use of semaphore variables, signals, and events for job synchronization. The priority ceiling protocol assumes that all the jobs in the system have fixed priorities. It does not fall into a deadlock state.

The chain blocking problem of the Priority Inheritance Protocol is resolved in the Priority Ceiling Protocol.
The basic properties of Priority Ceiling Protocols are:

1. Each of the resources in the system is assigned a priority ceiling.
2. The assigned priority ceiling is determined by the highest priority among all the jobs which may acquire the resource.
3. It can handle more than one resource or semaphore variable, thus eliminating chain blocking.
4. A job is granted a lock on a resource only if no other job holds a lock on that resource.
5. A job J can acquire a lock only if the job's priority is strictly greater than the priority ceilings of all the locks held by other jobs.
6. If a high-priority job is blocked on a resource, then the job holding that resource inherits the priority of the high-priority task.
7. Once the resource is released, the priority is reset back to the original value.
8. In the worst case, the highest-priority job J1 can be blocked by T lower-priority tasks when J1 has to access T semaphores to finish its execution.
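POSIX likewise standardizes a ceiling-based protocol for mutexes. As a hedged sketch, a mutex with an explicit priority ceiling can be created as follows (platform support permitting):

    #include <pthread.h>

    /* Create a mutex using the priority-ceiling (protect) protocol.
       'ceiling' should be the highest priority of any task that may lock it. */
    int make_ceiling_mutex(pthread_mutex_t *m, int ceiling) {
        pthread_mutexattr_t attr;
        int rc = pthread_mutexattr_init(&attr);
        if (rc) return rc;
        rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
        if (rc == 0)
            rc = pthread_mutexattr_setprioceiling(&attr, ceiling);
        if (rc == 0)
            rc = pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }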

Fig:3.1: Priority Ceiling Protocol

The Priority Ceiling Protocol can be used to tackle the priority inversion problem while avoiding the deadlock and chain blocking that are possible under the Priority Inheritance Protocol. It makes use of semaphores to share resources among the jobs in a real-time system.

UNIT - IV

RTOS PROGRAMMING – SECA5302

Task synchronization using semaphores; inter-task communication: message queues and pipes; remote procedure call; timers and interrupts; memory management and I/O management.

Task Management:

Task is the basic notion in RTOS.

Fig:4.1:Typical RTOS Task Model

Periodic tasks: arriving at a fixed frequency, they can be characterized by 3 parameters (C, D, T), where C = computing time, D = deadline, and T = period. Also called time-driven tasks, their activations are generated by timers.

Fig:4.2:Task states

Non-periodic or aperiodic tasks: all tasks that are not periodic; also known as event-driven tasks, their activations may be generated by external interrupts. Sporadic tasks: aperiodic tasks with a minimum inter-arrival time Tmin.

Managing tasks:

 Execution of quasi-parallel tasks on a processor using processes or threads, by maintaining process states and process queues, and allowing for preemptive tasks and quick interrupt handling.
 CPU scheduling
 Process synchronization
 Inter-process communication
 Support of a real-time clock as an internal time reference

 Task creation: create a new TCB (task control block)
 Task termination: remove the TCB
 Change Priority: modify the TCB
 State-inquiry: read the TCB

Task management is depicted in the diagram below.

Fig:4.3: Block Diagram of Task management

Task synchronization:

In classical operating systems, synchronization and mutual exclusion are performed via semaphores and monitors. In a real-time OS, special semaphores and a deep integration into scheduling are necessary (priority inheritance protocols).

Further responsibilities: initialization of internal data structures (tables, queues, task description blocks, semaphores).

Fig:4.4: Minimal Set of Task States

Run:
A task enters this state as it starts executing on the processor
Ready:
State of those tasks that are ready to execute but cannot be executed, because the
processor is assigned to another task.
Wait:
A task enters this state when it executes a synchronization primitive to wait for an event,
e.g. a wait primitive on a semaphore. In this case, the task is inserted in a queue
associated with the semaphore. The task at the head is resumed when the semaphore is
unlocked by a signal primitive.
Idle:
A periodic job enters this state when it completes its execution and has to wait
for the beginning of the next period.

Challenges for an RTOS:

 When creating an RT task, the task has to get memory without delay: this is difficult because memory has to be allocated, and a lot of data structures and code segments must be copied/initialized.
 The memory blocks for RT tasks must be locked in main memory to avoid access latencies due to swapping.
 Changing run-time priorities is dangerous: it may change the run-time behavior and the predictability of the whole system.

Inter task communication and Synchronization:

Inter-process communication (IPC) enables processes to communicate with each other to share information:

 Pipes (half duplex)
 FIFOs (named pipes)
 Stream pipes (full duplex)
 Named stream pipes
 Message queues
 Semaphores
 Shared memory
 Sockets
 Streams

Pipes

Pipes are the oldest (and perhaps simplest) form of UNIX IPC:

 Half duplex: data flows in only one direction
 Only usable between processes with a common ancestor
 Usually parent-child, but also child-child

In computing, a named pipe (also known as a FIFO) is one of the methods for inter-process communication.
 It is an extension to the traditional pipe concept on Unix. A traditional pipe is "unnamed" and lasts only as long as the process.
 A named pipe, however, can last as long as the system is up, beyond the life of the process. It can be deleted if it is no longer used.
 Usually a named pipe appears as a file, and processes generally attach to it for inter-process communication. A FIFO file is a special kind of file on the local storage which allows two or more processes to communicate with each other by reading from and writing to this file.
 A FIFO special file is entered into the filesystem by calling mkfifo() in C. Once we have created a FIFO special file in this way, any process can open it for reading or writing, in the same way as an ordinary file. However, it has to be open at both ends simultaneously before you can proceed to do any input or output operations on it.

Understanding Pipes
 Within a process
 Writes to fd[1] can be read on fd[0]
 Not very useful
 Between processes
 After a fork(), writes to fd[1] by one process can be read on fd[0] by the other

Using Pipes:
 Usually, the unused end of the pipe is closed by each process
 If process A is writing and process B is reading, then process A would close fd[0] and process B would close fd[1]
 Reading from a pipe whose write end has been closed returns 0 (end of file)
 Writing to a pipe whose read end has been closed generates SIGPIPE
 PIPE_BUF specifies the kernel pipe buffer size

Creating a Pipe

The primitive for creating a pipe is the pipe function. This creates both the reading and writing ends of the pipe. It is not very useful for a single process to use a pipe to talk to itself. In typical use, a process creates a pipe just before it forks one or more child processes. The pipe is then used for communication between the parent and the child processes, or between two sibling processes.

The pipe function is declared in the header file unistd.h. Below is an example of a simple program that creates a pipe: the parent process writes data to the pipe, which is read by the child process.
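A minimal sketch of such a program (the message text is arbitrary):

    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];                      /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                 /* child: reads from the pipe */
            char buf[32];
            close(fd[1]);               /* close unused write end */
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
            close(fd[0]);
        } else {                        /* parent: writes to the pipe */
            const char *msg = "hello";
            close(fd[0]);               /* close unused read end */
            write(fd[1], msg, strlen(msg));
            close(fd[1]);               /* child sees EOF after this */
            wait(0);
        }
        return 0;
    }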

FIFO

A FIFO special file is similar to a pipe, except that it is created in a different way.
Instead of being an anonymous communications channel, a FIFO special file is entered into
the file system by calling mkfifo.

Once you have created a FIFO special file in this way, any process can open it for reading or writing, in the same way as an ordinary file. However, it has to be open at both ends simultaneously before you can proceed to do any input or output operations on it. Opening a FIFO for reading normally blocks until some other process opens the same FIFO for writing, and vice versa.

A related concept is co-processes: a co-process is nothing more than a process whose input and output are both redirected from another process. With regular pipes, only processes with a common ancestor can communicate; with FIFOs (named pipes), any two processes can communicate. Creating and opening a FIFO is just like creating and opening a file.
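As a hedged sketch of the writer side (the path /tmp/demo_fifo is an arbitrary example):

    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    int main(void) {
        const char *path = "/tmp/demo_fifo";
        mkfifo(path, 0600);               /* create the named pipe (ignore EEXIST) */

        /* open() blocks until some reader opens the same FIFO */
        int fd = open(path, O_WRONLY);
        if (fd < 0) return 1;
        const char *msg = "hello over a FIFO\n";
        write(fd, msg, strlen(msg));
        close(fd);
        return 0;
    }

Any unrelated process can then read from it, e.g. with cat /tmp/demo_fifo in a shell.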

Four Queue States:

1. Both the sending and the receiving process are running.
2. Only the sender is executing.
3. The sender is not executing, but the receiver is running.
4. Neither the sender nor the receiver is running.

Parts of a Message Queue:
The queue that is local to the sending machine is called the source queue; the local queue of the receiver is called the destination queue. Queues are managed by queue managers, which handle the mapping of queues to network locations.

Message sending methods:
1. Message queues
2. Mailboxes

Mailbox:
Mailboxes are similar to queues.
 The capacity of a mailbox is usually fixed after initialization.
 Some RTOSes allow only a single message in the mailbox (the mailbox is either full or empty).
 Some RTOSes allow prioritization of messages.

Mailbox functions at OS
Some OSes provide both mailbox and queue IPC functions. When the IPC functions for a mailbox are not provided by an OS, the OS employs a queue for the same purpose. A mailbox belonging to a task can receive messages from other tasks and has a distinct ID. A mailbox is an IPC mechanism through which a message can be received by only one single destination task, though it may be posted by several tasks; two or more tasks cannot take a message from the same mailbox. A task, via an OS function call, puts (posts/sends) into the mailbox only a pointer to the mailbox message. The mailbox message may also include a header to identify the message type.

The OS provides functions for inserting a message into and deleting it from the mailbox via the message pointer; deleting means setting the message pointer to null. Each mailbox needs initialization (creation) before the scheduler's message functions are used, with the message pointer initially pointing to null. There may be a provision for multiple mailboxes for multiple types or destinations of messages. Each mailbox has an ID, and each mailbox usually has one message pointer only, which can point to a message.
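A single-message mailbox can be sketched as below; this is a simplified, non-blocking illustration with assumed names, not a specific RTOS API:

    #include <stddef.h>

    /* One-message mailbox: holds only a pointer to the message (or NULL if empty). */
    typedef struct {
        void *msg;   /* NULL means the mailbox is empty */
    } mailbox_t;

    /* Post: succeeds only if the mailbox is empty (returns 0 on success). */
    int mbox_post(mailbox_t *mb, void *msg) {
        if (mb->msg != NULL) return -1;   /* full */
        mb->msg = msg;
        return 0;
    }

    /* Pend: take the message, leaving the mailbox empty (NULL if none). */
    void *mbox_pend(mailbox_t *mb) {
        void *m = mb->msg;
        mb->msg = NULL;                   /* delete = point the message pointer to NULL */
        return m;
    }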

Mailbox Types

Fig:4.5:Classification of mailbox

Mailbox IPC features

When an OS call posts into the mailbox, the message bytes are those referenced by the mailbox message pointer, up to the indicated number of bytes.

Mailbox Related Functions at the OS

Fig:4.6:Features of mailbox
