RTOS
Usage of RTOS
Components of RTOS
Fig:1.1: Components of RTOS
The Scheduler: This component of an RTOS determines the order in which tasks are executed, which is generally based on priority.
Function Library: It is an important element of an RTOS that acts as an interface between the kernel and the application code. The application sends requests to the kernel through the function library so that it can produce the desired results.
Memory Management: This element allocates memory to every program and is one of the most important elements of an RTOS.
Fast dispatch latency: This is the interval between the moment the OS identifies that a task has terminated and the moment the thread at the head of the ready queue actually starts executing.
User-defined data objects and classes: An RTOS makes use of programming languages like C or C++, whose data objects should be organized according to their operation.
Types of RTOS
Hard Real Time:
In a hard RTOS, the deadline is handled very strictly: a given task must start executing at its specified scheduled time and must be completed within the assigned time duration.
Firm Real Time:
This type of RTOS also needs to follow deadlines. However, missing a deadline may not have a big impact, but it could cause undesired effects, such as a huge reduction in the quality of a product.
Soft Real Time:
A soft RTOS accepts some delays by the operating system. In this type of RTOS, a deadline is assigned to a specific job, but a delay for a small amount of time is acceptable. So deadlines are handled softly by this type of RTOS.
Here are essential factors that you need to consider when selecting an RTOS:
Middleware: If there is no middleware support in a real-time operating system, the integration of processes becomes time-consuming.
Error-free: RTOS systems are designed to be error-free, so there is little chance of an error occurring while performing a task.
Embedded system usage: RTOS programs are small in size, so RTOS is widely used in embedded systems.
Maximum consumption: We can achieve maximum consumption (utilization) of system resources with the help of an RTOS.
Unique features: A good RTOS should be capable and have some extra features, such as how it operates to execute a command, efficient protection of the system's memory, etc.
24/7 performance: An RTOS is ideal for applications that need to run 24/7 without interruption.
Disadvantages of RTOS
An RTOS can run only a minimal number of tasks together, and it concentrates only on those applications that contain an error so that it can avoid them.
An RTOS concentrates on a few tasks, so it is really hard for these systems to do multitasking.
Specific drivers are required for the RTOS so that it can offer a fast response time to interrupt signals, which helps to maintain its speed.
Plenty of resources are used by an RTOS, which makes the system expensive.
Tasks with a low priority may need to wait a long time, because the RTOS maintains the accuracy of the programs that are currently under execution.
Minimal switching of tasks is done in real-time operating systems.
It uses complex algorithms, which are difficult to understand.
An RTOS uses a lot of resources, which is sometimes not suitable for the system.
RTOS Architecture
For simpler applications, an RTOS is usually just a kernel, but as complexity increases, various modules like networking protocol stacks, debugging facilities, and device I/O are included in addition to the kernel. The general architecture of an RTOS is shown in Fig. 1.2.
Kernel
The RTOS kernel acts as an abstraction layer between the hardware and the applications. There are four broad categories of kernels:
· Monolithic kernel
Monolithic kernels are part of Unix-like operating systems like Linux, FreeBSD, etc. A monolithic kernel is one single program that contains all of the code necessary to perform every kernel-related task. It runs all basic system services (i.e. process and memory management, interrupt handling and I/O communication, file system, etc.) and provides powerful abstractions of the underlying hardware. The number of context switches and the amount of messaging involved are greatly reduced, which makes it run faster than a microkernel.
· Microkernel
It runs only basic process communication (messaging) and I/O control. It normally provides only minimal services such as memory protection, inter-process communication, and process management. Other functions, such as running the hardware processes, are not handled directly by microkernels; thus, microkernels provide a smaller set of simple hardware abstractions. A microkernel is more stable than a monolithic kernel, as the kernel is unaffected even if a server (e.g., the file system) fails. Microkernels are part of operating systems like AIX, BeOS, Mach, Mac OS X, MINIX, QNX, etc.
· Hybrid Kernel
Hybrid kernels are extensions of microkernels with some properties of monolithic kernels. Hybrid kernels are similar to microkernels, except that they include additional code in kernel space so that such code can run more swiftly than it would were it in user space. These are part of operating systems such as Microsoft Windows NT, 2000 and XP, DragonFly BSD, etc.
· Exokernel
Exokernels provide efficient control over hardware. An exokernel runs only services protecting the resources (i.e. tracking ownership, guarding usage, revoking access to resources, etc.) by providing a low-level interface for library operating systems and leaving resource management to the application.
Six types of common services are shown in the figure below and explained in the subsequent sections.
Fig:1.3: Representation of Common Services Offered by an RTOS System
Task Management
In an RTOS, the application is decomposed into small, schedulable, and sequential program units known as "tasks". A task is a basic unit of execution and is governed by three time-critical properties: release time, deadline, and execution time. Release time refers to the point in time from which the task can be executed. Deadline is the point in time by which the task must complete. Execution time denotes the time the task takes to execute.
Active: Task is running.
Suspended: Task is put on hold temporarily.
Pending: Task is waiting for a resource.
Task Control Block: A task uses a TCB to remember its context. TCBs are data structures residing in RAM, accessible only by the RTOS.
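As a rough sketch, a TCB can be modeled as a small C structure; the field names below are illustrative, not taken from any particular RTOS:

/* Illustrative TCB layout; the field names are hypothetical. */
#include <stdint.h>

typedef enum { TASK_ACTIVE, TASK_SUSPENDED, TASK_PENDING } task_state_t;

typedef struct tcb {
    uint32_t    *stack_ptr;   /* saved stack pointer, restored on context switch */
    task_state_t state;       /* Active, Suspended, or Pending                   */
    uint8_t      priority;    /* scheduling priority                             */
    uint32_t     task_id;     /* unique task identifier                          */
    struct tcb  *next;        /* link in the scheduler's ready list              */
} tcb_t;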
Scheduler: The scheduler keeps a record of the state of each task, selects from among the tasks that are ready to execute, and allocates the CPU to one of them. Various scheduling algorithms are used in an RTOS.
Fig:1.6: Process Flow of a Scheduler
Polled System with interrupts: In addition to polling, it takes care of critical tasks via interrupts.
Round Robin: Sequences from task to task, each task getting a slice of time.
Hybrid System: Responds to time-sensitive interrupts, with a Round Robin system working in the background.
Fig:1.9: Non-Preemptive Scheduling or Cooperative Multitasking
Dispatcher: The dispatcher gives control of the CPU to the task selected by the scheduler by performing context switching, thereby changing the flow of execution.
Task synchronization and inter-task communication serve to pass information amongst tasks.
Task Synchronization
Event Objects
Event objects are used when task synchronization is required without resource sharing. They allow one or more tasks to keep waiting for a specified event to occur. An event object can exist either in a triggered or non-triggered state; the triggered state indicates resumption of the task.
Semaphores
A semaphore has an associated resource count and a wait queue. The resource count indicates the availability of the resource, while the wait queue manages the tasks waiting for resources from the semaphore. A semaphore functions like a key that defines whether a task has access to the resource: a task gets access to the resource when it acquires the semaphore.
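As a concrete sketch of the acquire/release pattern, here is a task body guarding a shared counter with a POSIX semaphore (RTOS-specific APIs differ in names but follow the same pattern):

#include <semaphore.h>
#include <stdio.h>

sem_t resource_sem;        /* the "key" guarding the shared resource */
int   shared_counter = 0;  /* the shared resource itself             */

void task_body(void)
{
    sem_wait(&resource_sem);   /* acquire: blocks while the count is zero */
    shared_counter++;          /* critical section                        */
    sem_post(&resource_sem);   /* release: wakes one waiting task, if any */
}

int main(void)
{
    sem_init(&resource_sem, 0, 1);   /* initial resource count = 1 */
    task_body();
    printf("counter = %d\n", shared_counter);
    sem_destroy(&resource_sem);
    return 0;
}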
Inter-task communication involves sharing data among tasks through shared memory, transmission of data, etc. Inter-task communication is carried out using the following mechanisms:
Message queues
A message queue is an object used for inter-task communication through which tasks send or receive messages placed in shared memory. The queue may follow 1) First In First Out (FIFO), 2) Last In First Out (LIFO), or 3) priority (PRI) sequence. Usually, a message queue comprises an associated queue control block (QCB), a name, a unique ID, memory buffers, a queue length, a maximum message length, and one or more task waiting lists. A message queue with a length of 1 is commonly known as a mailbox.
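For illustration, the POSIX message queue API follows this model closely; the queue name "/demo_q" and the sizes below are arbitrary example values (on Linux, link with -lrt):

#include <mqueue.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };  /* queue length, max message length */

    mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    mq_send(q, "hello", sizeof "hello", 1);      /* send at priority 1 */

    char buf[64];                                /* at least mq_msgsize bytes */
    unsigned prio;
    mq_receive(q, buf, sizeof buf, &prio);
    printf("got \"%s\" at priority %u\n", buf, prio);

    mq_close(q);
    mq_unlink("/demo_q");
    return 0;
}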
Remote procedure call (RPC): It permits distributed computing, where a task can invoke the execution of another task on a remote computer.
Memory Management
Two types of memory management are provided in an RTOS: stack and heap. Stack management is used during context switching for TCBs. Memory other than that used for program code, program data, and the system stack is called heap memory, and it is used for dynamic allocation of data space for tasks. Management of this memory is called heap management.
Timer Management
Tasks need to be performed after scheduled durations. To keep track of the delays, timers (relative and absolute) are provided in an RTOS.
An RTOS provides various functions for interrupt and event handling, viz., defining interrupt handlers, creation and deletion of ISRs, referencing the state of an ISR, enabling and disabling interrupts, etc. It also restricts interrupts from occurring while a data structure is being modified, minimizes interrupt latencies due to the disabling of interrupts when the RTOS is performing critical operations, and minimizes interrupt response times.
An RTOS generally provides a large number of APIs to support diverse hardware device drivers.
Features of RTOS
Informally, each standard in the POSIX set is identified by a decimal suffix following "POSIX". Thus, POSIX.1 is the standard for an application program interface in the C language, and POSIX.2 is the standard shell and utility interface (that is to say, the user's command interface with the operating system). These are the two main interfaces, but additional interfaces, such as POSIX.4 for thread management, have been or are being developed. The POSIX interfaces were developed under the auspices of the Institute of Electrical and Electronics Engineers (IEEE).
Task – A set of related jobs that are jointly able to provide some system functionality.
Job – A job is a small piece of work that can be assigned to a processor and that may or may not require resources.
Release time of a job – The time at which the job becomes ready for execution.
Execution time of a job – The time taken by the job to finish its execution.
Deadline of a job – The time by which the job should finish its execution.
Processors – Also known as active resources; they are required for the execution of a job.
Relative deadline – The maximum allowable response time of a job.
Response time of a job – The length of time from the release time of the job to the instant it finishes.
Absolute deadline – The relative deadline plus the release time.
Interrupt latency
System: the total delay between the interrupt signal being asserted and the start of the interrupt service routine execution.
OS: the time between the start of the CPU interrupt sequence and the initiation of the ISR. This is really operating system overhead, but many people refer to it as the latency; this is why some vendors can claim zero interrupt latency.
τIL = τH + τOS
where:
τH is the hardware-dependent time, which depends on the interrupt controller on the board as well as the type of the interrupt;
τOS is the OS-induced overhead.
Ideally, quoted figures should include the best and worst case scenarios; the worst case is when the kernel disables interrupts. To measure a time interval like interrupt latency with any accuracy requires a suitable instrument, and the best tool to use is an oscilloscope. One approach is to use one pin on a GPIO interface to generate the interrupt; this pin is monitored on the 'scope. At the start of the interrupt service routine, another pin, which is also being monitored, is toggled. The interval between the two signals may be easily read from the instrument.
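A sketch of the instrumented ISR follows; the GPIO register address and pin assignments are hypothetical placeholders for whatever your board provides:

#include <stdint.h>

#define GPIO_OUT  (*(volatile uint32_t *)0x40020014u)  /* hypothetical GPIO output register */
#define TRIG_PIN  (1u << 0)   /* pin wired to the interrupt input and scope channel 1 */
#define MARK_PIN  (1u << 1)   /* pin monitored on scope channel 2 */

void fire_interrupt(void)
{
    GPIO_OUT |= TRIG_PIN;     /* rising edge: interrupt asserted, scope sees t0 */
}

void my_isr(void)
{
    GPIO_OUT ^= MARK_PIN;     /* very first ISR action: scope sees t1;
                                 t1 - t0 is the measured interrupt latency */
    /* ... actual interrupt handling ... */
}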
Importance
Many embedded systems are real time, and it is those applications, along with fault-tolerant systems, where knowledge of interrupt latency is important. If the requirement is to maximize bandwidth on a particular interface, the latency on that specific interrupt needs to be measured. To give an idea of numbers, the majority of systems exhibit no problems even if they are subjected to interrupt latencies of tens of microseconds.
Scheduling latency
For most RTOS there are four key categories of service call:
Threading services
Synchronization services
Inter-process communication services
Memory services
All RTOS vendors provide performance data for their products, some of which is more comprehensive than others. This information may be very useful, but it can also be misleading if interpreted incorrectly. It is important to understand the techniques used to make the measurements and the terminology used to describe the results. There are also trade-offs, generally size against speed, and these, too, need to be thoroughly understood; without this understanding, a fair comparison is not possible. If timing is critical to your application, it is strongly recommended that you perform your own measurements. This enables you to be sure that the hardware and software environment is correct and that the figures are directly relevant to your application.
The operating system must guarantee that each task is activated at its proper rate and meets its deadline. To ensure this, periodic scheduling algorithms are used. There are two basic types of scheduling algorithms:
Fig:2.2: Classification of scheduling algorithms
Fixed priority algorithms
In fixed-priority scheduling, if the k-th job of a task T1 has higher priority than the k-th job of task T2 according to some specified scheduling event, then every job of T1 will always execute before the corresponding job of T2, i.e. the priority does not change on the next occurrence. More formally, if job J(1,k) of task T1 has higher priority than J(2,k) of task T2, then J(1,k+1) will always have higher priority than J(2,k+1). One of the best examples of a fixed-priority algorithm is the rate monotonic scheduling algorithm.
Dynamic priority algorithms
In dynamic-priority scheduling, the priority of a task may change over time relative to the other tasks. One example of a dynamic priority algorithm is the earliest deadline first algorithm.
For a given task set of n periodic tasks, the processor utilization factor U is the fraction of time that is spent on the execution of the task set. If Si is a task from the task set, then Ci/Ti is the fraction of time spent by the processor on the execution of Si. The processor utilization factor is therefore

U = Σ (Ci / Ti), summed over i = 1 to n
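As a quick illustration, this sketch computes U for the three-task set used in the rate monotonic example below (Ci = 0.5, 1, 2 and Ti = 3, 4, 6):

#include <stdio.h>

int main(void)
{
    double C[] = { 0.5, 1.0, 2.0 };   /* execution times Ci */
    double T[] = { 3.0, 4.0, 6.0 };   /* periods Ti         */
    int n = 3;

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];             /* U = sum of Ci / Ti */

    printf("U = %.3f\n", U);          /* prints U = 0.750; U > 1 would be unschedulable */
    return 0;
}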
If, for a task set of n periodic tasks, the processor utilization is greater than one, then that task set will not be schedulable by any algorithm. The processor utilization factor describes the processor load on a single processor; U = 1 means 100% processor utilization. The following scheduling algorithms are discussed in detail below.
Rate Monotonic (RM) Scheduling Algorithm
For example, we have a task set that consists of three tasks as follows
Tasks   Release time (ri)   Execution time (Ci)   Deadline (Di)   Time period (Ti)
T1      0                   0.5                   3               3
T2      0                   1                     4               4
T3      0                   2                     6               6
The task set given in the above table is scheduled by RM in the given figure. The explanation is as follows:
1. According to the RM scheduling algorithm, the task with the shorter period has higher priority, so T1 has the highest priority, T2 has intermediate priority, and T3 has the lowest priority. At t=0 all the tasks are released; T1 has the highest priority, so it executes first, until t=0.5.
2. At t=0.5 task T2 has higher priority than T3, so it executes for one time unit, until t=1.5. After its completion, only one task remains in the system, T3, so it starts its execution and executes till t=3.
3. At t=3 T1 is released; as it has higher priority than T3, it preempts (blocks) T3 and executes till t=3.5. After that, the remaining part of T3 executes.
4. At t=4 T2 is released and completes its execution, as there is no other task running in the system at this time.
5. At t=6 both T1 and T3 are released at the same time, but T1 has higher priority due to its shorter period, so it preempts T3 and executes till t=6.5; after that T3 starts running and executes till t=8.
6. At t=8 T2, with higher priority than T3, is released, so it preempts T3 and starts its execution.
7. At t=9 T1 is released again; it preempts T3 and executes first, and at t=9.5 T3 executes its remaining part. The execution continues similarly.
Advantages
It is easy to implement.
If any static priority assignment algorithm can meet the deadlines, then rate monotonic scheduling can also do so: it is optimal among static priority algorithms.
It takes the time periods of the tasks into account, unlike time-sharing algorithms such as round robin, which neglect the scheduling needs of the processes.
Disadvantages
It is very difficult to support aperiodic and sporadic tasks under RMA.
RMA is not optimal when task periods and deadlines differ.
Least Laxity First (LLF) Scheduling Algorithm
In LLF, the laxity of a task at time t is its deadline minus the sum of t and its remaining execution time. The laxity of the running task does not change; it remains the same, whereas the laxity of every other ready task decreases by one after each time unit.
Example of Least Laxity First Scheduling Algorithm

Tasks   Release time (ri)   Execution time (Ci)   Deadline (Di)   Time period (Ti)
T1      0                   2                     6               6
T2      0                   2                     8               8
T3      0                   3                     10              10

1. At t=0 the laxities are:
L1 = 6-(0+2) = 4
L2 = 8-(0+2) = 6
L3 = 10-(0+3) = 7
As task T1 has the least laxity, it will execute with the highest priority. Similarly, at t=1 its laxity is calculated as 4, while T2 has 5 and T3 has 6, so again, due to least laxity, T1 continues to execute.
2. At t=2 T1 is out of the system, so now we compare the laxities of T2 and T3 as follows:
L2 = 8-(2+2) = 4
L3 = 10-(2+3) = 5
L3 = 10-(6+1) = 3
L3 = 20-(10+3) = 7
LLF is an optimal algorithm: if a task set passes the utilization test, then it is surely schedulable by LLF. Another advantage of LLF is that it gives some advance knowledge about which task is going to miss its deadline. On the other hand, it also has disadvantages: one is its enormous computation demand, as each time instant is a scheduling event, and it gives poor performance when more than one task has the least laxity.
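The laxity rule is easy to express in code. The sketch below (with illustrative types) picks the ready task with the least laxity at time t, reproducing the t=0 decision from the example above:

#include <stdio.h>

typedef struct { double deadline, remaining; } task_t;

/* Returns the index of the ready task with the least laxity at time t,
   or -1 if every task has finished. Laxity = deadline - (t + remaining). */
int least_laxity_task(const task_t tasks[], int n, double t)
{
    int best = -1;
    double best_laxity = 1e30;
    for (int i = 0; i < n; i++) {
        if (tasks[i].remaining <= 0) continue;
        double laxity = tasks[i].deadline - (t + tasks[i].remaining);
        if (laxity < best_laxity) { best_laxity = laxity; best = i; }
    }
    return best;
}

int main(void)
{
    task_t ts[] = { {6, 2}, {8, 2}, {10, 3} };   /* T1, T2, T3 at t = 0 */
    printf("run T%d first\n", least_laxity_task(ts, 3, 0) + 1);   /* prints: run T1 first */
    return 0;
}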
Cyclic executives:
Scheduling tables
Frames
Frame size constraints
Generating schedules
Non-independent tasks
Pros and cons
Cyclic Scheduling
This is an important way to sequence tasks in a real-time system. Cyclic scheduling is static: the schedule is computed offline and stored in a table, and task scheduling is non-preemptive. Non-periodic work can be run during time slots not used by periodic tasks, which gives non-periodic work an implicit low priority; usually non-periodic work must be scheduled preemptively. The scheduling table executes completely in one hyperperiod H and then repeats. H is the least common multiple of all task periods, giving N quanta per hyperperiod. Multiple tables can support multiple system modes; e.g., an aircraft might support takeoff, cruising, landing, and taxiing modes. Mode switches are permitted only at hyperperiod boundaries; otherwise, it is hard to meet deadlines.
Frames:
Divide hyperperiods into frames. Timing is enforced only at frame boundaries.
Consider a system with four tasks. Each task is executed as a function call and must fit within a single frame; multiple tasks may be executed in a frame. If the frame size is f, the number of frames per hyperperiod is F = H/f.
1. Tasks must fit into frames, so f ≥ Ci for all tasks. Justification: non-preemptive tasks should finish executing within a single frame.
2. f must evenly divide H; equivalently, f must evenly divide Ti for some task i. Justification: keep the table size small.
3. There should be a complete frame between the release and deadline of every task. Justification: we want to detect missed deadlines by the time the deadline arrives.
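Putting these pieces together, a cyclic executive reduces to a loop over a static table. This is a minimal sketch; the tasks, frame count, and timer hook are placeholders:

#include <stdio.h>

#define FRAMES_PER_HYPERPERIOD 4

void taskA(void)     { puts("A"); }
void taskB(void)     { puts("B"); }
void idle_slot(void) { /* slot available for non-periodic work */ }

/* Offline-computed schedule: the task to run in each frame. */
void (*schedule[FRAMES_PER_HYPERPERIOD])(void) = { taskA, taskB, taskA, idle_slot };

void wait_for_frame_boundary(void) { /* e.g., block on a periodic timer tick */ }

int main(void)
{
    for (;;) {                                       /* one iteration per hyperperiod H */
        for (int f = 0; f < FRAMES_PER_HYPERPERIOD; f++) {
            wait_for_frame_boundary();               /* timing enforced at frame boundaries only */
            schedule[f]();                           /* non-preemptive function call */
        }
    }
}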
Drawbacks:
Sharing critical resources among tasks requires a different set of rules from those used for sharing resources such as a CPU among tasks; in the last chapter we discussed how resources such as the CPU can be shared among tasks. Priority inversion is an operating-system scenario in which a higher priority process is preempted by a lower priority process, implying an inversion of the priorities of the two processes.
A system malfunction may occur if a high priority process is not provided the required resources.
Priority inversion may also lead to the implementation of corrective measures, which may include resetting the entire system.
The performance of the system can be reduced due to priority inversion, because it is imperative for higher priority tasks to execute promptly.
System responsiveness decreases, as high priority tasks may have strict time constraints or real-time response guarantees.
Sometimes there is no harm caused by priority inversion, as the late execution of the high priority process goes unnoticed by the system.
Solutions of Priority Inversion
Some of the solutions to handle priority inversion are given as follows:
Priority Ceiling
All resources are assigned a priority equal to the highest priority of any task that may attempt to claim them. This helps in avoiding priority inversion.
Disabling Interrupts
There are only two priorities in this case, i.e. interrupts disabled and preemptible. So priority inversion is impossible, as there is no third option.
Priority Inheritance
This solution temporarily elevates the priority of the executing low priority task to that of the highest priority task that needs the resource. This means that medium priority tasks cannot intervene and lead to priority inversion.
No blocking
Priority inversion can be avoided by avoiding blocking, since it is the low priority task's blocking of the high priority task that causes the problem.
Random boosting
The priority of the ready tasks can be randomly boosted until they exit the critical
section.
Difference between Priority Inversion and Priority Inheritance
Both of these concepts come under priority scheduling in operating systems. In one line: priority inversion is a problem, while priority inheritance is a solution. Literally, priority inversion means that the priorities of tasks get inverted, and priority inheritance means that the priorities of tasks get inherited; both of these phenomena happen in priority scheduling. Basically, in priority inversion, a higher priority task (H) ends up waiting for a middle priority task (M) when H shares a critical section with a lower priority task (L) and L is already in the critical section. Effectively, H waiting for M results in inverted priority, i.e. priority inversion. One of the solutions to this problem is priority inheritance.
In priority inheritance, when L is in the critical section, L inherits the priority of H at the time when H starts pending on the critical section. By doing so, M does not interrupt L, and H does not wait for M to finish. Note that the inheriting of priority is temporary: L goes back to its old priority when it comes out of the critical section.
The Priority Inheritance Protocol (PIP) is a technique used for sharing critical resources among different tasks. It allows the sharing of critical resources among different tasks without the occurrence of unbounded priority inversion.
The basic concept of PIP is that when a task goes through priority inversion, the priority of the lower priority task holding the critical resource is increased by the priority inheritance mechanism. This allows that task to use the critical resource as early as possible without being preempted, and thereby avoids unbounded priority inversion.
Working of PIP:
When several tasks are waiting for the same critical resource, the task currently holding the resource is given the highest priority among all the tasks waiting for it. Once the lower priority task holding the critical resource has been given the highest priority, the intermediate priority tasks cannot preempt it; this is what avoids unbounded priority inversion. When the task that was given the highest priority finishes its job and releases the critical resource, it gets back its original priority value (which may be less than or equal to the inherited one). If a task is holding multiple critical resources, then after releasing one critical resource it cannot go back to its original priority value; in this case it inherits the highest priority among all tasks waiting for the same critical resources.
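The following toy model illustrates the rule. Priorities live in a plain array (higher number = higher priority), and all names are illustrative rather than from a real kernel:

#include <stdio.h>

enum { NTASKS = 3 };
int prio[NTASKS]      = { 3, 2, 1 };   /* current priorities: H, M, L */
int base_prio[NTASKS] = { 3, 2, 1 };   /* original priorities         */

typedef struct { int owner; } pi_mutex_t;   /* owner == -1 means unlocked */

void pi_lock(pi_mutex_t *m, int self)
{
    if (m->owner != -1) {
        if (prio[self] > prio[m->owner]) {
            /* Priority inheritance: boost the holder to the blocker's level. */
            prio[m->owner] = prio[self];
            printf("task %d inherits priority %d\n", m->owner, prio[self]);
        }
        return;   /* in a real kernel, self would now block until release */
    }
    m->owner = self;
}

void pi_unlock(pi_mutex_t *m)
{
    prio[m->owner] = base_prio[m->owner];   /* revert to the original priority */
    m->owner = -1;
}

int main(void)
{
    pi_mutex_t m = { .owner = -1 };
    pi_lock(&m, 2);    /* L (task 2) acquires the resource               */
    pi_lock(&m, 0);    /* H (task 0) blocks; L inherits H's priority (3) */
    pi_unlock(&m);     /* L releases and drops back to priority 1        */
    return 0;
}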
Advantages of PIP :
Disadvantages of PIP :
Two types of problems can occur under PIP: deadlock and chain blocking.
Deadlock :
Chain Blocking :
When a task goes through priority inversion each time it needs a resource, the process is called chain blocking. For example, suppose there are two tasks T1 and T2, where T1 has higher priority than T2, and T2 holds the critical resources CR1 and CR2. T1 arrives and requests CR1, and T2 undergoes priority inversion according to PIP. Now T1 requests CR2, and again T2 goes through priority inversion according to PIP. Hence, multiple priority inversions to hold critical resources lead to chain blocking.
The chained blocking problem of the Priority Inheritance Protocol is resolved in the
Priority Ceiling Protocol.
The basic properties of Priority Ceiling Protocols are:
1. Each of the resources in the system is assigned a priority ceiling.
2. The assigned priority ceiling is determined by the highest priority among all the jobs which may acquire the resource.
3. It works with more than one resource or semaphore variable, thus eliminating chain blocking.
4. A job is assigned a lock on a resource if no other job has acquired a lock on that resource.
5. A job J can acquire a lock only if the job's priority is strictly greater than the priority ceilings of all the locks held by other jobs.
6. If a high priority job is blocked by a resource, then the job holding that resource inherits the priority of the high priority task.
7. Once the resource is released, the priority is reset back to the original.
8. In the worst case, the highest priority job J1 can be blocked by T lower priority tasks in the system when J1 has to access T semaphores to finish its execution.
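Property 5, the ceiling test, is the heart of the protocol; the sketch below (illustrative types, not a real kernel API) shows the check a job must pass before it may lock any resource:

#include <stdio.h>

typedef struct {
    int ceiling;   /* priority ceiling, assigned offline (properties 1-2) */
    int holder;    /* job currently holding the lock, -1 if free          */
} pcp_lock_t;

/* Returns 1 if the job may lock: its priority must be strictly greater
   than the ceilings of all locks currently held by other jobs. */
int pcp_may_lock(const pcp_lock_t locks[], int nlocks, int job_id, int job_prio)
{
    for (int i = 0; i < nlocks; i++) {
        if (locks[i].holder == -1 || locks[i].holder == job_id)
            continue;                         /* free, or held by this job */
        if (job_prio <= locks[i].ceiling)
            return 0;                         /* blocked by the ceiling    */
    }
    return 1;
}

int main(void)
{
    pcp_lock_t locks[] = { { .ceiling = 5, .holder = 7 } };  /* held by job 7 */
    printf("%d\n", pcp_may_lock(locks, 1, 1, 6));  /* 1: priority 6 > ceiling 5 */
    printf("%d\n", pcp_may_lock(locks, 1, 1, 5));  /* 0: not strictly greater   */
    return 0;
}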
UNIT-IV
Task synchronization using semaphores; inter-task communication: message queues and pipes; remote procedure call; timers and interrupts; memory management and I/O management.
Task Management:
Fig:4.2:Task states
Non-periodic or aperiodic tasks are all tasks that are not periodic; they are also known as event-driven tasks, and their activations may be generated by external interrupts. Sporadic tasks are aperiodic tasks with a minimum inter-arrival time Tmin.
Managing tasks:
Task creation: create a new TCB (task control block)
Task termination: remove the TCB
Change Priority: modify the TCB
State-inquiry: read the TCB
Task synchronization:
Run:
A task enters this state as it starts executing on the processor
Ready:
The state of those tasks that are ready to execute but cannot be executed because the processor is assigned to another task.
Wait:
A task enters this state when it executes a synchronization primitive to wait for an event,
e.g. a wait primitive on a semaphore. In this case, the task is inserted in a queue
associated with the semaphore. The task at the head is resumed when the semaphore is
unlocked by a signal primitive.
Idle:
A periodic job enters this state when it completes its execution and has to wait
for the beginning of the next period.
In computing, a named pipe (also known as a FIFO) is one of the methods for inter-process communication.
It is an extension to the traditional pipe concept on Unix. A traditional pipe is
“unnamed” and lasts only as long as the process.
A named pipe, however, can last as long as the system is up, beyond the life of the
process. It can be deleted if no longer used.
Usually a named pipe appears as a file, and processes generally attach to it for inter-process communication. A FIFO file is a special kind of file on the local storage which allows two or more processes to communicate with each other by reading from and writing to this file.
A FIFO special file is entered into the filesystem by calling mkfifo() in C. Once we have
created a FIFO special file in this way, any process can open it for reading or writing, in
the same way as an ordinary file. However, it has to be open at both ends
simultaneously before you can proceed to do any input or output operations on it.
Understanding Pipes
Within a process:
Writes to files[1] can be read on files[0].
Not very useful.
Between processes:
After a fork(), writes to files[1] by one process can be read on files[0] by the other.
Using Pipes:
Usually, the unused end of the pipe is closed by each process. If process A is writing and process B is reading, then process A would close files[0] and process B would close files[1].
Reading from a pipe whose write end has been closed returns 0 (end of file).
Writing to a pipe whose read end has been closed generates SIGPIPE.
PIPE_BUF specifies the kernel pipe buffer size.
Creating a Pipe
The primitive for creating a pipe is the pipe function, which creates both the reading and writing ends of the pipe. It is not very useful for a single process to use a pipe to talk to itself; in typical use, a process creates a pipe just before it forks one or more child processes. The pipe is then used for communication either between the parent and child processes, or between two sibling processes. The pipe function is declared in the header file unistd.h. Here is an example of a simple program that creates a pipe: the parent process writes data to the pipe, which is read by the child process.
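A minimal version of such a program might look like this, using files[0] as the read end and files[1] as the write end, consistent with the naming above:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int files[2];      /* files[0]: read end, files[1]: write end */
    char buf[64];

    if (pipe(files) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {                  /* child: the reader */
        close(files[1]);                /* close unused write end */
        ssize_t n = read(files[0], buf, sizeof buf - 1);
        if (n < 0) n = 0;
        buf[n] = '\0';
        printf("child read: %s\n", buf);
        close(files[0]);
        exit(0);
    }

    /* parent: the writer */
    close(files[0]);                    /* close unused read end */
    const char *msg = "hello from parent";
    write(files[1], msg, strlen(msg));
    close(files[1]);                    /* reader now sees end of file */
    wait(NULL);
    return 0;
}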
FIFO
A FIFO special file is similar to a pipe, except that it is created in a different way. Instead of being an anonymous communication channel, a FIFO special file is entered into the file system by calling mkfifo. Once created, any process can open it for reading or writing just like an ordinary file, but, as noted above, it has to be open at both ends simultaneously before any input or output operations can proceed: opening a FIFO for reading normally blocks until some other process opens the same FIFO for writing, and vice versa.
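The writer side of a FIFO might look like the sketch below; the path "/tmp/demo_fifo" is an arbitrary example. Run, say, cat /tmp/demo_fifo in another shell to act as the reader; until the reader appears, the open() call blocks, as described above.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";
    mkfifo(path, 0600);            /* enter the FIFO into the filesystem */

    int fd = open(path, O_WRONLY); /* blocks until some process opens for reading */
    if (fd == -1) { perror("open"); return 1; }

    write(fd, "ping\n", 5);        /* data flows through the kernel, not the disk */
    close(fd);

    unlink(path);                  /* delete the FIFO once it is no longer used */
    return 0;
}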
First: co-processes. These are nothing more than a process whose input and output are both redirected from another process.
FIFOs: named pipes. With regular pipes, only processes with a common ancestor can communicate; with FIFOs, any two processes can communicate. Creating and opening a FIFO is just like creating and opening a file.
Mailbox:
Mailboxes are similar to queues.
The capacity of a mailbox is usually fixed after initialization.
Some RTOS allow only a single message in the mailbox (the mailbox is either full or empty).
Some RTOS allow prioritization of messages.
Mailbox functions at OS
Some OS provide both mailbox and queue IPC functions; when mailbox IPC functions are not provided by an OS, the OS employs a queue for the same purpose. A mailbox of a task can receive messages from other tasks and has a distinct ID. A mailbox is an IPC mechanism through which a message can be received by only a single destined task; two or more tasks cannot take messages from the same mailbox. On an OS function call, a task puts (posts, or sends) into the mailbox only a pointer to the mailbox message. The mailbox message may also include a header to identify the message-type specification.
The OS provides functions for inserting and deleting the mailbox message pointer; deleting means the message pointer points to NULL. Each mailbox for a message needs initialization (creation) before the scheduler functions use it, with the message pointer pointing to NULL. There may be a provision for multiple mailboxes for the multiple types or destinations of messages. Each mailbox has an ID and usually has only one message pointer, which can point to a message.
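A toy single-slot mailbox of this kind can be modeled as below; the names are illustrative, not from a specific RTOS:

#include <stddef.h>
#include <stdio.h>

typedef struct {
    int   id;    /* distinct mailbox ID            */
    void *msg;   /* message pointer; NULL == empty */
} mailbox_t;

int mbox_post(mailbox_t *mb, void *msg)
{
    if (mb->msg != NULL) return -1;   /* full: only one message pointer allowed */
    mb->msg = msg;                    /* post: store the message pointer        */
    return 0;
}

void *mbox_pend(mailbox_t *mb)
{
    void *m = mb->msg;
    mb->msg = NULL;                   /* delete: pointer goes back to NULL */
    return m;                         /* NULL means the mailbox was empty  */
}

int main(void)
{
    mailbox_t mb = { .id = 1, .msg = NULL };   /* initialization (creation) */
    int payload = 42;
    mbox_post(&mb, &payload);
    printf("received %d\n", *(int *)mbox_pend(&mb));
    return 0;
}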
Mailbox Types
Fig:4.5:Classification of mailbox
When an OS call posts into the mailbox, the message bytes are as per the number of bytes pointed to by the mailbox message pointer.
Fig:4.6:Features of mailbox