
Chapter 3: Processes and Threads

3.1 Process Concept

 Early operating systems allowed only one program to be executed at a time. This
program had complete control of the system.
 Current-day OSs allow multiple programs to be loaded and executed concurrently,
with the CPU (or CPUs) multiplexed among them.
 The textbook uses the terms job and process almost interchangeably.
 Process – a program in execution; it is more than the program code.
 A program by itself is not a process; it is a passive entity, such as a file containing
a list of instructions stored on disk (often called an executable file), whereas a
process is an active entity. A program becomes a process when an executable file
is loaded into memory, either by double-clicking the program’s icon or by entering
the name of the executable file on the CLI.
3.1 Process Concept (Cont.)

 Each process in an OS has unique characteristics that define its behavior:
 Independent Execution: Processes do not share memory or data unless
explicitly allowed.
 Resource Ownership: Every process has its own memory space and system
resources.
 Execution Context: The OS keeps track of process execution through the
Process Control Block (PCB).
 Inter-Process Communication (IPC): Processes communicate via message
passing or shared memory.
3.2 Process State
 The state of a process is defined in part by the current activity of that process. As a process
executes, it changes state:
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur (such as an I/O completion or
reception of a signal)
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution
 Note: only one process can be running on any processor at any instant. Many processes
may be ready and waiting, however.
3.3 Process Control Block (PCB)
 Each process is represented in the OS by a PCB, also called a task control block.
 The Process Control Block (PCB) is a data structure maintained by the
operating system for each process. It contains important information needed for
process management, including:
 Process ID (PID): A unique identifier for each process.
 Process State: Indicates if the process is new, ready, running, waiting, or
terminated.
 Program Counter: Holds the address of the next instruction to execute.
 CPU Registers: Stores temporary data needed for process execution.
 Memory Management Information: Contains details about memory
allocation, page tables, and segment tables.
 I/O Status Information: Tracks files opened by the process and I/O devices
used.
 Scheduling Information: Stores priority levels and scheduling queues.
 The OS uses the PCB for context switching and process management.
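As a sketch, the PCB can be pictured as a C record holding the fields listed above. The layout below is hypothetical and greatly simplified; a real kernel's equivalent (such as Linux's task_struct) holds far more:

    /* A simplified, hypothetical PCB layout; real kernels store far more. */
    struct pcb {
        int           pid;                 /* Process ID (PID) */
        enum { NEW, READY, RUNNING, WAITING, TERMINATED } state;
        unsigned long program_counter;     /* address of the next instruction */
        unsigned long registers[16];       /* saved CPU register contents */
        void         *page_table;          /* memory-management information */
        int           open_files[32];      /* I/O status: open file descriptors */
        int           priority;            /* scheduling information */
        struct pcb   *next;                /* link for a scheduling queue */
    };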
3.4 Process Scheduling
 Process Scheduling is the mechanism by which the OS selects which process
gets CPU time and when.
 Types of Scheduling:
 Long-Term Scheduling: Determines which processes are admitted to the system for
execution.
 Short-Term Scheduling: Selects which process runs next from the ready queue.
 Medium-Term Scheduling: Manages swapping processes in and out of memory.
 Schedulers:
• Long-Term Scheduler (Job Scheduler): Determines which processes enter the ready
queue.
• Short-Term Scheduler (CPU Scheduler): Selects the next process for execution.
• Medium-Term Scheduler: Temporarily removes processes from memory to optimize
system performance.
 Scheduling Policies:
• Preemptive Scheduling: Allows a running process to be interrupted (e.g., Round
Robin, Priority Scheduling).
• Non-Preemptive Scheduling: Once a process starts, it runs until completion (e.g.,
First-Come-First-Serve, Shortest Job Next).
3.5 Context Switching (CPU Switching Between Processes)
 Context switching is the process by which the CPU switches from one
process to another.
 It allows multiple processes to share CPU time efficiently.
 Steps in context switching:
 The OS saves the current process's state (program counter, registers,
etc.) into its PCB.
 The OS loads the state of the next process from its PCB.
 The CPU starts executing the new process.
 Context switching is managed by the short-term scheduler and is triggered
by:
 A process voluntarily yielding the CPU (e.g., waiting for I/O).
 A higher-priority process becoming ready.
 A time slice expiration in preemptive scheduling.
 Although necessary, frequent context switching can reduce CPU efficiency
due to overhead.
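The three steps above can be illustrated with a small, hypothetical C sketch. Here save_state() and load_state() are stand-ins for the architecture-specific register save/restore that a real kernel performs:

    #include <stdio.h>

    /* Minimal PCB with just the fields touched during a switch (hypothetical). */
    struct pcb { int pid; unsigned long pc; unsigned long regs[4]; };

    /* Stand-ins for architecture-specific register save/restore. */
    static void save_state(struct pcb *p) { printf("saving state of pid %d\n", p->pid); }
    static void load_state(struct pcb *p) { printf("loading state of pid %d\n", p->pid); }

    /* The steps of a context switch, as listed above. */
    static void context_switch(struct pcb *current, struct pcb *next) {
        save_state(current);  /* 1. save the current process's state into its PCB */
        load_state(next);     /* 2. load the next process's state from its PCB */
        /* 3. the CPU resumes executing the new process */
    }

    int main(void) {
        struct pcb a = { .pid = 1 }, b = { .pid = 2 };
        context_switch(&a, &b);
        return 0;
    }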
CPU Switch From Process to Process
3.6 Operations on Processes
3.6.1 Process Creation
 A parent process creates child processes, which, in turn, create other processes,
forming a tree of processes.
 To accomplish its tasks, a process needs certain resources, which it obtains
directly from the OS or from its parent process.
 Resource sharing
 Parent and children share all resources
 Children share subset of parent’s resources
 Parent and child share no resources
 Restricting a child process to a subset of the parent’s resources prevents any
process from overloading the system by creating too many subprocesses.
 Two possibilities in terms of execution
 Parent and children execute concurrently
 Parent waits until children terminate
 Two possibilities in terms of address space
 Child is a duplicate of the parent (it has the same program and data as the parent)
 Child has a program loaded into it
Process Creation (Cont.)
 UNIX examples
 fork system call creates a new process
 exec system call used after a fork to replace the process’ memory
space with a new program
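A minimal sketch of this fork()/exec() pattern on a UNIX-like system (error handling abbreviated): the child replaces its memory image with the ls program, while the parent waits for it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();            /* create a new (child) process */
        if (pid < 0) {                 /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {         /* child: replace its memory with a new program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("exec");            /* reached only if exec fails */
            exit(1);
        } else {                       /* parent: wait for the child to terminate */
            wait(NULL);
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }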
3.6.2 Process Termination

 A process terminates when it finishes its last statement and asks
the operating system to delete it (by using the exit() system call).
 The process may return a status value to its parent process
(via the wait() system call).
 The process’s resources are deallocated by the operating system.
 A parent may terminate execution of its child processes using an
appropriate system call (abort) for various reasons, such as:
 The child has exceeded its allocated resources
 The task assigned to the child is no longer required
 The parent is exiting:
 Some operating systems do not allow a child to continue if its
parent terminates
– All children are terminated: cascading termination
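A small sketch of termination on a UNIX-like system: the child returns a status value via exit(), which the parent collects with wait():

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            exit(42);                  /* child asks the OS to delete it, returning 42 */
        } else if (pid > 0) {
            int status;
            wait(&status);             /* parent collects the child's exit status */
            if (WIFEXITED(status))
                printf("child exited with status %d\n", WEXITSTATUS(status));
        }
        return 0;
    }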
3.7 Cooperating Processes
 An independent process cannot affect or be affected by the execution of another
process (it does not share data with any other process).
 A cooperating process can affect or be affected by the execution of another
process (it shares data with other processes).
 Advantages of process cooperation:
 Information sharing: several users may need the same information
 Computation speed-up: enables parallel execution
 Modularity
 Convenience
 Cooperating processes require an interprocess communication (IPC)
mechanism to exchange data and information. Two models of IPC are:
 Shared memory
 Message passing
Communications Models: (a) Message Passing, (b) Shared Memory

3.7.1 Shared Memory

 Processes can exchange information by reading and writing data to a
shared memory region established by the cooperating processes.
 A shared memory region resides in the address space of the process
creating it. Other processes that wish to communicate attach it to their
address space.
 Allows maximum speed and convenience of communication, as it can be
done at memory speeds when within a computer.
 System calls are required only to establish shared memory regions; no
assistance from the kernel is required to access the shared memory.
 Producer-consumer problem: a paradigm for cooperating processes in which a
producer process produces information that is consumed by a consumer
process. One solution is to use a buffer:
 unbounded buffer: places no practical limit on the size of the buffer
 bounded buffer: assumes that there is a fixed buffer size
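A minimal POSIX sketch, assuming a Linux-like system (error handling omitted; the region name /demo_shm is arbitrary): the parent establishes a shared region with shm_open() and mmap(), and parent and child then exchange data with ordinary memory reads and writes, with no kernel assistance per access.

    /* Compile with -lrt on some systems. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    #define SHM_NAME "/demo_shm"   /* arbitrary name for this sketch */
    #define SHM_SIZE 4096

    int main(void) {
        /* System calls are needed only to establish the region. */
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, SHM_SIZE);
        char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);

        if (fork() == 0) {                         /* child: the producer */
            strcpy(region, "hello from producer"); /* ordinary memory write */
            return 0;
        }
        wait(NULL);                                /* crude synchronization for the sketch */
        printf("consumer read: %s\n", region);     /* ordinary memory read */

        munmap(region, SHM_SIZE);
        shm_unlink(SHM_NAME);                      /* remove the region */
        return 0;
    }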
3.7.2 Message Passing
 Communication takes place by means of messages exchanged between the
cooperating processes. Example: a chat program used on the Web.
 Useful for exchanging smaller amounts of data.
 Easier to implement than shared memory for intercomputer communication.
 Typically implemented using system calls, and thus requires the more
time-consuming involvement of the OS kernel.
 A message-passing facility provides two operations:
 send(message)
 receive(message)
 The message size is either fixed or variable.
 If P and Q wish to communicate, they need to:
 establish a communication link between them
 exchange messages via send/receive
 Implementation of the communication link:
 physical (e.g., shared memory, hardware bus, network)
 logical (e.g., direct or indirect, synchronous or asynchronous, automatic or
explicit buffering)
3.7.2.1 Methods of Message Passing
 There are two primary message-passing methods:
1. Direct communication: the sending and receiving processes must explicitly
name each other. Each process knows exactly with whom it is communicating.
 Characteristics:
 Tight Coupling: Processes are aware of each other’s identity.
 Simplicity: Directly send and receive messages using system calls.
2. Indirect communication: processes do not communicate directly with one
another. Instead, messages are sent to and received from intermediary data
structures (mailboxes or message queues).
 Characteristics:
• Loose Coupling: Sender and receiver do not need to know each other's
identities.
• Flexibility: Multiple senders and receivers can interact with a common
mailbox.
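Indirect communication can be sketched with POSIX message queues, where the queue plays the role of the mailbox: sender and receiver name only the queue, never each other (Linux-style sketch, error handling omitted; the name /demo_mailbox is arbitrary).

    /* Link with -lrt on Linux. */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <mqueue.h>
    #include <sys/wait.h>

    #define QUEUE_NAME "/demo_mailbox"   /* arbitrary mailbox name for this sketch */

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
        mqd_t mq = mq_open(QUEUE_NAME, O_CREAT | O_RDWR, 0600, &attr);

        if (fork() == 0) {                        /* child: the sender */
            mq_send(mq, "ping", 5, 0);            /* deposit a message into the mailbox */
            return 0;
        }
        char buf[64];                             /* must be >= mq_msgsize */
        mq_receive(mq, buf, sizeof buf, NULL);    /* blocking receive from the mailbox */
        printf("received: %s\n", buf);

        wait(NULL);
        mq_close(mq);
        mq_unlink(QUEUE_NAME);                    /* remove the mailbox */
        return 0;
    }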
3.7.2.2 Message-Passing Synchronization
 When using message passing, synchronization mechanisms
ensure that senders and receivers operate in a coordinated
manner. Two common modes are:
1. Blocking is considered synchronous
 Blocking send has the sender block until the message is
received by the receiving process or mailbox.
 Blocking receive has the receiver block until a message is
available
2. Non-blocking is considered asynchronous
 Non-blocking send has the sender send the message and
continue
 Non-blocking receive has the receiver receive a valid
message or null
3.7.2.3 Message Buffering in Message Passing
 Queue of messages attached to the link to temporarily store
messages; implemented in one of three ways
1. Zero capacity – The system does not store the message; it must be
transferred directly from the sender to the receiver.
• The sender is forced to wait until the receiver is ready to receive,
ensuring synchronization.
2. Bounded capacity – A fixed-size buffer is available for storing
messages. Balances resource constraints with the need for
asynchronous operation.
• If the buffer is full, the sender may block or be required to handle
the situation (e.g., retry later).
3. Unbounded capacity – The system allows the buffer to grow
dynamically to hold any number of messages. Can lead to high
memory usage if messages accumulate faster than they are
processed.
• The sender can always deposit a message without waiting
Client-Server Communication
 Client-server communication is a model in which clients request services,
and servers provide responses. It is used in distributed computing
environments.
 Although client-server communication often involves network protocols, it is
another form of IPC where the processes are distributed across different
systems. Common mechanisms include:
1. Sockets
2. Remote Procedure Calls
3. Remote Method Invocation (Java)
Sockets

 A socket is defined as an endpoint for communication.
 Sockets allow communication over a network using TCP (reliable,
connection-oriented) or UDP (faster, connectionless).
 A socket is identified by the concatenation of an IP address and a port:
 The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
 Communication takes place between a pair of sockets.
Socket Communication
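As a minimal sketch, a TCP client creates a socket and connects it to the IP address and port from the example above (purely illustrative: no server actually listens at that address, so connect() here would simply fail; error handling omitted):

    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);   /* create a TCP endpoint */

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(1625);              /* port 1625, as in the example */
        inet_pton(AF_INET, "161.25.19.8", &addr.sin_addr);

        /* connect() pairs our local socket with the remote one. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0) {
            const char *msg = "hello";
            write(fd, msg, strlen(msg));            /* send bytes over the connection */
        }
        close(fd);
        return 0;
    }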
Remote Procedure Calls

 Remote procedure call (RPC) abstracts procedure calls between
processes on networked systems.
 RPC allows a process to execute a function on a remote system as
if it were local.
 Stubs – client-side proxy for the actual procedure on the server.
 The client-side stub locates the server and marshals the
parameters.
 The server-side stub receives this message, unpacks the
marshalled parameters, and performs the procedure on the server.
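A toy sketch of the stub idea: the client-side stub marshals an operation code and parameters into a machine-independent message, and the server-side stub unpacks and performs it. The names OP_ADD, marshal_add(), and dispatch() are hypothetical, and the network transfer is skipped by calling the server stub directly:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <arpa/inet.h>   /* htonl/ntohl: machine-independent byte order */

    /* Hypothetical remote procedure: add(a, b). */
    enum { OP_ADD = 1 };

    /* Client-side stub: marshals the operation id and parameters into a message. */
    static size_t marshal_add(uint8_t *buf, int32_t a, int32_t b) {
        uint32_t words[3] = { htonl(OP_ADD), htonl((uint32_t)a), htonl((uint32_t)b) };
        memcpy(buf, words, sizeof words);
        return sizeof words;       /* in real RPC this buffer is sent to the server */
    }

    /* Server-side stub: unpacks the parameters and performs the procedure. */
    static int32_t dispatch(const uint8_t *buf) {
        uint32_t words[3];
        memcpy(words, buf, sizeof words);
        if (ntohl(words[0]) == OP_ADD)
            return (int32_t)ntohl(words[1]) + (int32_t)ntohl(words[2]);
        return -1;
    }

    int main(void) {
        uint8_t msg[64];
        marshal_add(msg, 2, 3);    /* client stub builds the request */
        printf("remote add returned %d\n", dispatch(msg));  /* server stub runs it */
        return 0;
    }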
Remote Method Invocation

 Remote Method Invocation (RMI) is a Java mechanism similar to
RPCs.
 RMI allows a Java program on one machine to invoke a method on
a remote object.
3.8 Threads
 A thread is the smallest unit of execution within a process. A process can
contain multiple threads, all of which share the same memory space but
execute independently.
 Unlike a process, a thread shares resources (memory, files, etc.) with other
threads in the same process.
 Key Characteristics of Threads:
 Lighter than processes: Threads share the same process
space, reducing memory overhead.
 Faster context switching: Switching between threads is
quicker than switching between processes.
 Concurrency & Parallelism: Threads allow multiple operations
within the same process to run concurrently.
 Shared Resources: Threads within the same process can
share data without IPC mechanisms.
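The last point can be seen in a short POSIX threads sketch (compile with -pthread): a second thread writes a global variable, and the main thread reads it directly, with no IPC mechanism involved:

    #include <stdio.h>
    #include <pthread.h>

    int shared_value = 0;              /* lives in the process's single address space */

    void *worker(void *arg) {
        shared_value = 42;             /* visible to every thread in the process */
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);  /* second thread, same address space */
        pthread_join(tid, NULL);                   /* wait for it to finish */
        printf("main thread sees %d\n", shared_value);  /* no IPC needed */
        return 0;
    }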
Benefits of Threads

 Responsiveness – may allow continued execution if part of a
process is blocked; especially important for user interfaces.
 Resource Sharing – threads share the resources of their process,
which is easier than shared memory or message passing.
 Economy – cheaper than process creation; thread switching has
lower overhead than context switching between processes.
 Scalability – a process can take advantage of multiprocessor
architectures.
Types of Threads
1. User-Level Threads:
 Managed by user-level libraries or thread libraries.
 The operating system is unaware of their existence.
 Can be implemented on any OS, but may suffer from poor performance
when one thread blocks (since the OS treats it as a single process).
2. Kernel-Level Threads:
 Managed directly by the operating system.
 The OS schedules and manages the threads independently.
 More efficient in utilizing multi-core processors, as each thread can be
scheduled independently.
3. Hybrid Threads:
 Combines both user-level and kernel-level threads.
 For example, user-level threads are managed by libraries but are mapped
onto kernel threads, which the kernel schedules to improve performance.
Cont…
 Threads can be Single-threading or multithreading depending on how many
threads of execution are utilized within a process.
1. In a single-threaded process, there is only one thread of execution. The process
executes one instruction at a time in a sequential manner.
 Characteristics:
 Simplicity: Easier to design and debug, since there is no concurrent execution to
manage.
 Resource Sharing: All operations are performed in a single sequence, eliminating
the need for synchronization between threads.
2. Multithreading allows a process to have multiple threads executing concurrently.
Each thread can perform different tasks or parts of a task simultaneously.
 Characteristics:
 Concurrency: Threads can run concurrently, allowing for better utilization of
multi-core processors.
 Responsiveness: Multithreading can keep the user interface responsive by
offloading time-consuming tasks to background threads.
 Shared Resources: Threads share the process’s memory space, enabling efficient
data sharing but also requiring careful synchronization to avoid race conditions.
Multithreading Models

 Multithreading models define how user threads map to kernel threads.
The three main models are:
1. Many-to-One
2. One-to-One
3. Many-to-Many
Many-to-One
 Many user-level threads are mapped to a single kernel thread.
 Efficient, but lacks parallelism (one blocking thread blocks all threads).
 Examples: Solaris Green Threads, GNU Portable Threads

One-to-One
 Each user thread maps to a separate kernel thread.
 Supports true parallelism on multi-core systems.
 Disadvantage: Creating too many threads can overwhelm the system.
 Examples: Windows, Linux, and POSIX threads (pthreads)

Many-to-Many
 Allows many user-level threads to be mapped to many kernel threads.
 Allows the operating system to create a sufficient number of kernel threads.
 Balances flexibility and efficiency by allowing multiple threads without
excessive kernel management.
 Examples: Solaris, modern Linux threading models
Thread Creation and Termination

 Thread Creation:
 In many operating systems, threads are created using thread
libraries or system calls like pthread_create() in UNIX-based
systems.
 Thread Termination:
 Threads can terminate either when they complete their task or
when explicitly terminated by the process or the operating
system.
 The pthread_exit() function can be used in POSIX systems to
terminate a thread.
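A POSIX sketch of both operations (compile with -pthread): pthread_create() starts the thread and pthread_exit() terminates it, here handing a result pointer back to pthread_join():

    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>

    void *task(void *arg) {
        int *result = malloc(sizeof *result);
        *result = 7;
        pthread_exit(result);          /* terminate this thread, returning a value */
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, task, NULL);   /* create the thread */

        void *ret;
        pthread_join(tid, &ret);       /* wait for termination and collect its result */
        printf("thread returned %d\n", *(int *)ret);
        free(ret);
        return 0;
    }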
Thread Scheduling
 Thread Scheduling: Just like processes, threads need to be scheduled
to make efficient use of CPU resources.
1. Preemptive Scheduling: The operating system can preemptively
pause a thread and switch to another one (common in kernel-level
thread scheduling).
2. Cooperative Scheduling: Threads voluntarily yield control to other
threads (common in user-level threads).
3. Thread Priorities: Threads may have different priorities. Higher-
priority threads are scheduled before lower-priority threads.
Thread Synchronization
 Race Condition: A race condition occurs when two or more
threads attempt to access shared resources simultaneously,
leading to inconsistent or incorrect results.
 Mutexes (Mutual Exclusion): A mutex is a locking mechanism
that prevents other threads from accessing a critical section while
one thread is executing it.
 Semaphores: A semaphore is a signaling mechanism used to
control access to shared resources by multiple threads.
 Condition Variables: Used in conjunction with mutexes to allow
threads to wait for certain conditions before proceeding.
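A classic pthreads sketch (compile with -pthread): two threads increment a shared counter, and the mutex turns each increment into a critical section, preventing the race condition described above:

    #include <stdio.h>
    #include <pthread.h>

    long counter = 0;                               /* shared resource */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *increment(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);              /* enter the critical section */
            counter++;                              /* without the lock, this races */
            pthread_mutex_unlock(&lock);            /* leave the critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);         /* always 200000 with the mutex */
        return 0;
    }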


Thread cancellation

 Terminating a thread before it has finished.
 Two general approaches:
 Asynchronous cancellation terminates the target thread
immediately
 Deferred cancellation allows the target thread to periodically
check if it should be cancelled
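Deferred cancellation is the pthreads default and can be sketched as follows (compile with -pthread): pthread_cancel() requests cancellation, and the target thread honours it at its next cancellation point, such as pthread_testcancel():

    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>

    void *worker(void *arg) {
        /* Deferred cancellation: the thread is cancelled only at
           cancellation points such as pthread_testcancel(). */
        while (1) {
            /* ... do a unit of work ... */
            pthread_testcancel();      /* periodically check for a pending cancel */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        sleep(1);                      /* let the worker run briefly */
        pthread_cancel(tid);           /* request cancellation of the target thread */
        pthread_join(tid, NULL);       /* wait until it actually terminates */
        printf("worker cancelled\n");
        return 0;
    }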
Windows XP Threads

 Implements the one-to-one mapping.
 Each thread contains:
 A thread id
 Register set
 Separate user and kernel stacks
 Private data storage area
 The register set, stacks, and private storage area are known as the
context of the thread.
 The primary data structures of a thread include:
 ETHREAD (executive thread block)
 KTHREAD (kernel thread block)
 TEB (thread environment block)
Linux Threads

 Linux refers to them as tasks rather than threads.
 Thread creation is done through the clone() system call.
 clone() allows a child task to share the address space of the parent
task (process).
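A minimal Linux-specific sketch of clone() with the CLONE_VM flag, so the child task shares the parent's address space (error handling omitted):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sched.h>
    #include <sys/wait.h>

    int shared = 0;

    static int child_fn(void *arg) {
        shared = 1;                    /* visible to the parent because of CLONE_VM */
        return 0;
    }

    int main(void) {
        const size_t stack_size = 1024 * 1024;
        char *stack = malloc(stack_size);         /* the child task needs its own stack */

        /* CLONE_VM makes the child task share the parent's address space;
           Linux threading libraries build threads on top of clone() this way. */
        pid_t pid = clone(child_fn, stack + stack_size,   /* stack grows downward */
                          CLONE_VM | SIGCHLD, NULL);
        waitpid(pid, NULL, 0);
        printf("shared = %d\n", shared);          /* prints 1: memory was shared */
        free(stack);
        return 0;
    }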
Java Threads

 Java threads are managed by the JVM.
 Java threads may be created by:
 Extending the Thread class
 Implementing the Runnable interface
Java Thread States
End of Chapter 3
