Chapter 4: Threads & Concurrency
Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne ©2018
Objectives
To introduce the notion of a thread
To examine issues related to multithreaded programming
To cover operating system support for threads
Concurrency and Parallelism
Concurrency and Parallelism
Parallelism implies that a system can perform more than one task
simultaneously
Concurrency supports more than one task making progress
Single processor / core, scheduler providing concurrency
Multi-core or multiprocessor systems put pressure on programmers; challenges include:
Dividing activities
Balance
Data splitting
Data dependency
Testing and debugging
Concurrency vs. Parallelism
Concurrent execution on single-core system:
Parallelism on a multi-core system:
Types of Parallelism
Data parallelism – distributes subsets of the same data across multiple
cores, same operation on each
Task parallelism – distributing threads across cores, each thread
performing unique operation
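As a rough illustration of data parallelism (a hypothetical sketch, not from the slides), two Pthreads can run the same operation on different halves of one array:

```c
/* Data parallelism sketch: two threads perform the same operation
 * (summing) on different subsets of the same data.
 * Compile with: gcc sum.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static long partial[2];                 /* one result slot per worker thread */

static void *sum_half(void *arg) {
    long id = (long)arg;                /* 0 sums the first half, 1 the second */
    long s = 0;
    for (int i = id * (N / 2); i < (id + 1) * (N / 2); i++)
        s += data[i];
    partial[id] = s;
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long id = 0; id < 2; id++)
        pthread_create(&t[id], NULL, sum_half, (void *)id);
    for (int id = 0; id < 2; id++)
        pthread_join(t[id], NULL);
    printf("sum = %ld\n", partial[0] + partial[1]);
    return 0;
}
```

Task parallelism, by contrast, would give each thread a different function to run rather than a different slice of the data.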
Amdahl’s Law
Identifies performance gains from adding additional cores to an
application that has both serial and parallel components
If S is the serial portion and N is the number of processing cores, the maximum speedup is
speedup ≤ 1 / (S + (1 − S) / N)
That is, if an application is 75% parallel / 25% serial, moving from 1 to 2 cores results in a speedup of 1.6 times
As N approaches infinity, speedup approaches 1 / S
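A quick sketch (just arithmetic, with a hypothetical helper name) that evaluates the bound for the 75% parallel example:

```c
/* Amdahl's law: evaluate 1 / (S + (1 - S) / N) for the
 * 75% parallel / 25% serial example above. */
#include <stdio.h>

static double amdahl_speedup(double serial, int cores) {
    return 1.0 / (serial + (1.0 - serial) / cores);
}

int main(void) {
    printf("N = 2: %.2f\n", amdahl_speedup(0.25, 2));   /* prints 1.60 */
    printf("N = 8: %.2f\n", amdahl_speedup(0.25, 8));   /* 2.91; approaches 1/S = 4 as N grows */
    return 0;
}
```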
Introduction to Threads and
Multithreading
What is a thread?
A traditional (single-threaded) process has a single thread of control.
A thread is a basic unit of CPU utilization
Also called a “lightweight process” (LWP) because of its low creation overhead
The thread shares with other threads (belonging to the same process) its
code section, data section, and other OS resources, such as open files.
A process with multiple threads can do more than one task at a time.
What resources are used when a thread is created? How do they differ
from those used when a process is created?
What two advantages do threads have over multiple processes? What
major disadvantage do they have?
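For illustration, a minimal Pthreads sketch (not from the text) of two threads in one process sharing the data section while each has its own stack:

```c
/* Two threads of the same process share the global counter below,
 * but each runs on its own stack. Hypothetical example.
 * Compile with: gcc threads.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;            /* lives in the shared data section */

static void *work(void *arg) {
    shared_counter++;                     /* both threads see the same variable;
                                             real code needs a mutex (Chapter 6) */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, work, NULL);   /* far cheaper than fork() */
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);
    return 0;
}
```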
Single and Multithreaded (MT)
Processes
Examples of MT Applications
Web browser
Word processor
Web server
Problem?
Two possible solutions
Suggest another application that would benefit from the use of threads,
and an application that would not.
Benefits of MT Applications
Responsiveness – may allow continued execution if part of
process is blocked, especially important for user interfaces
Resource Sharing – threads share resources of process, easier
than shared memory or message passing
Economy – cheaper than process creation, thread switching
lower overhead than context switching
Scalability – process can take advantage of multiprocessor
architectures
Multithreaded Server Architecture
User Threads and Kernel Threads
User Threads
Implemented by a thread library at the user level (above the kernel)
The library provides support for thread creation, scheduling, and
management.
The programmer of the library writes code to synchronize threads and to
context switch them, and they all run in one process.
Kernel does not provide any support; it is unaware that user-level
threads are even running.
Fast to create and manage (i.e., high efficiency), mainly because the
kernel is not involved.
What if the kernel is single-threaded?
Three primary thread libraries: POSIX Pthreads, Windows threads, Java
threads
Kernel Threads
Supported directly by the OS
The kernel performs thread creation, scheduling, and management
in kernel space.
Slower to create and manage than user threads, with more overhead in the kernel.
If a thread performs a blocking system call, the kernel can schedule
another thread in the application for execution.
In a parallel environment, the kernel can schedule threads on different
processors.
Examples – virtually all general-purpose operating systems, including Windows, Solaris, Linux, Mac OS X
Multithreading Models
MT Models
Many-to-One
One-to-One
Many-to-Many
Many-to-One MT
Many user-level threads mapped to
single kernel thread
One thread blocking causes all to block
Multiple threads may not run in parallel on a multi-core system because only one may be in the kernel at a time
Used in systems that do not support
kernel threads
Few systems currently use this model
Examples:
Solaris Green Threads
GNU Portable Threads
One-to-One MT
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
allows a thread to run when another thread makes a blocking
system call
allows multiple threads to run in parallel on multiprocessors
Examples:
Windows
Linux
Solaris 9 and later
Many-to-Many MT
Allows many user-level threads to be
mapped to many kernel threads
Allows the operating system to create
a sufficient number of kernel threads
Addresses the shortcomings of the many-to-one and one-to-one models
Examples:
Solaris prior to version 9
Windows with the ThreadFiber
package
Two-Level MT
Similar to the many-to-many model, except that it also allows a user thread to be bound to a kernel thread
Examples
IRIX
HP-UX
Tru64 UNIX
Solaris 8 and earlier
MT Models: Summary
Many-to-one
The developer can create as many user threads as they wish
But true concurrency is not gained because the kernel can schedule only one kernel thread at a time
One-to-one
Greater concurrency than many-to-one
But the developer has to be careful (and in some instances may be limited) not to create too many threads within an application
Many-to-many
Does not suffer from the above shortcomings
Developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor
End of Chapter 4
Multithreading Issues
MT Issues
Signal handling
Synchronous and asynchronous
Thread cancellation of target thread
Asynchronous or deferred
Thread-local storage
Scheduler Activations
Signal Handling
Signals are used in UNIX systems to notify a process that a particular
event has occurred.
A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled by one of two signal handlers:
1. default
2. user-defined
Every signal has a default handler that the kernel runs when handling the signal
A user-defined signal handler can override the default
For a single-threaded process, the signal is delivered to the process
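For illustration (a hypothetical sketch, not from the slides), installing a user-defined handler that overrides the default action for SIGINT via the POSIX sigaction() interface:

```c
/* User-defined handler overriding the default action for SIGINT. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void handle_sigint(int sig) {
    /* only async-signal-safe work here; write() is safe, printf() is not */
    const char msg[] = "caught SIGINT\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = handle_sigint;       /* user-defined handler */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);        /* the default handler would terminate the process */
    pause();                             /* wait for the signal to be delivered */
    return 0;
}
```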
Signal Handling (Cont.)
Where should a signal be delivered in a multithreaded process? Options:
Deliver the signal to the thread to which the signal applies
Deliver the signal to every thread in the process
Deliver the signal to certain threads in the process
Assign a specific thread to receive all signals for the process
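As a sketch of the last option (hypothetical example, not from the slides): every thread blocks the signal, one dedicated thread waits for it with sigwait(), and pthread_kill() can direct a signal at a specific thread.

```c
/* "Assign a specific thread to receive all signals": block SIGUSR1
 * everywhere and let one dedicated thread sigwait() for it. */
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static void *signal_thread(void *arg) {
    sigset_t *set = arg;
    int sig;
    sigwait(set, &sig);                        /* synchronously receive the signal */
    printf("signal %d handled by dedicated thread\n", sig);
    return NULL;
}

int main(void) {
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &set, NULL);    /* blocked mask is inherited by new threads */

    pthread_t t;
    pthread_create(&t, NULL, signal_thread, &set);
    pthread_kill(t, SIGUSR1);                  /* deliver the signal to that specific thread */
    pthread_join(t, NULL);
    return 0;
}
```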
Thread Cancellation
Terminating a thread before it has finished
Thread to be canceled is target thread
Two general approaches:
Asynchronous cancellation terminates the target thread
immediately
Deferred cancellation allows the target thread to periodically
check if it should be cancelled
Thread Cancellation (Cont.)
Invoking thread cancellation requests cancellation, but actual
cancellation depends on thread state
If thread has cancellation disabled, cancellation remains pending until
thread enables it
On Linux systems, thread cancellation is handled through signals
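A minimal Pthreads sketch of deferred cancellation (hypothetical example): pthread_cancel() only requests cancellation, and the target thread honors it at a cancellation point such as pthread_testcancel().

```c
/* Deferred cancellation: the target thread is cancelled only when it
 * reaches a cancellation point. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    for (;;) {
        /* ... perform one unit of work ... */
        pthread_testcancel();        /* safe point at which to honor a pending request */
    }
    return NULL;                     /* never reached */
}

int main(void) {
    pthread_t target;
    pthread_create(&target, NULL, worker, NULL);
    pthread_cancel(target);          /* only requests cancellation of the target thread */
    pthread_join(target, NULL);      /* actual cancellation happens at the test point */
    printf("target thread cancelled\n");
    return 0;
}
```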
Thread-Local Storage
Thread-local storage (TLS) allows each thread to have its own copy of data
Useful when you do not have control over the thread creation process
(e.g., when using a thread pool)
Different from local variables
Local variables visible only during single function invocation
TLS visible across function invocations
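A short sketch (hypothetical, using the GCC/Clang __thread storage class; C11 spells it _Thread_local): each thread gets its own copy of the variable, and the value persists across function calls within that thread.

```c
/* Thread-local storage: one instance of tls_counter per thread,
 * visible in every function that thread calls. */
#include <pthread.h>
#include <stdio.h>

static __thread int tls_counter = 0;    /* per-thread copy, unlike a shared global */

static void bump(void) {
    tls_counter++;                       /* unlike a local variable, the value
                                            survives across function invocations */
}

static void *work(void *arg) {
    bump();
    bump();
    printf("thread %ld sees tls_counter = %d\n", (long)arg, tls_counter);  /* prints 2 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, work, (void *)1L);
    pthread_create(&t2, NULL, work, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```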
Scheduler Activations
Both many-to-many and two-level models
require communication to maintain the
appropriate number of kernel threads allocated
to the application
Typically use an intermediate data structure
between user and kernel threads – lightweight
process (LWP)
Appears to be a virtual processor on which
process can schedule user thread to run
Each LWP attached to kernel thread
How many LWPs to create?
Scheduler activations provide upcalls - a
communication mechanism from the kernel to
the upcall handler in the thread library
This communication allows an application to
maintain the correct number of kernel threads
Operating System Examples
Windows Threads
Linux Threads
Windows Threads
Windows implements the Windows API – primary API for Win
98, Win NT, Win 2000, Win XP, and Win 7
Implements the one-to-one mapping, kernel-level
Each thread contains
A thread id
Register set representing state of processor
Separate user and kernel stacks for when thread runs in
user mode or kernel mode
Private data storage area used by run-time libraries and
dynamic link libraries (DLLs)
The register set, stacks, and private storage area are known as
the context of the thread
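To illustrate the one-to-one model in the Windows API (a hypothetical sketch, Windows only), CreateThread() makes a kernel-level thread and returns its handle and thread id:

```c
/* One-to-one Windows threading sketch: one kernel thread per CreateThread() call. */
#include <windows.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID param) {
    printf("worker running, arg = %d\n", *(int *)param);
    return 0;
}

int main(void) {
    int arg = 42;
    DWORD tid;
    HANDLE h = CreateThread(NULL, 0, worker, &arg, 0, &tid);  /* thread id stored in tid */
    if (h != NULL) {
        WaitForSingleObject(h, INFINITE);   /* wait for the thread to finish ("join") */
        CloseHandle(h);
    }
    return 0;
}
```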
Windows Threads (Cont.)
The primary data structures of a thread include:
ETHREAD (executive thread block) – includes pointer to
process to which thread belongs and to KTHREAD, in
kernel space
KTHREAD (kernel thread block) – scheduling and
synchronization info, kernel-mode stack, pointer to TEB, in
kernel space
TEB (thread environment block) – thread id, user-mode
stack, thread-local storage, in user space
Windows Threads Data Structures
Linux Threads
Linux refers to them as tasks rather than threads
Thread creation is done through clone() system call
clone() allows a child task to share the address space of the
parent task (process)
Flags control behavior
struct task_struct points to process data structures
(shared or unique)
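A sketch of clone() (hypothetical example, Linux-specific): the flags decide which parts of the parent task the child shares. With the flags below the child shares the address space, filesystem information, open files, and signal handlers, so it behaves like a thread.

```c
/* clone() with thread-like sharing flags. Compile on Linux with gcc. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int shared = 0;                         /* visible to the child via CLONE_VM */

static int child_fn(void *arg) {
    shared = 42;                               /* writes into the parent's address space */
    return 0;
}

int main(void) {
    const int STACK_SIZE = 64 * 1024;
    char *stack = malloc(STACK_SIZE);          /* the child task needs its own stack */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;

    pid_t pid = clone(child_fn, stack + STACK_SIZE, flags, NULL);  /* stack grows downward */
    waitpid(pid, NULL, 0);                     /* SIGCHLD lets the parent reap the child */
    printf("shared = %d\n", shared);           /* prints 42: address space was shared */
    free(stack);
    return 0;
}
```

With a fork()-like flag set (no CLONE_VM, etc.) the same call would instead give the child its own copies, which is why Linux treats processes and threads uniformly as tasks.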