Chapter 4: Threads & Concurrency
§ Overview
§ Multicore Programming
§ Multithreading Models
§ Thread Libraries
§ Implicit Threading
§ Threading Issues
§ Operating System Examples
Objectives
§ Identify the basic components of a thread, and contrast threads
and processes
§ Describe the benefits and challenges of designing
multithreaded applications
§ Illustrate different approaches to implicit threading including
thread pools, fork-join, and Grand Central Dispatch
§ Describe how the Windows and Linux operating systems
represent threads
§ Design multithreaded applications using the Pthreads, Java,
and Windows threading APIs
Motivation
§ Most modern applications are multithreaded
§ Threads run within application
§ Multiple tasks within the application can be implemented by
separate threads
• Update display
• Fetch data
• Spell checking
• Answer a network request
§ Process creation is heavy-weight while thread creation is
light-weight
§ Can simplify code, increase efficiency
§ Kernels are generally multithreaded
Single and Multithreaded Processes
Multithreaded Server Architecture
Benefits
§ Responsiveness – may allow continued execution if part of
process is blocked, especially important for user interfaces
§ Resource Sharing – threads share resources of process, easier
than shared memory or message passing
§ Economy – cheaper than process creation, thread switching
lower overhead than context switching
§ Scalability – process can take advantage of multicore
architectures
Multicore Programming
§ Multicore and multiprocessor systems put pressure on programmers;
challenges include:
• Dividing activities
• Balance
• Data splitting
• Data dependency
• Testing and debugging
§ Parallelism implies a system can perform more than one task
simultaneously
§ Concurrency supports more than one task making progress
• Single processor / core, scheduler providing concurrency
Concurrency vs. Parallelism
§ Concurrent execution on single-core system:
§ Parallelism on a multi-core system:
Multicore Programming
§ Types of parallelism
• Data parallelism – distributes subsets of the same data
across multiple cores, same operation on each
• Task parallelism – distributing threads across cores, each
thread performing unique operation
Data and Task Parallelism
User Threads and Kernel Threads
§ User threads - management done by user-level threads library
§ Three primary thread libraries:
• POSIX Pthreads
• Windows threads
• Java threads
§ Kernel threads - Supported by the Kernel
§ Examples – virtually all general-purpose operating systems, including:
• Windows
• Linux
• Mac OS X
• iOS
• Android
User and Kernel Threads
Multithreading Models
§ Many-to-One
§ One-to-One
§ Many-to-Many
Many-to-One
§ Many user-level threads mapped to single kernel thread
§ One thread blocking causes all to block
§ Multiple threads may not run in parallel on a multicore system because
only one may be in kernel at a time
§ Few systems currently use this model
§ Examples:
• Solaris Green Threads
• GNU Portable Threads
One-to-One
§ Each user-level thread maps to kernel thread
§ Creating a user-level thread creates a kernel thread
§ More concurrency than many-to-one
§ Number of threads per process sometimes restricted due to overhead
§ Examples
• Windows
• Linux
Many-to-Many Model
§ Allows many user level threads to be mapped to many kernel threads
§ Allows the operating system to create a sufficient number of kernel
threads
§ Windows with the ThreadFiber package
§ Otherwise not very common
Two-level Model
§ Similar to M:M, except that it allows a user thread to be bound to a
kernel thread
Thread Libraries
§ Thread library provides programmer with API for creating and
managing threads
§ Two primary ways of implementing
• Library entirely in user space
• Kernel-level library supported by the OS
Java Threads
§ Java threads are managed by the JVM
§ Typically implemented using the threads model provided by underlying
OS
§ Java threads may be created by:
• Extending Thread class
• Implementing the Runnable interface
• Standard practice is to implement Runnable interface
Java Threads
Implementing the Runnable interface, creating a thread, and waiting on (joining) a thread:
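A minimal sketch covering all three steps (the Task class name and the printed message are illustrative):

// Implementing the Runnable interface
class Task implements Runnable {
    public void run() {
        System.out.println("I am a thread.");
    }
}

public class Driver {
    public static void main(String[] args) throws InterruptedException {
        // Creating (and starting) a thread
        Thread worker = new Thread(new Task());
        worker.start();

        // Waiting on (joining) the thread
        worker.join();
    }
}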
Java Executor Framework
§ Rather than explicitly creating threads, Java also allows thread creation to be
organized around the Executor interface:
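The java.util.concurrent.Executor interface declares a single method:

public interface Executor {
    void execute(Runnable command);
}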
§ The Executor is used as follows:
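A minimal usage sketch (Task is the Runnable from the earlier sketch):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorDriver {
    public static void main(String[] args) {
        ExecutorService service = Executors.newSingleThreadExecutor();
        service.execute(new Task());   // the executor runs the task in a worker thread
        service.shutdown();            // let the worker thread exit when the task completes
    }
}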
Java Executor Framework
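A task that returns a result can implement Callable rather than Runnable; a sketch (the Summation class and its upper bound are illustrative):

import java.util.concurrent.Callable;

// Callable<Integer>: like Runnable, but call() returns a value and may throw
class Summation implements Callable<Integer> {
    private final int upper;

    public Summation(int upper) { this.upper = upper; }

    public Integer call() {
        int sum = 0;
        for (int i = 1; i <= upper; i++)
            sum += i;
        return sum;
    }
}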
Java Executor Framework (Cont.)
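Submitting the Callable to an ExecutorService yields a Future for retrieving the result (a sketch reusing the Summation class above):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SummationDriver {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        Future<Integer> result = pool.submit(new Summation(100));  // runs asynchronously
        System.out.println("sum = " + result.get());               // blocks until the result is ready

        pool.shutdown();
    }
}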
Implicit Threading
§ Growing in popularity as the number of threads increases and program
correctness becomes more difficult with explicit threads
§ Creation and management of threads done by compilers and run-time
libraries rather than programmers
§ Five methods explored
• Thread Pools
• Fork-Join
• OpenMP
• Grand Central Dispatch
• Intel Threading Building Blocks
Thread Pools
§ Create a number of threads in a pool where they await work
§ Advantages:
• Usually slightly faster to service a request with an existing thread
than create a new thread
• Allows the number of threads in the application(s) to be bound to
the size of the pool
• Separating the task to be performed from the mechanics of creating the
task allows different strategies for running the task
– e.g., tasks could be scheduled to run periodically
§ Windows API supports thread pools:
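A minimal sketch using the legacy QueueUserWorkItem() call (the PoolFunction name and the Sleep() at the end are illustrative):

#include <windows.h>
#include <stdio.h>

/* work performed by a thread drawn from the system thread pool */
DWORD WINAPI PoolFunction(LPVOID param) {
    (void)param;
    printf("hello from a pool thread\n");
    return 0;
}

int main(void) {
    /* queue the work item; a pool thread runs PoolFunction */
    QueueUserWorkItem(PoolFunction, NULL, WT_EXECUTEDEFAULT);

    Sleep(1000);   /* crude wait so the pool thread gets a chance to run */
    return 0;
}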
Java Thread Pools
§ Three factory methods for creating thread pools in the Executors class:
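A sketch of the three factory methods (each returns an ExecutorService; the pool size of 4 is illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolFactories {
    public static void main(String[] args) {
        ExecutorService single = Executors.newSingleThreadExecutor(); // pool containing one thread
        ExecutorService fixed  = Executors.newFixedThreadPool(4);     // fixed number of worker threads
        ExecutorService cached = Executors.newCachedThreadPool();     // grows and shrinks with demand

        single.shutdown(); fixed.shutdown(); cached.shutdown();
    }
}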
Java Thread Pools (Cont.)
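A usage sketch that submits several tasks to a fixed pool and then shuts it down (pool size, task count, and the reuse of the earlier Task class are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolDriver {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 10; i++)
            pool.execute(new Task());   // Task is the Runnable from the earlier sketch

        pool.shutdown();                // no new tasks accepted; pool exits when queued work is done
    }
}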
Fork-Join Parallelism
§ Multiple threads (tasks) are forked, and then joined.
Fork-Join Parallelism
§ General algorithm for the fork-join strategy: divide the problem, fork tasks for
the subproblems, join the forked tasks, and combine their results (see the Java
sketches that follow)
Fork-Join Parallelism in Java
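A minimal sketch of a summation task (the SumTask name and THRESHOLD value are illustrative):

import java.util.concurrent.RecursiveTask;

// RecursiveTask<Integer>: a fork-join task whose compute() returns a result
public class SumTask extends RecursiveTask<Integer> {
    static final int THRESHOLD = 1000;   // below this size, sum sequentially

    private final int[] array;
    private final int begin, end;

    public SumTask(int[] array, int begin, int end) {
        this.array = array;
        this.begin = begin;
        this.end = end;
    }

    @Override
    protected Integer compute() {
        if (end - begin < THRESHOLD) {
            int sum = 0;
            for (int i = begin; i < end; i++)
                sum += array[i];
            return sum;
        }
        int mid = (begin + end) / 2;
        SumTask left  = new SumTask(array, begin, mid);
        SumTask right = new SumTask(array, mid, end);

        left.fork();                     // run the left half asynchronously
        int rightSum = right.compute();  // compute the right half in this thread
        return left.join() + rightSum;   // wait for the left half, then combine
    }
}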
Fork-Join Parallelism in Java
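Submitting the task to a ForkJoinPool (a sketch reusing the SumTask class above; the array contents are illustrative):

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;

public class ForkJoinDriver {
    public static void main(String[] args) {
        int[] array = new int[100_000];
        Arrays.fill(array, 1);

        ForkJoinPool pool = new ForkJoinPool();   // by default, one worker per core
        int sum = pool.invoke(new SumTask(array, 0, array.length));
        System.out.println("sum = " + sum);
    }
}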
Fork-Join Parallelism in Java
§ The ForkJoinTask is an abstract base class
§ RecursiveTask and RecursiveAction classes extend
ForkJoinTask
§ RecursiveTask returns a result (via the return value from the
compute() method)
§ RecursiveAction does not return a result
OpenMP
§ Set of compiler directives and
an API for C, C++,
FORTRAN
§ Provides support for parallel
programming in shared-
memory environments
§ Identifies parallel regions –
blocks of code that can run in
parallel
#pragma omp parallel
Create as many threads as there
are cores
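A minimal sketch of a parallel region (compile with an OpenMP-aware compiler, e.g., gcc -fopenmp):

#include <omp.h>
#include <stdio.h>

int main(void) {
    /* OpenMP creates a team of threads, by default one per core */
    #pragma omp parallel
    {
        printf("I am a parallel region.\n");
    }
    return 0;
}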
§ Run the for loop in parallel
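A sketch that divides the loop iterations among the team of threads (array size is illustrative; initialization of a and b is elided):

#include <omp.h>

#define N 10000

int main(void) {
    static double a[N], b[N], c[N];
    /* ... initialize a and b ... */

    /* iterations of the loop are split across the threads */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    return 0;
}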
Threading Issues
§ Semantics of fork() and exec() system calls
§ Signal handling
• Synchronous and asynchronous
§ Thread cancellation of target thread
• Asynchronous or deferred
§ Thread-local storage
§ Scheduler Activations
Semantics of fork() and exec()
§ Does fork() duplicate only the calling thread or all threads?
• Some UNIXes have two versions of fork
§ exec() usually works as normal – it replaces the entire running process,
including all threads
Signal Handling
§ Signals are used in UNIX systems to notify a process that a particular
event has occurred.
§ A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled by one of two signal handlers:
1. default
2. user-defined
§ Every signal has a default handler that the kernel runs when handling the
signal
• A user-defined signal handler can override the default (see the sketch below)
• For a single-threaded process, the signal is delivered to the process
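A minimal sketch of installing a user-defined handler with sigaction() (the handler body and the choice of SIGINT are illustrative):

#include <signal.h>
#include <unistd.h>

/* user-defined handler overriding the default action for SIGINT */
static void handler(int sig) {
    (void)sig;
    write(STDOUT_FILENO, "caught SIGINT\n", 14);   /* async-signal-safe output */
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);   /* install the user-defined handler */

    pause();                        /* wait until a signal is delivered */
    return 0;
}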
Signal Handling (Cont.)
§ Where should a signal be delivered in a multithreaded process?
• Deliver the signal to the thread to which the signal applies
• Deliver the signal to every thread in the process
• Deliver the signal to certain threads in the process
• Assign a specific thread to receive all signals for the process
Thread Cancellation
§ Terminating a thread before it has finished
§ Thread to be canceled is target thread
§ Two general approaches:
• Asynchronous cancellation terminates the target thread
immediately
• Deferred cancellation allows the target thread to periodically
check if it should be cancelled
§ Pthread code to create and cancel a thread:
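A minimal sketch (the worker function is illustrative; it spins at a cancellation point until cancelled). Compile with -lpthread:

#include <pthread.h>

/* target thread: runs until it is cancelled */
void *worker(void *arg) {
    (void)arg;
    while (1)
        pthread_testcancel();   /* deferred cancellation: an explicit cancellation point */
    return NULL;
}

int main(void) {
    pthread_t tid;

    pthread_create(&tid, NULL, worker, NULL);  /* create the target thread */
    /* ... do some work ... */
    pthread_cancel(tid);                       /* request cancellation */
    pthread_join(tid, NULL);                   /* wait for the target thread to terminate */
    return 0;
}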
Thread Cancellation (Cont.)
§ Invoking thread cancellation requests cancellation, but actual
cancellation depends on thread state
§ If thread has cancellation disabled, cancellation remains pending until
thread enables it
§ Default type is deferred
• Cancellation occurs only when the thread reaches a cancellation point
– e.g., pthread_testcancel()
– then the cleanup handler is invoked
§ On Linux systems, thread cancellation is handled through signals
Thread Cancellation in Java
§ Deferred cancellation uses the interrupt() method, which sets the
interrupted status of a thread.
§ A thread can then check to see if it has been interrupted:
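A sketch (the Worker class and the sleep interval are illustrative):

class Worker implements Runnable {
    public void run() {
        // deferred cancellation: periodically check the interrupted status
        while (!Thread.currentThread().isInterrupted()) {
            // ... do some work ...
        }
        System.out.println("Interrupted: cleaning up and exiting.");
    }
}

public class CancelDriver {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Worker());
        worker.start();

        Thread.sleep(100);     // let the worker run briefly
        worker.interrupt();    // request (deferred) cancellation
        worker.join();
    }
}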
Thread-Local Storage
§ Thread-local storage (TLS) allows each thread to have its own copy
of data
§ Useful when you do not have control over the thread creation process
(i.e., when using a thread pool)
§ Different from local variables
• Local variables visible only during single function invocation
• TLS visible across function invocations
§ Similar to static data
• TLS is unique to each thread
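A minimal Java sketch using ThreadLocal (the threadValue name and the use of the thread id as the stored value are illustrative):

public class TlsExample {
    // each thread sees and updates its own copy of this value
    private static final ThreadLocal<Integer> threadValue =
            ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            threadValue.set((int) Thread.currentThread().getId());
            System.out.println(Thread.currentThread().getName()
                    + " sees " + threadValue.get());
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
    }
}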
Scheduler Activations
§ Both M:M and two-level models require communication to maintain the
appropriate number of kernel threads allocated to the application
§ Typically an intermediate data structure is used between user and kernel
threads – the lightweight process (LWP)
• Appears to be a virtual processor on which the process can schedule a
user thread to run
• Each LWP is attached to a kernel thread
• How many LWPs to create?
§ Scheduler activations provide upcalls – a communication mechanism from
the kernel to the upcall handler in the thread library
§ This communication allows an application to maintain the correct number
of kernel threads
Windows Threads
§ Windows API – primary API for Windows applications
§ Implements the one-to-one mapping, kernel-level
§ Each thread contains
• A thread id
• Register set representing state of processor
• Separate user and kernel stacks for when thread runs in user mode
or kernel mode
• Private data storage area used by run-time libraries and dynamic
link libraries (DLLs)
§ The register set, stacks, and private storage area are known as the
context of the thread
Windows Threads (Cont.)
§ The primary data structures of a thread include:
• ETHREAD (executive thread block) – includes pointer to process
to which thread belongs and to KTHREAD, in kernel space
• KTHREAD (kernel thread block) – scheduling and synchronization
info, kernel-mode stack, pointer to TEB, in kernel space
• TEB (thread environment block) – thread id, user-mode stack,
thread-local storage, in user space
Windows Threads Data Structures
Linux Threads
§ Linux refers to threads as tasks
§ Thread creation is done through clone() system call
§ clone() allows a child task to share the address space of the
parent task (process)
• Flags control behavior
§ struct task_struct points to process data structures (shared or
unique)
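A minimal sketch (the stack size and flag set are illustrative; these flags make the child task share the parent's address space, filesystem information, open files, and signal handlers, i.e., behave like a thread):

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

#define STACK_SIZE (1024 * 1024)

/* entry point of the new task */
static int task_fn(void *arg) {
    (void)arg;
    printf("child task sharing the parent's address space\n");
    return 0;
}

int main(void) {
    char *stack = malloc(STACK_SIZE);

    /* the stack grows downward, so pass the high end of the allocation */
    pid_t pid = clone(task_fn, stack + STACK_SIZE,
                      CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                      NULL);

    waitpid(pid, NULL, 0);   /* SIGCHLD on exit lets the parent wait normally */
    free(stack);
    return 0;
}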