Lecture 2

Software Architecture:
Threads and Shared Memory

Prepared by Shafiq Ahmad
Contents:

• Introduction to Software Architecture
• Threads in Multi-Threaded Environments
• Shared Memory Model in Threads
• Thread Synchronization Mechanisms
• Examples of Threads and Shared Memory in Action
• Common Pitfalls in Multi-Threaded Programming
• Advanced Concepts: Thread Pools
• Conclusion
Introduction to Software Architecture
• Software architecture is a blueprint for a system and its project,
laying out how the software components are organized and how
they interact. In concurrent programming, especially in
multi-threaded environments, two closely related building blocks
for communication are threads and shared memory.

• This lecture focuses on the multi-threaded paradigm and how
shared memory is employed to enable communication between
threads.
1. What are Threads?
Threads are the smallest unit of a process that can be scheduled for execution.
They run within the memory space of their process, allowing them to share
resources such as data and variables efficiently.

Characteristics of Threads:
• Each thread has its own stack but shares the heap, code, and other memory
areas with the other threads of the process.
• Threads are lighter-weight than processes because they do not require a
separate address space.
• Multi-threading allows concurrent execution of parts of a program, which is
particularly useful for performance optimization, especially on multi-core
processors.
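A minimal sketch of these characteristics using POSIX threads (pthreads); the names `worker` and `shared_counter` are illustrative, and the program is compiled with `gcc -pthread`:

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;        /* global: one copy, visible to every thread */

void *worker(void *arg) {
    long id = (long)arg;       /* 'id' lives on this thread's private stack */
    printf("thread %ld: &id = %p (private), &shared_counter = %p (shared)\n",
           id, (void *)&id, (void *)&shared_counter);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);    /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}
```

Each thread prints a different address for its local `id` (its own stack) but the same address for `shared_counter` (the shared data segment).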
Benefits of Threads:
• Efficient Resource Use: Since threads share memory, they are faster to
create and switch between compared to processes.
• Parallelism: Threads enable parallel execution, making better use of
modern multi-core CPUs.
• Responsiveness: In a user interface, threads allow background tasks such
as data loading to run while the UI stays responsive.

Limitations of Threads:
• Complexity: Multi-threading can lead to complex designs and
hard-to-trace bugs.
• Synchronization Issues: Shared resources must be handled carefully to
avoid problems like race conditions, deadlocks, and inconsistent data.
2. Shared Memory Model
Shared memory is a paradigm in concurrent programming where multiple threads (or
processes) can read and write the same memory region. Shared memory is a key
mechanism for communication between threads.

Characteristics:

• Speed: Since memory is shared between threads, no data copying is required, which
speeds up communication.
• Memory Regions: Threads share the heap and global data regions but have separate
stacks.
• Data Synchronization: Access to shared memory must be synchronized using
mechanisms like locks (mutexes), semaphores, or condition variables to prevent data
inconsistency.
Shared Memory in Threads vs Processes:
• In the threading model, shared memory is naturally
built-in, since threads of the same process share the
same address space.
• In multiprocessing, shared memory regions must be
explicitly set up using OS-provided APIs such as
shmget (System V), shm_open (POSIX), or
CreateFileMapping (Windows).
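As a sketch of the multiprocessing case, the following uses the POSIX `shm_open`/`mmap` API mentioned above; the region name `/demo_shm` is arbitrary, and error handling is omitted:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: parent and child processes share one int via POSIX shared memory.
   On older Linux systems, link with -lrt. */
int main(void) {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(int));                 /* size the region */
    int *counter = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);     /* map it into our address space */
    *counter = 0;

    if (fork() == 0) {      /* child process */
        *counter = 42;      /* this write lands in the shared region */
        return 0;
    }
    wait(NULL);             /* parent waits, then reads the child's write */
    printf("parent sees %d\n", *counter);
    shm_unlink("/demo_shm");
    return 0;
}
```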
3. Thread Synchronization in Shared Memory
Since multiple threads can access the same memory at the same
time, there must be mechanisms to synchronize their access to
avoid issues such as:

• Race Conditions: When multiple threads access and modify
shared data concurrently, the final outcome depends on the
timing of access, leading to unpredictable results.

• Deadlock: A situation where two or more threads are blocked
forever, each waiting for a resource that another thread holds.
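The race condition above can be demonstrated with a short sketch: two threads increment an unprotected shared counter, and updates are routinely lost (the loop count is arbitrary):

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;                      /* shared and unprotected */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                     /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000; typically prints less, and varies run to run,
       because concurrent increments overwrite each other. */
    printf("counter = %ld\n", counter);
    return 0;
}
```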
4. Key Synchronization Mechanisms:
Mutex (Lock):
A mutex ensures that only one thread can access a
shared resource at a time. It works by locking the
resource when one thread starts using it and unlocking
it when done.
Example: A mutex is used to protect access to a shared
counter so that only one thread at a time increments
the counter.
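A sketch of exactly this example: the race-prone counter from the previous section, now guarded by a `pthread_mutex_t` so the result is deterministic:

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* only one thread may enter at a time */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);  /* reliably 2000000 */
    return 0;
}
```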
Key Synchronization Mechanisms (cont.):
Semaphore:
A semaphore is a signaling mechanism that controls access to shared
resources by multiple threads. It can be used to allow a fixed number
of threads to access a resource at the same time.
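A minimal sketch using POSIX semaphores: `slots` is initialized to 2, so at most two of the five worker threads hold the resource at any moment (the names and counts are illustrative):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

sem_t slots;                           /* counts available resource slots */

void *worker(void *arg) {
    sem_wait(&slots);                  /* block until a slot is free */
    printf("thread %ld using the resource\n", (long)arg);
    sleep(1);                          /* simulate work */
    sem_post(&slots);                  /* release the slot */
    return NULL;
}

int main(void) {
    sem_init(&slots, 0, 2);            /* at most 2 threads in at once */
    pthread_t t[5];
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}
```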

Condition Variables:
Condition variables are used for signaling between threads. One thread can
signal another thread that some condition has become true, often in
producer-consumer problems.
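A sketch of a one-item producer-consumer handshake with a condition variable; the `while` loop around `pthread_cond_wait` guards against spurious wakeups:

```c
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t ready = PTHREAD_COND_INITIALIZER;
int item = 0, available = 0;

void *producer(void *arg) {
    pthread_mutex_lock(&m);
    item = 99;                          /* produce a value */
    available = 1;
    pthread_cond_signal(&ready);        /* tell the consumer the condition holds */
    pthread_mutex_unlock(&m);
    return NULL;
}

void *consumer(void *arg) {
    pthread_mutex_lock(&m);
    while (!available)                  /* re-check the condition after waking */
        pthread_cond_wait(&ready, &m);  /* atomically unlock and sleep */
    printf("consumed %d\n", item);
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```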

Atomic Operations:
Atomic operations are operations that complete in a single step relative to
other threads. They are crucial when modifying shared variables without locks.
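A sketch of the same shared-counter problem solved with C11 atomics instead of a lock; `atomic_fetch_add` performs the read-modify-write in one indivisible step:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_long counter = 0;               /* every operation on it is indivisible */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        atomic_fetch_add(&counter, 1); /* lock-free read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", atomic_load(&counter)); /* always 2000000 */
    return 0;
}
```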
Examples of Threads and Shared Memory Usage
Example 1: Web Servers

• Multi-threaded Web Server: Each client request is handled by a separate thread.
The shared memory includes a pool of resources such as socket connections or
buffers.
• Shared Memory Use: Threads share data like cached files or connection states,
which allows them to serve multiple requests efficiently.
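A hedged sketch of the thread-per-request pattern: a toy HTTP server in which every handler thread updates a shared hit counter (port 8080 and all names are illustrative; a real server would add error handling):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Shared state: a hit counter all handler threads update. */
int requests_served = 0;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *handle_client(void *arg) {
    int fd = (int)(long)arg;
    pthread_mutex_lock(&m);
    int n = ++requests_served;          /* shared across all handler threads */
    pthread_mutex_unlock(&m);
    char reply[64];
    snprintf(reply, sizeof reply, "HTTP/1.0 200 OK\r\n\r\nrequest #%d\n", n);
    write(fd, reply, strlen(reply));
    close(fd);
    return NULL;
}

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(8080),
                                .sin_addr.s_addr = INADDR_ANY };
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);
    for (;;) {                          /* one detached thread per request */
        int client = accept(srv, NULL, NULL);
        pthread_t t;
        pthread_create(&t, NULL, handle_client, (void *)(long)client);
        pthread_detach(t);
    }
}
```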

Example 2: Matrix Multiplication

• Parallel Matrix Multiplication: Each thread calculates part of the matrix product,
with the result stored in shared memory.
• Shared Memory Use: The resulting matrix is stored in shared memory, with
threads writing to different parts of the matrix simultaneously.
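A sketch of this decomposition: each thread computes a disjoint band of rows of C, so threads write to different parts of the shared result without locking (the sizes are illustrative, and B is the identity so the output is easy to check):

```c
#include <pthread.h>
#include <stdio.h>

#define N 4
#define THREADS 2

int A[N][N], B[N][N], C[N][N];         /* all three matrices live in shared memory */

void *multiply_rows(void *arg) {
    long id = (long)arg;
    /* Each thread owns a disjoint band of rows, so no two threads
       ever write the same cell of C and no locking is needed. */
    for (int i = id * (N / THREADS); i < (id + 1) * (N / THREADS); i++)
        for (int j = 0; j < N; j++) {
            C[i][j] = 0;
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = i + j; B[i][j] = (i == j); }

    pthread_t t[THREADS];
    for (long id = 0; id < THREADS; id++)
        pthread_create(&t[id], NULL, multiply_rows, (void *)id);
    for (int id = 0; id < THREADS; id++)
        pthread_join(t[id], NULL);

    for (int i = 0; i < N; i++, puts(""))
        for (int j = 0; j < N; j++)
            printf("%3d ", C[i][j]);   /* equals A, since B is the identity */
    return 0;
}
```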
5. Common Pitfalls in Multi-Threaded Programming
Race Conditions: Can occur when multiple threads read and write
shared data without proper synchronization.

Deadlocks: Occur when threads are stuck waiting for resources that
other threads are holding.

Starvation: When one or more threads never get a chance to access
the shared resource because other threads monopolize it.

Priority Inversion: When a lower-priority thread holds a lock needed
by a higher-priority thread, preventing the higher-priority thread from
making progress.
How to Avoid Pitfalls?
• Use proper synchronization techniques like mutexes,
semaphores, and condition variables.
• Avoid holding locks for long durations.
• Implement timeout mechanisms to prevent
deadlocks.
• Use priority inheritance to prevent priority inversion.
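As one sketch of the back-off idea behind timeout mechanisms, `pthread_mutex_trylock` lets a thread release what it already holds instead of waiting forever for a second lock (the helper names `lock_both`/`unlock_both` are illustrative):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Acquire both locks without risking deadlock: if the second lock
   is busy, release the first and retry instead of blocking. */
void lock_both(void) {
    for (;;) {
        pthread_mutex_lock(&lock_a);
        if (pthread_mutex_trylock(&lock_b) == 0)
            return;                    /* got both locks */
        pthread_mutex_unlock(&lock_a); /* back off instead of waiting */
        usleep(100);                   /* brief pause before retrying */
    }
}

void unlock_both(void) {
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}

void *worker(void *arg) {
    lock_both();
    printf("thread %ld holds both locks\n", (long)arg);
    unlock_both();
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```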
6. Advanced Concepts
Thread Pools:
A thread pool manages a group of reusable threads that
execute tasks. Thread pools help avoid the overhead of
creating and destroying threads frequently.

Advantages:
Improves performance and resource management.
Threads are reused, which reduces overhead in managing
them.
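A toy thread-pool sketch: a fixed set of worker threads repeatedly pull integer "tasks" from a shared queue guarded by a mutex and condition variable (sizes and names are illustrative; a real pool would also handle queue overflow):

```c
#include <pthread.h>
#include <stdio.h>

#define WORKERS 3
#define QUEUE   8

int queue[QUEUE];                      /* toy task queue: tasks are just ints */
int head = 0, tail = 0, done = 0;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

void *worker(void *arg) {
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&m);
        while (head == tail && !done)
            pthread_cond_wait(&not_empty, &m);  /* sleep until work arrives */
        if (head == tail && done) {             /* no more tasks will ever come */
            pthread_mutex_unlock(&m);
            return NULL;
        }
        int task = queue[head++ % QUEUE];       /* take one task */
        pthread_mutex_unlock(&m);
        printf("worker %ld ran task %d\n", id, task);
    }
}

int main(void) {
    pthread_t pool[WORKERS];
    for (long i = 0; i < WORKERS; i++)          /* threads created once, reused */
        pthread_create(&pool[i], NULL, worker, (void *)i);

    for (int t = 0; t < QUEUE; t++) {           /* submit tasks */
        pthread_mutex_lock(&m);
        queue[tail++ % QUEUE] = t;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&m);
    }

    pthread_mutex_lock(&m);
    done = 1;                                   /* signal shutdown */
    pthread_cond_broadcast(&not_empty);
    pthread_mutex_unlock(&m);

    for (int i = 0; i < WORKERS; i++)
        pthread_join(pool[i], NULL);
    return 0;
}
```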
7. Conclusion
In summary, the threads and shared memory model is
essential in multi-threaded systems for enabling
efficient communication and resource sharing. While
this model provides significant performance benefits, it
comes with the responsibility of carefully managing
synchronization to avoid common pitfalls like race
conditions and deadlocks. Understanding these
concepts is key to developing high-performance,
reliable concurrent programs.