Thread Issues

The document discusses various aspects of thread management and process synchronization in multithreaded programming, including the use of fork() and exec() system calls, thread cancellation methods, and signal handling. It highlights challenges such as race conditions, deadlocks, and starvation, and emphasizes the importance of mutual exclusion to prevent data inconsistencies. Additionally, it covers mechanisms like locks and mutexes to control access to critical sections of code, ensuring that only one process can modify shared data at a time.


THREAD ISSUES

The fork() and exec() system calls:


• The fork() system call is used to create a duplicate process. The semantics of the fork() and exec() system calls change in a multithreaded program.
• Some UNIX systems have chosen to have two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked the fork() system call.
• If a thread calls the exec() system call, the program specified in the parameter to exec() replaces the entire process, including all of its threads.
• Because exec() replaces the whole process, duplicating only the calling thread is sufficient when exec() is called immediately after fork(); duplicating all threads is useful only if the child process does not call exec().
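The common fork-then-exec pattern can be sketched as follows. run_child is an illustrative helper name, and the sketch assumes POSIX behaviour: in a multithreaded parent, fork() duplicates only the calling thread into the child, which is one reason exec() is usually called right away.

```c
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/* Run a program in a child process and wait for it to finish.
 * In a multithreaded parent, POSIX fork() duplicates only the
 * calling thread into the child, so exec() is called right away. */
int run_child(const char *path, char *const argv[]) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;                     /* fork failed */
    if (pid == 0) {
        execv(path, argv);             /* replaces the entire child process */
        _exit(127);                    /* reached only if exec fails */
    }
    int status;
    waitpid(pid, &status, 0);          /* parent waits for the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```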
THREAD CANCELLATION
• Thread cancellation is the task of terminating a thread before it has
completed.
• For example − If multiple database threads are concurrently searching
through a database and one thread returns the result the remaining
threads might be cancelled.
• A target thread is a thread that is to be cancelled. Cancellation of a target thread may occur in two different scenarios −
• Asynchronous cancellation − One thread terminates the target thread immediately.
• Deferred cancellation − The target thread periodically checks whether it should terminate, allowing it to terminate itself in an orderly fashion (or to decide otherwise).
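Deferred cancellation with POSIX threads can be sketched as below. worker and cancel_demo are illustrative names, and the sketch assumes the default pthread settings (cancellation enabled, deferred type), under which the target thread only acts on a pending cancel at a cancellation point such as pthread_testcancel().

```c
#include <pthread.h>

/* Deferred cancellation: the target thread checks for a pending
 * cancellation request only at a cancellation point. */
static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        /* ... perform one unit of work (e.g. search one record) ... */
        pthread_testcancel();   /* safe point to honour a pending cancel */
    }
    return NULL;
}

/* Create a worker, request its cancellation, and wait for it. */
int cancel_demo(void) {
    pthread_t target;
    if (pthread_create(&target, NULL, worker, NULL) != 0)
        return -1;
    pthread_cancel(target);             /* cancellation request */
    void *result;
    pthread_join(target, &result);      /* wait for termination */
    return result == PTHREAD_CANCELED;  /* 1 if it was cancelled */
}
```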
Signal Handling

• In UNIX systems, a signal is used to notify a process that a particular event has occurred. Based on the source of the signal, signals can be categorized as:
• Asynchronous signal: a signal generated outside the process that receives it (for example, a user pressing Ctrl+C, or another process sending a signal with kill()).
• Synchronous signal: a signal generated by and delivered to the same process (for example, an illegal memory access or division by zero).
Thread Pool
• Consider a server that creates a separate thread every time a client requests a page. Creating a new thread per request poses certain challenges.
• If no limit is placed on the number of active threads, creating a new thread for each request can exhaust the available system resources.
• Instead, the server can create a fixed collection of threads at startup; this collection is referred to as a thread pool. Threads in the pool sit waiting for a request to service.
CONTD..
• When the server receives a request, it wakes a free thread from the pool; if no thread is available, the request waits until one becomes free. Servicing a request with an existing thread is faster than creating a new thread for every request, and the pool places a bound on the number of threads, which matters on systems that cannot support a large number of concurrent threads.
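A minimal fixed-size thread pool along these lines might look like the sketch below. The pool size, queue size, and all function names are illustrative, not a standard API: workers sleep on a condition variable until a request is queued, and the server submits requests instead of spawning threads.

```c
#include <pthread.h>

#define NWORKERS 4      /* illustrative pool size  */
#define QSIZE    64     /* illustrative queue size */

typedef void (*task_fn)(void *);
typedef struct { task_fn fn; void *arg; } task_t;

static task_t queue[QSIZE];
static int head, tail, count, shutting_down;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;
static pthread_t workers[NWORKERS];

/* Each worker sleeps until a request is queued, services it,
 * and goes back to waiting. */
static void *pool_worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (count == 0 && !shutting_down)
            pthread_cond_wait(&qcond, &qlock);  /* wait for a request */
        if (count == 0 && shutting_down) {
            pthread_mutex_unlock(&qlock);
            return NULL;                        /* queue drained: exit */
        }
        task_t t = queue[head];
        head = (head + 1) % QSIZE;
        count--;
        pthread_mutex_unlock(&qlock);
        t.fn(t.arg);                            /* service the request */
    }
}

void pool_start(void) {
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&workers[i], NULL, pool_worker, NULL);
}

/* Enqueue a request; returns -1 if the queue is full. */
int pool_submit(task_fn fn, void *arg) {
    pthread_mutex_lock(&qlock);
    if (count == QSIZE) { pthread_mutex_unlock(&qlock); return -1; }
    queue[tail] = (task_t){fn, arg};
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&qcond);    /* wake one waiting worker */
    pthread_mutex_unlock(&qlock);
    return 0;
}

/* Let the workers drain the queue, then join them. */
void pool_shutdown(void) {
    pthread_mutex_lock(&qlock);
    shutting_down = 1;
    pthread_cond_broadcast(&qcond);
    pthread_mutex_unlock(&qlock);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(workers[i], NULL);
}

/* Demo: submit n counting tasks and report how many ran. */
static int done_count;
static pthread_mutex_t dlock = PTHREAD_MUTEX_INITIALIZER;
static void bump(void *arg) {
    (void)arg;
    pthread_mutex_lock(&dlock);
    done_count++;
    pthread_mutex_unlock(&dlock);
}

int pool_demo(int n) {
    pool_start();
    for (int i = 0; i < n; i++)
        pool_submit(bump, NULL);
    pool_shutdown();
    return done_count;
}
```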
Process synchronization
• Process synchronization involves the coordination and control of
concurrent processes to ensure correct and predictable outcomes. Its
primary purpose is to prevent race conditions, data inconsistencies,
and resource conflicts that may arise when multiple processes access
shared resources simultaneously.
Challenges in Concurrent Execution
• Concurrent execution introduces several challenges, including −
• Race Conditions − Concurrent processes accessing shared resources may result in
unexpected and erroneous outcomes. For example, if two processes simultaneously
write to the same variable, the final value may be unpredictable or incorrect.
• Deadlocks − Processes may become stuck in a state of waiting indefinitely due to
resource dependencies. Deadlocks occur when processes are unable to proceed because
each process is waiting for a resource held by another process, creating a circular
dependency.
• Starvation − A process may be denied access to a shared resource indefinitely, leading to
its inability to make progress. This situation arises when certain processes consistently
receive priority over others, causing some processes to wait indefinitely for resource
access.
• Data Inconsistencies − Inconsistent or incorrect data may occur when processes
manipulate shared data concurrently. For example, if multiple processes simultaneously
update a database record, the final state of the record may be inconsistent or corrupted.
Race Condition

• When more than one process runs the same code or modifies the same memory or shared data concurrently, the final value of the shared data may be incorrect, because all the processes "race" to access and modify it. This condition is called a race condition. Since many processes use the same data, the results may depend on the order of their execution.
• A race condition can be avoided by treating the critical section as a section that can be accessed by only a single process at a time. Such a section is called an atomic section.
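The lost-update race can be demonstrated with two threads incrementing a shared counter without synchronization. ITERS and the function names are illustrative; the point is that counter++ is a load-add-store sequence, not one atomic step.

```c
#include <pthread.h>

#define ITERS 1000000
static long counter = 0;

/* counter++ compiles to load, add, store; two threads interleaving
 * these steps can overwrite each other's updates. */
static void *incr(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        counter++;
    return NULL;
}

long race_demo(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, incr, NULL);
    pthread_create(&b, NULL, incr, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;   /* often less than 2 * ITERS on multicore hardware */
}
```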
Critical Section Problem
• A part of code that can only be accessed by a single process at any moment
is known as a critical section. This means that when a lot of programs want
to access and change a single shared data, only one process will be allowed
to change at any given moment. The other processes have to wait until the
data is free to be used.
• Let us look at the different elements/sections of a program:
• Entry Section: the code in which a process requests permission to enter its critical section.
• Critical Section: the code in which the process accesses and modifies the shared data; at most one process may execute it at a time.
• Exit Section: the code that signals that a process has left its critical section, allowing another waiting process to enter.
• Remainder Section: the remaining code, which does not access the shared data.
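The four sections can be sketched as the following skeleton (pseudocode), where acquire() and release() stand for whichever synchronization primitive is in use (a lock, a mutex, a semaphore, ...):

```
do {
    acquire(lock);           // Entry section: request permission to enter
    /* modify shared data */ // Critical section: at most one process here
    release(lock);           // Exit section: allow another process in
    /* other work */         // Remainder section
} while (true);
```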
Contd..
Mutual Exclusion
• Mutual exclusion requires that no two processes be in their critical sections at the same point in time. Any process synchronization technique must satisfy the property of mutual exclusion; without it, it is not possible to get rid of race conditions.
• The need for mutual exclusion comes with concurrency. There are
several kinds of concurrent execution:
• Interrupt handlers
• Interleaved, preemptively scheduled processes/threads
• Multiprocessor clusters, with shared memory
• Distributed systems
contd..
• Approaches to implementing mutual exclusion:
• Software method: leave the responsibility to the processes themselves. These methods are usually highly error-prone and carry high overheads.
• Hardware method: special-purpose machine instructions are used for accessing shared resources. This method is faster, but it cannot provide a complete solution: hardware solutions cannot guarantee the absence of deadlock and starvation.
• Programming language method: provide support through the operating system or through the programming language.
LOCK
• A lock is a shared variable. A process reads the lock variable to determine that no other process is executing its critical section, and then sets the lock before entering.
• Initially the lock variable is 0. A process sets it to 1 and enters the critical section.
• Lock = 0: no process is in the critical section.
• Lock = 1: a process is in the critical section.
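Note that a plain "if (lock == 0) lock = 1;" is itself racy: two processes can both read 0 and both enter. The test and the set must happen as one atomic step. A sketch using C11's atomic test-and-set (all names illustrative):

```c
#include <stdatomic.h>
#include <pthread.h>

/* The flag plays the role of the lock variable: clear = 0, set = 1.
 * atomic_flag_test_and_set reads the old value and sets the flag
 * in a single atomic step. */
static atomic_flag lock_var = ATOMIC_FLAG_INIT;

void lock_acquire(void) {
    while (atomic_flag_test_and_set(&lock_var))
        ;                        /* busy-wait: lock is 1, someone is inside */
}

void lock_release(void) {
    atomic_flag_clear(&lock_var);    /* lock back to 0: section is free */
}

/* Demo: two threads increment a shared counter under the lock. */
static long protected_count;

static void *locked_incr(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        lock_acquire();
        protected_count++;       /* critical section */
        lock_release();
    }
    return NULL;
}

long spinlock_demo(void) {
    pthread_t a, b;
    protected_count = 0;
    pthread_create(&a, NULL, locked_incr, NULL);
    pthread_create(&b, NULL, locked_incr, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return protected_count;      /* no updates lost */
}
```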
MUTEX
• When multiple threads want to access the critical section, a mutex allows only one thread at a time to be in the critical section.
• A mutex ensures that the code in the critical section (which uses shared resources) is executed by only a single thread at a time.
• A mutex is simple to use and can be in one of two states: locked or unlocked.
• If the mutex is already locked, the calling thread is blocked and waits until the thread in the critical section finishes its execution and unlocks the mutex.
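A pthread mutex protecting a shared counter can be sketched as follows. Unlike a busy-waiting lock variable, a thread that finds the mutex held is blocked (put to sleep) until the holder unlocks it. Names are illustrative.

```c
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static long safe_counter = 0;

/* Each increment happens inside the mutex, so no update is lost. */
static void *safe_incr(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);     /* blocks if another thread holds m */
        safe_counter++;             /* critical section */
        pthread_mutex_unlock(&m);   /* wakes one waiting thread, if any */
    }
    return NULL;
}

long mutex_demo(void) {
    pthread_t a, b;
    safe_counter = 0;
    pthread_create(&a, NULL, safe_incr, NULL);
    pthread_create(&b, NULL, safe_incr, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return safe_counter;            /* always exactly 200000 */
}
```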
