
UNIT 4

Created @January 16, 2025 3:54 PM

Class DBMS

Concurrency Control Protocols


Concurrency control protocols allow concurrent schedules, but ensure that the
schedules are conflict/view serializable, recoverable, and possibly even
cascadeless. These protocols do not examine the precedence graph as it is
being created; instead, a protocol imposes a discipline that avoids non-
serializable schedules. Different concurrency control protocols offer
different trade-offs between the amount of concurrency they allow and the
amount of overhead they impose.

Lock Based Protocol

Graph Based Protocol

Time-Stamp Ordering Protocol

Multiple Granularity Protocol

Multi-version Protocol

Common Issues in Concurrency:


1. Dirty Reads:

One transaction reads data modified by another uncommitted transaction.

Example: Transaction A updates a value but hasn’t committed, and Transaction B reads this uncommitted value.

2. Non-Repeatable Reads:

A transaction reads the same row twice but gets different data because another transaction modifies it between reads.

Example: Transaction A reads a value, Transaction B updates it, and Transaction A reads it again.

3. Phantom Reads:

A transaction sees different sets of rows in two queries because another transaction inserts or deletes rows between them.

Example: Transaction A queries rows with a condition, Transaction B inserts a new row matching the condition, and Transaction A queries again.

4. Lost Updates:

Multiple transactions overwrite each other’s changes without awareness of prior modifications. (A minimal sketch of a lost update follows this list.)

Example: Transactions A and B both update a value, but B’s update overrides A’s without considering A’s changes.
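To make the lost-update anomaly concrete, here is a minimal Python sketch (the shared balance variable and the thread bodies are illustrative, not from any particular DBMS): two concurrent read-modify-write sequences can silently overwrite each other when no locking is used.

```python
import threading

balance = 100  # shared data item

def update(delta):
    global balance
    # Read-modify-write without any lock: read the balance, compute a
    # new value, then write it back. If two transactions interleave
    # between the read and the write, one update overwrites the other.
    current = balance
    balance = current + delta

t1 = threading.Thread(target=update, args=(50,))   # "Transaction A"
t2 = threading.Thread(target=update, args=(-30,))  # "Transaction B"
t1.start(); t2.start()
t1.join(); t2.join()

# Serial execution would give 120; with an unlucky interleaving the
# result is 150 or 70, because one of the two writes is lost.
print(balance)
```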

Lock-Based Protocol


A lock is a variable associated with a data item that describes the status of the
data item with respect to the operations that can be applied to it. Locks
synchronize access by concurrent transactions to the database items. This
protocol requires that all data items be accessed in a mutually exclusive
manner. Below are the two common lock types used and some terminology
followed in this protocol.

Types of Lock
1. Shared Lock (S): A shared lock is also known as a read-only lock. As the
name suggests, it can be shared between transactions because, while
holding this lock, a transaction does not have permission to update the
data item. An S-lock is requested using the lock-S instruction.

2. Exclusive Lock (X): The data item can be both read and written. This lock
is exclusive and cannot be held simultaneously on the same data item by
multiple transactions. An X-lock is requested using the lock-X instruction.
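A minimal sketch of this S/X compatibility rule, assuming a toy in-memory lock table (the LockTable class and can_grant method are illustrative names, not from any real DBMS):

```python
# Lock compatibility for shared (S) and exclusive (X) locks:
# S is compatible only with S; X is compatible with nothing.
COMPATIBLE = {
    ("S", "S"): True,
    ("S", "X"): False,
    ("X", "S"): False,
    ("X", "X"): False,
}

class LockTable:
    def __init__(self):
        # data item -> list of (transaction id, mode) currently held
        self.held = {}

    def can_grant(self, item, mode):
        # A request is granted only if it is compatible with every
        # lock currently held on the item.
        return all(COMPATIBLE[(held_mode, mode)]
                   for _, held_mode in self.held.get(item, []))

    def lock(self, txn, item, mode):
        if self.can_grant(item, mode):
            self.held.setdefault(item, []).append((txn, mode))
            return True
        return False  # in a real DBMS the transaction would wait

table = LockTable()
print(table.lock("T1", "A", "S"))  # True: lock-S granted
print(table.lock("T2", "A", "S"))  # True: S is compatible with S
print(table.lock("T3", "A", "X"))  # False: lock-X must wait
```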

There are four types of lock protocols available:


1. Simplistic lock protocol

It is the simplest way of locking data during a transaction. Simplistic lock-
based protocols require every transaction to obtain a lock on the data before
inserting, deleting, or updating it. The data item is unlocked after the
transaction completes.

2. Pre-claiming Lock Protocol


Pre-claiming lock protocols evaluate the transaction to list all the data
items on which it needs locks.

Before initiating execution of the transaction, it requests the DBMS for
locks on all those data items.

If all the locks are granted, this protocol allows the transaction to begin.
When the transaction is completed, it releases all the locks.

If all the locks are not granted, the transaction rolls back and waits
until all the locks are granted.

3. Two-phase locking (2PL)


The two-phase locking protocol divides the execution of a transaction
into three parts.

In the first part, when the execution of the transaction starts, it seeks
permission for the locks it requires.

In the second part, the transaction acquires all the locks. The third phase
starts as soon as the transaction releases its first lock.

In the third phase, the transaction cannot demand any new locks. It only
releases the acquired locks.

There are two phases of 2PL:

Growing phase: In the growing phase, new locks on data items may be
acquired by the transaction, but none can be released.

Shrinking phase: In the shrinking phase, existing locks held by the
transaction may be released, but no new locks can be acquired.

If lock conversion is allowed, then the following conversions can happen:

1. Upgrading a lock (from S(a) to X(a)) is allowed in the growing phase.

2. Downgrading a lock (from X(a) to S(a)) must be done in the shrinking phase.

Example:

The following shows how locking and unlocking work with 2PL (an illustrative schedule follows the lock-point summary below).

Transaction T1:

Growing phase: from step 1-3

Shrinking phase: from step 5-7

Lock point: at 3

Transaction T2:

Growing phase: from step 2-6

Shrinking phase: from step 8-9

Lock point: at 6
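A schedule consistent with these step numbers might look as follows (the data items A, B, and C are assumed here for illustration):

| Step | T1 | T2 |
| --- | --- | --- |
| 1 | lock-X(A) | |
| 2 | | lock-S(B) |
| 3 | lock-S(C) | |
| 4 | read(C); write(A) | read(B) |
| 5 | unlock(A) | |
| 6 | | lock-X(A) |
| 7 | unlock(C) | |
| 8 | | unlock(B) |
| 9 | | unlock(A) |

T1 acquires its last lock at step 3 (its lock point) and releases locks at steps 5 and 7; T2 acquires its last lock at step 6 (grantable because T1 released A at step 5) and releases its locks at steps 8 and 9.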

4. Strict Two-phase locking (Strict-2PL)


The first phase of Strict-2PL is the same as in 2PL: after acquiring
its locks, the transaction continues to execute normally.

The difference between 2PL and Strict-2PL is that Strict-2PL does not
release a lock immediately after using it.

Strict-2PL waits until the whole transaction commits, and then releases
all the locks at once.

Since locks are held until commit, Strict-2PL has no gradual shrinking
phase of lock release.

Unlike basic 2PL, it does not suffer from cascading aborts.

Timestamp Ordering Protocol


The Timestamp Ordering Protocol orders transactions based on their
timestamps. The order of transactions is simply the ascending order
of their creation times.

The older transaction has higher priority, which is why it executes
first. To determine the timestamp of a transaction, this protocol uses
the system clock or a logical counter.

Lock-based protocols manage the order between conflicting pairs of
transactions at execution time, but timestamp-based protocols start
working as soon as a transaction is created.

Basic Timestamp Ordering


Every transaction is issued a timestamp based on when it enters the system.
Suppose an old transaction Ti has timestamp TS(Ti); a new transaction Tj is
assigned timestamp TS(Tj) such that TS(Ti) < TS(Tj).

The protocol manages concurrent execution such that the timestamps
determine the serializability order. The timestamp ordering protocol ensures
that any conflicting read and write operations are executed in timestamp order.

Whenever a transaction T tries to issue an R_item(X) or a W_item(X), the
Basic TO algorithm compares the timestamp of T with R_TS(X) and W_TS(X) to
ensure that the timestamp order is not violated. The Basic TO protocol works
as follows in two cases.

Whenever a transaction T issues a W_item(X) operation, check the
following conditions:

If R_TS(X) > TS(T) or W_TS(X) > TS(T), then abort and roll back T
and reject the operation; else,

Execute the W_item(X) operation of T and set W_TS(X) to TS(T).

Whenever a transaction T issues an R_item(X) operation, check the
following conditions:

If W_TS(X) > TS(T), then abort and roll back T and reject the
operation; else,

If W_TS(X) <= TS(T), then execute the R_item(X) operation of T and set
R_TS(X) to the larger of TS(T) and the current R_TS(X).

Whenever the Basic TO algorithm detects two conflicting operations that occur
in an incorrect order, it rejects the latter of the two operations by aborting the
Transaction that issued it.
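A minimal Python sketch of these two checks, assuming in-memory dictionaries for R_TS and W_TS (the function and variable names are illustrative):

```python
# Basic Timestamp Ordering checks. ts is TS(T); r_ts[x] and w_ts[x]
# hold the largest timestamps of transactions that successfully
# read / wrote item x (0 if the item was never accessed).
r_ts = {}  # item -> R_TS(X)
w_ts = {}  # item -> W_TS(X)

def read_item(ts, x):
    # Reject the read if a younger transaction already wrote X.
    if w_ts.get(x, 0) > ts:
        return "abort"  # roll back T and reject the operation
    r_ts[x] = max(r_ts.get(x, 0), ts)
    return "read executed"

def write_item(ts, x):
    # Reject the write if a younger transaction already read OR wrote X.
    if r_ts.get(x, 0) > ts or w_ts.get(x, 0) > ts:
        return "abort"  # roll back T and reject the operation
    w_ts[x] = ts
    return "write executed"

print(write_item(5, "X"))  # write executed; W_TS(X) becomes 5
print(read_item(3, "X"))   # abort: an older T reads after a younger write
```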

The Timestamp Ordering protocol ensures serializability, since the precedence
graph has edges only from transactions with smaller timestamps to
transactions with larger timestamps (figure: Precedence Graph for TS ordering).

The timestamp protocol ensures freedom from deadlock, as no transaction
ever waits.

But the schedule may not be cascade-free, and may not even be
recoverable.

Strict Timestamp Ordering


A variation of Basic TO, called Strict TO, ensures that schedules are both
strict and conflict serializable. In this variation, a transaction T that
issues an R_item(X) or W_item(X) such that TS(T) > W_TS(X) has its read or
write operation delayed until the transaction T' that wrote the value of X
has committed or aborted.

Advantages
High Concurrency: Timestamp-based concurrency control allows for a
high degree of concurrency by ensuring that transactions do not interfere
with each other.

Efficient: The technique is efficient and scalable, as it does not require
locking and can handle a large number of transactions.

No Deadlocks: Since there are no locks involved, there is no possibility
of deadlocks occurring.

Improved Performance: By allowing transactions to execute concurrently,
the overall performance of the database system can be improved.

Disadvantages
Limited Granularity: The granularity of timestamp-based concurrency
control is limited to the precision of the timestamp. This can lead to
situations where transactions are unnecessarily blocked, even if they do not
conflict with each other.

Timestamp Ordering: In order to ensure that transactions are executed in
the correct order, the timestamps need to be carefully managed. If not
managed properly, it can lead to inconsistencies in the database.

Timestamp Synchronization: Timestamp-based concurrency control
requires that all transactions have synchronized clocks. If the clocks are not
synchronized, it can lead to incorrect ordering of transactions.

Timestamp Allocation: Allocating unique timestamps for each transaction
can be challenging, especially in distributed systems where transactions
may be initiated at different locations.

Deadlock Handling
A deadlock is a condition in a multi-user database environment where
transactions are unable to complete because each is waiting for resources
held by other transactions. This results in a cycle of dependencies where
no transaction can proceed.

Characteristics of Deadlock
Mutual Exclusion: Only one transaction can hold a particular resource at a
time.

Hold and Wait: Transactions holding resources may request additional
resources held by others.

No Preemption: Resources cannot be forcibly taken from the transaction
holding them.

Circular Wait: A cycle of transactions exists where each transaction is
waiting for a resource held by the next transaction in the cycle.

Deadlock Avoidance
When a database can become stuck in a deadlock state, it is better to avoid
deadlocks rather than abort and restart transactions, which wastes time
and resources.

A deadlock avoidance mechanism detects a potential deadlock situation in
advance. A method like the "wait-for graph" can be used for detecting a
deadlock situation, but it is suitable only for smaller databases. For
larger databases, a deadlock prevention method must be used.

Deadlock Detection
In a database, when a transaction waits indefinitely to obtain a lock, the
DBMS should detect whether the transaction is involved in a deadlock.
The lock manager maintains a wait-for graph to detect a deadlock cycle in
the database.

Wait for Graph


The wait-for graph is one of the methods for detecting a deadlock situation.
This method is suitable for smaller databases. In this method, a graph is
drawn based on the transactions and their locks on resources. If the graph
contains a closed loop or a cycle, then there is a deadlock.

Deadlock Prevention
The deadlock prevention method is suitable for a large database. If
resources are allocated in such a way that a deadlock never occurs, then
deadlocks are prevented.

The database management system analyzes the operations of a transaction
to check whether they can create a deadlock situation. If they can, the
DBMS never allows that transaction to be executed.

Deadlock prevention mechanism proposes two schemes:

Wait-Die Scheme: In this scheme, if a transaction requests a resource that
is locked by another transaction, the DBMS checks the timestamps of both
transactions. If the requesting transaction is older, it is allowed to wait
until the resource is available; if it is younger, it is aborted ("dies")
and restarted later with the same timestamp.

Wound-Wait Scheme: In this scheme, if an older transaction requests a
resource held by a younger transaction, the older transaction forces the
younger one to abort ("wounds" it) and release the resource. The younger
transaction is restarted with a minute delay but with the same timestamp.
If a younger transaction requests a resource that is held by an older one,
the younger transaction is asked to wait till the older one releases it.

| Wait-Die | Wound-Wait |
| --- | --- |
| It is based on a non-preemptive technique. | It is based on a preemptive technique. |
| In this, older transactions must wait for the younger one to release its data items. | In this, older transactions never wait for younger transactions. |
| The number of aborts and rollbacks is higher in this technique. | In this, the number of aborts and rollbacks is lower. |
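A minimal sketch of the two decision rules in Python (the function names and the convention that a smaller timestamp means an older transaction are assumptions for illustration):

```python
# Deadlock-prevention decision when transaction `req` requests a lock
# held by transaction `holder`. Smaller timestamp = older transaction.
def wait_die(req_ts, holder_ts):
    # Wait-Die: an older requester waits; a younger requester dies.
    return "wait" if req_ts < holder_ts else "abort (die)"

def wound_wait(req_ts, holder_ts):
    # Wound-Wait: an older requester wounds (aborts) the holder;
    # a younger requester waits.
    return "abort holder (wound)" if req_ts < holder_ts else "wait"

print(wait_die(1, 2))    # older requests from younger -> wait
print(wait_die(2, 1))    # younger requests from older -> abort (die)
print(wound_wait(1, 2))  # older requests from younger -> wound holder
print(wound_wait(2, 1))  # younger requests from older -> wait
```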

Validation-Based Protocol

The validation-based protocol is also called the optimistic concurrency
control technique. This protocol is used in a DBMS (Database Management
System) for controlling concurrency among transactions. It is called
optimistic because of the assumption it makes: very little interference
occurs, and therefore there is no need for checking while the transaction
executes.

In this technique, no checking is done while the transaction is being
executed. Until the end of the transaction is reached, updates are not
applied directly to the database. All updates are applied to local copies of
data items kept for the transaction.

At the end of transaction execution, a validation phase checks whether any
of the transaction's updates violate serializability. If there is no
violation of serializability, the transaction is committed and the database
is updated; otherwise, the transaction is aborted and then restarted.
Optimistic concurrency control is a three-phase protocol. The three phases
of the validation-based protocol:

1. Read phase: In this phase, the transaction T is read and executed. It
reads the values of the various data items and stores them in temporary
local variables. It performs all write operations on these temporary
variables, without updating the actual database.

2. Validation phase: In this phase, the temporary variable values are
validated against the actual data to see whether serializability is violated.

3. Write phase: If the transaction passes validation, the temporary results
are written to the database; otherwise the transaction is rolled back.

The validation phase examines the reads and writes of transactions that may
overlap. Each transaction is therefore assigned the following timestamps:

Start(Ti): the time when Ti started its execution.

Validation(Ti): the time when Ti finished its read phase and started its
validation phase.

Finish(Ti): the time when Ti finished its write phase.

The validation phase for Ti checks that, for every other transaction Tj, one
of the following conditions must hold for Ti to pass validation:

1. Finish(Tj) < Start(Ti): Tj completes its execution, i.e., its write
phase, before Ti starts its execution (read phase); serializability is then
indeed maintained.

2. Ti begins its write phase after Tj completes its write phase, and the
read set of Ti is disjoint from the write set of Tj.
3. Tj completes its read phase before Ti completes its read phase, and both
the read set and write set of Ti are disjoint from the write set of Tj.
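A minimal sketch of this validation test, assuming each transaction's three timestamps and its read/write sets are tracked (the Txn class and its field names are assumptions for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Txn:
    start: int          # Start(T)
    validation: int     # Validation(T)
    finish: int         # Finish(T)
    read_set: set = field(default_factory=set)
    write_set: set = field(default_factory=set)

def validate(ti, tj):
    # Condition 1: Tj finished before Ti started.
    if tj.finish < ti.start:
        return True
    # Condition 2: Tj finished writing before Ti starts its write phase
    # (i.e., before Validation(Ti)), and Tj's writes miss Ti's reads.
    if tj.finish < ti.validation and not (tj.write_set & ti.read_set):
        return True
    # Condition 3: Tj ended its read phase first, and Tj's writes are
    # disjoint from both the read set and the write set of Ti.
    if (tj.validation < ti.validation
            and not (tj.write_set & ti.read_set)
            and not (tj.write_set & ti.write_set)):
        return True
    return False  # Ti fails validation and is restarted

t1 = Txn(start=1, validation=5, finish=7, read_set={"A"}, write_set={"A"})
t2 = Txn(start=2, validation=6, finish=8, read_set={"B"}, write_set={"B"})
print(validate(t2, t1))  # True via condition 3: the sets are disjoint
```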

Advantages:
1. Avoids cascading rollbacks: This validation-based scheme avoids cascading
rollbacks, since the final write operations to the database are performed
only after the transaction passes the validation phase. If the transaction
fails, no update operation is performed on the database, so no dirty read
can happen; hence the possibility of a cascading rollback is eliminated.
2. Avoids deadlock: Since a strict timestamp-based technique is used to
maintain a specific order of transactions, deadlock isn't possible in this
scheme.

Disadvantages:
1. Starvation: There is a possibility of starvation for long transactions:
a sequence of conflicting short transactions can cause repeated restarts of
a long transaction. To avoid starvation, conflicting transactions must be
temporarily blocked for some time, to let the long transactions finish.

Multiple Granularity
Granularity: the size of the data item that can be locked.

Multiple granularity:
It can be defined as hierarchically breaking up the database into blocks
which can be locked.

The multiple granularity protocol enhances concurrency and reduces lock
overhead.

It keeps track of what to lock and how to lock it.

It makes it easy to decide whether to lock or unlock a data item.
This type of hierarchy can be graphically represented as a tree.

For example: Consider a tree which has four levels of nodes.

The first (highest) level represents the entire database.

The second level represents nodes of type area. The database consists of
exactly these areas.

Each area has child nodes known as files. No file can be present in more
than one area.

Finally, each file contains child nodes known as records. A file contains
exactly those records that are its child nodes, and no record is present
in more than one file.

Hence, the levels of the tree starting from the top level are as follows (see the sketch after this list):

1. Database

2. Area

3. File

4. Record
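A minimal sketch of this hierarchy as a tree, under the assumption that locking a node implicitly covers its entire subtree (the Node class and the node names are illustrative):

```python
# Database -> Area -> File -> Record hierarchy. In multiple-granularity
# locking, a lock on a node implicitly locks its whole subtree.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def covers(self, target):
        # True if locking this node implicitly locks `target`.
        if self is target:
            return True
        return any(child.covers(target) for child in self.children)

rec1 = Node("record r1")
rec2 = Node("record r2")
file_a = Node("file Fa", [rec1, rec2])
area_1 = Node("area A1", [file_a])
db = Node("database DB", [area_1])

print(file_a.covers(rec1))  # True: locking file Fa covers record r1
print(area_1.covers(rec2))  # True: locking area A1 covers its subtree
```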

Failure Classification
To determine where a problem has occurred, we generalize failures into the
following categories:

1. Transaction failure

2. System crash

3. Disk failure

1. Transaction failure
A transaction failure occurs when a transaction fails to execute or reaches a
point from which it can't go any further. If a transaction or process fails
partway through, this is called a transaction failure.
Reasons for a transaction failure could be -

1. Logical errors: A logical error occurs when a transaction cannot complete
due to a code error or an internal error condition.

2. System errors: These occur when the DBMS itself terminates an active
transaction because the database system is not able to execute it. For
example, the system aborts an active transaction in case of deadlock or
resource unavailability.

2. System Crash
A system crash can occur due to a power failure or another hardware or
software failure. Example: an operating system error.
Fail-stop assumption: in a system crash, the contents of non-volatile storage
are assumed not to be corrupted.

3. Disk Failure
Disk failure occurs when a hard-disk drive or storage drive fails. This was a
common problem in the early days of technology.

A disk failure can be caused by the formation of bad sectors, a disk head
crash, unreachability of the disk, or any other fault that destroys all or
part of the disk storage.

Storage
A database system presents a high-level view of the stored data; however, the
data is stored in the form of bits and bytes across different storage devices.
In this section, we take an overview of the various types of storage devices
that are used for accessing and storing data.

Types of Data Storage


For storing data, there are different types of storage options available.
These storage types differ from one another in speed and accessibility.
The following types of storage devices are used for storing data:

Primary Storage

Secondary Storage

Tertiary Storage

Primary Storage
It is the storage area that offers quick access to the stored data. We also
know primary storage as volatile storage, because this type of memory does
not store data permanently: as soon as the system suffers a power cut or a
crash, the data is lost. Main memory and cache are the types of primary
storage.

Main memory: It holds the data currently being operated on and handles each
instruction of the computer. This type of memory can store gigabytes of data
on a system but is usually too small to hold an entire database. Main memory
loses its whole content if the system shuts down because of a power failure
or other reasons.

Cache: It is one of the most expensive storage media; on the other hand, it
is the fastest. A cache is a tiny storage medium which is usually maintained
by the computer hardware. While designing algorithms and query processors
for data structures, designers take cache effects into account.

Secondary Storage
Secondary storage is also called online storage. It is the storage area that
allows the user to save and store data permanently. This type of memory does
not lose data due to a power failure or system crash, which is why we also
call it non-volatile storage.
Some commonly used secondary storage media, available in almost every type
of computer system:

Flash memory: Flash memory stores data in USB (Universal Serial Bus) keys,
which are plugged into the USB slots of a computer system. These USB keys
help transfer data to a computer system, but they vary in size limits.
Unlike main memory, the stored data survives power cuts and other failures.
This type of memory storage is commonly used in server systems for caching
frequently used data, which leads to high performance, and it can store
larger amounts of data than main memory.

Magnetic disk storage: This type of storage medium is also known as online
storage. A magnetic disk is used for storing data for a long time, and it is
capable of storing an entire database. It is the responsibility of the
computer system to make data available from disk in main memory for access;
if the system performs any operation over the data, the modified data must
be written back to the disk. A great strength of a magnetic disk is that its
data is not affected by a system crash or failure, but a disk failure can
easily destroy the stored data.

Tertiary Storage
It is storage that is external to the computer system. It has the slowest
speed, but it is capable of storing a large amount of data. It is also known
as offline storage. Tertiary storage is generally used for data backup.
The following tertiary storage devices are available:

Optical storage: Optical storage can hold megabytes or gigabytes of data. A
Compact Disk (CD) can store 700 megabytes of data with a playtime of around
80 minutes. On the other hand, a Digital Video Disk (DVD) can store 4.7 or
8.5 gigabytes of data on each side of the disk.

Tape storage: Tape is a cheaper storage medium than disks. Generally, tapes
are used for archiving or backing up data. Tape provides slow access to
data, as it accesses data sequentially from the start. Thus, tape storage
is also known as sequential-access storage, while disk storage is known as
direct-access storage, since we can directly access the data from any
location on disk.

Storage Hierarchy
Besides the above, various other storage devices reside in the computer
system. These storage media are organized on the basis of data access speed,
cost per unit of data, and reliability. Thus, we can create a hierarchy of
storage media on the basis of cost and speed.

In this hierarchy, the higher levels are expensive but fast. Moving down,
the cost per bit decreases and the access time increases. Also, the storage
media from main memory upward are volatile, while everything below main
memory is non-volatile.

Recovery and Atomicity

Recovery procedures in a DBMS ensure the database's atomicity and
durability. They make sure that the data is always recoverable, to protect
the durability property, and that its state is retained, to protect the
atomicity property.

The atomicity property of a DBMS states that either all the operations of a
transaction must be performed or none. The modifications done by an aborted
transaction should not be visible in the database, while the modifications
done by a committed transaction should be visible. To achieve our goal of
atomicity, the system must first output to stable storage information
describing the modifications, without modifying the database itself.
Log-based recovery is a technique used in database management systems
(DBMS) to recover a database to a consistent state in the event of a failure
or crash. It involves the use of transaction logs, which are records of all
the transactions performed on the database.

Log based Recovery in DBMS

Log and log records


The log is a sequence of log records, recording all the update activities in
the database. Logs for each transaction are maintained in stable storage.
Any operation performed on the database is recorded in the log. Prior to
performing any modification to the database, an update log record is created
to reflect that modification. An update log record, represented as <Ti, Xj,
V1, V2>, has these fields:

1. Transaction identifier: unique identifier of the transaction that
performed the write operation.

2. Data item: unique identifier of the data item written.

3. Old value: value of the data item prior to the write.

4. New value: value of the data item after the write operation.

Other types of log records are:

1. <Ti start> : It contains information about when a transaction Ti starts.

2. <Ti commit> : It contains information about when a transaction Ti commits.

3. <Ti abort> : It contains information about when a transaction Ti aborts.
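For concreteness, here is a minimal sketch of what the log for one transaction might look like, using Python tuples for the record formats above (the transaction id, item, and values are illustrative):

```python
# Log for a transaction T1 that changes item X from 100 to 150:
# <T1 start>, then the update record <T1, X, 100, 150>, then <T1 commit>.
log = [
    ("T1", "start"),
    ("T1", "X", 100, 150),  # <Ti, Xj, V1, V2>: old value 100, new value 150
    ("T1", "commit"),
]
```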

Undo and Redo Operations


Because all database modifications must be preceded by the creation of a log
record, the system has available both the old value prior to the
modification of the data item and the new value that is to be written. This
allows the system to perform undo and redo operations as appropriate:

1. Undo: using a log record, sets the data item specified in the log record
to its old value.

2. Redo: using a log record, sets the data item specified in the log record
to its new value.

The database can be modified using two approaches –

1. Deferred modification technique: If the transaction does not modify the
database until it has partially committed, it is said to use the deferred
modification technique.

2. Immediate modification technique: If database modifications occur while
the transaction is still active, it is said to use the immediate
modification technique.

Recovery using Log records


After a system crash has occurred, the system consults the log to determine
which transactions need to be redone and which need to be undone.

1. Transaction Ti needs to be undone if the log contains the record <Ti start>
but does not contain either the record <Ti commit> or the record <Ti
abort>.

2. Transaction Ti needs to be redone if the log contains the record <Ti
start> and either the record <Ti commit> or the record <Ti abort>.
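A minimal sketch of this rule, scanning an in-memory log to decide which transactions to redo and which to undo (the tuple-based log format follows the earlier sketch and is an assumption):

```python
def recover(log):
    # Classify transactions: redo those whose <commit> or <abort>
    # record is in the log, undo those that only have <start>.
    started, finished = set(), set()
    for record in log:
        txn, kind = record[0], record[1]  # update records are skipped
        if kind == "start":
            started.add(txn)
        elif kind in ("commit", "abort"):
            finished.add(txn)
    redo = started & finished
    undo = started - finished
    return redo, undo

log = [
    ("T1", "start"), ("T1", "X", 100, 150), ("T1", "commit"),
    ("T2", "start"), ("T2", "Y", 7, 9),  # crash before T2 commits
]
print(recover(log))  # ({'T1'}, {'T2'}): redo T1, undo T2
```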

Use of checkpoints: When a system crash occurs, the system must consult the
log. In principle, it would need to search the entire log to determine this
information.

There are two major difficulties with this approach:

1. The search process is time-consuming.

2. Most of the transactions that, according to our algorithm, need to be
redone have already written their updates into the database. Although
redoing them will cause no harm, it will cause recovery to take longer.

ARIES
ARIES Algorithm

ARIES is a recovery algorithm.

It stands for Algorithm for Recovery and Isolation Exploiting Semantics.

It is based on Write Ahead Log (WAL) protocol.

When the database crashes during transaction processing, we have the logs
that were saved to disk.

ARIES has 3 phases that occur in the following order -

1) Analysis:

Scan the log forward to reconstruct the transaction table and the dirty page
table. Dirty pages contain data that has been changed but not yet written
to disk.

The active transactions that were present at the time of the crash are
identified.

During the analysis phase, the log is scanned forward from the checkpoint
record to construct a snapshot of what the system looked like at the time
of the crash.

2) Redo :

This phase is started only after completion of the analysis phase.

The log is read forward and each update is redone.

3) Undo :

This phase is started after the redo phase.

The log is scanned backward and updates of the corresponding active
transactions are undone.
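Putting the three phases together, here is a high-level sketch of the ARIES passes over the log; the record layout (lsn, txn, page, kind) and the simplified checkpoint handling are assumptions for illustration:

```python
def aries_recover(log, checkpoint_pos=0):
    # 1) Analysis: scan forward from the checkpoint to rebuild the
    #    active-transaction table and the dirty page table.
    active_txns, dirty_pages = set(), {}
    for lsn, txn, page, kind in log[checkpoint_pos:]:
        if kind == "begin":
            active_txns.add(txn)
        elif kind in ("commit", "abort"):
            active_txns.discard(txn)
        elif kind == "update":
            dirty_pages.setdefault(page, lsn)  # earliest dirtying LSN

    # 2) Redo: repeat history by reapplying every logged update forward,
    #    starting from the smallest LSN in the dirty page table.
    redo_start = min(dirty_pages.values(), default=0)
    for lsn, txn, page, kind in log:
        if lsn >= redo_start and kind == "update":
            pass  # reapply the change to the page (omitted in this sketch)

    # 3) Undo: scan backward, rolling back updates of transactions that
    #    were still active at the time of the crash.
    for lsn, txn, page, kind in reversed(log):
        if txn in active_txns and kind == "update":
            pass  # undo the change and log a compensation record

    return active_txns, dirty_pages
```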

Three main principles lie behind the ARIES recovery algorithm:

Write-Ahead Logging: Any change to a database object is first recorded in
the log. The log record must be written to stable storage before the change
to the database object is written to disk.

Repeating History During Redo: On restart following a crash, ARIES retraces
all actions of the DBMS before the crash and brings the system back to the
exact state it was in at the time of the crash. Then, it undoes the actions
of transactions that were still active at the time of the crash (effectively
aborting them).

Logging Changes During Undo: Changes made to the database while undoing a
transaction are logged to ensure such actions are not repeated if multiple
failures cause repeated restarts.

Advantages:
1) It is simple and flexible.
2) It supports concurrency control protocols.
3) It allows independent recovery of every page.

Remote Backup Systems

