
DBMS Unit 3

The document outlines the course objectives and outcomes for a Database Management System class, focusing on understanding database concepts, implementing DDL, DML, and DCL statements, and transaction management. It details the creation and use of procedures, functions, and triggers in PL/SQL, including their syntax, advantages, and types. Additionally, it provides examples and references for further study.


APEX INSTITUTE OF TECHNOLOGY

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

Database Management System (22CSH-243)


Faculty: Ms. Shaveta Jain (13464)

PROCEDURES

DISCOVER . LEARN . EMPOWER
DBMS: Course Objectives
COURSE OBJECTIVES
The Course aims to:
• Understand database system concepts and design databases for different applications
and to acquire the knowledge on DBMS and RDBMS.
• Implement and understand different types of DDL, DML and DCL statements.
• Understand transaction concepts related to databases and recovery/backup techniques
required for the proper storage of data.

COURSE OUTCOMES

On completion of this course, the students shall be able to:-

CO4 Implement the package, procedures and triggers

UNIT-3
• Package, Procedures and Triggers: Parts of procedures, Parameter
modes, Advantages of procedures, Syntax for creating triggers, Types
of triggers, package specification and package body, developing a
package, Bodiless package, Advantages of packages.
• Transaction Management and Concurrency Control: Introduction
to Transaction Processing, Properties of Transactions, Serializability
and Recoverability, Need for Concurrency Control, Locking
Techniques, Time Stamping Methods, Optimistic Techniques and
Granularity of Data items.
• Database Recovery of database: Introduction, Need for Recovery,
Types of errors, Recovery Techniques.
Subprogram
• A subprogram is a program unit/module that performs a
particular task.
• These subprograms are combined to form larger
programs. This is basically called the ''Modular design''.
• A subprogram can be invoked by another subprogram or
program, which is called the calling program.
• PL/SQL subprograms are named PL/SQL blocks that can
be invoked with a set of parameters.
Types of Subprogram
PL/SQL provides two kinds of subprograms −
• Procedures − These subprograms do not return a value directly;
mainly used to perform an action.

• Functions − These subprograms return a single value; mainly
used to compute and return a value.
Procedure
• A procedure is created with the CREATE OR REPLACE PROCEDURE
statement.
• Syntax:-
CREATE [OR REPLACE] PROCEDURE procedure_name
[(parameter_name [IN | OUT | IN OUT] datatype [, ...])]
{IS | AS}
--------------------; --Declarative Part
BEGIN
< procedure_body >; --Executable Part
-----------------------;
END [procedure_name];
Example-(Non-Parameterized)
CREATE OR REPLACE PROCEDURE proc1
AS
BEGIN
dbms_output.put_line('This is my first procedure…');

END proc1;
/
Executing a Standalone Procedure
A standalone procedure can be called in two ways −
1. Using the EXECUTE keyword
2. Calling the name of the procedure from a PL/SQL block

EXAMPLE:
The above procedure named 'proc1' can be called with the EXECUTE keyword as −
SQL> EXECUTE proc1;

The procedure can also be called from another PL/SQL block −


BEGIN
proc1; --Procedure call
END;
/
Deleting a Standalone Procedure
• A standalone procedure is deleted with the DROP PROCEDURE
statement.
• Syntax:-
• DROP PROCEDURE procedure-name;

• Example:-
• DROP PROCEDURE proc1;
Parameter Modes
1. IN:
• An IN parameter lets you pass a value to the subprogram.
It is a read only parameter.
• Inside the subprogram, an IN parameter acts like a
constant. It cannot be assigned a value.
• You can pass a constant, literal, initialized variable, or
expression as an IN parameter.
• It is the default mode of parameter passing. Parameters
are passed by reference.
Parameter Modes
2. OUT
• An OUT parameter returns a value to the calling program. Inside the
subprogram, an OUT parameter acts like a variable.
• You can change its value and reference the value after assigning it.
• The actual parameter must be a variable, and it is passed by value.

3. IN OUT
• An IN OUT parameter passes an initial value to a subprogram and returns
an updated value to the caller.
• It can be assigned a value and the value can be read.
• The actual parameter corresponding to an IN OUT formal parameter
must be a variable, not a constant or an expression. The formal parameter
must be assigned a value. The actual parameter is passed by value.
Example- (Parameterized) IN and OUT Mode
DECLARE
a number; b number; c number;
PROCEDURE findMin(x IN number, y IN number, z OUT number) IS
BEGIN
IF x < y THEN
z:= x;
ELSE
z:= y;
END IF;
END; --Procedure
BEGIN
a:= 25;
b:= 15;
findMin(a, b, c);
dbms_output.put_line(' Minimum of (25, 15) : ' || c);
END;
/
Example: IN OUT Mode
DECLARE
a number;
PROCEDURE squareNum(x IN OUT number) IS
BEGIN
x := x * x;
END; --end of procedure
BEGIN
a:= 25;
squareNum(a);
dbms_output.put_line(' Square of (25): ' || a);
END;
/
Advantages of Procedures
Following are the advantages of stored procedures:
• Since stored procedures are compiled and stored, whenever you call a
procedure the response is quick.
• You can group all the required SQL statements in a procedure and execute
them at once.
• Since procedures are stored on the database server, which is faster than the
client, all the complicated queries executed through them run faster.
• Using procedures, you can avoid repetition of code moreover with these you
can use additional SQL functionalities like calling stored functions.
• Once you compile a stored procedure you can use it in any number of
applications. If any changes are needed you can just change the procedures
without touching the application code.
• You can call PL/SQL stored procedures from Java and Java Stored procedures
from PL/SQL.
References
• Ramez Elmasri and Shamkant B. Navathe, “Fundamentals of Database Systems”, The
Benjamin/Cummings Publishing Co.
• Korth and Silberschatz Abraham, “Database System Concepts”, McGraw-Hill.
• C. J. Date, “An Introduction to Database Systems”, Addison-Wesley.
• Thomas M. Connolly and Carolyn E. Begg, “Database Systems: A Practical Approach
to Design, Implementation and Management”, 5/E, University of Paisley, Addison-Wesley.

References
• https://docs.oracle.com/cd/B19306_01/server.102/b14200/state
ments_6009.htm

THANK YOU

For queries
Email: [email protected]
FUNCTIONS


Functions
• A function is the same as a procedure except that it returns a value.
• Therefore, all the discussion of the previous section is true for
functions too.
Creating a Function
• A standalone function is created using the CREATE
FUNCTION statement.
• The simplified syntax for the CREATE OR REPLACE
FUNCTION statement is as follows −
CREATE [OR REPLACE] FUNCTION function_name
[(parameter_name [IN | OUT | IN OUT] datatype [, ...])]
RETURN return_datatype
{IS | AS}
BEGIN
< function_body >
END [function_name];
Explanations
• function-name specifies the name of the function.
• [OR REPLACE] option allows the modification of an existing function.
• The optional parameter list contains name, mode and types of the
parameters. IN represents the value that will be passed from outside and
OUT represents the parameter that will be used to return a value outside
of the procedure.
• The function must contain a return statement.
• The RETURN clause specifies the data type you are going to return from
the function.
• function-body contains the executable part.
• Either the IS or the AS keyword can be used when creating a
standalone function.
Example:
• The following example illustrates how to create and call a standalone
function. This function returns the total number of employees in
the EMP table.
CREATE OR REPLACE FUNCTION totalEmp
RETURN number IS
tot number(2) := 0;
BEGIN
SELECT count(*) into tot
FROM emp;

RETURN tot;
END;
/
Calling a Function
• To call a function, you simply need to pass the required
parameters along with the function name and if the function
returns a value, then you can store the returned value.
• The following program calls the function totalEmp from an
anonymous block −
DECLARE
c number(2);
BEGIN
c := totalEmp();
dbms_output.put_line('Total no. of Employees: ' || c);
END;
/
Example-2
DECLARE
a number; b number; c number;
FUNCTION findMax(x IN number, y IN number) RETURN number IS
z number;
BEGIN
IF x > y THEN z:= x;
ELSE z:= y;
END IF;
RETURN z;
END;
BEGIN
a:= 23; b:= 45;
c := findMax(a, b);
dbms_output.put_line(' Maximum of (23,45): ' || c);
END;
/
Recursive Functions
• We have seen that a program or subprogram may call another
subprogram.
• When a subprogram calls itself, it is referred to as a recursive call and the
process is known as recursion.

• To illustrate the concept, let us calculate the factorial of a number.


Factorial of a number n is defined as −

n! = n*(n-1)!
= n*(n-1)*(n-2)!
...
= n*(n-1)*(n-2)*(n-3)... 1
Example Program:-
DECLARE
n number;
f number;
FUNCTION fact(x number) RETURN number IS --definition of function
f number;
BEGIN
IF x=0 THEN f := 1;
ELSE f := x * fact(x-1);
END IF;
RETURN f;
END; --end of function
BEGIN
n:= 6;
f := fact(n); --calling the functions
dbms_output.put_line(' Factorial '|| n || ' is ' || f);
END;
/
Advantages of Stored Procedure/Functions
Following are some advantages of stored procedure and function in
PL/SQL:
1. Improves Database Performance
• Compilation is done automatically by the Oracle engine.
• Whenever a procedure or function is called, the Oracle engine loads the
compiled code into a memory area called the System Global Area (SGA),
due to which execution becomes faster.
2. Provides Reusability and Avoids Redundancy
• The same block of code for a procedure or function can be called any
number of times to work on multiple data.
• As a result, the same lines of code need not be written repeatedly.
Advantages of Stored Procedure/Functions
3. Maintains Integrity
• Integrity means accuracy. Use of procedure or function ensures integrity
because they are stored as database objects by the oracle engine.
4. Maintains Security
• Use of stored procedure or function helps in maintaining the security of
the database as access to them and their usage can be controlled by
granting access/permission to users while the permission to change or to
edit or to manipulate the database may not be granted to users.
5. Saves Memory
• Stored procedures and functions use shared memory. This saves memory,
as a single copy of a procedure or function can be loaded for execution by
any number of users who have access permission.

Triggers


Triggers
• Triggers in Oracle are blocks of PL/SQL code which the Oracle
engine can execute automatically based on some action or
event.
• These events can be:
DDL statements (CREATE, ALTER, DROP, TRUNCATE)
DML statements (INSERT, UPDATE, DELETE)
Database operations like connecting to or disconnecting from Oracle (LOGON,
LOGOFF, SHUTDOWN)
Parts of a Trigger
Whenever a trigger is created, it contains the following
three sequential parts:
1. Triggering Event or Statement

2. Trigger Restriction

3. Trigger Action
Types of Triggers

Triggers can be classified into three categories:


1. Level Triggers
2. Event Triggers
3. Timing Triggers

• Each category is further divided into different types.


1. Level Triggers
There are 2 different types of level triggers, they are:
1. ROW LEVEL TRIGGERS
• It fires for every record affected by the execution of a DML
statement such as INSERT, UPDATE or DELETE.
• It always uses a FOR EACH ROW clause in the triggering statement.
2. STATEMENT LEVEL TRIGGERS
• It fires once for each statement that is executed.
2. Event Triggers
There are 3 different types of event triggers, they are:
1. DDL EVENT TRIGGER
• It fires with the execution of every DDL statement(CREATE, ALTER, DROP,
TRUNCATE).
2. DML EVENT TRIGGER
• It fires with the execution of every DML statement(INSERT, UPDATE,
DELETE).
3. DATABASE EVENT TRIGGER
• It fires with the execution of every database operation which can be
LOGON, LOGOFF, SHUTDOWN, SERVERERROR etc.
3. Timing Triggers
There are 2 different types of timing triggers, they are:
1. BEFORE TRIGGER
• It fires before the DML statement executes.
• The triggering statement may or may not execute, depending on the
BEFORE condition block.

2. AFTER TRIGGER
• It fires after the DML statement executes.
Syntax for creating Triggers
CREATE OR REPLACE TRIGGER <trigger_name>
BEFORE/AFTER/INSTEAD OF
INSERT/DELETE/UPDATE ON <table_name>
REFERENCING OLD AS O NEW AS N
FOR EACH ROW WHEN (test_condition)
DECLARE
-- Variable declaration;
BEGIN
-- Executable statements;
EXCEPTION
-- Error handling statements;
END <trigger_name>;
Explanation
• CREATE OR REPLACE TRIGGER is a keyword used to create a trigger
and <trigger_name> is user-defined where a trigger can be given a name.
• BEFORE/AFTER/INSTEAD OF specifies the timing of the trigger's
occurrence. INSTEAD OF is used when the trigger is created on a view.
• INSERT/UPDATE/DELETE specifies the DML statement.
• <table_name> specifies the name of the table on which the DML statement is to be
applied.
• REFERENCING is a keyword used to provide reference to old and new values for
DML statements.
• FOR EACH ROW is the clause used to specify a row-level trigger.
• WHEN is a clause used to specify condition to be applied and is only applicable
for row-level trigger.
• DECLARE, BEGIN, EXCEPTION, END are the different sections of PL/SQL code
block containing variable declaration, executable statements, error handling
statements and marking end of PL/SQL block respectively where DECLARE and
EXCEPTION part are optional.
Example Programs!!! Before Insert
1. CREATE TABLE TEST(num number);
2. CREATE OR REPLACE TRIGGER mytrig1 BEFORE INSERT ON Test FOR
EACH ROW
BEGIN
DBMS_OUTPUT.PUT_LINE('The control is inside MyTrig1');
DBMS_OUTPUT.PUT_LINE('Before insert of '||:NEW.num);
END;
3. INSERT INTO TEST VALUES(1);
Output:
Before insert of 1
1 row(s) inserted.
Example Programs!!! After insert Trigger
(a) CREATE OR REPLACE TRIGGER mytrig2
AFTER INSERT ON Test FOR EACH ROW
BEGIN
DBMS_OUTPUT.PUT_LINE('The control is inside MyTrig2');
DBMS_OUTPUT.PUT_LINE('After insert of '||:NEW.num);
END;
/
(b) INSERT INTO Test VALUES(2);
Another Example!!!
• CREATE OR REPLACE TRIGGER test_insert_before BEFORE
INSERT ON Test FOR EACH ROW
BEGIN
If :NEW.num>0 then
DBMS_OUTPUT.PUT_LINE('This is +ve No. '||:NEW.num);
else
DBMS_OUTPUT.PUT_LINE('This is -ve No. '||:NEW.num);
end if;
END;
• Input: INSERT INTO TEST VALUES(-5);
• Output: This is -ve No. -5
Defining Your Own Error Messages:
RAISE_APPLICATION_ERROR()
• The procedure RAISE_APPLICATION_ERROR lets you issue user-
defined ORA- error messages from stored subprograms. That way, you
can report errors to your application and avoid returning unhandled
exceptions.
• To call RAISE_APPLICATION_ERROR, use the following Syntax
RAISE_APPLICATION_ERROR(error_number, message[, {TRUE | FALSE}]);

Where,
• error_number is a negative integer in the range -20000 .. -20999
• message is a character string up to 2048 bytes long.
Example Program!!!
CREATE OR REPLACE TRIGGER mytrig1 BEFORE INSERT ON test FOR
EACH ROW
BEGIN
IF :New.num > 0 THEN
DBMS_OUTPUT.PUT_LINE('This is my trigger-1, which is executed before
inserting value '||:NEW.num||' Into the table');
ELSE
RAISE_APPLICATION_ERROR(-20001, 'Negative numbers are not allowed, please
insert +ve values...');
END IF;
END;
Example…
CREATE OR REPLACE TRIGGER CheckAge
BEFORE INSERT OR UPDATE ON student FOR EACH ROW
BEGIN
IF :new.Age > 30 THEN
RAISE_APPLICATION_ERROR(-20001, 'Age should not be greater than 30');
END IF;
END;

• Student(rn, name, age)


Let's take a few examples and try to understand this,
Example 1:
• INSERT into STUDENT values(16, 'Saina', 32);
• Output: Age should not be greater than 30
Example 2:
• INSERT into STUDENT values(17, 'Anna', 22);
• Output: 1 row created
Example 3:
• UPDATE STUDENT set age=31 where ROLLNO=12;
• Output: Age should not be greater than 30
Example 4:
• UPDATE STUDENT set age=23 where ROLLNO=12;
• Output: 1 row updated.
Example:- Statement Trigger
CREATE OR REPLACE TRIGGER mytrig1 BEFORE DELETE OR INSERT OR UPDATE
ON emp
BEGIN
RAISE_APPLICATION_ERROR(-20500, 'table is secured');

END;
Example:- Statement Trigger
CREATE OR REPLACE TRIGGER mytrig4 BEFORE DELETE OR INSERT OR UPDATE
ON emp
BEGIN
IF (TO_CHAR(SYSDATE, 'DY') IN ('SAT', 'SUN')) OR (TO_CHAR(SYSDATE, 'HH24:MI') NOT
BETWEEN '08:30' AND '18:30') THEN
RAISE_APPLICATION_ERROR(-20500, 'table is secured');
END IF;
END;

/*The above example shows a trigger that limits the DML actions to the employee
table to weekdays from 8.30am to 6.30pm. If a user tries to insert/update/delete a
row in the EMPLOYEE table, a warning message will be prompted.*/
Simple Log of Ex_Stu
CREATE OR REPLACE TRIGGER mytrig2 AFTER DELETE ON stu
FOR EACH ROW
BEGIN
INSERT INTO ex_stu VALUES (:old.rn, :old.name,:old.age, sysdate);
END;
CREATE OR REPLACE TRIGGER mytrig5
AFTER DELETE OR INSERT OR UPDATE ON employee FOR EACH ROW
BEGIN
IF DELETING THEN
INSERT INTO xemployee (emp_ssn, emp_last_name,emp_first_name, deldate)
VALUES (:old.emp_ssn, :old.emp_last_name,:old.emp_first_name, sysdate);
ELSIF INSERTING THEN
INSERT INTO nemployee (emp_ssn, emp_last_name,emp_first_name, adddate)
VALUES (:new.emp_ssn, :new.emp_last_name,:new.emp_first_name, sysdate);
ELSE
INSERT INTO uemployee (emp_ssn, emp_address, up_date)
VALUES (:old.emp_ssn, :new.emp_address, sysdate);
END IF;
END;
Check the log now,
• SQL> DELETE FROM employee WHERE ename = 'Joshi';
1 row deleted.
• SQL> SELECT * FROM xemployee;
Enabling or Disabling the Triggers
• SQL>ALTER TRIGGER trigger_name DISABLE;
• SQL>ALTER TABLE table_name DISABLE ALL TRIGGERS;

To enable a trigger which is disabled, we can use the following syntax:
• SQL> ALTER TRIGGER trigger_name ENABLE;

All triggers can be enabled for a specific table by using the following
command:
• SQL> ALTER TABLE table_name ENABLE ALL TRIGGERS;

To delete a trigger:
• SQL> DROP TRIGGER trigger_name;
Uses of Triggers
• Maintaining complex constraints
• Recording the changes made on the table.
• Automatically generating primary key values.
• Prevent invalid transactions to occur.
• Granting authorization and providing security to
database.
• Enforcing referential integrity.
• Audit data modification
• Log events transparently
References
• https://docs.oracle.com/database/121/LNPLS/packages.h
tm


Packages in PL/SQL


Packages in PL/SQL
• A package is a way of logically storing subprograms like
procedures, functions, exceptions or cursors in a single
common unit.

• A package can be defined as an Oracle object that is
compiled and stored in the database.

• Once it is compiled and stored in the database, it can be
used by all database users who have execute permission
on it.
Components of Package

Package has two basic components:


• Specification: It is the declaration section of a
Package
• Body: It is the definition section of a Package.
Creating Package
STEP 1: Package specification or declaration
• It mainly comprises the following:
• Package name.
• Variable/constant/cursor/procedure/function/exception
declarations.
• These declarations are global to the package.
Syntax: Package specification or declaration
CREATE OR REPLACE PACKAGE <package_name> IS/AS
FUNCTION <function_name> (<list of arguments>)
RETURN <datatype>;
PROCEDURE <procedure_name> (<list of arguments>);
-- code statements
END <package_name>;
STEP 2: Package Body

It mainly comprises the following:

• The definitions of the procedures, functions or
cursors that are declared in the package specification.
• The subprogram bodies containing the executable
statements for which the package has been created.
Here is the syntax:
Syntax- Package Body
CREATE OR REPLACE PACKAGE BODY <package_name> IS/AS
FUNCTION <function_name> (<list of arguments>) RETURN <datatype>IS/AS
-- local variable declaration;
BEGIN
-- executable statements;
EXCEPTION
-- error handling statements;
END <function_name>;

PROCEDURE <procedure_name> (<list of arguments>)IS/AS


-- local variable declaration;
BEGIN
-- executable statements;
EXCEPTION
-- error handling statements;
END <procedure_name>;
END <package_name>;
Where,
• CREATE OR REPLACE PACKAGE BODY are keywords used to create the
package with a body.
• FUNCTION and PROCEDURE are keywords used to define function and
procedure while creating package.
• <package_name>, <function_name>, <procedure_name> are user-
defined.
• IS/AS are keywords used to define the body of package, function and
procedure.
• RETURN is a keyword specifying value returned by the function defined.
• DECLARE, BEGIN, EXCEPTION, END are the different sections of PL/SQL
code block containing variable declaration, executable statements, error
handling statements and marking end of PL/SQL block respectively
where DECLARE and EXCEPTION part are optional.
Referring A Package Object
• Note: Creating a package only defines it; to use it we must
refer to its objects using the package name.

• Syntax: Following is the syntax for referring to a package
object:
Packagename.objectname;
• The object can be a function, procedure, cursor or exception that
has been declared in the package specification and defined in
the package body; the above syntax is used to access its
executable statements.
Example: PL/SQL code for package specification:
CREATE OR REPLACE PACKAGE pkg_student IS
PROCEDURE updateRecord(sno student.rollno%type);
FUNCTION deleteRecord(snm student.sname%type)
RETURN boolean;
END pkg_student;

Output: Package is created.


PL/SQL code for package body:
CREATE OR REPLACE PACKAGE BODY pkg_student IS
PROCEDURE updateRecord(sno student.rollno%type) IS
BEGIN
Update student set age=23 where rollno=sno;
IF SQL%FOUND THEN
dbms_output.put_line('RECORD UPDATED');
ELSE
dbms_output.put_line('RECORD NOT FOUND');
END IF;
END updateRecord;
FUNCTION deleteRecord(snm student.sname%type) RETURN boolean IS
BEGIN
Delete from student where sname=snm;
RETURN SQL%FOUND;
END deleteRecord;
END pkg_student;
Calling the Procedure and Function
set serveroutput on;
DECLARE
sno student.rollno%type;
s_age student.age%type;
snm student.sname%type;
BEGIN
sno := &sno;
snm := '&snm';
pkg_student.updateRecord(sno);
IF pkg_student.deleteRecord(snm) THEN
dbms_output.put_line('RECORD DELETED');
ELSE
dbms_output.put_line('RECORD NOT FOUND');
END IF;
END;
Error!!!
• Note: If the package specification or package body has
been created with compilation errors, the following
warning message is displayed on the screen:
• WARNING: Package Body created with compilation errors.
• In that case, the errors can be seen by executing following
statement:
SHOW ERRORS;
Benefits of using Package
Following are some of the benefits of packages in
PL/SQL:
• REUSABILITY

• OVERLOADING

• CREATING MODULES

• IMPROVES PERFORMANCE

• GLOBAL DECLARATION

TRANSACTIONS


COURSE OUTCOMES

On completion of this course, the students shall be able to:-

CO5 Understand the concept of transaction processing and concurrency control

Table of Contents
1. Introduction to Transaction Processing
2. Properties of Transactions
3. Serializability and Recoverability
4. Need for Concurrency Control
5. Locking Techniques
6. Time Stamping Methods
7. Optimistic Techniques and Granularity of Data items
Transaction
• A transaction is an event which occurs on the database.
• Generally a transaction reads a value from the database or
writes a value to the database.
ACID Properties:
• Every transaction, for whatever purpose it is being used, has the following
four properties.
• Taking the initial letters of these four properties we collectively call them the
ACID Properties.
• Here we try to describe them and explain them.
1. Atomicity
2. Consistency
3. Isolation
4. Durability
ACID Properties:
1. ATOMICITY
• This means that either all of the instructions within the transaction will be
reflected in the database, or none of them will be reflected.
• Say, for example, we have two accounts A and B, each containing Rs 1000/-.
We now start a transaction to transfer Rs 100/- from account A to account B.
Read A;
A = A – 100;
Write A;
Read B;
B = B + 100;
Write B;
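The transfer above can be sketched in plain Python to make the atomicity rule concrete. This is a hedged, in-memory illustration only (the dictionary, the `transfer` function and the failure condition are invented for the example, not how a DBMS implements transactions): either both writes survive, or the rollback restores the before image.

```python
# Hypothetical in-memory "accounts"; both start at Rs 1000, as in the slide.
accounts = {"A": 1000, "B": 1000}

def transfer(src, dst, amount):
    """Apply the transfer atomically: either both writes happen or neither."""
    snapshot = dict(accounts)          # remember the state before the transaction
    try:
        accounts[src] -= amount        # Read A; A = A - amount; Write A
        if accounts[src] < 0:          # simulate a failure mid-transaction
            raise ValueError("insufficient funds")
        accounts[dst] += amount        # Read B; B = B + amount; Write B
    except ValueError:
        accounts.clear()
        accounts.update(snapshot)      # ROLLBACK: undo the partial write

transfer("A", "B", 100)    # succeeds: A=900, B=1100
transfer("A", "B", 5000)   # fails mid-way and rolls back: balances unchanged
```

After both calls the balances are A=900, B=1100: the failed transfer left no partial update behind, which is exactly what atomicity demands.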
ACID Properties:
2. CONSISTENCY
• Each user is responsible for ensuring that their transaction (if
executed by itself) would leave the database in a consistent state.
• Whether we execute a particular transaction in isolation or together
with other transactions (i.e. presumably in a multi-programming
environment), it must take the database from one consistent state
to another.
ACID Properties:
3. ISOLATION
The final effects of multiple simultaneous transactions must be
the same as if they were executed one right after the other

4. DURABILITY
If a transaction has been committed, the DBMS must ensure that
its effects are permanently recorded in the database (even if the
system crashes)
Transaction States:
There are the following six states in which a transaction may exist:
• Active: The initial state when the transaction has just started
execution.
• Partially Committed: At any given point of time, if the
transaction is executing properly, then it is moving towards its
COMMIT POINT. The values generated during the execution are
all stored in volatile storage.
• Failed: If the transaction fails for some reason, the temporary
values are no longer required, and the transaction is set
to ROLLBACK. This means that any change made to the database
by this transaction up to the point of the failure must be undone.
Transaction States:
• Aborted: When the ROLLBACK operation is over, the database
reaches the BFIM (Before Image, i.e. the state before the
transaction started). The transaction is now said to have been
aborted.
• Committed: If no failure occurs then the transaction reaches the
COMMIT POINT. All the temporary values are written to the
stable storage and the transaction is said to have been
committed.
• Terminated: Either committed or aborted, the transaction
finally reaches this state.
Concurrency Control Algorithms:
• Locking
A transaction “locks” a database object to prevent another
transaction from modifying the object
• Time-Stamping
Assign a global unique time stamp to each transaction
• Optimistic
Assumption that most database operations do not conflict
Concurrency Control Algorithms:
Locking:
• Lock guarantees exclusive use of data item to current transaction
• Prevents reading Inconsistent Data
• Lock Manager is responsible for assigning and policing the locks
used by the transaction
Locking Granularity:
Indicates the level of lock use
• Database Level – Entire Database is Locked
• Table Level – Entire Table is Locked
• Page Level – Locks an Entire Disk page
(Most Frequently Used)
• Row Level – Locks Single Row of Table
• Field Level – Locks a Single Attribute of a Single Row (Rarely
Done)
Types of Locks:
Binary:
• Binary Locks – a lock with two states
• Locked – no other transaction can use that object
• Unlocked – any transaction can lock and use the object
• All transactions require a lock and unlock operation for each object accessed (handled by the DBMS)
• Eliminates lost updates
• Too restrictive to yield optimal concurrency conditions
Types of Locks: Shared / Exclusive Locks:
• Indicates the Nature of the Lock
• Shared Lock – Concurrent Transactions are granted READ access on the basis
of a common lock.
• Exclusive Lock – Access is reserved for the transaction that locked the object.
• 3 States: Unlocked, Shared (Read), Exclusive (Write)
• More Efficient Data Access Solution
• More Overhead for Lock Manager
• Type of lock needed must be known
• 3 Operations:
• Read_Lock – Check to see the type of lock
• Write_Lock – Issue a Lock
• Unlock – Release a Lock
• Allow Upgrading / Downgrading of Locks
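A minimal sketch of the three-state shared/exclusive scheme: a request is granted only if it is compatible with every lock already held on the item. The encoding is illustrative:

```python
# Shared/exclusive lock compatibility per data item:
# two shared locks coexist; an exclusive lock is compatible with nothing.
COMPATIBLE = {("S", "S"): True, ("S", "X"): False,
              ("X", "S"): False, ("X", "X"): False}

def can_grant(held, requested):
    """held: lock modes already granted on the item; requested: 'S' or 'X'."""
    return all(COMPATIBLE[(h, requested)] for h in held)

assert can_grant(["S", "S"], "S")   # many readers may share the item
assert not can_grant(["S"], "X")    # a writer must wait for readers
assert not can_grant(["X"], "S")    # an exclusive lock blocks everyone
```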
Problems with Locking:
• Transaction Schedule May Not be Serializable
• Can be solved with 2-Phase Locking
• May Cause Deadlocks
• A deadlock is caused when 2 transactions wait for each other to unlock data
Two Phase Locking:
Defines how transactions Acquire and Relinquish Locks
1. Growing Phase – The transaction acquires all locks (doesn’t
unlock any data)
2. Shrinking Phase – The transaction releases locks (doesn’t
lock any additional data)
• A transaction acquires all the locks it needs until it reaches its
locked point
• Once locked, data is modified and the locks are released
Deadlocks:
• Occur when two transactions exist in the following mode:
T1 = access data items X and Y
T2 = access data items Y and X
If T1 does not unlock Y, T2 cannot begin.
If T2 does not unlock X, T1 cannot continue.
T1 and T2 wait indefinitely for each other to unlock data.
• Deadlocks are only possible if a transaction wants an Exclusive
Lock (no deadlocks on Shared Locks)
Controlling Deadlocks:
• Prevention – a transaction requesting a new lock is aborted if
there is the possibility of a deadlock; the transaction is rolled back,
its locks are released, and the transaction is rescheduled
• Detection – periodically test the database for deadlocks; if a
deadlock is found, abort/roll back one of the transactions
• Avoidance – requires a transaction to obtain all the locks it needs
before it can execute; requires locks to be obtained in succession
Timestamp Ordering Protocol
• The Timestamp Ordering Protocol is used to order the
transactions based on their Timestamps. The order of transaction
is nothing but the ascending order of the transaction creation.
• The older transaction has the higher priority, so it executes first.
To determine the timestamp of a transaction, this protocol uses
the system time or a logical counter.
• The lock-based protocol is used to manage the order between
conflicting pairs among transactions at the execution time.
• But Timestamp based protocols start working as soon as a
transaction is created.
Timestamp Ordering Protocol
• Let's assume there are two transactions, T1 and T2. Suppose
transaction T1 entered the system at time 007 and transaction T2
entered the system at time 009. T1 has the higher priority, so it
executes first, as it entered the system first.
• The timestamp ordering protocol also maintains the timestamp of
last 'read' and 'write' operation on a data.
Advantages and Disadvantages:
• The TS protocol ensures freedom from deadlock, which means no
transaction ever waits.
• But the schedule may not be recoverable.
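The basic timestamp-ordering rules can be sketched as follows. Each data item keeps the timestamps of its last read and last write, and an operation that arrives "too late" rolls its transaction back (this is the basic protocol, not Thomas's write rule; the variable names are this example's own):

```python
# Basic timestamp ordering: per-item read/write timestamps.
rts, wts = {}, {}   # read timestamp and write timestamp per data item

def read(ts, item):
    if ts < wts.get(item, 0):
        return "rollback"            # item already overwritten by a younger txn
    rts[item] = max(rts.get(item, 0), ts)
    return "ok"

def write(ts, item):
    if ts < rts.get(item, 0) or ts < wts.get(item, 0):
        return "rollback"            # a younger txn already read/wrote the item
    wts[item] = ts
    return "ok"

print(read(7, "Q"))    # T1 (ts = 007) reads Q          -> ok
print(write(9, "Q"))   # T2 (ts = 009) writes Q         -> ok
print(write(7, "Q"))   # T1 tries to write Q too late   -> rollback
```

The last call illustrates the deadlock-freedom trade-off above: T1 never waits, it is simply rolled back.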
Optimistic Method:
• Most database operations do not conflict
• No locking or time stamping
• Transactions execute until commit
• Read Phase – Read database, execute computations, make local
updates (temporary update file)
• Validate Phase – the transaction is validated to ensure its changes
will not affect the integrity of the database
• If validated → go to Write Phase
• If not validated → restart the transaction and discard the initial changes
• Write Phase – Commit Changes to database
• Good for Read / Query Databases (Few Updates)
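The validate phase can be sketched as a simplified backward validation: the committing transaction passes only if no transaction that committed during its read phase wrote an item it read. The set encoding is an assumption of this example:

```python
# Simplified backward validation for optimistic concurrency control:
# the committing transaction's read set must not intersect the write set
# of any transaction that committed while it was running.
def validate(read_set, overlapping_committed_write_sets):
    return all(not (read_set & ws) for ws in overlapping_committed_write_sets)

# T read {A, B}; meanwhile another transaction committed a write to {C}:
print(validate({"A", "B"}, [{"C"}]))         # True  -> go to write phase
# ...but a committed write to {B} invalidates T, so it must restart:
print(validate({"A", "B"}, [{"B", "D"}]))    # False -> restart transaction
```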
References
• Ramez Elmasri and Shamkant B. Navathe, “Fundamentals of Database Systems”, The
Benjamin/Cummings Publishing Co.
• Korth and Silberschatz Abraham, “Database System Concepts”, McGraw-Hill.
• C.J. Date, “An Introduction to Database Systems”, Addison-Wesley.
• Thomas M. Connolly and Carolyn E. Begg, “Database Systems: A Practical Approach
to Design, Implementation and Management”, 5/E, University of Paisley, Addison-
Wesley.

THANK YOU

For queries
Email: [email protected]
APEX INSTITUTE OF TECHNOLOGY
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

Database Management System (22CSH-243)
Faculty: Ms. Shaveta Jain (13464)

Serializability & Concurrency Control

COURSE OUTCOMES
On completion of this course, the students shall be able to:
CO5 Understand the concept of transaction processing and concurrency control
Serializability
• Basic Assumption – Each transaction preserves database
consistency.
• Thus, serial execution of a set of transactions preserves
database consistency.
• A (possibly concurrent) schedule is serializable if it is
equivalent to a serial schedule. Different forms of
schedule equivalence give rise to the notions of:
1. conflict serializability
2. view serializability
Simplified view of transactions
• We ignore operations other than read and write
instructions
• We assume that transactions may perform arbitrary
computations on data in local buffers in between reads
and writes.
• Our simplified schedules consist of only read and write
instructions.
Conflicting Instructions
• Let li and lj be two Instructions of transactions Ti and Tj
respectively. Instructions li and lj conflict if and only if
there exists some item Q accessed by both li and lj, and at
least one of these instructions wrote Q.
1. li = read(Q), lj = read(Q). li and lj don’t conflict.
2. li = read(Q), lj = write(Q). They conflict.
3. li = write(Q), lj = read(Q). They conflict
4. li = write(Q), lj = write(Q). They conflict
• Intuitively, a conflict between li and lj forces a (logical)
temporal order between them.
• If li and lj are consecutive in a schedule and they do not conflict,
their results would remain the same even if they had been
interchanged in the schedule.
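The four cases above can be captured in a small predicate; the (transaction, operation, item) triple encoding is this example's own:

```python
# Two instructions conflict iff they belong to different transactions,
# access the same item Q, and at least one of them writes Q (cases 2-4).
def conflicts(i1, i2):
    t1, op1, q1 = i1
    t2, op2, q2 = i2
    return t1 != t2 and q1 == q2 and ("write" in (op1, op2))

assert not conflicts(("T1", "read", "Q"), ("T2", "read", "Q"))    # case 1: no conflict
assert conflicts(("T1", "read", "Q"), ("T2", "write", "Q"))       # case 2: conflict
assert conflicts(("T1", "write", "Q"), ("T2", "write", "Q"))      # case 4: conflict
assert not conflicts(("T1", "write", "Q"), ("T2", "write", "R"))  # different items
```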
Conflict Serializability
• If a schedule S can be transformed into a schedule S´
by a series of swaps of non-conflicting instructions,
we say that S and S´ are conflict equivalent.
• We say that a schedule S is conflict serializable if it
is conflict equivalent to a serial schedule
Conflict Serializability (Cont.)
• Schedule 3 can be transformed into Schedule 6 -- a serial schedule where T2
follows T1, by a series of swaps of non-conflicting instructions. Therefore,
Schedule 3 is conflict serializable.

Schedule 3 Schedule 6
Conflict Serializability (Cont.)
Example of a schedule that is not conflict serializable:
• We are unable to swap instructions in the above schedule
to obtain either the serial schedule < T3, T4 >, or the serial
schedule < T4, T3 >.
Precedence Graph
• Consider some schedule of a set of transactions T1, T2, ...,
Tn
• Precedence graph — a directed graph where the vertices
are the transactions (names).
• We draw an arc from Ti to Tj if the two transactions
conflict, and Ti accessed the data item on which the
conflict arose earlier.
• We may label the arc by the item that was accessed.
• Example
Testing for Conflict Serializability
• A schedule is conflict serializable if and only if its
precedence graph is acyclic.
• Cycle-detection algorithms exist which take order n²
time, where n is the number of vertices in the graph.
• (Better algorithms take order n + e, where e is the
number of edges.)
• If precedence graph is acyclic, the serializability order
can be obtained by a topological sorting of the graph.
• That is, a linear order consistent with the partial
order of the graph.
• For example, a serializability order for the schedule
(a) would be one of either (b) or (c)
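The whole test can be sketched end to end: build the precedence graph from a schedule and check it for a cycle with a simple depth-first search. The schedule encoding is illustrative, and this uses a naive search rather than one of the faster algorithms mentioned above:

```python
# Conflict-serializability test: a schedule is a time-ordered list of
# (transaction, operation, item) triples.
def conflict_serializable(schedule):
    edges = set()
    for i, (ti, op_i, qi) in enumerate(schedule):
        for tj, op_j, qj in schedule[i + 1:]:
            if ti != tj and qi == qj and "write" in (op_i, op_j):
                edges.add((ti, tj))          # Ti accessed the item earlier
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    # Depth-first search for a cycle in the precedence graph
    def reachable(src, dst, seen=()):
        return any(n == dst or (n not in seen and reachable(n, dst, seen + (n,)))
                   for n in graph.get(src, ()))
    return not any(reachable(t, t) for t in graph)

# T1 fully precedes T2: serializable (equivalent to < T1, T2 >)
s_ok = [("T1", "read", "A"), ("T1", "write", "A"),
        ("T2", "read", "A"), ("T2", "write", "A")]
# Both read before either writes: arcs T1 -> T2 and T2 -> T1, a cycle
s_bad = [("T1", "read", "A"), ("T2", "read", "A"),
         ("T1", "write", "A"), ("T2", "write", "A")]
print(conflict_serializable(s_ok), conflict_serializable(s_bad))  # True False
```

When the graph is acyclic, a topological sort of it (not shown) gives the equivalent serial order.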
Recoverable Schedules
• Recoverable schedule — if a transaction Tj reads a data item previously
written by a transaction Ti , then the commit operation of Ti must appear
before the commit operation of Tj.
• The following schedule is not recoverable if T9 commits immediately after the
read(A) operation.

• If T8 should abort, T9 would have read (and possibly shown to the user) an
inconsistent database state. Hence, database must ensure that schedules are
recoverable.
Cascading Rollbacks
• Cascading rollback – a single transaction failure leads to a
series of transaction rollbacks. Consider the following
schedule where none of the transactions has yet committed
(so the schedule is recoverable).

If T10 fails, T11 and T12 must also be rolled back.
Cascadeless Schedules
• Cascadeless schedules — for each pair of transactions Ti
and Tj such that Tj reads a data item previously written by
Ti, the commit operation of Ti appears before the read
operation of Tj.
• Every cascadeless schedule is also recoverable
• It is desirable to restrict the schedules to those that are
cascadeless
• Example of a schedule that is NOT cascadeless
Concurrency Control
• A database must provide a mechanism that will ensure that all
possible schedules are both:
• Conflict serializable.
• Recoverable and preferably cascadeless
• A policy in which only one transaction can execute at a time
generates serial schedules, but provides a poor degree of
concurrency
• Concurrency-control schemes tradeoff between the amount of
concurrency they allow and the amount of overhead that they
incur
• Testing a schedule for serializability after it has executed is a
little too late!
• Tests for serializability help us understand why a concurrency control
protocol is correct
Weak Levels of Consistency
• Some applications are willing to live with weak levels of
consistency, allowing schedules that are not serializable
• E.g., a read-only transaction that wants to get an approximate total
balance of all accounts
• E.g., database statistics computed for query optimization can be
approximate (why?)
• Such transactions need not be serializable with respect to other
transactions
• Tradeoff accuracy for performance
Transaction Definition in SQL
• Data manipulation language must include a construct for
specifying the set of actions that comprise a transaction.
• In SQL, a transaction begins implicitly.
• A transaction in SQL ends by:
• Commit work commits current transaction and begins a new
one.
• Rollback work causes current transaction to abort.
• In almost all database systems, by default, every SQL
statement also commits implicitly if it executes
successfully
• Implicit commit can be turned off by a database directive
• E.g. in JDBC, connection.setAutoCommit(false);
Other Notions of Serializability
View Serializability
• Let S and S´ be two schedules with the same set of transactions. S and S´
are view equivalent if the following three conditions are met, for each data
item Q,
1. If in schedule S, transaction Ti reads the initial value of Q, then in
schedule S’ also transaction Ti must read the initial value of Q.
2. If in schedule S transaction Ti executes read(Q), and that value was
produced by transaction Tj (if any), then in schedule S’ also
transaction Ti must read the value of Q that was produced by the
same write(Q) operation of transaction Tj .
3. The transaction (if any) that performs the final write(Q) operation in
schedule S must also perform the final write(Q) operation in
schedule S’.
• As can be seen, view equivalence is also based purely on reads and writes
alone.
View Serializability (Cont.)
• A schedule S is view serializable if it is view equivalent to a serial schedule.
• Every conflict serializable schedule is also view serializable.
• Below is a schedule which is view-serializable but not conflict serializable.
• What serial schedule is the above equivalent to?
• Every view serializable schedule that is not conflict serializable has blind
writes.
Test for View Serializability
• The precedence graph test for conflict serializability cannot be used directly
to test for view serializability.
• Extension to test for view serializability has cost exponential in the size of
the precedence graph.
• The problem of checking if a schedule is view serializable falls in the class of
NP-complete problems.
• Thus, existence of an efficient algorithm is extremely unlikely.
• However, practical algorithms that just check some sufficient conditions for
view serializability can still be used.
More Complex Notions of Serializability
• The schedule below produces the same outcome as the serial schedule < T1, T5 >,
yet is not conflict equivalent or view equivalent to it.

• If we start with A = 1000 and B = 2000, the final result is 960 and 2040
• Determining such equivalence requires analysis of operations other than read and
write.
APEX INSTITUTE OF TECHNOLOGY
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

Database Management System (22CSH-243)
Faculty: Ms. Shaveta Jain (13464)

Deadlock & Granularity
Outline
• Lock-Based Protocols
• Timestamp-Based Protocols
• Validation-Based Protocols
• Multiple Granularity
• Multiversion Schemes
• Insert and Delete Operations
• Concurrency in Index Structures
Lock-Based Protocols
• A lock is a mechanism to control concurrent access to a data item
• Data items can be locked in two modes :
1. exclusive (X) mode. Data item can be both read as well as
written. X-lock is requested using lock-X instruction.
2. shared (S) mode. Data item can only be read. S-lock is
requested using lock-S instruction.
• Lock requests are made to the concurrency-control manager by the
programmer. Transaction can proceed only after request is granted.
Lock-Based Protocols (Cont.)
• Lock-compatibility matrix:

        S    X
   S    ✓    ✗
   X    ✗    ✗

• A transaction may be granted a lock on an item if the requested lock is
compatible with locks already held on the item by other transactions
• Any number of transactions can hold shared locks on an item,
• But if any transaction holds an exclusive on the item no other transaction may hold any
lock on the item.
• If a lock cannot be granted, the requesting transaction is made to wait till all
incompatible locks held by other transactions have been released. The lock is
then granted.
Lock-Based Protocols (Cont.)
• Example of a transaction performing locking:
T2: lock-S(A);
read (A);
unlock(A);
lock-S(B);
read (B);
unlock(B);
display(A+B)
• Locking as above is not sufficient to guarantee serializability — if A and B
get updated in-between the read of A and B, the displayed sum would
be wrong.
• A locking protocol is a set of rules followed by all transactions while
requesting and releasing locks. Locking protocols restrict the set of
possible schedules.
The Two-Phase Locking Protocol
• This protocol ensures conflict-serializable schedules.
• Phase 1: Growing Phase
• Transaction may obtain locks
• Transaction may not release locks
• Phase 2: Shrinking Phase
• Transaction may release locks
• Transaction may not obtain locks
• The protocol assures serializability. It can be proved that the transactions
can be serialized in the order of their lock points (i.e., the point where a
transaction acquired its final lock).
The Two-Phase Locking Protocol (Cont.)
• There can be conflict serializable schedules that cannot be obtained if
two-phase locking is used.
• However, in the absence of extra information (e.g., ordering of access
to data), two-phase locking is needed for conflict serializability in the
following sense:
• Given a transaction Ti that does not follow two-phase locking, we can find a
transaction Tj that uses two-phase locking, and a schedule for Ti and Tj that is not
conflict serializable.
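The growing/shrinking rule can be checked mechanically: a transaction's sequence of lock and unlock actions is two-phase iff no lock is requested after the first unlock. A minimal sketch, with an action encoding that is this example's own:

```python
# Two-phase check: once a transaction unlocks anything (enters the
# shrinking phase), it may not acquire any further locks.
def is_two_phase(ops):
    """ops: sequence of ("lock", item) / ("unlock", item) actions."""
    shrinking = False
    for action, _item in ops:
        if action == "unlock":
            shrinking = True
        elif shrinking:          # a lock request during the shrinking phase
            return False
    return True

print(is_two_phase([("lock", "A"), ("lock", "B"),
                    ("unlock", "A"), ("unlock", "B")]))            # True
print(is_two_phase([("lock", "A"), ("unlock", "A"), ("lock", "B")]))  # False
```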
Lock Conversions
• Two-phase locking with lock conversions:
– First Phase:
• can acquire a lock-S on item
• can acquire a lock-X on item
• can convert a lock-S to a lock-X (upgrade)
– Second Phase:
• can release a lock-S
• can release a lock-X
• can convert a lock-X to a lock-S (downgrade)
• This protocol assures serializability. But still relies on the
programmer to insert the various locking instructions.
Automatic Acquisition of Locks
• A transaction Ti issues the standard read/write instruction, without
explicit locking calls.
• The operation read(D) is processed as:
if Ti has a lock on D
then
read(D)
else begin
if necessary wait until no other
transaction has a lock-X on D
grant Ti a lock-S on D;
read(D)
end
Automatic Acquisition of Locks (Cont.)
• write(D) is processed as:
if Ti has a lock-X on D
then
write(D)
else begin
if necessary wait until no other transaction has any lock on D,
if Ti has a lock-S on D
then
upgrade lock on D to lock-X
else
grant Ti a lock-X on D
write(D)
end;
• All locks are released after commit or abort
Deadlocks
• Consider the partial schedule:
• Neither T3 nor T4 can make progress — executing lock-S(B) causes T4 to wait for T3 to
release its lock on B, while executing lock-X(A) causes T3 to wait for T4 to release its lock on
A.
• Such a situation is called a deadlock.
• To handle a deadlock one of T3 or T4 must be rolled back
and its locks released.
Deadlocks (Cont.)
• Two-phase locking does not ensure freedom from deadlocks.
• In addition to deadlocks, there is a possibility of starvation.
• Starvation occurs if the concurrency control manager is badly
designed. For example:
• A transaction may be waiting for an X-lock on an item, while a sequence of other
transactions request and are granted an S-lock on the same item.
• The same transaction is repeatedly rolled back due to deadlocks.
• Concurrency control manager can be designed to prevent starvation.
Deadlocks (Cont.)
• The potential for deadlock exists in most locking protocols. Deadlocks
are a necessary evil.
• When a deadlock occurs there is a possibility of cascading roll-backs.
• Cascading roll-back is possible under two-phase locking. To avoid this,
follow a modified protocol called strict two-phase locking -- a
transaction must hold all its exclusive locks till it commits/aborts.
• Rigorous two-phase locking is even stricter. Here, all locks are held till
commit/abort. In this protocol transactions can be serialized in the
order in which they commit.
Implementation of Locking
• A lock manager can be implemented as a separate process to which
transactions send lock and unlock requests
• The lock manager replies to a lock request by sending a lock grant
messages (or a message asking the transaction to roll back, in case of a
deadlock)
• The requesting transaction waits until its request is answered
• The lock manager maintains a data-structure called a lock table to
record granted locks and pending requests
• The lock table is usually implemented as an in-memory hash table
indexed on the name of the data item being locked
Lock Table
• Dark blue rectangles indicate granted locks; light
blue indicate waiting requests
• Lock table also records the type of lock granted
or requested
• New request is added to the end of the queue
of requests for the data item, and granted if it is
compatible with all earlier locks
• Unlock requests result in the request being
deleted, and later requests are checked to see if
they can now be granted
• If transaction aborts, all waiting or granted
requests of the transaction are deleted
• lock manager may keep a list of locks held
by each transaction, to implement this
efficiently
Deadlock Handling
• System is deadlocked if there is a set of transactions such that every
transaction in the set is waiting for another transaction in the set.
• Deadlock prevention protocols ensure that the system will never enter
into a deadlock state. Some prevention strategies :
• Require that each transaction locks all its data items before it begins execution
(predeclaration).
• Impose partial ordering of all data items and require that a transaction can lock data
items only in the order specified by the partial order.
More Deadlock Prevention Strategies
• Following schemes use transaction timestamps for the sake of deadlock
prevention alone.
• wait-die scheme — non-preemptive
• an older transaction may wait for a younger one to release a data item (older means
smaller timestamp). Younger transactions never wait for older ones; they are rolled
back instead.
• a transaction may die several times before acquiring the needed data item
• wound-wait scheme — preemptive
• older transaction wounds (forces rollback) of younger transaction instead of waiting for it.
Younger transactions may wait for older ones.
• may be fewer rollbacks than wait-die scheme.
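Both schemes reduce to a small timestamp comparison (smaller timestamp = older); the function names are this example's own:

```python
# Deadlock-prevention decisions when a requester finds an item locked.
def wait_die(requester_ts, holder_ts):
    # Non-preemptive: an older requester waits; a younger requester "dies".
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    # Preemptive: an older requester "wounds" (rolls back) the younger
    # holder; a younger requester waits.
    return "wound holder" if requester_ts < holder_ts else "wait"

print(wait_die(5, 9), wait_die(9, 5))       # wait die
print(wound_wait(5, 9), wound_wait(9, 5))   # wound holder wait
```

In both schemes the older transaction always makes progress, which is why a rolled-back transaction restarted with its original timestamp cannot starve.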
Deadlock prevention (Cont.)
• Both in the wait-die and the wound-wait schemes, a rolled-back transaction is
restarted with its original timestamp. Older transactions thus have precedence
over newer ones, and starvation is hence avoided.
• Timeout-Based Schemes:
• a transaction waits for a lock only for a specified amount of time. If the lock has not been
granted within that time, the transaction is rolled back and restarted,
• Thus, deadlocks are not possible
• simple to implement; but starvation is possible. Also difficult to determine good value of
the timeout interval.
Deadlock Detection
• Deadlocks can be described as a wait-for graph, which consists of a pair G =
(V,E),
• V is a set of vertices (all the transactions in the system)
• E is a set of edges; each element is an ordered pair Ti Tj.
• If Ti  Tj is in E, then there is a directed edge from Ti to Tj, implying that Ti is
waiting for Tj to release a data item.
• When Ti requests a data item currently being held by Tj, then the edge Ti  Tj
is inserted in the wait-for graph. This edge is removed only when Tj is no longer
holding a data item needed by Ti.
• The system is in a deadlock state if and only if the wait-for graph has a cycle.
Must invoke a deadlock-detection algorithm periodically to look for cycles.
Deadlock Detection (Cont.)

Wait-for graph without a cycle Wait-for graph with a cycle


Deadlock Recovery
• When a deadlock is detected:
• Some transaction will have to be rolled back (made a victim) to break the deadlock.
Select as victim the transaction that will incur minimum cost.
• Rollback -- determine how far to roll back transaction
• Total rollback: Abort the transaction and then restart it.
• More effective to roll back transaction only as far as necessary to break deadlock.
• Starvation happens if same transaction is always chosen as victim. Include the number
of rollbacks in the cost factor to avoid starvation
Multiple Granularity
• Allow data items to be of various sizes and define a hierarchy of data
granularities, where the small granularities are nested within larger ones
• Can be represented graphically as a tree.
• When a transaction locks a node in the tree explicitly, it implicitly locks all the
node's descendents in the same mode.
• Granularity of locking (level in tree where locking is done):
• fine granularity (lower in tree): high concurrency, high locking overhead
• coarse granularity (higher in tree): low locking overhead, low concurrency
Example of Granularity Hierarchy
The levels, starting from the coarsest (top) level, are:
• database
• area
• file
• record
Intention Lock Modes
• In addition to S and X lock modes, there are three additional lock modes with
multiple granularity:
• intention-shared (IS): indicates explicit locking at a lower level of the tree but only with
shared locks.
• intention-exclusive (IX): indicates explicit locking at a lower level with exclusive or shared
locks
• shared and intention-exclusive (SIX): the subtree rooted by that node is locked explicitly in
shared mode and explicit locking is being done at a lower level with exclusive-mode locks.
• intention locks allow a higher level node to be locked in S or X mode without
having to check all descendent nodes.
Compatibility Matrix with Intention Lock Modes
• The compatibility matrix for all lock modes is:

         IS    IX    S     SIX   X
   IS    ✓     ✓     ✓     ✓     ✗
   IX    ✓     ✓     ✗     ✗     ✗
   S     ✓     ✗     ✓     ✗     ✗
   SIX   ✓     ✗     ✗     ✗     ✗
   X     ✗     ✗     ✗     ✗     ✗
Multiple Granularity Locking Scheme
• Transaction Ti can lock a node Q, using the following rules:
1. The lock compatibility matrix must be observed.
2. The root of the tree must be locked first, and may be locked in any mode.
3. A node Q can be locked by Ti in S or IS mode only if the parent of Q is currently locked by Ti
in either IX or IS mode.
4. A node Q can be locked by Ti in X, SIX, or IX mode only if the parent of Q is currently locked
by Ti in either IX or SIX mode.
5. Ti can lock a node only if it has not previously unlocked any node (that is, Ti is two-phase).
6. Ti can unlock a node Q only if none of the children of Q are currently locked by Ti.
• Observe that locks are acquired in root-to-leaf order, whereas they are released in
leaf-to-root order.
• Lock granularity escalation: in case there are too many locks at a particular level,
switch to higher granularity S or X lock
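The intention-lock compatibility matrix and rules 3 and 4 of the locking scheme above can be sketched together; the dictionary encoding is this example's own:

```python
# Which lock modes a new request is compatible with, per held mode
# (intention-lock compatibility matrix).
COMPAT = {
    "IS":  {"IS", "IX", "S", "SIX"},
    "IX":  {"IS", "IX"},
    "S":   {"IS", "S"},
    "SIX": {"IS"},
    "X":   set(),
}

def compatible(held, requested):
    return requested in COMPAT[held]

def parent_ok(child_mode, parent_mode):
    """Rules 3 and 4: what lock the parent node must hold."""
    if child_mode in ("S", "IS"):              # rule 3: S/IS child
        return parent_mode in ("IX", "IS")
    return parent_mode in ("IX", "SIX")        # rule 4: X/SIX/IX child

assert compatible("IS", "IX") and not compatible("X", "IS")
assert parent_ok("S", "IS") and parent_ok("X", "IX")
assert not parent_ok("X", "IS")   # an exclusive child needs an IX/SIX parent
```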
APEX INSTITUTE OF TECHNOLOGY
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

Database Management System (22CSH-243)
Faculty: Ms. Shaveta Jain (13464)

Database Recovery
Database Recovery of database
Focusing on Recovery:
All database reads/writes are within a transaction
• Transactions have the “ACID” properties
• Atomicity - all or nothing
• Consistency - preserves database integrity
• Isolation - execute as if they were run alone
• Durability - results aren’t lost by a failure
• Recovery subsystem guarantees A & D
• Concurrency control guarantees I
• Application program guarantees C
Database Recovery of database
What Makes Transaction Processing (TP) Hard?
• Atomicity - no partial results
• Isolation – appears to run transactions serially
• Durability – results aren’t lost by a failure
• Availability - system must be up all the time
• Reliability - system should rarely fail
• Response time - within 1-2 seconds
• Throughput - thousands of transactions/second
• Scalability - start small, ramp up to Internet-scale
• Configurability - for above requirements + low cost
• Distribution - of users and data
Database Recovery of database
(Figure: transactions T1, T2 and T3 running along a timeline, interrupted by a
system crash. Possible failures: transaction error, system error, local error,
disk failure, catastrophe.)
• ACID properties of transactions — the database system should guarantee:
- Durability: changes applied by transactions must not be lost (~ T3).
- Atomicity: transactions can be aborted (~ T1, T2).
Database Recovery of database
(Figure: transactions T1, T2 and T3 on a timeline, with a checkpoint taken
before a crash.)
• System log – keeps information about the changes applied by transactions.
• Undo/Redo via the log → recovers from a non-catastrophic failure.
• Full DB backup, plus differential backups and the (transaction) log →
recovers from a catastrophic failure.
Database Recovery
Database recovery is the process of restoring the database to the most recent
consistent state that existed just before the failure.
 Three states of database recovery:
• Pre-condition: At any given point in time the database is in a consistent
state.
• Condition: Occurs some kind of system failure.
• Post-condition: Restore the database to the consistent state that existed
before the failure
• Example:
• If the system crashes before a fund transfer transaction completes its
execution, then either one or both accounts may have incorrect value. Thus,
the database must be restored to the state before the transaction modified
any of the accounts.
Outline
Databases Recovery
1 Purpose of Database Recovery
2 Types of Failure
3 Transaction Log
4 Data Updates
5 Data Caching
6 Transaction Roll-back (Undo) and Roll-Forward
7 Checkpointing
8 Recovery schemes
9 ARIES Recovery Scheme
10 Recovery in Multidatabase System

Chapter 19-9
Database Recovery
1 Purpose of Database Recovery
• To bring the database into the last consistent state, which existed prior to the failure.
• To preserve transaction properties (Atomicity, Consistency, Isolation and Durability).
Example: If the system crashes before a fund transfer transaction
completes its execution, then either one or both accounts may have
incorrect values. Thus, the database must be restored to the state
before the transaction modified any of the accounts.
Database Recovery

2 Types of Failure
The database may become unavailable for use due to
• Transaction failure: Transactions may fail because of
incorrect input, deadlock, incorrect synchronization.
• System failure: System may fail because of addressing
error, application error, operating system fault, RAM
failure, etc.
• Media failure: Disk head crash, power disruption, etc.

Database Recovery
3 Transaction Log
For recovery from any type of failure, the data value prior to
modification (BFIM – BeFore IMage) and the new value after
modification (AFIM – AFter IMage) are required. These values and
other information are stored in a sequential file called the transaction
log. A sample log is given below. Back P and Next P point to the
previous and next log records of the same transaction.

T ID  Back P  Next P  Operation  Data item  BFIM      AFIM
T1    0       1       Begin
T1    1       4       Write      X          X = 100   X = 200
T2    0       8       Begin
T1    2       5       W          Y          Y = 50    Y = 100
T1    4       7       R          M          M = 200   M = 200
T3    0       9       R          N          N = 400   N = 400
T1    5       nil     End
Database Recovery
4 Data Update
• Immediate Update: As soon as a data item is modified in
cache, the disk copy is updated.
• Deferred Update: All modified data items in the cache is
written either after a transaction ends its execution or
after a fixed number of transactions have completed their
execution.
• Shadow update: The modified version of a data item
does not overwrite its disk copy but is written at a
separate disk location.
• In-place update: The disk version of the data item is
overwritten by the cache version.

Chapter 19-13
Database Recovery
5 Data Caching
Data items to be modified are first stored into database
cache by the Cache Manager (CM) and after
modification they are flushed (written) to the disk. The
flushing is controlled by Modified and Pin-Unpin bits.
Pin-Unpin: Instructs the operating system not to flush
the data item.
Modified: Indicates the AFIM of the data item.

Chapter 19-14
Database Recovery
6 Transaction Roll-back (Undo) and Roll-Forward (Redo)
To maintain atomicity, a transaction’s operations are redone
or undone.
Undo: Restore all BFIMs on to disk (Remove all AFIMs).
Redo: Restore all AFIMs on to disk.
Database recovery is achieved either by performing only
Undos or only Redos or by a combination of the two. These
operations are recorded in the log as they happen.

Chapter 19-15
Database Recovery
Roll-back

We show the process of roll-back with the help of the following three transactions T1, and T2 and T3.

T1 T2 T3
read_item (A) read_item (B) read_item (C)
read_item (D) write_item (B) write_item (B)
write_item (D) read_item (D) read_item (A)
write_item (A) write_item (A)

Chapter 19-16
Database Recovery
Roll-back: One execution of T1, T2 and T3 as recorded in the log.
A B C D
30 15 40 20

[start_transaction, T3]
[read_item, T3, C]
* [write_item, T3, B, 15, 12] 12
[start_transaction,T2]
[read_item, T2, B]
** [write_item, T2, B, 12, 18] 18
[start_transaction,T1]
[read_item, T1, A]
[read_item, T1, D]
[write_item, T1, D, 20, 25] 25
[read_item, T2, D]
** [write_item, T2, D, 25, 26] 26
[read_item, T3, A]
---- system crash ----
* T3 is rolled back because it did not reach its commit point.
** T2 is rolled back because it reads the value of item B written by T3.
Chapter 19-17
Database Recovery
Roll-back: One execution of T1, T2 and T3 as recorded in the log.

T3 READ(C) WRITE(B) READ(A)


BEGIN READ(B) WRITE(B) READ(D) WRITE(D)
T2
BEGIN READ(A) READ(D) WRITE(D)
T1
BEGIN
Time
system crash

Illustrating cascading roll-back

Chapter 19-18
Database Recovery
Write-Ahead Logging
When in-place update (immediate or deferred) is used then
log is necessary for recovery and it must be available to
recovery manager. This is achieved by Write-Ahead Logging
(WAL) protocol. WAL states that
For Undo: Before a data item’s AFIM is flushed to the
database disk (overwriting the BFIM) its BFIM must be written
to the log and the log must be saved on a stable store (log
disk).
For Redo: Before a transaction executes its commit operation,
all its AFIMs must be written to the log and the log must be
saved on a stable store.
Chapter 19-19
Database Recovery
7 Checkpointing
Time to time (randomly or under some criteria) the database
flushes its buffer to database disk to minimize the task of recovery.
The following steps defines a checkpoint operation:
1. Suspend execution of transactions temporarily.
2. Force write modified buffer data to disk.
3. Write a [checkpoint] record to the log, save the log to disk.
4. Resume normal transaction execution.
During recovery redo or undo is required to transactions appearing
after [checkpoint] record.

Chapter 19-20
References
• RamezElmasri and Shamkant B. Navathe, “Fundamentals of Database System”, The
Benjamin / Cummings Publishing Co.
• Korth and Silberschatz Abraham, “Database System Concepts”, McGraw Hall.
• C.J.Date, “An Introduction to Database Systems”, Addison Wesley.
• Thomas M. Connolly, Carolyn & E. Begg, “Database Systems: A Practical Approach to
Design, Implementation and Management”, 5/E, University of Paisley, Addison-
Wesley.

21
THANK YOU

For queries
Email:[email protected]
22
APEX INSTITUTE OF TECHNOLOGY
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

Database Management System (22CSH-243)


Faculty: Ms. Shaveta Jain (13464)

Database Recovery Techniques DISCOVER . LEARN . EMPOWER


1
DBMS: Course Objectives
COURSE OBJECTIVES
The Course aims to:
• Understand database system concepts and design databases for different
applications and to acquire the knowledge on DBMS and RDBMS.
• Implement and understand different types of DDL, DML and DCL statements.
• Understand transaction concepts related to databases and recovery/backup
techniques required for the proper storage of data.

2
COURSE OUTCOMES

On completion of this course, the students shall be able to:-

CO5 Understand the concept of transaction processing and concurrency control

3
Outline
Databases Recovery
1 Purpose of Database Recovery
2 Types of Failure
3 Transaction Log
4 Data Updates
5 Data Caching
6 Transaction Roll-back (Undo) and Roll-Forward
7 Checkpointing
8 Recovery schemes
9 ARIES Recovery Scheme
10 Recovery in Multidatabase System
Chapter 19-4
Database Recovery

1 Purpose of Database Recovery


• To bring the database into the last consistent state, which existed prior to the failure.
• To preserve transaction properties (Atomicity, Consistency, Isolation and Durability).

Example: If the system crashes before a fund transfer


transaction completes its execution, then either one or
both accounts may have incorrect values. Thus, the
database must be restored to the state before the
transaction modified any of the accounts.

Chapter 19-5
Database Recovery

2 Types of Failure
The database may become unavailable for use due to
• Transaction failure: Transactions may fail because of
incorrect input, deadlock, incorrect synchronization.
• System failure: System may fail because of addressing
error, application error, operating system fault, RAM
failure, etc.
• Media failure: Disk head crash, power disruption, etc.

Chapter 19-6
Database Recovery
3 Transaction Log
For recovery from any type of failure, the data value prior to
modification (BFIM - BeFore Image) and the new value after
modification (AFIM - AFter Image) are required. These values and
other information are stored in a sequential file called the
Transaction log. A sample log is given below. Back P and Next P
point to the previous and next log records of the same transaction.

T ID   Back P   Next P   Operation   Data item   BFIM      AFIM
T1     0        1        Begin
T1     1        4        Write       X           X = 100   X = 200
T2     0        8        Begin
T1     2        5        Write       Y           Y = 50    Y = 100
T1     4        7        Read        M           M = 200   M = 200
T3     0        9        Read        N           N = 400   N = 400
T1     5        nil      End

Chapter 19-7
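The log structure above can be modelled directly in code. A minimal sketch (the class and field names below are invented for illustration, not part of any real DBMS):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogRecord:
    """One entry of a sequential transaction log."""
    tid: str                      # transaction ID, e.g. "T1"
    op: str                       # "begin", "read", "write", "end"
    item: Optional[str] = None    # data item name
    bfim: Optional[int] = None    # BeFore IMage: value prior to modification
    afim: Optional[int] = None    # AFter IMage: value after modification

# The sample log from the slide, stored as an append-only sequence.
log = [
    LogRecord("T1", "begin"),
    LogRecord("T1", "write", "X", bfim=100, afim=200),
    LogRecord("T2", "begin"),
    LogRecord("T1", "write", "Y", bfim=50, afim=100),
    LogRecord("T1", "read", "M", bfim=200, afim=200),
    LogRecord("T3", "read", "N", bfim=400, afim=400),
    LogRecord("T1", "end"),
]

def records_of(tid, log):
    """All records of one transaction (what the Back P / Next P chain links)."""
    return [r for r in log if r.tid == tid]
```

In a real log the per-transaction records are linked by the Back P / Next P pointers rather than found by scanning, but the effect is the same.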
Database Recovery
4 Data Update
• Immediate Update: As soon as a data item is modified in
cache, the disk copy is updated.
• Deferred Update: All modified data items in the cache are
written either after a transaction ends its execution or
after a fixed number of transactions have completed their
execution.
• Shadow update: The modified version of a data item
does not overwrite its disk copy but is written at a
separate disk location.
• In-place update: The disk version of the data item is
overwritten by the cache version.

Chapter 19-8
Database Recovery
5 Data Caching
Data items to be modified are first stored into database
cache by the Cache Manager (CM) and after
modification they are flushed (written) to the disk. The
flushing is controlled by Modified and Pin-Unpin bits.
Pin-Unpin: Instructs the operating system not to flush
the data item.
Modified: Indicates that the buffer holds an updated value (the AFIM) of the data item.

Chapter 19-9
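The two control bits can be sketched as follows (names are assumed for illustration):

```python
class CachedItem:
    """Cache Manager bookkeeping for one buffered data item (a sketch)."""
    def __init__(self, value):
        self.value = value
        self.modified = False   # set once the buffer holds an AFIM
        self.pinned = False     # pin-unpin bit: a pinned item must not be flushed

def can_flush(item):
    """Only modified, unpinned items are candidates for flushing to disk."""
    return item.modified and not item.pinned
```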
Database Recovery
6 Transaction Roll-back (Undo) and Roll-Forward (Redo)
To maintain atomicity, a transaction’s operations are redone
or undone.
Undo: Restore all BFIMs on to disk (Remove all AFIMs).
Redo: Restore all AFIMs on to disk.
Database recovery is achieved either by performing only
Undos or only Redos or by a combination of the two. These
operations are recorded in the log as they happen.

Chapter 19-10
Database Recovery
Roll-back

We show the process of roll-back with the help of the following three transactions T1, T2 and T3.

T1               T2               T3
read_item (A)    read_item (B)    read_item (C)
read_item (D)    write_item (B)   write_item (B)
write_item (D)   read_item (D)    read_item (A)
                 write_item (D)   write_item (A)

Chapter 19-11
Database Recovery
Roll-back: One execution of T1, T2 and T3 as recorded in the log.
A B C D
30 15 40 20

[start_transaction, T3]
[read_item, T3, C]
* [write_item, T3, B, 15, 12] 12
[start_transaction,T2]
[read_item, T2, B]
** [write_item, T2, B, 12, 18] 18
[start_transaction,T1]
[read_item, T1, A]
[read_item, T1, D]
[write_item, T1, D, 20, 25] 25
[read_item, T2, D]
** [write_item, T2, D, 25, 26] 26
[read_item, T3, A]
---- system crash ----
* T3 is rolled back because it did not reach its commit point.
** T2 is rolled back because it reads the value of item B written by T3.
Chapter 19-12
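The roll-back decision above can be computed mechanically from the log. A toy sketch, not a real recovery manager; to make the cascade visible, this variant lets T2 reach a commit record, and T2 must still be rolled back because it read the value of B written by the uncommitted T3:

```python
def transactions_to_roll_back(log):
    """Which transactions must be rolled back after a crash: those with no
    commit record, plus (cascading) any transaction that read an item whose
    most recent writer is itself being rolled back.  Log entries are
    (op, tid, item) tuples in execution order."""
    started = {tid for op, tid, _ in log if op == "start"}
    committed = {tid for op, tid, _ in log if op == "commit"}
    rollback = started - committed            # did not reach the commit point
    changed = True
    while changed:                            # propagate cascades to a fixpoint
        changed = False
        last_writer = {}                      # item -> most recent writer so far
        for op, tid, item in log:
            if op == "write":
                last_writer[item] = tid
            elif op == "read":
                writer = last_writer.get(item)
                if writer in rollback and tid not in rollback:
                    rollback.add(tid)         # tid consumed a dirty value
                    changed = True
    return rollback

# The schedule from the slide, with one change: here T2 manages to commit,
# yet it is still rolled back because of its dirty read of B from T3.
schedule = [
    ("start", "T3", None), ("read", "T3", "C"), ("write", "T3", "B"),
    ("start", "T2", None), ("read", "T2", "B"), ("write", "T2", "B"),
    ("start", "T1", None), ("read", "T1", "A"), ("read", "T1", "D"),
    ("write", "T1", "D"), ("read", "T2", "D"), ("write", "T2", "D"),
    ("commit", "T2", None), ("read", "T3", "A"),
]
```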
Database Recovery
Roll-back: One execution of T1, T2 and T3 as recorded in the log.

T3: BEGIN - READ(C) - WRITE(B) - ..................... READ(A)
T2:         BEGIN - READ(B) - WRITE(B) - READ(D) - WRITE(D)
T1:                  BEGIN - READ(A) - READ(D) - WRITE(D)
    (time increases to the right; the system crashes before any
     transaction commits)

Illustrating cascading roll-back

Chapter 19-13
Database Recovery
Write-Ahead Logging
When in-place updating (immediate or deferred) is used, a log is
necessary for recovery and it must be available to the recovery
manager. This is achieved by the Write-Ahead Logging (WAL)
protocol. WAL states that:
For Undo: Before a data item’s AFIM is flushed to the
database disk (overwriting the BFIM) its BFIM must be written
to the log and the log must be saved on a stable store (log
disk).
For Redo: Before a transaction executes its commit operation,
all its AFIMs must be written to the log and the log must be
saved on a stable store.
Chapter 19-14
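The two WAL rules can be enforced by a buffer manager that refuses illegal operations. A toy sketch (class and method names are made up for illustration):

```python
class WALViolation(Exception):
    pass

class BufferManager:
    """Toy enforcement of the WAL rules.
    Undo rule: a BFIM must be on the stable log before its disk copy is
    overwritten.  Redo rule: all of a transaction's AFIMs must be on the
    stable log before the transaction commits."""
    def __init__(self):
        self.log_tail = []     # log records still in main memory
        self.stable_log = []   # log records forced to the log disk
        self.disk = {}         # database disk: item -> value

    def write(self, tid, item, bfim, afim):
        self.log_tail.append((tid, item, bfim, afim))

    def force_log(self):
        self.stable_log.extend(self.log_tail)
        self.log_tail.clear()

    def flush(self, tid, item, afim):
        # Undo rule: refuse to overwrite the BFIM before its record is stable.
        if not any(t == tid and i == item for t, i, _, _ in self.stable_log):
            raise WALViolation("BFIM not on stable log")
        self.disk[item] = afim

    def commit(self, tid):
        # Redo rule: refuse to commit while AFIMs are only in the log tail.
        if any(t == tid for t, _, _, _ in self.log_tail):
            raise WALViolation("AFIMs not on stable log")
```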
Database Recovery
7 Checkpointing
From time to time (randomly or under some criteria) the database
flushes its buffers to the database disk to minimize the task of recovery.
The following steps define a checkpoint operation:
1. Suspend execution of transactions temporarily.
2. Force-write modified buffer data to disk.
3. Write a [checkpoint] record to the log, and save the log to disk.
4. Resume normal transaction execution.
During recovery, redo or undo is required only for transactions
appearing after the [checkpoint] record.

Chapter 19-15
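The benefit of a checkpoint can be shown in one small function (a sketch, with log records represented as plain strings):

```python
def records_to_examine(log):
    """After a crash, recovery only needs the log records written after the
    most recent [checkpoint]; earlier updates were force-written to disk
    when the checkpoint was taken."""
    last = -1
    for i, rec in enumerate(log):
        if rec == "[checkpoint]":
            last = i
    return log[last + 1:]
```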
Database Recovery
Steal/No-Steal and Force/No-Force
Possible ways for flushing the database cache to the database disk:
Steal: A cache page can be flushed before the transaction commits.
No-Steal: A cache page cannot be flushed before the transaction commits.
Force: Modified pages are flushed (forced) to disk when the transaction commits.
No-Force: Flushing is deferred until after the transaction commits.
These give rise to four different ways of handling recovery:
Steal/No-Force (Undo/Redo), Steal/Force (Undo/No-redo),
No-Steal/No-Force (Redo/No-undo) and No-Steal/Force (No-undo/No-redo).

Chapter 19-16
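The mapping from buffer policy to recovery work is mechanical, and can be stated as a tiny function (an illustrative sketch):

```python
def recovery_requirements(steal: bool, force: bool) -> dict:
    """Derive the recovery work implied by a buffer-management policy.
    Steal    -> uncommitted AFIMs may already be on disk -> undo is needed.
    No-Force -> committed AFIMs may be missing from disk -> redo is needed."""
    return {"undo": steal, "redo": not force}
```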
Database Recovery
8 Recovery Scheme
Deferred Update (No Undo/Redo)
The data update goes as follows:
1. A set of transactions records their updates in the log.
2. At commit point under WAL scheme these updates are
saved on database disk.
After reboot from a failure the log is used to redo all the
transactions affected by this failure. No undo is required
because no AFIM is flushed to the disk before a transaction
commits.
Chapter 19-17
Database Recovery
Deferred Update in a single-user system
There is no concurrent data sharing in a single user
system. The data update goes as follows:
1. A set of transactions records their updates in the log.
2. At commit point under WAL scheme these updates are
saved on database disk.
After reboot from a failure the log is used to redo all the
transactions affected by this failure. No undo is required
because no AFIM is flushed to the disk before a transaction
commits.

Chapter 19-18
Database Recovery
Deferred Update in a single-user system
(a) T1               T2
    read_item (A)    read_item (B)
    read_item (D)    write_item (B)
    write_item (D)   read_item (D)
                     write_item (D)

(b) [start_transaction, T1]
    [write_item, T1, D, 20]
    [commit, T1]
    [start_transaction, T2]
    [write_item, T2, B, 10]
    [write_item, T2, D, 25]  <- system crash

The [write_item, …] operations of T1 are redone. Chapter 19-19


Database Recovery
Deferred Update with concurrent users
This environment requires some concurrency control mechanism to
guarantee the isolation property of transactions. In a system recovery,
transactions which were recorded in the log after the last checkpoint
are redone. The recovery manager may scan some of the transactions
recorded before the checkpoint to get the AFIMs.

[Figure: transactions T1-T5 executing over time, with a checkpoint at
time t1 and a system crash at a later time t2.]

Recovery in a concurrent users environment.


Chapter 19-20
Database Recovery
Deferred Update with concurrent users
(a) T1               T2               T3               T4
    read_item (A)    read_item (B)    read_item (A)    read_item (B)
    read_item (D)    write_item (B)   write_item (A)   write_item (B)
    write_item (D)   read_item (D)    read_item (C)    read_item (A)
                     write_item (D)   write_item (C)   write_item (A)
(b) [start_transaction, T1]
[write_item, T1, D, 20]
[commit, T1]
[checkpoint]
[start_transaction, T4]
[write_item, T4, B, 15]
[write_item, T4, A, 20]
[commit, T4]
[start_transaction T2]
[write_item, T2, B, 12]
[start_transaction, T3]
[write_item, T3, A, 30]
[write_item, T2, D, 25]  system crash
Chapter 19-21
Database Recovery
Deferred Update with concurrent users
Two tables are required for implementing this protocol:

Active table: All active transactions are entered in this table.


Commit table: Transactions to be committed are entered in this table.

During recovery, all transactions in the commit table are redone and all
transactions in the active table are ignored, since none of their AFIMs
reached the database. It is possible that a commit-table transaction may
be redone twice, but this does not create any inconsistency because redo
is "idempotent"; that is, one redo of an AFIM is equivalent to multiple
redos of the same AFIM.
Chapter 19-22
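The NO-UNDO/REDO scheme and the idempotence of redo can be sketched as follows (a teaching toy, assuming log records of the simplified form shown in the comments):

```python
def no_undo_redo_recover(log, disk):
    """Deferred-update (NO-UNDO/REDO) recovery sketch: redo only the writes
    of committed transactions; active transactions are ignored because none
    of their AFIMs reached the database.  Records are ("write", tid, item,
    afim) or ("commit", tid)."""
    committed = {r[1] for r in log if r[0] == "commit"}
    for r in log:
        if r[0] == "write" and r[1] in committed:
            disk[r[2]] = r[3]          # redo: install the AFIM
    return disk

# Log shaped like the single-user example: T1 committed, T2 did not.
crash_log = [("write", "T1", "D", 20), ("commit", "T1"),
             ("write", "T2", "B", 10), ("write", "T2", "D", 25)]
```

Running recovery a second time over the same log leaves the disk unchanged, which is exactly the idempotence property the slide relies on.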
Database Recovery
Recovery Techniques Based on Immediate Update

Undo/No-redo Algorithm
In this algorithm the AFIMs of a transaction are flushed to the database disk
under WAL before it commits. For this reason the recovery manager undoes
all uncommitted transactions during recovery; no transaction is redone. It is
possible that a transaction had completed execution and was ready to
commit, but such a transaction is also undone.

Chapter 19-23
Database Recovery
Recovery Techniques Based on Immediate Update

Undo/Redo Algorithm (Single-user environment)


Recovery schemes of this category apply undo and also redo for
recovery. In a single-user environment no concurrency control is
required but a log is maintained under WAL. Note that at any time there
will be one transaction in the system and it will be either in the commit
table or in the active table. The recovery manager performs:

1. Undo of a transaction if it is in the active table.


2. Redo of a transaction if it is in the commit table.

Chapter 19-24
Database Recovery
Recovery Techniques Based on Immediate Update
Undo/Redo Algorithm (Concurrent execution)
Recovery schemes of this category apply undo and also redo to recover
the database from failure. In a concurrent execution environment a
concurrency control mechanism is required and a log is maintained under WAL.
Commit table records transactions to be committed and active table
records active transactions. To minimize the work of the recovery
manager checkpointing is used. The recovery performs:

1. Undo of a transaction if it is in the active table.


2. Redo of a transaction if it is in the commit table.

Chapter 19-25
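The two steps of the UNDO/REDO algorithm can be caricatured in a few lines (an illustrative sketch, assuming the simplified record shapes in the comments; real systems undo by following per-transaction log chains backwards):

```python
def undo_redo_recover(log, disk):
    """Immediate-update (UNDO/REDO) sketch: redo transactions found in the
    commit table, undo those still in the active table, scanning the log
    backwards for the undo pass.  Records are ("write", tid, item, bfim,
    afim) or ("commit", tid)."""
    committed = {r[1] for r in log if r[0] == "commit"}
    writes = [r for r in log if r[0] == "write"]
    for _, tid, item, _, afim in writes:            # forward pass: redo
        if tid in committed:
            disk[item] = afim
    for _, tid, item, bfim, _ in reversed(writes):  # backward pass: undo
        if tid not in committed:
            disk[item] = bfim
    return disk
```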
Database Recovery
Shadow Paging
The AFIM does not overwrite its BFIM but is recorded at another place on
the disk. Thus, at any time a data item has its AFIM and its BFIM (the
shadow copy of the data item) at two different places on the disk.

[Figure: a database holding both copies of each item at different
disk locations.]
X and Y: shadow copies of the data items
X' and Y': current copies of the data items
Chapter 19-26
Database Recovery
Shadow Paging
To manage access of data items by concurrent transactions, two
directories (current and shadow) are used. The directory arrangement
is illustrated below; here a page is a data item.

Current Directory                 Shadow Directory
(after updating pages 2, 5)       (not updated)
1 -> Page 1                       1 -> Page 1
2 -> Page 2 (new)                 2 -> Page 2 (old)
3 -> Page 3                       3 -> Page 3
4 -> Page 4                       4 -> Page 4
5 -> Page 5 (new)                 5 -> Page 5 (old)
6 -> Page 6                       6 -> Page 6

Chapter 19-27
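The directory mechanism can be sketched in code (class and method names are invented; commit in a real system atomically swaps a directory pointer on disk):

```python
class ShadowPages:
    """Shadow-paging sketch: writes never overwrite a page in place; the
    current directory points at new page copies while the shadow directory
    keeps pointing at the old ones, so recovery is simply 'reinstate the
    shadow directory' (no undo and no redo)."""
    def __init__(self, pages):
        self.pages = dict(pages)             # page store: page_id -> contents
        self.shadow = {k: k for k in pages}  # directory: logical -> physical
        self.current = dict(self.shadow)
        self._next = 0

    def write(self, logical, value):
        self._next += 1
        new_id = f"{logical}'{self._next}"   # fresh physical page
        self.pages[new_id] = value
        self.current[logical] = new_id       # only the current directory moves

    def read(self, logical):
        return self.pages[self.current[logical]]

    def abort(self):
        self.current = dict(self.shadow)     # recovery: drop the new versions

    def commit(self):
        self.shadow = dict(self.current)     # swap directories
```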
Database Recovery
9 The ARIES Recovery Algorithm
The ARIES Recovery Algorithm is based on:
1. WAL (Write Ahead Logging)
2. Repeating history during redo: ARIES will retrace all actions of
the database system prior to the crash to reconstruct the database
state when the crash occurred.
3. Logging changes during undo: It will prevent ARIES from
repeating the completed undo operations if a failure occurs
during recovery, which causes a restart of the recovery process.

Chapter 19-28
Database Recovery
The ARIES Recovery Algorithm
The ARIES recovery algorithm consists of three steps:
1. Analysis: identifies the dirty (updated) pages in the buffer
and the set of transactions active at the time of the crash. The
appropriate point in the log where redo is to start is also
determined.
2. Redo: necessary redo operations are applied.
3. Undo: log is scanned backwards and the operations of
transactions active at the time of crash are undone in reverse
order.
Chapter 19-29
Database Recovery
The ARIES Recovery Algorithm
The Log and Log Sequence Number (LSN)
A log record is written for (a) data update, (b) transaction commit,
(c) transaction abort, (d) undo, and (e) transaction end. In the case
of undo a compensating log record is written.

A unique LSN is associated with every log record. LSN increases


monotonically and indicates the disk address of the log record it is
associated with. In addition, each data page stores the LSN of the
latest log record corresponding to a change for that page.

A log record stores (a) the previous LSN of that transaction, (b) the
transaction ID, and (c) the type of log record.
Chapter 19-30
Database Recovery
The ARIES Recovery Algorithm
The Log and Log Sequence Number (LSN)
A log record stores:
1. Previous LSN of that transaction: It links the log record of each
transaction. It is like a back pointer points to the previous
record of the same transaction.
2. Transaction ID
3. Type of log record.
For a write operation the following additional information is logged:
4. Page ID for the page that includes the item
5. Length of the updated item
6. Its offset from the beginning of the page
7. BFIM of the item
8. AFIM of the item Chapter 19-31
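The record layout and the prev-LSN back-chain can be sketched as follows (field names are illustrative; the records mirror the worked example later in this chapter):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AriesLogRecord:
    """An ARIES-style log record (illustrative subset of fields)."""
    lsn: int                        # monotonically increasing sequence number
    prev_lsn: Optional[int]         # back-pointer to this transaction's
                                    # previous log record (None for the first)
    tid: str
    kind: str                       # "update", "commit", "abort", "CLR", "end"
    page_id: Optional[str] = None   # for updates: page containing the item
    bfim: Optional[str] = None
    afim: Optional[str] = None

def backchain(records, lsn):
    """Walk one transaction's records backwards via prev_lsn."""
    by_lsn = {r.lsn: r for r in records}
    out = []
    while lsn is not None:
        r = by_lsn[lsn]
        out.append(r.lsn)
        lsn = r.prev_lsn
    return out

recs = [AriesLogRecord(1, None, "T1", "update", "C"),
        AriesLogRecord(2, None, "T2", "update", "B"),
        AriesLogRecord(3, 1, "T1", "commit")]
```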
Database Recovery
The ARIES Recovery Algorithm
The Transaction table and the Dirty Page table
For efficient recovery following tables are also stored in the log during
checkpointing:
Transaction table: Contains an entry for each active transaction,
with information such as transaction ID, transaction status and the
LSN of the most recent log record for the transaction.

Dirty Page table: Contains an entry for each dirty page in the buffer,
which includes the page ID and the LSN corresponding to the earliest
update to that page.

Chapter 19-32
Database Recovery
The ARIES Recovery Algorithm
Checkpointing
A checkpointing does the following:
1. Writes a begin_checkpoint record in the log
2. Writes an end_checkpoint record in the log. With this record the
contents of transaction table and dirty page table are appended to
the end of the log.
3. Writes the LSN of the begin_checkpoint record to a special file.
This special file is accessed during recovery to locate the last
checkpoint information.
To reduce the cost of checkpointing and allow the system to
continue to execute transactions, ARIES uses "fuzzy
checkpointing".
Chapter 19-33
Database Recovery
The ARIES Recovery Algorithm
The following steps are performed for recovery
1. Analysis phase: Start at the begin_checkpoint record and
proceed to the end_checkpoint record. The transaction table and
dirty page table appended to the end of the log are accessed. Note
that during this phase some other log records may be written to the
log and the transaction table may be modified. The analysis phase
compiles the set of redo and undo operations to be performed and ends.
2. Redo phase: Starts from the point in the log up to which all dirty
pages have been flushed, and moves forward to the end of the log.
Any change that appears in the dirty page table is redone.
3. Undo phase: Starts from the end of the log and proceeds
backward while performing appropriate undo. For each undo it
writes a compensating record in the log.
Chapter 19-34
Database Recovery
An example of the working of ARIES scheme
LSN LAST-LSN TRAN-ID TYPE PAGE-ID Other Info.
1 0 T1 update C -----
2 0 T2 update B -----
3 1 T1 commit -----
(a) 4 begin checkpoint
5 end checkpoint
6 0 T3 update A -----
7 2 T2 update C -----
8 7 T2 commit -----

TRANSACTION TABLE DIRTY PAGE TABLE


TRANSACTION ID LAST LSN STATUS PAGE ID LSN
(b) T1 3 commit C 1
T2 2 in progress B 2

TRANSACTION TABLE DIRTY PAGE TABLE


TRANSACTION ID LAST LSN STATUS PAGE ID LSN
T1 3 commit C 1
(c) T2 8 commit B 2
T3 6 in progress A 6

Chapter 19-35
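The three passes over the example log can be caricatured in a few lines. This is a heavily simplified sketch: the page-LSN comparison is what makes "repeating history" safe (already-applied updates are skipped), and the undo pass overwrites in place where real ARIES would write CLRs with fresh LSNs:

```python
def aries_recover(log, disk):
    """Tiny ARIES-flavoured recovery sketch.  Update records are dicts like
    {"lsn": n, "tid": t, "page": p, "bfim": x, "afim": y}; commit records
    are {"lsn": n, "tid": t, "commit": True}.  Pages in `disk` map
    page -> (value, page_lsn)."""
    # 1. Analysis: identify committed transactions (losers are the rest).
    committed = {r["tid"] for r in log if r.get("commit")}
    # 2. Redo: repeat history for every update newer than the page's LSN.
    for r in log:
        if "page" in r:
            _, page_lsn = disk.get(r["page"], (None, 0))
            if r["lsn"] > page_lsn:
                disk[r["page"]] = (r["afim"], r["lsn"])
    # 3. Undo: scan backwards and roll back the losers' updates.
    for r in reversed(log):
        if "page" in r and r["tid"] not in committed:
            disk[r["page"]] = (r["bfim"], r["lsn"])
    return disk

# Shaped like the example table: T1 updates C and commits, T2 updates B
# but never commits before the crash.
crash_log = [{"lsn": 1, "tid": "T1", "page": "C", "bfim": 0, "afim": 5},
             {"lsn": 2, "tid": "T2", "page": "B", "bfim": 7, "afim": 9},
             {"lsn": 3, "tid": "T1", "commit": True}]
```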
Database Recovery
10 Recovery in multidatabase system
A multidatabase system is a special distributed database system
where one node may be running a relational database system under
Unix, another may be running an object-oriented system under
Windows, and so on. A transaction may run in a distributed fashion
at multiple nodes. In this execution scenario the transaction
commits only when all these multiple nodes agree to commit
individually the part of the transaction they were executing. This
commit scheme is referred to as “two-phase commit” (2PC). If any
one of these nodes fails or cannot commit the part of the
transaction, then the transaction is aborted. Each node recovers the
transaction under its own recovery protocol.
Chapter 19-36
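The voting logic of 2PC can be sketched as a toy coordinator (names and structure are invented for illustration; a real protocol also logs prepare/commit decisions for recovery at each node):

```python
def two_phase_commit(participants):
    """Toy 2PC coordinator.  `participants` maps a node name to a prepare()
    callable returning True (vote commit) or False (vote abort).  The
    global decision is COMMIT only if every node votes yes; otherwise
    every node rolls back its part under its own recovery protocol."""
    votes = {name: prepare() for name, prepare in participants.items()}  # phase 1
    decision = "COMMIT" if all(votes.values()) else "ABORT"              # phase 2
    return decision, votes
```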
References
• Ramez Elmasri and Shamkant B. Navathe, “Fundamentals of Database Systems”,
Benjamin/Cummings Publishing Co.
• Abraham Silberschatz and Henry F. Korth, “Database System Concepts”, McGraw-Hill.
• C.J. Date, “An Introduction to Database Systems”, Addison-Wesley.
• Thomas M. Connolly and Carolyn E. Begg, “Database Systems: A Practical Approach to
Design, Implementation and Management”, 5/E, Addison-Wesley.

37
THANK YOU

For queries
Email: [email protected]
38
