TM05 Monitor and Administer Database
Administration Level-IV
Based on March 2022, Curriculum Version II
Structured Query Language (SQL): Use a standardized query language like SQL to interact with and
manipulate the database. SQL provides a set of commands for data definition, data manipulation, and data
control.
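Example (an illustrative sketch; the table, account, and host names are assumptions for demonstration):
-- Data definition (DDL): create a table
CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL
);
-- Data manipulation (DML): add and retrieve rows
INSERT INTO Customers (CustomerID, Name) VALUES (1, 'Abebe');
SELECT Name FROM Customers WHERE CustomerID = 1;
-- Data control (DCL): grant read-only access (MySQL-style account syntax)
GRANT SELECT ON Customers TO 'report_user'@'localhost';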
Security: Implement security measures to protect sensitive data. This includes user authentication,
authorization, and encryption.
Concurrency Control: Manage multiple users accessing the database simultaneously to prevent conflicts
and ensure data consistency.
Scalability: Design databases to scale with the growth of data and user interactions. This involves
considerations like indexing, partitioning, and clustering.
Backup and Recovery: Regularly back up the database to prevent data loss in the event of system failures
or other disasters. Establish procedures for database recovery.
Data Independence: Separate the logical and physical aspects of the database, allowing changes to one
without affecting the other. This enhances flexibility and maintainability.
Database Software Installation: Install the database software on the server, ensuring that the
installation follows best practices and is compatible with the operating system. Configure the
software with the necessary options and settings.
Memory Allocation: Allocate and configure memory settings for the database system. This
includes setting parameters such as buffer sizes, cache sizes, and other memory-related
configurations to optimize database performance.
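The exact mechanism is vendor-specific. As one illustration, SQL Server exposes its memory ceiling through sp_configure (the 4096 MB value below is only an example, not a recommendation):
-- Cap the instance's memory use at 4 GB (SQL Server)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;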
Storage Configuration
Configure storage settings, including the location of database files, log files, and backup files. Ensure that
there is adequate disk space, and consider factors like disk speed and RAID configurations for optimal
performance.
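Example (SQL Server syntax; the drive letters, paths, and sizes are illustrative assumptions):
-- Place data and log files on separate drives with explicit sizes
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data, FILENAME = 'D:\Data\SalesDB.mdf', SIZE = 500MB, FILEGROWTH = 100MB)
LOG ON
    (NAME = SalesDB_log, FILENAME = 'E:\Logs\SalesDB.ldf', SIZE = 200MB, FILEGROWTH = 50MB);
Separating data and log files reduces I/O contention, since transaction log writes are sequential while data file access is largely random.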
Database Instance Configuration
Configure the database instance with parameters specific to the database engine. These parameters may
include settings related to transaction logs, data files, and temporary storage.
Network Configuration
Set up network configurations to enable communication between the database server and clients.
Configure network protocols, firewall settings, and ensure proper connectivity.
Authentication and Authorization
Configure authentication methods and authorization settings to control access to the database. This
involves setting up user accounts, roles, and permissions based on security requirements.
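Example (MySQL-style syntax; the account name, password, and object names are illustrative):
-- Create an account and grant only the privileges its role requires
CREATE USER 'clerk1'@'localhost' IDENTIFIED BY 'StrongPassword!1';
GRANT SELECT, INSERT ON sales_db.orders TO 'clerk1'@'localhost';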
Startup Parameters
Define startup parameters for the database system. These parameters may include options related to
recovery, logging, and other critical aspects of the database startup process.
Error Handling and Logging
Configure error handling mechanisms and logging settings to capture information about the startup
process. This is crucial for diagnosing issues and monitoring the health of the database system.
Backup Configuration
Establish backup configurations to ensure that regular backups are scheduled and performed. This
includes specifying backup locations, retention policies, and verification procedures.
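Example (SQL Server syntax; the database name and backup path are illustrative):
-- Full backup; CHECKSUM enables later verification, INIT overwrites
-- any existing backup sets in the file
BACKUP DATABASE SalesDB
TO DISK = 'F:\Backups\SalesDB_full.bak'
WITH CHECKSUM, INIT;
A job scheduler (such as SQL Server Agent or cron) would then run this statement on the agreed schedule.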
Monitoring and Alerting
Set up monitoring tools and configure alerting mechanisms to proactively monitor the database system.
This allows for the early detection of issues and prompt resolution.
Documentation
Document the entire system configuration process. This documentation serves as a reference for future
maintenance, upgrades, and troubleshooting.
The system configuration for database startup is a critical step in ensuring the stability, performance, and
security of the database environment. Proper configuration practices contribute to a well-tuned and
efficiently running database system.
Log Monitoring: Regularly check database logs for any error messages, warnings, or abnormal events
during startup and ongoing operation. Logs provide valuable information about the system's health.
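Example (one common approach on SQL Server; the search term is illustrative):
-- Search the current error log for entries containing 'error'
EXEC sp_readerrorlog 0, 1, 'error';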
Performance Monitoring: Utilize performance monitoring tools to track database performance metrics.
This includes monitoring CPU usage, memory utilization, disk I/O, and query response times. Deviations
from normal patterns can indicate potential issues.
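Example (a minimal sketch using a SQL Server dynamic management view):
-- List active requests, longest-running first
SELECT session_id, status, wait_type, total_elapsed_time
FROM sys.dm_exec_requests
ORDER BY total_elapsed_time DESC;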
Alerts and Notifications: Implement alerting mechanisms to notify administrators of any irregularities.
Set up alerts for critical events such as system failures, performance bottlenecks, or security breaches.
Database Health Checks: Conduct regular health checks to assess the overall state of the database. This
involves reviewing system parameters, configurations, and resource utilization to identify any
abnormalities.
Startup Procedures: Establish and document clear procedures for starting up the database system.
Regularly review the startup logs to ensure that the database initializes without errors.
Automated Monitoring Scripts: Develop and implement automated scripts to monitor key database
parameters. These scripts can perform periodic checks and report any deviations from predefined
thresholds.
Security Audits: Conduct regular security audits to identify and address any vulnerabilities or
unauthorized access attempts. Monitor login attempts, privilege changes, and data access patterns.
Backup Verification: Regularly verify the integrity of database backups to ensure they can be
successfully restored. This helps in preparing for potential disasters or data corruption issues.
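Example (SQL Server syntax; the backup path is illustrative):
-- Confirm a backup file is readable and complete without restoring it
RESTORE VERIFYONLY FROM DISK = 'F:\Backups\SalesDB_full.bak' WITH CHECKSUM;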
Resource Utilization: Monitor resource utilization such as CPU, memory, and disk space to ensure that
the database has sufficient resources for normal operation. Plan for scalability if resource demands
increase.
Query Performance: Keep an eye on the performance of frequently executed queries. Identify and
optimize poorly performing queries to prevent degradation of overall system performance.
User Activity Monitoring: Track user activity, especially during peak usage periods. Unusual spikes in
activity or unauthorized access attempts should be investigated promptly.
Data Consistency Checks: Implement routines to check the consistency of data stored in the database. Detect
and resolve any discrepancies or anomalies that may arise during normal operation.
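Example (SQL Server syntax; the database name is illustrative):
-- Check the logical and physical consistency of every object in the database
DBCC CHECKDB ('SalesDB') WITH NO_INFOMSGS;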
By consistently monitoring these aspects, database administrators can proactively identify and address
irregularities, minimizing the risk of downtime, data loss, or security breaches. Regular reviews and
adjustments to monitoring strategies are essential to adapt to changing usage patterns and evolving system
requirements.
2.1.1. Data Dictionary Compilation
A data dictionary is a centralized repository that provides metadata about the data within a database. It
typically includes details such as data definitions, data types, relationships between tables, constraints, and
other essential information.
A data dictionary is a collection of descriptions of the data objects or items in a data model for the benefit of
programmers and others who need to refer to them.
It is a set of information describing the contents, format, and structure of a database and the relationship
between its elements, used to control access to and manipulation of the database.
When developing programs that use the data model, a data dictionary can be consulted to understand where a
data item fits in the structure, what values it may contain, and basically what the data item means in real-
world terms.
Most DBMSs keep the data dictionary hidden from users to prevent them from accidentally destroying its
contents.
Example:
In a data dictionary, you might document that the "Customers" table includes fields such as
"CustomerID," "Name," "Email," and "Phone." For each field, you specify the data type, length, and
any constraints.
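Most relational DBMSs also expose this metadata through catalog views, which can be used to cross-check a data dictionary entry. A sketch using the standard INFORMATION_SCHEMA views (available in MySQL, SQL Server, and others):
-- Column names, types, lengths, and nullability for the Customers table
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Customers';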
2.1.2. Structure Verification
Verify that the actual structure of the database matches the information documented in the data dictionary.
This ensures that the database schema aligns with the intended design and that any changes to the database
structure are accurately reflected in the data dictionary.
Example:
If the data dictionary indicates that the "Orders" table should have a foreign key relationship with the
"Customers" table, verify that this relationship exists in the database schema. Check that the data types,
constraints, and relationships match the documented specifications.
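Example (a sketch using standard INFORMATION_SCHEMA views):
-- List the foreign-key constraints defined on the Orders table
SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE TABLE_NAME = 'Orders' AND CONSTRAINT_TYPE = 'FOREIGN KEY';
The same view reports PRIMARY KEY, UNIQUE, and CHECK constraints, so it also supports the constraint verification step described below.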
Consistency Checks: Conduct consistency checks to ensure that the data dictionary is consistent with
other project documentation and requirements. This involves verifying that changes made to the database
structure are appropriately updated in the data dictionary.
Example:
If there's a change in a table structure, such as adding a new field, ensure that the data dictionary is updated to
reflect this change. Consistency checks prevent discrepancies between documentation and the actual database.
Data Type Verification: Check that the data types assigned to each field in the database match the
specifications in the data dictionary. This includes verifying numeric precision, string lengths, and other
data type attributes.
Example
If the data dictionary specifies that the "Price" field should be a numeric data type with two decimal places,
verify that this is accurately implemented in the database.
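Example (standard INFORMATION_SCHEMA query; the table and column names are illustrative):
-- Confirm that Price is numeric with the documented precision and scale
SELECT COLUMN_NAME, DATA_TYPE, NUMERIC_PRECISION, NUMERIC_SCALE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Products' AND COLUMN_NAME = 'Price';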
Constraint Verification: Verify that constraints, such as primary keys, foreign keys, unique constraints,
and check constraints, are implemented correctly in the database according to the data dictionary.
Example
If the data dictionary specifies that the "ProductID" field is the primary key for the "Products" table, verify
that this constraint is enforced in the database schema.
Documentation Updates: If any discrepancies or changes are identified during the verification process,
update the data dictionary and any related documentation accordingly. Keep the documentation
synchronized with the actual database structure.
Example
If a new index is created on a table for performance reasons, update the data dictionary to include information
about this index.
Version Control: Implement version control for the data dictionary to track changes over time. This
ensures that you can trace modifications, additions, or deletions to the data dictionary and understand the
evolution of the database structure.
Example:
Use version control tools to track changes to the data dictionary, providing a history of alterations to the
database structure.
By compiling a comprehensive data dictionary and regularly verifying the database structure against it,
organizations can maintain consistency, accuracy, and documentation integrity, which is crucial for effective
database management and development.
2.2. Data Integrity Constraint Maintenance
Data integrity is a constraint used to ensure the accuracy and consistency of data in a database by validating
data before it is stored in the columns of a table.
Data integrity refers to the overall completeness, accuracy, and consistency of data according to business
requirements.
Data integrity constraint maintenance is a critical aspect of managing a database. Data integrity constraints
define the rules that must be adhered to when inserting, updating, or deleting data in a database. Here's an
overview of key considerations for maintaining data integrity constraints:
Types of integrity constraints
1. Entity integrity
2. Referential integrity
3. Domain integrity
4. User defined integrity
1. Entity integrity
This is concerned with the concept of primary keys. The rule states that every table must have a primary key
and that the primary key value must be unique and not null.
2. Referential Integrity
This is the concept of foreign keys. The rule states that a foreign key value can be in one of two states: it
either refers to a primary key value in another table, or it is null. A null value may simply mean that there is
no relationship, or that the relationship is unknown.
Referential integrity is a feature provided by relational DBMS that prevents users from entering inconsistent
data.
3. Domain Integrity
This states that all columns in a relational database are in a defined domain.
The concept of data integrity ensures that all data in a database can be traced and connected to other data. This
ensures that everything is recoverable and searchable. Having a single, well defined and well controlled data
integrity system increases stability, performance, reusability and maintainability.
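Taken together, the first three constraint types can be illustrated with a pair of table definitions (a minimal sketch; the table and column names are illustrative):
CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY,          -- entity integrity: unique and not null
    Email      VARCHAR(100) NOT NULL
);
CREATE TABLE Orders (
    OrderID    INT PRIMARY KEY,
    CustomerID INT,
    Quantity   INT CHECK (Quantity > 0), -- domain integrity: restricts allowed values
    FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID)  -- referential integrity
);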
4. User Defined Integrity
User-defined integrity allows you to define specific business rules that do not fall into one of the other
integrity categories. All of the integrity categories support user-defined integrity (all column- and table-level
constraints in CREATE TABLE, stored procedures, and triggers).
Business rules may dictate/state that when a specific action occurs further actions should be triggered.
For example, deletion of a record automatically writes that record to an audit table.
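A trigger is one way to implement such a rule (SQL Server syntax; the audit table Orders_Audit is an assumed, pre-existing table):
-- Copy each deleted order into an audit table
CREATE TRIGGER trg_Orders_DeleteAudit
ON Orders
AFTER DELETE
AS
BEGIN
    INSERT INTO Orders_Audit (OrderID, CustomerID, DeletedAt)
    SELECT OrderID, CustomerID, CURRENT_TIMESTAMP
    FROM deleted;   -- "deleted" holds the rows removed by the triggering statement
END;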
2.3. Creation and Design of Indexes and Multiple-Field Keys
The creation and design of indexes, as well as the use of multiple-field keys (composite keys), are crucial
aspects of database optimization.
- You should create indexes on columns that are used frequently in WHERE clauses.
- You should create indexes on columns that are used frequently to join tables.
- You should create indexes on columns that are used frequently in ORDER BY clauses.
- You should create indexes on columns whose values are mostly distinct or unique (highly selective columns).
- You should not create indexes on small tables (tables that use only a few blocks) because a full table scan
may be faster than an indexed query.
- If possible, choose a primary key that orders the rows in the most appropriate order.
Creating indexes
Indexes can be created to order the values in a column in ascending or descending sequence.
You can use the CREATE INDEX statement to create indexes.
The general form of CREATE INDEX statement is:
CREATE INDEX index_name ON table_name (column1 [ASC | DESC] ,...)
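Example (the index, table, and column names are illustrative):
-- Single-column index in ascending order
CREATE INDEX idx_customers_name ON Customers (Name ASC);
-- Multiple-field (composite) index combining two columns
CREATE INDEX idx_orders_cust_date ON Orders (CustomerID ASC, OrderDate DESC);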
Delete an index
Deleting an index means removing one or more relational indexes from the current database.
The DROP INDEX statement is used to delete an index in a table.
Syntax: DROP INDEX index_name ON table_name
To delete an index by using Object Explorer, you can follow the steps as shown below:
- In Object Explorer, expand the database that contains the table on which you want to delete an index.
- Expand the Tables folder.
- Expand the table that contains the index you want to delete.
- Expand the Indexes folder.
- Right-click the index you want to delete and select Delete.
- In the Delete Object dialog box, verify that the correct index is in the Object to be deleted grid and
click OK.
To delete an index using Table Designer
- In Object Explorer, expand the database that contains the table on which you want to delete an index.
- Expand the Tables folder.
- Right-click the table that contains the index you want to delete and click Design.
- On the Table Designer menu, click Indexes/Keys.
- In the Indexes/Keys dialog box, select the index you want to delete.
- Click Delete.
- Click Close.
- On the File menu, select Save table_name.
View and edit indexes
To view all indexes in a database
- In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that
instance.
- Expand Databases, expand the database that contains the table with the specified index, and then
expand Tables.
- Expand the table in which the index belongs and then expand Indexes.
To modify an index by using Object Explorer
- In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that
instance.
- Expand Databases, expand the database in which the table belongs, and then expand Tables.
- Expand the table in which the index belongs and then expand Indexes.
- Right-click the index that you want to modify and then click Properties.
In the Index Properties dialog box, make the desired changes. For example, you can add or remove a column
from the index key, or change the setting of an index option.
2.4. Monitoring Database Locks
Isolation Levels: Understand and configure different isolation levels (e.g., READ COMMITTED, REPEATABLE
READ, SERIALIZABLE) based on the application's requirements. Higher isolation levels generally involve more
restrictive locking, which can impact performance.
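Example (SQL Server syntax; the table and key value are illustrative):
-- Re-reads inside this transaction see the same row values
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT Quantity FROM Orders WHERE OrderID = 42;
COMMIT;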
Row-Level Locking: Consider using row-level locking instead of table-level locking when possible. Row-level locking
allows for more granular control and reduces contention for resources.
Lock Escalation: Monitor and understand lock escalation mechanisms in the database system. Lock escalation occurs
when a lower-level lock (e.g., row-level) is escalated to a higher-level lock (e.g., table-level) to manage resources more
efficiently.
Lock Statistics: Regularly review lock statistics and performance metrics to identify patterns of contention. This
information helps in making informed decisions about indexing, query optimization, and application design.
Database Management System (DBMS) Tools: Utilize built-in tools provided by the DBMS for monitoring locks and
transactions. These tools often provide insights into lock wait times, deadlocks, and other relevant metrics.
Concurrency Control Mechanisms: Understand the concurrency control mechanisms implemented by the DBMS,
such as optimistic or pessimistic concurrency control. Choose the appropriate mechanism based on the application's
requirements.
Alerts and Notifications: Implement alerts and notifications for unusual lock-related activity. This includes
notifications for prolonged lock wait times, frequent deadlocks, or other anomalies.
Documentation: Document lock configurations, monitoring strategies, and any tuning adjustments made to
optimize database performance. This documentation helps in troubleshooting and future optimizations.
Effective monitoring of locks is essential for maintaining a balance between transaction concurrency and data
consistency. Regular analysis of lock-related metrics enables administrators to identify and address issues
proactively, ensuring optimal database performance.
2.5. Backup Verification and Retrieval
Backup verification and retrieval are critical components of a robust data management strategy. Ensuring that
backups are created successfully, verifying their integrity, and having a reliable process for retrieval are
essential for data protection and disaster recovery.
Backup Verification: Regularly verify the integrity of database backups to ensure that they are free from
corruption and can be relied upon for recovery. Verification may involve checking backup files, validating
backup processes, and confirming that the backup captures the entire database.
Verifying that recent backups of a database have been stored successfully and can be retrieved as a full
working copy is a critical aspect of database management. Here's a step-by-step guide on how you might
confirm this:
Identify Backup Location: Determine where the recent backups are stored. This could be on local servers,
network-attached storage (NAS), or cloud storage.
Check Backup Logs: Review backup logs to confirm that recent backup operations were successful. Examine
any error messages or warnings that may indicate issues.
Verify Timestamps: Check the timestamps on the backup files to ensure they correspond to recent backup
operations. This helps confirm that the backup files are up-to-date.
Perform a Trial Restoration: Conduct a trial restoration of the database from the backup to ensure that the
process works as expected. This involves restoring the database to a test environment without affecting the
production system.
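Example (SQL Server syntax; the test database name, logical file names, and paths are illustrative assumptions):
-- Restore the backup under a different name so production is untouched
RESTORE DATABASE SalesDB_Test
FROM DISK = 'F:\Backups\SalesDB_full.bak'
WITH MOVE 'SalesDB_data' TO 'D:\Data\SalesDB_Test.mdf',
     MOVE 'SalesDB_log' TO 'E:\Logs\SalesDB_Test.ldf',
     RECOVERY;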
Test the Restored Database: After the restoration, perform tests on the restored database to ensure that it is
fully functional. Run sample queries, check data integrity, and validate that all necessary components are in
place.
Verify File Integrity: Check the integrity of backup files to ensure they are not corrupted. You can use
checksums or hash functions to verify the integrity of the backup files.
Check Backup Retention Policy: Confirm that the backup retention policy aligns with the organization's
requirements. Ensure that backups are retained for an appropriate duration and that old backups are regularly
pruned.
Ensure Accessibility: Verify that the individuals responsible for database recovery have access to the backup
files and the necessary credentials to restore the database.
Test Recovery Scenarios: Simulate various disaster recovery scenarios (e.g., hardware failure, data
corruption) and ensure that the backup and recovery processes can effectively address these situations.
Documentation Review: Refer to documentation related to backup and recovery procedures to confirm that
the documented steps align with the actual processes followed.
Automation Verification: If backups are automated, ensure that the backup automation scripts or tools are
running as scheduled and producing the expected results.
Coordinate with IT Operations: Communicate with IT operations or relevant teams to confirm that the
backup storage infrastructure is operational and that there are no issues with storage devices or cloud services.
Notification Systems: Ensure that notification systems are in place to alert relevant personnel in case of
backup failures or issues.
Regular Backup Schedule: Establish a regular schedule for backups based on the organization's data
retention policies and recovery objectives. This could include daily, weekly, or incremental backups.
Automated Backup Verification: Implement automated processes to verify the success of backup
operations. This involves checking that backups are completed without errors and are consistent.
User Training and Awareness: Provide training to users and stakeholders on the data update processes and
guidelines. Increasing awareness helps prevent unintentional or unauthorized updates and promotes a culture
of responsible data management.
Collaboration with Application Teams: Collaborate closely with application development teams to ensure
that data updates align with application requirements and do not negatively impact system functionality.
Compliance with Regulations: Ensure that data updates comply with relevant regulations and industry
standards. This is particularly important in industries with specific data governance and compliance
requirements.
Regular reviews and updates to organizational guidelines contribute to a robust and reliable data management
framework.
Unit Three: Database Access Management
"Allocate or remove access privileges according to user status" involves managing user access based on
changes in their status, such as new user onboarding, role changes, or when a user leaves the organization.
Here are examples for this aspect of Access Privilege Management:
Example:
-- Granting conditional access based on user status
-- (note: IF ... THEN is not plain SQL; conditional logic like this must run
-- inside a stored routine or from application code)
IF user_status = 'active' THEN
    GRANT SELECT ON database.table4 TO 'active_user'@'localhost';
ELSE
    REVOKE ALL PRIVILEGES ON database.table4 FROM 'inactive_user'@'localhost';
END IF;
User Access Termination: Ensuring that access privileges are promptly terminated when a user leaves the
organization.
Example:
-- Terminating access for a user who has left the organization
REVOKE ALL PRIVILEGES ON *.* FROM 'former_user'@'localhost';
DROP USER 'former_user'@'localhost';
Access Privileges for Temporary Roles:
Definition: Granting temporary access privileges for users in specific roles or projects.
Example:
-- Granting temporary access for a specific project
GRANT SELECT, INSERT, UPDATE ON project_database.* TO 'temporary_user'@'localhost';
Access Based on User Approval:
Definition: Requiring approval for granting or modifying access privileges.
Example:
-- Granting access after approval
GRANT SELECT, INSERT, UPDATE ON database.table TO 'approved_user'@'localhost';
Access Audit and Logging:
Definition: Logging and auditing access changes to maintain a record of who has been granted or revoked
access privileges.
Example:
-- Logging access changes
-- This could involve triggers or database audit features
INSERT INTO access_log (timestamp, user, action, database_object)
VALUES (CURRENT_TIMESTAMP, 'admin', 'GRANT', 'database.table5');
Real-time Monitoring: Implement real-time monitoring to receive immediate alerts for suspicious log-in
activities. Real-time monitoring enhances the ability to respond promptly to security incidents.
Implement Geolocation Analysis: Utilize geolocation analysis to identify log-ins from locations inconsistent
with normal user behavior. This helps detect potential unauthorized access.
Check for Unusual Log-in Times: Investigate log-ins that occur during unusual hours or outside of normal
business hours. This can be an indicator of unauthorized access.
Review Failed Log-ins: Pay close attention to failed log-in attempts. Excessive failed attempts may suggest a
brute-force attack or an attempt to gain unauthorized access.
Track User Accounts: Monitor log-ins for privileged user accounts closely. Unauthorized access to accounts
with elevated privileges poses a significant security risk.
Regular Security Training: Conduct regular security training for users to raise awareness about the
importance of secure log-in practices and to recognize and report suspicious activities.
Incident Response Plan: Have an incident response plan in place to guide actions in the event of a detected
security breach. This plan should include steps for isolating affected systems, notifying relevant parties, and
conducting forensic analysis.
Continuous Improvement: Continuously refine and improve log-in monitoring based on evolving security
threats and the organization's specific requirements.