Database Testing for Developers
Database testing is among the most important kinds of testing because it checks the overall functionality of the software's data layer, covering aspects such as stored functions, stored procedures, and triggers. Database testing is similar to regression testing in that it can run
automatically to ensure the integrity and robustness of the database layer of the software.
There are good reasons to conduct database testing on a regular basis. Data is an essential asset of a software product, and it is very
important to test and validate it to keep it valid and up to date. The ad hoc testing procedures used by many organizations are not enough,
because they do not ensure the safety and integrity of the data. Software developers therefore need to create a sturdy
testing method that can inspect and validate the data schema under implementation, so that they can keep their data accurate, safe, and secure.
In addition, database testing provides accurate feedback on the errors and bugs present in the system. Without an effective database testing method,
you may find it very difficult to judge the quality and accuracy of your data. Database testing also supports the evolutionary
development of software, because it tells you exactly where changes and modifications are needed.
Testing is especially important for relational databases. You can expect a number of threats and dangers to attack the database you are
creating, and you should know where the threat boundaries lie while you are developing the software: threats exist both inside the database and
at its interface.
This necessitates a well-defined testing method to check for these problems. Clear-box (white-box) testing helps when you want to test the database within its
architecture, while black-box testing is the preferred method when you want to test the interface.
When to conduct database tests is also a vital consideration while you are developing the software, because it helps you prevent unforeseen errors and bugs. Software
developers need to write database tests to check whether the code that touches the database works or not. Because of the underlying concerns about speed, you
may then need to run only a subset of the tests at a time.
If the database fails to live up to its expected standards, you may need to update the existing code to make it pass the written database test. Once you have
conducted the database test, repeating the same test will help you detect whether the modifications and corrections introduced are effective or not.
Database testing should be a priority when you are developing database software. Efficient database testing helps ensure the production of solid,
robust, and sturdy software that rarely fails.
Database testing involves tests that check the exact values retrieved from the database by the web or desktop application. The data should
match the records stored in the database exactly.
Database testing is one of the major forms of testing, and it requires the tester to have expertise in checking tables and in writing queries and procedures. Testing can be performed on a
web or desktop application, and the database used by the application may be SQL Server, Oracle, or another system. Many projects, such as banking, finance, and health insurance applications,
require extensive database testing. The points below discuss how to test a database:
First of all, the tester should make sure that he or she understands the application thoroughly and knows which database is used by the application under test.
Figure out all the tables that exist for the application and try to write queries against each of them. Since some of these queries
can be really complex, you can take the assistance of developers to figure them out. Test each and every table carefully for the data
added. This process works for any application, small or large, and is the best way for testers to perform DB testing.
If things are really complex, the tester can obtain the query from the developer to test the appropriate functionality.
The database is the backbone of the application, and the tester should make sure to test it very carefully. This requires skill, proficiency, and sound knowledge.
Check the functionality triggered by every action performed in the application, such as deletion, addition, or save operations.
Check whether an added record is stored in the DB with the exact value, and check that a deleted record is actually removed from the
database (the sketch after this list illustrates both checks). These are major checks that need to be monitored seriously.
Nowadays databases are getting more complex because of the business logic that plays an important role in applications. The tester should make sure
that values are added correctly after the business rules have been applied.
These, then, are the basic points of how and what to test in a database. Database testing is a genuinely complex task, and it is best
performed by a tester experienced in this field.
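As an illustration of the add/delete checks above, here is a minimal, hypothetical sketch using Python's sqlite3 module; the employees table and its columns are assumptions made for the example, not part of any particular application.

```python
import sqlite3

# An in-memory database stands in for the application's real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# Simulate the application's "add" action, then verify the record landed with the exact value.
conn.execute("INSERT INTO employees (name, salary) VALUES (?, ?)", ("Alice", 50000.0))
row = conn.execute("SELECT name, salary FROM employees WHERE name = ?", ("Alice",)).fetchone()
assert row == ("Alice", 50000.0), "added record was not stored with the exact value"

# Simulate the "delete" action, then verify the record is really gone.
conn.execute("DELETE FROM employees WHERE name = ?", ("Alice",))
assert conn.execute("SELECT COUNT(*) FROM employees").fetchone()[0] == 0, "deleted record still present"
```

In a real project the insert and delete would be performed through the application's own add and delete actions rather than direct SQL, with the SELECT statements verifying the result in the database.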
There is a good reason why most examples of unit testing do not include interactions with the database: such tests are complex to both set up and
maintain. While testing against your database you need to take care of the following variables:
the database schema and tables;
inserting the rows required for the test into these tables;
verifying the state of the database after your test has run; and
cleaning up the database so that each new test starts from a known state.
Because many database APIs such as PDO, MySQLi, or OCI8 are cumbersome to use and verbose to write, doing these steps manually is an absolute
nightmare.
Test code should be as short and precise as possible for several reasons:
You do not want to modify a considerable amount of test code for little changes in your production code.
You want to be able to read and understand the test code easily, even months after writing it.
Additionally, you have to realize that the database is essentially a global input variable to your code. Two tests in your test suite could run against the same
database, possibly reusing data multiple times. Failures in one test can easily affect the result of the following tests, making your testing experience very
difficult. The previously mentioned cleanup step is of major importance in solving the “database is a global input” problem.
In his book xUnit Test Patterns, Gerard Meszaros lists the four stages of a unit test:
1. Set up fixture
2. Exercise System Under Test
3. Verify outcome
4. Teardown
A test fixture describes the initial state of the database before testing begins. After the fixtures are set up, database behavior is tested against the defined test cases.
Depending on the outcome, test cases are either modified or kept as is. The tear-down stage then either terminates testing or continues with other test
cases.
For successful database testing, the following workflow is commonly executed by each single test:
Clean up the database: If testable data is already present in the database, the database needs to be emptied.
Set up fixture: A tool like PHPUnit will then iterate over the fixtures and insert them into the database.
Run test, verify outcome, and tear down: After resetting the database to empty and loading the fixtures, the test is run and the output is
verified. If the output is as expected, the tear-down process follows; otherwise, testing is repeated.
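The text above mentions PHPUnit; the same four-stage workflow looks much alike in any xUnit-style framework. Here is a minimal sketch using Python's unittest with an in-memory SQLite database; the products table is an assumption made for illustration.

```python
import sqlite3
import unittest

class ProductRepositoryTest(unittest.TestCase):
    def setUp(self):
        # Clean up + set up fixture: a fresh in-memory database guarantees a
        # clean state, and the INSERTs below are the fixture rows.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
        self.conn.executemany("INSERT INTO products (name) VALUES (?)",
                              [("hammer",), ("wrench",)])

    def test_product_count(self):
        # Run test and verify outcome: exercise the system under test,
        # then assert on the state it produced.
        count = self.conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
        self.assertEqual(count, 2)

    def tearDown(self):
        # Tear down: release the database so the next test starts clean.
        self.conn.close()

if __name__ == "__main__":
    unittest.main()
```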
The figure indicates the areas of testing involved during different database testing methods, such as black-box testing and white-box testing.
1. Black Box Testing in database testing
Black box testing involves testing interfaces and the integration of the database, which includes:
Mapping of data (including metadata)
Verifying incoming data
Verifying outgoing data from query functions
Various techniques such as cause-effect graphing, equivalence partitioning, and boundary-value analysis
With the help of these techniques, the functionality of the database can be tested thoroughly.
The pros and cons of black box testing are as follows. Test case generation in black box testing is fairly simple; it is completely independent of software
development and can be done at an early stage of development. As a consequence, the programmer has better knowledge of how to design the database
application and spends less time debugging. The cost of developing black box test cases is also lower than that of white box test cases. The major
drawback of black box testing is that it is unknown how much of the program is being tested; also, certain errors cannot be detected.
2. White Box Testing in database testing
White box testing mainly deals with the internal structure of the database. The specification details are hidden from the user.
It involves the testing of database triggers and logical views which are going to support database refactoring.
It performs module testing of database functions, triggers, views, SQL queries etc.
It validates database tables, data models, database schema etc.
It checks rules of Referential integrity.
It selects default table values to check on database consistency.
The techniques used in white box testing are condition coverage, decision coverage, statement coverage, cyclomatic complexity.
The main advantage of white box testing in database testing is that coding errors are detected, so internal bugs in the database can be eliminated. Its
limitation is that the SQL statements themselves are not covered.
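As a concrete taste of the white-box items listed above, the following hedged sketch module-tests a database trigger in Python/SQLite; the accounts/audit_log schema and the trigger itself are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
CREATE TABLE audit_log (account_id INTEGER, old_balance REAL, new_balance REAL);
-- Trigger under test: every balance update must leave an audit trail.
CREATE TRIGGER trg_audit AFTER UPDATE OF balance ON accounts
BEGIN
    INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
END;
""")

conn.execute("INSERT INTO accounts VALUES (1, 100.0)")
conn.execute("UPDATE accounts SET balance = 150.0 WHERE id = 1")

# White-box verification: the trigger fired exactly once, with the expected values.
assert conn.execute("SELECT * FROM audit_log").fetchall() == [(1, 100.0, 150.0)]
```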
While generating test cases for database testing, the semantics of the SQL statements need to be reflected in the test cases. For that purpose, a technique called the
WHite bOx Database Application TEchnique (WHODATE) is used. As shown in the figure, SQL statements are independently converted into general-purpose language (GPL)
statements, followed by traditional white box testing to generate test cases that include the SQL semantics.
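As a rough illustration of the WHODATE idea (not the published algorithm itself), the selection predicate of an SQL statement can be mirrored in general-purpose-language code, after which ordinary branch coverage drives test-case generation; the orders query below is invented for the example.

```python
# Hypothetical SQL under test:
#   SELECT * FROM orders WHERE amount > 1000 AND status = 'OPEN'
# A GPL rendering of its predicate; branch coverage over this function
# demands test rows that exercise both outcomes of each condition.
def order_is_selected(amount: float, status: str) -> bool:
    if amount > 1000:            # branch 1
        if status == "OPEN":     # branch 2
            return True
    return False

# Test rows derived by covering every branch outcome:
assert order_is_selected(2000.0, "OPEN") is True
assert order_is_selected(2000.0, "CLOSED") is False
assert order_is_selected(500.0, "OPEN") is False
```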
Database testing also has drawbacks:
1. The setup for database testing is costly and complex to maintain, because database systems are constantly changing with the expected insert, delete, and
update operations.
2. Extra overhead is involved in order to determine the state of the database transactions.
3. After cleaning up the database, new test cases have to be designed.
4. An SQL generator is required to transform SQL statements in order to include the SQL semantics in database test cases.
The main types of software testing are as follows:
1. Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.
2. White box testing – This testing is based on knowledge of the internal logic of an application’s code; it is also known as glass box testing. The internal
workings of the software and code must be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.
3. Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. It may require developing test driver modules or test harnesses.
4. Incremental integration testing – A bottom-up approach to testing, i.e., continuous testing of an application as new functionality is added;
application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
5. Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules,
individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed
systems.
6. Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per requirements or not. It is black-box testing
geared to the functional requirements of an application.
7. System testing – Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers
all combined parts of a system.
8. End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use,
such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
9. Sanity testing – Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application
crashes in initial use, the system is not stable enough for further testing, and the build or application is assigned to be fixed.
10. Regression testing – Testing the application as a whole after a modification in any module or functionality. Since it is difficult to cover the entire system in
regression testing, automation tools are typically used for this type of testing.
11. Acceptance testing – Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer does this
testing to determine whether to accept the application.
12. Load testing – Performance testing that checks system behavior under load: testing an application under heavy loads, such as testing a web
site under a range of loads to determine at what point the system’s response time degrades or fails.
13. Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as inputting data
beyond storage capacity, running complex database queries, or providing continuous input to the system or database.
14. Performance testing – A term often used interchangeably with ‘stress’ and ‘load’ testing; it checks whether the system meets its performance
requirements. Different performance and load tools are used for this.
15. Usability testing – A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented
for whenever the user gets stuck? Basically, system navigation is checked in this testing.
16. Install/uninstall testing - Tested for full, partial, or upgrade install/uninstall processes on different operating systems under different hardware,
software environment.
17. Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
18. Security testing – Can the system be penetrated by any hacking approach? This tests how well the system protects against unauthorized internal or external
access, and checks whether the system and database are safe from external attacks.
19. Compatibility testing – Testing how well the software performs in a particular hardware/software/operating system/network environment, and in
different combinations of the above.
20. Comparison testing – Comparison of product strengths and weaknesses with previous versions or other similar products.
21. Alpha testing – An in-house virtual user environment can be created for this type of testing. It is done toward the end of development; minor
design changes may still be made as a result of such testing.
22. Beta testing – Testing typically done by end users or others; the final testing before releasing the application for commercial purposes.
Software testing is an integral part of the software development life cycle (SDLC). Testing a piece of code effectively and efficiently is as
important as, if not more important than, writing it. Software testing consists of subjecting a piece of code to both controlled and uncontrolled operating
conditions, observing the output, and then examining whether it is in accordance with certain pre-specified conditions.
Different sets of test cases and testing strategies are prepared, all of which are aimed at achieving one common goal - removing bugs and errors
from the code, and making the software error-free, and capable of providing accurate and optimum output.
Testing Methodology
The commonly used testing methodologies are unit testing, integration testing, acceptance testing, and system testing. Software is subjected to these
tests in a particular order.
[Figure: testing levels, applied from first to last: Unit Testing, Integration Testing, System Testing, Acceptance Testing]
Unit Testing - The first to be carried out is the unit test. As the name suggests, this method tests at the object level. Individual software components are
tested for any errors. Accurate knowledge of the program is needed for this test, as each module is checked. Thus, this testing is done by the
programmers and not the testers. Test codes are created to check if the software behaves as it is intended to.
Integration Testing - Individual modules that are already subjected to unit testing are integrated with one another, and are tested for faults. Such a type
of testing highlights interfacing errors. A 'top-down' approach of integration testing follows the architectural structure of the system. Another approach
taken is the 'bottom-up' approach, which is conducted from the bottom of the control flow.
System Testing - In this testing, the entire system is tested for errors and bugs. This test is carried out by interfacing hardware and software components
of the entire system, and then testing it. This testing is listed under the black-box testing method, where the software is checked for user-expected
working conditions.
Acceptance Testing - This is the last test that is conducted before the software is handed over to the client. It is carried out to ensure that the software
that has been developed meets all customer requirements. There are two types of acceptance testing - one that is carried out by the members of the
development team, known as internal acceptance testing (Alpha testing), and the other that is carried out by the customer, known as external acceptance
testing. If the testing is carried out by the intended customers, it is termed customer acceptance testing. If the test is performed by the end users of
the software, it is known as user acceptance testing (Beta testing).
There are a few basic testing methods that form a part of the software testing regime. These tests are generally considered to be self-sufficient in finding
out errors and bugs in the entire system.
Black-box Testing - Black-box testing is carried out without any knowledge of the internal workings of the system. The tester simulates the user
environment by providing different inputs and testing the generated outputs. This test is also known as closed-box testing or functional testing.
White-box Testing - White-box testing, unlike the black-box one, takes into account the internal functioning and logic of the code. To carry out this
test, the tester should have knowledge of the code, so as to find out the exact part of the code that is having errors. This test is also known as open-box
testing or glass testing.
Gray-box Testing - The testing where part knowledge of the code is necessary to carry out the test is called gray-box testing. This testing is done by
referring to system documents and data flow diagrams. The testing is conducted by the end users, or users who pose as end users.
Non-Functional Tests
Security Testing - An application's security is one of the main concerns of the developer. Security testing tests the software for confidentiality,
integrity, authentication, availability, and non-repudiation. Individual tests are conducted to prevent any unauthorized access to the software code.
Stress Testing - Software stress testing is a method where the software is subjected to conditions that are beyond the software's normal working
conditions. Once the break-point is reached, the results obtained are tested. This test determines the stability of the entire system.
Compatibility Testing - The software is tested for its compatibility with an external interface, like operating systems, hardware platforms, web
browsers, etc. The non-functional compatibility test checks whether the product is built to suit any software platform.
Efficiency Testing - As the name suggests, this testing technique checks the amount of code or resources that are used by the software while performing
a single operation. It is tested in terms of number of test cases that are executed in a given time frame.
Usability Testing - This testing looks at the usability aspect of the software. The ease with which a user can access the product forms the main testing
point. Usability testing looks at five aspects: learnability, efficiency, satisfaction, memorability, and errors.
Waterfall Model - The waterfall model adopts a 'top-down' approach, regardless of whether it is being used for software development or testing. The
basic steps involved in this software testing methodology are as follows:
Requirement analysis
Test case design
Test case implementation
Testing, debugging, and validating the code or product
Deployment and maintenance
In this methodology, you move on to the next step only after you have completed the present step. The model follows a non-iterative approach. The
main benefit of this methodology is its simplistic, systematic, and orthodox approach. However, it has many shortcomings, since bugs and errors in the
code are not discovered until and unless the testing stage is reached. This can often lead to wastage of time, money, and other valuable resources.
Agile Model - This methodology follows neither a purely sequential approach nor a purely iterative approach. It is a selective mix of both approaches,
in addition to quite a few newer development methods. Fast and incremental development is one of the key principles of this methodology. The
focus is on obtaining quick, practical, and visible outputs, rather than merely following the theoretical processes. Continuous customer interaction and
participation is an integral part of the entire development process.
Rapid Application Development (RAD) - The name says it all. In this case, the methodology adopts a rapid developmental approach, by using the
principle of component-based construction. After understanding the different requirements of the given project, a rapid prototype is prepared, and is then
compared with the expected set of output conditions and standards. Necessary changes and modifications are made following joint discussions with the
customer or development team (in the context of software testing). Though this approach does have its share of advantages, it can be unsuitable if the
project is large, complex, and happens to be extremely dynamic in nature, wherein requirements change constantly.
Spiral Model - As the name implies, the spiral model follows an approach in which there are a number of cycles (or spirals) of all the sequential steps
of the waterfall model. Once the initial cycle gets completed, a thorough analysis and review of the achieved product or output is performed. If it is not
as per the specified requirements or expected standards, a second cycle follows, and so on. This methodology follows an iterative approach, and is
generally suited for large projects, having complex and constantly changing requirements.
Rational Unified Process (RUP) - The RUP methodology is also similar to the spiral model, in the sense that the entire testing procedure is broken up
into multiple cycles or processes. Each cycle consists of four phases - inception, elaboration, construction, and transition. At the end of each cycle, the
product/output is reviewed, and a further cycle (made up of the same four phases) follows if necessary. Today, you will find certain organizations and
companies adopting a slightly modified version of the RUP, which goes by the name Enterprise Unified Process (EUP).
With applications of information technology growing with every passing day, the importance of proper software testing has grown manifold. Many
firms have dedicated teams for this purpose, and the scope for software testers is on par with that of developers.
A testing environment is a setup of software and hardware on which the testing team performs the testing of the newly built software product.
This setup consists of a physical setup, which includes hardware, and a logical setup, which includes the server operating system, client operating system,
database server, front-end running environment, browser (if a web application), IIS (version on the server side), and any other software components required to
run the software product. This testing setup is to be built at both ends, i.e., the server and the client.
A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements.
A test plan is usually prepared by or with significant input from Test Engineers.
Depending on the product and the responsibility of the organization to which the test plan applies, a test plan may include one or more of the
following:
Design Verification or Compliance test - to be performed during the development or approval stages of the product, typically on a small
sample of units.
Manufacturing or Production test - to be performed during preparation or assembly of the product in an ongoing manner for purposes of
performance verification and quality control.
Acceptance or Commissioning test - to be performed at the time of delivery or installation of the product.
Service and Repair test - to be performed as required over the service life of the product.
Regression test - to be performed on an existing operational product, to verify that existing functionality didn't get broken when other aspects
of the environment are changed (e.g., upgrading the platform on which an existing application runs).
A complex system may have a high level test plan to address the overall requirements and supporting test plans to address the design details of
subsystems and components.
Test plan document formats can be as varied as the products and organizations to which they apply. There are three major elements that should be
described in the test plan: Test Coverage, Test Methods, and Test Responsibilities. These are also used in a formal test strategy.
The testing process consists of three phases: planning, acquisition, and execution and evaluation.
The planning phase provides an opportunity for the tester to determine what to test and how to test it. The acquisition phase is the time during which the
required testing software is manufactured, data sets are defined and collected, and detailed test scripts are written. During the execution and evaluation
phase the test scripts are executed and the results of that execution are evaluated to determine whether the product passed the test.
In this section I will focus on the planning phase. I am going to briefly discuss test strategies and then focus on some techniques for
determining which features or components within a program to test more intensely than others. I will also discuss a technique for incrementally defining
a test suite so that the level of intensity is used to determine the number of test cases to be created.
Test Strategy
The project test plan should describe the overall strategy that the project will follow for testing the final application and the products leading up to the
completed application. Strategic decisions that may be influenced by the choice of development paradigms and process models include:
When to test. The test plan should show how the stages of the testing process, such as component, integration and acceptance, correspond to stages of
the development process. For those of us who have adopted an iterative, incremental development strategy, incremental testing is a natural fit. Testing
can begin as soon as some coherent unit is developed and continues on successively larger units until the complete application is tested. This approach
provides for earlier detection of faults and feedback into development. For projects that do not schedule periodic deliveries of executable units, the big
bang testing strategy, in which the first test is performed on the complete product, is necessary. This strategy often results in costly bottlenecks as small
faults prevent the major system functionality from being exercised.
Who will test. The test plan should clearly assign responsibilities for the various stages of testing to project personnel. The independent tester brings a
fresh perspective to how well the application meets the requirements. Using such a person for the component test requires a long learning curve which
may not be practical in a highly iterative environment. The developer brings a knowledge of the details of the program but also a bias concerning his/her
own work. I favor involving developers in testing as do many others [Beizer], but this only works if there are clear guidelines about what to test and
how. I will discuss one type of guideline below.
What will be tested. The test plan should provide clear objectives for each stage in the testing process. The amount of testing at each stage will be
determined by various factors. For example, the higher the priority of reuse in the project plan, the higher should be the priority of component testing in
the testing strategy. Component testing is a major resource sink, but it can have a tremendous impact on quality; techniques for minimizing the resources
required for component testing and maximizing its benefits are beyond the scope of this section. The remainder of this section discusses
techniques for system testing, such as determining how many test cases to build.
Test coverage
Test coverage in the test plan states what requirements will be verified during what stages of the product life. Test Coverage is derived from design
specifications and other requirements, such as safety standards or regulatory codes, where each requirement or specification of the design ideally will
have one or more corresponding means of verification. Test coverage for different product life stages may overlap, but will not necessarily be exactly
the same for all stages. For example, some requirements may be verified during Design Verification test, but not repeated during Acceptance test. Test
coverage also feeds back into the design process, since the product may have to be designed to allow test access (see Design For Test).
Test methods
Test methods in the test plan state how test coverage will be implemented. Test methods may be determined by standards, regulatory agencies, or
contractual agreement, or may have to be created new. Test methods also specify test equipment to be used in the performance of the tests and establish
pass/fail criteria. Test methods used to verify hardware design requirements can range from very simple steps, such as visual inspection, to elaborate test
procedures that are documented separately.
Test responsibilities
Test responsibilities specify which organizations will perform each test method at each stage of the product life. This allows test organizations to
plan, acquire, or develop the test equipment and other resources necessary to implement the test methods for which they are responsible. Test responsibilities
also include what data will be collected and how that data will be stored and reported (often referred to as "deliverables"). One outcome of a
successful test plan should be a record or report of the verification of all design specifications and requirements as agreed upon by all parties.
Software developers want to keep their database testing as simple as possible. However, it is almost impossible to have a database test that is both simple and
effective, because every test is different, with its own strengths and weaknesses.
Therefore, you should learn the basics of setting up and creating database test to implement what is written in the test. In addition, you may need to know
about two common technical terms related to database testing. These two terms are database sandbox and database testing tools.
The database sandbox is an environment that simulates the database and its scope. You can use a number of sandboxes; however, the one you choose
should be appropriate for your needs and requirements. In each sandbox used, everyone involved in the testing should have his or her own copy of
the database.
Everyone is free to conduct his or her own experiments to implement the right kind of code. It is also possible to reformat the existing functions in the
database and validate those changes quickly during testing. Once you follow these steps, you can then start working on another phase of the project,
called the integration sandbox.
Here, you will need to rebuild the system you have created and then run a test to confirm that all the code is in working condition. In case of any errors in
the project integration sandbox, you may need to start working on it once again.
Developing a formidable database may take a long time, before you can introduce it to an outside environment. Testing the database codes in this sandbox
will provide plenty of benefits to software developers, because it enables them to remove common mistakes or errors that can adversely affect future users.
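One lightweight way to give each person an isolated copy of the database, in the spirit of the sandbox described above, is to clone the shared database at the start of each experiment. This sketch assumes SQLite and uses Python's built-in backup API; the app.db path is a placeholder.

```python
import sqlite3

def make_sandbox(source_path: str) -> sqlite3.Connection:
    """Return a private in-memory copy of the shared database, so each
    tester (or each test run) can experiment without affecting anyone else."""
    source = sqlite3.connect(source_path)
    sandbox = sqlite3.connect(":memory:")
    source.backup(sandbox)  # copy schema and data into the sandbox
    source.close()
    return sandbox

# Each call yields an independent copy; changes made in one sandbox are
# invisible to the others.
# sandbox_a = make_sandbox("app.db")
# sandbox_b = make_sandbox("app.db")
```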
You will need several tools to build an effective database test. Database tools help you to run a proper database test. You can use two different types of
database tools for both the internal database test and interface database test.
The tools that you use to build database tests should support the language chosen to create the software. For example, if you are developing database tests
for Microsoft SQL Server, then your procedures should be tested using a T-SQL testing framework.
On the other hand, if you are using Oracle to create the database, then you should use a PL/SQL testing framework. The next step is choosing a database tool that
helps you check whether the test data is under configuration management control or not.
The database test is one of the true and effective tests that help you establish the relevancy of data and related information. However, not many software
developers use database testing procedures, because database testing is still a relatively new concept. The challenges that database testing faces today include
insufficient knowledge of its benefits, a lack of testing skills, and a lack of tools to conduct database testing.
Because of these unique challenges, database testing has a long way to go before it is widely adopted. Software developers should get started now,
so that they can avoid database-related challenges in the future. Database testing offers many potential benefits to organizations; however, it
still needs a lot of improvement and enhancement before it is truly accepted everywhere.
Test Schedule
A test plan should estimate how long it will take to complete the testing phase. There are many requirements to complete testing phases. First,
testers have to execute all test cases at least once. Furthermore, if a defect is found, the developers will need to fix the problem. The testers should then re-
test the failed test case until it functions correctly. Last but not least, the testers need to conduct regression testing toward the end of the cycle to
make sure the developers did not accidentally break parts of the software while fixing another part. Such breakage can occur in test cases that were previously
functioning properly.
The test schedule should also document the number of testers available for testing. If possible, assign test cases to each tester.
It is often difficult to make an accurate estimate of the test schedule since the testing phase involves many uncertainties. Planners should take into account the
extra time needed to accommodate contingent issues. One way to make this approximation is to look at the time needed by the previous releases of the
software. If the software is new, multiplying the initial testing schedule approximation by two is a good way to start.
A test script in software testing is a set of instructions that will be performed on the system under test to test that the system functions as expected.
The major advantage of automated testing is that tests may be executed continuously without the need for human intervention. Another advantage over
manual testing is that automated testing is faster and easily repeatable. Thus, it is worth considering automating tests if they are to be executed several times, for example as
part of regression testing.
One disadvantage of automated testing is that automated tests can, like any piece of software, be poorly written or simply break during playback. They can also
only examine what they have been programmed to examine. Since most systems are designed with human interaction in mind, it is good practice that a
human tests the system at some point. A trained manual tester can notice that the system under test is misbehaving without being prompted or directed,
whereas automated tests can only examine what they have been programmed to examine. Therefore, when used in regression testing, manual testers can find
new bugs while ensuring that old bugs do not reappear, while an automated test can only ensure the latter. This is why mixing automated and
manual testing can give very good results: automate what needs to be tested often and can easily be checked by a machine, and use manual testing for
test design (adding new cases to the automated test suite) and for exploratory testing.
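In that spirit, a test script can be as simple as a table of steps and expected results that a machine replays on every build. The calculate_discount function below is a hypothetical system under test, invented for the example.

```python
# Hypothetical system under test.
def calculate_discount(order_total: float) -> float:
    return order_total * 0.10 if order_total >= 100 else 0.0

# The "script": a sequence of (input, expected output) steps that can be
# replayed unattended -- the essence of an automated regression test.
script = [
    (100.0, 10.0),
    (250.0, 25.0),
    (99.99, 0.0),
]

for order_total, expected in script:
    actual = calculate_discount(order_total)
    assert abs(actual - expected) < 1e-9, f"step {order_total} gave {actual}, expected {expected}"
print("all scripted steps passed")
```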
Perform database testing
You can check the following items in database testing:
1. Data integrity
The complete data belonging to each entity should be stored in the database. Depending on the database design, the data may be present in a single table or
multiple related tables. Parent-child relationships should exist in the data. There should not be any missing data.
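Integrity checks like these can often be expressed directly as SQL. Here is a hedged sketch in Python/SQLite; the customers/orders parent-child schema is an assumption made for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Alice');
INSERT INTO orders VALUES (10, 1, 99.5);
""")

# Check 1: no orphaned child rows -- every order must have a parent customer.
orphans = conn.execute("""
    SELECT o.id FROM orders o
    LEFT JOIN customers c ON o.customer_id = c.id
    WHERE c.id IS NULL
""").fetchall()
assert orphans == [], f"orphaned orders found: {orphans}"

# Check 2: no missing (NULL) data in required fields.
missing = conn.execute("SELECT COUNT(*) FROM orders WHERE total IS NULL").fetchone()[0]
assert missing == 0, "orders with missing totals found"
```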
2. Data validity
The data should be present in the correct table and in the correct field within the table.
3. Correctness and completeness of data migration (in case some or all the original data has come from another source)
4. Functionality and performance of user objects e.g. functions, procedures, triggers, jobs
You may perform API testing on the user objects to test them.
5. Database performance (query execution times, throughput etc.) and locking problems
You may identify the main queries (or procedures) used in the application and time them with sample data. Locking problems may become apparent
when multiple inserts/updates are being made to the same entity simultaneously.
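Timing a main query against sample data takes only a few lines; the events table and the one-second budget below are placeholders for whatever the application actually runs and whatever its requirements demand.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT)")
conn.executemany("INSERT INTO events (kind) VALUES (?)",
                 [("click" if i % 2 else "view",) for i in range(100_000)])

start = time.perf_counter()
count = conn.execute("SELECT COUNT(*) FROM events WHERE kind = 'click'").fetchone()[0]
elapsed = time.perf_counter() - start

print(f"query matched {count} rows in {elapsed * 1000:.1f} ms")
# A performance test would assert the time stays under the agreed budget:
assert elapsed < 1.0, "main query exceeded its time budget"
```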
6. Data security
You may check whether any data that should be encrypted (e.g., passwords, credit card numbers) is stored in plain text. The database should not have default
passwords. Even application accounts should have passwords that are complex and not easily guessed.
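One way to automate the plain-text check is to scan the stored credential column and assert that every value looks like a hash rather than a raw password. The users table and the bcrypt-style '$2b$' prefix below are assumptions made for the sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES (1, '$2b$12$abcdefghijklmnopqrstuv')")

# Every stored credential should look like a bcrypt hash, never plain text.
for user_id, value in conn.execute("SELECT id, password_hash FROM users"):
    assert value.startswith("$2b$"), f"user {user_id} appears to store a plain-text password"
```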
In addition, if you have access to the design documentation, you may check:
1. Database fields (whether they meet the specifications, e.g., width, data type, etc., as given in the design documentation/data dictionary; see the sketch after this list)
2. Normalization level
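Checks against the data dictionary can also be automated by reading the schema metadata. In SQLite that is PRAGMA table_info (other engines expose similar catalogs, such as INFORMATION_SCHEMA); the employees schema and the expected types are assumptions made for the sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# Expected column types, as they might appear in a (hypothetical) data dictionary.
expected = {"id": "INTEGER", "name": "TEXT", "salary": "REAL"}

# PRAGMA table_info returns (cid, name, type, notnull, default, pk) per column.
actual = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(employees)")}
assert actual == expected, f"schema drift detected: {actual}"
```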
In order to perform database testing, you should be good at using the database tools and writing SQL queries.
In engineering and its various sub disciplines, acceptance testing is a test conducted to determine if the requirements of a specification or
contract are met. It may involve chemical tests, physical tests, or performance tests.
In systems engineering it may involve black-box testing performed on a system (for example: a piece of software, lots of manufactured
mechanical parts, or batches of chemical products) prior to its delivery.
Software developers often distinguish acceptance testing by the system provider from acceptance testing by the customer (the user or client)
prior to accepting transfer of ownership. In the case of software, acceptance testing performed by the customer is known as user acceptance
testing (UAT), end-user testing, site (acceptance) testing, or field (acceptance) testing.
A smoke test is used as an acceptance test prior to introducing a build to the main testing process.
In software project management, software testing, and software engineering, verification and validation (V&V) is the process of checking
that a software system meets specifications and that it fulfills its intended purpose. It may also be referred to as software quality control. It is
normally the responsibility of software testers as part of the software development lifecycle.
Definitions
Validation checks that the product design satisfies or fits the intended use (high-level checking), i.e., that the software meets the user
requirements; this is done through dynamic testing and other forms of review. Verification, by contrast, checks that the product has been built
according to the requirements and design specifications (low-level checking).
Verification and validation are not the same thing, although they are often confused. Boehm succinctly expressed the difference between
them:
Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the
conditions imposed at the start of that phase.
Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies
specified requirements. [IEEE-STD-610]
In other words, validation ensures that the product actually meets the user's needs, and that the specifications were correct in the first place,
while verification is ensuring that the product has been built according to the requirements and design specifications. Validation ensures that
"you built the right thing". Verification ensures that "you built it right". Validation confirms that the product, as provided, will fulfill its
intended use.
Within the modeling and simulation community, the definitions of validation, verification and accreditation are similar:
Validation is the process of determining the degree to which a model, simulation, or federation of models and simulations, and their
associated data are accurate representations of the real world from the perspective of the intended use(s).
Accreditation is the formal certification that a model or simulation is acceptable to be used for a specific purpose.
Verification is the process of determining that a computer model, simulation, or federation of models and simulations, and their
associated data, accurately represent the developer's conceptual description and specifications.
Walkthrough: A walkthrough is a method of conducting an informal group or individual review. In a walkthrough, a designer or programmer leads
members of the development team and other interested parties through a software product, and the participants ask questions and make
comments about possible errors, violations of development standards, and other problems, or may suggest improvements.
Walkthroughs can be pre-planned or conducted on an as-needed basis, and generally the people working on the work product are involved in the
walkthrough process.
The purpose of a walkthrough is to:
Find problems
Discuss alternative solutions
Focus on demonstrating how the work product meets all requirements.
A walkthrough typically involves the following roles:
Leader: conducts the walkthrough, handles administrative tasks, and ensures orderly conduct (and is often the author).
Recorder: notes all anomalies (potential defects), decisions, and action items identified during the walkthrough meeting, and normally
produces the minutes of the meeting at the end of the walkthrough session.
Author: presents the software product step by step at the walkthrough meeting, and is probably responsible for completing
most action items.
Walkthrough Process: The author describes the artifact to be reviewed to the reviewers during the meeting. Reviewers present comments, possible
defects, and improvement suggestions to the author. The recorder records all defects and suggestions during the walkthrough meeting. Based on the reviewer
comments, the author performs any necessary rework of the work product. The recorder prepares the minutes of the meeting and sends them to the
relevant stakeholders. The leader monitors the overall walkthrough activities as per the defined company process, including follow-up
on commitments against action items.
Inspection: An inspection is a formal, rigorous, in-depth group review designed to identify problems as close to their point of origin as
possible. Inspection is a recognized industry best practice for improving the quality of a product and improving productivity. Inspections are a
formal review, and the need for them is generally predefined at the start of product planning. The objectives of the inspection process are to:
Find problems at the earliest possible point in the software development process
Verify that the work product meets its requirement
Ensure that work product has been presented according to predefined standards
Provide data on product quality and process effectiveness
Inspections also build technical knowledge and skill among team members by having them review the output of other people, and they
increase the effectiveness of software testing.
Inspection Leader: The inspection leader is responsible for administrative tasks pertaining to the inspection and for planning and preparation;
he or she ensures that the inspection is conducted in an orderly manner and meets its objectives, and should be responsible
for collecting the inspection data.
Inspection vs. Walkthrough
Inspection: Formal. Initiated by the project team. A planned meeting with fixed roles assigned to all the members involved. A reader reads the product code; everyone inspects it and comes up with defects. A recorder records the defects. A moderator makes sure that the discussions proceed along productive lines.
Walkthrough: Informal. Initiated by the author. Unplanned. The author reads the product code, and his or her teammates come up with defects or suggestions. The author makes a note of the defects and suggestions offered by teammates. Informal, so there is no moderator.
Self-Check 1