Chapter 5 of Software Engineering

Unit 5 Software Testing covers the fundamentals and principles of software testing, emphasizing the importance of designing effective test cases to uncover errors through both white-box and black-box testing techniques. It outlines the characteristics and principles of good testing, including verification and validation processes, as well as the various types of testing such as functional, non-functional, and regression testing. The document also discusses the differences between black-box and white-box testing, detailing their methodologies, advantages, and disadvantages.

Unit 5 Software Testing

5.1 Testing Fundamentals and Principles

What is testing?

Once source code has been generated, software must be tested to uncover (and correct) as
many errors as possible before delivery to your customer. Your goal is to design a series of
test cases that have a high likelihood of finding errors.

These techniques provide systematic guidance for designing tests that (1) exercise the
internal logic and interfaces of every software component and (2) exercise the input and
output domains of the program to uncover errors in program function, behaviour, and
performance.

Software is tested from two different perspectives: (1) internal program logic is exercised
using “white box” test-case design techniques, and (2) software requirements are exercised
using “black box” test-case design techniques. Use cases assist in the design of tests to
uncover errors at the software validation level. In every case, the intent is to find the
maximum number of errors with the minimum amount of effort and time.

Software testability is simply a measure of how easily a computer program can be tested.

A strategy for software testing provides a road map that describes the steps to be
conducted as part of testing, when these steps are planned and then undertaken, and how
much effort, time, and resources will be required. Therefore, any testing strategy must
incorporate test planning, test-case design, test execution, and resultant data collection and
evaluation. A software testing strategy should be flexible enough to promote a customized
testing approach.

Software testing can be divided into two steps:


1. Verification: It refers to the set of tasks that ensure that software correctly implements a
specific function.

2. Validation: It refers to a different set of tasks that ensure that the software that has been
built is traceable to customer requirements.

Verification: “Are we building the product right?”


Validation: “Are we building the right product?”

Test characteristics:

 A good test has a high probability of finding an error. To achieve this goal, the
tester must understand the software and attempt to develop a mental picture of
how the software might fail.
 A good test is not redundant. Testing time and resources are limited. There is no
point in conducting a test that has the same purpose as another test. Every test
should have a different purpose (even if it is subtly different).
 A good test should be “best of breed”. In a group of tests that have a similar intent,
time and resource limitations may dictate the execution of only those tests that have
the highest likelihood of uncovering a whole class of errors.
 A good test should be neither too simple nor too complex. Although it is sometimes
possible to combine a series of tests into one test case, the possible side effects
associated with this approach may mask errors. In general, each test should be
executed separately.

Testing principles:

There are seven principles in software testing:


1. Testing shows presence of defects
2. Exhaustive testing is not possible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context dependent
7. Absence of errors fallacy
 Testing shows presence of defects: The goal of software testing is to make the
software fail. Software testing can show that defects are present, but it cannot prove
that the software is defect-free. Even extensive testing can never ensure that software
is 100% bug-free; testing can reduce the number of defects, but it cannot remove all
of them.
 Exhaustive testing is not possible: Testing the functionality of software with all
possible inputs (valid or invalid) and pre-conditions is known as exhaustive testing.
Exhaustive testing is impossible: software can never be tested against every possible
case. Only some test cases can be run, on the assumption that if they pass, the
software will produce correct output for the remaining cases as well. Testing every
possible case would require impractical amounts of cost and effort.
 Early Testing: Test activities should start as early as possible, because a defect
detected in the early phases of the SDLC is far less expensive to fix. For better
software quality, testing should begin at the initial phase, i.e. at the requirement
analysis phase.
 Defect clustering: In a project, a small number of modules can contain most of the
defects. The Pareto Principle applied to software testing states that 80% of software
defects come from 20% of modules.
 Pesticide paradox: Repeating the same test cases again and again will not find new
bugs. So it is necessary to review the test cases and add or update test cases to find
new bugs.
 Testing is context dependent: Testing approach depends on context of software
developed. Different types of software need to perform different types of testing. For
example, The testing of the e-commerce site is different from the testing of the
Android application.
 Absence of errors fallacy: If a piece of software is 99% bug-free but does not meet
the user requirements, it is unusable. It is not enough for software to be 99% bug-
free; it must also fulfill all the customer requirements.

5.2 Types of Testing

Any engineered product (and most other things) can be tested in one of two ways:

(1) Knowing the specified function that a product has been designed to perform, tests can
be conducted that demonstrate each function is fully operational while at the same time
searching for errors in each function.

(2) Knowing the internal workings of a product, tests can be conducted to ensure that all
functions work properly; that is, internal operations are performed according to
specifications and all internal components have been adequately exercised. The first test
approach takes an external view and is called black-box testing. The second requires an
internal view and is termed white-box testing.

5.2.1 Black box and white box testing

Black Box Testing

Black Box Testing is a software testing method in which the functionalities of software
applications are tested without having knowledge of internal code structure,
implementation details and internal paths. Black Box Testing mainly focuses on input and
output of software applications and it is entirely based on software requirements and
specifications. It is also known as Behavioral Testing.

Black-box testing attempts to find errors in the following categories:

(1) Incorrect or missing functions,

(2) Interface errors,

(3) Errors in data structures or external database access,

(4) Behavior or performance errors,

(5) Initialization and termination errors.


The black box can be any software system you want to test: for example, an
operating system like Windows, a website like Google, a database like Oracle, or even your
own custom application. Under Black Box Testing, you can test these applications by just
focusing on the inputs and outputs without knowing their internal code implementation.

How to do Black Box Testing

Here are the generic steps followed to carry out any type of Black Box Testing:
 Initially, the requirements and specifications of the system are examined.
 Tester chooses valid inputs (positive test scenario) to check whether software
processes them correctly. Also, some invalid inputs (negative test scenario) are
chosen to verify that the software is able to detect them.
 Tester determines expected outputs for all those inputs.
 Software tester constructs test cases with the selected inputs.
 The test cases are executed.
 Software tester compares the actual outputs with the expected outputs.
 Defects if any are fixed and re-tested.
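As an illustration, the steps above can be sketched against a hypothetical system under test. The `withdraw` function and its rules below are assumptions made for this example, not part of the original text:

```python
# Hypothetical system under test (function and rules are assumed for
# illustration): an ATM-style withdrawal with a simple spec.
def withdraw(balance, amount):
    """Return the new balance, or raise ValueError for an invalid request."""
    if amount <= 0 or amount % 10 != 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

def run_black_box_tests():
    """Exercise the spec from the outside only: valid and invalid inputs."""
    results = {}
    # Positive scenario: a valid input is processed correctly.
    results["valid input"] = (withdraw(100, 30) == 70)
    # Negative scenario: an invalid input is detected and rejected.
    try:
        withdraw(100, 33)
        results["invalid rejected"] = False
    except ValueError:
        results["invalid rejected"] = True
    return results
```

Note that the tests compare actual outputs against expected outputs derived purely from the specification, never from the internals.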

Types of Black Box Testing

There are many types of Black Box Testing but the following are the prominent ones -

Functional testing - This black box testing type is related to the functional requirements of a
system; it is done by software testers.

Non-functional testing - This type of black box testing is not related to testing of specific
functionality, but non-functional requirements such as performance, scalability, usability.

Regression testing - Regression Testing is done after code fixes, upgrades or any other
system maintenance to check the new code has not affected the existing code.

Black Box Testing Techniques

Following are the prominent test strategies amongst the many used in Black Box Testing:

Equivalence Class Testing: It is used to minimize the number of possible test cases to an
optimum level while maintaining reasonable test coverage.
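A minimal sketch of equivalence partitioning, with an assumed ticket-pricing rule (the function and its classes are not from the original text):

```python
# Equivalence class sketch (the pricing rules are assumed for illustration).
# The "age" input domain splits into classes the software treats alike.
def ticket_price(age):
    if age < 0 or age > 120:
        raise ValueError("invalid age")   # invalid equivalence class
    if age < 13:
        return 5                          # child class
    if age < 65:
        return 10                         # adult class
    return 7                              # senior class

# One representative per class stands in for every value in that class,
# so three tests give reasonable coverage of the whole valid domain.
representatives = {"child": 8, "adult": 30, "senior": 70}
prices = {name: ticket_price(age) for name, age in representatives.items()}
```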
Boundary Value Testing: Boundary value testing is focused on the values at boundaries. This
technique determines whether a certain range of values are acceptable by the system or
not. It is very useful in reducing the number of test cases. It is most suitable for the systems
where an input is within certain ranges.
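For instance, boundary value testing of an assumed valid range of 1..100 picks values just below, at, and just above each boundary (the range itself is an assumption for this sketch):

```python
# Boundary value sketch for an assumed valid input range of 1..100.
def in_range(x):
    return 1 <= x <= 100

# Values just below, at, and just above each boundary:
boundary_cases = [0, 1, 2, 99, 100, 101]
outcomes = [in_range(x) for x in boundary_cases]
```

Six targeted cases replace a hundred exhaustive ones, which is why the technique is useful in reducing the number of test cases.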

Decision Table Testing: A decision table puts causes and their effects in a matrix. There is a
unique combination in each column.
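A decision table can be sketched directly as a mapping, where each key plays the role of one column (the login rule here is an assumed example, not from the text):

```python
# Decision table sketch for an assumed login rule: each dictionary key is
# one column of the table, i.e. a unique combination of condition values.
decision_table = {
    # (valid_user, valid_password): resulting action
    (True,  True):  "grant access",
    (True,  False): "show password error",
    (False, True):  "show user error",
    (False, False): "show user error",
}

def login_action(valid_user, valid_password):
    return decision_table[(valid_user, valid_password)]
```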

White-box Testing

White-box testing is sometimes called glass-box testing or structural testing.

Using white-box testing methods, you can derive test cases that

(1) guarantee that all independent paths within a module have been exercised at
least once,

(2) exercise all logical decisions on their true and false sides,

(3) execute all loops at their boundaries and within their operational bounds, and

(4) exercise internal data structures to ensure their validity.

White box testing techniques analyze the internal structures: the data structures used,
internal design, code structure, and the working of the software, rather than just the
functionality as in black box testing. It is also called glass box testing, clear box testing, or
structural testing.
White box Testing techniques:
 Statement coverage: In this technique, the aim is to traverse all statements at least
once; hence, each line of code is tested. In the case of a flowchart, every node must be
traversed at least once. Since all lines of code are covered, this helps in pointing out
faulty code.
Statement Coverage Example
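Since the original example figure is not reproduced here, a minimal illustrative sketch (the code is assumed, not from the text):

```python
# Statement coverage sketch (illustrative code, not from the text).
def classify(x):
    result = "non-positive"
    if x > 0:
        result = "positive"
    return result

# A single test with x > 0 already executes every statement once, giving
# 100% statement coverage -- even though the x <= 0 outcome is untested.
# This is why statement coverage is a weaker criterion than branch coverage.
```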

 Branch Coverage: In this technique, test cases are designed so that each branch from
every decision point is traversed at least once. In a flowchart, all edges must be
traversed at least once. In the accompanying example, 4 test cases are required so
that all branches of all decisions, i.e. all edges of the flowchart, are covered.
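A minimal branch coverage sketch, using assumed code with two decision points (four branches in total):

```python
# Branch coverage sketch (assumed code): two decision points, four branches.
def grade(score, extra_credit):
    total = score
    if extra_credit:        # decision 1: true and false branches
        total += 5
    if total >= 50:         # decision 2: true and false branches
        return "pass"
    return "fail"

# Four test cases cover both sides of both decisions, i.e. all edges of
# the corresponding flowchart:
branch_cases = [(50, True), (50, False), (40, True), (40, False)]
outcomes = [grade(s, e) for s, e in branch_cases]
```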

 Condition Coverage: In this technique, all individual conditions must be covered, as
shown in the following example:

INPUT X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’

In this example, there are 2 conditions: X == 0 and Y == 0. Tests are designed so that
each condition takes both TRUE and FALSE values. One possible example would be:
 #TC1 – X = 0, Y = 55
 #TC2 – X = 5, Y = 0
Multiple Condition Coverage: In this technique, all the possible combinations of
the possible outcomes of conditions are tested at least once. Let’s consider the
following example:
INPUT X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
 #TC1: X = 0, Y = 0
 #TC2: X = 0, Y = 5
 #TC3: X = 55, Y = 0
 #TC4: X = 55, Y = 5
Hence, four test cases are required for two individual conditions.
Similarly, if there are n conditions, then 2^n test cases would be required.
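The 2^n growth can be made concrete by enumerating the combinations programmatically (a sketch, with n = 2 matching the X == 0 and Y == 0 example above):

```python
from itertools import product

# Multiple condition coverage sketch: enumerate all 2^n combinations of
# truth values for n individual conditions.
def condition_combinations(n):
    return list(product([True, False], repeat=n))
```

The exponential growth in combinations is one concrete reason why exhaustive testing is impractical for code with many conditions.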

Basis Path Testing: In this technique, control flow graphs are made from code or
flowchart and then Cyclomatic complexity is calculated which defines the number of
independent paths so that the minimal number of test cases can be designed for each
independent path.
Steps:
1. Make the corresponding control flow graph
2. Calculate the cyclomatic complexity
3. Find the independent paths
4. Design test cases corresponding to each independent path

Flow graph notation: It is a directed graph consisting of nodes and edges. Each node
represents a sequence of statements, or a decision point. A predicate node is the one
that represents a decision point that contains a condition after which the graph splits.
Regions are bounded by nodes and edges.

Cyclomatic Complexity: It is a measure of the logical complexity of the software and is


used to define the number of independent paths. For a graph G, V(G) is its cyclomatic
complexity.
Calculating V(G):
1. V(G) = P + 1, where P is the number of predicate nodes in the flow graph
2. V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
3. V(G) = Number of non-overlapping regions in the graph
Example:

V(G) = 4 (Using any of the above formulae)


No of independent paths = 4
 #P1: 1 – 2 – 4 – 7 – 8
 #P2: 1 – 2 – 3 – 5 – 7 – 8
 #P3: 1 – 2 – 3 – 6 – 7 – 8
 #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
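The E – N + 2 formula can be checked in a few lines. The edge list below is reconstructed from the independent paths listed above, which is an assumption about the flow graph whose figure is not reproduced here:

```python
# Cyclomatic complexity sketch. Edges are reconstructed from the listed
# independent paths P1..P4 (an assumption about the original figure).
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (3, 6),
         (4, 7), (5, 7), (6, 7), (7, 1), (7, 8)]
nodes = {n for edge in edges for n in edge}

# V(G) = E - N + 2
v_g = len(edges) - len(nodes) + 2
```

With 10 edges and 8 nodes this gives V(G) = 4, matching the number of independent paths listed.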

 Loop Testing: Loops are widely used and these are fundamental to many algorithms
hence, their testing is very important. Errors often occur at the beginnings and ends of
loops.
1. Simple loops: For simple loops of size n, test cases are designed that:
 Skip the loop entirely
 Only one pass through the loop
 2 passes
 m passes, where m < n
 n-1 and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum count and we
start from the innermost loop. Simple loop tests are conducted for the innermost
loop and this is worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests are
applied for each.
If they’re not independent, treat them like nesting.
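The simple-loop checklist can be sketched against an assumed example function (not from the text) whose loop runs a controllable number of times:

```python
# Simple-loop test sketch (the summing function is an assumed example).
def sum_first(values, k):
    """Sum at most the first k values; the loop body runs min(k, n) times."""
    total = 0
    for i in range(min(k, len(values))):
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]                 # loop size n = 5
# Pass counts per the checklist: skip, one pass, two, m < n, n-1, n, n+1
pass_counts = [0, 1, 2, 3, 4, 5, 6]
totals = [sum_first(data, k) for k in pass_counts]
```

The n+1 case is the one most likely to expose an off-by-one or missing-bounds error at the end of the loop.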

Advantages:

1. White box testing is very thorough as the entire code and structures are tested.
2. It results in the optimization of code, removing errors and helping to remove extra lines
of code.
3. It can start at an earlier stage as it doesn’t require any interface as in case of black box
testing.
4. Easy to automate.

Disadvantages:

1. The main disadvantage is that it is very expensive.
2. Redesigning or rewriting code requires test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and the programming
language, as opposed to black box testing.
4. Missing functionalities cannot be detected, as only the code that exists is tested.
5. It is very complex and at times not realistic.

Differences between Black Box Testing vs White Box Testing:

 Black box: It is a way of software testing in which the internal structure, program, or
code is hidden and nothing is known about it. White box: The tester has knowledge
about the internal structure, code, or program of the software.
 Black box: It is mostly done by software testers. White box: It is mostly done by
software developers.
 Black box: No knowledge of implementation is needed. White box: Knowledge of
implementation is required.
 Black box: It can be referred to as outer or external software testing. White box: It is
the inner or internal software testing.
 Black box: It is a functional test of the software. White box: It is a structural test of
the software.
 Black box: This testing can be initiated on the basis of the requirement specification
document. White box: This testing is started after the detailed design document.
 Black box: No knowledge of programming is required. White box: Knowledge of
programming is mandatory.
 Black box: It is the behavior testing of the software. White box: It is the logic testing
of the software.
 Black box: It is applicable to the higher levels of software testing. White box: It is
generally applicable to the lower levels of software testing.
 Black box: It is also called closed testing. White box: It is also called clear box testing.
 Black box: It is the least time consuming. White box: It is the most time consuming.
 Black box: It is not suitable or preferred for algorithm testing. White box: It is
suitable for algorithm testing.
 Black box: It can be done by trial-and-error methods. White box: Data domains along
with inner or internal boundaries can be better tested.
 Black box example: searching something on Google using keywords. White box
example: providing input to check and verify loops.

Types of Black Box Testing:
 A. Functional Testing
 B. Non-functional Testing
 C. Regression Testing

Types of White Box Testing:
 A. Path Testing
 B. Loop Testing
 C. Condition Testing

What are different levels of software testing?


Software testing can be broadly classified into 4 levels:
1. Unit Testing: A level of the software testing process where individual units/components
of a software/system are tested. The purpose is to validate that each unit of the software
performs as designed.
2. Integration Testing: A level of the software testing process where individual units are
combined and tested as a group. The purpose of this level of testing is to expose faults in
the interaction between integrated units.
3. System Testing: A level of the software testing process where a complete, integrated
system/software is tested. The purpose of this test is to evaluate the system’s compliance
with the specified requirements.
4. Acceptance Testing: A level of the software testing process where a system is tested for
acceptability. The purpose of this test is to evaluate the system’s compliance with the
business requirements and assess whether it is acceptable for delivery.
5.2.2 Unit Testing

Unit testing focuses verification effort on the smallest unit of software design—
the software component or module. Using the component-level design description as a
guide, important control paths are tested to uncover errors within the boundary of the
module. The relative complexity of tests and the errors those tests uncover is limited by the
constrained scope established for unit testing. The unit test focuses on the internal
processing logic and data structures within the boundaries of a component. This type of
testing can be conducted in parallel for multiple components.

How unit testing is performed?

 First, the module interface is tested to ensure that information properly flows into
and out of the program unit under test.
 Local data structures are examined to ensure that data stored temporarily maintains
its integrity during all steps in an algorithm’s execution.
 All independent paths through the control structure are tested to ensure that all
statements in a module have been executed at least once.
 Boundary conditions are tested to ensure that the module operates properly at
boundaries established to limit or restrict processing.
 And finally, all error-handling paths are tested. Data flow across a component
interface is tested before any other testing is initiated. If data do not enter and exit
properly, all other tests are doubtful. In addition, local data structures should be
tested and the local impact on global data should be ascertained (if possible) during
unit testing.
 Selecting execution paths for testing is an essential task during the unit test. Test
cases should be designed to uncover errors due to erroneous computations, incorrect
comparisons, or improper control flow.
 Boundary testing is one of the most important unit testing tasks. Software often fails
at its boundaries. For example, while processing an array of 10 elements, we should
check the values of the 9th and 10th elements.
 Where a maximum or minimum allowable value exists, test cases should check the
data structure, control flow, and data values just below, at, and just above the
maximum and minimum values.
 A good design anticipates error conditions and establishes error-handling paths to
reroute or cleanly terminate processing when an error does occur. If error-handling
paths are implemented, they must be tested.
 The design of unit tests can occur before coding begins or after source code has
been generated. A review of design information provides guidance for establishing
test cases that are likely to uncover errors.
 Because a component is not a stand-alone program, driver and/or stub software
must often be developed for each unit test. In most applications a driver is nothing
more than a “main program” that accepts test-case data, passes such data to the
component (to be tested), and prints relevant results. Stubs serve to replace
modules that are subordinate (invoked by) the component to be tested. A stub or
“dummy subprogram” uses the subordinate module’s interface, may do minimal
data manipulation, prints verification of entry, and returns control to the module
undergoing testing.
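The driver/stub arrangement can be sketched as follows. All names and behavior here are assumptions for illustration; the component computes an order total via a subordinate tax module that does not exist yet, so a stub stands in for it:

```python
# Driver/stub sketch (all names and behavior are assumed for illustration).
# Component under test: computes an order total in integer cents via a
# subordinate tax-lookup module.
def order_total(subtotal_cents, tax_lookup):
    return subtotal_cents + tax_lookup(subtotal_cents)

calls = []
def tax_stub(subtotal_cents):
    """Stub ('dummy subprogram'): records entry, does minimal work,
    and returns control to the component under test."""
    calls.append(subtotal_cents)      # verification of entry
    return subtotal_cents // 10       # dummy flat 10% tax

def driver():
    """Driver: a minimal 'main program' that feeds test-case data to the
    component and collects the relevant results."""
    return [order_total(s, tax_stub) for s in (10000, 0)]
```

The `calls` list lets the driver confirm that the stub was actually invoked with the expected data, mirroring how a stub "prints verification of entry".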

5.2.3 Integration Testing

Even when all modules have been unit tested, it does not follow that they will work
properly when combined and interfaced. Data can be lost across an interface; one
component can have an inadvertent, adverse effect on another; when units are combined,
they may not produce the desired major function; and global data structures can present
problems.

Integration testing is a systematic technique for constructing the software architecture
while at the same time conducting tests to uncover errors associated with interfacing. The
objective is to take unit-tested components and build the program structure that has been
dictated by design.

Integration testing is the process of testing the interface between two software units or
modules. It focuses on determining the correctness of the interface. The purpose of
integration testing is to expose faults in the interaction between integrated units. Once all
the modules have been unit tested, integration testing is performed.
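A minimal sketch of what such an interface test looks like, using two assumed modules (not from the text) that share a "cents" interface:

```python
# Integration test sketch: two unit-tested modules (assumed examples)
# exercised together through their shared interface.
def parse_amount(text):
    """Module A: parse a currency string such as '12.50' into integer cents."""
    whole, _, frac = text.partition(".")
    return int(whole) * 100 + int(frac.ljust(2, "0")[:2] or "0")

def format_cents(cents):
    """Module B: format integer cents back into a currency string."""
    return f"{cents // 100}.{cents % 100:02d}"

# The integration test targets the interface contract between A and B:
# whatever A produces, B must render consistently.
def round_trip(text):
    return format_cents(parse_amount(text))
```

Each module may pass its own unit tests in isolation; the round trip is what exposes a mismatch in how the two interpret the shared representation.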

Integration test approaches –


There are four types of integration testing approaches. Those approaches are the
following:
1. Big-Bang Integration Testing –

It is the simplest integration testing approach, in which all the modules are combined and
the functionality is verified after the completion of individual module testing. In simple
words, all the modules of the system are simply put together and tested. This approach is
practicable only for very small systems. Once an error is found during integration testing,
it is very difficult to localize, as it may potentially belong to any of the modules being
integrated. Consequently, errors reported during big bang integration testing are very
expensive to fix.

Advantages:
 It is convenient for small systems.

Disadvantages:
 There will be quite a lot of delay because you would have to wait for all the modules to
be integrated.
 High risk critical modules are not isolated and tested on priority since all modules are
tested at once.

2. Bottom-Up Integration Testing –

In bottom-up testing, each module at the lower levels is tested with higher modules until
all modules have been tested. The primary purpose of this integration testing of each
subsystem is to test the interfaces among the various modules making up the subsystem.
This integration testing uses test drivers to drive and pass appropriate data to the lower-
level modules.

Advantages:
 In bottom-up testing, no stubs are required.
 A principal advantage of this integration testing is that several disjoint subsystems can
be tested simultaneously.
Disadvantages:
 Driver modules must be produced.
 Testing becomes complex when the system is made up of a large number of small
subsystems.

3. Top-Down Integration Testing –

In top-down integration testing, testing takes place from top to bottom: high-level
modules are tested first, then low-level modules, and finally the low-level modules are
integrated with the high level to ensure the system is working as intended. Stubs are used
to simulate the behaviour of the lower-level modules that are not yet integrated.
Advantages:
 Modules are debugged separately.
 Few or no drivers are needed.
 It is more stable and accurate at the aggregate level.
Disadvantages:
 Needs many stubs.
 Modules at the lower level are tested inadequately.

4. Mixed Integration Testing –


A mixed integration testing, also called sandwiched integration testing, follows a
combination of the top-down and bottom-up testing approaches. In the top-down
approach, testing can start only after the top-level modules have been coded and unit
tested. In the bottom-up approach, testing can start only after the bottom-level modules
are ready. The sandwich (mixed) approach overcomes these shortcomings of the pure
top-down and bottom-up approaches.

Advantages:
 The mixed approach is useful for very large projects having several sub-projects.
 The sandwich approach overcomes the shortcomings of the top-down and bottom-up
approaches.

Disadvantages:
 Mixed integration testing requires very high cost, because one part follows the top-
down approach while another part follows the bottom-up approach.
 This integration testing cannot be used for smaller systems with huge interdependence
between the different modules.

5.2.4 System Testing

System Testing is a type of software testing that is performed on a complete integrated
system to evaluate the compliance of the system with the corresponding requirements.
In system testing, components that have passed integration testing are taken as input. The
goal of integration testing is to detect any irregularity between the units that are integrated
together, whereas system testing detects defects within both the integrated units and the
whole system. The result of system testing is the observed behavior of a component or a
system when it is tested.
System Testing is carried out on the whole system in the context of either system
requirement specifications or functional requirement specifications or in the context of
both. System testing tests the design and behavior of the system and also the
expectations of the customer. It is performed to test the system beyond the bounds
mentioned in the software requirements specification (SRS).
System Testing is basically performed by a testing team that is independent of the
development team, which helps to test the quality of the system impartially. It includes
both functional and non-functional testing.
System Testing is performed after the integration testing and before the acceptance
testing.
System Testing Process:
System Testing is performed in the following steps:
 Test Environment Setup:
Create a testing environment for better-quality testing.
 Create Test Case:
Generate test cases for the testing process.
 Create Test Data:
Generate the data that is to be tested.
 Execute Test Case:
After the generation of the test cases and the test data, the test cases are executed.
 Defect Reporting:
Defects detected in the system are reported.
 Regression Testing:
It is carried out to test for side effects of the testing process.
 Log Defects:
Defects are logged and fixed in this step.
 Retest:
If the test is not successful then again test is performed.

Types of System Testing:


 Performance Testing:
Performance Testing is a type of software testing that is carried out to test the speed,
scalability, stability and reliability of the software product or application.
 Load Testing:
Load Testing is a type of software testing which is carried out to determine the
behavior of a system or software product under extreme load.
 Stress Testing:
Stress Testing is a type of software testing performed to check the robustness of the
system under varying loads.
 Scalability Testing:
Scalability Testing is a type of software testing which is carried out to check the
performance of a software application or system in terms of its capability to scale the
user request load up or down.
