Software Testing Full
Functional Testing
The Purpose of Testing
Productivity is measured by the sum of the costs of the material, the rework, and the
discarded components, and the cost of quality assurance and testing.
The biggest part of software cost is the cost of bugs: the cost of detecting them, the
cost of correcting them, the cost of designing tests that discover them, and the cost
of running those tests.
Phases in testing
Testable code has fewer bugs than code that is hard to test. Identifying the right testing
techniques for the code under test is the key activity here.
Test Design:
The software code must be designed and tested, but many appear to be unaware that
tests themselves must be designed and tested. Tests should be properly designed and
tested before they are applied to the actual code.
There are approaches other than testing to create better software. Methods other than
testing include:
1. Design Style: While designing the software itself, adopting objectives such as
testability, openness and clarity can do much to prevent bugs.
2. Languages: The source language can help reduce certain kinds of bugs, although
programmers may find new kinds of bugs while using new languages.
3. Development Methodologies and Development Environment: The development
methodology and the environment in which it is embedded can prevent
many kinds of bugs.
Dichotomies
Testing versus Debugging: Many people consider testing and debugging to be the same
activity. The purpose of testing is to show that a program has bugs. The purpose of
debugging is to find the error or misconception that led to the program's failure and to
design and implement the program changes that correct the error. Debugging usually
follows testing, but they differ as to goals and methods. The table below shows a few
important differences between testing and debugging.
2. Buyer: Who pays for the system in the hope of profits from providing services
3. Control Bug Dominance: The belief that errors in the control structures (if,
switch, etc.) of programs dominate the bugs.
4. Code / Data Separation: The belief that bugs respect the separation of code and
data.
5. Lingua Salvator Est: The belief that the language's syntax and semantics (e.g.
structured coding, strong typing, etc.) eliminate most bugs.
6. Corrections Abide: The mistaken belief that a corrected bug remains corrected.
9. Angelic Testers: The belief that testers are better at test design than programmers
are at code design.
Tests: Tests are formal procedures. Inputs must be prepared, outcomes predicted,
tests documented, commands executed, and results observed; all of these steps are
subject to error. We do three distinct kinds of testing on a typical software system.
They are:
1. Unit / Component Testing: A Unit is the smallest testable piece of software that
can be compiled, assembled, linked, loaded etc. A unit is usually the work of one
programmer and consists of several hundred or fewer lines of code. Unit Testing is
the testing we do to show that the unit does not satisfy its functional specification or
that its implementation structure does not match the intended design structure. A
Component is an integrated aggregate of one or more units.
Component Testing is the testing we do to show that the component does not
satisfy its functional specification or that its implementation structure does not match
the intended design structure.
Role of Models: The art of testing consists of creating, selecting, exploring, and
revising models. Our ability to go through this process depends on the number of
different models we have at hand and their ability to express a program's behavior.
Importance of bugs: The importance of bugs depends on frequency, correction
cost, installation cost, and consequences.
1. Frequency: How often does that kind of bug occur? Pay more attention to the
more frequent bug types.
2. Correction Cost: What does it cost to correct the bug after it is found? The cost
is the sum of two factors: the cost of discovery and the cost of correction.
These costs go up dramatically the later in the development cycle the bug is
discovered. Correction cost also depends on system size.
4. Consequences: What are the consequences of the bug? Bug consequences can
range from mild to catastrophic.
2 Moderate: Outputs are misleading or redundant. The bug impacts the system's
performance.
3 Annoying: The system's behavior because of the bug is dehumanizing. E.g. Names
are modified.
6 Very Serious: The bug causes the system to do the wrong transactions. Instead of
losing your paycheck, the system credits it to another account or converts deposits
to withdrawals.
7 Extreme: The problems aren't limited to a few users or to a few transaction types;
they are frequent and arbitrary rather than sporadic or limited to unusual cases.
8 Intolerable: Long term unrecoverable corruption of the database occurs and the
corruption is not easily discovered. Serious consideration is given to shutting the
system down.
9 Catastrophic: The decision to shut down is taken out of our hands because the
system fails.
10 Infectious: A bug that corrupts other systems even though it does not fail itself,
or one that impacts the social or physical environment.
A given bug can be put into one or another category depending on its history and
the programmer's state of mind.
3. Long-term Support: Assume that we have a great specification language that
can be used to create unambiguous, complete specifications with
unambiguous, complete tests and consistent test criteria.
Testing Techniques for functional bugs: Most functional test techniques, that is,
those techniques which are based on a behavioral description of software, such as
transaction flow testing, syntax testing, domain testing, logic testing and state testing,
are useful in testing functional bugs.
2. Structural bugs:
Control and sequence bugs include paths left out, unreachable code, improper
nesting of loops, loop-back or loop termination criteria incorrect, missing
process steps, duplicated processing, unnecessary processing, GOTO's, ill-
conceived (not properly planned) switches. Most of the control flow bugs are
easily tested and caught in unit testing.
2. Logic Bugs: Bugs in logic, especially those related to misunderstanding how case
statements and logic operators behave singly and in combination. This category also
includes the evaluation of boolean expressions in deeply nested IF-THEN-ELSE constructs. If
the bugs are part of logical (i.e. boolean) processing not related to control flow, they
are characterized as processing bugs.
5. Data-Flow Bugs and Anomalies: Most initialization bugs are special cases of data
flow anomalies. A data flow anomaly occurs where there is a path along which we
expect to do something unreasonable with data, such as using an uninitialized
variable, attempting to use a variable before it exists, modifying a value and then not
storing or using the result, or initializing a variable twice without an intermediate use.
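A minimal sketch in C (the variables are hypothetical, not from the text) showing two of the data flow anomalies described above: use of an uninitialized variable, and a computed value that is overwritten without ever being used.

    #include <stdio.h>

    int main(void) {
        int x;                 /* declared but never initialized                     */
        int y = x + 1;         /* anomaly: uses the uninitialized variable x         */
        y = 10;                /* anomaly: the previous value of y is overwritten    */
                               /* without an intermediate use                        */
        printf("%d\n", y);
        return 0;
    }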
3. Data bugs: Data bugs include all bugs that arise from the specification of data
objects, their formats, the number of such objects, and their initial values. Data Bugs
are at least as common as bugs in code, but they are often treated as if they did not
exist at all. Software is evolving towards programs in which more and more of the
control and processing functions are stored in tables. Because of this, there is an
increasing awareness that bugs in data should be given the same attention as bugs in code.
Static Data are fixed in form and content. They appear in the source code or
database directly or indirectly, for example a number, a string of characters, or a bit
pattern. Compile-time processing will catch most of the bugs caused by static data.
Information, parameter, and control: Static or dynamic data can serve in one of
three roles, or in combination of roles: as a parameter, for control, or for information.
Content can be an actual bit pattern, character string, or number put into a data
structure. All data bugs result in the corruption or misinterpretation of content.
Structure relates to the size, shape and numbers that describe the data object, that is,
the memory locations used to store the content (e.g. a two-dimensional array).
Attributes relate to the specification meaning, that is, the semantics associated with
the contents of a data object (e.g. an integer, an alphanumeric string, a subroutine).
4. Coding bugs: Coding errors of all kinds can create any of the other kinds of bugs.
Syntax errors are generally not important in the scheme of things if the source
language translator has adequate syntax checking. If a program has many syntax
errors, then we should expect many logic and coding bugs. Documentation bugs
are also considered coding bugs, because misleading documentation can mislead the programmers who rely on it.
1. External Interfaces:
The external interfaces are the means used to communicate with the world. These
include devices, sensors, input terminals, printers, and communication lines. All
external interfaces, human or machine, employ a protocol. The protocol may be wrong
or incorrectly implemented. Other external interface bugs include invalid timing or
sequence assumptions related to external signals, and misunderstanding of external
input or output formats.
4. Operating System Bugs: Program bugs related to the operating system are a
combination of hardware architecture and interface bugs, mostly caused by a
misunderstanding of what the operating system does. Use operating system
interface specialists, and use explicit interface modules for all operating system
calls. This approach may not eliminate the bugs but at least will localize them and
make testing easier.
5. Software Architecture: Software architecture bugs are of the kind called
interactive. Routines can pass unit and integration testing without revealing such
bugs. Many of them depend on load, and their symptoms emerge only when the
system is stressed. Examples of such bugs: assumption that there will be no interrupts,
failure to block or unblock interrupts, assumption that memory and registers were
initialized or not initialized, etc.
6. Control and Sequence Bugs (Systems Level): These bugs include: ignored
timing, assuming that events occur in a specified sequence, working on data before
all the data have arrived from disk, and missing, wrong, redundant or superfluous
process steps.
8. Integration Bugs: Integration bugs are bugs having to do with the integration of,
and the interfaces between, working and tested components. These bugs result
from inconsistencies or incompatibilities between components. Communication
methods such as data structures, call sequences, registers, communication links and
protocols can all give rise to integration bugs.
9. System Bugs: System bugs cover all kinds of bugs that cannot be assigned to
a component or to the simple interactions between components, but result from the
totality of interactions between many components such as programs, data, hardware,
and the operating system.
6. TEST AND TEST DESIGN BUGS: Testers have no immunity to bugs. Tests often
require complicated scenarios and databases. They require code or the equivalent to
execute, and consequently they can have bugs.
Test criteria: If the specification is correct, it is correctly interpreted and
implemented, and a proper test has been designed; but the criterion by which the
software's behavior is judged may be incorrect or impossible. So proper test
criteria have to be designed. The more complicated the criteria, the likelier they are to
have bugs.
Remedies: The remedies of test bugs are:
1. Test Debugging: The first remedy for test bugs is testing and debugging the tests.
Test debugging, when compared to program debugging, is easier because tests, when
properly designed, are simpler than programs.
2. Test Quality Assurance: Programmers have the right to ask how quality in
independent testing is monitored.
3. Test Execution Automation:
Assemblers, loaders, compilers are developed to reduce the incidence of
programming and operation errors. Test execution bugs are virtually eliminated by
various test execution automation tools.
4. Test Design Automation: Just as much of software development has been
automated, much test design can be and has been automated.
Types of Testing
Manual Testing
This type includes the testing of the software manually, i.e. without using any
automated tool or script. In this type the tester takes on the role of an end user
and tests the software to identify any unexpected behavior or bug. There are different
stages of manual testing, such as unit testing, integration testing, system testing and
user acceptance testing. Testers use test plans, test cases or test scenarios to test the
software and to ensure the completeness of testing. Manual testing also includes
exploratory testing, as testers explore the software to identify errors in it.
Automation Testing
Automation testing, which is also known as "Test Automation", is when the tester
writes scripts and uses other software to test the software. Automation testing is
used to re-run, quickly and repeatedly, the test scenarios that were performed
manually. Automation testing is also used to test the application from the load,
performance and stress point of view. It increases test coverage, improves
accuracy, and saves time and money in comparison to manual testing.
How to Automate:
Before mentioning the tools, let us identify the steps that can be used to automate
the testing:
• Execution of scripts
• Creation of result reports
Following are the tools which can be used for Automation testing:
• Selenium
• SilkTest
• TestComplete
• Testing Anywhere
• WinRunner
• LoadRunner
Testing Methods
There are different methods which can be used for Software testing.
The technique of testing without having any knowledge of the interior workings of
the application is Black Box testing. The tester is oblivious to the system architecture
and does not have access to the source code. Typically, when performing a black
box test, a tester will interact with the system's user interface by providing inputs
and examining outputs, without knowing how and where the inputs are worked upon.
White box testing is the detailed investigation of internal logic and structure of the
code. White box testing is also called glass testing or open box testing. In order to
perform white box testing on an application, the tester needs to possess knowledge
of the internal working of the code. The tester needs to have a look inside the source
code and find out which unit/chunk of the code is behaving inappropriately.
Grey Box testing is a technique to test the application with limited knowledge of the
internal workings of an application. Unlike black box testing, where the tester only
tests the application’s user interface, in grey box testing, the tester has access to
design documents and the database. Having this knowledge, the tester is able to
better prepare test data and test scenarios when making the test plan.
Following are the main levels of Software Testing:
• Functional Testing
• Non-Functional Testing
Functional Testing
This is a type of black box testing that is based on the specifications of the software
that is to be tested. The application is tested by providing input, and the results are
then examined to confirm that they conform to the functionality the software was
intended for. There are five steps that are involved when testing an application for
functionality.
Step I - The determination of the functionality that the intended application is meant
to perform.
Step II - The creation of test data based on the specifications of the application.
Step III - The output based on the test data and the specifications of the application.
Step IV - The writing of Test Scenarios and the execution of test cases.
Step V - The comparison of actual and expected results based on the executed test
cases.
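As a minimal illustration of Steps IV and V (the add() function, the input values and the expected value are hypothetical assumptions, not from the text), a functional test executes the application with prepared test data and compares the actual result against the expected result derived from the specification:

    #include <stdio.h>

    int add(int a, int b) { return a + b; }      /* unit under test (assumed)              */

    int main(void) {
        int expected = 5;                        /* expected result from the specification */
        int actual = add(2, 3);                  /* Step IV: execute the test case         */
        printf("%s\n", actual == expected ? "PASS" : "FAIL");   /* Step V: compare results */
        return 0;
    }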
Unit Testing
Unit testing is performed on each unit or block of code as it is developed, usually by
the programmer who wrote it (it is discussed in more detail under white box testing below).
Integration Testing
In Top-Down integration testing, the highest-level modules are tested first and
progressively lower-level modules are tested after that. In a comprehensive software
development environment, bottom-up testing is usually done first, followed by top-
down testing.
System Testing
This is the next level of testing; it tests the system as a whole. Once all the
components are integrated, the application as a whole is tested to see that it meets
the specified quality standards. This type of testing is performed by a specialized
testing team. The application is tested thoroughly to verify that it meets the
functional and technical specifications.
Regression Testing
Regression testing re-tests the application after changes to verify that the changes
made did not affect any other area of the application.
Acceptance Testing
Alpha Testing
This test is the first stage of testing and will be performed amongst the teams
(developer and QA teams). Unit testing, integration testing and system testing when
combined are known as alpha testing. During this phase, the following will be tested
in the application:
• Spelling mistakes
• Broken links
• Cloudy directions
Beta Testing
This test is performed after Alpha testing has been successfully performed. In beta
testing a sample of the intended audience tests the application. Beta testing is also
known as pre-release testing. Beta test versions of software are ideally distributed to
a wide audience on the Web, partly to give the program a "real-world" test and partly
to provide a preview of the next release.
Users will install and run the application and send their feedback to the project team.
Based on this feedback, the project team can fix the problems before releasing the
software to the actual users.
The more issues you fix that solve real user problems, the higher the quality of your
application will be.
Non-Functional Testing
This section is based upon testing the application from its non-functional attributes. Non-
functional testing involves testing the software against requirements which are non-functional
in nature, such as performance, security and user interface. Some of the important
and commonly used non-functional testing types are described below.
Performance Testing
It is mostly used to identify any performance issues rather than to find bugs in the software. There
are different causes which contribute to lowering the performance of software:
• Network delay.
• Client side processing
Performance testing is considered one of the important and mandatory testing types in terms of
the following aspects:
• Capacity
• Stability
It can be divided into different sub-types such as Load testing and Stress testing.
Load Testing
Load testing is the process of testing the behavior of the software by applying maximum load,
in terms of the software accessing and manipulating large input data. It can be done at both
normal and peak load conditions. This type of testing identifies the maximum capacity of the software and its behavior at
peak time. Most of the time, Load testing is performed with the help of automated tools such as
Load Runner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer,
Visual Studio Load Test etc.
Stress Testing
This testing type includes the testing of software behavior under abnormal conditions. Taking
away resources, or applying load beyond the actual load limit, is stress testing. The main intent is
to test the software by applying load to the system and taking away the resources used by the
software, in order to identify the breaking point. This testing can be performed by testing different
scenarios, such as shutdown or restart, or running different processes that consume resources
such as CPU, memory, server, etc.
Usability Testing
This section includes different concepts and definitions of usability testing from a software point
of view. It is a black box technique and is used to identify any errors and improvements in the
software by observing users through their usage and operation.
Usability testing involves testing the graphical user interface (GUI) of the software. This testing
ensures that the GUI conforms to the requirements in terms of color, alignment, size and
other properties. Usability testing also ensures that a good, user-friendly GUI is
designed and is easy to use for the end user.
Security Testing
Security testing involves testing of the software in order to identify any flaws and gaps from a
security and vulnerability point of view.
Following are the main aspects which Security testing should ensure:
• Confidentiality.
• Integrity.
• Authentication.
• Authorization.
• Input checking and validation.
• SQL injection attacks.
Portability Testing
Portability testing includes testing of software with the intent that it should be reusable and can
be moved from one environment to another.
Portability testing can be considered one of the sub-parts of system testing, as this testing type
includes the overall testing of software with respect to its usage over different environments.
Computer hardware, operating systems and browsers are the major focus of portability testing.
Following are the strategies that can be used for portability testing:
• Software should be designed and coded keeping in mind the portability requirements.
• Unit testing has been performed on the associated components.
Path Testing:
• Path Testing is the name given to a family of test techniques based on selecting a set of test
paths through the program.
• If the set of paths is properly chosen, then we have achieved some measure of test
thoroughness.
• For example, pick enough paths to assure that every source statement has been executed at
least once.
• Path testing techniques are the oldest of all structural test techniques.
• Path testing is most applicable to new software for unit testing.
• It is a structural technique. It requires complete knowledge of the program's structure.
• It is most often used by programmers to unit test their own code.
• The effectiveness of path testing rapidly deteriorates as the size of the software aggregate
under test increases.
The Bug Assumption:
• The bug assumption for the path testing strategies is that the program takes a different
path than intended.
• As an example, "GOTO X" where "GOTO Y" had been intended.
• Structured programming languages prevent many of the bugs targeted by path testing.
• For old code in COBOL, ALP, FORTRAN and BASIC, path testing is indispensable.
Control Flow Graphs:
The control flow graph is a graphical representation of a program's control structure. It uses three
kinds of elements, named process blocks, decisions, and junctions. The flow graph is similar to the
earlier flowchart:
(1) Process Blocks
(2) Decisions
(3) Junctions
1. Process Block: A process block is a sequence of program statements uninterrupted by
either decisions or junctions; once a process block is initiated, every statement within it is executed.
2. Decisions: A decision is a program point at which the control flow can diverge, for example
an IF statement or a conditional branch.
3. Junctions:
• A junction is a point in the program where the control flow can merge.
• Examples of junctions are: the target of a jump or skip instruction in ALP, or a label that is the
target of a GOTO.
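The sketch below (a hypothetical routine, not from the text) shows how source statements map onto these flow graph elements; the comments mark the process blocks, the decision and the junction.

    #include <stdio.h>

    int clamp(int x) {
        int y = x * 2;            /* process block 1: sequential statements          */
        if (y > 100) {            /* decision: control flow can diverge here         */
            y = 100;              /* process block 2: executed only on the TRUE path */
        }
        return y;                 /* junction: both outcomes of the decision merge   */
    }                             /* at this point                                   */

    int main(void) {
        printf("%d\n", clamp(75));
        return 0;
    }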
Nested Loop
Testing performed on a nested loop is known as nested loop testing. A nested loop is basically
one loop inside another loop. In nested loops there can be a finite number of loops inside a
loop, and thus a nest is made. The loops may be of any of the three types, i.e. for, while or
do-while.
1. Set all the other loops to their minimum values and start at the innermost loop.
2. For the innermost loop, perform a simple loop test while holding the outer loops at their
minimum iteration parameter values (a sketch of this strategy follows this list).
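A small sketch of the strategy above (the loop bounds and simple-loop test values are illustrative assumptions, not from the text): the outer loop is held at its minimum iteration count while the innermost loop is exercised with its simple-loop test values.

    #include <stdio.h>

    void run_nested(int outer_n, int inner_n) {
        for (int i = 0; i < outer_n; i++)        /* outer loop                          */
            for (int j = 0; j < inner_n; j++)    /* innermost loop under test           */
                printf("i=%d j=%d\n", i, j);
    }

    int main(void) {
        int inner_values[] = {0, 1, 2, 10, 100}; /* simple-loop test values (assumed)   */
        for (int k = 0; k < 5; k++)
            run_nested(1, inner_values[k]);      /* outer loop held at its minimum      */
        return 0;
    }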
For concatenated loops, if the two loops are independent of each other then they are tested
using simple loop tests; otherwise they are tested as nested loops.
However, if the loop counter of one loop is used as the initial value for the other, then they
are not considered independent loops.
Unstructured Loops
Testing performed on an unstructured loop is known as unstructured loop testing.
An unstructured loop is a combination of nested and concatenated loops. It is basically a group
of loops that are in no particular order.
For unstructured loops, it requires restructuring of the design to reflect the use of the
structured programming constructs.
Summary:
• In Software Engineering, Loop testing is a White Box Testing. This technique is used to test
loops in the program.
• Loop testing can reveal performance/capacity bottlenecks
• Loop bugs show up mostly in low-level software
PREDICATES, PATH PREDICATES AND ACHIEVABLE PATHS:
PREDICATE:
The logical function evaluated at a decision is called a predicate. The direction taken
at a decision depends on the value of the decision variable(s).
MULTIWAY BRANCHES:
The path taken through a multiway branch, such as a computed GOTO, case
statement, or jump table, cannot be directly expressed in TRUE/FALSE terms.
Although it is possible to describe such alternatives by using multi-valued logic, the
practical approach is to express a multiway branch as an equivalent set of
if..then..else statements. For example, a three-way case statement can be written as:
IF case = 1
  DO A1
ELSE IF case = 2
  DO A2
ELSE
  DO A3
ENDIF
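In a C-like language the same three-way branch could appear as a switch statement, which for path predicate purposes is treated as the equivalent if..else chain. The sketch below is illustrative; the actions A1, A2 and A3 are the placeholder actions from the pseudocode above.

    #include <stdio.h>

    void A1(void) { printf("A1\n"); }    /* placeholder actions from the pseudocode */
    void A2(void) { printf("A2\n"); }
    void A3(void) { printf("A3\n"); }

    void three_way(int c) {
        switch (c) {                     /* multiway branch with three outcomes     */
            case 1:  A1(); break;        /* equivalent to: if (c == 1) A1();        */
            case 2:  A2(); break;        /*   else if (c == 2) A2();                */
            default: A3(); break;        /*   else A3();                            */
        }
    }

    int main(void) {
        three_way(1); three_way(2); three_way(7);
        return 0;
    }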
INPUTS:
In testing, the word input is not restricted to direct inputs, such as variables in a
subroutine call, but includes all data objects referenced by the routine whose values
are fixed prior to entering it: for example, inputs in a calling sequence, objects in a
data structure, values left in registers, or any combination of object types. The input
for a particular test is mapped as a one-dimensional array called the input vector.
PREDICATE INTERPRETATION:
The simplest predicate depends only on input variables. For example, if x1 and x2
are inputs, the predicate might be x1 + x2 >= 7. Given the values of x1 and x2, the
direction taken through the decision based on this predicate is determined at input
time and does not depend on processing.
for example,
INPUT X
ON X GOTO A, B, C, ...
A: Z := 7 @ GOTO HN
B: Z := - 7 @ GOTO HN
C: Z := 0 @ GOTO HN
HN: IF Y + Z > 0
GOTO ELL
ELSE
GOTO EMM
The first predicate, if y = 2, forces the rest of the path, so that for any positive
value of x the path taken at the second predicate will be the same for the
correct and buggy versions.
Self Blindness:
Self-blindness occurs when the buggy predicate is a multiple of the correct
predicate and, as a result, is indistinguishable along that path.
For example:
1. The assignment (x = a) makes the predicates multiples of each other, so the
direction taken is the same for the correct and buggy versions.
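A hedged sketch of self-blindness (the specific predicates are illustrative assumptions, not taken from the text): along the path that passes through the assignment x = a, the correct and the buggy predicates interpret to the same condition, so no test on this path can distinguish them.

    #include <stdio.h>

    void correct_version(int a) {
        int x = a;                      /* the assignment that causes the blindness   */
        if (x - 1 > 0)                  /* correct predicate: interprets to a > 1     */
            printf("taken\n");
    }

    void buggy_version(int a) {
        int x = a;
        if (x + a - 2 > 0)              /* buggy predicate: interprets to 2a - 2 > 0, */
            printf("taken\n");          /* i.e. a > 1, the same direction as above    */
    }

    int main(void) {
        correct_version(5);             /* both versions take the same branch for     */
        buggy_version(5);               /* every input on this path                   */
        return 0;
    }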
PATH SENSITIZING:
Achievable and unachievable paths:
1. Select and test enough paths to achieve a satisfactory notion of test
completeness, such as C1 + C2.
2. Extract the programs control flow graph and select a set of covering paths.
3. For any path in that set, interpret the predicates along the path as needed to
express them in terms of the input vector. In general individual predicates are
compound or may become compound as a result of interpretation.
4. Trace the path through, multiplying the individual compound predicates to
achieve a boolean expression such as
(A+BC) (D+E) (FGH) (IJ) (K) (L).
5. Multiply out the expression to achieve a sum of products form:
ADFGHIJKL+AEFGHIJKL+BCDFGHIJKL+BCEFGHIJKL
6. Each product term denotes a set of inequalities that if solved will yield an input
vector that will drive the routine along the designated path.
7. Solve any one of the inequality sets for the chosen path and you have found a
set of input values for the path.
8. If you can find a solution, then the path is achievable.
9. If you can't find a solution to any of the sets of inequalities, the path is
unachievable.
10. The act of finding a set of solutions to the path predicate expression is
called PATH SENSITIZATION.
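A minimal, hypothetical sketch of path sensitization (the routine, its predicates and the chosen input values are illustrative assumptions, not from the text): the predicates along the chosen path are interpreted in terms of the inputs and the resulting inequalities are solved to find an input vector that drives the routine along that path.

    #include <stdio.h>

    void route(int x, int y) {
        int z = x + 2;                 /* processing before the first decision          */
        if (z > 5) {                   /* predicate A, interpreted as: x > 3            */
            if (y - x < 0) {           /* predicate B, interpreted as: y < x            */
                printf("target path\n");
            }
        }
    }

    int main(void) {
        /* To sensitize the path A-TRUE, B-TRUE we solve the interpreted predicates
           x > 3 AND y < x.  One solution is x = 10, y = 1, so route(10, 1) drives
           execution along the chosen path; if no solution existed, the path would
           be unachievable. */
        route(10, 1);
        return 0;
    }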
HEURISTIC PROCEDURES FOR SENSITIZING PATHS:
1. This is a workable approach: instead of selecting paths without considering
how to sensitize them, attempt to choose a covering path set that is easy to sensitize, and
pick hard-to-sensitize paths only as needed to achieve coverage.
2. Identify all variables that affect the decision.
3. Classify the predicates as dependent or independent.
4. Start the path selection with uncorrelated, independent predicates.
5. If coverage has not been achieved using independent uncorrelated predicates,
extend the path set using correlated predicates.
6. If coverage has not been achieved extend the cases to those that involve
dependent predicates.
7. Last, use correlated, dependent predicates.
PATH INSTRUMENTATION:
Path instrumentation is what we have to do to confirm that the outcome was
achieved by the intended path.
Co-incidental Correctness:
Coincidental correctness means achieving the desired outcome for the wrong
reason.
The above figure is an example of a routine that, for the (unfortunately) chosen input
value (X = 16), yields the same outcome (Y = 2) no matter which case we select.
Therefore, the tests chosen this way will not tell us whether we have achieved
coverage. For example, the five cases could be totally confused and still the outcome
would be the same. Path Instrumentation is what we have to do to confirm that the
outcome was achieved by the intended path.
The types of instrumentation methods include:
1.Interpretive Trace Program: An interpretive trace program is one that executes
every statement in order and records the intermediate values of all calculations, the
statement labels traversed etc. If we run the tested routine under a trace, then we
have all the information we need to confirm the outcome and, furthermore, to
confirm that it was achieved by the intended path. The trouble with traces is that
they give us far more information than we need.
WHITE BOX TESTING is testing of a software solution's internal structure, design, and
coding. In this type of testing, the code is visible to the tester. It focuses primarily on verifying
the flow of inputs and outputs through the application, improving design and usability,
strengthening security. White box testing is also known as Clear Box testing, Open Box
testing, Structural testing, Transparent Box testing, Code-Based testing, and Glass Box
testing. It is usually performed by developers.
It is one of two parts of the Box Testing approach to software testing. Its counterpart, Blackbox
testing, involves testing from an external or end-user type perspective. On the other hand,
Whitebox testing is based on the inner workings of an application and revolves around internal
testing.
The term "WhiteBox" was used because of the see-through box concept. The clear box or
WhiteBox name symbolizes the ability to see through the software's outer shell into its inner
workings. Likewise, the "black box" in "Black Box Testing" symbolizes not being able to see
the inner workings of the software so that only the end-user experience can be tested.
White box testing involves the testing of the software code for the following:
• Expected output
The testing can be done at system, integration and unit levels of software development. One of
the basic goals of whitebox testing is to verify a working flow for an application. It involves
testing a series of predefined inputs against expected or desired outputs so that when a specific
input does not result in the expected output, you have encountered a bug.
How to perform White Box Testing?
To understand white box testing, we have divided it into two basic steps. This is what testers
do when testing an application using the white box testing technique:
The first thing a tester will often do is learn and understand the source code of the application.
Since white box testing involves the testing of the inner workings of an application, the tester
must be very knowledgeable in the programming languages used in the applications they are
testing. Also, the testing person must be highly aware of secure coding practices. Security is
often one of the primary objectives of testing software. The tester should be able to find security
issues and prevent attacks from hackers and naive users who might inject malicious code into
the application either knowingly or unknowingly.
The second basic step to white box testing involves testing the application's source code for
proper flow and structure. One way is by writing more code to test the application's source
code. The tester will develop little tests for each process or series of processes in the
application. This method requires that the tester must have intimate knowledge of the code and
is often done by the developer. Other methods include Manual Testing, trial, and error testing
and the use of testing tools.
For example, consider the following code fragment:
int result = a + b;
if (result > 0)
    printf("Positive\n");    /* statement executed when a + b is greater than zero */
else
    printf("Negative\n");    /* statement executed when a + b is zero or negative  */
To exercise the statements in the above code, WhiteBox test cases would be
• A = 1, B = 1
• A = -1, B = -3
A major White box testing technique is Code Coverage analysis. Code Coverage analysis
eliminates gaps in a Test Case suite. It identifies areas of a program that are not exercised by a
set of test cases. Once gaps are identified, you create test cases to verify untested parts of the
code, thereby increasing the quality of the software product.
There are automated tools available to perform Code coverage analysis. Below are a few
coverage analysis techniques
Statement Coverage:- This technique requires every possible statement in the code to be tested
at least once during the testing process of software engineering.
Branch Coverage - This technique checks every possible branch (the true and false outcomes of
if-else and other conditional constructs) of a software application.
Apart from above, there are numerous coverage types such as Condition Coverage, Multiple
Condition Coverage, Path Coverage, Function Coverage etc. Each technique has its own
merits and attempts to test (cover) all parts of software code. Using Statement and Branch
coverage you generally attain 80-90% code coverage which is sufficient.
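A hedged sketch (the function and inputs are hypothetical, not from the text) showing why branch coverage is stronger than statement coverage:

    #include <stdio.h>

    int safe_divide(int a, int b) {
        int result = 0;
        if (b != 0)
            result = a / b;       /* the only statement inside the decision           */
        return result;
    }

    int main(void) {
        /* safe_divide(10, 2) alone executes every statement (100% statement coverage)
           but exercises only the TRUE branch of the decision.  Adding safe_divide(10, 0)
           exercises the FALSE branch as well, which is what branch coverage requires.  */
        printf("%d %d\n", safe_divide(10, 2), safe_divide(10, 0));
        return 0;
    }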
White box testing encompasses several testing types used to evaluate the usability of an
application, block of code or specific software package. These are listed below.
• Unit Testing: It is often the first type of testing done on an application. Unit Testing is
performed on each unit or block of code as it is developed. Unit Testing is essentially
done by the programmer. As a software developer, you develop a few lines of code, a
single function or an object, and test it to make sure it works before continuing. Unit
testing helps identify a majority of bugs early in the software development lifecycle.
Bugs identified in this stage are cheaper and easier to fix.
• Testing for Memory Leaks: Memory leaks are leading causes of slower running
applications. A QA specialist who is experienced at detecting memory leaks is essential
in cases where you have a slow running software application.
• Parasoft Jtest
• EclEmma
• NUnit
• PyUnit
• HTMLUnit
• CppUnit
Static testing is a type of testing which requires only the source code of the product, not the
binaries or executables. Static testing does not involve executing the programs on computers,
but involves select people going through the code to find out whether
• the code works according to the functional requirement;
• the code has been written in accordance with the design developed earlier in the project
life cycle;
• the code for any functionality has been missed out;
• the code handles errors properly.
Static testing can be done by humans or with the help of specialized tools.
Static Testing by Humans
These methods rely on the principle of humans reading the program code to detect errors,
rather than computers executing the code to find errors. This process has several advantages:
• Sometimes humans can find errors that computers cannot.
• For example, when there are two variables with similar names and the programmer
used a “wrong” variable by mistake in an expression, the computer will not detect the
error but execute the statement and produce incorrect results, whereas a human being
can spot such an error.
• By making multiple humans read and evaluate the program, we can get multiple
perspectives and therefore have more problems identified than a computer could.
• A human evaluation of the code can compare it against the specifications or design
and thus ensure that it does what is intended to do.
• A human evaluation can detect many problems at one go and can even try to identify
the root causes of the problems., such testing only reveals the symptoms rather than the
root causes. Thus, the overall time required to fix all the problems can be reduced
substantially by a human evaluation.
• By making humans test the code before execution, computer resources can be saved.
Of course, this comes at the expense of human resources.
• A proactive method of testing like static testing minimizes the delay in identification of
the problems. The sooner a defect is identified and corrected, the lower the cost of fixing
the defect.
• From a psychological point of view, finding defects later in the cycle (for example, after
the code is compiled and the system is being put together) creates immense pressure
on programmers. They have to fix defects with less time to spare. With this kind of
pressure, there are higher chances of other defects creeping in.
There are multiple methods to achieve static testing by humans. They are as follows.
1. Desk checking of the code
2. Code walkthrough
3. Code review
4. Code inspection
Since static testing by humans is done before the code is compiled and executed, some
of these methods can be viewed as process-oriented or defect prevention-oriented or
quality assurance-oriented activities rather than pure testing activities. However, we take a
broader view of "testing" as anything that furthers the quality of a product. These methods have
been included in this chapter because they have visibility into the program code.
Desk checking
Normally done manually by the author of the code, desk checking is a method to verify
the portions of the code for correctness. Such verification is done by comparing the code
with the design or specifications to make sure that the code does what it is supposed to do
and effectively. This is the desk checking that most programmers do before compiling and
executing the code. Whenever errors are found, the author applies the corrections for errors
on the spot.
This method of catching and correcting errors is characterized by:
• no structured method or formalism to ensure completeness; and
• no maintaining of a log or checklist.
In effect, this method relies completely on the author's thoroughness, diligence, and skills.
There is no process or structure that guarantees or verifies the effectiveness of desk checking.
This method is not effective in detecting errors that arise due to incorrect understanding of
requirements or incomplete requirements. This is because developers may not have the domain
knowledge required to understand the requirements fully.
On the positive side, since the code is checked by its author immediately after it is written,
the defects are detected and corrected with minimum time delay.
Some of the disadvantages of this method of testing are as follows.
• A developer is not the best person to detect problems in his or her own code.
• He or she may be tunnel visioned and have blind spots to certain types of problems.
• Developers generally prefer to write new code rather than do any form of testing!
• This method is essentially person-dependent and informal and thus may not work
consistently across all developers.
• Owing to these disadvantages, the next two types of proactive methods are introduced.
The basic principle of walkthroughs and formal inspections is to involve multiple
people in the review process.
Code walkthrough
• This method and formal inspection (described in the next section) are group-
oriented methods. Walkthroughs are less formal than inspections.
• The advantage that walkthrough has over desk checking is that it brings
multiple perspectives.
• In walkthroughs, a set of people look at the program code and raise questions
for the author.
• The author explains the logic of the code, and answers the questions. If the
author is unable to answer some questions, he or she then takes those questions
and finds their answers.
• Completeness is limited to the area where questions are raised by the team.
Formal inspection
Code inspection—also called Fagan Inspection (named after the original
formulator)—is a method, normally with a high degree of formalism. The focus of
this method is to detect all faults, violations, and other side-effects.
This method increases the number of defects detected by
1. preparation before an inspection/review;
2. enlisting multiple diverse views;
3. assigning specific roles to the multiple participants;
4. and going sequentially through the code in a structured manner.
When the code is in a reasonable state of readiness, an inspection meeting is
arranged.
There are four roles in inspection.
• First is the author of the code.
• Second is a moderator who is expected to formally run the inspection
according to the process.
• Third are the inspectors. These are the people who actually provide review
comments on the code. There are typically multiple inspectors.
• Finally, there is a scribe, who takes detailed notes during the inspection
meeting and circulates them to the inspection team after the meeting.
The author or the moderator selects the review team. The chosen members have the
skill sets to uncover as many defects as possible. In an introductory meeting, the
inspectors get copies (These can be hard copies or soft copies) of the code to be
inspected along with other supporting documents such as the design document,
requirements document, and any documentation of applicable standards.
The moderator informs the team about the date, time, and venue of the inspection
meeting. The inspectors get adequate time to go through the documents and
program and ascertain their compliance to the requirements, design, and standards.
The moderator takes the team sequentially through the program code, asking each
inspector if there are any defects in that part of the code. If any of the inspectors
raises a defect, then the inspection team deliberates on the defect and, when agreed
that there is a defect,
classifies it in two dimensions: minor/major and systemic/mis-execution.
A mis-execution defect is one which, as the name suggests, happens because of an
error or slip on the part of the author. It is unlikely to be repeated later, either in this
work product or in other work products. An example of this is using a wrong variable
in a statement.
Systemic defects, on the other hand, can require correction at a different level.
Similarly, minor defects are defects that may not substantially affect a program,
whereas major defects need immediate attention.
A scribe formally documents the defects found in the inspection meeting and the
author takes care of fixing these defects. In case the defects are severe, the team may
optionally call for a review meeting to inspect the fixes to ensure that they address
the problems. In any case, defects found through inspection need to be tracked till
completion and someone in the team has to verify that the problems have been fixed
properly.
Some of the challenges to watch out for in conducting formal inspections are as
follows.
• These are time consuming.
• Since the process calls for preparation as well as formal meetings, these can
take time.
• Scheduling can become an issue since multiple people are involved.
• It may also not be necessary to subject the entire code to formal inspection.
In order to overcome the above challenges, it is necessary to identify, during the
planning stages, which parts of the code will be subject to formal inspections.
Portions of code can be classified on the basis of their criticality or complexity as
“high,” “medium,” and “low.”
High or medium complex critical code should be subject to formal inspections,
while those classified as “low” can be subject to either walkthroughs or even desk
checking.
Static Analysis Tools
The review and inspection mechanisms described above involve significant
amount of manual work. There are several static analysis tools available in the
market that can reduce the manual work and perform analysis of the code to find
out errors such as those listed below.
• whether there is unreachable code (for example, due to usage of GOTO statements)
• variables declared but not used
• mismatch in the definition and assignment of values to variables
• illegal and error-prone typecasting of variables
• use of non-portable or architecture-dependent programming constructs
• memory allocated but not having corresponding statements to free it
if (code == "M") {     /* illustrative fragment: code, value, Total and percent are assumed
                          to be declared earlier in the routine */
    stmt1;
    stmt2;
    stmt3;
    stmt4;
    stmt5;
    stmt6;
    stmt7;
}
else
    percent = value / Total * 100;   /* divide by zero */
In the above program, when we test with code = "M", we will get 80 percent code
coverage. But if the value of code is not "M", then the program will fail 90 percent
of the time (because of the divide by zero). Thus, even with a code coverage of 80
percent, we are left with a defect that hits the users 90 percent of the time.
Path coverage
In path coverage, we split a program into a number of distinct paths. A program (or
a part of a program) can start from the beginning and take any of the paths to its
completion.
Path Coverage = (Total paths exercised / Total number of paths in program) * 100
Let us take an example of a date validation routine. The date is accepted as three
fields mm, dd and yyyy. We have assumed that prior to entering this routine, the
values are checked to be numeric. To simplify the discussion, we have assumed the
existence of a function called leapyear which will return TRUE if the given year is
a leap year. There is an array called DayofMonth which contains the number of days
in each month. A simplified flow chart for this is given in the figure below.
As can be seen from the figure, there are different paths that can be taken through
the program. The possible paths are:
A
B-D-G
B-D-H
B-C-E-G
B-C-E-H
B-C-F-G
B-C-F-H
Regardless of the number of statements in each of these paths, if we can execute
these paths, then we would have covered most of the typical scenarios. Path
coverage provides a stronger condition of coverage than statement coverage as
it relates to the various logical paths in the program rather than just program
statements.
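Since the flow chart itself is not reproduced here, the sketch below is a hypothetical reconstruction of such a date validation routine: the exact decision structure is an assumption, and only the leapyear() helper, the DayofMonth array and the fact that path A corresponds to an invalid month are taken from the text.

    #include <stdio.h>

    static int DayofMonth[13] = {0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};

    static int leapyear(int yyyy) {                 /* TRUE if yyyy is a leap year       */
        return (yyyy % 4 == 0 && yyyy % 100 != 0) || (yyyy % 400 == 0);
    }

    int valid_date(int mm, int dd, int yyyy) {
        if (mm < 1 || mm > 12)                      /* invalid-month check (path A)      */
            return 0;
        int days = DayofMonth[mm];
        if (mm == 2 && leapyear(yyyy))              /* February / leap year decision     */
            days = 29;
        if (dd < 1 || dd > days)                    /* day-of-month decision             */
            return 0;
        return 1;
    }

    int main(void) {
        printf("%d %d %d\n", valid_date(0, 10, 2020),   /* exercises the invalid-month path */
                             valid_date(2, 29, 2020),   /* leap year February               */
                             valid_date(4, 31, 2021));  /* invalid day of month             */
        return 0;
    }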
Condition coverage
In the above example, even if we have covered all the paths possible, it would not mean that
the program is fully tested. For example, we can make the program take the path A by giving
a value less than 1 (for example, 0) to mm and find that we have covered the path A and the
program has detected that the month is invalid. But, the program may still not be correctly
testing for the other condition namely mm > 12.
Most compilers perform optimizations to minimize the number of Boolean operations, and
all the conditions may not get evaluated even though the right path is chosen.
For example, when there is an OR condition (as in the first IF statement above), once the first
part of the IF (for example, mm < 1) is found to be true, the second part will not be evaluated
at all as the overall value of the Boolean is TRUE.
Similarly, when there is an AND condition in a Boolean expression, when the first condition
evaluates to FALSE, the rest of the expression need not be evaluated at all. For all these reasons,
path testing may not be sufficient. It is necessary to have test cases that exercise each Boolean
expression and that produce the TRUE as well as the FALSE outcomes.
This will mean more test cases, and the number of test cases will rise exponentially with the
number of conditions and Boolean expressions.
Condition Coverage = (Total decisions exercised / Total number of decisions in program) * 100
The condition coverage, as defined by the formula above, gives an indication of the percentage
of conditions covered by a set of test cases. Condition coverage is a much stronger criterion
than path coverage, which in turn is a much stronger criterion than statement coverage.
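A hedged sketch (the function and its inputs are hypothetical, but the mm < 1 / mm > 12 conditions follow the month-check example above) of why condition coverage needs more than path coverage: with short-circuit evaluation, the second condition of an OR is never evaluated when the first is already TRUE.

    #include <stdio.h>

    int invalid_month(int mm) {
        return (mm < 1 || mm > 12);    /* mm = 0 exercises only the mm < 1 condition;   */
    }                                  /* mm = 13 is needed to exercise mm > 12 as well */

    int main(void) {
        printf("%d\n", invalid_month(0));    /* TRUE via the first condition  */
        printf("%d\n", invalid_month(13));   /* TRUE via the second condition */
        printf("%d\n", invalid_month(6));    /* FALSE outcome                 */
        return 0;
    }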
Function coverage
This is to identify how many program functions (similar to functions in “C” language) are
covered by test cases. The requirements of a product are mapped into functions during the
design phase, and each of the functions forms a logical unit.
For example, in a database software, “inserting a row into the database” could be a function.
Or, in a payroll application, “calculate tax” could be a function. Each function could, in turn,
be implemented using other functions. While providing function coverage, test cases can be
written so as to exercise each of the different functions in the code.
The advantages that function coverage provides over the other types of coverage are as follows.
• Functions are easier to identify in a program and hence it is easier to write test cases to
provide function coverage.
• Since functions are at higher level of abstraction than code, it is easier to achieve 100
percent function coverage than 100 percent coverage in any of the earlier methods.
• Functions have a more logical mapping to requirements and hence can provide a more
direct correlation to the test coverage of the product.
• Function coverage provides a natural transition to black box testing. We can also
measure how many times a given function is called. This will indicate which functions
are used most often and hence these functions become the target of any performance
testing and optimization.
Function coverage can help in improving the performance as well as quality of the product.
Summary
Code coverage testing involves “dynamic testing” methods of executing the product with pre-
written test cases, and finding out how much of code has been covered.
Code coverage up to 40-50 percent is usually achievable. Code coverage of more than 80
percent requires an enormous amount of effort and understanding of the code. Coverage provides
more confidence by exercising various logical paths and functions. Code coverage tests can
identify the areas of a code that are executed most frequently. Extra attention can then be paid
to these sections of the code. Code coverage testing provides information that is useful in
making such performance-oriented decisions.
Code Complexity Testing
The previous sections discussed the different types of coverage that can be provided to test a
program. Two questions that come to mind while using these coverage measures are:
• Which of the paths are independent? If two paths are not independent, then we may be
able to minimize the number of tests.
• Is there an upper bound on the number of tests that must be run to ensure that
all the statements have been executed at least once?
Cyclomatic complexity is a metric that quantifies the complexity of a program and
thus provides answers to the above questions. A program is represented in the form
of a flow graph. A flow graph consists of nodes and edges.
In order to convert a standard flow chart into a flow graph to compute cyclomatic
complexity, the following steps can be taken.
• Identify the predicates or decision points (typically the Boolean conditions or
conditional statements) in the program.
• Ensure that the predicates are simple (that is, no and/or, and so on in each
predicate).
• Figure 3.3 shows how to break up a condition having or into simple predicates.
Similarly, if there are loop constructs, break the loop termination checks into
simple predicates. Combine all sequential statements into a single node. The
reasoning here is that these statements all get executed, once started.
• When a set of sequential statements are followed by a simple predicate (as
simplified in (2) above), combine all the sequential statements and the
predicate check into one node and have two edges emanating from this one
node. Such nodes with two edges emanating from them are called predicate
nodes.
• Make sure that all the edges terminate at some node; add a node to represent
all the sets of sequential statements at the end of the program.
Figure 3.3 Flow graph translation of an OR to a simple predicate.
We have illustrated the above transformation rules of a
conventional flow chart to a flow diagram in Figure 3.4.
A flow graph and the cyclomatic complexity provide indicators of the complexity of
the logic flow in a program and of the number of independent paths in a program.
The primary contributors to both the complexity and the independent paths are the
decision points in the program.
Consider a hypothetical program with no decision points. The flow graph of such a
program (shown in Figure 3.5 above) would have two nodes, one for the code and
one for the termination node. Since all the sequential steps are combined into one
node (the first node), there is only one edge, which connects the two nodes. This
edge is the only independent path. Hence, for this flow graph, cyclomatic complexity
is equal to one.
This graph has no predicate nodes because there are no decision points. Hence, the
cyclomatic complexity is also equal to the number of predicate nodes (0) + 1 = 1.
Note that in this flow graph the number of edges E = 1 and the number of nodes N = 2, so the
cyclomatic complexity is also equal to E - N + 2 = 1 - 2 + 2 = 1.
When a predicate node is added to the flow graph (shown in Figure 3.6 above),
there are obviously two independent paths, one following the path when the Boolean
condition is TRUE and one when the Boolean condition is FALSE. Thus, the
cyclomatic complexity of the graph is 2.
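A small hypothetical example (not from the text) relating the two ways of computing cyclomatic complexity, V(G) = E - N + 2 = number of predicate nodes + 1:

    #include <stdio.h>

    int sign(int x) {
        if (x > 0)            /* predicate node 1 */
            return 1;
        else if (x < 0)       /* predicate node 2 */
            return -1;
        else
            return 0;
    }

    int main(void) {
        /* The flow graph of sign() has 2 predicate nodes, so V(G) = 2 + 1 = 3;
           counting edges and nodes (E = 7, N = 6) gives the same value, 7 - 6 + 2 = 3.
           Three test cases, e.g. sign(5), sign(-5) and sign(0), are enough to cover
           the three independent paths. */
        printf("%d %d %d\n", sign(5), sign(-5), sign(0));
        return 0;
    }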
• Black box testing involves looking at the specifications and does not
require examining the code of a program. Black box testing is done from
the customer's viewpoint.
• The test engineer engaged in black box testing only knows the set of inputs
and expected outputs and is unaware of how those inputs are transformed
into outputs by the software.
• Black box testing is done without the knowledge of the internals of the
system under test. They do not require any knowledge of its construction.
Let us take a lock and key. We do not know how the levers in the lock work, but
we only know the set of inputs (the number of keys, specific sequence of using
the keys and the direction of turn of each key) and the expected outcome (locking
and unlocking). For example, if a key is turned clockwise it should unlock and if
turned anticlockwise it should lock. To use the lock one need not understand how
the levers inside the lock are constructed or how they work. However, it is
essential to know the external functionality of the lock and key system. Some of
the functionality that you need to know to use the lock are given below.
Black box testing thus requires a functional knowledge of the product to be tested.
It does not mandate the knowledge of the internal logic of the system nor does it
mandate the knowledge of the programming language used to build the product.
Our tests in the above example focused on testing the features of the product (the lock and key) and its different states; we already knew the expected outcomes.
Black box testing helps in the overall functionality verification of the system
under test.
Since users test the behavior of a product from an external perspective, end-user perspectives are an integral part of black box testing.
Various techniques can be used to generate test scenarios for effective black box testing; these are described below.
• In the above table, the naming convention uses a prefix “BR” followed by
a two-digit number. BR indicates the type of testing—"Black box-
requirements testing.”
• Tests for higher priority requirements will get precedence over tests for
lower priority requirements.
• This ensures that the functionality that has the highest risk is tested earlier
in the cycle. Defects reported by such testing can then be fixed as early as
possible.
• The “test conditions” column lists the different ways of testing the
requirement. These conditions can be grouped together to form a single test
case. Alternatively, each test condition can be mapped to one test case.
• The “test case IDs” column can be used to complete the mapping between
test cases and the requirement. Test case IDs should follow naming
conventions so as to enhance their usability.
• For example, in Table 4.2, test cases are serially numbered and prefixed
with the name of the product.
Positive testing can thus be said to check the product's behavior for positive and
negative conditions as stated in the requirement. For the lock and key example, a
set of positive test cases are given below. (Please refer to Table 4.2 for
requirements specifications.)
Let us take the first row in the below table. When the lock is in an unlocked state
and you use key 123—456 and turn it clockwise, the expected outcome is to get
it locked. During test execution, if the test results in locking, then the test is
passed. This is an example of “positive test condition” for positive testing.
In the fifth row of the table, the lock is in locked state. Using a hairpin and turning
it clockwise should not cause a change in state or cause any damage to the lock.
On test execution, if there are no changes, then this positive test case is passed.
This is an example of a “negative test condition” for positive testing.
Positive testing is done to verify the known test conditions and negative testing
is done to break the product with unknowns.
Negative testing
Negative testing is done to show that the product does not fail when an
unexpected input is given. The purpose of negative testing is to try and break
the system. In other words, the input values may not have been represented in the
specification of the product. These test conditions can be termed as unknown
conditions for the product as far as the specifications are concerned.
But, at the end-user level, there are multiple scenarios that are encountered and
that need to be taken care of by the product. It becomes even more important for
the tester to know the negative situations that may occur at the end-user level so
that the application can be tested and made reliable.
Table 4.5 gives some of the negative test cases for the lock and key example.
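As a minimal sketch of how such positive and negative cases might be automated, the example below uses a hypothetical Lock class as a stand-in for the product; only its externally visible behavior (key, direction of turn, resulting state) is exercised, following the convention of the positive test case above in which turning the valid key clockwise locks it.

class Lock:
    """Hypothetical stand-in for the lock-and-key product under test."""
    def __init__(self, valid_key="123-456"):
        self._valid_key = valid_key
        self.state = "UNLOCKED"

    def turn(self, key, direction):
        # Only the matching key changes the state; anything else is ignored.
        if key == self._valid_key and direction == "clockwise":
            self.state = "LOCKED"
        elif key == self._valid_key and direction == "anticlockwise":
            self.state = "UNLOCKED"
        return self.state

def test_positive_condition():
    # Documented input, documented outcome: the valid key locks the lock.
    assert Lock().turn("123-456", "clockwise") == "LOCKED"

def test_negative_condition_for_positive_testing():
    # Documented behavior for an undesired input: a hairpin must not
    # change the state of a locked lock.
    lock = Lock()
    lock.turn("123-456", "clockwise")
    assert lock.turn("hairpin", "clockwise") == "LOCKED"

def test_negative_testing_with_unknown_input():
    # Negative testing: try to break the product with an input the
    # specification never mentions (no key at all).
    assert Lock().turn(None, "clockwise") == "UNLOCKED"

if __name__ == "__main__":
    test_positive_condition()
    test_negative_condition_for_positive_testing()
    test_negative_testing_with_unknown_input()
    print("lock and key tests passed")

Such tests can be collected by a test runner (for example, pytest) or simply called directly as shown.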
Whenever we do testing by boundary value analysis, the tester focuses on whether the software produces the correct output while boundary values are being entered.
Boundary values are those that contain the upper and lower limit of a variable. Assume that,
age is a variable of any function, and its minimum value is 18 and the maximum value is 30,
both 18 and 30 will be considered as boundary values.
The basic assumption of boundary value analysis is, the test cases that are created using
boundary values are most likely to cause an error.
Here, 18 and 30 are the boundary values, which is why the tester pays more attention to them; but this does not mean that middle values like 19, 20, 21, 27, and 29 are ignored. Test cases are developed for each and every value of the range.
Testing of boundary values is done by making valid and invalid partitions. Invalid partitions
are tested because testing of output in adverse condition is also essential.
Imagine a function that accepts a number between 18 and 30, where 18 is the minimum and 30 is the maximum value of the valid partition; the other values of this partition are 19, 20, 21, 22, 23, 24, 25, 26, 27, 28 and 29. The invalid partitions consist of the numbers less than 18, such as 12, 14, 15, 16 and 17, and the numbers greater than 30, such as 31, 32, 34, 36 and 40. The tester develops test cases for both valid and invalid partitions to capture the behavior of the system under different input conditions.
The software system passes the test if it accepts a valid number and gives the desired output; otherwise, it fails. In the other scenario, the software system should not accept invalid numbers, and if the entered number is invalid, it should display an error message.
If the software under test follows all the testing guidelines and specifications, it is sent to the release team; otherwise, it goes back to the development team to fix the defects.
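A minimal sketch of boundary value cases for this age example is shown below. The accept_age validator is a hypothetical stand-in for the function under test, and the expected outcomes follow the 18-30 valid range described above.

def accept_age(age):
    # Hypothetical stand-in for the function under test (valid range 18-30).
    return "accepted" if 18 <= age <= 30 else "error"

# Boundary-focused test data: the limits themselves, the values just
# outside them, and one nominal value from the valid partition.
cases = {
    17: "error",      # just below the minimum (invalid partition)
    18: "accepted",   # lower boundary
    24: "accepted",   # nominal value inside the valid partition
    30: "accepted",   # upper boundary
    31: "error",      # just above the maximum (invalid partition)
}

for age, expected in cases.items():
    actual = accept_age(age)
    assert actual == expected, f"age={age}: expected {expected}, got {actual}"
print("all boundary value cases passed")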
The condition is simple: if the user provides the correct username and password, the user is redirected to the home page. If either input is wrong, an error message is displayed.
• T – Correct username/password
• F – Wrong username/password
• E – Error message is displayed
• H – Home screen is displayed
Interpretation:
• Case 1 – Username and password both were wrong. The user is shown an error message.
• Case 2 – Username was correct, but the password was wrong. The user is shown an
error message.
• Case 3 – Username was wrong, but the password was correct. The user is shown an
error message.
• Case 4 – Username and password both were correct, and the user navigated to the home page.
Decision tables act as invaluable tools for designing black box tests to examine the behavior of the product under various logical conditions of input variables.
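As a minimal sketch, the four cases above can be written down as a decision table in data form and replayed against the system under test; the login() function here is a hypothetical stand-in for the real login logic.

# The login decision table as data. T/F mark correct/wrong inputs,
# E/H mark the error-message and home-screen outcomes.
decision_table = [
    # (username correct, password correct, expected outcome)
    (False, False, "E"),  # Case 1
    (True,  False, "E"),  # Case 2
    (False, True,  "E"),  # Case 3
    (True,  True,  "H"),  # Case 4
]

def login(username_ok, password_ok):
    # Stand-in for the system under test.
    return "H" if username_ok and password_ok else "E"

for username_ok, password_ok, expected in decision_table:
    assert login(username_ok, password_ok) == expected
print("all decision table cases passed")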
Due to time and budget considerations, it is not possible to perform exhaustive testing for each set of test data, especially when there is a large pool of input combinations.
• We need an easy way or special techniques that can select test cases intelligently from the pool of test cases, such that all test scenarios are covered.
• We use two techniques, Equivalence Partitioning and Boundary Value Analysis, to achieve this.
Boundary testing is the process of testing between extreme ends or boundaries between
partitions of the input values.
• So these extreme ends, like Start-End, Lower-Upper, Minimum-Maximum, and Just Inside-Just Outside values, are called boundary values, and the testing is called "boundary testing".
• The basic idea in boundary value testing is to select input variable values at the following points (a short sketch follows these points):
1. Minimum
2. Just above the minimum
3. A nominal value
4. Just below the maximum
5. Maximum
• In boundary testing, equivalence class partitioning plays an important role.
• Boundary testing comes after equivalence class partitioning.
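The five selection points listed above can be generated mechanically for any closed numeric range. The helper below is an illustrative sketch; the 1-10 range in the example call matches the Order Pizza field discussed later in this section.

def boundary_values(minimum, maximum, nominal=None):
    """Return the five classic boundary-value test inputs for a closed range."""
    if nominal is None:
        nominal = (minimum + maximum) // 2   # pick a value well inside the range
    return [
        minimum,        # minimum
        minimum + 1,    # just above the minimum
        nominal,        # a nominal value
        maximum - 1,    # just below the maximum
        maximum,        # maximum
    ]

print(boundary_values(1, 10))   # [1, 2, 5, 9, 10]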
• It divides the input data of software into different equivalence data classes.
• You can apply this technique wherever there is a range in the input field. Equivalence partitioning is a software testing technique that involves identifying all partitions for the complete set of input values.
• The set of input values that generate one single expected output is called a partition.
When the behavior of the software is the same for a set of values, then the set is termed
as an equivalence class or a partition.
• In this case, one representative sample from each partition (also called a member of the equivalence class) is picked up for testing. One sample from the partition is enough, as picking more values from the set would give the same result and would not yield any additional defects. Since all the values produce the same output, they are termed an equivalence partition. Testing by this technique involves (a) identifying all partitions for the complete set of input and output values and (b) picking one representative value from each partition for testing.
• This reduces the number of combinations of input, output values used for testing,
thereby increasing the coverage and reducing the effort involved in testing.
Consider an Order Pizza field on a form with a Submit button, which accepts the number of pizzas to order. Here are the test conditions:
1. Any number greater than 10 entered in the Order Pizza field (say 11) is considered invalid.
2. Any number less than 1, that is 0 or below, is considered invalid.
3. Numbers 1 to 10 are considered valid.
4. Any 3-digit number, say -100, is invalid.
We cannot test all the possible values because if we did, the number of test cases would be more than 100. To address this problem, we use the equivalence partitioning hypothesis, where we divide the possible input values into groups or sets, as shown below, within which the system behavior can be considered the same.
The divided sets are called Equivalence Partitions or Equivalence Classes. Then we pick only
one value from each partition for testing. The hypothesis behind this technique is that if one
condition/value in a partition passes all others will also pass. Likewise, if one condition in
a partition fails, all other conditions in that partition will fail.
Boundary Value Analysis: in boundary value analysis, you test the boundaries between equivalence partitions.
In our earlier example, instead of checking one value for each partition, you check the values at the partition boundaries, such as 0, 1, 10, 11, and so on. As you may observe, you test values at both valid and invalid boundaries. Boundary value analysis is also called range checking.
Equivalence partitioning and boundary value analysis(BVA) are closely related and can be
used together at all levels of testing.
1. This testing is used to reduce a very large number of test cases to manageable chunks.
2. Very clear guidelines on determining test cases without compromising on the
effectiveness of testing.
3. Appropriate for calculation-intensive applications with a large number of
variables/inputs
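A minimal sketch of how the two techniques combine for the Order Pizza field is given below. The accept_order function is a hypothetical stand-in for the system under test, and one representative is picked from each partition on the hypothesis that all members of a partition behave the same way.

def accept_order(quantity):
    # Stand-in for the system under test (valid order quantity: 1 to 10).
    return "valid" if 1 <= quantity <= 10 else "invalid"

partitions = {
    "below valid range": ([-100, 0], "invalid"),
    "valid range 1-10":  ([1, 5, 10], "valid"),
    "above valid range": ([11, 99], "invalid"),
}

for name, (members, expected) in partitions.items():
    representative = members[0]          # one sample per partition
    assert accept_order(representative) == expected, name
    # Spot-check the hypothesis: other members of the same partition
    # should produce the same result as the representative.
    assert all(accept_order(m) == expected for m in members), name
print("equivalence partitioning checks passed")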
State or graph based testing
State or graph based testing is very useful in situations where the product can be modeled as a set of states, with well-understood inputs or events causing transitions from one state to another.
The state transition diagram can be converted to a state transition table (Table
4.10), which lists the current state, the inputs allowed in the current state, and
for each such input, the next state.
The above state transition table can be used to derive test cases to test valid and invalid numbers. Valid test cases can be generated as follows:
• Start from the Start State (State #1 in the example).
• Choose a path that leads to the next state (for example, +/-/digit to go from State 1 to State 2).
• Repeat the process till you reach the final state (State 6 in this example).
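The sketch below shows one way such a table can be represented and walked to derive test cases. The table contents are a hypothetical fragment for a numeric-input parser and only stand in for Table 4.10; the states and input classes are illustrative.

transition_table = {
    # (current state, input class) -> next state
    (1, "sign"): 2,
    (1, "digit"): 2,
    (2, "digit"): 2,
    (2, "decimal_point"): 3,
    (3, "digit"): 4,
    (4, "digit"): 4,
    (4, "end"): 6,        # State 6 is the final (accepting) state
}

def walk(start_state, inputs):
    """Follow a sequence of inputs through the table; None means invalid."""
    state = start_state
    for symbol in inputs:
        state = transition_table.get((state, symbol))
        if state is None:
            return None   # no transition defined: an invalid test case
    return state

# A valid path from the start state (1) to the final state (6).
assert walk(1, ["sign", "digit", "decimal_point", "digit", "end"]) == 6
# An invalid path: a decimal point is not allowed from the start state.
assert walk(1, ["decimal_point"]) is None
print("state transition paths checked")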
Graph based testing methods are applicable to generate test cases for state
machines such as language translators, workflows, transaction flows, and data
flows.
• The employee fills up a leave application, giving his or her employee ID,
and start date and end date of leave required.
The above flow of transactions can again be visualized as a simple state based
graph as given in Figure
• The data values (screens, mouse clicks, and so on) that cause the transition from one state to another are well understood.
In the above sections, we looked at several techniques to test product features and
requirements. It was also mentioned that the test case results are compared with expected
results to conclude whether the test is successful or not. The test case results not only depend
on the product for proper functioning; they depend equally on the infrastructure for delivering
functionality. When infrastructure parameters are changed, the product is expected to still
behave correctly and produce the desired or expected results. The infrastructure parameters
could be hardware, software, or other components. These parameters are different for different customers. Black box testing that does not consider the effects of these parameters on the test case results will necessarily be incomplete and ineffective.
Hence, there is a need for compatibility testing. This testing ensures the working of the product
with different infrastructure components. The techniques used for compatibility testing are
explained in this section. Testing done to ensure that the product features work
consistently with different infrastructure components is called compatibility testing.
The parameters that generally affect the compatibility of the product are
• Processor (CPU) (Pentium III, Pentium IV, Xeon, SPARC, and so on) and the number
of processors in the machine
• Architecture and characteristics of the machine (32 bit, 64 bit, and so on)
• Equipment that the product is expected to work with (printers, modems, routers, and so
on)
• Operating system (Windows, Linux, and so on and their variants) and operating system
services (DNS, NIS, FTP, and so on)
• Any software used to generate product binaries (compiler, linker, and so on and their
appropriate versions)
• Various technological components used to generate components (SDK, JDK, and so on
and their appropriate different versions)
• Some of the common techniques used for performing compatibility testing with a compatibility table are given below; a minimal sketch of a compatibility matrix follows this list.
• Horizontal combination
All values of parameters that can coexist with the product for executing the set of test cases are grouped together as a row in the compatibility matrix. The values of
parameters that can coexist generally belong to different layers/types of infrastructure
pieces such as operating system, web server, and so on. Machines or environments are
set up for each row and the set of product features are tested using each of these
environments.
• Intelligent sampling
The selection of intelligent samples is based on information collected on the set of
dependencies of the product with the parameters. If the product results are less dependent
on a set of parameters, then they are removed from the list of intelligent samples. All other
parameters are combined and tested.
• The compatibility testing of a product involving parts of itself can be further classified
into two types.
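A minimal sketch of a compatibility matrix built by horizontal combination is shown below. The parameter values in the rows are illustrative only; in practice each row corresponds to one environment that is set up and used to run the product's feature tests.

compatibility_matrix = [
    # (operating system, web server, database) -- one coexisting set per row
    ("Windows Server", "IIS",    "SQL Server"),
    ("Linux",          "Apache", "MySQL"),
    ("Linux",          "Nginx",  "PostgreSQL"),
]

def run_feature_tests(environment):
    # Stand-in for executing the product's feature test suite in the
    # environment described by one row of the matrix.
    os_name, web_server, database = environment
    print(f"Running feature tests on {os_name} / {web_server} / {database}")
    return True

results = {env: run_feature_tests(env) for env in compatibility_matrix}
assert all(results.values())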
Testing these documents attains importance due to the fact that the users will have to refer to these manuals, installation guides, and setup guides when they start using the software at their locations. Most often the users are not aware of the software and need hand-holding until they feel comfortable.
Since these documents are the first interactions the users have with the product, they tend to create
lasting impressions.
A badly written installation document can affect the product quality, even if the product offers rich
functionality.
Some of the benefits that ensue from user documentation testing are:
• User documentation testing aids in highlighting problems overlooked during reviews.
• High quality user documentation ensures consistency of documentation and product, thus
minimizing possible defects reported by customers. It also reduces the time taken for each
support call—sometimes the best way to handle a call is to alert the customer to the relevant
section of the manual. Thus the overall support cost is minimized.
• When a customer faithfully follows the instructions given in a document but is unable to
achieve the desired (or promised) results, it is frustrating and often this frustration shows
up on the support staff.
• Good user documentation contributes to better customer satisfaction and better morale of support staff.
• New programmers and testers who join a project group can use the documentation to learn
the external functionality of the product.
• Customers need less training and can proceed more quickly to advanced training and
product usage if the documentation is of high quality and is consistent with the product.
• Thus high-quality user documentation can result in a reduction of overall training costs for
user organizations.
• Defects found in user documentation need to be tracked to closure like any regular software
defect.
• In order to enable an author to close a documentation defect, information about the defect/comment description, paragraph/page number reference, document version number reference, name of reviewer, reviewer's contact number, priority, and severity of the comment needs to be passed to the author.
• Since good user documentation aids in reducing customer support calls, it is a major contributor to the bottom line of the organization.
• The effort and money spent on this effort would form a valuable investment in the long run
for the organization.
DOMAIN TESTING
White box testing requires looking at the program code. Black box testing performs testing without looking at the program code, relying instead on the specifications. Domain testing can be considered
as the next level of testing in which we do not look even at the specifications of a software product
but are testing the product, purely based on domain knowledge and expertise in the domain of
application. This testing approach requires critical understanding of the day-to-day business
activities for which the software is written. This type of testing requires business domain
knowledge rather than the knowledge of what the software specification contains or how the
software is written. Thus domain testing can be considered an extension of black box testing; it focuses more on the product's external behavior.
The test engineers performing this type of testing are selected because they have in-depth
knowledge of the business domain. Since the depth in business domain is a prerequisite for this
type of testing, sometimes it is easier to hire testers from the domain area (such as banking,
insurance, and so on) and train them in software, rather than take software professionals and train
them in the business domain. This reduces the effort and time required for training the testers in
domain testing and also increases the effectiveness of domain testing.
Knowing the account opening process in a bank enables a tester to test that functionality better.
In this case, the bank official who deals with account opening knows the attributes of the people
opening the account, the common problems faced, and the solutions in practice. To take this
example further, the bank official might have encountered cases where the person opening the
account might not have come with the required supporting documents or might be unable to fill
up the required forms correctly. In such cases, the bank official may have to engage in a different
set of activities to open the account.
Though most of these may be stated in the business requirements explicitly, there will be cases
that the bank official would have observed while testing the software that are not captured in the
requirement specifications explicitly. Hence, when he or she tests the software, the test cases are
likely to be more thorough and realistic. Domain testing is the ability to design and execute test
cases that relate to the people who will buy and use the software.
It is also characterized by how well an individual test engineer understands the operation of the
system and the business processes that system is supposed to support. If a tester does not
understand the system or the business processes, it would be very difficult for him or her to use,
let alone test, the application without the aid of test scripts and cases. Domain testing exploits the
tester's domain knowledge to test the suitability of the product to what the users do on a typical
day.
Domain testing involves testing the product by going through the relevant business flows, not by going through the logic built into the product.
The business flow determines the steps, not the software under test. This is also called “business
vertical testing.” Test cases are written based on what the users of the software do on a typical
day.
Let us further understand this testing using an example of cash withdrawal functionality in an
ATM, extending the earlier example on banking software. The user performs the following actions.
In the above example, a domain tester is not concerned about testing everything in the design;
rather, he or she is interested in testing everything in the business flow.
Generally, domain testing is done after all components are integrated and after the product has
been tested using other black box approaches (such as equivalence partitioning and boundary value
analysis). Hence the focus of domain testing has to be more on the business domain to ensure that
the software is written with the intelligence needed for the domain. To test the software for a
particular “domain intelligence,” the tester is expected to have the intelligence and knowledge of
the practical aspects of business flow. This will reflect in better and more effective test cases which
examine realistic business scenarios, thus meeting the objective and purpose of domain testing.