
What is Software Testing?

SOFTWARE TESTING is defined as an activity to check whether the actual results
match the expected results and to ensure that the software system is defect free. It
involves execution of a software component or system component to evaluate one
or more properties of interest. Software testing also helps to identify errors, gaps, or
missing requirements contrary to the actual requirements. It can be done either
manually or using automated tools.

Types of Software Testing

Typically Testing is classified into three categories.

Functional Testing

Non-Functional Testing or Performance Testing

Maintenance Testing (Regression and Maintenance)

The Purpose of Testing

Productivity is measured by the sum of the costs of the material, the rework, and the
discarded components, and the cost of quality assurance and testing.

The biggest part of software cost is the cost of bugs: the cost of detecting them, the
cost of correcting them, the cost of designing tests that discover them, and the cost
of running those tests.

Phases in testing

Phases in testing can be categorized into the following 5 phases:

1. Phase 0: (Until 1956: Debugging Oriented) There is no difference between testing
and debugging.

2. Phase 1: (1957-1978: Demonstration Oriented) The purpose of testing is to show
that the software works. This failed because the probability of showing that the
software works decreases as testing increases; i.e., the more you test, the more likely
you are to find a bug.

3. Phase 2: (1979-1982: Destruction Oriented) The purpose of testing is to show that
the software doesn't work. This also failed because, by this reasoning, the software
would never get released, as you will always find one more bug.

4. Phase 3: (1983-1987: Evaluation Oriented) The purpose of testing is to improve
the product, to the extent that testing catches bugs and to the extent that those bugs
are fixed. The product is released when the confidence in the product is high enough.

5. Phase 4: (1988-2000: Prevention Oriented) Testability is the factor considered
here. One reason is to reduce the labor of testing. The other is to distinguish testable
from non-testable code.

Testable code has fewer bugs than the code that's hard to test. Identifying the testing
techniques to test the code is the main key here.

Test Design:

The software code must be designed and tested, but many appear to be unaware that
tests themselves must be designed and tested. Tests should be properly designed and
tested before applying them to the actual code.

There are approaches other than testing to create better software. Methods other than
testing include:

1. Inspection Methods: Methods like walkthroughs, desk checking, formal
inspections, and code reading appear to be as effective as testing.

2. Design Style: While designing the software itself, adopting objectives such as
testability, openness and clarity can do much to prevent bugs.

3. Static Analysis Methods: These include formal analysis of source code during
compilation. In earlier days this was a routine job of the programmer; now the
compilers have taken over much of that job.

4. Languages: The source language can help reduce certain kinds of bugs, although
programmers tend to find new kinds of bugs while using new languages.
5. Development Methodologies and Development Environment: The development
process and the environment in which that methodology is embedded can prevent
many kinds of bugs.
Dichotomies

Testing Versus Debugging:

Many people consider the two to be the same. The purpose of testing is to show that a
program has bugs. The purpose of debugging is to find the error or misconception that
led to the program's failure and to design and implement the program changes that
correct the error. Debugging usually follows testing, but they differ in goals and
methods.

Function versus Structure:


Tests can be designed from a functional or a structural point of view. In functional
testing, the program or system is treated as a black box. It is subjected to inputs, and
its outputs are verified for conformance to specified behavior.

Structural testing, on the other hand, does look at the implementation details. Things
such as programming style, control method, source language, database design, and
coding details dominate structural testing. Both structural and functional tests are
useful, both have limitations, and both target different kinds of bugs. Functional tests
can in principle detect all bugs but would take infinite time to do so. Structural tests
cannot detect all errors even if completely executed.

Designer versus Tester:


The test designer is the person who designs the tests, whereas the tester is the one who
actually runs the tests on the code. During functional testing, the designer and tester are
probably different persons. During unit testing, the tester and the programmer merge
into one person. Tests designed and executed by the software designers are by nature
biased towards structural considerations.

Modularity versus Efficiency:


A module is a discrete, well-defined, small component of a system. The smaller the
modules, the more difficult they are to integrate; the larger the modules, the more
difficult they are to understand. Both tests and systems can be modular.

Builder versus Buyer:


Most software is written and used by the same organization.

The different roles/ users in a system include:

1. Builder: Who designs the system and is accountable to the buyer.

2. Buyer: Who pays for the system in the hope of profits from providing services.

3. User: The ultimate beneficiary or victim of the system.

4. Tester: Who is dedicated to finding errors.

MODEL FOR TESTING


A model of the testing process includes three models: a model of the
environment, a model of the program, and a model of the expected bugs.
Environment: A program's environment is the hardware and software required to
make it run. For online systems, the environment may include communication lines,
other systems, terminals, and operators. The environment also includes all programs
that interact with and are used to create the program under test, such as the OS, linkage
editor, loader, compiler, and utility routines.
Program: The program must be simplified into a model in order to test it. If the simple
model of the program doesn't explain the unexpected behavior, we may have to
modify that model to include more facts and details. And if that fails, we may have
to modify the program.
Bugs: Bugs are more insidious (harmful) than we ever expect them to be. An
unexpected test result may lead us to change our notion of what a bug is and our
model of bugs.
Programmers or testers who hold optimistic notions about bugs are usually unable to
test effectively and unable to justify the tests most programs need.
Optimistic notions about bugs:
1. Benign Bug Hypothesis: The belief that bugs are nice, tame and logical.
2. Bug Locality Hypothesis: The belief that a bug discovered with in a component
affects only that component's behavior.

3. Control Bug Dominance: The belief that errors in the control structures (if,
switch, etc.) of programs dominate the bugs.

4. Code / Data Separation: The belief that bugs respect the separation of code and
data.

5. Lingua Salvator Est.: The belief that the language's syntax and semantics (e.g.
structured coding, strong typing, etc.) eliminate most bugs.

6. Corrections Abide: The mistaken belief that a corrected bug remains corrected.

7. Silver Bullets: The mistaken belief that X (Language, Design method,


representation, environment) grants immunity from bugs.

8. Sadism Suffices: The belief that sufficiently tough-minded testing alone will catch the bugs; in reality, tough bugs need methodology and techniques.

9. Angelic Testers: The belief that testers are better at test design than programmers
are at code design.

Tests: Tests are formal procedures. Inputs must be prepared, outcomes must be
predicted, tests must be documented, commands need to be executed, and results are
to be observed. All of these steps are subject to error. We do three distinct kinds of
testing on a typical software system.

They are:

1. Unit / Component Testing: A Unit is the smallest testable piece of software that
can be compiled, assembled, linked, loaded etc. A unit is usually the work of one
programmer and consists of several hundred or fewer lines of code. Unit Testing is
the testing we do to show that the unit does not satisfy its functional specification or
that its implementation structure does not match the intended design structure. A
Component is an integrated aggregate of one or more units.

Component Testing is the testing we do to show that the component does not
satisfy its functional specification or that its implementation structure does not match
the intended design structure.

2. Integration Testing: Integration is the process by which components are
aggregated to create larger components. Integration Testing is testing done to show
that, even though the components were individually satisfactory (after passing
component testing), their combination may be incorrect or inconsistent.

3. System Testing: A system is a big component. System testing is aimed at
revealing bugs that cannot be attributed to components. It includes testing for
performance, security, configuration sensitivity, startup, and recovery.

Role of Models: The art of testing consists of creating, selecting, exploring, and
revising models. Our ability to go through this process depends on the number of
different models we have at hand and their ability to express a program's behavior.
Importance of bugs: The importance of bugs depends on their frequency, correction
cost, installation cost, and consequences.

1. Frequency: How often does that kind of bug occur? Pay more attention to the
more frequent bug types.

2. Correction Cost: What does it cost to correct the bug after it is found? The cost
is the sum of 2 factors:

(1) the cost of discovery

(2) the cost of correction.

These costs go up dramatically the later in the development cycle the bug is
discovered. Correction cost also depends on system size.

3. Installation Cost: Installation cost depends on the number of installations: small
for a single-user program but large for distributed systems.

4. Consequences: What are the consequences of the bug? Bug consequences can
range from mild to catastrophic.

A reasonable metric for bug importance is:

Importance ($) = Frequency * (Correction cost + Installation cost + Consequential cost)

Consequences of bugs: The consequences of a bug can be measured in human
rather than machine terms. Some consequences of a bug are:

1 Mild: The symptoms of the bug impact us gently; for example, a misspelled output
or a misaligned printout.

2 Moderate: Outputs are misleading or redundant. The bug impacts the system's
performance.

3 Annoying: The system's behavior because of the bug is dehumanizing. E.g. Names
are modified.

4 Disturbing: It refuses to handle legitimate (authorized / legal) transactions. The
ATM won't give you money. My credit card is declared invalid.
5 Serious: It loses track of its transactions. Not just the transaction itself but the fact
that the transaction occurred. Accountability is lost.

6 Very Serious: The bug causes the system to do the wrong transactions. Instead of
losing your paycheck, the system credits it to another account or converts deposits
to withdrawals.

7 Extreme: The problems aren't limited to a few users or to a few transaction types.
They are frequent and arbitrary instead of limited to unusual cases.

8 Intolerable: Long term unrecoverable corruption of the database occurs and the
corruption is not easily discovered. Serious consideration is given to shutting the
system down.

9 Catastrophic: The decision to shut down is taken out of our hands because the
system fails.

10 Infectious: One that corrupts other systems even though it does not fail itself;
one that impacts the social or physical environment.

TAXONOMY OF BUGS OR TYPES OF BUGS:

A given bug can be put into one or another category depending on its history and
the programmer's state of mind.

The major categories are:

(1) Requirements, Features and Functionality Bugs

(2) Structural Bugs

(3) Data Bugs

(4) Coding Bugs

(5) Interface, Integration and System Bugs

(6) Test and Test Design Bugs.

Requirements, Features and Functionality Bugs:


Various categories in Requirements, Features and Functionality bugs include:

1. Requirements and Specifications Bugs:

Requirements, and the specifications developed from them, can be incomplete,
ambiguous, or self-contradictory. They can be misunderstood or impossible
to understand. The specifications may change while the design is in progress.
Features are added, modified, and deleted.

2. Feature Bugs: Specification problems usually create corresponding feature
problems. A feature can be wrong, missing, or superfluous (serving no useful
purpose). A missing feature or case is easier to detect and correct. Extra (superfluous)
features might complicate the software, consume more resources, and
breed more bugs.

3. Feature Interaction Bugs: Providing correct, clear, implementable, and testable
feature specifications is not enough. Features usually come in groups of related
features. The features of each group and the interaction of features within the group
are usually well tested. The problem is unpredictable interactions between feature
groups or even between individual features. For example, your telephone is provided
with call holding and call forwarding; the interaction between these two features
may have bugs.

Specification and Feature Bug Remedies:

1. Most feature bugs are rooted in human-to-human communication problems.
One solution is to use high-level, formal specification languages or systems.

2. Short-term support: Specification languages facilitate formalization of
requirements and inconsistency and ambiguity analysis.

3. Long-term support: Assume that we have a great specification language that
can be used to create unambiguous, complete specifications with
unambiguous, complete tests and consistent test criteria.

Testing Techniques for functional bugs: Most functional test techniques - that is,
those techniques which are based on a behavioral description of software, such as
transaction flow testing, syntax testing, domain testing, logic testing, and state
testing - are useful in testing functional bugs.
2. Structural bugs:

Various categories in Structural bugs include:

1. Control and Sequence Bugs:

Control and sequence bugs include paths left out, unreachable code, improper
nesting of loops, loop-back or loop termination criteria incorrect, missing
process steps, duplicated processing, unnecessary processing, GOTO's, ill-
conceived (not properly planned) switches. Most of the control flow bugs are
easily tested and caught in unit testing.

2. Logic Bugs: Bugs in logic, especially those related to misunderstanding how case
statements and logic operators behave singly and in combination. This also includes
improper evaluation of boolean expressions in deeply nested IF-THEN-ELSE
constructs. If the bugs are part of logical (i.e. boolean) processing not related to
control flow, they are characterized as processing bugs.

3. Processing Bugs: Processing bugs include arithmetic bugs, algebraic,


mathematical function evaluation, algorithm selection and general processing.
Examples of Processing bugs include: Incorrect conversion from one data
representation to other, ignoring overflow, improper use of greater-than-or-equal etc.
Although these bugs are frequent (12%), they tend to be caught in unit testing.

4. Initialization Bugs: Initialization bugs are common. Initialization can be both
improper and superfluous; superfluous initialization can affect performance. Typical
initialization bugs include: forgetting to initialize variables before first use,
assuming that they are initialized elsewhere, and initializing to the wrong format,
representation, or type.

5. Data-Flow Bugs and Anomalies: Most initialization bugs are special case of data
flow anomalies. A data flow anomaly occurs where there is a path along which we
expect to do something unreasonable with data, such as using an uninitialized
variable, attempting to use a variable before it exists, modifying and then not storing
or using the result, or initializing twice without an intermediate use.
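
As an illustration, here is a minimal sketch (hypothetical Python code, not from the
text) with two of these anomalies marked in comments; the uninitialized-use anomaly
is described only in a comment because Python raises an error for it at run time.

def demo(values):
    total = 0                 # defined...
    total = 1                 # ...and redefined with no intermediate use (define-define anomaly)
    scratch = sum(values)     # defined, but the result is never used afterwards
    # referring to a variable such as `count` here before assigning it would be
    # the uninitialized-use anomaly (Python would raise UnboundLocalError)
    return total * len(values)

print(demo([1, 2, 3]))        # prints 3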

3. Data bugs: Data bugs include all bugs that arise from the specification of data
objects, their formats, the number of such objects, and their initial values. Data bugs
are at least as common as bugs in code, but they are often treated as if they did not
exist at all. Software is evolving towards programs in which more and more of the
control and processing functions are stored in tables. Because of this, there is an
increasing awareness that data bugs should be given equal attention.

Dynamic Data Vs Static data:

Dynamic data are transitory (not permanent); their lifetime is relatively short. A
storage object may be used to hold dynamic data of different types, with different
formats and attributes. Dynamic data bugs are due to leftover garbage in a shared
resource. This can be handled in ways such as:

(1) Clean-up after use by the user

(2) Common clean-up by the resource manager

Static Data are fixed in form and content. They appear in the source code or
database directly or indirectly, for example a number, a string of characters, or a bit
pattern. Compile time processing will solve the bugs caused by static data.

Information, parameter, and control: Static or dynamic data can serve in one of
three roles, or in combination of roles: as a parameter, for control, or for information.

Content, Structure and Attributes:

Content can be an actual bit pattern, character string, or number put into a data
structure. All data bugs result in the corruption or misinterpretation of content.
Structure relates to the size, shape, and numbers that describe the data object, that is,
the memory layout used to store the content (e.g. a two-dimensional array).
Attributes relate to the specification of meaning, that is, the semantics associated with
the contents of a data object (e.g. an integer, an alphanumeric string, a subroutine).
4. Coding bugs: Coding errors of all kinds can create any of the other kinds of bugs.
Syntax errors are generally not important in the scheme of things if the source
language translator has adequate syntax checking. If a program has many syntax
errors, then we should expect many logic and coding bugs. Documentation bugs
are also considered coding bugs, since they may mislead the programmers.

5.Interface, integration, and system bugs: Various categories of bugs in Interface,


Integration, and System Bugs are:

1. External Interfaces:

The external interfaces are the means used to communicate with the outside world.
These include devices, sensors, input terminals, printers, and communication lines. All
external interfaces, human or machine, should employ a protocol. The protocol may
be wrong or incorrectly implemented. Other external interface bugs include invalid
timing or sequence assumptions related to external signals, and misunderstanding
external input or output formats.

2. Internal Interfaces: The best example of internal interfaces is communicating
routines. The external environment is fixed and the system must adapt to it, but the
internal environment, which consists of interfaces with other components, is under
the project's control. Internal interfaces have the same problems as external
interfaces.

3. Hardware Architecture: Bugs related to hardware architecture originate mostly
from misunderstanding how the hardware works. Examples of hardware architecture
bugs: address generation errors, I/O device operation / instruction errors, waiting too
long for a response, incorrect interrupt handling, etc. The remedy for hardware
architecture and interface problems is twofold:

(1) Good programming and testing

(2) Centralization of the hardware interface software
4. Operating System Bugs: Program bugs related to the operating system are a
combination of hardware architecture and interface bugs, mostly caused by a
misunderstanding of what the operating system does. Use operating system
interface specialists, and use explicit interface modules for all operating system
calls. This approach may not eliminate the bugs, but it will at least localize them and
make testing easier.
5. Software Architecture: Software architecture bugs are the kind often called
"interactive". Routines can pass unit and integration testing without revealing such
bugs. Many of them depend on load, and their symptoms emerge only when the
system is stressed. Examples of such bugs: assumption that there will be no interrupts,
failure to block or unblock interrupts, assumption that memory and registers were
initialized or not initialized, etc.

6.Control and Sequence Bugs (Systems Level): These bugs include: Ignored
timing, Assuming that events occur in a specified sequence, Working on data before
all the data have arrived from disc, Missing, wrong, redundant or superfluous
process steps.

7. Resource Management Problems: Memory is subdivided into dynamically
allocated resources such as buffer blocks, queue blocks, and task control blocks.
External mass storage units, such as discs, are subdivided into memory resource pools.

8. Integration Bugs: Integration bugs are bugs having to do with the integration of,
and with the interfaces between, working and tested components. These bugs result
from inconsistencies or incompatibilities between components. Communication
methods such as data structures, call sequences, registers, communication links, and
protocols are common sources of integration bugs.
9. System Bugs: System bugs cover all kinds of bugs that cannot be assigned to
a component or to their simple interactions, but result from the totality of interactions
between many components such as programs, data, hardware, and the operating
system.
6. TEST AND TEST DESIGN BUGS: Testers have no immunity to bugs. Tests
require complicated scenarios and databases. They require code, or the equivalent, to
execute, and consequently they can have bugs.
Test criteria: Even if the specification is correct, correctly interpreted and
implemented, and a proper test has been designed, the criterion by which the
software's behavior is judged may be incorrect or impossible. So a proper test
criterion has to be designed. The more complicated the criteria, the likelier they are to
have bugs.
Remedies: The remedies of test bugs are:
1. Test Debugging: The first remedy for test bugs is testing and debugging the tests.
Test debugging, when compared to program debugging, is easier because tests, when
properly designed are simpler than programs.
2. Test Quality Assurance: Programmers have the right to ask how quality in
independent testing is monitored.
3. Test Execution Automation:
Assemblers, loaders, and compilers were developed to reduce the incidence of
programming and operation errors. Test execution bugs are virtually eliminated by
various test execution automation tools.
4. Test Design Automation: Just as much of software development has been
automated, much test design can be and has been automated.
Types of Testing
Manual Testing

This type includes testing the software manually, i.e. without using any
automated tool or script. In this type the tester takes over the role of an end user
and tests the software to identify any unexpected behavior or bug. There are different
stages for manual testing, like unit testing, integration testing, system testing, and
user acceptance testing. Testers use test plans, test cases, or test scenarios to test the
software to ensure the completeness of testing. Manual testing also includes
exploratory testing, as testers explore the software to identify errors in it.

Automation Testing

Automation testing, which is also known as "Test Automation", is when the tester
writes scripts and uses other software to test the software under test. Automation
testing is used to re-run the test scenarios that were performed manually, quickly
and repeatedly. Automation testing is also used to test the application from the load,
performance, and stress points of view. It increases test coverage, improves
accuracy, and saves time and money in comparison to manual testing.

How to Automate:

Automation is done by using a supportive computer language like VB scripting and
an automated software application. There are a lot of tools available which can be
used to write automation scripts.

Before mentioning the tools, let's identify the process which can be used to automate
the testing:

• Identifying areas within the software for automation.

• Selection of the appropriate tool for test automation.

• Writing test scripts.

• Development of test suites.

• Execution of scripts.

• Creation of result reports.

• Identification of any potential bugs or performance issues.

Following are the tools which can be used for automation testing (a minimal script
sketch follows the list):

• HP Quick Test Professional

• Selenium

• IBM Rational Functional Tester

• SilkTest

• TestComplete

• Testing Anywhere

• WinRunner

• LoadRunner

• Visual Studio Test Professional
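
As an illustration of such a script, here is a minimal sketch using the Selenium
WebDriver Python bindings; the URL, element IDs, and expected page title are
hypothetical placeholders, not taken from the text above.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                      # launch a browser session
try:
    driver.get("https://example.com/login")      # hypothetical page under test
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    # Compare the actual result with the expected result.
    assert "Dashboard" in driver.title, "login did not reach the dashboard"
    print("Test passed")
finally:
    driver.quit()                                # always release the browser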

Testing Methods

There are different methods which can be used for Software testing.

Black Box Testing

The technique of testing without having any knowledge of the interior workings of
the application is black box testing. The tester is unaware of the system architecture
and does not have access to the source code. Typically, when performing a black
box test, a tester will interact with the system's user interface by providing inputs
and examining outputs without knowing how and where the inputs are worked upon.

White Box Testing

White box testing is the detailed investigation of the internal logic and structure of the
code. White box testing is also called glass box testing or open box testing. In order to
perform white box testing on an application, the tester needs to possess knowledge
of the internal workings of the code. The tester needs to look inside the source
code and find out which unit/chunk of the code is behaving inappropriately.

Grey Box Testing

Grey Box testing is a technique to test the application with limited knowledge of the
internal workings of an application. Unlike black box testing, where the tester only
tests the application’s user interface, in grey box testing, the tester has access to
design documents and the database. Having this knowledge, the tester is able to
better prepare test data and test scenarios when making the test plan.
Following are the main levels of Software Testing:

• Functional Testing

• Non-functional Testing

Functional Testing

This is a type of black box testing that is based on the specifications of the software
that is to be tested. The application is tested by providing input and then the results
are examined, which need to conform to the functionality it was intended for. There
are five steps that are involved when testing an application for functionality.

Step I - The determination of the functionality that the intended application is
meant to perform.

Step II - The creation of test data based on the specifications of the application.

Step III - The determination of the expected output based on the test data and the
specifications of the application.

Step IV - The writing of test scenarios and the execution of test cases.

Step V - The comparison of actual and expected results based on the executed test
cases.

Unit Testing

Unit testing is performed by the respective developers on the individual units of
source code in their assigned areas. The goal of unit testing is to isolate each part of
the program and show that the individual parts are correct in terms of requirements
and functionality.
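
A minimal sketch of a unit test is shown below, assuming a hypothetical function
add(a, b) as the unit under test and using Python's built-in unittest framework.

import unittest

def add(a, b):
    """The unit under test (a stand-in for a real module function)."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)     # actual result matches expected result

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -4), -5)

if __name__ == "__main__":
    unittest.main()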

Limitations of Unit Testing

Testing cannot catch each and every bug in an application. It is impossible to
evaluate every execution path in every software application. The same is the case
with unit testing. There is a limit to the number of scenarios and test data that the
developer can use to verify the source code.
Integration Testing

The testing of combined parts of an application to determine if they function
correctly together is integration testing. There are two methods of doing integration
testing: bottom-up integration testing and top-down integration testing.

Bottom-up integration testing begins with unit testing, followed by tests of
progressively higher-level combinations of units called modules or builds.

In top-down integration testing, the highest-level modules are tested first and
progressively lower-level modules are tested after that. In a comprehensive software
development environment, bottom-up testing is usually done first, followed by top-
down testing.

System Testing

This is the next level of testing and tests the system as a whole. Once all the
components are integrated, the application as a whole is tested to see that it meets
the quality standards. This type of testing is performed by a specialized testing team.

In system testing the application is tested as a whole, and it is tested
thoroughly to verify that it meets the functional and technical specifications.

Regression Testing

Whenever a change in a software application is made, it is quite possible that other
areas within the application have been affected by this change. The intent of
regression testing is to ensure that a change, such as a bug fix, did not result in
another fault being uncovered in the application.

When to do regression testing?

• When new functionality is added to the system and the code has been
modified to absorb and integrate that functionality with the existing code.

• When some defect has been identified in the software and the code is
debugged to fix it.

• When testing the new changes, to verify that the change made did not affect any
other area of the application.
Acceptance Testing

This is the most important type of testing, as it is conducted by the Quality
Assurance team, who will find out whether the application meets the intended
specifications and satisfies the client's requirements. The QA team will have a set
of pre-written scenarios and test cases that will be used to test the application.
Acceptance tests are intended not only to point out simple spelling mistakes and
interface gaps, but also to point out any bugs in the application that will result in
system crashes or major errors in the application.

There are defined requirements for acceptance of the system.

Alpha Testing

This test is the first stage of testing and will be performed amongst the teams
(developer and QA teams). Unit testing, integration testing, and system testing, when
combined, are known as alpha testing. During this phase, the following will be tested
in the application:

• Spelling mistakes

• Broken links

• Cloudy (unclear) directions

Beta Testing

This test is performed after Alpha testing has been successfully performed. In beta
testing a sample of the intended audience tests the application. Beta testing is also
known as pre-release testing. Beta test versions of software are ideally distributed to
a wide audience on the Web, partly to give the program a "real-world" test and partly
to provide a preview of the next release.

In this phase the audience will be testing the following:

Users will install and run the application and send their feedback to the project team.

Using this feedback, the project team can fix the problems before releasing the
software to the actual users.
The more issues you fix that solve real user problems, the higher the quality of the
application will be.

This will increase customer satisfaction.


Non-Functional Testing

This section is based upon the testing of the application from its non-functional attributes. Non-
functional testing of software involves testing the software against requirements which are non-
functional in nature, such as performance, security, and user interface. Some of the important
and commonly used non-functional testing types are mentioned as follows.

Performance Testing

It is mostly used to identify any performance issues rather than finding bugs in the software. There
are different causes which contribute to lowering the performance of software:

• Network delay.

• Client-side processing.

• Database transaction processing.

• Load balancing between servers.

Performance testing is considered one of the important and mandatory testing types in terms of
the following aspects:

Speed (i.e. response time, accessing)

Capacity

Stability

It can be divided into different sub-types such as load testing and stress testing.

Load Testing

A process of testing the behavior of the Software by applying maximum load in terms of Software
accessing and manipulating large input data. It can be done at both normal and peak load
conditions. This type of testing identifies the maximum capacity of Software and its behavior at
peak time. Most of the time, Load testing is performed with the help of automated tools such as
Load Runner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer,
Visual Studio Load Test etc.

Stress Testing

This testing type includes the testing of software behavior under abnormal conditions. Taking
away the resources, or applying load beyond the actual load limit, is stress testing. The main intent
is to test the software by applying load to the system and taking away the resources used by the
software to identify the breaking point. This testing can be performed by testing different scenarios
such as:

Shutdown or restart of the system.

Turning the database on or off.

Running different processes that consume resources such as CPU, memory, server, etc.

Usability Testing

This section includes different concepts and definitions of Usability testing from Software point
of view. It is a black box technique and is used to identify any error(s) and improvements in the
Software by observing the users through their usage and operation.

Usability Testing involves the testing of Graphical User Interface of the Software. This testing
ensures that the GUI should be according to requirements in terms of color, alignment, size and
other properties. On the other hand Usability testing ensures that a good and user friendly GUI is
designed and is easy to use for the end user.

Security Testing

Security testing involves the testing of software in order to identify any flaws and gaps from the
security and vulnerability point of view.

Following are the main aspects which Security testing should ensure:

• Confidentiality.
• Integrity.
• Authentication.
• Authorization.
• Input checking and validation.
• SQL injection attacks.

Portability Testing

Portability testing includes the testing of software with the intent that it should be reusable and can
be moved from one environment to another as well.

Portability testing can be considered one of the sub-parts of system testing, as this testing type
includes the overall testing of software with respect to its usage over different environments.
Computer hardware, operating systems, and browsers are the major focus of portability testing.

Following are some pre-conditions for Portability testing:

Software should be designed and coded, keeping in mind Portability Requirements. Unit testing
has been performed on the associated components.

Integration testing has been performed.

Test environment has been established.

BASICS OF PATH TESTING:

Path Testing:

• Path testing is the name given to a family of test techniques based on selecting a set of test
paths through the program.
• If the set of paths is properly chosen, then we have achieved some measure of test
thoroughness.
• For example, pick enough paths to assure that every source statement has been executed at
least once.
• Path testing techniques are the oldest of all structural test techniques.
• Path testing is most applicable to new software for unit testing.
• It is a structural technique. It requires complete knowledge of the program's structure.
• It is most often used by programmers to unit test their own code.
• The effectiveness of path testing rapidly deteriorates as the size of the software aggregate
under test increases.
The Bug Assumption:

• The bug assumption for the path testing strategies is that something has gone wrong with
the software so that it takes a different path than intended.
• As an example, "GOTO X" where "GOTO Y" had been intended.
• Structured programming languages prevent many of the bugs targeted by path testing.
• For old code in COBOL, ALP, FORTRAN, and BASIC, path testing is indispensable.
Control Flow Graphs:

The control flow graph is a graphical representation of a program's control structure. It uses
elements named process blocks, decisions, junctions, and case statements. The flow graph
resembles the familiar flowchart.

Flow Graph Elements:

A flow graph contains four different types of elements.


(1) Process Block

(2) Decisions

(3) Junctions

(4) Case Statements

1. Process Block:

• A process block is a sequence of program statements uninterrupted by either decisions or
junctions.
• It is a sequence of statements such that if any one statement of the block is executed,
then all statements of the block are executed.
• Formally, a process block is a piece of straight-line code of one statement or hundreds of
statements.
• A process has one entry and one exit. It can consist of a single statement or instruction, or a
sequence of statements or instructions.
2. Decisions:

• A decision is a program point at which the control flow can diverge.


• Machine language conditional branch and conditional skip instructions are examples of
decisions.
• Most of the decisions are two-way but some are three way branches in control flow.
3. Case Statements:

• A case statement is a multiway branch or decision.
• Examples of case statements are a jump table in assembly language and the PASCAL case
statement.
• From the point of view of test design, there are no differences between decisions and case
statements.
4. Junctions:

• A junction is a point in the program where the control flow can merge.
• Examples of junctions are: the target of a jump or skip instruction in ALP, a label that is a
target of GOTO.

Control Flow Graphs Vs Flowcharts:

• A program's flow chart resembles a control flow graph.


• In flow graphs, we don't show the details of what is in a process block.
• In flow charts every part of the process block is drawn.
• The flowchart focuses on process steps, where as the flow graph focuses on control flow
of the program.
• The act of drawing a control flow graph is a useful tool that can help us clarify the control
flow and data flow issues.
In the final transformation of the flow graph, the node labels are dropped to achieve
an even simpler representation. The way to work with control flow graphs is to use
the simplest possible representation.

FLOWGRAPH AND FLOWCHART GENERATION:


Flowcharts can be
1. Handwritten by the programmer.
2. Automatically produced by a flowcharting program based on a mechanical
analysis of the source code.
3. Semi automatically produced by a flow charting program based in part on
structural analysis of the source code and in part on directions given by the
programmer.
There are relatively few control flow graph generators.
PATH TESTING - PATHS, NODES AND LINKS:
Path: A path through a program is a sequence of instructions or statements that starts
at an entry, junction, or decision and ends at another, or possibly the same, junction,
decision, or exit.
A path may go through several junctions, processes, or decisions, one or more times.
Paths consist of segments.
The smallest segment is a link - a single process that lies between two nodes.
A path segment is a succession of consecutive links that belongs to some path. The
length of a path is measured by the number of links in it and not by the number of
instructions or statements executed along that path. The name of a path is the names
of the nodes along the path.
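A minimal sketch (hypothetical nodes and link letters, not from the text) of how a
flow graph, its links, path names, and path lengths might be represented:

# Each entry maps a node to the links leaving it: (link_name, target_node).
flow_graph = {
    "1": [("a", "2")],
    "2": [("b", "3"), ("c", "4")],   # node 2 is a decision
    "3": [("d", "5")],
    "4": [("e", "5")],               # node 5 is a junction
    "5": [],                         # exit
}

def path_length(links):
    """Path length is the number of links traversed, not statements."""
    return len(links)

def path_name(nodes):
    """The name of a path is the names of the nodes along it."""
    return "-".join(nodes)

# Example: the path 1-2-3-5 uses links a, b, d and has length 3.
print(path_name(["1", "2", "3", "5"]), path_length(["a", "b", "d"]))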
FUNDAMENTAL PATH SELECTION CRITERIA:
There are many paths between the entry and exit of a typical routine.
Every decision doubles the number of potential paths,
and every loop multiplies the number of potential paths by the number of different
iteration values possible for the loop.
Defining complete testing:
1. Exercise every path from entry to exit.
2. Exercise every statement or instruction at least once.
3. Exercise every branch and case statement, in each direction at least once.
Path testing criteria
Any testing strategy based on paths must at least both exercise every instruction
and take branches in all directions. So we have used three different testing criteria
or strategies out of a potentially infinite family of strategies.
i. Path Testing (Pinf):
1. Execute all possible control flow paths through the program: typically, this is
restricted to all possible entry/exit paths through the program.
2. If we achieve this prescription, we are said to have achieved 100% path
coverage. This is the strongest criterion in the path testing strategy family: it is
generally impossible to achieve.
ii. Statement Testing (P1):
1. Execute all statements in the program at least once under some test. If we do
enough tests to achieve this, we are said to have achieved 100% statement
coverage.
2. We can say that we have achieved 100% node coverage. We denote this by C1.
iii. Branch Testing (P2): 1. Execute enough tests to assure that every branch
alternative has been exercised at least once under some test.
2. If we do enough tests to achieve this prescription, then we have achieved 100%
branch coverage.
3. An alternative characterization is to say that we have achieved 100% link
coverage.
4. For structured software, branch testing and therefore branch coverage strictly
includes statement coverage.
5. We denote branch coverage by C2.
Which paths should be tested?
You must pick enough paths to achieve C1+C2 (a small example follows). It is better
to take many simple paths than a few complicated paths.
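As a small illustration (the routine below is hypothetical), a single test of a routine
with one decision leaves C1 unmet because the else branch never executes; two
simple tests cover every statement and both branch directions, achieving C1+C2.

def sign(x):
    if x >= 0:
        label = "non-negative"
    else:
        label = "negative"
    return label

# One test (x = 5) exercises only the TRUE branch, so statement coverage (C1)
# is incomplete. Two tests cover every statement and both branch directions:
for x in (5, -5):
    print(x, sign(x))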
Practical Suggestions in Path Testing:
1. Draw the control flow graph on a single sheet of paper.
2. Make several copies - as many as you will need for coverage (C1+C2) and
several more.
3. Use a yellow highlighting marker to trace paths. Copy the paths onto master
sheets.
4. Continue tracing paths until all lines on the master sheet are covered, indicating
that you appear to have achieved C1+C2.
5. As you trace the paths, create a table that shows the paths, the coverage status of
each process, and each decision.
6. Record each traced path, and the processes and decisions it covers, as a row in this table.
7. After you have traced a covering path set on the master sheet and filled in the
table for every path, check the following:
1. Does every decision have a YES and a NO in its column? (C2)
2. Has every case of all case statements been marked? (C2)
3. Is every three - way branch (less, equal, greater) covered? (C2)
4. Is every link (process) covered at least once? (C1)

Loop Software Testing


Loop testing is a type of software testing that is performed to validate loops. It is one
of the types of control structure testing. Loop testing is a white box testing technique and is used
to test loops in the program.

Objectives of Loop Testing:


The objective of Loop Testing is:

• To expose infinite loop repetition problems.

• To assess loop performance.
• To identify loop initialization problems.
• To determine uninitialized variables.
Types of Loop Testing:
Loop testing is classified on the basis of the types of the loops:

Simple Loop Testing:


Testing performed on a simple loop is known as simple loop testing. A simple loop is basically
a normal "for", "while" or "do-while" loop in which a condition is given and the loop runs and
terminates according to the true or false outcome of the condition. This type of
testing is performed basically to test the condition of the loop, i.e. whether the condition is
sufficient to terminate the loop after some point in time.

A simple loop is tested in the following way (a sketch follows the list):

1. Skip the entire loop.

2. Make one pass through the loop.
3. Make two passes through the loop.
4. Make m passes through the loop, where m < n and n is the maximum number of
allowable passes through the loop.
5. Make n-1, n, and n+1 passes through the loop, where "n" is the maximum number of
allowable passes through the loop.
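
A minimal sketch of these cases applied to a hypothetical loop, assuming the
maximum number of allowable passes is n = 10 (enforced here by the limit argument):

def sum_first(values, limit=10):
    """Add up at most `limit` values; this loop is the unit under test."""
    total = 0
    for i, v in enumerate(values):
        if i >= limit:              # loop termination criterion
            break
        total += v
    return total

n = 10
pass_counts = [0, 1, 2, 5, n - 1, n, n + 1]   # skip, one, two, m<n, n-1, n, n+1
for count in pass_counts:
    values = [1] * count
    print(count, "requested passes ->", sum_first(values, n))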

Nested Loop

Testing performed on a nested loop is known as nested loop testing. A nested loop is basically
one loop inside another loop. In a nested loop there can be a finite number of loops inside a
loop, and thus a nest is made. Each may be any of the three loop types, i.e. for, while, or do-
while.

For a nested loop, you need to follow these steps (a sketch follows the list):

1. Set all the other loops to their minimum values and start at the innermost loop.
2. For the innermost loop, perform a simple loop test while holding the outer loops at their
minimum iteration parameter values.

3. Perform the test for the next loop and work outward.

4. Continue until the outermost loop has been tested.
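
A sketch of this procedure on a hypothetical two-level nested loop: the outer loop is
held at its minimum iteration count while the simple-loop cases are applied to the
innermost loop (n = 10 is an assumed maximum), after which you would work outward.

def grid_sum(rows, cols):
    total = 0
    for r in range(rows):          # outer loop
        for c in range(cols):      # innermost loop
            total += (r + 1) * (c + 1)
    return total

n = 10
for cols in (0, 1, 2, 5, n - 1, n, n + 1):   # simple-loop cases on the inner loop
    print("rows=1, cols=%d ->" % cols, grid_sum(1, cols))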

Concatenated Loops

Testing performed on concatenated loops is known as concatenated loop testing. Concatenated
loops are loops that follow one another: a series of loops. The difference between nested and
concatenated loops is that in a nested loop one loop is inside the other, whereas here one loop
comes after the other.

In concatenated loops, if the two loops are independent of each other then they are tested
using simple loop tests, otherwise they are tested as nested loops.
However, if the loop counter for one loop is used as the initial value for the other, then they are
not considered independent loops.

Unstructured Loops
Testing performed in an unstructured loop is known as Unstructured loop testing.
Unstructured loop is the combination of nested and concatenated loops. It is basically a group
of loops that are in no order.

For unstructured loops, it requires restructuring of the design to reflect the use of the
structured programming constructs.

Limitation in Loop testing

• Loop bugs show up mostly in low-level software.

• The bugs identified during loop testing are not very subtle.
• Many of the bugs might be detected by the operating system, because they cause memory
boundary violations, detectable pointer errors, etc.

Summary:
• In Software Engineering, Loop testing is a White Box Testing. This technique is used to test
loops in the program.
• Loops testing can reveal performance/capacity bottlenecks
• Loop bugs show up mostly in low-level software
PREDICATES, PATH PREDICATES AND ACHIEVABLE PATHS:
PREDICATE:

The logical function evaluated at a decision is called a predicate. The direction taken
at a decision depends on the value of the decision variables.

Some examples are: A > 0, x + y >= 90.

PATH PREDICATE: A predicate associated with a path is called a path predicate.

For example, "x is greater than zero" and "x + y >= 90" form a sequence of predicates
whose truth values will cause the routine to take a specific path.
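
A minimal hypothetical routine illustrating this: the path that takes both TRUE
branches has the path predicate {x > 0, x + y >= 90}.

def route(x, y):
    if x > 0:              # predicate 1
        if x + y >= 90:    # predicate 2
            return "path A"    # requires x > 0 AND x + y >= 90
        return "path B"        # requires x > 0 AND x + y < 90
    return "path C"            # requires x <= 0

print(route(10, 85))   # satisfies both predicates -> "path A"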

MULTIWAY BRANCHES:

The path taken through a multiway branch such as a computed GOTO, case
statement, or jump table cannot be directly expressed in TRUE/FALSE terms.
Although it is possible to describe such alternatives by using multivalued logic, the
practical approach is to express multiway branches as an equivalent set of
if..then..else statements. For example, a three-way case statement can be written as:
IF case = 1
    DO A1
ELSE IF case = 2
    DO A2
ELSE
    DO A3
ENDIF

INPUTS:

In testing, the word input is not restricted to direct inputs, such as variables in a
subroutine call, but includes all data objects referenced by the routine whose values
are fixed prior to entering it: for example, inputs in a calling sequence, objects in a
data structure, values left in registers, or any combination of object types. The input
for a particular test is mapped into a one-dimensional array called an input vector.

PREDICATE INTERPRETATION:
The simplest predicate depends only on input variables. For example, if x1 and x2
are inputs, the predicate might be x1 + x2 >= 7; given the values of x1 and x2, the
direction taken through the decision is determined at input time and does not depend
on processing.

As another example, assume a predicate x1 + y >= 0 and that along a path prior to
reaching this predicate we had the assignment statement y = x2 + 7. Although our
predicate depends on processing, we can substitute the symbolic expression for y to
obtain an equivalent predicate x1 + x2 + 7 >= 0.

The act of symbolic substitution of operations along the path in order to
express the predicate solely in terms of the input vector is called predicate
interpretation.
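A sketch of this substitution using the third-party sympy library (an assumption; any
symbolic algebra tool, or substitution by hand, would do): the assignment y = x2 + 7
is substituted into the predicate x1 + y >= 0 to express it purely in terms of the
input vector (x1, x2).

import sympy as sp

x1, x2, y = sp.symbols("x1 x2 y")

predicate = x1 + y >= 0                    # predicate as written in the code
interpreted = predicate.subs(y, x2 + 7)    # apply the assignment y = x2 + 7

print(interpreted)                         # x1 + x2 + 7 >= 0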

Sometimes the interpretation may depend on the path;

for example,

INPUT X

ON X GOTO A, B, C, ...

A: Z := 7 @ GOTO HN

B: Z := - 7 @ GOTO HN

C: Z := 0 @ GOTO HN

HN: IF Y + Z > 0

GOTO ELL

ELSE

GOTO EMM

The predicate interpretation at HN depends on the path we took through the
first multiway branch. It yields, for the three cases respectively, Y + 7 > 0, Y - 7 > 0,
and Y > 0. The path predicates are the specific form of the predicates of the decisions
along the selected path after interpretation.
INDEPENDENCE OF VARIABLES AND PREDICATES:
The path predicates take on truth values based on the values of input
variables, either directly or indirectly. If a variable's value does not change
as a result of processing, that variable is independent of the processing. If
the variable's value can change as a result of the processing, the variable
is process dependent.
CORRELATION OF VARIABLES AND PREDICATES:
A pair of predicates whose outcomes depend on one or more variables in
common are said to be correlated predicates. For example, the predicate
X == Y is followed by another predicate X + Y == 8. If we select X and Y
values to satisfy the first predicate, we might have forced the second
predicate's truth value to change.
PATH PREDICATE EXPRESSIONS:
A path predicate expression is a set of boolean expressions, all of which
must be satisfied to achieve the selected path.
Example: X1+3X2+17>=0
X3=17
X4-X1>=14X2
Any set of input values that satisfy all of the conditions of the path
predicate expression will force the routine to the path.
Sometimes a predicate can have an OR in it.
Example:
A: X5 > 0
B: X1 + 3X2 + 17 >= 0
C: X3 = 17
D: X4 - X1 >= 14X2
E: X6 < 0
B: X1 + 3X2 + 17 >= 0
C: X3 = 17
D: X4 - X1 >= 14X2
Boolean algebra notation to denote the boolean expression:
ABCD+EBCD=(A+E)BCD
Compound Predicate: Predicates of the form A OR B, A AND B,
and more complicated boolean expressions are called compound
predicates. Sometimes even a simple predicate becomes compound
after interpretation. Example: the predicate if (x = 17), whose opposite
branch is x != 17, which is equivalent to the compound predicate
x > 17 OR x < 17.

Predicate coverage means that all possible combinations of truth values
corresponding to the selected path have been explored under some test.
TESTING BLINDNESS: Testing Blindness is a pathological(harmful)
situation in which the desired path is achieved for the wrong reason.
There are three types of Testing Blindness:
Assignment Blindness:
Assignment blindness occurs when the buggy predicate appears to work
correctly because the specific value chosen for an assignment statement works
with both the correct and incorrect predicate.
For example (sketched below): if the test case sets Y = 1, the desired path is taken in
either case, but there is still a bug.
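A sketch of the kind of case meant here (the assignment X = 7 and the two
predicates are illustrative assumptions, not from the text):

X = 7                      # assignment statement along the path

Y = 1                      # test case input
correct_branch = Y > 0     # intended predicate
buggy_branch = X + Y > 0   # buggy predicate actually coded

# Both evaluate to True for Y = 1, so the desired path is taken in either
# case and the bug goes unnoticed.
print(correct_branch, buggy_branch)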
Equality Blindness: Equality blindness occurs when the path selected by a
prior predicate results in a value that works both for the correct and buggy
predicate.
For example (sketched below): the first predicate, if y = 2, forces the rest of the path,
so that for any positive value of x the path taken at the second predicate will be the
same for the correct and buggy versions.
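A sketch with illustrative predicates (assumptions): the forced value y = 2 makes
the correct predicate x + y > 3 and the buggy predicate x > 1 indistinguishable.

def correct(x, y):
    return x + y > 3          # intended second predicate

def buggy(x, y):
    return x > 1              # buggy second predicate actually coded

y = 2                         # value forced by the first predicate
for x in (0, 1, 2, 5):
    assert correct(x, y) == buggy(x, y)   # they agree along this path
print("predicates agree whenever y == 2")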
Self Blindness:
Self blindness occurs when the buggy predicate is a multiple of the correct
predicate and as a result is indistinguishable along that path.

For example (sketched below): the assignment (x = a) makes the predicates multiples
of each other, so the direction taken is the same for the correct and buggy versions.
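
A sketch with illustrative values (assumptions): after X = A, the buggy predicate
X + A < 0 equals 2*X < 0, a multiple of the correct predicate X < 0.

A = -3
X = A                         # assignment along the path

correct_branch = X < 0        # intended predicate
buggy_branch = X + A < 0      # buggy predicate; equals 2*X < 0 on this path

print(correct_branch, buggy_branch)   # identical along this path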

PATH SENSITIZING:
Achievable and unachievable paths:
1. Select and test enough paths to achieve a satisfactory notion of test
completeness, such as C1+C2.
2. Extract the programs control flow graph and select a set of covering paths.
3. For any path in that set, interpret the predicates along the path as needed to
express them in terms of the input vector. In general individual predicates are
compound or may become compound as a result of interpretation.
4. Trace the path through, multiplying the individual compound predicates to
achieve a boolean expression such as
(A+BC) (D+E) (FGH) (IJ) (K) (l) (L).
5. Multiply out the expression to achieve a sum of products form:
ADFGHIJKL+AEFGHIJKL+BCDFGHIJKL+BCEFGHIJKL
6. Each product term denotes a set of inequalities that if solved will yield an input
vector that will drive the routine along the designated path.
7. Solve any one of the inequality sets for the chosen path and you have found a
set of input values for the path.
8. If you can find a solution, then the path is achievable.
9. If you can't find a solution to any of the sets of inequalities, the path is
unachievable.
10. The act of finding a set of solutions to the path predicate expression is
called PATH SENSITIZATION (a brute-force sketch follows this list).
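A brute-force sketch of sensitizing one product term, reusing the predicates B, C,
and D from the path predicate expression example above; the search ranges are
assumptions. Finding any satisfying input vector shows the path is achievable.

from itertools import product

def term_satisfied(x1, x2, x3, x4):
    return (x1 + 3 * x2 + 17 >= 0      # B
            and x3 == 17               # C
            and x4 - x1 >= 14 * x2)    # D

solution = None
for x1, x2, x3, x4 in product(range(-20, 21), range(-5, 6), [17], range(-50, 51)):
    if term_satisfied(x1, x2, x3, x4):
        solution = (x1, x2, x3, x4)    # an input vector that drives this path
        break

print("achievable with input vector:", solution)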
HEURISTIC PROCEDURES FOR SENSITIZING PATHS:
1. This is a workable approach: instead of selecting the paths without considering
how to sensitize them, attempt to choose a covering path set that is easy to sensitize,
and pick hard-to-sensitize paths only as you must to achieve coverage.
2. Identify all variables that affect the decisions.
3. Classify the predicates as dependent or independent.
4. Start the path selection with uncorrelated, independent predicates.
5. If coverage has not been achieved using independent uncorrelated predicates,
extend the path set using correlated predicates.
6. If coverage has not been achieved, extend the cases to those that involve
dependent predicates.
7. Last, use correlated, dependent predicates.
PATH INSTRUMENTATION:
Path instrumentation is what we have to do to confirm that the outcome was
achieved by the intended path.
Co-incidental Correctness:
Coincidental correctness means achieving the desired outcome for the wrong
reason.
Consider, for example, a routine that, for the (unfortunately) chosen input
value (X = 16), yields the same outcome (Y = 2) no matter which case we select.
Therefore, the tests chosen this way will not tell us whether we have achieved
coverage. For example, the five cases could be totally confused and still the outcome
would be the same. Path instrumentation is what we have to do to confirm that the
outcome was achieved by the intended path.
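A sketch of the situation (the five case computations below are assumptions chosen
so that X = 16 yields Y = 2 in every case): the outcome alone cannot tell us which
case, and hence which path, was actually taken.

import math

X = 16
cases = {
    "case 1": X / 8,                 # 2.0
    "case 2": math.sqrt(X) / 2,      # 2.0
    "case 3": math.log2(X) / 2,      # 2.0
    "case 4": X - 14,                # 2
    "case 5": X % 7,                 # 2
}
print(cases)                         # every case yields Y = 2 for X = 16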
The types of instrumentation methods include:
1. Interpretive Trace Program: An interpretive trace program is one that executes
every statement in order and records the intermediate values of all calculations, the
statement labels traversed, etc. If we run the tested routine under a trace, then we
have all the information we need to confirm the outcome and, furthermore, to
confirm that it was achieved by the intended path. The trouble with traces is that
they give us far more information than we need.

2. Traversal Marker or Link Marker: A simple and effective form of
instrumentation is called a traversal marker or link marker. Name every link by
a lower-case letter. Instrument the links so that the link's name is recorded
when the link is executed. The succession of letters produced in going from the
routine's entry to its exit should, if there are no bugs, exactly correspond to the
path name, thereby confirming the path.
3. Link Counter:
A less disruptive (and less informative) instrumentation method is based on
counters. Instead of a unique link name to be pushed into a string when the link
is traversed, we simply increment a link counter. We now confirm that the path
length is as expected. The same problem that led us to double link markers also
leads us to double link counters.
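A minimal instrumentation sketch in C (the link names, macro, and routine are assumptions for illustration): each link appends its lower case name to a trace string (traversal marker) and increments a counter (link counter), so the trace can be compared to the expected path name and the counter to the expected path length.

    #include <string.h>

    char trace[64];            /* succession of link names, e.g. "abd" */
    int  link_count;           /* number of links traversed            */

    #define MARK(name) do { strcat(trace, name); link_count++; } while (0)

    void routine(int x) {
        trace[0] = '\0';  link_count = 0;
        MARK("a");
        if (x > 0) { MARK("b"); /* ... */ }
        else       { MARK("c"); /* ... */ }
        MARK("d");
        /* expected for x > 0: trace equals "abd" and link_count equals 3 */
    }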
MODULE 2

White Box Testing

WHITE BOX TESTING is testing of a software solution's internal structure, design, and
coding. In this type of testing, the code is visible to the tester. It focuses primarily on verifying
the flow of inputs and outputs through the application, improving design and usability, and
strengthening security. White box testing is also known as Clear Box testing, Open Box
testing, Structural testing, Transparent Box testing, Code-Based testing, and Glass Box
testing. It is usually performed by developers.

It is one of two parts of the Box Testing approach to software testing. Its counterpart, Blackbox
testing, involves testing from an external or end-user type perspective. On the other hand,
Whitebox testing is based on the inner workings of an application and revolves around internal
testing.

The term "WhiteBox" was used because of the see-through box concept. The clear box or
WhiteBox name symbolizes the ability to see through the software's outer shell into its inner
workings. Likewise, the "black box" in "Black Box Testing" symbolizes not being able to see
the inner workings of the software so that only the end-user experience can be tested.

White box testing involves the testing of the software code for the following:

• Internal security holes

• Broken or poorly structured paths in the coding processes

• The flow of specific inputs through the code

• Expected output

• The functionality of conditional loops

• Testing of each statement, object, and function on an individual basis

The testing can be done at system, integration and unit levels of software development. One of
the basic goals of whitebox testing is to verify a working flow for an application. It involves
testing a series of predefined inputs against expected or desired outputs so that when a specific
input does not result in the expected output, you have encountered a bug.
How to perform White Box Testing?

To understand white box testing, we have divided it into two basic steps. This is what testers
do when testing an application using the white box testing technique:

STEP 1) UNDERSTAND THE SOURCE CODE

The first thing a tester will often do is learn and understand the source code of the application.
Since white box testing involves the testing of the inner workings of an application, the tester
must be very knowledgeable in the programming languages used in the applications they are
testing. Also, the testing person must be highly aware of secure coding practices. Security is
often one of the primary objectives of testing software. The tester should be able to find security
issues and prevent attacks from hackers and naive users who might inject malicious code into
the application either knowingly or unknowingly.

Step 2) CREATE TEST CASES AND EXECUTE

The second basic step to white box testing involves testing the application's source code for
proper flow and structure. One way is by writing more code to test the application's source
code. The tester will develop little tests for each process or series of processes in the
application. This method requires that the tester must have intimate knowledge of the code and
is often done by the developer. Other methods include manual testing, trial and error testing,
and the use of testing tools.

WhiteBox Testing Example

Consider the following piece of code

#include <stdio.h>

void Printme(int a, int b) {               /* Printme is a function   */
    int result = a + b;
    if (result > 0)
        printf("Positive %d\n", result);
    else
        printf("Negative %d\n", result);
}                                           /* end of the source code */


The goal of WhiteBox testing is to verify all the decision branches, loops, statements in the
code.

To exercise the statements in the above code, WhiteBox test cases would be

• A = 1, B = 1

• A = -1, B = -3
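A minimal driver for the two test cases above (the main function is an assumed addition, not part of the original example):

    int main(void) {
        Printme(1, 1);       /* result = 2  -> exercises the "Positive" branch */
        Printme(-1, -3);     /* result = -4 -> exercises the "Negative" branch */
        return 0;
    }

Together, the two cases execute every statement and both branches of the if.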

White Box Testing Techniques

A major White box testing technique is Code Coverage analysis. Code Coverage analysis
eliminates gaps in a Test Case suite. It identifies areas of a program that are not exercised by a
set of test cases. Once gaps are identified, you create test cases to verify untested parts of the
code, thereby increasing the quality of the software product.

There are automated tools available to perform Code coverage analysis. Below are a few
coverage analysis techniques

Statement Coverage:- This technique requires every possible statement in the code to be tested
at least once during the testing process of software engineering.

Branch Coverage - This technique checks every possible path (if-else and other conditional
loops) of a software application.

Apart from above, there are numerous coverage types such as Condition Coverage, Multiple
Condition Coverage, Path Coverage, Function Coverage etc. Each technique has its own
merits and attempts to test (cover) all parts of software code. Using Statement and Branch
coverage you generally attain 80-90% code coverage which is sufficient.
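A small sketch (hypothetical function) of why branch coverage is stronger than statement coverage: the single test x = 5 executes every statement, but an if without an else still has an untested FALSE branch.

    int sign_flag(int x) {
        int flag = 0;
        if (x > 0)
            flag = 1;        /* x = 5 gives 100% statement coverage            */
        return flag;         /* x = -5 is still needed for the FALSE branch,   */
    }                        /* i.e. for 100% branch coverage                  */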

Types of White Box Testing

White box testing encompasses several testing types used to evaluate the usability of an
application, block of code or specific software package. These are listed below.
• Unit Testing: It is often the first type of testing done on an application. Unit Testing is
performed on each unit or block of code as it is developed. Unit Testing is essentially
done by the programmer. As a software developer, you develop a few lines of code, a
single function or an object and test it to make sure it works before continuing. Unit
Testing helps identify a majority of bugs early in the software development lifecycle.
Bugs identified in this stage are cheaper and easier to fix. (A minimal unit test sketch is
given after this list.)

• Testing for Memory Leaks: Memory leaks are leading causes of slower running
applications. A QA specialist who is experienced at detecting memory leaks is essential
in cases where you have a slow running software application.
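A minimal unit test sketch in C (the function under test and the test values are assumptions for illustration); tools such as CppUnit or NUnit, listed below, provide a structured way of writing many such checks.

    #include <assert.h>
    #include <stdio.h>

    static int add(int a, int b) { return a + b; }    /* unit under test */

    int main(void) {
        assert(add(1, 1)  ==  2);    /* typical case  */
        assert(add(-1, 1) ==  0);    /* sign boundary */
        assert(add(0, 0)  ==  0);    /* zero case     */
        printf("all unit tests passed\n");
        return 0;
    }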

White Box Testing Tools

Below is a list of top white box testing tools.

• Parasoft Jtest
• EclEmma
• NUnit
• PyUnit
• HTMLUnit
• CppUnit

Advantages of White Box Testing

• Code optimization by finding hidden errors.


• White box tests cases can be easily automated.
• Testing is more thorough as all code paths are usually covered.
• Testing can start early in SDLC even if GUI is not available.

Disadvantages of WhiteBox Testing

• White box testing can be quite complex and expensive.


• White box testing requires professional resources, with a detailed understanding of
programming and implementation.
• White-box testing is time-consuming; bigger applications take more time
to test fully.
White box testing is classified into “static” and “structural” testing.

Static testing is a type of testing which requires only the source code of the product, not the
binaries or executables. Static testing does not involve executing the programs on computers
but involves select people going through the code to find out whether
• the code works according to the functional requirement;
• the code has been written in accordance with the design developed earlier in the project
life cycle;
• the code for any functionality has been missed out;
• the code handles errors properly.
Static testing can be done by humans or with the help of specialized tools.
Static Testing by Humans
These methods rely on the principle of humans reading the program code to detect errors,
rather than computers executing the code to find errors. This process has several advantages.
• Sometimes humans can find errors that computers cannot.
• For example, when there are two variables with similar names and the programmer
used a “wrong” variable by mistake in an expression, the computer will not detect the
error but execute the statement and produce incorrect results, whereas a human being
can spot such an error.
• By making multiple humans read and evaluate the program, we can get multiple
perspectives and therefore have more problems identified than a computer could.
• A human evaluation of the code can compare it against the specifications or design
and thus ensure that it does what is intended to do.
• A human evaluation can detect many problems at one go and can even try to identify
the root causes of the problems; machine-based testing, in contrast, tends to reveal only
the symptoms, one at a time, rather than the root causes. Thus, the overall time required
to fix all the problems can be reduced substantially by a human evaluation.
• By making humans test the code before execution, computer resources can be saved.
Of course, this comes at the expense of human resources.
• A proactive method of testing like static testing minimizes the delay in identification of
the problems. The sooner a defect is identified and corrected, lesser is the cost of fixing
the defect.
• From a psychological point of view, finding defects later in the cycle (for example, after
the code is compiled and the system is being put together) creates immense pressure
on programmers. They have to fix defects with less time to spare. With this kind of
pressure, there are higher chances of other defects creeping in.
There are multiple methods to achieve static testing by humans. They are as follows.
1. Desk checking of the code
2. Code walkthrough
3. Code review
4. Code inspection
Since static testing by humans is done before the code is compiled and executed, some
of these methods can be viewed as process-oriented, defect prevention-oriented, or
quality assurance-oriented activities rather than pure testing activities. However, taking
the broader view of “testing” as anything that furthers the quality of a product, these
methods have been included in this chapter because they have visibility into the
program code.
Desk checking
Normally done manually by the author of the code, desk checking is a method to verify
the portions of the code for correctness. Such verification is done by comparing the code
with the design or specifications to make sure that the code does what it is supposed to do,
and does it effectively. This is the desk checking that most programmers do before compiling
and executing the code. Whenever errors are found, the author applies the corrections for
errors on the spot.
This method of catching and correcting errors is characterized by:
• no structured method or formalism to ensure completeness; and
• no maintaining of a log or checklist.
In effect, this method relies completely on the author's thoroughness, diligence, and skills.
There is no process or structure that guarantees or verifies the effectiveness of desk checking.
This method is not effective in detecting errors that arise due to incorrect understanding of
requirements or incomplete requirements. This is because developers may not have the domain
knowledge required to understand the requirements fully. Its advantage is that, since the author
checks the code soon after writing it, the defects are detected and corrected with minimum
time delay.
Some of the disadvantages of this method of testing are as follows.
• A developer is not the best person to detect problems in his or her own code.
• He or she may be tunnel visioned and have blind spots to certain types of problems.
• Developers generally prefer to write new code rather than do any form of testing!
• This method is essentially person-dependent and informal and thus may not work
consistently across all developers.
• Owing to these disadvantages, the next two types of proactive methods are introduced.
The basic principle of walkthroughs and formal inspections is to involve multiple
people in the review process.
Code walkthrough
• This method and formal inspection (described in the next section) are group-
oriented methods. Walkthroughs are less formal than inspections.
• The advantage that walkthrough has over desk checking is that it brings
multiple perspectives.
• In walkthroughs, a set of people look at the program code and raise questions
for the author.
• The author explains the logic of the code, and answers the questions. If the
author is unable to answer some questions, he or she then takes those questions
and finds their answers.
• Completeness is limited to the area where questions are raised by the team.
Formal inspection
Code inspection—also called Fagan Inspection (named after the original
formulator)—is a method, normally with a high degree of formalism. The focus of
this method is to detect all faults, violations, and other side-effects.
This method increases the number of defects detected by
1. preparation before an inspection/review;
2. enlisting multiple diverse views;
3. assigning specific roles to the multiple participants;
4. and going sequentially through the code in a structured manner.
When the code is in a reasonable state of readiness, an inspection meeting is
arranged.
There are four roles in inspection.
• First is the author of the code.
• Second is a moderator who is expected to formally run the inspection
according to the process.
• Third are the inspectors. These are the people who actually provide review
comments for the code. There are typically multiple inspectors.
• Finally, there is a scribe, who takes detailed notes during the inspection
meeting and circulates them to the inspection team after the meeting.
The author or the moderator selects the review team. The chosen members have the
skill sets to uncover as many defects as possible. In an introductory meeting, the
inspectors get copies (These can be hard copies or soft copies) of the code to be
inspected along with other supporting documents such as the design document,
requirements document, and any documentation of applicable standards.
The moderator informs the team about the date, time, and venue of the inspection
meeting. The inspectors get adequate time to go through the documents and
program and ascertain their compliance to the requirements, design, and standards.
The moderator takes the team sequentially through the program code, asking each
inspector if there are any defects in that part of the code. If any of the inspectors
raises a defect, then the inspection team deliberates on the defect and, when agreed
that there is a defect,
classifies it in two dimensions––minor/major and systemic/mis-execution.
A mis-execution defect is one which, as the name suggests, happens because of an
error or slip on the part of the author. It is unlikely to be repeated later, either in this
work product or in other work products. An example of this is using a wrong variable
in a statement.
Systemic defects, on the other hand, can require correction at a different level.
Similarly, minor defects are defects that may not substantially affect a program,
whereas major defects need immediate attention.

A scribe formally documents the defects found in the inspection meeting and the
author takes care of fixing these defects. In case the defects are severe, the team may
optionally call for a review meeting to inspect the fixes to ensure that they address
the problems. In any case, defects found through inspection need to be tracked till
completion and someone in the team has to verify that the problems have been fixed
properly.

Some of the challenges to watch out for in conducting formal inspections are as
follows.
• These are time consuming, since the process calls for preparation as well as
formal meetings.
• Scheduling can become an issue since multiple people are involved.
• It may also not be necessary to subject the entire code to formal inspection.
In order to overcome the above challenges, it is necessary to identify, during the
planning stages, which parts of the code will be subject to formal inspections.
Portions of code can be classified on the basis of their criticality or complexity as
“high,” “medium,” and “low.”
High or medium complex critical code should be subject to formal inspections,
while those classified as “low” can be subject to either walkthroughs or even desk
checking.

Static Analysis Tools
The review and inspection mechanisms described above involve significant
amount of manual work. There are several static analysis tools available in the
market that can reduce the manual work and perform analysis of the code to find
out errors such as those listed below.
• unreachable code (for example, because of misuse of GOTO statements)
• variables declared but not used
• mismatch in definition
• illegal assignment of values to variables
• error prone typecasting of variables
• use of non-portable or architecture-dependent programming constructs
• memory allocated but not having a corresponding statement to free it

These static analysis tools can also be considered as an extension of compilers as


they use the same concepts and implementation to locate errors. A good compiler
is also a static analysis tool. For example, most C compilers provide different
“levels” of code checking which will catch the various types of programming
errors given above.
Some of the static analysis tools can also check compliance for coding standards
as prescribed by standards such as POSIX. These tools can also check for
consistency in coding guidelines (for example, naming conventions, allowed data
types, permissible programming constructs, and so on).
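A small C fragment (constructed for illustration) of the kind that static analysis tools, or a compiler at a high warning level, will typically flag:

    #include <stdlib.h>

    void report(int code) {
        int unused;                    /* variable declared but never used        */
        char *buf = malloc(64);        /* memory allocated but never freed        */
        if (code = 1) {                /* '=' where '==' was probably intended    */
            buf[0] = '\0';
        }
    }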
While following any of the methods of human checking—desk checking,
walkthroughs, or formal inspections—it is useful to have a code review checklist.
Given below is a checklist that covers some of the common issues.
Every organization should develop its own code review checklist.
The checklist should be kept current with new learning as they come about. In a
multi-product organization, the checklist may be at two levels—
first, an organization-wide checklist that will include issues such as
organizational coding standards, documentation standards, and so on;
second, a product-or project-specific checklist that addresses issues specific to
the product or project.
CODE REVIEW CHECKLIST
DATA ITEM DECLARATION RELATED
• Are the names of the variables meaningful?
• If the programming language allows mixed case names, are there variable
names with confusing use of lower case letters and capital letters?
• Are the variables initialized?
Are there similar sounding names (especially words in singular and plural)?
DATA USAGE RELATED
• Are values of right data types being assigned to the variables?
• Is the access of data from any standard files, repositories, or databases done
through publicly supported interfaces?
• If pointers are used, are they initialized properly? .
• Are bounds to array subscripts and pointers properly checked?
• Has the usage of similar-looking operators (for example, = and == or & and
&& in C) been checked?
CONTROL FLOW RELATED
• Are all the conditional paths reachable?
• Are all the individual conditions in a complex condition separately
evaluated?
• If there is a nested IF statement, are the THEN and ELSE parts
appropriately delimited?
• In the case of a multi-way branch like SWITCH / CASE statement, is a
default clause provided?
• Are the breaks after each CASE appropriate?
Is there any part of code that is unreachable?
Are there any loops that will never execute?
STANDARDS RELATED
• Does the code follow the coding conventions of the organization?
• Does the code follow any coding conventions that are platform specific
(for example, GUI calls specific to Windows or Swing)
STYLE RELATED
• Are unhealthy programming constructs (for example, global variables in
C, ALTER statement in COBOL) being used in the program?
• Is sufficient attention being paid to readability issues like indentation of
code?
• Have you checked for memory leaks (for example, memory acquired but
not explicitly freed)?
DOCUMENTATION RELATED
• Is the code adequately documented, especially where the logic is complex
or the section of code is critical for product functioning?
• Is appropriate change history documented?
• Are the interfaces and the parameters thereof properly documented?
STRUCTURAL TESTING
Structural testing takes into account the code, code structure, internal design, and
how they are coded. The fundamental difference between structural testing and
static testing is that in structural testing tests are actually run by the computer
on the built product, whereas in static testing, the product is tested by humans
using just the source code and not the executables or binaries. Structural testing
entails running the actual product against some predesigned test cases to
exercise as much of the code as possible or necessary. A given portion of the code
is exercised if a test case causes the program to execute that portion of the code.
Structural testing can be further classified into
• unit/code functional testing,
• code coverage,
• code complexity testing.
Unit/Code Functional Testing
This initial part of structural testing corresponds to some quick checks that a
developer performs before subjecting the code to more extensive code coverage
testing or code complexity testing.
• Initially, the developer can perform certain obvious tests, knowing the
input variables and the corresponding expected output variables. This
can be a quick test that checks out any obvious mistakes. By repeating these
tests for multiple values of input variables, the confidence level of the
developer to go to the next level increases.
• For modules with complex logic or conditions, the developer can build a
“debug version” of the product by putting intermediate print statements
and making sure the program is passing through the right loops and
iterations the right number of times. It is important to remove the
intermediate print statements after the defects are fixed (a minimal sketch of
such a debug version is given after this list).
• Another approach to do the initial test is to run the product under a
debugger or an Integrated Development Environment (IDE). These
tools allow single stepping of instructions (allowing the developer to stop
at the end of each instruction, view or modify the contents of variables, and
so on), setting break points at any function or instruction, and viewing the
various system parameters or program variable values.
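Such a “debug version” (a hypothetical sketch) might guard the intermediate print statements with a DEBUG flag so they can be removed cleanly once the defects are fixed:

    #include <stdio.h>

    #define DEBUG 1

    int sum_upto(int n) {
        int i, total = 0;
        for (i = 1; i <= n; i++) {
            total += i;
    #if DEBUG
            printf("iteration %d, running total %d\n", i, total);   /* intermediate print */
    #endif
        }
        return total;
    }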
Code Coverage Testing
Code coverage testing involves designing and executing test cases and finding
out the percentage of code that is covered by testing. The percentage of code
covered by a test is found by adopting a technique called instrumentation of code.
There are specialized tools available to achieve instrumentation. Instrumentation
rebuilds the product, linking the product with a set of libraries provided by the
tool vendors. This instrumented code can monitor and keep an audit of what
portions of code are covered. The tools also allow reporting on the portions of the
code that are covered frequently, so that the critical or most-often portions of code
can be identified.
Code coverage testing is made up of the following types of coverage.
• Statement coverage
• Path coverage
• Condition coverage
• Function coverage
Statement coverage
Program constructs in most conventional programming languages can be
classified as
• two-way decision statements, like if then else;
• multi-way decision statements, like switch; and
• loops, like while do, repeat until, and for.
• Statement coverage refers to writing test cases that execute each of the
program statements. One can start with the assumption that more the code
covered, the better is the testing of the functionality.
• Based on this assumption, code coverage can be achieved by providing
coverage to each of the above types of statements.
• For a section of code that consists of statements that are sequentially
executed (that is, with no conditional branches), test cases can be designed
to run through from top to bottom.
First, if there are exceptions that the code encounters (for example, a divide by zero),
then, even if we start a test case at the beginning of a section, the test case may
not cover all the statements in that section. Thus, even in the case of sequential
statements, coverage for all statements may not be achieved.
Second, a section of code may be entered from multiple points. Even though
this points to not following structured programming guidelines, it is a common
scenario in some of the earlier programming languages.
When we consider a two-way decision construct like the if statement, then to
cover all the statements, we should also cover the then and else parts of the if
statement. This means we should have, for each if then else, (at least) one test
case to test the Then part and (at least) one test case to test the else part.
For a multi-way decision construct such as a switch statement, to cover all possible
switch cases, there would need to be multiple test cases, at least one per case. Loop
constructs present more variations to take care of. A loop—in various forms such
as for, while, repeat, and so on—is characterized by executing a set of statements
repeatedly until or while certain conditions are met. A good percentage of the
defects in programs come about because of loops that do not function properly.
More often, loops fail in what are called “boundary conditions.” One of the
common looping errors is that the termination condition of the loop is not
properly stated. In order to make sure that there is better statement coverage for
statements within a loop, there should be test cases that
• skip the loop completely, so that the situation of the termination condition being
true before starting the loop is tested;
• exercise the loop between once and the maximum number of times, to check all
possible “normal” operations of the loop; and
• try covering the loop around the “boundary” of n, that is, just below n, at n, and
just above n.
A small sketch of such loop boundary tests is given below.
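The sketch uses a hypothetical loop with an assumed maximum of MAX iterations:

    #define MAX 100

    int count_upto(int n) {                  /* loop under test                  */
        int i, count = 0;
        for (i = 0; i < n && i < MAX; i++)
            count++;
        return count;
    }
    /* suggested test values for n: 0 (loop skipped), 1, 10 (normal),            */
    /* MAX - 1, MAX, and MAX + 1 (around the boundary)                           */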
The statement coverage for a program, which is an indication of the percentage
of statements actually executed in a set of tests, can be calculated by the formula
given below.
Statement Coverage = (Total statements exercised / Total number of
executable statements in program) * 100
It is clear from the above discussion that as the type of statement progresses from
a simple sequential statement to if then else and through to loops, the number of
test cases required to achieve statement coverage increases.
Consider a hypothetical case when we achieved 100 percent code coverage. If the
program implements wrong requirements and this wrongly implemented code is
“fully tested,” with 100 percent code coverage, it still is a wrong program and hence
the 100 percent code coverage does not mean anything. Next, consider the following
program.
Total = 0;                            /* set total to zero */
if (code == "M") {
    stmt1;
    stmt2;
    stmt3;
    stmt4;
    stmt5;
    stmt6;
    stmt7;
}
else
    percent = value / Total * 100;    /* divide by zero */
In the above program, when we test with code = “M,” we will get 80 percent code
coverage. But if the value of code is not “M,” then the program will fail 90 percent
of the time (because of the divide by zero). Thus, even with a code coverage of 80
percent, we are left with a defect that hits the users 90 percent of the time.

Path coverage
In path coverage, we split a program into a number of distinct paths. A program (or
a part of a program) can start from the beginning and take any of the paths to its
completion.
Path Coverage = (Total paths exercised / Total number of paths in program) * 100
Let us take an example of a date validation routine. The date is accepted as three
fields mm, dd and yyyy. We have assumed that prior to entering this routine, the
values are checked to be numeric. To simplify the discussion, we have assumed the
existence of a function called leapyear which will return TRUE if the given year is
a leap year. There is an array called DayofMonth which contains the number of days
in each month. A simplified flow chart for this is given in the figure below.

As can be seen from the figure, there are different paths that can be taken through
the program:
A
B-D-G
B-D-H
B-C-E-G
B-C-E-H
B-C-F-G
B-C-F-H
Regardless of the number of statements in each of these paths, if we can execute
these paths, then we would have covered most of the typical scenarios. Path
coverage provides a stronger condition of coverage than statement coverage as
it relates to the various logical paths in the program rather than just program
statements.
Condition coverage

In the above example, even if we have covered all the paths possible, it would not mean that
the program is fully tested. For example, we can make the program take the path A by giving
a value less than 1 (for example, 0) to mm and find that we have covered the path A and the
program has detected that the month is invalid. But, the program may still not be correctly
testing for the other condition namely mm > 12.

Most compilers perform optimizations to minimize the number of Boolean operations and
all the conditions may not get evaluated, even though the right path is chosen.

For example, when there is an OR condition (as in the first IF statement above), once the first
part of the IF (for example, mm < 1) is found to be true, the second part will not be evaluated
at all as the overall value of the Boolean is TRUE.

Similarly, when there is an AND condition in a Boolean expression, when the first condition
evaluates to FALSE, the rest of the expression need not be evaluated at all. For all these reasons,
path testing may not be sufficient. It is necessary to have test cases that exercise each Boolean
expression and have test cases that produce the TRUE as well as the FALSE outcomes.

This will mean more test cases, and the number of test cases will rise exponentially with the
number of conditions and Boolean expressions.
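A sketch of the effect of short-circuit evaluation (the faulty bound 13 is introduced only for illustration): a test that drives the right path can still miss a defect in a condition that was never evaluated.

    int invalid_month(int mm) {
        return (mm < 1 || mm > 13);    /* buggy: the check should be mm > 12     */
    }
    /* mm = 0 takes the invalid path but, because of short-circuiting of ||,     */
    /* never evaluates the second condition; mm = 13 is needed to expose the     */
    /* defect (13 should be reported invalid but is accepted as valid).          */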

Condition Coverage = (Total decisions exercised / Total number of decisions in program) * 100

The condition coverage, as defined by the formula above, gives an indication of the percentage
of conditions covered by a set of test cases. Condition coverage is a much stronger criterion
than path coverage, which in turn is a much stronger criterion than statement coverage.

Function coverage

This is to identify how many program functions (similar to functions in “C” language) are
covered by test cases. The requirements of a product are mapped into functions during the
design phase and each of the functions form a logical unit.

For example, in a database software, “inserting a row into the database” could be a function.
Or, in a payroll application, “calculate tax” could be a function. Each function could, in turn,
be implemented using other functions. While providing function coverage, test cases can be
written so as to exercise each of the different functions in the code.

The advantages that function coverage provides over the other types of coverage are as follows.

• Functions are easier to identify in a program and hence it is easier to write test cases to
provide function coverage.
• Since functions are at higher level of abstraction than code, it is easier to achieve 100
percent function coverage than 100 percent coverage in any of the earlier methods.
• Functions have a more logical mapping to requirements and hence can provide a more
direct correlation to the test coverage of the product.
• Function coverage provides a natural transition to black box testing. We can also
measure how many times a given function is called. This will indicate which functions
are used most often and hence these functions become the target of any performance
testing and optimization.

Function coverage can help in improving the performance as well as quality of the product.

Function Coverage = (Total functions exercised / Total number of functions in program) * 100

Summary

Code coverage testing involves “dynamic testing” methods of executing the product with pre-
written test cases, and finding out how much of code has been covered.

Code coverage up to 40-50 percent is usually achievable. Code coverage of more than 80
percent requires an enormous amount of effort and understanding of the code. Coverage provides
more confidence by exercising various logical paths and functions. Code coverage tests can
identify the areas of a code that are executed most frequently. Extra attention can then be paid
to these sections of the code. Code coverage testing provides information that is useful in
making such performance-oriented decisions.

Code Complexity Testing
The previous sections discussed the different types of coverage that can be provided to test
a program. Two questions that come to mind while using these types of coverage are:
• Which of the paths are independent? If two paths are not independent, then we may
be able to minimize the number of tests.
• Is there an upper bound on the number of tests that must be run to ensure that
all the statements have been executed at least once?
Cyclomatic complexity is a metric that quantifies the complexity of a program and
thus provides answers to the above questions. A program is represented in the form
of a flow graph. A flow graph consists of nodes and edges.
In order to convert a standard flow chart into a flow graph to compute cyclomatic
complexity, the following steps can be taken.
• Identify the predicates or decision points (typically the Boolean conditions or
conditional statements) in the program.
• Ensure that the predicates are simple (that is, no and/or, and so on in each
predicate).
• Figure 3.3 shows how to break up a condition having or into simple predicates.
Similarly, if there are loop constructs, break the loop termination checks into
simple predicates. Combine all sequential statements into a single node. The
reasoning here is that these statements all get executed, once started.
• When a set of sequential statements are followed by a simple predicate (as
simplified in (2) above), combine all the sequential statements and the
predicate check into one node and have two edges emanating from this one
node. Such nodes with two edges emanating from them are called predicate
nodes.
• Make sure that all the edges terminate at some node; add a node to represent
all the sets of sequential statements at the end of the program.
Figure 3.3 Flow graph translation of an OR to a simple predicate.
We have illustrated the above transformation rules of a
conventional flow chart to a flow diagram in Figure 3.4.

A flow graph and the cyclomatic complexity provide indicators to the complexity of
the logic flow in a program and to the number of independent paths in a program.
The primary contributors to both the complexity and independent paths are the
decision points in the program.
Consider a hypothetical program with no decision points. The flow graph of such a
program (shown in Figure 3.5 above) would have two nodes, one for the code and
one for the termination node. Since all the sequential steps are combined into one
node (the first node), there is only one edge, which connects the two nodes. This
edge is the only independent path. Hence, for this flow graph, cyclomatic complexity
is equal to one.
This graph has no predicate nodes because there are no decision points.
Hence,
the cyclomatic complexity is also equal to the number of
predicate nodes (0) + 1.
Note that in this flow graph, the edges (E) = 1 and the nodes (N) = 2.
The cyclomatic complexity is also equal to E - N + 2 = 1 - 2 + 2 = 1.
When a predicate node is added to the flow graph (shown in Figure 3.6 above),
there are obviously two independent paths, one following the path when the Boolean
condition is TRUE and one when the Boolean condition is FALSE. Thus, the
cyclomatic complexity of the graph is 2.

Incidentally, this number of independent paths, 2, is again equal to the number of


predicate nodes (1) + 1. When we add a predicate node (a node with two edges),
complexity increases by 1, since the “E” in the E – N + 2 formula is increased by
one while the “N” is unchanged. As a result, the complexity using the formula E –
N + 2 also works out to 2.
Cyclomatic Complexity = Number of Predicate Nodes + 1
Cyclomatic Complexity = E – N + 2.
From the above reasoning, the reader would hopefully have got an idea about the
two different ways to calculate cyclomatic complexity and the relevance of
cyclomatic complexity in identifying independent paths through a program.
The above two formulae provide an easy means to calculate cyclomatic complexity,
given a flow graph. In fact the first formula can be used even without drawing the
flow graph, by simply counting the number of the basic predicates.
Using the flow graph, an independent path can be defined as a path in the flow graph
that has at least one edge that has not been traversed before in other paths. A set of
independent paths that cover all the edges is a basis set. Once the basis set is
formed, test cases should be written to execute all the paths in the basis set.
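A small worked example (hypothetical routine): two simple predicate nodes give a cyclomatic complexity of 2 + 1 = 3, so a basis set needs three independent paths, each adding at least one new edge.

    int classify(int x) {
        int category = 0;
        if (x < 0)              /* predicate node 1 */
            category = -1;
        if (x > 100)            /* predicate node 2 */
            category = 1;
        return category;
    }
    /* One possible basis set: x = 50 (both FALSE), x = -5 (first TRUE),         */
    /* x = 200 (second TRUE). With one reasonable flow graph, E = 6 and N = 5,   */
    /* so E - N + 2 = 3, agreeing with predicate nodes + 1.                      */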
Calculating and using cyclomatic complexity
There are several tools that are available in the market which can compute
cyclomatic complexity. Thus some basic complexity checks must be performed on
the modules before the testing (or even coding) phase. Based on the complexity
number that emerges from using the tool, one can decide what actions need to be
taken, as suggested in Table 3.1.

CHALLENGES IN WHITE BOX TESTING


• White box testing requires a sound knowledge of the program code and the
programming language.
• This means that the developers should get intimately involved in white box
testing.
• Developers, in general, do not like to perform testing functions.
• This applies to structural testing as well as static testing methods such as
reviews.
• In addition, because of the timeline pressures, the programmers may not “find
time” for reviews.
• Human tendency of a developer being unable to find the defects in his or her
code.
• As we saw earlier, most of us have blind spots in detecting errors in our own
products.
• Since white box testing involves programmers who write the code, it is quite
possible that they may not be most effective in detecting defects in their own
work products.
• An independent perspective could certainly help.
• These challenges do not mean that white box testing is ineffective. The challenges
listed above should be addressed by complementing white box testing with other
means of testing, i.e., black box testing.
Module III
WHAT IS BLACK BOX TESTING?

• Black box testing involves looking at the specifications and does not
require examining the code of a program. Black box testing is done from
the customer's viewpoint.

• The test engineer engaged in black box testing only knows the set of inputs
and expected outputs and is unaware of how those inputs are transformed
into outputs by the software.

• Black box testing is done without the knowledge of the internals of the
system under test. They do not require any knowledge of its construction.

Let us take a lock and key. We do not know how the levers in the lock work, but
we only know the set of inputs (the number of keys, specific sequence of using
the keys and the direction of turn of each key) and the expected outcome (locking
and unlocking). For example, if a key is turned clockwise it should unlock and if
turned anticlockwise it should lock. To use the lock one need not understand how
the levers inside the lock are constructed or how they work. However, it is
essential to know the external functionality of the lock and key system, such as the
number of keys, the specific sequence of using them, and the direction of turn of each key.
Black box testing thus requires a functional knowledge of the product to be tested.
It does not mandate the knowledge of the internal logic of the system nor does it
mandate the knowledge of the programming language used to build the product.
Our tests in the above example were focused towards testing the features of the
product (lock and key), the different states, we already knew the expected
outcome.

WHY BLACK BOX TESTING

Black box testing helps in the overall functionality verification of the system
under test.

Black box testing is done based on requirements


It helps in identifying any incomplete, inconsistent requirements as well as any
issues involved when the system is tested as a complete entity. Black box testing
addresses the stated requirements as well as implied requirements. Not all the
requirements are stated explicitly; some are deemed implicit. For example,
inclusion of dates, page header, and footer may not be explicitly stated in the
report generation requirements specification.

Black box testing encompasses the end user perspectives

Users want to test the behavior of a product from an external perspective, end-
user perspectives are an integral part of black box testing.

Black box testing handles valid and invalid inputs


It is natural for users to make errors while using a product. Hence, it is not
sufficient for black box testing to simply handle valid inputs. Testing from the
end-user perspective includes testing for these error or invalid conditions. This
ensures that the product behaves as expected in a valid situation and does not
hang or crash when provided with an invalid input. These are called

positive and negative test cases.

WHEN TO DO BLACK BOX TESTING?


Black box testing activities require involvement of the testing team from the
beginning of the software project life cycle. Testers can get involved right from
the requirements gathering and analysis phase for the system under test. Test
scenarios and test data are prepared during the test construction phase of the test
life cycle, when the software is in the design phase. Once the code is ready and
delivered for testing, test execution can be done. All the test scenarios developed
during the construction phase are executed. Usually, a subset of these test
scenarios is selected for regression testing.

HOW TO DO BLACK BOX TESTING?


Since we are testing external functionality in black box testing, we need to prepare
a set of tests that test as much of the external functionality as possible, uncovering
as many defects as possible, in as short a time as possible.

Various techniques to be used to generate test scenarios for effective black box
testing.

Black box testing exploits specifications to generate test cases in a methodical


way to avoid redundancy and to provide better coverage.

The various techniques we will discuss are as follows.

• Requirements based testing


• Positive and negative testing
• Boundary value analysis
• Decision tables
• Equivalence partitioning
• State based testing
• Compatibility testing
• User documentation testing
• Domain testing
Requirements Based Testing

Requirements testing deals with validating the requirements given in the


Software Requirements Specification (SRS) of the software system. As stated,
not all requirements are explicitly stated; some of the requirements are implied or
implicit. Explicit requirements are stated and documented as part of the
requirements specification. Implied or implicit requirements are those that are not
documented but assumed to be incorporated in the system. The precondition for
requirements testing is a detailed review of the requirements specification.
Requirements review ensures that they are consistent, correct, complete, and
testable, making requirements based testing more effective.

Some organizations follow a variant of this method to bring more details


into requirements. All explicit requirements (from the Systems Requirements
Specifications) and implied requirements (inferred by the test team) are collected
and documented as “Test Requirements Specification” (TRS). Requirements
based testing can also be conducted based on such a TRS. We will consider SRS
and TRS to be the same.

A requirements specification for the lock and key example explained


earlier can be documented as given in Table 4.1.
Table 4.1 Sample requirements specification for lock and key system.

Requirements (like the ones given above) are tracked by a Requirements


Traceability Matrix (RTM). An RTM traces all the requirements from their
design, development, and testing. This matrix evolves through the life cycle of
the project.

• Each requirement is given a unique id along with a brief description. The


requirement identifier and description can be taken from the Requirements
Specification.

• In the above table, the naming convention uses a prefix “BR” followed by
a two-digit number. BR indicates the type of testing—"Black box-
requirements testing.”

• The two-digit number is a serial count of the requirements.


• Each requirement is assigned a requirement priority, classified as high,
medium or low. This not only enables prioritizing the resources for
development of features but is also used to sequence and run tests.

• Tests for higher priority requirements will get precedence over tests for
lower priority requirements.

• This ensures that the functionality that has the highest risk is tested earlier
in the cycle. Defects reported by such testing can then be fixed as early as
possible.

• The “test conditions” column lists the different ways of testing the
requirement. These conditions can be grouped together to form a single test
case. Alternatively, each test condition can be mapped to one test case.

• The “test case IDs” column can be used to complete the mapping between
test cases and the requirement. Test case IDs should follow naming
conventions so as to enhance their usability.

• For example, in Table 4.2, test cases are serially numbered and prefixed
with the name of the product.

• A requirement is subjected to multiple phases of testing—unit, component,


integration, and system testing. This reference to the phase of testing can
be provided in a column in the Requirements Traceability Matrix.
Positive and Negative Testing
Positive testing
Positive testing tries to prove that a given product does what it is supposed to do.
When a test case verifies the requirements of the product with a set of expected
output, it is called positive test case. The purpose of positive testing is to prove
that the product works as per specification and expectations. A product delivering
an error when it is expected to give an error, is also a part of positive testing.

Positive testing can thus be said to check the product's behavior for positive and
negative conditions as stated in the requirement. For the lock and key example, a
set of positive test cases are given below. (Please refer to Table 4.2 for
requirements specifications.)

Let us take the first row in the below table. When the lock is in an unlocked state
and you use key 123—456 and turn it clockwise, the expected outcome is to get
it locked. During test execution, if the test results in locking, then the test is
passed. This is an example of “positive test condition” for positive testing.

In the fifth row of the table, the lock is in locked state. Using a hairpin and turning
it clockwise should not cause a change in state or cause any damage to the lock.
On test execution, if there are no changes, then this positive test case is passed.
This is an example of a “negative test condition” for positive testing.

Positive testing is done to verify the known test conditions and negative testing
is done to break the product with unknowns.
Negative testing

Negative testing is done to show that the product does not fail when an
unexpected input is given. The purpose of negative testing is to try and break
the system. In other words, the input values may not have been represented in the
specification of the product. These test conditions can be termed as unknown
conditions for the product as far as the specifications are concerned.

But, at the end-user level, there are multiple scenarios that are encountered and
that need to be taken care of by the product. It becomes even more important for
the tester to know the negative situations that may occur at the end-user level so
that the application can be tested and made reliable.

Table 4.5 gives some of the negative test cases for the lock and key example.

Table 4.5 Negative test cases.


In the above table, in NEGATIVE testing, there are no requirement numbers. This
is because negative testing focuses on test conditions that lie outside the
specification. Since all the test conditions are outside the specification, they
cannot be categorized as positive and negative test conditions. Some people
consider all of them as negative test conditions, which is technically correct.

In contrast to positive testing, there is no end to negative testing, and 100 percent
coverage in negative testing is impractical. Negative testing requires a high degree of
creativity among the testers to cover as many “unknowns” as possible to avoid
failure at a customer site.
Boundary Value Analysis
Conditions and boundaries are two major sources of defects in a
software product; most defects in software products hover around conditions and boundaries. By
conditions, we mean situations wherein, based on the values of various variables, certain
actions would have to be taken. By boundaries, we mean “limits” of values of the various
variables. Boundary value analysis builds on the observation that the density of defects
is higher towards the boundaries.
Boundary value analysis is one of the most widely used test case
design techniques for black box testing. It is used to test boundary values because the input
values near the boundary have higher chances of error.

Whenever we do the testing by boundary value analysis, the tester focuses on, while entering
boundary value whether the software is producing correct output or not.

Boundary values are those that contain the upper and lower limit of a variable. Assume that,
age is a variable of any function, and its minimum value is 18 and the maximum value is 30,
both 18 and 30 will be considered as boundary values.

The basic assumption of boundary value analysis is, the test cases that are created using
boundary values are most likely to cause an error.

Here, 18 and 30 are the boundary values, which is why the tester pays more attention to them,
but this doesn't mean that the middle values like 19, 20, 21, 27, 29 are ignored. Test cases are
developed for each and every value of the range.
Testing of boundary values is done by making valid and invalid partitions. Invalid partitions
are tested because testing of output in adverse condition is also essential.

Imagine, there is a function that accepts a number between 18 to 30, where 18 is the minimum
and 30 is the maximum value of valid partition, the other values of this partition are 19, 20, 21,
22, 23, 24, 25, 26, 27, 28 and 29. The invalid partition consists of the numbers which are less
than 18 such as 12, 14, 15, 16 and 17, and more than 30 such as 31, 32, 34, 36 and 40. Tester
develops test cases for both valid and invalid partitions to capture the behavior of the system
on different input conditions.

The software system passes the test if it accepts a valid number and gives the desired
output; if it does not, the test is unsuccessful. In the other scenario, the software system should
not accept invalid numbers, and if the entered number is invalid, then it should display an error
message.

If the software which is under test, follows all the testing guidelines and specifications then it
is sent to the releasing team otherwise to the development team to fix the defects.
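A sketch of boundary value test inputs for the age example above (the validation function is a hypothetical stand-in):

    int is_valid_age(int age) {
        return (age >= 18 && age <= 30);     /* valid range assumed: 18 to 30 */
    }
    /* boundary-oriented test values: 17 (just below minimum), 18 (minimum),     */
    /* 19 (just above minimum), 24 (nominal), 29 (just below maximum),           */
    /* 30 (maximum), 31 (just above maximum)                                     */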

Decision Table Testing


Decision table testing is a software testing technique used to test system behavior for different
input combinations. This is a systematic approach where the different input combinations and
their corresponding system behavior (Output) are captured in a tabular form. That is why it is
also called as a Cause-Effect table where Cause and effects are captured for better test
coverage.

A Decision Table is a tabular representation of inputs versus rules/cases/test conditions. Let's


learn with an example.
Example 1: How to make Decision Base Table for Login Screen

Let's create a decision table for a login screen.

The condition is simple if the user provides correct username and password the user will be
redirected to the homepage. If any of the input is wrong, an error message will be displayed.

• T – Correct username/password
• F – Wrong username/password
• E – Error message is displayed
• H – Home screen is displayed

Interpretation:

• Case 1 – Username and password both were wrong. The user is shown an error message.
• Case 2 – Username was correct, but the password was wrong. The user is shown an
error message.
• Case 3 – Username was wrong, but the password was correct. The user is shown an
error message.
• Case 4 – Username and password both were correct, and the user navigated to
homepage
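Putting the four cases together gives the decision table implied above (reconstructed from the cases, using the T, F, E, and H legend):

                 Case 1   Case 2   Case 3   Case 4
    Username       F        T        F        T
    Password       F        F        T        T
    Output         E        E        E        H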
Decision tables act as invaluable tools for designing black boxtests to examine the behavior of
the product under various logical conditions of input variables.

The steps in forming a decision table are as follows.

• Identify the decision variables.


• Identify the possible values of each of the decision variables.
• Enumerate the combinations of the allowed values of each of the variables. Identify the
cases when values assumed by a variable (or by sets of variables) are immaterial for a
given combination of other input variables. Represent such variables by the don't care
symbol.
• A decision table is useful when input and output data can be expressed as Boolean
conditions (TRUE, FALSE, and DON'T CARE). Once a decision table is formed, each
row of the table acts as the specification for one test case.
• Decision tables are usually effective in arriving at test cases in scenarios which depend
on the values of the decision variables.

Due to time and budget considerations, it is not possible to perform exhaustive testing for each
set of test data, especially when there is a large pool of input combinations.

• We need an easy way or special techniques that can select test cases intelligently from
the pool of test cases, such that all test scenarios are covered.
• We use two techniques, Equivalence Partitioning and Boundary Value Analysis, to
achieve this.

What is Boundary Testing?

Boundary testing is the process of testing between extreme ends or boundaries between
partitions of the input values.

• These extreme ends, like Start-End, Lower-Upper, Maximum-Minimum, and Just Inside-
Just Outside values, are called boundary values, and the testing is called "boundary
testing".
• The basic idea in boundary value testing is to select input variable values at their:

1. Minimum
2. Just above the minimum
3. A nominal value
4. Just below the maximum
5. Maximum
• In Boundary Testing, Equivalence Class Partitioning plays an important role.
• Boundary Testing comes after Equivalence Class Partitioning.
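
As an illustration, the five boundary values listed above can be derived mechanically once the
minimum and maximum of a field are known. The helper below is a simple sketch; the function
name is an assumption for this example, not part of any standard library.

# Sketch: deriving the five classic boundary test values for a numeric field.

def boundary_values(minimum, maximum):
    """Return min, just above min, a nominal value, just below max, and max."""
    nominal = (minimum + maximum) // 2
    return [minimum, minimum + 1, nominal, maximum - 1, maximum]

if __name__ == "__main__":
    # For the 18..30 field discussed earlier this yields 18, 19, 24, 29, 30.
    print(boundary_values(18, 30))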

Equivalence Class Partitioning


Equivalence Class Partitioning is a black box technique (the code is not visible to the tester) which
can be applied to all levels of testing, such as unit, integration, and system testing. In this
technique, you divide the set of test conditions into partitions that can be considered the same.

• It divides the input data of software into different equivalence data classes.
• You can apply this technique where there is a range in the input field. Equivalence
partitioning is a software testing technique that involves identifying all partitions for
the complete set of input.
• The set of input values that generate one single expected output is called a partition.
When the behavior of the software is the same for a set of values, then the set is termed
as an equivalence class or a partition.
• In this case, one representative sample from each partition (also called a member of
the equivalence class) is picked up for testing. One sample from the partition is enough
for testing, as the result of picking up more values from the set will be the same
and will not yield any additional defects. Since all the values produce the same
output, they are termed an equivalence partition. Testing by this technique involves
identifying all partitions for the complete set of input and output values.
• This reduces the number of combinations of input and output values used for testing,
thereby increasing the coverage and reducing the effort involved in testing.

Example 1: Equivalence and Boundary Value

• Let's consider the behavior of the Order Pizza text box below.


• Pizza values 1 to 10 are considered valid. A success message is shown.
• Values 11 to 99 are considered invalid for an order, and an error message will
appear: "Only 10 Pizza can be ordered"

Here are the test conditions:

1. Any number greater than 10 entered in the Order Pizza field (let's say 11) is considered
invalid.
2. Any number less than 1, that is, 0 or below, is considered invalid.
3. Numbers 1 to 10 are considered valid.
4. Any 3-digit number, say -100, is invalid.

We cannot test all the possible values because if we did, the number of test cases would be more
than 100. To address this problem, we use the equivalence partitioning hypothesis, where we
divide the possible input values into groups or sets, as shown below, where the system behavior
can be considered the same.

The divided sets are called Equivalence Partitions or Equivalence Classes. Then we pick only
one value from each partition for testing. The hypothesis behind this technique is that if one
condition/value in a partition passes all others will also pass. Likewise, if one condition in
a partition fails, all other conditions in that partition will fail.
Boundary Value Analysis: in Boundary Value Analysis, you test the boundaries between
equivalence partitions.

In our earlier example, instead of checking one value for each partition, you will check the
values at the partition boundaries, like 0, 1, 10, 11 and so on. As you may observe, you test values
at both valid and invalid boundaries. Boundary Value Analysis is also called range
checking.

Equivalence partitioning and boundary value analysis (BVA) are closely related and can be
used together at all levels of testing.
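
Bringing the two techniques together for the Order Pizza field, a sketch of the resulting test
cases might look like this. The order_pizza function is a hypothetical stand-in for the real
submit handler; only its valid range (1 to 10) comes from the example above.

# Sketch combining equivalence partitioning and boundary value analysis
# for the Order Pizza field (valid quantities: 1..10).

def order_pizza(quantity):
    if 1 <= quantity <= 10:
        return "success"
    return "Only 10 Pizza can be ordered"

# One representative per equivalence class ...
EQUIVALENCE_SAMPLES = {
    -100: "Only 10 Pizza can be ordered",  # invalid: below 1
    5:    "success",                       # valid: 1..10
    50:   "Only 10 Pizza can be ordered",  # invalid: above 10
}

# ... plus the values at and just around the partition boundaries.
BOUNDARY_SAMPLES = {
    0:  "Only 10 Pizza can be ordered",
    1:  "success",
    10: "success",
    11: "Only 10 Pizza can be ordered",
}

def test_order_pizza():
    for value, expected in {**EQUIVALENCE_SAMPLES, **BOUNDARY_SAMPLES}.items():
        assert order_pizza(value) == expected

if __name__ == "__main__":
    test_order_pizza()
    print("All equivalence and boundary cases passed")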

Why Equivalence & Boundary Analysis Testing

1. This testing is used to reduce a very large number of test cases to manageable chunks.
2. It gives very clear guidelines for determining test cases without compromising the
effectiveness of testing.
3. It is appropriate for calculation-intensive applications with a large number of
variables/inputs.
State or graph based testing
State or graph based testing is very useful in situations such as

• Workflow modeling, where, depending on the current state and appropriate
combinations of input variables, specific workflows are carried out, resulting in a new
output and a new state.
• Dataflow modeling, where the system is modeled as a set of data flows leading from
one state to another.
Consider an application that is required to validate a number according to
the following simple rules.

o A number can start with an optional sign.


o The optional sign can be followed by any number of digits.
o The digits can be optionally followed by a decimal point,
represented by a period.
o If there is a decimal point, then there should be two digits after the
decimal.
o Any number, whether or not it has a decimal point, should be
terminated by a blank.
o The above rules can be represented in a state transition diagram as
shown in Figure 4.3.

The state transition diagram can be converted to a state transition table (Table
4.10), which lists the current state, the inputs allowed in the current state, and
for each such input, the next state.
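
Since Table 4.10 is not reproduced here, the sketch below reconstructs a plausible state
transition table from the rules listed above and uses it to drive a simple validator. The state
numbers and transitions are an assumed reconstruction, not the book's exact table.

# Assumed state transition table for the number-validation rules above.

DIGITS = "0123456789"

# {current_state: [(allowed inputs, next_state), ...]}
STATE_TABLE = {
    1: [("+-" + DIGITS, 2)],               # optional sign or first digit
    2: [(DIGITS, 2), (".", 3), (" ", 6)],  # more digits, decimal point, or blank
    3: [(DIGITS, 4)],                      # first digit after the decimal point
    4: [(DIGITS, 5)],                      # second digit after the decimal point
    5: [(" ", 6)],                         # the number must end with a blank
}
FINAL_STATE = 6

def is_valid_number(text):
    state = 1
    for ch in text:
        for allowed, next_state in STATE_TABLE.get(state, []):
            if ch in allowed:
                state = next_state
                break
        else:
            return False  # invalid input in this state -> error condition test case
    return state == FINAL_STATE

if __name__ == "__main__":
    assert is_valid_number("+123.45 ")
    assert is_valid_number("42 ")
    assert not is_valid_number("42.5 ")  # only one digit after the decimal
    assert not is_valid_number("12a ")   # alphabetic character is invalid
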
The above state transition table can be used to derive test cases to test valid and invalid numbers.

Valid test cases can be generated by:

• Start from the Start State (State #1 in the example).
• Choose a path that leads to the next state (for example, +/-/digit to go from State 1 to
State 2).
• If you encounter an invalid input in a given state (for example, encountering an
alphabetic character in State 2), generate an error condition test case.
• Repeat the process till you reach the final state (State 6 in this example).

Graph based testing methods are applicable to generate test cases for state
machines such as language translators, workflows, transaction flows, and data
flows.

A second situation where graph based testing is useful is to represent transactions or
workflows. Consider a simple example of a leave application by an employee.
A typical leave application process can be visualized as being made up of the
following steps.

• The employee fills up a leave application, giving his or her employee ID,
and start date and end date of leave required.

• This information then goes to an automated system which validates that
the employee is eligible for the requisite number of days of leave. If not,
the application is rejected; if the eligibility exists, then the control flow
passes on to the next step below.

• This information goes to the employee's manager who validates that it is
okay for the employee to go on leave during that time (for example, there
are no critical deadlines coming up during the period of the requested
leave).
• Having satisfied himself/herself with the feasibility of leave, the manager
gives the final approval or rejection of the leave application.

The above flow of transactions can again be visualized as a simple state based
graph as given in the figure.

In the above case, each of the states (represented by circles) is an event or a
decision point, while the arrows or lines between the states represent data inputs.
Like in the previous example, one can start from the start state and follow the
graph through various state transitions till a "final" state (represented by an
unshaded circle) is reached.
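
As the original figure is not included here, the following sketch models the leave application
workflow as a small state graph with assumed state and event names, and walks each path from
the start state to a final state, one test case per path.

# Illustrative sketch only: the leave-application workflow as a state graph.

LEAVE_WORKFLOW = {
    "application_filled": {"submit": "eligibility_check"},
    "eligibility_check":  {"eligible": "manager_review",
                           "not_eligible": "rejected"},
    "manager_review":     {"approve": "approved",
                           "reject": "rejected"},
    "approved": {},  # final state
    "rejected": {},  # final state
}

def run_path(start, events):
    """Follow a sequence of inputs through the graph; return the end state."""
    state = start
    for event in events:
        state = LEAVE_WORKFLOW[state][event]
    return state

if __name__ == "__main__":
    assert run_path("application_filled", ["submit", "eligible", "approve"]) == "approved"
    assert run_path("application_filled", ["submit", "eligible", "reject"]) == "rejected"
    assert run_path("application_filled", ["submit", "not_eligible"]) == "rejected"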

Graph based testing such as in this example will be applicable when

• The application can be characterized by a set of states.

• The data values (screens, mouse clicks, and so on) that cause the transition
from one state to another are well understood.

• The methods of processing within each state are well understood.


Compatibility Testing

In the above sections, we looked at several techniques to test product features and
requirements. It was also mentioned that the test case results are compared with expected
results to conclude whether the test is successful or not. The test case results not only depend
on the product for proper functioning; they depend equally on the infrastructure for delivering
functionality. When infrastructure parameters are changed, the product is expected to still
behave correctly and produce the desired or expected results. The infrastructure parameters
could be of hardware, software, or other components. These parameters are different for
different customers. Black box testing that does not consider the effects of these parameters on the
test case results will necessarily be incomplete and ineffective.

Hence, there is a need for compatibility testing. This testing ensures the working of the product
with different infrastructure components. The techniques used for compatibility testing are
explained in this section. Testing done to ensure that the product features work
consistently with different infrastructure components is called compatibility testing.

The parameters that generally affect the compatibility of the product are

• Processor (CPU) (Pentium III, Pentium IV, Xeon, SPARC, and so on) and the number
of processors in the machine

• Architecture and characteristics of the machine (32 bit, 64 bit, and so on)

• Resource availability on the machine (RAM, disk space, network card)

• Equipment that the product is expected to work with (printers, modems, routers, and so
on)

• Operating system (Windows, Linux, and so on and their variants) and operating system
services (DNS, NIS, FTP, and so on)

• Middle-tier infrastructure components such as web server, application server, and network
server

• Backend components such as database servers (Oracle, Sybase, and so on)

• Services that require special hardware-cum-software solutions

• Any software used to generate product binaries (compiler, linker, and so on and their
appropriate versions)
• Various technological components used to generate components (SDK, JDK, and so on
and their appropriate different versions)

• In order to arrive at practical combinations of the parameters to be tested, a
compatibility matrix is created. A compatibility matrix has as its columns the various
parameters whose combinations have to be tested. Each row represents a unique
combination of a specific set of values of the parameters. A sample compatibility matrix
for a mail application is given in Table 4.11; a simplified illustration in code appears at
the end of this section.

• Some of the common techniques that are used for performing compatibility testing
using a compatibility matrix are:
• Horizontal combination
All values of parameters that can coexist with the product for executing the set of test
cases are grouped together as a row in the compatibility matrix. The values of
parameters that can coexist generally belong to different layers/types of infrastructure
pieces such as operating system, web server, and so on. Machines or environments are
set up for each row and the set of product features are tested using each of these
environments.
• Intelligent sampling
The selection of intelligent samples is based on information collected on the set of
dependencies of the product with the parameters. If the product results are less dependent
on a set of parameters, then they are removed from the list of intelligent samples. All other
parameters are combined and tested.
• The compatibility testing of a product involving parts of itself can be further classified
into two types.

• Backward compatibility testing is done to verify the behavior of the developed
hardware/software with older versions of the hardware/software.
• Forward compatibility testing is done to verify the behavior of the developed
hardware/software with newer versions of the hardware/software.
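
For illustration only, a few rows of a compatibility matrix could be represented as data, one
test environment per row; the parameter names and values below are assumptions and are not
taken from Table 4.11.

# Sketch: compatibility matrix rows as data (horizontal combinations).

COMPATIBILITY_MATRIX = [
    # Each row is one unique combination of infrastructure parameters,
    # and each gets its own environment for running the full feature set.
    {"os": "Windows Server 2019", "web_server": "IIS",
     "database": "SQL Server", "browser": "Edge"},
    {"os": "Ubuntu 22.04", "web_server": "Apache",
     "database": "Oracle", "browser": "Firefox"},
    {"os": "RHEL 9", "web_server": "Nginx",
     "database": "PostgreSQL", "browser": "Chrome"},
]

def environments_to_set_up(matrix):
    """Each row defines one environment in which the test cases are executed."""
    return len(matrix)

if __name__ == "__main__":
    print(environments_to_set_up(COMPATIBILITY_MATRIX), "environments to configure")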

User Documentation Testing


User documentation covers all the manuals, user guides, installation guides, setup guides, readme
files, software release notes, and online help that are provided along with the software to
help the end user understand the software system.
User documentation testing should have two objectives.
• To check if what is stated in the document is available in the product.
• To check if what is there in the product is explained correctly in the document.
When a product is upgraded, the corresponding product documentation should also get updated as
necessary to reflect any changes that may affect a user.
Over a period of time, product documentation tends to diverge from the actual behavior of the
product; one of the factors contributing to this is a lack of sufficient coordination between the
documentation group and the testing/development groups. User documentation testing
focuses on ensuring that what is in the document exactly matches the product behavior, by sitting
in front of the system and verifying it screen by screen, transaction by transaction, and report
by report.
In addition, user documentation testing also checks the language aspects of the document, such as
spelling and grammar. User documentation testing is done to ensure the documentation matches the
product and vice versa.

Testing these documents attains importance due to the fact that the users will have to refer to these
manuals, installation guides, and setup guides when they start using the software at their locations.
Most often the users are not familiar with the software and need hand-holding until they feel
comfortable. Since these documents are the first interactions the users have with the product, they
tend to create lasting impressions.
A badly written installation document can mar the perceived quality of the product, even if the
product offers rich functionality.
Some of the benefits that accrue from user documentation testing are:

• User documentation testing aids in highlighting problems overlooked during reviews.
• High quality user documentation ensures consistency of documentation and product, thus
minimizing possible defects reported by customers. It also reduces the time taken for each
support call—sometimes the best way to handle a call is to alert the customer to the relevant
section of the manual. Thus the overall support cost is minimized.

• When a customer faithfully follows the instructions given in a document but is unable to
achieve the desired (or promised) results, it is frustrating and often this frustration shows
up on the support staff.
• It contributes to better customer satisfaction and better morale of support staff.
• New programmers and testers who join a project group can use the documentation to learn
the external functionality of the product.

• Customers need less training and can proceed more quickly to advanced training and
product usage if the documentation is of high quality and is consistent with the product.
• Thus high-quality user documentation can result in a reduction of overall training costs for
user organizations.
• Defects found in user documentation need to be tracked to closure like any regular software
defect.
• In order to enable an author to close a documentation defect, information about the
defect/comment description, paragraph/page number reference, document version number
reference, name of reviewer, reviewer's contact number, priority, and severity of the
comment needs to be passed to the author (a simple record structure is sketched after this
list).

• Since good user documentation aids in reducing customer support calls, it is a major
contributor to the bottom line of the organization.

• The effort and money spent on this effort would form a valuable investment in the long run
for the organization.
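
As a rough sketch, the information listed above for closing a documentation defect could be
captured in a simple record structure; the field names below are illustrative only and are not
mandated by any particular defect-tracking tool.

# Sketch: a record of the information passed to the author of a documentation defect.

from dataclasses import dataclass

@dataclass
class DocumentationDefect:
    description: str        # defect/comment description
    page_reference: str     # paragraph/page number reference
    document_version: str   # document version number reference
    reviewer_name: str
    reviewer_contact: str
    priority: str           # e.g. "P2"
    severity: str           # e.g. "minor"

if __name__ == "__main__":
    defect = DocumentationDefect(
        description="Install step 4 refers to a menu option that no longer exists",
        page_reference="Installation Guide, page 12, para 3",
        document_version="v2.1",
        reviewer_name="A. Tester",
        reviewer_contact="ext. 4321",
        priority="P2",
        severity="minor",
    )
    print(defect)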

DOMAIN TESTING
White box testing requires looking at the program code. Black box testing performs testing without
looking at the program code but by looking at the specifications. Domain testing can be considered
as the next level of testing in which we do not look even at the specifications of a software product
but are testing the product, purely based on domain knowledge and expertise in the domain of
application. This testing approach requires critical understanding of the day-to-day business
activities for which the software is written. This type of testing requires business domain
knowledge rather than the knowledge of what the software specification contains or how the
software is written. Thus domain testing can be considered as an extension of black box testing. It
focuses even more on the external behavior of the product.
The test engineers performing this type of testing are selected because they have in-depth
knowledge of the business domain. Since the depth in business domain is a prerequisite for this
type of testing, sometimes it is easier to hire testers from the domain area (such as banking,
insurance, and so on) and train them in software, rather than take software professionals and train
them in the business domain. This reduces the effort and time required for training the testers in
domain testing and also increases the effectiveness of domain testing.

For example, consider banking software.

Knowing the account opening process in a bank enables a tester to test that functionality better.
In this case, the bank official who deals with account opening knows the attributes of the people
opening the account, the common problems faced, and the solutions in practice. To take this
example further, the bank official might have encountered cases where the person opening the
account might not have come with the required supporting documents or might be unable to fill
up the required forms correctly. In such cases, the bank official may have to engage in a different
set of activities to open the account.

Though most of these may be stated in the business requirements explicitly, there will be cases
that the bank official would have observed in day-to-day banking work that are not captured in the
requirement specifications explicitly. Hence, when he or she tests the software, the test cases are
likely to be more thorough and realistic. Domain testing is the ability to design and execute test
cases that relate to the people who will buy and use the software.

It is also characterized by how well an individual test engineer understands the operation of the
system and the business processes that system is supposed to support. If a tester does not
understand the system or the business processes, it would be very difficult for him or her to use,
let alone test, the application without the aid of test scripts and cases. Domain testing exploits the
tester's domain knowledge to test the suitability of the product to what the users do on a typical
day.
Domain testing involves testing the product, not by going through the logic built into the product.
The business flow determines the steps, not the software under test. This is also called “business
vertical testing.” Test cases are written based on what the users of the software do on a typical
day.

Let us further understand this testing using an example of cash withdrawal functionality in an
ATM, extending the earlier example on banking software. The user performs the following actions.

Step 1: Go to the ATM.

Step 2: Put ATM card inside.

Step 3: Enter correct PIN.

Step 4: Choose cash withdrawal.

Step 5: Enter amount.

Step 6: Take the cash.

Step 7: Exit and retrieve the card.
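
A domain test for this flow exercises the business steps rather than the internal design. The
sketch below is purely illustrative; the AtmSession class is a hypothetical stand-in for the
system under test, with made-up method names.

# Illustrative sketch of a business-flow (domain) test for cash withdrawal.

class AtmSession:
    """Toy stand-in for the ATM under test, driven by the business flow."""
    def __init__(self, correct_pin="1234", balance=5000):
        self.correct_pin = correct_pin
        self.balance = balance
        self.card_inserted = False
        self.authenticated = False

    def insert_card(self):
        self.card_inserted = True

    def enter_pin(self, pin):
        self.authenticated = self.card_inserted and pin == self.correct_pin
        return self.authenticated

    def withdraw(self, amount):
        if self.authenticated and 0 < amount <= self.balance:
            self.balance -= amount
            return amount
        return 0

def test_cash_withdrawal_flow():
    atm = AtmSession()
    atm.insert_card()                  # Step 2: put ATM card inside
    assert atm.enter_pin("1234")       # Step 3: enter correct PIN
    assert atm.withdraw(1000) == 1000  # Steps 4-6: choose withdrawal, enter amount, take cash
    assert atm.balance == 4000

if __name__ == "__main__":
    test_cash_withdrawal_flow()
    print("Cash withdrawal business flow passed")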

In the above example, a domain tester is not concerned about testing everything in the design;
rather, he or she is interested in testing everything in the business flow.

Generally, domain testing is done after all components are integrated and after the product has
been tested using other black box approaches (such as equivalence partitioning and boundary value
analysis). Hence the focus of domain testing has to be more on the business domain to ensure that
the software is written with the intelligence needed for the domain. To test the software for a
particular “domain intelligence,” the tester is expected to have the intelligence and knowledge of
the practical aspects of business flow. This will reflect in better and more effective test cases which
examine realistic business scenarios, thus meeting the objective and purpose of domain testing.
