UNIT 6
Software Testing
What is Software Testing?
• "Testing is the process of executing a program
with the intent of finding errors."
• Testing vs. debugging: testing uncovers errors;
debugging locates and removes their causes.
Objectives of Testing
1.Software Quality Improvement
2.Verification & Validation
3.Software Reliability Estimation
Testing Principles
• Tests should be based on customer requirements
• Test cases should be planned before testing begins
• Testing should start "in the small" and progress toward testing "in the large"
• Testing should be done by an independent third party
• Assign the best people to testing
• Document test cases and test results
• Provide expected results where possible.
Why Testing is important?
• Testing is a process that often requires more effort
than any other software engineering activity.
• Testing is a set of activities that can be planned in
advance and conducted systematically.
• If it is not conducted properly (or according to
organizational rules), time is wasted and, even
worse, new errors may be introduced.
• This can leave many undetected errors in the
system being developed. Hence, performing
testing by adopting systematic strategies is
essential during software development.
6.1 Strategic approach to software testing
• A testing strategy provides a process that describes,
for the developer, quality analysts, and the customer,
the steps conducted as part of testing. A testing
strategy includes-
1.Test planning
2.Test case design
3.Test execution
4.Data collection
5.Effectiveness evaluation
• The strategic approach for software testing can
be -
1.Just before starting the testing process, formal
technical reviews must be conducted. This eliminates
many errors before the actual testing process begins.
2.At the beginning, the various components of the system
are tested; then gradually the interfaces are tested, and
finally the entire computer-based system is tested.
3.Different testing techniques can be applied at different
points in time.
4.The developer of the software conducts testing. For
large projects, an independent test group (ITG) also
assists the developer.
6.2 Verification and validation
• Verification refers to the set of activities that ensure
the software correctly implements a specific function
("Are we building the product right?").
• Validation refers to the set of activities that ensure
the software meets customer requirements
("Are we building the right product?").
6.3 Organizing for software testing
• Testing needs to be conducted from the beginning of
the software development process.
• The software developer should test each individual unit
of the software system for proper functioning. This is
called unit testing. That means s/w developers must be
involved in unit testing activities.
• In many cases the s/w developer also performs
integration testing. Thus the complete s/w architecture
is tested by the s/w developers during the development
phases.
• Then the independent test group (ITG) gets involved in
the s/w testing activity.
6.4 Criteria for completion of Testing
• Entry criteria:
S/w testing should start early in the SDLC. This helps to
capture and eliminate defects in the early stages of the
SDLC, i.e. the requirement-gathering and design phases.
An early start to testing helps to reduce the number of
defects and ultimately the overall cost in the end.
Entry criteria for testing are defined as "specific
conditions or ongoing activities that must be present
before a process can begin."
The software testing life cycle (STLC) specifies the entry
criteria required during each testing phase.
It also defines the time interval, or the expected
amount of lead time, needed to make each entry-criteria
item available to the process.
The following inputs are necessary for the entry criteria:
1.The requirements documents
2.Complete understanding of the application flow
3.Test plan
• Exit criteria:
Exit criteria can be defined as "the specific conditions
or ongoing activities that should be fulfilled before
completing the s/w testing life cycle".
The exit criteria can also identify intermediate
deliverables.
The following exit criteria should be considered for
completion of a testing phase:
o Ensuring all critical test cases are passed.
o Achieving complete functional coverage.
o Identifying and fixing all high-priority defects.
The outputs produced by meeting the exit criteria are-
1.Test summary report.
2.Test logs.
6.5 Strategic issues
• Before testing starts, specify product requirements
appropriately.
• Specify testing objectives clearly.
• Identify the categories of users of the s/w and the role
of each with reference to the s/w system.
• Develop a test plan that emphasizes rapid-cycle testing.
• Build robust s/w that is designed to test itself.
• Use effective formal reviews before the actual testing
process begins.
• Conduct formal technical reviews to assess the test
strategy and test cases.
6.6 Testing strategies for conventional software
• Various testing strategies for conventional s/w
are-
1.Unit testing
2.Integration testing
3.Validation testing
4.System testing
1.Unit Testing:
In this type of testing, techniques are applied to detect
errors in each software component individually.
2.Integration testing:
It focuses on issues associated with verification and program
construction as components begin interacting with one another
Top-Down Integration
1. The main control module is used as a test driver, and
stubs are substituted for all modules directly
subordinate to the main module.
2. Depending on the integration approach selected
(depth-first or breadth-first), subordinate stubs are
replaced by actual modules one at a time.
Problems with Top-Down Integration
• Many times, calculations are performed in the
modules at the bottom of the hierarchy
• Stubs typically do not pass data up to the higher
modules
• Delaying testing until lower-level modules are ready
usually results in integrating many modules at the
same time rather than one at a time
• Developing stubs that can pass data up is almost as
much work as developing the actual module
Bottom-Up Integration
• Integration begins with the lowest-level modules,
which are combined into clusters, or builds, that
perform a specific software subfunction
• Drivers (control programs written for testing) are
written to coordinate test-case input and output
• The cluster is tested
• Drivers are removed and clusters are combined,
moving upward in the program structure
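The roles of a stub (top-down) and a driver (bottom-up) can be sketched in Python. This is illustrative only; the names process_order and tax_service are hypothetical, not from the slides:

```python
# --- Stub: stands in for a lower-level module that is not yet ready ---
def tax_service_stub(amount):
    """Replaces the real tax-calculation module; returns a canned value."""
    return 0.0  # canned answer so the caller can be tested in isolation

# Module under test: calls a subordinate module through tax_fn
def process_order(amount, tax_fn):
    """Computes an order total using whatever tax module is supplied."""
    return amount + tax_fn(amount)

# --- Driver: coordinates test-case input and output (bottom-up view) ---
def driver():
    """Feeds test inputs to the module under test and checks the outputs."""
    result = process_order(100.0, tax_service_stub)
    assert result == 100.0  # with the stub's canned tax of 0.0
    return result

driver()
```

Once the real tax module exists, the stub is replaced by it and the driver is removed, mirroring the integration steps above.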
3.Validation testing:
It provides assurance that the s/w meets all functional,
behavioral, and performance requirements stated in the
validation criteria.
4.System testing:
In system testing, all the elements forming the computer-based
system are tested as a whole.
• Recovery testing
– checks system’s ability to recover from failures
• Security testing
– verifies that system protection mechanisms prevent improper
penetration or data alteration
• Stress testing
– program is checked to see how well it deals with abnormal
resource demands
• Performance testing
– tests the run-time performance of software
6.7 Testing strategies for object oriented software
• The object-oriented testing strategy is identical
to the conventional testing strategy.
• The strategy is to start testing "in the small" and
work outward toward testing "in the large".
• The basic unit of testing is the class, which
contains attributes and operations.
• These classes integrate to form an object-oriented
architecture. Classes that work together in this
architecture are called collaborating classes.
6.7.1 Unit testing in OO context
• A class is an encapsulation of data attributes and a
corresponding set of operations.
• An object is an instance of a class. Hence objects also
carry data attributes and operations.
• In object-oriented software, the focus of unit testing is
the class or object.
• In the object-oriented context an operation cannot be tested
as a single isolated unit, because one operation can be defined
in one particular class and used by multiple classes at the
same time.
Eg. Consider the operation display(). This operation may be
defined in a superclass and at the same time used (or overridden)
by multiple derived classes. Hence it is not possible to test
the operation as a single isolated module.
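The display() situation above can be sketched in Python (the class names Shape, Circle, and Square are hypothetical illustrations, not from the slides):

```python
# display() is defined in a superclass; derived classes inherit or
# override it, so it behaves differently depending on the class context.
class Shape:
    def display(self):
        return "shape"

class Circle(Shape):
    def display(self):          # overridden: behavior differs per class
        return "circle"

class Square(Shape):
    pass                        # inherits display() unchanged

# Unit-testing display() therefore means testing it in the context of
# every class that defines or inherits it, not as one isolated module:
for obj, expected in [(Shape(), "shape"),
                      (Circle(), "circle"),
                      (Square(), "shape")]:
    assert obj.display() == expected
```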
6.7.2 Integration testing in OO context
• There are two strategies used for integration testing,
namely-
1. Thread-based testing 2. Use-based testing
• Thread-based testing integrates the set of classes that
respond to one input or event for the system. Each thread is
then tested individually.
• In use-based testing, independent classes and
dependent classes are tested.
• The independent classes are those that use very few
(if any) server classes; the dependent classes are those
that use the independent classes.
• In use-based testing the independent classes are tested
first, and testing then proceeds to the dependent classes.
• Drivers and stubs:
In the OO context, drivers can be used to test operations
at the lowest level and to test whole groups of classes.
• A stub can be used in situations in which collaboration
between classes is required but one collaborating class
is not yet fully implemented.
• Cluster testing is one step in integration testing
in the OO context. In this step, a group of collaborating
classes is tested together in order to uncover errors.
6.8 Test strategies for WebApps
• The testing strategy suggests applying the following basic
principles of software testing-
1. The content model must be reviewed in order to uncover
errors.
2. The interface model is reviewed to ensure that all use
cases are accommodated.
3. The design model is reviewed to uncover navigation
errors.
4. The user interface must be tested to uncover and remove
navigation errors.
5. Unit testing is done for selected functional components.
6. Navigation must be tested across the complete web
architecture.
7. The web application is tested in different environmental
configurations.
8. Security tests must be conducted to probe vulnerabilities
of the web application.
9. Performance of the web application must be measured using
performance testing.
10. Finally, the web application is tested by a controlled
and monitored group of end users.
6.9 Validation testing
• The integrated software is tested based on
requirements to ensure that the desired product
is obtained.
• In validation testing the main focus is to uncover
errors in
- System input/output
- System functions and information data
- System interfaces with external parts
- User interfaces
- System behaviour and performance
• Software validation can be performed through a series
of black-box tests.
• After performing the validation tests, one of two
conditions exists:
1. The function or performance characteristics are
according to the specification and are accepted.
2. A deviation from the specification is uncovered and a
deficiency list is created. The deficiencies can then be
resolved by establishing proper communication with the
customer.
• Finally, in validation testing a review is conducted to
ensure that all elements of the software configuration
are developed as per requirements. This review is called
a configuration review or audit.
6.9.1 Acceptance testing
• Acceptance testing is a kind of testing conducted to
ensure that the software works correctly in the user's
work environment.
• Types of acceptance testing
1. Alpha testing 2. Beta testing
1. Alpha testing:
• It is testing in which a version of the complete software
is tested by the customer under the supervision of the
developer.
• This testing is performed at the developer's site. The
software is used in a natural setting in the presence of
the developer.
• This test is conducted in a controlled environment.
2. Beta testing:
• Beta testing is testing in which a version of the software
is tested by the customer without the developer being
present.
• This testing is performed at the customer's site. As the
developer is not present during testing, it is not
controlled by the developer.
• The end users record problems and report them to the
developer. The developer then makes the appropriate
modifications.
6.9.2 Validation test criteria
• Validation is a technique to evaluate whether the final
built software product fulfils the customer requirements.
• A test plan is created that contains the classes of tests to
be conducted, and a test procedure defines the specific
tests to be conducted.
• After each validation test, one of two possible
conditions exists:
1.The function or performance characteristics conform to
specification and are accepted. Or
2.A deviation from specification is reported. In other
words, a list of the requirements that are not fulfilled
by the system is produced.
6.9.3 Configuration review
• The configuration review is an important element of the
validation process.
• The main purpose of this review is to ensure that
all the elements of the software configuration
have been properly developed and catalogued.
• The configuration review is also called an audit.
Testing Types
• Static vs. Dynamic
• Black Box vs. White Box
Static vs Dynamic Testing
Static Testing
---This type of testing is carried out before putting the
software into action. Static testing looks for errors in
algorithms, code, or documents.
---This testing is done by the writer or developer of the
software or by testers, and is carried out through
walkthroughs, code reviews, or visual inspection.
Dynamic Testing
---This type of testing is carried out once the software has been
fully compiled and loaded onto the system.
---In dynamic testing the software is checked for the consistency
of its input and output parameters by executing it, often with
the help of other software (test tools).
1. Black box testing:
• Black box testing is used to demonstrate that
the software functions are operational. As
the name suggests, in black box testing it is
checked whether input is accepted properly
and output is produced correctly.
• The major focus of black box testing is on
functions, operations, external interfaces,
external data and information.
2.White box testing:
• In white box testing the procedural details are
closely examined.
• In this testing the internals of the software are
tested to make sure that they operate according
to specifications and designs.
• Thus the major focus of white box testing is on
internal structures, logic paths, control flows,
data flows, internal data structures, conditions,
loops, etc.
6.11 White-box testing
• Path testing: an attempt to choose test inputs that
together execute each path in the code at least once
– path testing is a white-box technique
– What would path testing look like for daysInMonth(month,
year)?
some ideas:
• error input: year < 1, month < 1, month > 12
• one month from [1, 3, 5, 7, 8, 10, 12] (the 31-day months)
• one month from [4, 6, 9, 11] (the 30-day months)
• month 2 (February)
– in a leap year, not in a leap year
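The daysInMonth example above can be sketched in Python, with one test input per path (the function body here is a plausible implementation written for illustration; the slides only name the function):

```python
def days_in_month(month, year):
    """Return the number of days in a month; raise on invalid input."""
    if year < 1 or month < 1 or month > 12:
        raise ValueError("invalid month/year")        # error path
    if month in (1, 3, 5, 7, 8, 10, 12):
        return 31                                     # 31-day path
    if month in (4, 6, 9, 11):
        return 30                                     # 30-day path
    # February: Gregorian leap-year rule
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    return 29 if leap else 28

# One test input per path, as suggested above:
assert days_in_month(1, 2021) == 31      # a 31-day month
assert days_in_month(4, 2021) == 30      # a 30-day month
assert days_in_month(2, 2020) == 29      # February, leap year
assert days_in_month(2, 2021) == 28      # February, non-leap year
try:
    days_in_month(13, 2021)              # error path
except ValueError:
    pass
```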
Types of Black-Box Testing
• Positive/Negative- checks both good/bad results
• Boundary value analysis
• Decision tables
• Equivalence partitioning- group related inputs/outputs
Boundary Value Analysis
• Boundary value analysis is done to check boundary
conditions.
• Boundary value analysis is a testing technique in
which elements at the edges of the domain are
selected and tested.
• Using boundary value analysis, instead of focusing on
input conditions only, test cases are also derived from
the output domain.
• Boundary value analysis is a test-case design
technique that complements the equivalence
partitioning technique.
• Create test cases to test the boundaries of the
equivalence classes
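A minimal boundary-value sketch in Python, assuming a hypothetical valid input range of 1..100 (the range and the accept function are illustrative, not from the slides):

```python
def accept(value):
    """Hypothetical validator: accepts values in the range 1..100."""
    return 1 <= value <= 100

# Boundary-value test cases: at, just below, and just above each edge.
cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in cases.items():
    assert accept(value) == expected
```

Off-by-one errors (e.g. writing `<` instead of `<=`) are caught precisely by the cases at 1 and 100, which is why the edges are tested rather than arbitrary interior values.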
Decision tables
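A decision table maps each combination of conditions to an action, and each column (combination) becomes a test case. As a minimal sketch in Python (the membership/order conditions and the discount rule are hypothetical, invented for illustration):

```python
# Decision table: (is_member, large_order) -> discount percent.
# Each key is one column of the table; each entry yields one test case.
decision_table = {
    (True,  True):  20,
    (True,  False): 10,
    (False, True):  5,
    (False, False): 0,
}

def discount(is_member, order_total):
    """Look up the action for the combination of conditions that holds."""
    return decision_table[(is_member, order_total > 100)]

# One test case per rule (column) of the table:
assert discount(True, 150) == 20
assert discount(True, 50) == 10
assert discount(False, 150) == 5
assert discount(False, 50) == 0
```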
Equivalence testing
• Equivalence Partitioning:
– A black-box test technique to reduce the number of required
test cases.
– Steps in equivalence testing:
• identify classes of inputs with the same behavior
• test at least one member of each equivalence class
• assume the behavior will be the same for all members of the class
– Criteria for selecting equivalence classes:
• coverage: every input is in some class
• disjointedness: no input is in more than one class
• representation: if an error occurs with one member of a class,
it will occur with all members
Equivalence Partitioning
• First-level partitioning: valid vs. invalid test cases
• Partition the valid and invalid test cases into equivalence
classes
• Create a test case for at least one value from each
equivalence class
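The partitioning steps above can be sketched in Python for a hypothetical age validator (the validator and the 0..120 range are illustrative assumptions, not from the slides):

```python
def valid_age(age):
    """Hypothetical validator: accepts ages in the range 0..120."""
    return 0 <= age <= 120

# Equivalence classes: one valid class (0..120) and two invalid
# classes (negative, and greater than 120). One representative
# value is tested per class, per the partitioning steps above.
assert valid_age(35)       # valid class
assert not valid_age(-5)   # invalid class: negative
assert not valid_age(200)  # invalid class: too large
```

Three test cases cover the whole input space, instead of one test per possible age value, which is the point of the technique.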