Unit & Integration Testing
Problems that May Occur
testing is not regarded as a necessary,
separate activity that accompanies the
development process
personnel are not properly trained
testing is not a trivial process
training is necessary
the lack of procedures
the lack of testing and debugging tools
Principles
use of metrics
identify the parts of the program with a high
density of errors
could reveal a complex or incorrectly specified
problem
helps estimate the time and resources
needed for testing subsequent versions
use of code inspection
before testing
Principles (continued)
testing reuse
what can be reused
test plans
test procedures
test cases etc.
where can we reuse them
subsequent versions
higher testing levels
the documentation is essential
Testing Approaches
non-incremental
each module (unit) independently from the rest
then all modules together - the final program
(integration)
incremental
modules already tested are used in testing a new
module
the order in which modules are tested is very
important
Incremental vs. Non-incremental (1)
how to test a module
in theory - independently from anything else
in practice - it has to work together with other
modules
which must be at least simulated when testing the
current module
specific code must be written to simulate the
other modules
driver - module that calls the current one
stub - module called by the current one
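The two kinds of scaffolding can be sketched in a few lines. A minimal Python sketch with hypothetical names (`compute_report` as the module under test, `fetch_records` as the module it normally calls):

```python
# Stub: replaces fetch_records, a module CALLED BY the one under
# test; it returns canned data instead of touching a real database.
def fetch_records_stub(query):
    return [("item-1", 10), ("item-2", 32)]

# Module under test (hypothetical), shown only so the sketch runs.
def compute_report(fetch_records):
    records = fetch_records("all")
    return sum(qty for _, qty in records)

# Driver: replaces the module that CALLS the one under test; it
# feeds a test input and checks the result.
def run_driver():
    total = compute_report(fetch_records_stub)
    assert total == 42, f"unexpected total: {total}"
    return total

print(run_driver())  # 42
```

The stub simulates what the tested module depends on; the driver simulates what depends on it.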
Incremental vs. Non-incremental (2)
what kinds of modules developers write
general class hierarchies
MFC, VCL, .NET Framework
interact with virtually any other modules
classes developed for specific applications
the most common case
interact with certain well-defined modules
not really general-purpose modules
although theory says so...
Incremental vs. Non-incremental (3)
writing driver/stub code requires significant
effort
non-incremental testing
for each tested module, additional code must be
written for all adjacent modules
incremental testing
modules already tested are used in testing the
current module
no driver/stub code needs to be written for them
Incremental vs. Non-incremental (4)
detecting inter-module interface errors
non-incremental testing
only when the program is tested as a whole
incremental testing
each time we use already tested modules
the detected errors may require rewriting some
modules that were already tested
the problem is discovered earlier
Incremental vs. Non-incremental (5)
inter-module interface errors
incremental testing - allows determining the
module that is responsible
non-incremental testing - the responsible module
is unknown
debugging
defects are easier to locate if we know the
module in which they arise
errors in module implementation
both approaches are equally efficient
Incremental vs. Non-incremental (6)
processor time consumption
non-incremental testing - what runs
the module being currently tested
the driver/stub code
incremental testing - what runs
the module being currently tested
the modules it interacts with
the driver/stub code is simpler than the
modules it replaces
Incremental vs. Non-incremental (7)
the parallelization of the testing process
non-incremental testing
all modules can be tested in parallel
incremental testing
only part of the modules can be tested in
parallel
Example
non-incremental - all modules in parallel
incremental (variant)
C, E, F
B, D
A
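The batches above can be derived mechanically. A sketch, assuming the call graph implied by the surrounding slides (an assumption: A calls B, C, D; B calls E; D calls F); a module is ready for bottom-up testing once every module it calls has been tested, and modules in the same batch can be tested in parallel:

```python
# Call graph (assumed from the example): caller -> modules it calls.
calls = {
    "A": ["B", "C", "D"],
    "B": ["E"],
    "C": [],
    "D": ["F"],
    "E": [],
    "F": [],
}

def bottom_up_batches(calls):
    """Group modules into batches testable in parallel: a module is
    ready once all modules it calls have already been tested."""
    tested, batches = set(), []
    while len(tested) < len(calls):
        ready = sorted(m for m, deps in calls.items()
                       if m not in tested and all(d in tested for d in deps))
        batches.append(ready)
        tested.update(ready)
    return batches

print(bottom_up_batches(calls))  # [['C', 'E', 'F'], ['B', 'D'], ['A']]
```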
Incremental Testing
compared to non-incremental testing
advantages are more important than drawbacks
preferred in most cases
main alternatives
top-down
bottom-up
Top-down Testing (1)
testing begins with the topmost module
which is not called by any other module
at each step, how do we decide the next
module to be tested?
there is no clear, invariable rule
the principle - choose a module for which at
least one of the calling modules has already
been tested
Top-down Testing (2)
critical modules must be included in the
testing process as soon as possible
complex code
new/modified code
code under a high suspicion of containing
errors
modules responsible for the I/O operations
must also be tested as soon as possible
eases the subsequent testing process
Top-down Testing (3)
back to the previous example
we first test module A
stub code required for modules B, C, and D
we assume that module B comes next
modules A and B are tested together
stub code required for modules C, D, and E
is the stub code for modules C and D the same as
the code used when testing module A alone?
Top-down Testing (4)
testing module A
top modules do not perform I/O operations
test data - supplied by module B (for example)
how can we supply multiple test cases?
multiple variants of stub code for module B
flexible code (e.g., reading data from file)
conclusion - writing stub code is not simple
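One way to make a single stub flexible is to have it read its canned replies from a data file, so the same stub body serves many test cases. A hypothetical sketch (`module_a` standing for the tested module, the generated stub standing for module B):

```python
import json, os, tempfile

def make_stub(path):
    """Stub whose successive return values come from a data file,
    so one stub body covers many test cases."""
    with open(path) as f:
        replies = iter(json.load(f))      # e.g. [5, 0, -3]
    return lambda *args: next(replies)    # one canned reply per call

# Hypothetical module under test: calls module B once per item.
def module_a(items, module_b):
    return [module_b(x) >= 0 for x in items]

# Write one test-case file, then run the test against it.
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as f:
    json.dump([5, 0, -3], f)

result = module_a(["x", "y", "z"], make_stub(path))
os.remove(path)
print(result)  # [True, True, False]
```

A new test case is then just a new data file, not a new stub version.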
Problems
we assume that module E implements the
I/O operations
we want to test module F
can we achieve all desired test cases for
module F by supplying appropriate input
values in module E?
if so, how difficult is it?
data is processed by intermediate modules
Problems (continued)
the test results are also available through
I/O operations
thus, through module E
once again, data is modified on its way
between modules F and E
by intermediate modules
can we analyze the outcomes correctly?
we see the failures, we want to infer the defects
Problems (continued)
risky decisions that can be made
begin testing before code has been fully written
e.g., we wrote modules A-D, so we test them
we do not wait for modules E-F to be written
testing may lead to changes in the modules
incomplete module testing
e.g., we do not test module A completely
we wait until we have the I/O module
but what if we forget to perform the complete tests?
Bottom-up Testing (1)
testing begins with the modules situated at
the lowest level
which do not call any other module
at each step, how do we decide the next
module to be tested?
again, there is no unique rule
the principle - choose a module for which all
called modules have already been tested
Bottom-up Testing (2)
critical modules - tested as soon as possible
I/O modules are not essential here
driver (not stub) code must be written
no need for multiple versions of the same
driver
one driver can repeatedly call the module
being tested
driver code is usually easier to write than stub
code
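A single driver that loops over a table of test cases illustrates why driver code tends to be simpler than stub code; `parse_price` is a hypothetical module under test:

```python
def parse_price(text):
    # hypothetical module under test: "12.50" -> 1250 (cents)
    units, _, cents = text.partition(".")
    return int(units) * 100 + int(cents or 0)

def driver(cases):
    # one driver, repeatedly calling the tested module with many cases
    failures = []
    for text, expected in cases:
        actual = parse_price(text)
        if actual != expected:
            failures.append((text, expected, actual))
    return failures

cases = [("12.50", 1250), ("0.99", 99), ("7", 700)]
print(driver(cases))  # [] -> all cases passed
```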
Comparison
what do we prefer?
usually - incremental testing
top-down or bottom-up?
no clear advantage for any of the variants
if major errors occur
in the upper modules - top-down
in the lower modules - bottom-up
e.g., a new version of an existing program
we have information about the distribution of errors
Proper Testing (1)
test case inspection
if the expected results are incorrect, we can
draw wrong conclusions from testing
using testing tools
easier to write driver code
pay attention to side effects
the program does things it is not supposed to do
hard to detect
e.g., variables whose values are changed,
although they shouldn't be
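Such side effects can be caught by snapshotting state the function is not supposed to touch and comparing it after the call; a hypothetical example:

```python
import copy

config = {"mode": "fast", "retries": 3}   # state the function must not touch

def buggy_normalize(values, cfg=config):
    cfg["mode"] = "safe"                  # unintended side effect
    return [v / max(values) for v in values]

before = copy.deepcopy(config)
buggy_normalize([1, 2, 4])
changed = config != before
print("side effect detected:", changed)   # side effect detected: True
```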
Proper Testing (2)
developers should not test their own code
if we do not have separate testing teams
everyone tests the code written by someone else
debugging
must be performed by the author of the code
tests - designed to be reusable
modules with many errors discovered will
continue to have many errors in the future
System Testing
Goal
to compare the final system with the initial
objectives
prerequisites
objectives have been clearly stated
in a measurable manner
what we are looking for in this stage
errors in translating the objectives into product
specifications
not programming errors
Why Is It Hard?
objectives say nothing about the structure of
the program or the functions it contains
there is no methodology for translating the
objectives into specifications
there is no methodology for designing the
test cases
Who Performs the Testing?
not the developers
in theory - end-users
they usually do not possess the necessary
knowledge for testing
in practice - testers and users
system designers can also be part of the testing
if possible - a company other than the one
that developed the system
Types of Test Cases
what do we test during system testing?
several categories of test cases
not all are applicable for each real program
the characteristics of the program must be
analyzed
Facility Testing
determine whether each feature indicated by
the objectives was implemented
in general, can be done manually
by studying the documentation
based on a checklist
all objectives are reviewed, one by one, and the
features are identified
Volume Testing
tests the system's ability to handle large data
volumes
examples
compiler - very large source program
linker - program made from a large number of
modules
electronic simulator - circuit with a lot of
components
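A volume test input can often be generated rather than written by hand. A sketch of the compiler example, feeding a mechanically generated 10,000-function source file to Python's built-in `compile()` as a stand-in for the compiler under test:

```python
def generate_large_source(n_functions):
    # mechanically generate a source file with n_functions definitions
    parts = [f"def f{i}(x):\n    return x + {i}\n" for i in range(n_functions)]
    return "".join(parts)

src = generate_large_source(10_000)
compile(src, "<volume-test>", "exec")     # the compiler under test must cope
print(src.count("def f"))                 # 10000
```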
Stress Testing
tests the program's response to heavy load
(in terms of computational resources)
large amount of calculations, not data
applies to several kinds of programs
real-time applications
applications with a high degree of interactivity
process control applications
Stress Testing - Examples (1)
air traffic control
testing with the maximum number of aircraft
supported by the application
testing with a number of aircraft larger than the
maximum supported by the application
testing with a large number of aircraft arriving
in a short period of time
testing with multiple aircraft which experience
problems at the same time
Stress Testing - Examples (2)
operating system
testing with the maximum number of threads
supported by the system
Web application
testing with a large number of concurrent
accesses
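The web-application case can be sketched in miniature with a thread pool hammering a stand-in request handler; a real stress test would aim an HTTP load tool at the deployed server:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

lock = threading.Lock()
served = {"count": 0}

def handle_request(i):
    # stand-in for the web application's request handler
    with lock:
        served["count"] += 1
    return i * 2

# 50 worker threads issue 1000 requests concurrently
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(handle_request, range(1000)))

print(served["count"])  # 1000 -> no requests lost under load
```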
Usability Testing (1)
the interface is adapted to the user's
knowledge and working style
the results are delivered by the program in a
useful and intelligible manner
the error messages help the user understand
the cause of the error
the set of interfaces is consistent with respect
to syntax, semantics, data format, etc.
Usability Testing (2)
the way of acquiring input data is redundant
enough to ensure accuracy
the system has a way of confirming that an
input has been received
the options offered by the program are not
useless or too many
navigation through the menus is easy
Security Testing
tests whether the program has security
issues
examples
operating systems
bypassing the memory protection mechanisms
databases
based on the study of applications of the
same kind
design test cases that reproduce their problems
Performance Testing
tests whether the system carries out the
performance-related objectives
examples
response times
throughput
these parameters must be tested in various
conditions of
load
configuration
Storage Testing
tests whether the system meets the storage-related objectives
examples
the necessary storage capacity
the size of the temporary files
what could be of interest
average (long term) necessities
peak load necessities
Configuration Testing
tests whether the program runs properly on
various configurations
I/O devices
network communication
memory size
operating systems
usually not all configurations can be tested
at least the minimal/maximal configurations for
each component
Compatibility/Conversion Testing
tests the relations between the different
versions of the program
feature compatibility
format conversion
essential for the migration to a new version
of the program
examples
databases
systems that involve working with documents
Installability Testing
tests the program's installation procedure
the complexity level
the correctness
particularly necessary if the program has an
automated installation procedure
the user cannot interfere if installation is not
working properly
Reliability Testing
tests whether the program carries out the
reliability-related objectives
e.g., uptime
over 99.9% for some systems
testing is difficult
hard to reproduce the test conditions
formal proof is preferred
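The uptime objective translates directly into a downtime budget, which gives the tester a concrete target to measure against:

```python
# availability objective -> yearly downtime budget
availability = 0.999                    # "over 99.9% uptime"
minutes_per_year = 365 * 24 * 60        # 525,600
downtime_min = (1 - availability) * minutes_per_year
print(round(downtime_min))              # ~526 minutes/year, i.e. under 9 hours
```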
Recovery Testing
tests how well the program can restore its
state after
programming errors
hardware failures
memory parity errors, failed I/O operations
data errors
disturbed communication lines
such errors are caused intentionally, in
order to assess the response of the program
Serviceability Testing
system maintenance-related objectives
program servicing
error logging
diagnostics
memory dumps
the quality of internal documentation (not end-user documentation)
maintenance procedures
Documentation Testing
tests the accuracy of end-user
documentation
testing methods
inspection
use the documentation to design test cases for
the other kinds of testing
start with the examples in the documentation
Procedure Testing
tests the procedures for the human operators
some programs cannot be properly operated
without such procedures
example
database administrator - backup and recovery
procedures
system administrator - installation and software
update procedures
Change Control (1)
unit testing and integration testing are often
performed by the developers
system testing must be carried out by a
specialized team
different from the developer team
eliminating the errors - when should fixes
be made?
Change Control (2)
developers' approach - any error must be
fixed as soon as it has been discovered
requires retesting the system
this cannot be done after each error fix
fixing an error is faster than retesting the system
testing could get out of control
Change Control (3)
testers' approach
testing must be carried out only when the
system development is over
problem - the errors discovered through testing
require changing the system
realistic approach
fixes must be introduced in the system at
regular intervals of time
Acceptance Testing
Goal
to find out to what extent the program
complies with users' requirements
so it has to be carried out by the users
alpha testing - at the developer's location
beta testing - at the user's location
can be combined with system testing
if users are important in system testing
Release Strategies (1)
classic strategy
when acceptance testing is finished, the system
is sent to and installed at all clients
problem - what if errors occur?
pilot
initially - a single client uses the system
or a few clients - small number anyway
then it is sent to all clients
not beta testing - the system is used for
production, not for testing
Release Strategies (2)
gradual implementation
the system is sent to the clients during several
phases
initially - to a small number of clients
then to more clients
finally - to all clients
drawback - takes a long time until all clients get
the product
Release Strategies (3)
phased implementation
successive versions are sent to all clients
each version adds new features
in the end, all users get the final product
drawback - integration is more complex
parallel implementation
the new version runs in parallel with the old
one
the results are compared