Understanding Test Levels
Learning outcomes
• By the end of this unit, students should be able to:
• Explain the role of component testing in determining software quality
• Explain the role of integration testing in determining software quality
• Explain the role of system testing in determining software quality
• Explain the role of acceptance testing in determining software quality.
Test levels
• Software development is a step-by-step process
• During the process, various components are released
• Test Levels – one or more groups of testing activities which are carried out
during different phases of the software development life cycle (SDLC):
• Component Testing
• Integration Testing
• System Testing
• User Acceptance Testing (UAT)
• The objectives of testing determine the test levels (are we testing usability, performance, the relations among components…?)
Component testing
• also referred to as Unit Testing
• Enables a tester to identify and validate various quality parameters required
for a component
• testing of a single program, module, or unit of code.
• usually performed by the developer of the unit
• Stub
• Special-purpose software used to replicate or mimic the behaviour/functionality of a component called by the component being tested
• Driver
• Special-purpose software used to replicate the behaviour of a component that calls the component being tested
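The stub/driver idea can be sketched in code. In this illustrative (hypothetical) example, `price_lookup_stub` stands in for an unavailable pricing component that the unit under test calls, while `driver()` plays the role of the component that would normally call it:

```python
# Hypothetical sketch: component-testing a 'discount' unit in isolation.
# The stub replaces a component that 'discount' calls; the driver
# replicates the component that calls 'discount'.

def price_lookup_stub(item_id):
    """Stub: mimics the (unavailable) pricing component being called."""
    return {"A1": 100.0, "B2": 50.0}[item_id]

def discount(item_id, rate, lookup=price_lookup_stub):
    """Component under test: applies a discount to a looked-up price."""
    price = lookup(item_id)
    return price * (1.0 - rate)

def driver():
    """Driver: replicates the caller of the component under test."""
    assert discount("A1", 0.10) == 90.0
    assert discount("B2", 0.50) == 25.0

driver()
print("component test passed")
```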
Objectives of component testing
• Test functionality – correct functionality as specified in requirements
• Test robustness – a component that is not called correctly should deal with the error in such a way that the entire system is not affected
• Test efficiency (efficient use of computer resources)
• Test maintainability – components should be easy to modify
(modularity, program comments, adherence to standards…)
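A minimal sketch of the functionality and robustness objectives above, using a hypothetical `safe_divide` component that must contain errors rather than let them crash the caller:

```python
def safe_divide(a, b):
    """Component under test: must handle incorrect calls gracefully
    so that a failure does not affect the entire system."""
    try:
        return a / b
    except (ZeroDivisionError, TypeError):
        return None  # contained failure, signalled to the caller

# Functionality: correct behaviour for valid input
assert safe_divide(10, 4) == 2.5
# Robustness: invalid calls are contained, not propagated
assert safe_divide(10, 0) is None
assert safe_divide("x", 2) is None
print("robustness checks passed")
```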
Importance of component testing
• Enables developers to identify errors early during the development life
cycle thereby reducing overall development cost of the software product
• Effective component testing eliminates a considerable number of defects
• A component testing strategy specifies:
• The test techniques and the rationale behind their choice
• The completion criteria for component testing and the rationale for their choice so
as to streamline the testing process
• The degree of independence required during the specification of test cases
• The environment in which component tests are executed
• A detailed description of all activities to be performed.
Component Testing Considerations
• Always create a test plan before component testing
• Consider component’s significance and risks
• You don’t want to spend too much time testing less significant
components
Integration testing
• Testing performed to detect defects in the interfaces and interactions among the integrated components of a software system.
• Validates that multiple parts of the system interact according to the
system design
• Categories of Integration Testing:
• Component Integration Testing
• detects problems in integrating different components
• System Integration Testing
• detects problems in interfaces and interaction with external systems
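A component integration test wires real components together and checks the data they exchange; the defect sought lies in the interaction, not in either component alone. The names here are purely illustrative:

```python
# Two real components, integrated and tested together.

def parse_record(line):
    """Component A: parses 'name,score' text into a dict."""
    name, score = line.split(",")
    return {"name": name.strip(), "score": int(score)}

def grade(record):
    """Component B: consumes A's output and assigns a result."""
    return "pass" if record["score"] >= 50 else "fail"

# Integration test: A's output format must match B's expected input.
record = parse_record("Alice, 72")
assert grade(record) == "pass"
assert grade(parse_record("Bob, 41")) == "fail"
print("integration test passed")
```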
Cont’d
• The following are implications of omitting component testing:
• In such situations, most failures originate from functional faults
• Because individual components cannot be accessed properly, many failures cannot be triggered
• When a test fails, it is difficult or impossible to trace the origin of the fault or to isolate its cause
Objectives of Integration Testing
• Test interface formats – detect incompatible interface formats between integrated components (e.g. caused by missing files)
• Test data exchange – detect defects such as components not transmitting any data. These defects arise during the transmission of data between components.
Integration strategies
• Top-down Integration
• Incremental approach
• Begin by testing higher level components
• Lower level components are simulated with stubs
• Tested components are then used as drivers to test lower level components
• Repeat until all lower level components are tested
• Bottom-up Integration
• Incremental approach
• Start with lower level components; use drivers to simulate higher level components.
• Tested components are used to test higher level components.
• Repeat process until all higher level components have been tested
• Functional Incremental Integration
• Components and subsystems are tested in the order in which basic functionalities start working
• Ad hoc Integration
• Components are integrated in the order in which they are developed; uses both stubs & drivers
• Backbone Integration
• A frame or backbone is created first and components are progressively added to it
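The bottom-up strategy above can be sketched in two steps (all names hypothetical): the low-level component is tested first via a driver call, then reused directly when the higher-level component is integrated on top of it:

```python
# Step 1 (bottom-up): test the low-level component with a driver call.
def tax(amount):
    """Low-level component: computes 15% tax."""
    return round(amount * 0.15, 2)

assert tax(100.0) == 15.0  # driver exercising the low-level unit

# Step 2: integrate the higher-level component, which calls the
# already-tested low-level component (no stub needed for 'tax').
def invoice_total(amount):
    """Higher-level component: net amount plus tax."""
    return amount + tax(amount)

assert invoice_total(100.0) == 115.0
print("bottom-up integration steps passed")
```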
Cont’d
Incremental vs. Big Bang Approaches
• Big Bang
• All components are integrated before integration testing begins (therefore no need
for stubs and drivers)
• Testing is done once; defects are difficult to detect and fix
• Incremental
• Testing is done as the system is being assembled
• Early detection of defects (reduces the risk of late defect detection)
• Time-consuming, because drivers and stubs are required
• Monitor – a tool or hardware device that runs in parallel to an assembled component under integration tests
• Which approach do you prefer: incremental or Big Bang?
Find out:
• Factors affecting integration strategy
System testing
• Involves testing the overall system
• Carried out after component and integration testing (in other words:
Performed once all components have been integrated – integrated
system)
• Validates quality of system
• Verifies that user requirements are met
Cont’d
• The process of testing an integrated system to validate that it meets
specified requirements
• Performed in an environment as similar as possible to the expected
functional environment of the system
• Objective of System Testing:
• To evaluate that the integrated system addresses the specified functional and
non-functional requirements (usability, reliability, performance, security)
• Can detect defects related to external hardware and software
interfaces
Testing in live Environment
• Some projects run system tests in the actual (live) operational
environment to save effort and cost
• Disadvantages:
• Failures during testing may cause damage to the operational environment
• Other systems running in the operational environment may change the test
conditions
• Simulator (or Emulator)
• Device, program or system that is used during system testing to mimic the
actual operating environment
• Used to replace software and hardware components that may be missing or
unavailable
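A simulator can be sketched as a small fake that mimics a missing environment component. In this hypothetical example, a temperature sensor that is unavailable during system testing is replaced by a simulator with the same interface:

```python
import random

class SensorSimulator:
    """Mimics an unavailable hardware temperature sensor during
    system testing; returns values in the sensor's real range."""
    def __init__(self, seed=0):
        self._rng = random.Random(seed)  # seeded for repeatable tests

    def read_celsius(self):
        return round(self._rng.uniform(18.0, 25.0), 1)

def overheating_alarm(sensor, threshold=24.0):
    """System component under test, unchanged: it only sees the
    sensor interface, not whether the sensor is real or simulated."""
    return sensor.read_celsius() > threshold

sim = SensorSimulator(seed=42)
print("alarm raised:", overheating_alarm(sim))
```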
Challenges of System testing
• Unclear system requirements
• Requirements are not documented; they exist only in people's minds, which makes it hard to evaluate correct system behaviour.
• End result: testers end up gathering information about system behaviour during testing
• Overlooked decisions
• The people involved during system testing may have an entirely different
perspective from the original requirements
• Likelihood of project failure
• Unspecified and undocumented requirements may lead to developers failing to get clear objectives. End result: a system that does not meet user requirements is built.
User Acceptance Testing (UAT)
• System testing is carried out by technical people (developers and
testing teams). Often technical teams may not identify true
requirements of users.
• Enter the users: when users are involved, you have UAT
• Users can suggest requirements and gain higher confidence in the product, and ultimately the software company can deliver a product that is highly accepted by users
• UAT captures user requirements in a directly verifiable way
• UAT identifies problems which unit or integration tests might have missed.
Cont’d
• Testing that helps determine whether or not to formally accept the
completed system
• Typically performed by end users either independently or in
coordination with the testing team
• Performed in an environment that resembles the actual operational environment
• Evaluates parameters such as functionality, performance, interface,
security, etc.
• Ensures that the system meets the true needs of the user – not just the system specifications
Focus areas of UAT
• Four important components that render a product fit for use are:
• Reliability, Consistency and Usability of data
• Ability to match skills, aptitude and desire of user
• Optimal use of technology
• Efficacy of programming logic
Acceptance criteria
• A set of conditions that a system must meet in order for it to be
accepted by end users.
• Contractual in nature (must be agreed upon)
• Enable developers to identify user requirements
• Help developers plan, schedule, and execute UATs at appropriate points in the SDLC
• Acceptance Criteria Categories
• Functional Requirements
• Performance Requirements
• Interface Quality Requirements
• Overall software quality requirements
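Acceptance criteria are most useful when they are directly verifiable. A hypothetical sketch checking one functional criterion and one performance criterion for an illustrative `search` feature:

```python
import time

def search(catalogue, term):
    """Hypothetical feature under acceptance: case-insensitive search."""
    return [item for item in catalogue if term.lower() in item.lower()]

catalogue = ["Red shirt", "Blue shirt", "Red hat"]

# Functional criterion: search must match regardless of case.
assert search(catalogue, "RED") == ["Red shirt", "Red hat"]

# Performance criterion: a search must complete within 100 ms.
start = time.perf_counter()
search(catalogue, "shirt")
assert (time.perf_counter() - start) < 0.1

print("acceptance criteria met")
```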
Types of Acceptance testing
• Contract Acceptance Testing
• Contractual
• Determine whether the terms defined in the development contract have
been met
• Regulation Acceptance Testing
• Also known as compliance acceptance testing
• Performed against any regulations e.g. government, legal or safety regulations
• User Acceptance Testing
• Whether software is fit for use by end users
Cont’d
• Operational Acceptance Testing
• Also known as production acceptance testing
• Typically performed by system administrators or operators
• Include testing of backup and restore cycles, disaster recovery mechanisms, etc.
• Alpha Field Testing
• A preliminary software field test carried out by a team of users in order to find bugs that
were not found previously through other tests.
• Performed as a form of internal acceptance testing
• Performed by potential customers, users, or independent testers
• Beta Field Testing
• Performed by potential/existing customers at an external site. The development
organization does not assist in this process.
• Performed to get feedback from the market
Cont’d
• Use Case Test Data
• A use case is a set of conditions that determine the operational use of a software product
• Built on the basis of a business transaction
• Acceptance Testing Roles
• Acceptance testing is typically performed by end users in coordination with the testing team. User and tester roles are independent (testers may act as advisers).
• Acceptance Planning
• Created alongside the general project plan in order to ensure that user needs are correctly interpreted and implemented
• Acceptance Decisions
• Users make acceptance decisions
• Occur in stages
• Final decision made when all requirements are met and all documentation delivered
System testing vs Acceptance testing
• Both evaluate the overall performance of the whole software system