SVV Notes Mid Lec1 17

The document discusses various aspects of the inquiry cycle, focusing on prototyping methods, quality assurance, and quality control in software development. It outlines the differences between verification and validation, as well as the importance of inspections and testing in ensuring software quality. Additionally, it highlights the significance of establishing quality standards and attributes early in the development process to meet project needs.

Lecture-2

Inquiry Cycle

Shortcuts in the inquiry cycle

Prototyping
 “A software prototype is a partial implementation constructed primarily to enable customers, users, or developers to learn
more about a problem or its solution.” [Davis 1990]
 “Prototyping is the process of building a working model of the system” [Agresti 1986]

Approaches to prototyping
 Presentation Prototypes
 used for proof of concept; explaining design features; etc.
 explain, demonstrate and inform – then throw away
 Exploratory Prototypes
 used to determine problems, elicit needs, clarify goals, compare design options
 informal, unstructured and thrown away.
 Breadboards or Experimental Prototypes
 explore technical feasibility; test suitability of a technology
 Typically, no user/customer involvement
 Evolutionary (e.g. “operational prototypes”, “pilot systems”):
 development seen as continuous process of adapting the system
 “prototype” is an early deliverable, to be continually improved

Evolutionary Prototyping
 Purpose:
 to learn more about the problem or its solution
 and reduce risk by building parts early
 Use:
 incremental; evolutionary
 Approach:
 vertical - partial implementation of all layers
 designed to be extended/adapted
 Advantages:
 Requirements not frozen
 Return to last increment if error is found
 Flexible(?)
 Disadvantages:
 Can end up with complex, unstructured system which is hard to maintain
 Early architectural choice may be poor
 Optimal solutions not guaranteed
 Lacks control and direction
 Brooks: “Plan to throw one away - you will anyway!”

Throwaway Prototyping
 Purpose:
 to learn more about the problem or its solution…
 discard after desired knowledge is gained
 Use:
 early or late
 Approach:
 horizontal - build only one layer (e.g. UI)
 “quick and dirty”
 Advantages:
 Learning medium for better convergence
 Early delivery → early testing → less cost
 Successful even if it fails!
 Disadvantages:
 Wasted effort if requirements change rapidly
 Often replaces proper documentation of the requirements
 May set customers’ expectations too high
 Can get developed into final product

Reviews
 “Management reviews”
 E.g. preliminary design review (PDR), critical design review (CDR)
 Used to provide confidence that the design is sound
 Attended by management and sponsors (customers)
 Often just a “dog-and-pony show”
Walkthroughs
 “Walkthroughs”
 developer technique (usually informal)
 used by development teams to improve quality of product
 focus is on finding defects

Inspections
 “(Fagan) Inspections”
 a process management tool (always formal)
 used to improve quality of the development process
 collect defect data to analyze the quality of the process
 written output is important
 major role in training junior staff and transferring expertise

Other terms used


 Formal Technical Reviews (FTRs)
 Formal Inspections
 “Formality” can vary:
 informal:
 meetings over coffee
 regular team meetings, etc.
 formal:
 scheduled meetings
 prepared participants
 defined agenda
 specific format
 documented output

Benefits of formal inspection


 Formal inspection works well for programming:
1. For applications programming:
 more effective than testing
 most reviewed programs run correctly first time
 compare: 10-50 attempts for test/debug approach
2. Data from large projects
 error reduction by a factor of 5 (10 in some reported cases)
 improvement in productivity: 14% to 25%
 percentage of errors found by inspection: 58% to 82%
 cost reduction of 50%-80% for V&V (even including cost of inspection)
3. Effects on staff competence:
 increased morale, reduced turnover
 better estimation and scheduling (more knowledge about defect profiles)
 better management recognition of staff ability
 These benefits also apply to requirements inspections
4. Many empirical studies investigated variant inspection processes
5. Mixed results on the relative benefits of different processes

Summary
 Requirement models are the theories about the world.

 Prototyping is the process of building a working model of the system.
 Designs are tests of those theories.
Lecture - 5,6: Product Quality Standards
Product Quality
 The quality of the end product depends upon:
 The “attributes” and characteristics of the software product
 The degree that they fulfill specific project needs
 To ensure that the product meets a defined quality standard:
 Standards and practices for s/w product must be defined early in the development process
 Standards must be specific to software product

Software Attributes
 Reliability  Functionality  Correctness  Testability
 Usability  Maintainability  Portability  Efficiency

Classification of Software Quality Attributes


 Performance Attributes
 Form Attributes
 Processing Attributes
 Functional Attributes
 Operational Integrity Attributes
 Maintainability Attributes

Software Product Quality Models


 McCall’s & Boehm’s S/W Product Quality Model
 The ISO 9126 Standard Quality Model

McCall’s Product Quality Model


 The McCall quality model is organized around three types of quality characteristics:
 Factors (to specify): they describe the external view of the software, as viewed by the users.
 Criteria (to build): they describe the internal view of the software, as seen by the developer.
 Metrics (to control): they are defined and used to provide a scale and method for measurement.
ISO 9126 Standard Quality Model
 The objective of this standard is to provide a framework for the evaluation of software quality.
 ISO/IEC 9126 does not provide requirements for software, but it defines a quality model which is applicable to every
kind of software.
 It defines six product quality characteristics and, in an annex, provides a suggestion of quality sub-characteristics.

Lecture-7: Quality Assurance vs. Quality Control
 Quality Assurance (QA) is process oriented and focuses on defect prevention
 Quality control (QC) is product oriented and focuses on defect identification.

Definition
 QA is a set of activities for ensuring quality in the processes by which products are developed.
 QC is a set of activities for ensuring quality in products. The activities focus on identifying defects in the actual products produced.

Focus on
 QA aims to prevent defects with a focus on the process used to make the product. It is a proactive quality process. A proactive approach focuses on eliminating problems before they have a chance to appear.
 QC aims to identify (and correct) defects in the finished product. Quality control, therefore, is a reactive process. A reactive approach is based on responding to events after they have happened. The difference between these two approaches is the perspective each one provides in assessing actions and events.

Goal
 The goal of QA is to improve development and test processes so that defects do not arise when the product is being developed.
 The goal of QC is to identify defects after a product is developed and before it is released.

How
 QA: Establish a good quality management system, assess its adequacy, and perform periodic conformance audits of the operations of the system.
 QC: Find and eliminate sources of quality problems through tools and equipment so that the customer's requirements are continually met.

What
 QA: Prevention of quality problems through planned and systematic activities, including documentation.
 QC: The activities or techniques used to achieve and maintain the quality of the product, process and service.

Responsibility
 Everyone on the team involved in developing the product is responsible for quality assurance.
 Quality control is usually the responsibility of a specific team that tests the product for defects.

Example
 Verification is an example of QA.
 Validation/software testing is an example of QC.

Statistical Techniques
 Statistical tools and techniques can be applied in both QA and QC. When they are applied to processes (process inputs and operational parameters), they are called Statistical Process Control (SPC) and become part of QA. When they are applied to finished products (process outputs), they are called Statistical Quality Control (SQC) and come under QC.

As a tool
 QA is a managerial tool.
 QC is a corrective tool.

Orientation
 QA is process oriented.
 QC is product oriented.

Summary
 Quality Assurance (QA) refers to the process used to create the deliverables, and can be performed by a manager, client,
or even a third-party reviewer. Examples of quality assurance include process checklists, project audits and methodology
and standards development.
 Quality Control (QC) refers to quality related activities associated with the creation of project deliverables. Quality
control is used to verify that deliverables are of acceptable quality and that they are complete and correct. Examples of
quality control activities include inspection, deliverable peer reviews and the testing process.
 Quality assurance activities are determined before production work begins and these activities are performed while the
product is being developed. In contrast, Quality control activities are performed after the product is developed.
Lecture-8
Verification vs validation
• Verification:
"Are we building the product right"
The software should conform to its specification

• Validation:
"Are we building the right product"
The software should do what the user really requires

The V & V process


• Is a whole life-cycle process - V & V must be applied at each stage in the software process.
• Has two principal objectives
• The discovery of defects in a system
• The assessment of whether or not the system is usable in
an operational situation.

Static and dynamic verification


• Software inspections: concerned with analysis of the static system representation to discover problems (static verification)
• May be supplemented by tool-based document and code analysis
• Software testing: concerned with exercising and observing product behaviour (dynamic verification)
• The system is executed with test data and its operational behaviour is observed

[Figure: static verification is applied to the requirements specification, high-level design, formal specification, detailed design and program; dynamic validation is applied to the prototype and program.]

Program testing
• Can reveal the presence of errors NOT their absence
• A successful test is a test which discovers one or more errors
• The only validation technique for non-functional requirements
• Should be used in conjunction with static verification to provide full V&V coverage

Types of testing
• Defect testing
• Tests designed to discover system defects.
• A successful defect test is one which reveals the presence of defects in a system.
• Statistical testing
• tests designed to reflect the frequency of user inputs. Used for reliability estimation.

Lecture-9
V & V planning
• Careful planning is required to get the most out of testing and inspection processes
• Planning should start early in the development process
• The plan should identify the balance between static verification and testing
• Test planning is about defining standards for the testing process rather than describing product tests

The V-model of development

[Figure: the V-model. The requirements specification, system specification, system design and detailed design stages produce the acceptance test plan, system integration test plan and sub-system integration test plan; module and unit code and test sit at the base; sub-system integration testing, system integration testing and acceptance testing lead back up to service.]
The structure of a software test plan
• The testing processes
• Requirements traceability
• Tested items
• Testing schedule
• Test recording procedures
• Hardware and software requirements
• Constraints

Software inspections
• Involve people examining the source representation with the aim of discovering anomalies and defects
• Do not require execution of a system so may be used before implementation
• May be applied to any representation of the system (requirements, design, test data, etc.)
• Very effective technique for discovering errors

Inspection success
• Many different defects may be discovered in a single inspection. In testing, one defect may mask another, so several executions are required
• Inspections reuse domain and programming knowledge, so reviewers are likely to have seen the types of error that commonly arise

Inspections and testing


• Inspections and testing are complementary and not opposing verification techniques
• Both should be used during the V & V process
• Inspections can check conformance with a specification but not conformance with the customer’s real requirements
• Inspections cannot check non-functional characteristics such as performance, usability, etc.

Lecture-10
Program inspections
• Formalised approach to document reviews
• Intended explicitly for defect DETECTION (not correction)
• Defects may be logical errors, anomalies in the code that might indicate an erroneous condition (e.g. an uninitialized
variable) or non-compliance with standards

Inspection pre-conditions
• A precise specification must be available
• Team members must be familiar with the organisation standards
• Syntactically correct code must be available
• An error checklist should be prepared
• Management must accept that inspection will increase costs early in the software process
• Management must not use inspections for staff appraisal

The inspection process

[Figure: the inspection process: Planning → Overview → Individual preparation → Inspection meeting → Rework → Follow-up.]

Inspection procedure
• System overview presented to inspection team
• Code and associated documents are distributed to inspection team in advance
• Inspection takes place and discovered errors are noted
• Modifications are made to repair discovered errors
• Re-inspection may or may not be required

Inspection teams
• Author: The person who created the work product being inspected.
• Moderator: This is the leader of the inspection. The moderator plans the inspection and coordinates it.
• Reader: The person reading through the documents, one item at a time. The other inspectors then point out defects.
• Recorder/Scribe: The person that documents the defects that are found during the inspection.
• Inspector: The person that examines the work product to identify possible defects.

Lecture-11
Inspection checklists
• Checklist of common errors should be used to drive the inspection
• Error checklist is programming language dependent
• The 'weaker' the type checking, the larger the checklist
• Examples: Initialisation, Constant naming, loop termination, array bounds, etc.

Inspection checks

Data faults
• Are all program variables initialised before their values are used?
• Have all constants been named?
• Should the lower bound of arrays be 0, 1, or something else?
• Should the upper bound of arrays be equal to the size of the array or Size - 1?
• If character strings are used, is a delimiter explicitly assigned?

Control faults
• For each conditional statement, is the condition correct?
• Is each loop certain to terminate?
• Are compound statements correctly bracketed?
• In case statements, are all possible cases accounted for?

Input/output faults
• Are all input variables used?
• Are all output variables assigned a value before they are output?

Interface faults
• Do all function and procedure calls have the correct number of parameters?
• Do formal and actual parameter types match?
• Are the parameters in the right order?
• If components access shared memory, do they have the same model of the shared memory structure?

Storage management faults
• If a linked structure is modified, have all links been correctly reassigned?
• If dynamic storage is used, has space been allocated correctly?
• Is space explicitly de-allocated after it is no longer required?

Exception management faults
• Have all possible error conditions been taken into account?
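To make a few of these checks concrete, here is a small Java fragment annotated with the checklist questions it answers. The class name and the averaging task are invented purely for illustration:

public class ChecklistDemo {
    // "Have all constants been named?" -- yes, no magic number 100 below
    private static final int SAMPLE_COUNT = 100;

    public static double average(double[] readings) {
        double sum = 0.0;                           // data fault check: initialised before use
        for (int i = 0; i < readings.length; i++) { // bound check: < length, not <= length
            sum += readings[i];                     // loop terminates: i increases to a fixed bound
        }
        return readings.length == 0 ? 0.0 : sum / readings.length;
    }

    public static void main(String[] args) {
        double[] samples = new double[SAMPLE_COUNT];
        java.util.Arrays.fill(samples, 19.0);
        System.out.println(average(samples));       // prints 19.0
    }
}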

Automated static analysis


• Static analysers are software tools for source text processing
• They parse the program text and try to discover potentially erroneous conditions and bring these to the attention of the V
& V team
• Very effective as an aid to inspections. A supplement to but not a replacement for inspections

Static analysis checks

Data faults
• Variables used before initialisation
• Variables declared but never used
• Variables assigned twice but never used between assignments
• Possible array bound violations
• Undeclared variables

Control faults
• Unreachable code
• Unconditional branches into loops

Input/output faults
• Variables output twice with no intervening assignment

Interface faults
• Parameter type mismatches
• Parameter number mismatches
• Non-usage of the results of functions
• Uncalled functions and procedures

Storage management faults
• Unassigned pointers
• Pointer arithmetic
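The Java fragment below compiles cleanly yet contains several of the anomalies listed above; a typical analyser (SpotBugs and PMD are common examples for Java) would flag each commented line. All names here are invented for illustration:

public class AnalysisTarget {
    static int counter;                 // declared but never read: "variable never used"

    static String label(int code) {
        String s = "unknown";           // assigned here ...
        s = "code-" + code;             // ... and reassigned with no intervening use
        return s;
    }

    public static void main(String[] args) {
        label(42);                      // "non-usage of the results of functions"
        String t = "  x ";
        t.trim();                       // return value discarded: t itself is unchanged
        System.out.println(t);
    }
}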

Stages of static analysis


• Control flow analysis: checks for loops with multiple exit or entry points, finds unreachable code, etc.
• Data use analysis: detects uninitialised variables, variables written twice without an intervening assignment, variables which are declared but never used, etc.
• Interface analysis: checks the consistency of routine and procedure declarations and their use.
• Information flow analysis: identifies the dependencies of output variables. Does not detect anomalies itself but highlights information for code inspection or review.
• Path analysis: identifies paths through the program and sets out the statements executed in each path. Again, potentially useful in the review process.
• The last two stages (information flow and path analysis) generate vast amounts of information and must be used with care.

Use of static analysis


• Particularly valuable when a language such as C is used which has weak typing and hence many errors are undetected by
the compiler
• Less cost-effective for languages like Java that have strong type checking and can therefore detect many errors during
compilation

Lecture-12
Software Quality
• The degree to which a system, component, or process meets specified requirements.
OR
• The degree to which a system, component or process meets customer or user needs or expectations.

Quality Assurance
• Product and software quality do not happen by accident, and are not something that can be added on after the fact.
• To achieve quality, we must plan for it from the beginning, and continuously monitor it day to day
• This requires discipline
• Methods and disciplines for achieving quality results are the study of Quality Assurance or QA
• Three General Principles of QA
• Know what you are doing
• Know what you should be doing
• Know how to measure the difference

Software Quality Assurance


QA Principle 1: Know What You Are Doing
• In the context of software quality, this means continuously understanding what it is you are building, how you are
building it and what it currently does
• This requires organization, including having a management structure, reporting policies, regular meetings and reviews,
frequent test runs, and so on
• We normally address this by following a software process with regular milestones, planning, scheduling, reporting and
tracking procedures

QA Principle 2: Know What You Should be Doing


• In the context of software quality, this means having explicit requirements and specifications
• These must be continuously updated and tracked as part of the software development and evolution cycle
• We normally address this by requirements and use-case analysis, explicit acceptance tests with expected results, explicit
prototypes, frequent user feedback
• Particular procedures and methods for this are usually part of our software process

QA Principle 3: Know How to Measure the Difference

• In the context of software quality, this means having explicit measures comparing what we are doing to what we should
be doing.

• Achieved using four complementary methods:


• Formal Methods - consists of using mathematical models or methods to verify mathematically specified
properties
• Testing - consists of creating explicit inputs or environments to exercise the software, and measuring its success
• Inspection- consists of regular human reviews of requirements, design, architecture, schedules and code
• Metrics- consists of instrumenting code or execution to measure a known set of simple properties related to
quality

Formal Methods
• Formal methods include formal verification (proofs of correctness), abstract interpretation (simulated execution in a
different semantic domain, e.g., data kind rather than value), state modelling (simulated execution using a mathematical
model to keep track of state transitions), and other mathematical methods
• Traditionally, use of formal methods requires mathematically sophisticated programmers, and is necessarily a slow and
careful process, and very expensive
• In the past, formal methods have been used directly in software quality assurance in a small (but important) fraction of
systems
• Primarily safety-critical systems such as onboard flight control systems and nuclear plant control systems

Testing
Focus of the Course
• The vast majority (over 99%) of software quality assurance uses testing, inspection and metrics instead of formal
methods
• Example: at the Bank of Nova Scotia, over 80% of the total software development effort is involved in testing!

Testing
• Testing includes a wide range of methods based on the idea of running the software through a set of example inputs or
situations and validating the results
• Includes methods based on requirements (acceptance testing), specification and design (functionality and interface
testing), history (regression testing), code structure (path testing), and many more

Inspection
• Inspection includes methods based on a human review of the software artifacts
• Includes methods based on requirements reviews, design reviews, scheduling and planning reviews, code walkthroughs,
and so on
• Helps discover potential problems before they arise in practice

Metrics
• Software metrics includes methods based on using tools to count the use of features or structures in the code or other
software artifacts, and compare them to standards
• Includes methods based on code size (number of source lines), code complexity (number of parameters, decisions,
function points, modules or methods), structural complexity (number or depth of calls or transactions), design
complexity, and so on.
• Helps expose anomalous or undesirable properties that may reduce reliability and maintainability
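As a sketch of how simple such metric tools can be, the following hypothetical Java program counts non-blank source lines and decision keywords (a rough proxy for cyclomatic complexity) in a file named on the command line:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Pattern;

public class MiniMetrics {
    // decision keywords approximate the number of branch points in the code
    private static final Pattern DECISION = Pattern.compile("\\b(if|for|while|case|catch)\\b");

    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        long loc = lines.stream().filter(l -> !l.isBlank()).count();
        long decisions = lines.stream()
                              .flatMap(l -> DECISION.matcher(l).results())
                              .count();
        System.out.printf("non-blank LOC = %d, decision points = %d%n", loc, decisions);
    }
}

Real metric tools refine this idea (parsing rather than pattern matching), but the principle is the same: cheap, automated counts compared against agreed standards.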

Achieving Software Quality


Software Process
• Software quality is achieved by applying these techniques in the framework of a software process.
• There are many software processes proposed, of which extreme Programming is one of the more recent.

Lecture-13: Software Quality Assurance in SDLC


SQA in SDLC
• Requirements • Architectural design • Detailed design • Implementation • Testing

Requirements Phase
• Senior QA/Manager ensures that the user/client requirements are captured correctly
• Find out the risks in the requirements and decide how the system will be tested
• Ensure requirements are properly expressed as functional, performance and interface requirements
• Review that the requirements document and other deliverables meet the standard
• Prepare the formal test plan, including the test tools being used in the project

Architectural Design Phase


• Ensure that the architectural design meets the standards designated in the Project Plan
• Verify all captured requirements are allocated to software components
• Verify all the design documents are completed on time according to the project plan and kept in the project repository (ER diagram, process diagram, use cases, etc.)
• Prepare the design test report and submit it to the project manager

Detailed Design Phase


• Prepare the test objectives from the requirements and design documents
• Design a verification matrix or checklist and update it on a regular basis
• Send the test documents to the project manager for approval and keep them in the repository

Implementation Phase
• Verify the results of coding and design activities against the schedule in the project plan
• Check the status of all deliverable items and verify that all maintain the standard
• Stay updated with the tools and technologies used in the project and provide feedback to the team if a better solution is available
• Complete writing the checklists/test cases to start testing
• Verify whether the components are ready for testing

Testing Phase
• Start testing individual modules and start reporting bugs
• Verify that all tests are run according to test plans
• Verify all the bugs available in the bug tracking system are resolved
• Compile the test reports and verify that the report is complete and correct
• Certify that testing is complete according to the plan
• Start creating the documentation and verify that all documents are ready for delivery

Lecture-14: Principles of Software Testing


Seven Principles of Software Testing
• Testing shows presence of defects
• Exhaustive testing is impossible
• Early testing
• Defect clustering
• Pesticide paradox
• Testing is context dependent
• Absence of error – fallacy
Testing Shows Presence of Defects

• Testing shows the presence of defects in the software. The goal of testing is to make the software fail. Sufficient testing reduces the presence of defects, but even if testers are unable to find defects after repeated regression testing, that does not mean the software is bug-free.
• Testing demonstrates the presence of defects; it says nothing about their absence.

Exhaustive Testing is Impossible:


• What is Exhaustive Testing?
• Testing all the functionalities using all valid and invalid inputs and preconditions is known as Exhaustive
testing.
• Why it’s impossible to achieve Exhaustive Testing?
• Assume we have to test an input field which accepts ages between 18 and 20, so we test the field using 18, 19 and 20. If the same input field accepts the range 18 to 100, then we have to test inputs such as 18, 19, 20, 21, ..., 99, 100. This is a basic example; you may think you could achieve it using an automation tool, but imagine the same field accepting some billion values. It is impossible to test all possible values within release time constraints.
• If we keep testing all possible conditions, software execution time and costs will rise. So instead of doing exhaustive testing, risks and priorities are taken into consideration while testing and estimating testing effort (see the boundary-value sketch below).

Early Testing
• Defects detected in early phases of SDLC are less expensive to fix. So conducting early testing reduces the cost of fixing
defects.
• Consider two scenarios: in the first, you identify an incorrect requirement during the requirements gathering phase; in the second, you find a bug in fully developed functionality. It is cheaper to correct the incorrect requirement than to fix fully developed functionality that is not working as intended.

Defect Clustering
• Defect Clustering in software testing means that a small module or functionality contains most of the bugs or it has the
most operational failures.
• As per the Pareto Principle (80-20 rule), 80% of issues come from 20% of modules, and the remaining 20% of issues from the remaining 80% of modules. So we emphasize testing the 20% of modules where we face 80% of the bugs.

Pesticide Paradox
• The Pesticide Paradox in software testing arises from repeating the same test cases again and again; eventually, the same test cases will no longer find new bugs. To overcome this paradox, it is necessary to review the test cases regularly and add or update them to find more defects.

Testing is Context Dependent


• The testing approach depends on the context of the software we develop; we test software differently in different contexts. For example, an online banking application requires a different testing approach than an e-commerce site.

Absence of Error – Fallacy


• Software that is 99% bug-free may still be unusable if wrong requirements were incorporated into it and it does not address the business needs.
• The software we build must not only be (close to) bug-free but must also fulfill the business needs; otherwise it becomes unusable software.
• These are the seven principles of Software Testing every professional tester should know.

Lecture-15: Testing types, techniques and tactics

Installation testing
Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these
procedures to achieve an installed software system that may be used is known as installation testing.

Compatibility testing
 A common cause of software failure (real or perceived) is a lack of its compatibility with other application
software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the
original (such as a terminal or GUI application intended to be run on the desktop now being required to become a Web
application, which must render in a Web browser).
 For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test
software only on the latest version of the target environment, which not all users may be running. This results in the
unintended consequence that the latest work may not function on earlier versions of the target environment, or on older
hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by
proactively abstracting operating system functionality into a separate program module or library.

Smoke and sanity testing


 Sanity testing determines whether it is reasonable to proceed with further testing.
 Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic
problems that will prevent it from working at all. Such tests can be used as build verification test.
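A smoke test can be as small as the hypothetical sketch below: construct the system's entry point and make one trivial end-to-end check, rejecting the build if even that fails (the Calculator class is an invented stand-in for the real application):

public class SmokeTest {
    public static void main(String[] args) {
        Calculator app = new Calculator();   // does the system even start/construct?
        if (app.add(2, 2) != 4)              // one minimal operation, nothing more
            throw new AssertionError("basic operation broken: reject this build");
        System.out.println("smoke test passed; deeper testing is worthwhile");
    }

    // stand-in for the real application under test
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }
}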

Regression testing
 Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover
software regressions, as degraded or lost features, including old bugs that have come back. Such regressions occur whenever
software functionality that was previously working correctly, stops working as intended.
 Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the
software collides with the previously existing code. Regression testing is typically the largest test effort in commercial
software development, due to checking numerous details in prior software features, and even new software can be developed
while using some old test cases to test parts of the new design to ensure prior functionality is still supported.
 Common methods of regression testing include re-running previous sets of test cases and checking whether previously
fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added
features.
 They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting
of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. In regression testing, it is
important to have strong assertions on the existing behavior.
• For this, it is possible to generate and add new assertions to existing test cases; this is known as automatic test improvement.
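As a sketch (with invented names), a regression suite simply re-runs prior cases unchanged on every code change, including cases that pin down previously fixed faults:

import java.util.Map;

public class RegressionSuite {
    static int parsePercent(String s) {              // code under test
        int v = Integer.parseInt(s.replace("%", "").trim());
        if (v < 0 || v > 100) throw new IllegalArgumentException(s);
        return v;
    }

    public static void main(String[] args) {
        // old cases re-run unchanged; "100%" pins a hypothetical previously fixed bug
        Map<String, Integer> oldCases = Map.of("0%", 0, "42%", 42, "100%", 100);
        oldCases.forEach((input, expected) -> {
            if (parsePercent(input) != expected)
                throw new AssertionError("regression on input " + input);
        });
        System.out.println("no regressions detected");
    }
}

The strong, exact assertions on existing behaviour are the point: any change in output is surfaced immediately.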

Acceptance testing
 Acceptance testing can mean one of two things:
 A smoke test is used as a build acceptance test prior to further testing, e.g., before integration or regression.
 Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user
acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of
development.

Alpha testing
 Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the
developers' site.
 Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software
goes to beta testing.

Beta Testing
 Beta testing comes after alpha testing and can be considered a form of external user acceptance testing.

 Versions of the software, known as beta versions, are released to a limited audience outside of the programming team
known as beta testers.
 The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta
versions can be made available to the open public to increase the feedback field to a maximal number of future users and to
deliver value earlier, for an extended or even indefinite period of time (perpetual beta).
Functional vs non-functional testing
 Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the
code requirements documentation, although some development methodologies work from use cases or user stories.
Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
 Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such
as scalability or other performance, behavior under certain constraints, or security.
 Testing will determine the breaking point, the point at which extremes of scalability or performance leads to unstable
execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of
the suitability perspective of its users.

Continuous testing
 Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain
immediate feedback on the business risks associated with a software release candidate.
 Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of
testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with
overarching business goals.

Destructive testing
 Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly
even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-
management routines.
 Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional
testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools
available that perform destructive testing.
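A minimal fuzzing sketch, assuming the robustness goal is simply "the validator must reject, never crash on, malformed input" (all names are invented for illustration):

import java.util.Random;

public class FuzzSketch {
    static boolean validate(String input) {          // code under test: must never throw
        return input != null && input.matches("[0-9]{1,3}%");
    }

    public static void main(String[] args) {
        Random rnd = new Random(1234);               // fixed seed keeps failures reproducible
        for (int i = 0; i < 100_000; i++) {
            byte[] junk = new byte[rnd.nextInt(16)]; // random length, random content
            rnd.nextBytes(junk);
            validate(new String(junk));              // any uncaught exception is a finding
        }
        System.out.println("validator survived 100000 random inputs");
    }
}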

Software performance testing


 Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness
and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of
the system, such as scalability, reliability and resource usage.
 There is little agreement on what the specific goals of performance testing are. The terms load testing, performance
testing, scalability testing, and volume testing, are often used interchangeably.
 Real-time software systems have strict timing constraints. To test if timing constraints are met, real-time testing is used.

 Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether
that be large quantities of data or a large number of users. This is generally referred to as software scalability.
• The related activity of load testing, when performed over an extended period as a non-functional activity, is often referred to as endurance testing.
 Volume testing is a way to test software functions even when certain components (for example a file or database)
increase radically in size.
 Stress testing is a way to test reliability under unexpected or rare workloads.
 Stability testing (often referred to as load or endurance testing) checks to see if the software can continuously function
well in or above an acceptable period.
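A toy load-testing loop, purely illustrative: the "system under test" here is an invented stand-in function, and the measurement shows how response time grows as the simulated workload increases:

public class LoadSketch {
    static long handleRequest(int workload) {        // stand-in for the system under test
        long sum = 0;
        for (int i = 0; i < workload; i++) sum += i;
        return sum;                                  // returned so the loop is not optimised away
    }

    public static void main(String[] args) {
        for (int load = 1_000; load <= 1_000_000; load *= 10) {
            long start = System.nanoTime();
            long sink = 0;
            for (int r = 0; r < 1_000; r++) sink += handleRequest(load);
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("workload %,d -> %d ms per 1000 requests (checksum %d)%n",
                              load, ms, sink);
        }
    }
}

Real load tests drive the deployed system over the network with realistic traffic mixes; the shape of the experiment (increase load, measure response) is the same.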

Usability testing
 Usability testing is to check if the user interface is easy to use and understand.
 It is concerned mainly with the use of the application.
 This is not a kind of testing that can be automated; actual human users are needed, being monitored by skilled UI
designers.

Accessibility testing
 Accessibility means that everyone can use the software.
 Accessibility testing may include compliance with standards such as:
 Americans with Disabilities Act of 1990
 Section 508 Amendment to the Rehabilitation Act of 1973
 Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Security testing
 Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.
 The International Organization for Standardization (ISO) defines this as a type of testing conducted to evaluate the
degree to which a test item, and associated data and information, are protected so that unauthorized persons or systems
cannot use, read or modify them, and authorized persons or systems are not denied access to them.

Development testing
 Development Testing is a software development process that involves the synchronized application of a broad spectrum
of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by
the software developer or engineer during the construction phase of the software development lifecycle. Development Testing
aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality
of the resulting software as well as the efficiency of the overall development process.
 Depending on the organization's expectations for software development, Development Testing might include static code
analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other
software testing practices.

A/B testing
 A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the
current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment)
and data is collected to determine which version is better at achieving the desired outcome.
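A deliberately simplified sketch of the mechanics: deterministic 50/50 routing by user id and a simulated outcome per visit. Every name and probability here is invented:

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class AbTestSketch {
    public static void main(String[] args) {
        Map<String, int[]> stats = new HashMap<>(); // arm -> {visits, conversions}
        for (int user = 0; user < 10_000; user++) {
            String arm = (user % 2 == 0) ? "control" : "treatment"; // stable split by user id
            int[] s = stats.computeIfAbsent(arm, k -> new int[2]);
            s[0]++;                                  // count the visit
            if (simulateVisit(arm, user)) s[1]++;    // count the conversion
        }
        stats.forEach((arm, s) -> System.out.printf(
            "%s: %d/%d converted (%.3f)%n", arm, s[1], s[0], s[1] / (double) s[0]));
    }

    // stand-in for real user behaviour: treatment converts slightly more often
    static boolean simulateVisit(String arm, int user) {
        double p = arm.equals("treatment") ? 0.12 : 0.10;
        return new Random(user).nextDouble() < p;
    }
}

In practice the two conversion rates would be compared with a statistical significance test before declaring a winner.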

Concurrent testing
 Concurrent computing is a form of computing in which several computations are executed during overlapping time
periods
 Concurrent or concurrency testing assesses the behavior and performance of software and systems that use concurrent
computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race
conditions and problems with shared memory/resource handling.
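The race condition mentioned above is easy to demonstrate. In this sketch, two threads increment a shared counter 100,000 times each; the unprotected counter usually loses updates, while the atomic one never does:

import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int plain = 0;                                   // shared and unprotected
    static final AtomicInteger safe = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                                    // read-modify-write: not atomic
                safe.incrementAndGet();                     // atomic: no lost updates
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("plain=" + plain + " (often < 200000), safe=" + safe.get());
    }
}

Such failures are timing dependent, which is why concurrency testing typically repeats runs many times under varied scheduling.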

Conformance testing or type testing


 In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers,
for instance, are extensively tested to determine whether they meet the recognized standard for that language.

Lecture-16: Critical Systems Validation


Validating the reliability, safety and security of computer-based systems

Validation perspectives
• Reliability validation
• Does the measured reliability of the system meet its specification?
• Is the reliability of the system good enough to satisfy users?
• Safety validation
• Does the system always operate in such a way that accidents do not occur or that accident consequences are
minimised?
• Security validation
• Is the system and its data secure against external attack?

Validation techniques
• Static techniques
• Design reviews and program inspections
• Mathematical arguments and proof
• Dynamic techniques
• Statistical testing
• Scenario-based testing
• Run-time checking
• Process validation
• Design development processes that minimise the chances of process errors that might compromise the
dependability of the system
Static validation techniques
• Static validation is concerned with analyses of the system documentation (requirements, design, code, test data).
• It is concerned with finding errors in the system and identifying potential problems that may arise during system
execution.
• Documents may be prepared (structured arguments, mathematical proofs, etc.) to support the static validation

Static techniques for safety validation


• Demonstrating safety by testing is difficult because testing is intended to demonstrate what the system does in a
particular situation. Testing all possible operational situations is impossible
• Normal reviews for correctness may be supplemented by specific techniques that are intended to focus on checking that
unsafe situations never arise

Safety reviews
• Review for correct intended function
• Review for maintainable, understandable structure
• Review to verify algorithm and data structure design against specification
• Review to check code consistency with algorithm and data structure design
• Review adequacy of system testing

Review guidance
• Make software as simple as possible
• Use simple techniques for software development avoiding error-prone constructs such as pointers and recursion
• Use information hiding to localise the effect of any data corruption
• Make appropriate use of fault-tolerant techniques but do not be seduced into thinking that fault-tolerant software is
necessarily safe

Hazard-driven analysis
• Effective safety assurance relies on hazard identification (covered in previous lectures)
• Safety can be assured by
• Hazard avoidance
• Accident avoidance
• Protection systems

• Safety reviews should demonstrate that one or more of these techniques have been applied to all identified hazards

The system safety case


• It is now normal practice for a formal safety case to be required for all safety-critical computer-based systems e.g.
railway signalling, air traffic control, etc.

• A safety case presents a list of arguments, based on identified hazards, for why there is an acceptably low probability that these hazards will result in an accident
• Arguments can be based on formal proof, design rationale, safety proofs, etc. Process factors may also be included

Lecture-17
Formal methods and critical systems
• The development of critical systems is one of the ‘success’ stories for formal methods
• Formal methods are mandated in Britain for the development of some types of safety-critical software for defence
applications
• There is not currently general agreement on the value of formal methods in critical systems development
Formal methods and validation
• Specification validation
• Developing a formal model of a system requirements specification forces a detailed analysis of that specification
and this usually reveals errors and omissions
• Mathematical analysis of the formal specification is possible and this also discovers specification problems
• Formal verification
• Mathematical arguments (at varying degrees of rigour) are used to demonstrate that a program or a design is
consistent with its formal specification

Problems with formal validation


• The formal model of the specification is not understandable by domain experts
• It is difficult or impossible to check if the formal model is an accurate representation of the specification for
most systems
• A consistently wrong specification is not useful!
• Verification does not scale-up
• Verification is complex, error-prone and requires the use of systems such as theorem provers. The cost of
verification increases exponentially as the system size increases.

Formal methods conclusion


• Formal specification and checking of critical system components is, in my view, useful
• While formality does not provide any guarantees, it helps to increase confidence in the system by demonstrating
that some classes of error are not present
• Formal verification is only likely to be used for very small, critical, system components
• About 5-6000 lines of code seems to be the upper limit for practical verification

Safety proofs
• Safety proofs are intended to show that the system cannot reach an unsafe state
• Weaker than correctness proofs which must show that the system code conforms to its specification
• Generally based on proof by contradiction
• Assume that an unsafe state can be reached
• Show that this is contradicted by the program code
• May be displayed graphically

Construction of a safety proof


• Establish the safe exit conditions for a component or a program
• Starting from the END of the code, work backwards until you have identified all paths that lead to the exit of the code
• Assume that the exit condition is false
• Show that, for each path leading to the exit, the assignments made in that path contradict the assumption of an unsafe exit from the component

Gas warning system


• System to warn of poisonous gas. Consists of a sensor, a controller and an alarm
• Two levels of gas are hazardous
• Warning level - no immediate danger but take action to reduce level
• Evacuate level - immediate danger. Evacuate the area
• The controller takes air samples, computes the gas level and then decides whether or not the alarm should be activated

Gas sensor control


Gas_level : GL_TYPE;
loop
   -- Take 100 samples of air
   Gas_level := 0.000;
   for i in 1..100 loop
      Gas_level := Gas_level + Gas_sensor.Read;
   end loop;
   Gas_level := Gas_level / 100;
   if Gas_level > Warning and Gas_level < Danger then
      Alarm := Warning;
      Wait_for_reset;
   elsif Gas_level > Danger then
      Alarm := Evacuate;
      Wait_for_reset;
   else
      Alarm := off;
   end if;
end loop;

Graphical argument

Condition checking

The code is incorrect: when Gas_level = Danger, neither condition of the conditional is true, so the alarm is not activated.

Key points
• Safety-related systems should be developed to be as simple as possible using ‘safe’ development techniques
• Safety assurance may depend on ‘trusted’ development processes and specific development techniques such as the use of
formal methods and safety proofs
• Safety proofs are easier than proofs of consistency or correctness. They must demonstrate that the system cannot reach an
unsafe state. Usually proofs by contradiction

