SVV Notes Mid Lec1 17
Inquiry Cycle
Prototyping
“A software prototype is a partial implementation constructed primarily to enable customers, users, or developers to learn
more about a problem or its solution.” [Davis 1990]
“Prototyping is the process of building a working model of the system” [Agresti 1986]
Approaches to prototyping
Presentation Prototypes
used for proof of concept; explaining design features; etc.
explain, demonstrate and inform – then throw away
Exploratory Prototypes
used to determine problems, elicit needs, clarify goals, compare design options
informal, unstructured and thrown away.
Breadboards or Experimental Prototypes
explore technical feasibility; test suitability of a technology
Typically, no user/customer involvement
Evolutionary (e.g. “operational prototypes”, “pilot systems”):
development seen as continuous process of adapting the system
“prototype” is an early deliverable, to be continually improved
Evolutionary Prototyping
Purpose:
to learn more about the problem or its solution, and to reduce risk by building parts early
Use:
incremental; evolutionary
Approach:
vertical - partial implementation of all layers
designed to be extended/adapted
Advantages:
Requirements not frozen
Return to last increment if an error is found
Flexible(?)
Disadvantages:
Can end up with a complex, unstructured system which is hard to maintain
Early architectural choice may be poor
Optimal solutions not guaranteed
Lacks control and direction
Brooks: “Plan to throw one away - you will anyway!”
Throwaway Prototyping
Purpose:
to learn more about the problem or its solution… discard after the desired knowledge is gained
Use:
early or late
Approach:
horizontal - build only one layer (e.g. UI)
“quick and dirty”
Advantages:
Learning medium for better convergence
Early delivery → early testing → less cost
Successful even if it fails!
Disadvantages:
Wasted effort if requirements change rapidly
Often replaces proper documentation of the requirements
May set customers’ expectations too high
Can get developed into the final product
Reviews
“Management reviews”
E.g. preliminary design review (PDR), critical design review (CDR)
Used to provide confidence that the design is sound
Attended by management and sponsors (customers)
Often just a “dog-and-pony show”
Walkthroughs
“Walkthroughs”
developer technique (usually informal)
used by development teams to improve quality of product
focus is on finding defects
Inspections
“(Fagan) Inspections”
a process management tool (always formal)
used to improve quality of the development process
collect defect data to analyze the quality of the process
written output is important
major role in training junior staff and transferring expertise
Summary
Requirements models are theories about the world.
Designs are tests of those theories.
Prototyping is the process of building a working model of the system.
Lecture - 5,6: Product Quality Standards
Product Quality
The quality of the end product depends upon:
The “attributes” and characteristics of the software product
The degree that they fulfill specific project needs
To ensure that the product meets a defined quality standard:
Standards and practices for s/w product must be defined early in the development process
Standards must be specific to software product
Software Attributes
Reliability, Functionality, Correctness, Testability, Usability, Maintainability, Portability, Efficiency
ISO 9126 Standard Quality Model
The objective of this standard is to provide a framework for the evaluation of software quality.
ISO/IEC 9126 does not provide requirements for software, but it defines a quality model which is applicable to every
kind of software.
It defines six product quality characteristics and, in an annex, provides a suggestion of quality subcharacteristics.
Lecture-7: Quality Assurance vs. Quality Control
Quality Assurance (QA) is process oriented and focuses on defect prevention.
Quality Control (QC) is product oriented and focuses on defect identification.
How
QA: Establish a good quality management system and assess its adequacy; conduct periodic conformance audits of the operations of the system.
QC: Find and eliminate sources of quality problems through tools and equipment so that the customer's requirements are continually met.
What
QA: Prevention of quality problems through planned and systematic activities, including documentation.
QC: The activities or techniques used to achieve and maintain the quality of the product, process and service.
Responsibility
QA: Everyone on the team involved in developing the product is responsible for quality assurance.
QC: Quality control is usually the responsibility of a specific team that tests the product for defects.
Example
QA: Verification is an example of QA.
QC: Validation/software testing is an example of QC.
Statistical Techniques
Statistical tools and techniques can be applied in both QA and QC. When they are applied to processes (process inputs and operational parameters), they are called Statistical Process Control (SPC) and form part of QA. When they are applied to finished products (process outputs), they are called Statistical Quality Control (SQC) and come under QC.
As a tool
QA is a managerial tool; QC is a corrective tool.
Orientation
QA is process oriented; QC is product oriented.
Summary
Quality Assurance (QA) refers to the process used to create the deliverables, and can be performed by a manager, client,
or even a third-party reviewer. Examples of quality assurance include process checklists, project audits and methodology
and standards development.
Quality Control (QC) refers to quality related activities associated with the creation of project deliverables. Quality
control is used to verify that deliverables are of acceptable quality and that they are complete and correct. Examples of
quality control activities include inspection, deliverable peer reviews and the testing process.
Quality assurance activities are determined before production work begins and these activities are performed while the
product is being developed. In contrast, Quality control activities are performed after the product is developed.
Lecture-8
Verification vs validation
• Verification:
"Are we building the product right"
The software should conform to its specification
• Validation:
"Are we building the right product"
The software should do what the user really requires
[Figure: static verification is applied to the specification, design and program documents; dynamic validation (prototyping and testing) is applied to an executable version of the system.]
Program testing
• Can reveal the presence of errors NOT their absence
• A successful test is a test which discovers one or more errors
• The only validation technique for non-functional requirements
• Should be used in conjunction with static verification to provide full V&V coverage
Types of testing
• Defect testing
• Tests designed to discover system defects.
• A successful defect test is one which reveals the presence of defects in a system.
• Statistical testing
• Tests designed to reflect the frequency of user inputs; used for reliability estimation. (A sketch contrasting the two styles of test follows.)
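To make the contrast concrete, here is a minimal sketch in Python; the function under test, days_in_month, is hypothetical. The defect tests deliberately probe boundaries and invalid inputs, while the statistical test draws inputs from an assumed (here, uniform) operational profile.

    import random
    import unittest

    def days_in_month(month, leap_year=False):
        # Hypothetical unit under test.
        if month == 2:
            return 29 if leap_year else 28
        if month in (4, 6, 9, 11):
            return 30
        if 1 <= month <= 12:
            return 31
        raise ValueError("month must be in 1..12")

    class DefectTests(unittest.TestCase):
        # Defect testing: deliberately probe boundaries and error-prone inputs.
        def test_boundaries(self):
            self.assertEqual(days_in_month(1), 31)
            self.assertEqual(days_in_month(12), 31)
            self.assertEqual(days_in_month(2, leap_year=True), 29)

        def test_invalid_input(self):
            with self.assertRaises(ValueError):
                days_in_month(13)

    class StatisticalTests(unittest.TestCase):
        # Statistical testing: inputs drawn from an assumed operational profile.
        def test_operational_profile(self):
            random.seed(0)
            for _ in range(1000):
                month = random.randint(1, 12)   # assumed uniform usage profile
                self.assertIn(days_in_month(month), (28, 30, 31))

    if __name__ == "__main__":
        unittest.main()

In the sense of the definitions above, a defect test that fails has succeeded: it has revealed an error.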
Lecture-9
V & V planning
• Careful planning is required to get the most out of testing and inspection processes
• Planning should start early in the development process
• The plan should identify the balance between static verification and testing
• Test planning is about defining standards for the testing process rather than describing product tests
The structure of a software test plan
• The testing processes
• Requirements traceability
• Tested items
• Testing schedule
• Test recording procedures
• Hardware and software requirements
• Constraints
Software inspections
• Involve people examining the source representation with the aim of discovering anomalies and defects
• Do not require execution of a system so may be used before implementation
• May be applied to any representation of the system (requirements, design, test data, etc.)
• Very effective technique for discovering errors
Inspection success
• Many different defects may be discovered in a single inspection. In testing, one defect may mask another, so several executions are required
• Inspections reuse domain and programming knowledge, so reviewers are likely to have seen the types of error that commonly arise
Lecture-10
Program inspections
• Formalised approach to document reviews
• Intended explicitly for defect DETECTION (not correction)
• Defects may be logical errors, anomalies in the code that might indicate an erroneous condition (e.g. an uninitialized
variable) or non-compliance with standards
Inspection pre-conditions
• A precise specification must be available
• Team members must be familiar with the organisation standards
• Syntactically correct code must be available
• An error checklist should be prepared
• Management must accept that inspection will increase costs early in the software process
• Management must not use inspections for staff appraisal
[Inspection process: Planning → Overview → Individual preparation → Inspection meeting → Rework → Follow-up]
Inspection procedure
• System overview presented to inspection team
• Code and associated documents are distributed to inspection team in advance
• Inspection takes place and discovered errors are noted
• Modifications are made to repair discovered errors
• Re-inspection may or may not be required
Inspection teams
• Author: The person who created the work product being inspected.
• Moderator: This is the leader of the inspection. The moderator plans the inspection and coordinates it.
• Reader: The person reading through the documents, one item at a time. The other inspectors then point out defects.
• Recorder/Scribe: The person that documents the defects that are found during the inspection.
• Inspector: The person that examines the work product to identify possible defects.
Lecture-11
Inspection checklists
• Checklist of common errors should be used to drive the inspection
• Error checklist is programming language dependent
• The 'weaker' the type checking, the larger the checklist
• Examples: Initialisation, Constant naming, loop termination, array bounds, etc.
Inspection checks
Fault class: Data faults
• Are all program variables initialised before their values are used?
• Have all constants been named?
• Should the lower bound of arrays be 0, 1, or something else?
• Should the upper bound of arrays be equal to the size of the array or Size - 1?
• If character strings are used, is a delimiter explicitly assigned?
Fault class: Control faults
• For each conditional statement, is the condition correct?
• Is each loop certain to terminate?
• Are compound statements correctly bracketed?
• In case statements, are all possible cases accounted for?
Fault class: Input/output faults
• Are all input variables used?
• Are all output variables assigned a value before they are output?
Fault class: Interface faults
• Do all function and procedure calls have the correct number of parameters?
• Do formal and actual parameter types match?
• Are the parameters in the right order?
• If components access shared memory, do they have the same model of the shared memory structure?
Fault class: Storage management faults
• If a linked structure is modified, have all links been correctly reassigned?
• If dynamic storage is used, has space been allocated correctly?
• Is space explicitly de-allocated after it is no longer required?
Fault class: Exception management faults
• Have all possible error conditions been taken into account?
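Checklist items such as the first data-fault question can sometimes be partially automated. The sketch below is toy code, not a real static analyser (it ignores branch order, nested scopes, imports and globals); it uses Python's ast module to flag names that are read inside a function but never assigned there.

    import ast
    import builtins

    def check_uninitialised(source):
        # Flag names that are read inside a function but never assigned in it.
        # Toy check only: ignores branch order, nested scopes, imports, globals.
        problems = []
        for func in [n for n in ast.walk(ast.parse(source))
                     if isinstance(n, ast.FunctionDef)]:
            assigned = {a.arg for a in func.args.args}  # parameters count as initialised
            for node in ast.walk(func):
                if isinstance(node, ast.Assign):
                    for target in node.targets:
                        if isinstance(target, ast.Name):
                            assigned.add(target.id)
                elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
                    if node.id not in assigned and not hasattr(builtins, node.id):
                        problems.append((func.name, node.id, node.lineno))
        return problems

    print(check_uninitialised("def f(x):\n    y = x + z\n    return y"))
    # -> [('f', 'z', 2)] under CPython: z is read but never assigned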
Lecture-12
Software Quality
• The degree to which a system, component, or process meets specified requirements.
OR
• The degree to which a system, component or process meets customer or user needs or expectations.
Quality Assurance
• Product and software quality do not happen by accident, and quality is not something that can be added on after the fact.
• To achieve quality, we must plan for it from the beginning, and continuously monitor it day to day
• This requires discipline
• Methods and disciplines for achieving quality results are the study of Quality Assurance or QA
• Three General Principles of QA
• Know what you are doing
• Know what you should be doing
• Know how to measure the difference
• In the context of software quality, this means having explicit measures comparing what we are doing to what we should
be doing.
Formal Methods
• Formal methods include formal verification (proofs of correctness), abstract interpretation (simulated execution in a different semantic domain, e.g. tracking the kind of a value rather than the value itself; a toy sketch follows below), state modelling (simulated execution using a mathematical model to keep track of state transitions), and other mathematical methods
• Traditionally, use of formal methods requires mathematically sophisticated programmers and is necessarily a slow, careful, and very expensive process
• In the past, formal methods have been used directly in software quality assurance in a small (but important) fraction of
systems
• Primarily safety-critical systems such as onboard flight control systems and nuclear systems
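As a toy illustration of abstract interpretation, the sketch below "executes" an arithmetic expression over the sign domain {-, 0, +, ?} instead of concrete numbers, so a property such as "the result is positive" can be established without running the program on any particular input. The domain and rules are the standard textbook sign abstraction, chosen purely for illustration.

    # Abstract values: negative, zero, positive, unknown.
    NEG, ZERO, POS, TOP = "-", "0", "+", "?"

    def abs_add(a, b):
        # 0 is the identity; equal signs are preserved; mixed signs are unknown.
        if ZERO in (a, b):
            return b if a == ZERO else a
        return a if a == b else TOP

    def abs_mul(a, b):
        # Anything times zero is zero; unknown is contagious; signs multiply.
        if ZERO in (a, b):
            return ZERO
        if TOP in (a, b):
            return TOP
        return POS if a == b else NEG

    # If x > 0 and y > 0, then x*y + x is provably positive -
    # established without executing the program on any concrete input.
    x, y = POS, POS
    print(abs_add(abs_mul(x, y), x))   # prints '+'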
Testing
Focus of the Course
• The vast majority (over 99%) of software quality assurance uses testing, inspection and metrics instead of formal
methods
• Example: at the Bank of Nova Scotia, over 80% of the total software development effort is involved in testing!
Testing
• Testing includes a wide range of methods based on the idea of running the software through a set of example inputs or
situations and validating the results
• Includes methods based on requirements (acceptance testing), specification and design (functionality and interface
testing), history (regression testing), code structure (path testing), and many more
Inspection
• Inspection includes methods based on a human review of the software artifacts
• Includes methods based on requirements reviews, design reviews, scheduling and planning reviews, code walkthroughs,
and so on
• Helps discover potential problems before they arise in practice
Metrics
• Software metrics include methods based on using tools to count the use of features or structures in the code or other software artifacts, and to compare the counts to standards
• Includes methods based on code size (number of source lines), code complexity (number of parameters, decisions,
function points, modules or methods), structural complexity (number or depth of calls or transactions), design
complexity, and so on.
• Helps expose anomalous or undesirable properties that may reduce reliability and maintainability (a minimal counting sketch follows)
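A minimal counting sketch, assuming Python source as the artifact being measured; the complexity threshold is an illustrative assumption, not a standard.

    import ast

    def sloc(source):
        # Source lines of code: non-blank lines that are not pure comments.
        return sum(1 for line in source.splitlines()
                   if line.strip() and not line.strip().startswith("#"))

    def decision_points(source):
        # Rough cyclomatic-complexity proxy: 1 + number of branching constructs.
        branching = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
        return 1 + sum(isinstance(n, branching) for n in ast.walk(ast.parse(source)))

    with open(__file__) as f:          # measure this very script
        code = f.read()
    print("SLOC:", sloc(code), "| complexity proxy:", decision_points(code))
    if decision_points(code) > 10:     # illustrative threshold, not a standard
        print("warning: may be too complex to maintain")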
Requirements Phase
• Senior QA/Manager ensures that the user/client requirements are captured correctly
• Find out the risks in the requirements and decide how the system will be tested.
• Ensure requirements are properly expressed as functional, performance and interface requirements.
• Review the requirements document and other deliverables to confirm they meet the standard.
• Prepare the formal test plan, including the test tools to be used in the project.
Implementation Phase
• Verify the results of coding and design activities, including the schedule available in the project plan.
• Check the status of all deliverable items and verify that all are maintaining the standard.
• Keep up to date with the tools and technologies used in the projects, and provide feedback to the team if a better solution is available.
• Complete writing the checklists/test cases needed to start testing.
• Verify whether the components are ready for testing.
Testing Phase
• Start testing individual modules and start reporting bugs
• Verify that all tests are run according to test plans
• Verify all the bugs available in the bug tracking system are resolved.
• Compile the test reports and verify that the report is complete and correct
• Certify that testing is complete according to the plan
• Start creating the documentation and verify that all documents are ready for delivery
• Testing shows the presence of defects in the software. The goal of testing is to make the software fail. Sufficient testing reduces the presence of defects; however, if testers are unable to find defects even after repeated regression testing, that does not mean the software is bug-free.
• Testing demonstrates the presence of defects; it says nothing about their absence.
Early Testing
• Defects detected in early phases of SDLC are less expensive to fix. So conducting early testing reduces the cost of fixing
defects.
• Assume two scenarios: in the first you have identified an incorrect requirement during the requirements gathering phase; in the second you have identified a bug in fully developed functionality. It is cheaper to change the incorrect requirement than to fix the fully developed functionality that is not working as intended.
Defect Clustering
• Defect Clustering in software testing means that a small module or functionality contains most of the bugs or it has the
most operational failures.
• As per the Pareto Principle (80-20 Rule), 80% of issues come from 20% of the modules, and the remaining 20% of issues from the remaining 80% of modules. So we emphasize testing on the 20% of modules where we face 80% of the bugs.
Pesticide Paradox
• The Pesticide Paradox in software testing refers to repeating the same test cases again and again: eventually, the same test cases will no longer find new bugs. To overcome the Pesticide Paradox, it is necessary to review the test cases regularly and add or update them to find more defects.
Installation testing
Most software systems have installation procedures that must be run before they can be used for their main purpose. Testing these procedures to achieve an installed software system that can be used is known as installation testing.
Compatibility testing
A common cause of software failure (real or perceived) is a lack of its compatibility with other application
software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the
original (such as a terminal or GUI application intended to be run on the desktop now being required to become a Web
application, which must render in a Web browser).
For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test
software only on the latest version of the target environment, which not all users may be running. This results in the
unintended consequence that the latest work may not function on earlier versions of the target environment, or on older
hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by
proactively abstracting operating system functionality into a separate program module or library.
Regression testing
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended.
Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the
software collides with the previously existing code. Regression testing is typically the largest test effort in commercial
software development, due to checking numerous details in prior software features, and even new software can be developed
while using some old test cases to test parts of the new design to ensure prior functionality is still supported.
Common methods of regression testing include re-running previous sets of test cases and checking whether previously
fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added
features.
They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting
of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. In regression testing, it is
important to have strong assertions on the existing behavior.
For this, it is possible to generate and add new assertions to existing test cases; this is known as automatic test improvement.
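A minimal regression-testing sketch in Python (the function and bug number are hypothetical): each previously fixed fault gets a permanent test case, so re-running the suite checks both that prior functionality is still supported and that the old bug has not re-emerged.

    import unittest

    def parse_price(text):
        # Bug #123 (hypothetical, fixed): inputs with a currency symbol used to crash.
        return float(text.strip().lstrip("$"))

    class RegressionSuite(unittest.TestCase):
        def test_existing_behaviour(self):
            # Prior functionality must still be supported after changes.
            self.assertEqual(parse_price("19.99"), 19.99)

        def test_bug_123_stays_fixed(self):
            # Re-run the exact input from the old defect report.
            self.assertEqual(parse_price("$19.99"), 19.99)

    if __name__ == "__main__":
        unittest.main()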
Acceptance testing
Acceptance testing can mean one of two things:
A smoke test is used as a build acceptance test prior to further testing, e.g., before integration or regression.
Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user
acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of
development.
Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the
developers' site.
Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software
goes to beta testing.
Beta Testing
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing.
Versions of the software, known as beta versions, are released to a limited audience outside of the programming team
known as beta testers.
The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta
versions can be made available to the open public to increase the feedback field to a maximal number of future users and to
deliver value earlier, for an extended or even indefinite period of time (perpetual beta).
Functional vs non-functional testing
Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the
code requirements documentation, although some development methodologies work from use cases or user stories.
Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such
as scalability or other performance, behavior under certain constraints, or security.
Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
Continuous testing
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain
immediate feedback on the business risks associated with a software release candidate.
Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of
testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with
overarching business goals.
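A sketch of the simplest possible continuous-testing gate, assuming a pytest-based suite living under tests/ (both are assumptions): the script runs the automated tests on each commit and propagates any failure as a non-zero exit code, so the delivery pipeline rejects the release candidate.

    import subprocess
    import sys

    # Run the automated suite; any failure rejects the release candidate.
    result = subprocess.run(["python", "-m", "pytest", "-q", "tests/"])
    if result.returncode != 0:
        print("Tests failed: release candidate rejected.")
        sys.exit(result.returncode)
    print("All tests passed: candidate may be promoted.")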
Destructive testing
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines.
Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional
testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools
available that perform destructive testing.
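A minimal fuzzing sketch (one form of fault injection); the record parser is hypothetical and deliberately buggy. Random byte strings are thrown at it, clean rejections (ValueError) are tolerated, and any other exception is reported as a robustness defect.

    import random

    def parse_record(data):
        # Hypothetical parser with a robustness defect: no length check,
        # so input without a comma raises IndexError instead of ValueError.
        fields = data.split(b",")
        return fields[0], int(fields[1])

    random.seed(1)
    for _ in range(10_000):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(20)))
        try:
            parse_record(blob)
        except ValueError:
            pass                      # rejected cleanly: acceptable behaviour
        except Exception as exc:      # anything else is a robustness defect
            print(f"input {blob!r} caused {type(exc).__name__}")
            break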
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether
that be large quantities of data or a large number of users. This is generally referred to as software scalability.
The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing.
Volume testing is a way to test software functions even when certain components (for example a file or database)
increase radically in size.
Stress testing is a way to test reliability under unexpected or rare workloads.
Stability testing (often referred to as load or endurance testing) checks whether the software can continue to function well over an acceptable period.
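A minimal load-testing sketch: many concurrent workers exercise an operation, and the test checks that no requests are dropped and throughput stays reasonable. The worker function, worker count, and timing are all illustrative assumptions.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(n):
        time.sleep(0.001)             # stand-in for real work, e.g. an HTTP call
        return n * n

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(handle_request, range(5000)))
    elapsed = time.perf_counter() - start

    print(f"5000 requests in {elapsed:.2f}s ({5000 / elapsed:.0f} req/s)")
    assert len(results) == 5000, "requests were dropped under load"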
Usability testing
Usability testing checks whether the user interface is easy to use and understand.
It is concerned mainly with the use of the application.
This is not a kind of testing that can be automated; actual human users are needed, being monitored by skilled UI
designers.
Accessibility testing
Accessibility means that everyone can use the software.
Accessibility testing may include compliance with standards such as:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Security testing
Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.
The International Organization for Standardization (ISO) defines this as a type of testing conducted to evaluate the
degree to which a test item, and associated data and information, are protected so that unauthorized persons or systems
cannot use, read or modify them, and authorized persons or systems are not denied access to them.
Development testing
Development Testing is a software development process that involves the synchronized application of a broad spectrum
of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by
the software developer or engineer during the construction phase of the software development lifecycle. Development Testing
aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality
of the resulting software as well as the efficiency of the overall development process.
Depending on the organization's expectations for software development, Development Testing might include static code
analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other
software testing practices.
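A sketch combining two of the practices listed above, a cheap static check followed by unit tests, run before code is promoted; the discount function and its tests are hypothetical.

    import py_compile
    import unittest

    def discount(price, percent):
        if not 0 <= percent <= 100:
            raise ValueError("percent out of range")
        return price * (1 - percent / 100)

    class DiscountTests(unittest.TestCase):
        def test_typical(self):
            self.assertAlmostEqual(discount(200.0, 25), 150.0)

        def test_bounds(self):
            self.assertEqual(discount(99.0, 0), 99.0)
            with self.assertRaises(ValueError):
                discount(99.0, 101)

    if __name__ == "__main__":
        py_compile.compile(__file__, doraise=True)   # cheap static check first
        unittest.main()                              # then the unit tests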
A/B testing
A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the
current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment)
and data is collected to determine which version is better at achieving the desired outcome.
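A minimal A/B-testing sketch: users are routed deterministically (the same user always sees the same version) and the collected conversion counts are compared with a two-proportion z-test. All numbers are made up for illustration.

    import hashlib
    import math

    def assign(user_id):
        # Stable 50/50 split: the same user always gets the same version.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
        return "control" if bucket == 0 else "treatment"

    def z_score(conv_a, n_a, conv_b, n_b):
        # Two-proportion z-test on conversion rates.
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se

    print(assign("user-42"))
    print(round(z_score(120, 1000, 150, 1000), 2))   # |z| > 1.96 ~ significant at 5%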
Concurrent testing
Concurrent computing is a form of computing in which several computations are executed during overlapping time
periods
Concurrent or concurrency testing assesses the behavior and performance of software and systems that use concurrent
computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race
conditions and problems with shared memory/resource handling.
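A classic concurrency-testing sketch: four threads increment a shared counter. Without the lock, the read-modify-write is not atomic and updates may be lost (whether they actually are depends on the interpreter's scheduling), which is exactly the kind of race condition this type of testing tries to expose.

    import threading

    counter = 0
    lock = threading.Lock()

    def worker(use_lock):
        global counter
        for _ in range(100_000):
            if use_lock:
                with lock:
                    counter += 1
            else:
                counter += 1      # not atomic: read, add, write can interleave

    for use_lock in (False, True):
        counter = 0
        threads = [threading.Thread(target=worker, args=(use_lock,)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # Without the lock the total is often (not always) below 400000.
        print("with lock" if use_lock else "no lock", "->", counter)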
Validation perspectives
• Reliability validation
• Does the measured reliability of the system meet its specification?
• Is the reliability of the system good enough to satisfy users?
• Safety validation
• Does the system always operate in such a way that accidents do not occur or that accident consequences are
minimised?
• Security validation
• Is the system and its data secure against external attack?
Validation techniques
• Static techniques
• Design reviews and program inspections
• Mathematical arguments and proof
• Dynamic techniques
• Statistical testing
• Scenario-based testing
• Run-time checking
• Process validation
• Design development processes that minimise the chances of process errors that might compromise the
dependability of the system
Static validation techniques
• Static validation is concerned with analyses of the system documentation (requirements, design, code, test data).
• It is concerned with finding errors in the system and identifying potential problems that may arise during system
execution.
• Documents may be prepared (structured arguments, mathematical proofs, etc.) to support the static validation
Safety reviews
• Review for correct intended function
• Review for maintainable, understandable structure
• Review to verify algorithm and data structure design against specification
• Review to check code consistency with algorithm and data structure design
• Review adequacy of system testing
Review guidance
• Make software as simple as possible
• Use simple techniques for software development avoiding error-prone constructs such as pointers and recursion
• Use information hiding to localise the effect of any data corruption
• Make appropriate use of fault-tolerant techniques but do not be seduced into thinking that fault-tolerant software is
necessarily safe
Hazard-driven analysis
• Effective safety assurance relies on hazard identification (covered in previous lectures)
• Safety can be assured by
• Hazard avoidance
• Accident avoidance
• Protection systems
• Safety reviews should demonstrate that one or more of these techniques have been applied to all identified hazards
• A safety case presents a list of arguments, based on identified hazards, for why there is an acceptably low probability that these hazards will result in an accident
• Arguments can be based on formal proof, design rationale, safety proofs, etc. Process factors may also be included
Lecture-17
Formal methods and critical systems
• The development of critical systems is one of the ‘success’ stories for formal methods
• Formal methods are mandated in Britain for the development of some types of safety-critical software for defence
applications
• There is not currently general agreement on the value of formal methods in critical systems development
Formal methods and validation
• Specification validation
• Developing a formal model of a system requirements specification forces a detailed analysis of that specification
and this usually reveals errors and omissions
• Mathematical analysis of the formal specification is possible and this also discovers specification problems
• Formal verification
• Mathematical arguments (at varying degrees of rigour) are used to demonstrate that a program or a design is
consistent with its formal specification
Safety proofs
• Safety proofs are intended to show that the system cannot reach an unsafe state
• Weaker than correctness proofs which must show that the system code conforms to its specification
• Generally based on proof by contradiction
• Assume that an unsafe state can be reached
• Show that this is contradicted by the program code
• May be displayed graphically
Graphical argument / condition checking
[Worked example from the lecture: the code is incorrect, because Gas_level = Danger does not cause the alarm to be switched on.]
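The same safety condition can also be checked at run time. The sketch below is hypothetical code whose state names follow the lecture's gas-alarm example; it asserts the negation of the unsafe state, in the spirit of proof by contradiction: if Gas_level = Danger can ever hold while the alarm is off, the check fails loudly.

    DANGER = "Danger"

    class GasWarningSystem:
        def __init__(self):
            self.gas_level = "Safe"
            self.alarm_on = False

        def sense(self, level):
            self.gas_level = level
            if level == DANGER:
                self.alarm_on = True   # omitting this line is exactly the defect
                                       # the condition check would expose
            self.check_safety()

        def check_safety(self):
            # Assume the unsafe state and fail loudly if the code permits it.
            assert not (self.gas_level == DANGER and not self.alarm_on), \
                "unsafe state reachable: danger level without alarm"

    system = GasWarningSystem()
    system.sense(DANGER)
    print("alarm on:", system.alarm_on)   # True: the safety condition holds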
Key points
• Safety-related systems should be developed to be as simple as possible using ‘safe’ development techniques
• Safety assurance may depend on ‘trusted’ development processes and specific development techniques such as the use of
formal methods and safety proofs
• Safety proofs are easier than proofs of consistency or correctness. They must demonstrate that the system cannot reach an
unsafe state. Usually proofs by contradiction