STQA Module 1
SOFTWARE TESTING AND QUALITY ASSURANCE
MODULE 1
TESTING METHODOLOGIES
Introduction
• Software testing is a procedure to verify whether the actual results are the same as the expected results and to ensure that the software system is defect free.
• Software testing also helps to identify errors, gaps, or missing requirements when compared against the actual requirements. It can be done either manually or using automated tools.
• Software testing is broadly categorized into two types: functional testing and non-functional testing.
GOALS OF SOFTWARE TESTING
The Goals of software testing are classified into three main categories
► Short-term or immediate goals These goals are the immediate results of performing testing. They may be set for the individual phases of the SDLC. Some of them are discussed below.
► Bug discovery The immediate goal of testing is to find errors at any stage of software development. The more bugs discovered at an early stage, the better the success rate of software testing.
► Bug prevention This is the consequent action of bug discovery. From the behavior and interpretation of the bugs discovered, everyone in the software development team learns how to code safely so that the discovered bugs are not repeated in later stages or future projects. Though errors cannot be reduced to zero, they can be minimized. In this sense, bug prevention is a superior goal of testing.
► Long-term goals These goals affect the product quality in the long run, when one cycle of the
SDLC is over. Some of them are discussed here.
► Quality Since software is also a product, its quality is of primary importance from the users' point of view. Thorough testing ensures superior quality. Therefore, the first goal of understanding and performing the testing process is to enhance the quality of the software product. Though quality depends on various factors, such as correctness, integrity, efficiency, etc., reliability is the major factor in achieving quality. The software should be passed through a rigorous reliability analysis to attain high quality standards. Reliability is a matter of confidence that the software will not fail, and this level of confidence increases with rigorous testing. The confidence in reliability, in turn, increases the quality, as shown in Fig. 1.3.
► Rigorous testing is a kind of complete testing in which we follow strict entry and exit criteria and deal with all possible combinations of test cases and test data, so that every possible flaw can be found and removed before the system goes live. It can also be termed exhaustive testing.
► Customer satisfaction From the users' perspective, the prime concern of testing is customer satisfaction. If we want the customer to be satisfied with the software product, then testing should be complete and thorough. Testing should be complete in the sense that it must satisfy the user for all the specified requirements mentioned in the user manual, as well as for the unspecified requirements which are otherwise understood. A complete testing process achieves reliability, reliability enhances the quality, and quality, in turn, increases customer satisfaction, as shown in Fig. 1.4.
► Risk management Risk is the probability that undesirable events will occur in a system. These
undesirable events will prevent the organization from successfully implementing its business
initiatives. Thus, risk is basically concerned with the business perspective of an organization.
► Risks must be controlled to manage them with ease. Software testing may act as a control, which
can help in eliminating or minimizing risks (see Fig. 1.5).
► Post-implementation goals These goals are important after the product is released. Some of them are
discussed here.
► Reduced maintenance cost The maintenance cost of any software product is not its physical cost, as software does not wear out. The only maintenance cost of a software product comes from its failures due to errors. Post-release errors are costlier to fix, as they are difficult to detect. Thus, if testing has been done rigorously and effectively, then the chances of failure are minimized and, in turn, the maintenance cost is reduced.
► Improved software testing process A testing process for one project may not be fully successful, and there may be scope for improvement. Therefore, the bug history and post-implementation results can be analyzed to find snags in the present testing process, which can be rectified in future projects. Thus, the long-term post-implementation goal is to improve the testing process for future projects.
Software Maintenance
► Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to modify and update a software application after delivery, to correct errors and to improve performance. Software is a model of the real world; when the real world changes, the software requires alteration wherever possible.
► Software Maintenance is an inclusive activity that includes error corrections, enhancement of
capabilities, deletion of obsolete capabilities, and optimization.
Model for Software Testing
► Testing is not an intuitive activity; rather, it should be learnt as a process. Therefore, testing should be performed in a planned way. For the planned execution of a testing process, we need to consider every element and every aspect related to software testing. Thus, in the testing model, we consider the related elements and the team members involved (see Fig. 1.6):
► Software and software model
► Bug Model
► Testing methodologies and Testing
Testing Methodology and Testing
► Based on the inputs from the software model and the bug model, testers can develop a testing methodology that
incorporates both testing strategy and testing tactics.
► Testing strategy is the roadmap that gives us well-defined steps for the overall testing process. It prepares the
planned steps based on the risk factors and the testing phase.
► Once the planned steps of the testing process are prepared, software testing techniques and testing tools can be
applied within these steps.
► Thus, testing is performed following this methodology. However, if we do not get the required results, the testing plans must be checked and modified accordingly.
EXHAUSTIVE SOFTWARE TESTING
► Exhaustive testing, which is also known as complete testing, is a quality assurance testing technique in which all possible scenarios and data combinations are tested. Put more simply, exhaustive testing means ensuring there are no undiscovered faults at the end of the test phase. Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. As testers, we often say, "I just never have enough time for testing." Even if you had all the time in the world, you still would not have enough time to test all the possible input and output combinations.
► Exhaustive testing is a test approach in which all possible data combinations are used for testing. This includes the implicit data combinations present in the state of the software/data at the start of testing.
► E.g.: Consider an application with a password field that accepts 3 characters. For lowercase alphabets alone there are 26 × 26 × 26 = 17,576 input combinations. If each character may instead take any of 256 values (including special and standard characters), there are 256 × 256 × 256 = 16,777,216 input combinations.
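► The scale of this example can be checked with a few lines of code. The sketch below is not from the slides; it assumes the 3-character field and the 26-letter and 256-symbol alphabets mentioned above and simply counts the inputs an exhaustive test would have to cover.

```python
# Illustrative sketch: how many inputs an exhaustive test of a 3-character
# password field would have to cover (alphabets taken from the example above).
from itertools import product
import string

ALPHABET_ONLY = string.ascii_lowercase           # 26 symbols
FULL_BYTE_SET = [chr(i) for i in range(256)]     # 256 symbols

def count_exhaustive_inputs(symbols, length=3):
    """Number of distinct inputs for a field of the given length."""
    return len(symbols) ** length

print(count_exhaustive_inputs(ALPHABET_ONLY))    # 26**3  = 17576
print(count_exhaustive_inputs(FULL_BYTE_SET))    # 256**3 = 16777216

# Actually enumerating every input already yields millions of cases for this
# toy field; realistic inputs make exhaustive testing infeasible.
all_alpha_inputs = product(ALPHABET_ONLY, repeat=3)  # lazy iterator over all 17,576 tuples
```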
Why is Exhaustive Testing impractical and impossible?
It is not possible to perform complete or exhaustive testing. For most systems it is nearly impossible, because of the following reasons:
► The domain of possible inputs of a program is too large to be completely used in testing a system. There are
both valid inputs and invalid inputs.
► The program may have a large number of states. There may be timing constraints on the inputs, that is, an
input may be valid at a certain time and invalid at other times. An input value which is valid but is not
properly timed is called an inopportune input.
► The input domain of a system can be too large to be completely used in testing a program.
► The design issues may be too complex to completely test. The design may have included implicit design
decisions and assumptions.
► It may not be possible to create all possible execution environments of the system. This becomes more
significant when the behavior of the software system depends on the real, outside world, such as
weather, temperature, altitude, pressure, and so on.
Effective testing
► Test effectiveness can be defined as how effectively testing is done, that is, how well it achieves the goal of meeting the customer's requirements. Measuring test effectiveness starts right at the beginning of test case development and execution, and continues after development is completed, when the number of defects found is counted.
How do you write an effective test case?
To perform effective testing, we can use the equivalence class method, boundary value analysis (BVA), etc. to minimize the problems faced in exhaustive testing. Suppose a password field accepts 3 characters: there are around 256 x 256 x 256 input combinations which we would have to test during exhaustive testing, whereas effective testing covers only a handful of representative and boundary values.
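► As a hedged illustration of how these techniques shrink the input space, the sketch below applies equivalence partitioning and boundary value analysis to the 3-character password field from the example. The length-based partitions and the toy validator are assumptions made for the sketch, not part of the slides.

```python
# Sketch: equivalence partitioning and boundary value analysis (BVA) for a
# password field assumed to require exactly 3 characters.

def is_valid_password(candidate: str) -> bool:
    """Toy validator used only to demonstrate the technique."""
    return len(candidate) == 3

# Equivalence classes on input length: too short (<3), valid (==3), too long (>3).
equivalence_representatives = ["ab", "abc", "abcd"]

# Boundary values around the valid length of 3: lengths 0, 2, 3 and 4.
boundary_values = ["", "ab", "abc", "abcd"]

# A handful of representative cases instead of 256**3 exhaustive combinations.
for value in sorted(set(equivalence_representatives + boundary_values)):
    print(f"{value!r:8} -> valid: {is_valid_password(value)}")
```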
Software Failure Case Studies
► Air traffic control has the important responsibility of informing aircraft pilots about relevant information regarding weather, routes, the distance between other airplanes, and more. Failing to communicate with aircraft pilots promptly could result in catastrophe. On September 14, 2004, at 5 P.M., air traffic control at the LA airport lost voice communication with approximately 400 airplanes being tracked in the southwestern United States, and many planes were headed towards each other. So what happened? The primary voice communication system shut down unexpectedly. To top it off, the backup system failed a few minutes after it was turned on. The cause of the error was that the communication system had an internal timer that ticked down in milliseconds; after it reached zero, it could no longer keep time, so it shut down. The outage affected 800 flights across the country.
Case #4: Toyota
► In the mid-2000’s many Toyota drivers were reporting that their car was accelerating without them touching
the pedal. After a series of accidents, which led to investigations, investigators discovered that software errors
were the cause of the unintended acceleration. In this case, there was a series of things wrong with the software
installed in Toyota cars: Memory corruption, wrong memory handling, disabling safety systems, systems with
single points of failure, and thousands of global variables. Toyota recalled millions of vehicles and Toyota’s
stock price decreased 20% a month after the cause of the problem was discovered. This case demonstrates the
consequences of not giving enough attention to good programming practices and testing as a result of wanting
to launch the product.
Conclusion
► As long as humans are involved in the development process, software systems will contain errors and will be
prone to failure. As software developers, our responsibility is to ensure that the systems we build are thoroughly
tested in different and realistic conditions. It is to ensure that the software we are promoting is actually capable
of helping and not harming its users. In many cases, competition and the desire to be the first on the market are
the motivators for launching an untested and unfinished product. As software users, our responsibility is to use
our software tools as a support for our activities and not blindly accept their results or suggestions.
Software Testing Terminologies
► Test Plan
► Test Scenarios
► Test Cases
► Priority
► Severity
► Verification
► Validation
► SDLC & STLC
Test Plan
► A test plan is a document outlining the scope, approach, resources, and schedule of the intended test activities. It identifies, among other things, the test items, the features to be tested, the tasks, the person assigned to each task, the test environment, the test design techniques, the entry and exit criteria, as well as contingency planning.
Simply put – A test plan defines the strategy to be used to test an application and elucidates the
objective, resources used and the information about the test environment. It is a blueprint which
summarizes how the testing activities will proceed for any project.
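► As a rough sketch only, the dictionary below lists the kind of fields such a test plan typically records; the concrete values are invented placeholders, not taken from any real plan.

```python
# Illustrative outline of a test plan captured as a Python dictionary.
# Real test plans are documents (often IEEE 829 style); every value here is a placeholder.
test_plan = {
    "scope": "Login and profile modules of the web application",
    "approach": "Manual functional tests plus an automated regression suite",
    "resources": ["2 test engineers", "1 test lead", "staging server"],
    "schedule": {"start": "2024-01-08", "end": "2024-01-26"},
    "test_items": ["login page", "password reset"],
    "features_to_be_tested": ["authentication", "session handling"],
    "tasks_and_owners": {"write test cases": "tester A", "set up environment": "tester B"},
    "test_design_techniques": ["equivalence partitioning", "boundary value analysis"],
    "entry_criteria": ["build deployed to staging", "smoke test passed"],
    "exit_criteria": ["all high-priority cases executed", "no open critical defects"],
    "contingency_plan": "Defer low-priority cases if the build arrives late",
}

for field, value in test_plan.items():
    print(f"{field}: {value}")
```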
Test Scenarios vs Test Cases
► A Test Case mentions the detailed inputs, execution conditions, steps to reproduce, test data, and the actual and expected results. Different test cases are designed and noted down during the Test Case Development phase of the STLC, which the testers later refer to when checking the application's behavior.
Test cases can be broadly classified as Functional and Non-functional, or Positive and Negative.
People often confuse Test Scenarios and Test Cases. Here is a simple example to quickly comprehend the difference between the two:
► Test Case 1: Check results on entering valid User Id & valid Password
► Test Case 2: Check results on entering Invalid User ID & valid Password
► Test Case 3: Check results on entering valid User ID & Invalid Password
► Test Case 4: Check results on entering Invalid User ID & Invalid Password
► Test Case 5: Check response when fields are Empty & Login Button is pressed.
► A Test Scenario is defined as any functionality that can be tested. It is a collective set of test cases which helps the testing team determine the positive and negative characteristics of the project. A Test Scenario gives a high-level idea of what we need to test; a sketch automating Test Cases 1-5 above is shown below.
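► The sketch below is a minimal, hedged illustration of how Test Cases 1-5 could be automated; the `login` function, its credentials, and its boolean return value are assumptions made purely so the example runs, not the behavior of any real application.

```python
# Sketch: Test Cases 1-5 for the login scenario written as pytest checks.
# `login` is a hypothetical stand-in; replace it with the real application call.
import pytest

VALID_USER, VALID_PASS = "alice", "s3cret"   # assumed valid credentials

def login(user_id: str, password: str) -> bool:
    """Stand-in implementation so the sketch is runnable."""
    return user_id == VALID_USER and password == VALID_PASS

@pytest.mark.parametrize(
    "user_id, password, expected",
    [
        (VALID_USER, VALID_PASS, True),    # TC1: valid ID, valid password
        ("mallory", VALID_PASS, False),    # TC2: invalid ID, valid password
        (VALID_USER, "wrong", False),      # TC3: valid ID, invalid password
        ("mallory", "wrong", False),       # TC4: invalid ID, invalid password
        ("", "", False),                   # TC5: empty fields, login pressed
    ],
)
def test_login(user_id, password, expected):
    assert login(user_id, password) is expected
```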
Priority vs Severity
► Priority specifies the level of urgency with which the bug needs to be resolved, whereas severity illustrates how critical the bug is; it describes the magnitude of the impact if the bug is not resolved. There are different levels of priority and severity, as sketched below.
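► Since the referenced image is not reproduced here, the sketch below records one commonly used set of levels as enums; the exact names and number of levels vary between organizations and are an assumption for illustration.

```python
# Illustrative priority and severity scales (names are typical, not universal).
from enum import Enum

class Priority(Enum):   # how urgently the bug should be fixed
    HIGH = 1
    MEDIUM = 2
    LOW = 3

class Severity(Enum):   # how big the impact is if the bug is not resolved
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3
    TRIVIAL = 4

# Classic example: a typo in the company name on the home page has low
# technical impact (Severity.MINOR) but must be fixed fast (Priority.HIGH).
logo_typo = (Severity.MINOR, Priority.HIGH)
print(logo_typo)
```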
Verification v/s Validation:
► Verification is a static process involving checking documents, design, code, and program. It incorporates the activities involved in producing high-quality software, including reviews, design analysis, and specification analysis. It is a relatively objective process.
Validation is the process of evaluating the final product to check whether the software meets the customers' expectations and requirements. It is a dynamic process of validating and testing the actual product.
It is essential for a tester to possess a clear understanding of the process followed to develop the product in their organization. Similar to the SDLC, a proper process, called the STLC or Software Testing Life Cycle, is followed to perform software testing. Let's look at these processes in more detail.
SDLC vs STLC
► SDLC
The Software Development Life Cycle (SDLC) explains the journey of software development. SDLC is a process followed for software development. It consists of a detailed strategy outlining how to develop, maintain, replace, alter and enhance specific software.
► STLC
The STLC illustrates a systematic and well-planned testing process which includes different stages to make the testing process quick, effective and accountable. Unlike the SDLC, the STLC (Software Testing Life Cycle) identifies how test cases will be implemented and how the testing will be conducted successfully.
Software Testing Life cycle
► Requirement Analysis:
Requirement Analysis is the first step of the Software Testing Life Cycle (STLC). In this phase the quality assurance team understands the requirements, i.e., what is to be tested. If anything is missing or not understandable, the quality assurance team meets with the stakeholders to gain detailed knowledge of the requirements.
► Test Planning:
Test Planning is the phase of the software testing life cycle in which all testing plans are defined. In this phase the manager of the testing team calculates the estimated effort and cost of the testing work. This phase starts once the requirement-gathering phase is completed.
► Test Case Development:
The test case development phase starts once the test planning phase is completed. In this phase the testing team writes down the detailed test cases. The testing team also prepares the required test data for the testing. Once the test cases are prepared, they are reviewed by the quality assurance team.
► Test Environment Setup:
Test environment setup is a vital part of the STLC. The test environment determines the conditions under which the software is tested. This is an independent activity and can be started alongside test case development. The testing team is not involved in this process; either the developer or the customer creates the testing environment.
► Test Execution:
After test case development and test environment setup, the test execution phase starts. In this phase the testing team executes the test cases prepared in the earlier step.
► Test Closure:
This is the last stage of the STLC, in which the testing process is analyzed.
Software Testing methodology
► UNIT TESTING is a level of software testing where individual units or components of the software are tested. The purpose is to validate that each unit of the software performs as designed. A unit is the smallest testable part of any software; it usually has one or a few inputs and usually a single output.
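► A minimal unit-test sketch is given below; the `discount_price` function is a made-up unit chosen only to show one unit being tested in isolation with Python's built-in unittest module.

```python
# Sketch of unit testing: one small unit, checked in isolation.
import unittest

def discount_price(price: float, percent: float) -> float:
    """Unit under test (illustrative): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountPriceTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount_price(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```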
SYSTEM TESTING
► SYSTEM TESTING is a level of testing that validates the complete and fully integrated software
product. The purpose of a system test is to evaluate the end-to-end system specifications.
INTEGRATION TESTING
► INTEGRATION TESTING is a level of software testing where individual units are combined and tested as
a group. The purpose of this level of testing is to expose faults in the interaction
between integrated units. Test drivers and test stubs are used to assist in Integration Testing.
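► The sketch below illustrates the role of a test stub during integration testing: a hypothetical `OrderService` is exercised together with a stubbed payment gateway, so the interaction between the two units can be checked without the real payment system. Both classes are invented for the example.

```python
# Sketch of integration testing with a test stub.
class PaymentGatewayStub:
    """Stub standing in for the real payment module."""
    def __init__(self, will_succeed: bool):
        self.will_succeed = will_succeed
        self.charged = []                       # records how the unit was called

    def charge(self, amount: float) -> bool:
        self.charged.append(amount)
        return self.will_succeed

class OrderService:
    """Unit under integration; depends on a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount: float) -> str:
        return "CONFIRMED" if self.gateway.charge(amount) else "PAYMENT_FAILED"

def test_order_confirmed_when_payment_succeeds():
    stub = PaymentGatewayStub(will_succeed=True)
    service = OrderService(stub)
    assert service.place_order(49.99) == "CONFIRMED"
    assert stub.charged == [49.99]              # interaction across the interface

def test_order_fails_when_payment_declined():
    stub = PaymentGatewayStub(will_succeed=False)
    assert OrderService(stub).place_order(49.99) == "PAYMENT_FAILED"
```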
PERFORMANCE TESTING
► Performance testing is the process of determining the speed, responsiveness and stability of a computer,
network, software program or device under a workload.
LOAD TESTING
Load testing generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program concurrently. As such, this testing is most relevant for multi-user systems, often ones built using a client/server model, such as web servers.
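A toy load-test sketch follows; `handle_request` is a stand-in for a call to the system under test, and the user and request counts are arbitrary assumptions. Real load tests would normally use a dedicated tool (e.g. JMeter or Locust) against the deployed system.

```python
# Sketch: simulating multiple concurrent users and summarising response times.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> float:
    """Stand-in for one user request; returns the elapsed time in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)                            # pretend the system does some work
    return time.perf_counter() - start

def run_load(concurrent_users: int = 50, requests_per_user: int = 10) -> None:
    total_requests = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(handle_request, range(total_requests)))
    print(f"requests: {len(timings)}, "
          f"avg: {sum(timings) / len(timings):.4f}s, max: {max(timings):.4f}s")

if __name__ == "__main__":
    run_load()
```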
STRESS TESTING
Stress testing is a type of software testing that verifies the stability and reliability of the system. This test mainly measures the system's robustness and error-handling capabilities under extremely heavy load conditions. Stress testing is done to make sure that the system will not crash in crunch situations. It even tests beyond the normal operating point and evaluates how the system works under those extreme conditions.
SPIKE TESTING
The goal of spike testing is to determine the behavior of a software application when it
receives extreme variations in traffic.
SOAK TESTING
Soak testing is a type of software testing in which the system is tested under a significant load over a continuous availability period to check how the system behaves over that sustained period.
VOLUME TESTING
It is a type of software testing in which the software is subjected to a huge volume of data. It is also referred to as flood testing. Volume testing is done to analyze the system performance as the volume of data in the database increases.
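As a rough illustration of the idea, the sketch below floods an in-memory SQLite database with increasing row counts and times a query; the table, data and row counts are arbitrary choices made for the example.

```python
# Sketch of a volume test: grow the data volume and observe query time.
import sqlite3
import time

def timed_query_with_rows(rows: int) -> float:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                     ((i * 0.5,) for i in range(rows)))
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*), AVG(amount) FROM orders").fetchone()
    return time.perf_counter() - start

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9} rows -> query took {timed_query_with_rows(n):.4f}s")
```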
Verification and Validation
Architectural Design
► Architectural specifications are understood and designed in this phase. Usually more than one technical approach is proposed, and the final decision is taken based on technical and financial feasibility. The system design is broken down further into modules taking up different functionality. This is also referred to as High Level Design (HLD).
► The data transfer and communication between the internal modules and with the outside world (other systems) is clearly understood and defined in this stage. With this information, integration tests can be designed and documented during this stage.
Low Level Design (Module Design)
► In this phase, the detailed internal design for all the system modules is specified, referred to as Low Level Design (LLD). It is important that the design is compatible with the other modules in the system architecture and with the other external systems. Unit tests are an essential part of any development process and help eliminate the maximum number of faults and errors at a very early stage. These unit tests can be designed at this stage, based on the internal module designs.
Difference between Verification and Validation
► Verification: Are we building the system right?
Validation: Are we building the right system?
► Verification: The objective of verification is to make sure that the product being developed is as per the requirements and design specifications.
Validation: The objective of validation is to make sure that the product actually meets the user's requirements, and to check whether the specifications were correct in the first place.
► Verification: The activities involved are reviews, meetings and inspections.
Validation: The activities involved are testing, such as black box testing, white box testing, gray box testing, etc.
► Verification: Carried out by the QA team to check whether the software implementation matches the specification document.
Validation: Carried out by the testing team.
► Verification: Execution of code does not come under verification.
Validation: Execution of code comes under validation.
► Verification: Explains whether the outputs are according to the inputs or not.
Validation: Describes whether the software is accepted by the user or not.
► Verification: Carried out before validation.
Validation: Carried out just after verification.
► Verification: The items evaluated are plans, requirement specifications, design specifications, code, test cases, etc.
Validation: The item evaluated is the actual product or software under test.
► Verification: The cost of errors caught in verification is less than that of errors found in validation.
Validation: The cost of errors caught in validation is more than that of errors found in verification.
► Verification: It is basically manual checking of documents and files such as requirement specifications.
Validation: It is basically checking of the developed program based on the requirement specification documents and files.