
BSc.

(Information Technology)
(Semester VI)
2018-19

Software Quality
Assurance
(USIT 601 Core)
University Paper Solution

By
Janhavi Vadke

Janhavi Vadke Page 1


Question 1

Q1a. Define the term quality and elaborate different views on quality.
Ans: There are different perceptions of the quality of products.
A. Customer-based definition of quality:
A quality product must meet customer needs, expectations, and satisfaction.
B. Manufacturing-based definition of quality:
The customer is not fully aware of all requirements; requirements are defined by architects and designers based on feedback surveys, and quality means conformance to those defined requirements.
C. Product-based definition of quality:
The product must add some new, appreciable feature in comparison to similar products in the market.
D. Value-based definition of quality:
A quality product is the best combination of price and the features required by customers; the cost of the product has a direct relationship with the value that customers find in it.
E. Transcendent quality:
The customer purchases the product because of a specific feature being absent or present in the product.
Quality views
1) producer’s view -Meeting requirements is a producer’s view of quality. This is the view of
the organization responsible for the project and processes, and the products and services
acquired, developed, and maintained by those processes. Meeting requirements means that
the person building the product does so in accordance with the requirements. Requirements
can be very complete or they can be simple, but they must be defined in a measurable format,
so it can be determined whether they have been met. The producer’s view of quality has these
four characteristics:
 Doing the right thing
 Doing it the right way
 Doing it right the first time
 Doing it on time without exceeding cost
2) customer’s view-Being fit for use is the customer’s definition. The customer is the end user
of the products or services. Fit for use means that the product or service meets the customer’s
needs regardless of the product requirements. Of the two definitions of quality, fit for use is
the more important. The customer’s view of quality has these characteristics:
 Receiving the right product for their use
 Being satisfied that their needs have been met
 Meeting their expectations
 Being treated with integrity, courtesy and respect
Most Information Technology (IT) groups have two quality gaps: the producer gap and the
customer gap. The producer gap is the difference between what is specified (the documented
requirements and internal standards) versus what is delivered (what is actually built). The
customer gap is the difference between what the producers actually delivered versus what
the customer wanted.
3) Provider view – This is the perspective of the organization that delivers the products and
services to the customer.
4) Supplier view – This is the perspective of the organization (that may be external to the
producer’s company, such as an independent vendor) that provides either the producer and/or
the provider with products and services needed to meet the requirements of the customer.



Q1b. Explain the lifecycle of quality improvements.
Ans: For improvement of quality, a cycle must be followed.
Dr. Joseph Juran, a pioneer of statistical quality control, defined an improvement cycle through DMMCI, i.e. 'Define', 'Measure', 'Monitor', 'Control', 'Improve'. There are interrelationships among the customers, suppliers, and processes used in development, testing, etc., and people should establish quality management based on a metrics program.
The approach has three parts, namely:
(1) Quality planning at all levels: Quality planning happens at two levels:
Organization level: It must be in the form of policy definitions and strategic quality plans based on the vision and missions set by senior management.
Unit level: Quality planning at the unit level must be done by the people responsible for managing the unit. The project plan and quality plan at the unit level must be consistent with the strategic quality plans at the organization level.
(2) Quality control:
The quality control process examines the present product at various levels against the defined standards so that an organization may appraise the outcome of its processes.
It must measure deviations with respect to the achievements planned during quality planning and reduce those deviations to a minimum.
(3) Quality improvement:
Improvement processes attempt to continuously improve the quality of the processes used for producing products.
There is no end to quality improvement; it takes on newer challenges again and again.

Q 1c. What are the quality principles of Total Quality Management (TQM)?
Ans:
TQM principles
1. Develop constancy of purpose in the definition and deployment of various initiatives
Management must create constancy of purpose for products and processes, allocating
resources adequately to provide for long term as well as short term needs. The processes
followed during entire lifecycle of product development from requirement gathering to final
delivery must be consistent with each other over a larger horizon.
2. Adapting to new philosophy of managing people / stakeholders by building
confidence and relationships
Management must adapt to the new philosophies of doing work and getting work done from
its people and suppliers.
3. Declare freedom from mass inspection of incoming/produced output
Organizations must eliminate dependence on mass inspection (and the resulting cost of failure) as the way to achieve quality, because mass inspection results in huge cost overruns and the product produced is of inferior quality.
4. Stop awarding lowest-price-tag contracts to suppliers
Organizations must end the practice of comparing unit purchase price as the criterion for awarding contracts. Vendor selection must be done on the basis of total cost, including price, rejections, etc. The aim of vendor selection is to minimize total cost, not merely the initial cost of purchasing.
5. Improve every process used for development and testing of the product
Constantly improve every process of planning, production, and service to the customer, as well as other support processes.
6. Institutionalize training across the organization for all the people
An organization must include modern methods of training which may include on-the-job-
training, classroom training, self-study etc. for all people to make better use of their capabilities.
Skill levels of people can be enhanced to make them suitable for better performance by
planning different training programs.
7. Institutionalize leadership throughout the organization at each level
An organization must adopt and include leadership at all levels with the aim of helping people do their jobs in a better way. The focus must shift from the number of work items produced to the quality of output.
8. Drive out fear of failure from employees
An organization must encourage effective two-way communication and other means to drive out the fear of failure from the minds of all employees. Employees can work more effectively and productively to achieve better quality output when there is no fear of failure.
9. Break down barriers between functions/departments
Physical as well as psychological barriers between departments and staff areas must be broken down to create cohesion. People start performing as a team, and there is synergy in group activities.
10. Eliminate exhortations by numbers, goals, targets
Eliminate the use of slogans, posters, and exhortations of the workforce demanding 'zero defects' and new levels of productivity without providing methods and guidance about how to achieve them.
11. Eliminate arbitrary numerical targets which are not supported by processes
Eliminate quotas for work force and numerical goals for managers to be achieved. Substitute
the quotas with mentoring and support to people, and helpful leadership in order to achieve
continual improvement in quality and productivity of processes.
12. Permit pride of workmanship for employees
People must feel proud of the work they are doing and know how they are contributing at the organizational level. This means replacing 'management by objective' with 'management by fact'.
13. Encourage education of new skills and techniques
Arrange a rigorous program of education and training for people working in different areas, and encourage self-improvement programs for everyone. People need to accept new challenges.
14. Top management commitment and action to improve continually
Clearly define top management’s commitment to ever-improving quality and productivity and
their obligation to implement quality principles throughout the organization.

Q1d Explain the structure of quality management system.


Ans:
Every organization has a different quality management structure depending upon its needs and circumstances. A general view of quality management is shown in the diagram below:



Following are the three main tiers of the quality management system structure:
1st TIER – Quality Policy
The quality policy sets the wish, intent, and direction of the management about how activities will be conducted by the organization.
Since management is the strongest driving force in an organization, its intents are most important.
It is the basic framework on which the quality temple rests.
2nd TIER – Quality Objectives
Quality objectives are the measurements established by the management to define progress
and achievements in a numerical way.
An improvement in quality must be demonstrated by improvement in achievements of quality
factors (test factors) in numerical terms as expected by the management.
The achievements of these objectives must be compared with planned levels expected and
results and deviations must be acted upon.
3rd TIER – Quality Manual
The quality manual, also termed the policy manual, is established and published by the management of the organization.
It sets a framework for other process definitions and is the foundation of quality planning at the organization level.

Q1e. How are quality and productivity related to each other?
Ans:
Productivity is the relationship between a given amount of output and the amount of input
needed to produce it. Profitability results when money is left over from sales after costs are paid.
The expenditures made to ensure that the product or service meets quality specifications affect
the final or overall cost of the products and/or services involved. Efficiency of costs will be an
important consideration in all stages of the market system from manufacturing to consumption.
Quality affects productivity. Both affect profitability. The drive for any one of the three must not
interfere with the drive for the others. Efforts at improvement need to be coordinated and
integrated. The real cost of quality is the cost of avoiding nonconformance and failure. Another
cost is the cost of not having quality—of losing customers and wasting resources.



1) Improvement in quality directly leads to improved productivity. Quality experts such as W. Edwards Deming have stated that quality is positively associated with productivity because as the quality of a product or service increases, there is less need for correcting work or fixing mistakes, so productivity improves.
2) The 'hidden factory' producing scrap, rework, sorting, repair, and customer complaints is closed.
3) An effective way to improve productivity is to reduce scrap and rework by improving processes: implement sound management practices, use research and development effectively, adopt modern manufacturing techniques, and improve time management.
4) Quality improvements lead to cost reduction.

Q1f. Write a short note on continual improvement cycle.


Ans:
(i) The continual (continuous) improvement cycle is based on a systematic sequence of Plan-Do-Check-Act activities representing a never-ending cycle of improvements. It was initially implemented in agriculture, then later in the electronics industry, and is now popular in all industries.
(ii) The PDCA improvement cycle can be thought of as a wheel of improvement, continually (continuously) rolling forward through problem-solving and achieving better and better results for the organization at each iteration.
(iii) Following are the stages of the PDCA cycle, explained with the help of a diagram:
(iii) Following are the stages of PDCA cycle explained with the help of a diagram

(a) Plan:
 Planning includes answering questions like who, when, what, why, where, how, etc. about various activities and setting expectations.
 Expected results must be defined in quantitative terms, and actions must be planned to achieve answers to these questions.
 Baseline studies (i.e. where one is standing, while the vision defines where one wishes to be) are important for planning.
(b) Do:
 An organization must work in the direction set by the plan devised in the earlier phase for improvement.
 Actual execution of the plan determines whether the expected results are achieved or not.
 The 'Do' process needs inputs like resources, hardware, software, training, etc. for execution of the plan.
(c) Check:



 An organization must compare the actual outcome of the 'Do' stage with the reference or expected results, which are the planned outcomes.
 Expected and actual results must be in numerical terms and compared at the periodicity defined in the plan.
(d) Act:
 If any deviations are observed in the actual outcome with respect to the planned results, the organization may need to decide on actions to correct the situation.
 One may have to initiate corrective or preventive actions as per the outcome of the 'Check' stage.
 When expected and actual results match within a given degree of variation, one can consider that the plan is going in the right direction.
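The Check and Act stages above can be sketched in code. This is a minimal, hypothetical illustration (the planned value, actual value, and 5% tolerance are assumptions, not prescribed by the text):

```python
# Sketch of the Check/Act stages of one PDCA iteration: compare the
# planned result with the actual result in numerical terms, and decide
# whether corrective action is needed.

def pdca_check_act(planned, actual, tolerance=0.05):
    """Return the Act-stage decision for one measured outcome."""
    deviation = abs(actual - planned) / planned   # relative deviation
    if deviation <= tolerance:
        return "plan on track"                    # within allowed variation
    return "initiate corrective action"           # deviation too large

print(pdca_check_act(planned=100, actual=97))   # plan on track
print(pdca_check_act(planned=100, actual=80))   # initiate corrective action
```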



Question 2

Q2a. Explain the lifecycle of software testing.


Ans:
The Software Testing Life Cycle (STLC) is defined as a sequence of activities conducted to perform software testing. Software testing is not just a single activity; it consists of a series of activities carried out methodically to help certify your software product.
Test planning & control:
 Determine the scope and risks, and identify the objectives of testing.
 Determine the test approach.
 Implement the test policy and/or test strategy.
 Determine the required test resources (people, test environment, PCs).
 Determine the exit criteria, i.e. which tasks and checks should be completed.
Test analysis & design:
 Designing black-box tests before code exists.
 Requirements should help in designing tests, e.g. the software needs to respond in 5 seconds with 20 people logged on.
 Assigning priorities to tests and identifying the test data used to execute test cases.
Test implementation & execution:
 In this phase, test cases are executed either manually or through automation.
 Expected results are compared with actual results.
 The results of test execution are logged.
Evaluating exit criteria & reporting:
 Exit criteria such as coverage criteria and acceptance criteria are evaluated.
 A test summary report is generated and provided to stakeholders.
Test closure activities:
 Adding reusable components to the repository.
 Checking whether all bugs have been resolved or not.
 Documenting the results, which can be reused in future releases.

Q2b Write a note on requirement traceability matrix


Ans:
The Requirement Traceability Matrix (RTM) provides a complete mapping for the software. It is a blueprint of the entire application. It helps in tracing whether any requirement is not implemented. Any redundancy across the entire software development can be tracked. It can track the failure of any test case. The application becomes more maintainable.

Disadvantages:
1 Difficult to create manually; requires investment of money and training
2 Maintaining the relationships needs huge effort
3 Requirements change frequently
4 The development team may not understand its importance
5 The customer may not find value in it

Horizontal Traceability:
1 The application can be traced from requirements up to test results
2 On failure of a test case, we must be able to find which requirements have not been met
3 When any requirement is not traceable to design, that requirement is not implemented at all

Bidirectional Traceability:
1 It must be possible to go in any direction from any point in the traceability matrix

Vertical Traceability:
1 Exists in individual columns
2 Interdependencies, parent-child relationships

Risk Traceability:
1 It references the risks or failures
2 It provides a control mechanism to reduce their probability or improve detection ability
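A traceability matrix can be sketched as a simple mapping from requirements to the test cases covering them. This is a hypothetical illustration (the requirement and test-case IDs are invented for the example):

```python
# Minimal sketch of a requirement traceability matrix (RTM):
# each requirement ID maps to the test cases that cover it.

rtm = {
    "REQ-1": ["TC-1", "TC-2"],   # covered by two test cases
    "REQ-2": ["TC-3"],
    "REQ-3": [],                 # not yet covered by any test
}

def untraced_requirements(matrix):
    """Horizontal traceability check: requirements with no test case
    are not implemented/tested at all."""
    return [req for req, tests in matrix.items() if not tests]

def requirements_for_test(matrix, test_id):
    """Bidirectional traceability: walk back from a failing test case
    to the requirements that have not been met."""
    return [req for req, tests in matrix.items() if test_id in tests]

print(untraced_requirements(rtm))          # ['REQ-3']
print(requirements_for_test(rtm, "TC-3"))  # ['REQ-2']
```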

Q2c. State and explain any 5 principles of software testing.


Ans:
The basic principles on which testing is based are given below:
1) Define the expected output or result for each test case executed, to understand whether the expected and actual outputs match or not. Mismatches may indicate possible defects. Defects may be in the product, the test cases, or the test plan.
2) Developers must not test their own programs. Few defects would be found in such testing, as approach-related defects will be difficult to find.
3) Inspect the results of each test completely and carefully. It would help in root cause analysis
and can be used to find weak processes. This will help in building processes rightly and
improving their capability.
4) Include test cases for invalid or unexpected conditions which are feasible during
production. Testers need to protect the users from any unreasonable failure.
5) Test the program to see if it does what it is not supposed to do, as well as what it is supposed to do.
6) Reusability of test case is important for regression. Test cases must be used repetitively so
that they remain applicable. Test data may be changed in different iterations.
7) Do not plan tests assuming no errors will be found. There must be targeted number of
defects for testing.
8) The probability of locating more errors in any module is directly proportional to the number
of errors already found in that module.

Q2d. Explain the relationship between error, defect and failure with a proper example.
Ans:
ERROR: An error is a mistake, misconception, or misunderstanding on the part of a
software developer. In the category of developer we include software engineers,
programmers, analysts, and testers.
For example, a developer may misunderstand a design notation, or a programmer might type a variable name incorrectly; this leads to an error. An error may be generated because of wrong logic, a wrong loop, or a syntax mistake. An error normally arises in software and leads to a change in the functionality of the program.
DEFECT: It can be simply defined as a variance between expected and actual. A defect is an error found AFTER the application goes into production. It commonly refers to several troubles with the software product, with its external behavior or with its internal features.



In other words, a defect is the difference between the expected and actual result in the context of testing. It is a deviation from the customer requirement.
FAILURE: A failure is the inability of a software system or component to perform its required functions within specified performance requirements. When a defect reaches the end customer, it is called a failure. During development, failures are usually observed by testers.
When a system or piece of software produces an incorrect result or does not perform the correct action, this is known as a failure. Failures are caused by faults in the software.
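The error, defect, and failure chain can be shown with a tiny invented example (the function and its typo are hypothetical, not from the text):

```python
# Sketch of the error -> defect -> failure chain.
# The programmer's ERROR (typing '-' instead of '+') introduces a
# DEFECT in the code; executing the defective code produces a FAILURE
# (an incorrect result observed at run time).

def add(a, b):
    return a - b   # defect: the developer typed '-' while intending '+'

expected = 5
actual = add(2, 3)           # actual is -1, not 5
print("failure" if actual != expected else "ok")
```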
Q2e. Discuss the challenges in software testing.
Ans:

Challenges in testing
• Requirements are not clear, complete, consistent, measurable, and testable. This may create problems in defining test cases.
• Requirements are wrongly documented and interpreted by business analysts and system analysts. These knowledgeable people are supposed to gather the customers' requirements by understanding their business workflow, but sometimes they are prejudiced based on their own experience.
• Code logic may be difficult to capture. Often testers are not able to understand the code due to a lack of technical language knowledge. Sometimes they do not have access to the files.
• Error handling may be difficult to capture. There may be combinations of errors, and various error messages and controls are required, such as detective controls, corrective controls, suggestive controls, and preventive controls.

Other challenges in testing


• Badly written code introduces many defects
• Bad architecture of the software cannot implement a good requirement statement
• Testing is considered a negative activity
• Testers find themselves in a lose-lose situation

Q2f. Describe the structure of a testing team.


Ans: Location of test teams in the organization

Independent test team
Advantages of an independent test team
 Test team is not under a delivery pressure
 Test team is not under pressure of `Not finding' a defect
 Independent view about a product is obtained as thought process of developers and
testers may be completely different
 Expert guidance and mentoring required by test team for doing effective testing may
be available in form of test manager

Disadvantages of independent test team


 There is always `Us' vs `Them' mentality
 Testers may not get a good understanding of development process
 Sometimes management is inclined excessively towards development team or test
team and other team feels that they have no value in an organization

Test team reporting to development manager


Advantages of test team reporting to development manager
 There is a better cooperation between development team and test team as both are
part of same team



 Test team can be involved in development and verification / validation activities from
the start of the project
 Testers may get a good understanding of development process and can help in
process improvement

Disadvantages of a test team reporting to the development manager


 Expert advice in form of test manager may not be available to testers
 Sometimes development managers are more inclined towards
development team
 Many times testers start perceiving product from developer’s angle and their defect
finding ability is reduced

Matrix organization

Developers becoming testers
Advantages of this approach
 Developers do not need another knowledge transfer while working as a tester
 Developers have a better understanding of detail design, coding, etc. and can test the application easily
 For automation, some amount of development skill is required in writing the
automation scripts
 It is less costly as there is no separate test team
 Psychological acceptance of defects is not a major issue as developers themselves
find the defects

Disadvantages of this approach


 Developers may not find value in doing testing
 There may be blindfolds while understanding requirements or selection of approach
 Developers may concentrate more on development activities
 Development needs more of a creation skill while testing needs more of a
destruction skill

Independent testing team
Advantages of this approach


 Separate test team is supposed to concentrate more on test planning, test strategies
and approach, creating test artifacts etc
 There is independent view about the work products derived from requirement
statement
 Special skills required for doing special tests may be available in such independent
teams
 `Testers working for customer', can be seen in such environment

Disadvantages of this approach


 Separate team means additional cost for organization
 Test team needs ramping up and knowledge transfer
similar to development team
 Organization may have to check for rivalries between development team and test
team

Domain experts doing software testing
Advantages of this approach


 `Fitness for use' can be tested
 Domain experts may provide facilitation



 Domain experts understand the scenario faced by actual users

Disadvantages of this approach


 Domain experts may have prejudices about the domain
 It may be very difficult to get domain experts in all areas
 It may mean huge cost for the organization



Question 3

Q3a. Explain boundary value testing and its guidelines.


Ans:
Boundary Value Analysis (BVA) focuses on the boundary of the input space to identify test cases.
The rationale behind boundary value testing is that errors tend to occur near the extreme values of an input variable.
Ex: Loop conditions may test for < when they should test for ≤.
BVA uses input variable values at their:
1)Minimum (min)
2) Just above the minimum (min+)
3) A nominal value (nom)
4) Just below their maximum (max-)
5) Maximum (max)
[Figure: the five BVA test values x(min), x(min+), x(nominal), x(max−), and x(max) for a variable x on the interval a ≤ x ≤ b]
• Test values for variable x, where a ≤ x ≤ b.
Four types of Boundary value testing
1) Normal Boundary Value Testing
2) Robust Boundary Value Testing
3) Worst case Boundary Value Testing
4) Robust worst case Boundary Value Testing

1) Normal Boundary Value Testing

[Figure: normal boundary value test cases for two variables x1 and x2]

In general, there will be 4n + 1 test cases for n independent inputs.



2) Robust Boundary Value Testing

In general, there will be 6n + 1 test cases for n independent inputs.

3) Worst case Boundary Value Testing

- In general, there will be 5^n test cases for n input variables.

4) Robust worst case Boundary Value Testing



Robust worst-case testing for a function of n variables generates 7^n test cases.
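The normal and robust variants above can be sketched in code. This is a minimal illustration, assuming integer variables and a single-fault strategy (one variable varied at a time, the others held at their nominal value); the variable names and ranges are invented:

```python
# Sketch: generating normal (4n + 1) and robust (6n + 1) boundary
# value test cases for n independent input variables.

def bva_test_cases(ranges, robust=False):
    """ranges: dict of name -> (min, max). Returns a list of test dicts."""
    nominal = {name: (lo + hi) // 2 for name, (lo, hi) in ranges.items()}
    cases = [dict(nominal)]                        # the all-nominal case
    for name, (lo, hi) in ranges.items():
        values = [lo, lo + 1, hi - 1, hi]          # min, min+, max-, max
        if robust:
            values = [lo - 1] + values + [hi + 1]  # add min- and max+
        for v in values:
            case = dict(nominal)                   # others stay nominal
            case[name] = v
            cases.append(case)
    return cases

ranges = {"x1": (1, 100), "x2": (1, 31)}
print(len(bva_test_cases(ranges)))                # 4*2 + 1 = 9
print(len(bva_test_cases(ranges, robust=True)))   # 6*2 + 1 = 13
```

Worst-case testing would instead take the Cartesian product of the five (or seven, for robust) values per variable, giving 5^n or 7^n cases.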

Q3b. Write a note on improved equivalence class testing.


Ans:
Equivalence class testing
1) The inputs and outputs are partitioned into mutually exclusive parts called equivalence classes.
2) The elements in an equivalence class are expected to exhibit similar behaviour.
3) Equivalence partitions/classes can be found for both valid data and invalid data.
4) Any one sample from a class represents the entire class.
Steps to write test cases:
1. Identify equivalence classes by taking each input and output condition and partitioning it into valid and invalid classes.
2. Generate test cases using the equivalence classes.
Ex: If a valid Age lies between 18 and 60, the equivalence classes will be one valid class (18 ≤ age ≤ 60) and two invalid classes (age < 18 and age > 60).

There are 4 types of equivalence class testing


1) Weak Normal equivalence class testing
2) Strong Normal equivalence class testing
3) Weak Robust equivalence class testing
4) Strong Robust equivalence class testing

1) Weak normal equivalence class testing is accomplished by using one variable from each equivalence class in a test case. The word 'weak' refers to the 'single fault assumption'. We would thus end up with the weak equivalence class test cases shown in the figure. Each dot in the graph indicates a test input; from each class we have one dot, meaning there is one representative element per test case. In fact, we will always have the same number of weak equivalence class test cases as there are classes in the partition.



2) Strong Normal Equivalence Class Testing:
This type of testing is based on the multiple fault assumption theory. So now we need test cases from each element of the Cartesian product of the equivalence classes, as shown in the figure. Just as we have truth tables in digital logic, there are similarities between those truth tables and our pattern of test cases. The Cartesian product guarantees that we have a notion of "completeness" in the following two ways:
 We cover all equivalence classes
 We have one of each possible combination of inputs.

3) Weak Robust equivalence class testing


Identify equivalence classes of valid and invalid values.
Test cases have all valid values except one invalid value.
Detects faults due to calculations with valid values of a single variable.
Detects faults due to invalid values of a single variable.



4) Strong Robust equivalence class testing
The robust part comes from consideration of invalid values; the strong part refers to the multiple fault assumption.
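The partitioning idea can be sketched for the Age example above. This is a hypothetical illustration; the class names and representative values are invented:

```python
# Sketch: equivalence classes for an 'age' input that is valid in [18, 60].
# One representative value from each class stands in for the whole class.

classes = {
    "invalid_low":  lambda a: a < 18,
    "valid":        lambda a: 18 <= a <= 60,
    "invalid_high": lambda a: a > 60,
}

def classify(age):
    """Return the name of the equivalence class containing 'age'."""
    for name, member in classes.items():
        if member(age):
            return name

# Weak (single fault) testing needs only one representative per class:
representatives = {"invalid_low": 10, "valid": 35, "invalid_high": 75}
for name, value in representatives.items():
    print(value, "->", classify(value))   # each maps back to its class
```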

Q3c. Describe the decision table testing technique in detail.


Ans: Decision tables are ideal for describing situations in which a number of combinations of actions are taken under varying sets of conditions.
1. A decision table has four portions: the condition stub, the condition entries, the action stub, and the action entries.
2. A column in the entry portion is a rule. Rules indicate which actions, if any, are taken for the circumstances indicated in the condition portion of the rule. In the decision table below, when conditions c1, c2, and c3 are all true, actions a1 and a2 occur. When c1 and c2 are both true and c3 is false, then actions a1 and a3 occur.
3. The entry for c3 in the rule where c1 is true and c2 is false is called a "don't care" entry. The don't care entry has two major interpretations: the condition is irrelevant, or the condition does not apply.
4. This structure guarantees that we consider every possible combination of condition values.
5. This completeness property of a decision table guarantees a form of complete testing.
6. Decision tables in which all the conditions are binary are called Limited Entry Decision Tables (LEDTs).
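The rules described above can be sketched as data. This is a hypothetical limited entry decision table matching the text's example (c1–c3 and a1–a4 are placeholder conditions and actions; `None` stands for a "don't care" entry):

```python
# Sketch of a limited entry decision table: each rule maps a tuple of
# condition entries (True/False/None for "don't care") to actions.

rules = [
    # (c1,    c2,    c3)   -> actions
    ((True,  True,  True),  ["a1", "a2"]),
    ((True,  True,  False), ["a1", "a3"]),
    ((True,  False, None),  ["a4"]),   # c3 is a "don't care" entry
    ((False, None,  None),  []),
]

def actions_for(c1, c2, c3):
    """Find the first rule whose entries match the observed conditions."""
    observed = (c1, c2, c3)
    for entries, actions in rules:
        if all(e is None or e == o for e, o in zip(entries, observed)):
            return actions
    return []

print(actions_for(True, True, True))   # ['a1', 'a2']
print(actions_for(True, False, True))  # ['a4']  (c3 is ignored)
```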

Q3d. Write a note on DD path testing.


Ans:
The best-known form of code-based testing is based on a construct known as a decision-to-decision path (DD-path) (Miller, 1977). The name refers to a sequence of statements that, in Miller's words, begins with the "outway" of a decision statement and ends with the "inway" of the next decision statement. No internal branches occur in such a sequence, so the corresponding code is like a row of dominoes lined up so that when the first falls, all the rest in the sequence fall.
We will define DD-paths in terms of paths of nodes in a program graph. In graph theory,
these paths are called chains, where a chain is a path in which the initial and terminal nodes
are distinct, and every interior node has indegree = 1 and outdegree = 1.
Definition
A DD-path is a sequence of nodes in a program graph such that
Case 1: It consists of a single node with indeg = 0.
Case 2: It consists of a single node with outdeg = 0.
Case 3: It consists of a single node with indeg ≥ 2 or outdeg ≥ 2.
Case 4: It consists of a single node with indeg = 1 and outdeg = 1.
Case 5: It is a maximal chain of length ≥ 1.
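The five cases can be checked mechanically from node degrees. The sketch below uses a small hypothetical program graph (node 1 is the entry, node 2 a decision whose branches rejoin at node 6, node 7 the exit) and classifies each single node under cases 1 to 4; maximal chains of consecutive case-4 nodes then form the case-5 DD-paths.

```python
# Sketch: classifying program-graph nodes for DD-path analysis.
# The graph is a hypothetical example, not taken from the text.
GRAPH = {1: [2], 2: [3, 4], 3: [5], 4: [6], 5: [6], 6: [7], 7: []}

def degrees(graph):
    """Compute indegree and outdegree for every node."""
    indeg = {n: 0 for n in graph}
    outdeg = {n: len(succ) for n, succ in graph.items()}
    for succ in graph.values():
        for n in succ:
            indeg[n] += 1
    return indeg, outdeg

def dd_path_case(node, indeg, outdeg):
    """Return which case of the DD-path definition a single node falls under."""
    if indeg[node] == 0:
        return 1                      # entry node
    if outdeg[node] == 0:
        return 2                      # exit node
    if indeg[node] >= 2 or outdeg[node] >= 2:
        return 3                      # decision or junction node
    return 4                          # interior node of a chain

indeg, outdeg = degrees(GRAPH)
for n in sorted(GRAPH):
    print(n, dd_path_case(n, indeg, outdeg))
```

Here nodes 3, 4 and 5 fall under case 4, so the maximal chain 3→5 (and node 4 on the other branch) would be collapsed into case-5 DD-paths.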
Q3e. Explain the concept and significance of cause and effect graphing technique
Ans:

 Cause-Effect Graphing is a technique where causes are the input conditions and effects are
the results of those input conditions.
 The Cause-Effect graph technique restates the requirements specification in terms of the
logical relationship between the input and output conditions. Since it is logical, it is obvious
to use Boolean operators like AND, OR and NOT.
 Cause-and-effect graphs show unit inputs on the left side of a drawing and use AND,
OR, and NOT “gates” to express the flow of data across stages of the unit.
 The basic cause-and-effect graph structures connect causes to effects through identity, NOT, AND, and OR relations.
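In code, the logical relationships of a cause-effect graph reduce to Boolean expressions over the causes. The sketch below is a hypothetical example (causes c1–c3 and effects e1, e2 are invented for illustration); enumerating all cause combinations yields candidate test cases for each effect.

```python
from itertools import product

# Hypothetical cause-effect graph: causes c1..c3, effects e1 and e2
# expressed with the AND, OR and NOT relations described in the text.
def effect_e1(c1, c2, c3):
    return c1 and (c2 or c3)      # e1 fires when c1 AND (c2 OR c3)

def effect_e2(c1, c2, c3):
    return not c1                 # e2 fires when NOT c1

# Enumerate every cause combination to see which effects it triggers;
# each row is a candidate test case derived from the graph.
for causes in product([False, True], repeat=3):
    print(causes, effect_e1(*causes), effect_e2(*causes))
```

This shows why the technique is described as restating the specification: each Boolean expression is a direct, checkable translation of one input/output relationship.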

Q3f. Compare weak robust and strong robust equivalence class testing.
Ans: There are two main properties that underpin the methods used in functional testing: the
single fault assumption and the multiple fault assumption. These two properties lead to two
different types of equivalence class testing, weak and strong. However, if we decide to test for
invalid input/output as well as valid input/output, we can produce another two types
of equivalence class testing, normal and robust. Robust equivalence class testing takes into
consideration the testing of invalid values, whereas normal does not. Therefore, we now have
four different types of equivalence class testing: weak normal, strong normal, weak
robust and strong robust.

1) Weak Robust Equivalence Class Testing

As with weak normal equivalence class testing, we test only one value from each
equivalence class. However, we now also test for invalid values. Since weak equivalence
class testing is based on the single fault assumption, a test case will have one invalid value and
the remaining values will all be valid.

2) Strong Robust Equivalence Class Testing

This form of equivalence class testing produces test cases for all valid and invalid elements of
the Cartesian product of all the equivalence classes.
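The difference between the two forms can be sketched by generating test cases for two hypothetical variables (the variable names, valid values and invalid values below are illustrative only): weak robust keeps at most one invalid value per case, while strong robust takes the full Cartesian product.

```python
from itertools import product

# Hypothetical representatives: one value per valid class, and
# representatives of each invalid class, for two variables "a" and "b".
valid = {"a": 5, "b": 50}
invalid = {"a": [-1], "b": [0, 999]}

def weak_robust(valid, invalid):
    """Single fault assumption: at most one invalid value per test case."""
    cases = [dict(valid)]                 # one all-valid case
    for var, bad_values in invalid.items():
        for bad in bad_values:
            case = dict(valid)
            case[var] = bad               # exactly one invalid value
            cases.append(case)
    return cases

def strong_robust(valid, invalid):
    """Cartesian product of all valid and invalid representatives."""
    domains = {v: [valid[v]] + invalid[v] for v in valid}
    names = list(domains)
    return [dict(zip(names, combo))
            for combo in product(*(domains[n] for n in names))]

print(len(weak_robust(valid, invalid)))    # 1 all-valid + 3 single-fault cases
print(len(strong_robust(valid, invalid)))  # 2 * 3 combinations
```

With one invalid class for `a` and two for `b`, weak robust yields 4 cases while strong robust yields 6; the gap grows quickly as variables and classes are added, which is why strong robust testing is the most thorough but also the most expensive form.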

Question 4

Q4a. Explain different methods of verification.

Ans: Verification is the process of ensuring that we are building the product right, i.e.,
checking at each stage that the work products conform to the specified requirements.
Methods of verification:
 Inspection - This is the process of examining the product using one or several of
the five senses: visual, auditory, olfactory, tactile, and taste. An example of
inspection is the taste test of a cake you ordered. For software development, it
may mean reading the source code and checking for syntax mistakes.
 Self-Review - Self-review may not be referred to as an official way of review in most
software verification descriptions, as it is assumed that everybody does a
self-check before giving a work product for further verification.
 Peer Review - Peer review is the most informal type of review, where an author and
a peer are involved. It is a review done by a peer, and review records may be maintained.
A peer may be a fellow developer or tester as the case may be.
 Walkthrough- A walkthrough is conducted by the author of the ‘document
under review’ who takes the participants through the document and his or her
thought processes, to achieve a common understanding and to gather feedback.
 Audits-Audit is a formal review based on samples. Audits are conducted by
auditors who may or may not be experts in the given work product.

Q4b. Explain the steps involved in management of verification and validation.

Ans:
Verification is a process of evaluating software during the development phase. It helps you to
decide whether the product of a given application satisfies the specified requirements.
Validation is the process of evaluating software after the development process to
check whether it meets the customer requirements.
• Defining the processes for Verification and Validation
• Software quality assurance process
• Software quality control process
• Software development process
• Software life cycle definition
• Prepare plans for execution of process
• Initiate implementation plan
• Monitor execution plan
• Analyze problems discovered during execution
• Report progress of the processes
• Ensure product satisfies requirements

Q4c. Describe the benefits of review technique.

Ans:
A review is a systematic examination of a document by one or more people with the main
aim of finding and removing errors early in the software development life cycle. Reviews are
used to verify documents such as requirements, system designs, code, test plans and test
cases.
 Verification can confirm that the work product has followed the processes correctly

 It can find defect in terms of deviations from standards easily
 Location of the defect can be found
 It can reduce the cost of finding and fixing the defects
 It can locate the defect easily as work product under review is yet to be integrated
 It can be used effectively for training people
 Productivity is improved and timescales reduced because the correction of defects in early
stages and work-products will help to ensure that those work-products are clear and
unambiguous.
 Testing costs and time are reduced as enough time is spent during the initial phase.
 Reduction in costs because there are fewer defects in the final software.
 Verification helps in lowering the count of defects in the later stages of
development.
 Verifying the product at the starting phase of the development will help in understanding
the product in a better way.
 It reduces the chances of failures in the software application or product.
 It helps in building the product as per the customer specifications and needs.

Q4d. List and explain how the formal review is carried out.
Ans: A typical formal review process consists of six main steps:
1 Planning
2 Kick-off
3 Preparation
4 Review meeting
5 Rework
6 Follow-up.

(i) Planning
The review process for a particular review begins with a 'request for review' by the author to the
moderator (or inspection leader). A moderator is often assigned to take care of the scheduling
(dates, time, place and invitation) of the review. On a project level, the project planning needs
to allow time for review and rework activities, thus providing engineers with time to thoroughly
participate in reviews. For more formal reviews, e.g. inspections, the moderator always
performs an entry check and defines at this stage formal exit criteria. The entry check is carried
out to ensure that the reviewers' time is not wasted on a document that is not ready for review.
A document containing too many obvious mistakes is clearly not ready to enter a formal review
process and it could even be very harmful to the review process. It would possibly de-motivate
both reviewers and the author. Also, the review is most likely not effective because the
numerous obvious and minor defects will conceal the major defects.

(ii) Kick-off
An optional step in a review procedure is a kick-off meeting. The goal of this meeting is to get
everybody on the same wavelength regarding the document under review and to commit to
the time that will be spent on checking. Also the result of the entry check and defined exit
criteria are discussed in case of a more formal review. In general a kick-off is highly
recommended since there is a strong positive effect of a kick-off meeting on the motivation of
reviewers and thus the effectiveness of the review process.

During the kick-off meeting the reviewers receive a short introduction on the objectives of
the review and the documents. The relationships between the document under review and

the other documents (sources) are explained, especially if the number of related documents is
high.

Role assignments, checking rate, the pages to be checked, process changes and possible
other questions are also discussed during this meeting. Of course the distribution of the
document under review, source documents and other related documentation, can also be
done during the kick-off.

(iii) Preparation
The participants work individually on the document under review using the related
documents, procedures, rules and checklists provided. The individual participants identify
defects, questions and comments, according to their understanding of the document and
role. All issues are recorded, preferably using a logging form. Spelling mistakes are recorded
on the document under review but not mentioned during the meeting. The annotated
document will be given to the author at the end of the logging meeting. Using checklists
during this phase can make reviews more effective and efficient, for example a specific
checklist based on perspectives such as user, maintainer, tester or operations, or a checklist
for typical coding problems.

A critical success factor for a thorough preparation is the number of pages checked per hour.
This is called the checking rate.

(iv) Review Meeting

The meeting typically consists of the following elements (partly depending on the review
type): logging phase, discussion phase and decision phase.

During the logging phase the issues, e.g. defects, that have been identified during the
preparation are mentioned page by page, reviewer by reviewer and are logged either by the
author or by a scribe. A separate person to do the logging (a scribe) is especially useful for
formal review types such as an inspection. To ensure progress and efficiency, no real
discussion is allowed during the logging phase. If an issue needs discussion, the item is
logged and then handled in the discussion phase. A detailed discussion on whether or not an
issue is a defect is not very meaningful, as it is much more efficient to simply log it and
proceed to the next one. Furthermore, in spite of the opinion of the team, a discussed and
discarded defect may well turn out to be a real one during rework.

During the logging phase the focus is on logging as many defects as possible within a certain
timeframe. To ensure this, the moderator tries to keep a good logging rate (number of
defects logged per minute). In a well-led and disciplined formal review meeting, the logging
rate should be between one and two defects logged per minute.

For a more formal review, the issues classified as discussion items will be handled during this
meeting phase. Informal reviews will often not have a separate logging phase and will start
immediately with discussion. Participants can take part in the discussion by bringing forward
their comments and reasoning. As chairman of the discussion meeting, the moderator takes
care of people issues. For example, the moderator prevents discussions from getting too
personal, rephrases remarks if necessary and calls for a break to cool down 'heated'
discussions and/or participants.

At the end of the meeting, a decision on the document under review has to be made by the
participants, sometimes based on formal exit criteria. The most important exit criterion is the
average number of critical and/or major defects found per page.

(v) Rework
Based on the defects detected, the author will improve the document under review step by
step. Not every defect that is found leads to rework. It is the author's responsibility to judge if
a defect has to be fixed. If nothing is done about an issue for a certain reason, it should be
reported to at least indicate that the author has considered the issue.

Changes that are made to the document should be easy to identify during follow-up.
Therefore the author has to indicate where changes are made.

(vi) Follow-up
The moderator is responsible for ensuring that satisfactory actions have been taken on all
(logged) defects, process improvement suggestions and change requests. Although the
moderator checks to make sure that the author has taken action on all known defects, it is
not necessary for the moderator to check all the corrections in detail. If it is decided that all
participants will check the updated document, the moderator takes care of the distribution
and collects the feedback. For more formal review types the moderator checks for compliance
to the exit criteria.

In order to control and optimize the review process, a number of measurements are collected
by the moderator at each step of the process.

Q4e. Explain the VV model of testing.

Ans: The VV-model is an SDLC model where execution of processes happens in a sequential
manner in a V-shape. It is also known as the Verification and Validation model. The VV-model is
an extension of the waterfall model and is based on the association of a testing phase with
each corresponding development stage.
 Requirement Analysis
Requirements analysis is critical to the success or failure of a systems or software project.
The requirements should be documented, actionable, measurable, testable, traceable,
related to identified business needs or opportunities, and defined to a level of detail
sufficient for system design.
 System Design
Software design usually involves problem solving and planning a software solution. This
includes both a low-level component and algorithm activities and a high-
level, architecture design.
 Program Design
Program Design is the process that an organization uses to develop a program. It is most
often an iterative process involving research, consultation, initial design, testing and
redesign. A program design is the plan of action that results from that process.
 Acceptance testing
Acceptance testing is a level of software testing where a system is tested for acceptability.
The purpose of this test is to evaluate the system’s compliance with the business
requirements and assess whether it is acceptable for delivery.
 Interface testing

Interface testing is defined as a software testing type which verifies whether the
communication between two different software systems is done correctly. A
connection that integrates two components is called interface.
 Integration testing
Integration testing is a level of software testing where individual units are combined and
tested as a group. The purpose of this level of testing is to expose faults in the
interaction between integrated units. Test drivers and test stubs are used to assist in
integration testing.
 Coding, code review, unit testing
Coding methodology includes a diagrammatic notation for documenting the results of the
procedure. It also includes an objective set (ideally quantified) of criteria for
determining whether the results of the procedure are of the desired quality.
Unit testing is a level of software testing where individual units/components of a software
are tested. The purpose is to validate that each unit of the software performs as
designed. A unit is the smallest testable part of any software. It usually has one or a
few inputs and usually a single output.

Q4f. What are the roles and responsibilities of a reviewer?

Ans:

1. Moderator: The Moderator is the key role in a code review. The moderator is responsible for
selecting a team of reviewers, scheduling the code review meeting, conducting the meeting, and
working with the author to ensure that necessary corrections are made to the reviewed
document.
2. Author: The Author wrote the code that is being reviewed. The author is responsible for
starting the code review process by finding a Moderator. The role of Author must be separated
from that of Moderator, Reader, or Recorder to ensure the objectivity and effectiveness of the
code review. However, the Author serves an essential role in answering questions and making
clarifications during the review and making corrections after the review.
3. Reader: The Reader presents the code during the meeting by paraphrasing it in his own words.
It is important to separate the role of Reader from Author, because it is too easy for an author to
explain what he meant the code to do instead of explaining what it actually does. The reader's
interpretation of the code can reveal ambiguities, hidden assumptions, poor documentation and
style, and other errors that the Author would not be likely to catch on his own.
4. Scribe: The Scribe records all issues raised during the code review. Separating the role of
Scribe from the other roles allows the other reviewers to focus their entire attention on the code.

Question 5

Q5a. What is integration testing? Explain the Big bang approach.

Ans:
Integration testing:
 Integration means combining. For example, in this testing phase, different software
modules are combined and tested as a group to make sure that the integrated system is
ready for system testing.
 Integration testing checks the data flow from one module to other modules.
 This kind of testing is performed by testers.
Characteristics of Big bang approach
Big Bang Integration Testing is an integration testing strategy wherein all units are linked at once,
resulting in a complete system. When this type of testing strategy is adopted, it is difficult to
isolate any errors found, because attention is not paid to verifying the interfaces across individual
units.
• Testing is a last phase of development life cycle when everything is finalized
• Characterized by huge rework, retesting, scrap, sorting
• Regression testing reveals many issues as correction may not be correct
• All requirements and designs cannot be covered in testing
• Major part of software build never gets tested

Disadvantages of Big-Bang approach
 Defects present at the interfaces of components are identified at very late stage as all
components are integrated in one shot.
 It is very difficult to isolate the defects found.
 There is high probability of missing some critical defects, which might pop up in the
production environment.
 It is very difficult to cover all the cases for integration testing without missing even a
single scenario.

Q5b. What is the need of a Security Testing?

Ans:
Security Testing:
 Security testing is a special type of testing intended to check the level of security and
protection offered by an application to the users against unfortunate incidences.
 The incidence could be loss of privacy, loss of data etc.
 There are some weak points in a system which is vulnerable to outside
attacks/unauthorized entry in the system.
 Security testing needs to follow validation activities to prove that the system is protected
enough against external attacks.
 No system is fully protected from all types of attacks.
Some definitions associated with security are given below:
Vulnerability:
 No system in the world is perfect. No system exists that does not have any
vulnerability; therefore, one must take precautions not to expose these weak points to
the outside.
Threats:

 A threat represents the possible attacks on the system from outsiders with malicious
intentions.
 Threat is defined as an exploitation of vulnerabilities of the system.
Perpetrators:
 Perpetrators are entities who are unwelcome guests in the system.
 They can create problems in a system by doing something undesirable like causing loss of
data or making changes in the system.
 Perpetrators can be people, other systems, viruses, etc.

Q5c. What is performance testing? List different types of performance testing.

Ans:
Performance Testing:
 Performance testing is intended to find whether the system meets its performance
requirements under normal level of activities.
 Generally, system performance requirements are identified in requirement statement
defined by the customer and system design implements them. Performance criteria must
be expressed in numerical terms.
 Design verification can help in determining whether required measures meet with
performance requirements or not.
 This is the situation where verification does not work at that extent, and one need to test
it by actually performing the operations on the system.

The focus of Performance Testing is checking a software program's
Speed - Determines whether the application responds quickly
Scalability - Determines maximum user load the software application can handle.
Stability - Determines if the application is stable under varying loads

Performance criteria must be measurable in quantitative terms such as time in ‘milliseconds’.
Some examples of performance testing can be given below:
 Adding a new record in a database must take at most five milliseconds.
 Searching for a record in a database containing one million records must not take more
than one second.
 Sending 1 MB of information across the system over a 512-kbps network must not
take more than one minute.
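A quantitative criterion like the first example can be expressed directly as a measured assertion. This is an illustrative sketch only: the in-memory "database" and the 5 ms threshold are stand-ins, not the text's actual system.

```python
import time

# Sketch: "adding a record must take at most five milliseconds" expressed
# as a measurable check. The list-based 'database' is hypothetical.
LIMIT_MS = 5.0
db = []

def add_record(record):
    db.append(record)

start = time.perf_counter()
add_record({"id": 1, "name": "example"})
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"add_record took {elapsed_ms:.3f} ms")
assert elapsed_ms <= LIMIT_MS, "performance requirement not met"
```

Real performance tests repeat such measurements under load and report distributions (e.g., 95th percentile latency) rather than a single sample, but the principle is the same: the requirement must be numeric before it can be verified.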

Types of Performance Testing

 Load testing - checks the application's ability to perform under anticipated user loads.
The objective is to identify performance bottlenecks before the software application
goes live.
 Stress testing - involves testing an application under extreme workloads to see how it
handles high traffic or data processing. The objective is to identify the breaking point
of an application.
 Endurance testing - is done to make sure the software can handle the expected load
over a long period of time.
 Spike testing - tests the software's reaction to sudden large spikes in the load
generated by users.

 Volume testing - Under volume testing, a large amount of data is populated in a database
and the overall software system's behavior is monitored. The objective is to check the
software application's performance under varying database volumes.
 Scalability testing - The objective of scalability testing is to determine the software
application's effectiveness in "scaling up" to support an increase in user load. It helps
plan capacity addition to your software system.

Q5d. Explain the concept of inter system testing and its Importance.
Ans: Many times, an application is hosted across locations; however, all data needs to be
deployed over a central location. The process of testing the integration points for a single
application hosted at different locations, and then ensuring correct data flow across each
location, is known as inter system testing.
Objective
 Determine Proper parameters and data are correctly passed between the applications.
 Documentation for involved system is correct and accurate.
 Ensure Proper timing and coordination of functions exists between the application system.
Use
 When there is a change in parameters in the application system.
 If the parameters are erroneous, the risk associated with such parameters decides the
extent and type of testing.
 Intersystem parameters are checked/verified after a change or a new application
is placed in production.
Example
 Develop test transaction set in one application and passing to another system to verify
the processing.
 Entering test transactions in live production environment and then using integrated test
facility to check the processing from one system to another.
 Verifying new changes of the parameters in the system, which are being tested, are
corrected in the document.

Q5e. Explain the significance of Usability testing.

Ans:
Usability testing checks that:
 It is simple to understand application usage
 Help is available and the user can use it effectively
 It is easy to execute an application
Usability testing is done by:
 Direct observation of people using the system
 Conducting usability surveys
 Beta testing or business pilot of application

Usability testing checks for human factor problems such as,

 Whether outputs from the system such as printouts, reports etc are meaningful or not
 Is error diagnostic straightforward or not
 Error messaging must help common users using the system
 Does User Interface have conformity to the syntax, format, style observations etc
 Is the application easy to use?
 Is there an exit option in all choices so that user can exit the system at any moment?
 System must not annoy intended user

 System taking control from the user without indicating when it will be returned can be a
problem
 System must provide online help or a user manual
 System must be consistent in its function and overall design

Q5f. Explain Commercial off-the-shelf software testing.

Ans:
‘COTS’ stands for ‘Commercial Off-The-Shelf’. Such software is readily available in the
market, and users can buy and use it directly.
There are many limitations that prevent a development organisation from making its own
required software, and one has to make a decision to buy ‘COTS’. Some of the reasons are given
below:
 Line of Business: An organisation developing software for banks, financial institutes, etc.
may not be in the line of business of developing an automation testing tool for its test
requirements. In such a case, the organisation buys software from outside and uses it
without investing much time and resources in making such software in-house.
 Cost-Benefit Analysis: Sometimes it is very costly to develop software in-house due to
various limitations like knowledge, skills, resources, etc. It may be easier to go to the
market and purchase the product, because buying it may be cost-effective.
 Expertise/Domain Knowledge: An organisation may have knowledge about how to use
the software that it needs, but it may not have the development knowledge to build such
software in-house. Such software can be bought from the market and used without going
into the details of how it is built.
 Delivery Schedule: ‘COTS’ are available across the table by paying a price, while
developing such software in-house may take a very long time and huge effort.
Features of COTS Testing:
 ‘COTS’ are developed with general requirements collected from the market. It may not
exactly match with organisation’s needs and expectations.
 Some ‘COTS’ may need changing business process to suite the ‘COTS’ implementation
in organisation.
 This is another way of Business Process Reengineering(BPR) for an organisation where
intentionally proven practices can be implemented by using ‘COTS’.
 Sometimes, ‘COTS’ may need configuration of software or system to suite business
model.
Challenges in Testing ‘COTS’:
 Requirement statements and design may not be available to testers, as the product
manufacturer never shares them with any customer.
 Verification and validation records prepared during the SDLC are very important for
system testing and acceptance testing.

BSc.(Information Technology)
(Semester VI)
2018-19

Software Quality
Assurance
(USIT 601 Core)
University Paper Solution

By
Ms. Seema Vishwakarma

Ms. Seema Vishwakarma Page 1


Question 1

Q1 a. What is quality? Explain its core components.

Ans: Quality means different things to different people at different times, different places and for
different products. It can also be defined as conformance to specification and fitness for use.
Core Component of Quality
Quality is based on Customer Satisfaction
• The effect of quality product delivered & used by a customer, on his satisfaction and
delight is the most important factor in determining whether quality has been achieved
or not.
• It talks about the ability of a product or service to satisfy a customer by fulfilling his/her
needs.
• Manufacture must understand the purpose or usage of a product and devise a quality
plan to satisfy the purpose of the product
The organisation must define Quality parameters before it can be achieved:
1. Define: Defining the product in terms of features, functionalities, attributes and
characteristics. What should be, could be and must be present in the product.
2. Measure: Quantitative measures must be defined as attributes of the quality of a
product. Measurement also gives the gap between what is expected by a customer and
what is delivered to him when the product is sold.
3. Monitor: There must be a mechanism used by the manufacturer to monitor the
development, testing and delivery processes of a product.
4. Control: Control gives the ability to produce desired results and prevent undesired
things from going to a customer.
5. Improve: Continuous improvements are necessary to maintain ongoing customer
satisfaction and overcome possible competition.
Management must lead the organization through improvement efforts
• Management should lead the endeavor of quality improvement program in the
organization by defining vision, mission, policies, objectives, goals etc. to improve the
quality improvement program.
• And the same must be followed by the employees which is called as ‘cultural change
brought in by management’.
Continuous process:
• It is an older belief that quality can be improved by more inspection, testing and rework
which leads to the increase in the cost of inspection, segregation, failure and reduces
the profit margin for manufacturer.
• For improving the quality and to have a win-win situation for both manufacturer &
customer, quality must be produced at the first time and must be improved
continuously

Q1 b. Differentiate between tools and techniques.

Ans:

Tools | Techniques
A tool is of no use unless a technique is available. | A technique is independent of any tool.
Different techniques may use the same tool to get different results. | The same technique may use different tools to get the same result.
Tool improvement needs technological change. | Technique change can be effected through procedural change.
Contribution of a tool to improvement is limited. | Contribution of a technique to improvement is important.

Q1 c. Explain continual (continuous) improvement cycle.

Ans: Continual (continuous) improvement cycle, also known as the Plan, Do, Check, Act (PDCA) cycle,
is based on a systematic sequence of Plan, Do, Check and Act activities representing a
never-ending cycle of improvements.

PLAN: An organization must plan for an improvement on the basis of its vision and mission
definition. Planning includes answering questions such as who, when, where, how, etc. about
various activities. Quality planning at unit level must be in sync with quality planning at
organization level.

DO: An organization must work in the direction set by the plan. Actual execution of the plan
determines whether the expected results are achieved or not. The plan sets the tone while
execution makes the plan work. The ‘Do’ process needs inputs such as hardware, software and
training.

CHECK: An organization must compare the actual outcome of the ‘Do’ stage with the reference or
expected results, which are the planned outcomes. This must be done periodically to check
whether the progress is in the right direction and whether the plan is right or not.

ACT: If any deviations are observed in actual outcomes with respect to planned results, the
organization must decide on actions to correct the situation. One may need to implement
corrective and preventive actions as per the outcome of ‘Check’.

Q1 d. List and explain any five requirements of a product.

Ans:
Must/Must not requirement / Primary requirement
Must requirements are primary requirements which the customer is going to pay for while
acquiring a product. These are essential requirements, and the value of the product is
decided based on the accomplishment of must requirements. They have the highest priority
and are denoted by P1.
Should/Should not requirement / Secondary requirement
These are requirements that are appreciated by the customer if present/absent in the
product. The customer may pay extra for the satisfaction of these requirements. These
requirements give customer delight and are at a lower priority. They are denoted by P2.

Could/Could not requirement / Tertiary requirement
These requirements may add a competitive advantage to the product but may not add value
in terms of the price paid by a customer. They have the lowest priority and are denoted by
P3.
Generic/Specific requirement
Some requirements are generic in nature and are accepted for a type of product for all the
users while some others are specific to the product. Eg Additon of two numbers should be
correct is generic requirement while the accuracy of 8 digits after decimal. Usability is generic
requirement while authentication to users may be driven by specific requirement

Present/Future requirement
Present requirements are essential when an application is used in present circumstances while
future requirements are for future needs which will be needed after some time span. Definition
of future has a direct relationship with usable life of an application.

Q1 e. Explain types of products based on criticality to the users?

Ans: Life-affecting products

Products that affect the life of an individual, directly or indirectly, are considered the most critical products by users. The quality requirements are very stringent and testing is very critical, as failure may result in loss of life or disablement of the user. They are further grouped into five categories:
1. Any product failure resulting in death of a person
2. Any product failure resulting in permanent disablement of a person
3. Any product failure resulting in temporary disablement of a person
4. Any product failure resulting in minor injury to a person
5. Other products which do not affect health or safety directly

Products affecting huge sums of money

Second in the list of criticality are products having a direct relationship with the potential loss of huge sums of money. Such products may need large testing efforts and have many regulatory as well as statutory requirements. Security, confidentiality, and accuracy are some of the important quality factors for such products.

Products that can be tested only by a simulator

Products which cannot be tested in a real-life scenario but need a simulated environment for testing are third in the ranking of criticality. Products used in aeronautics, space research, etc. fall into this category.

Q1 f. List and explain any five quality principles of Total Quality Management.

Ans: Quality principles of Total Quality Management (TQM)


Develop constancy of purpose in the definition and deployment of various initiatives:
Management must create constancy of purpose for products and processes, allocating resources adequately to provide for long-term needs rather than concentrating on short-term profitability. Decisions taken by management should be consistent.

Adapting to the new philosophy of managing people:

Management should adopt new philosophies of doing work and getting work done from its people and suppliers. Skills make an individual indispensable. Transformation of management style to total quality management is necessary to take the business on the path of continual improvement.

Declare freedom from mass inspection of produced output:

Mass inspection results in huge cost overruns, and the product produced is of inferior quality. Improving the quality of the product needs setting up the right development processes, measurement of process capabilities, and statistical evidence of built-in quality in all departments.

Stop awarding lowest-price-tag contracts to suppliers:

Vendor selection must be done on the basis of total cost, including price, rejections, etc. The organization must measure the quality of supply along with price, and do source selection on the basis of the final cost it pays in terms of procurement, rework, maintenance, operations, etc.

Drive out fear of failure from employees:

An organization must encourage effective two-way communication and other means of driving out fear. Employees can work effectively and more productively to achieve better quality output when there is no fear of failure.
Question 2

Q2 a. Explain salient features of good testing

Ans: Good testing involves the following steps:

Capture user requirements
Intended requirements defined by the users or customer, as well as implied requirements, are to be analyzed and documented by testers so that they can write test scenarios and test cases for these requirements.
Capturing user needs
User needs include present, future, process, and implied requirements. Elicitation of requirements is to be done by the development organization to understand and interpret the requirements correctly.
Design objectives
Design objectives state why a particular approach has been selected for building the software. Functional and user-interface requirements are among the requirements addressed in the software design, along with how they will be achieved.
User interface
This is the way the user interacts with the system, including the screens, displays, and reports generated by the system. The user interface should be simple, so that the user can understand what he is supposed to do and what the system is doing.
Internal structures
Internal structures are mainly guided by the software design or the standards used for designing and development. This also covers reusability.
Execution of gray box testing of code
Testing ensures that the software works as intended by the customer and is protected from probable misuse or risk of failure. Only execution can prove that application modules and programs work correctly.
Q2 b. Differentiate between verification and validation.

Ans: Verification: It is a disciplined approach to evaluate whether a software product fulfils the requirements or conditions imposed on it by standards or processes. It is done to ensure that the processes and procedures defined by the customer and/or the organisation for development and testing are followed.



Following are the techniques of verification:
1. Self-Review: It may not be considered an official way of review, because it assumes that everybody does a self-check before giving a work product for further verification.
2. Peer Review: It is the most informal type of review, where an author and a peer are involved. Review records are maintained.
3. Walkthrough: It is a semi-formal type of review, as it involves larger teams along with the author reviewing a work product.
4. Inspection: It is a formal review where people external to the team may be involved as inspectors. They are the 'subject matter experts' who review the work product.
5. Audit: It is a formal review based on samples. Audits are conducted by auditors who may or may not be experts in the given work product.

Validation: It is used to evaluate whether the final built software product fulfils its specific
intended use. It is also called as ‘Dynamic Testing’. It must be done by the independent users,
functional experts, and black box testers to ensure independence of testing from development
activities. It helps in analysing whether the software product meets the requirements as specified
in requirement statement.

Following are the levels of validation:

1. Unit Testing
2. Integration Testing
3. Interface Testing
4. System Testing
5. Cause & Effect Graphing
6. Path Expression & Regular Expression

Q2 c. List and explain any two approaches of software testing team with its advantages
and disadvantages.

Ans: Following are two approaches to organizing software testing:

• Independent testing team
An organization may create a separate testing team with independent responsibility for testing. The team would have people with sufficient knowledge and ability to test the software.
Advantages:
A separate test team can concentrate more on test planning, test strategies, and approaches. There is an independent view of the product.
Disadvantages:
A separate team means additional cost for the organization. Testing teams need ramping up and knowledge transfer. The organization needs to keep a check on both the development and testing teams.

• Domain expert doing software testing

An organization may employ domain experts for testing. Domain experts use their subject-matter expertise in performing such testing.

Advantages:
The focus is on fitness for use. Domain experts can give developers insight into defects and customer expectations. A domain expert understands the scenarios faced by actual users, and hence their testing is realistic.



Q2 d. What is a test strategy? Explain the different stages involved in the process of developing a test strategy.

Ans: A test strategy defines the project's testing objectives and the means to achieve them. The test strategy therefore determines testing effort and costs. Selecting an appropriate test strategy is one of the most important planning decisions the test manager makes. The goal is to choose a test approach that optimizes the relation between the cost of testing and the cost of defects.
Steps involved in developing test strategies

Select and rank test factors for the given application

Identify the critical test factors for the software product under test. Test factors are analysed and prioritized. Trade-off decisions must be taken in consultation with the customer.

Identify system development phases and related test factors

The critical success factors have varying importance depending upon the development life cycle phase. The test approach changes according to the factors influencing each life cycle phase.

Identify the risks associated with each selected test factor in case it is not achieved

Customers must make trade-offs between test factors and the possible risks of not meeting them.

Identify the phase in which the risk of not meeting a test factor needs to be addressed

Q2 e. Explain gray box testing with its advantages and disadvantages

Ans: Gray box testing


• Gray box testing is done on the basis of the internal structures of software as defined by requirements, design, coding standards and guidelines, as well as functional and non-functional specifications.
• Gray box testing combines verification techniques with validation techniques, so one can ensure that the software is both built correctly and works correctly.
• Gray box testing uses a combination of the black box and white box testing approaches at the same time.
Advantages of gray box testing:
It checks whether the work product works in the correct manner, both functionally as well as structurally.
Disadvantages of gray box testing:
Knowledge of some automation tools, along with their configuration, is essential for performing gray box testing.

Q2 f. List and explain different testing skills required by tester.


Ans:
• Testers must be selected on the basis of available skills and the requirements of the tools and techniques to be used.
• If the testers are conversant with a tool, it can ease the entry of a new tool into the organization. User skills are required for all tools used by the testers.
• Programming skills are required when testers have to develop scripts for using the testing tools. Programming skills are needed to write scripts in a specific language.
• System skills are required when an application tool needs some configuration to be made, e.g., working with databases or configuring and installing new tools.
• Technical skills are needed to understand the prerequisites of tools and testing. Understanding of user manuals, design, and requirements is needed for usage and troubleshooting.

Question 3
Q3 a. What are cause-effect graphs? Explain with the help of an example.
Ans: Cause – effect graph
• Cause-and-effect graphing is a tool used to identify possible causes of a problem by representing the relationship between an effect and its possible causes. Also known as the "fishbone" diagram, this method can be used in brainstorming.
• A cause-and-effect graph shows unit inputs on the left side of the drawing and uses AND, OR, and NOT "gates" to express the flow of data across the stages of a unit.
• A cause-and-effect graph can be constructed this way for, e.g., the classic commission problem, mapping each input condition through gates to its resulting effect.
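As a minimal sketch of how causes combine through logic gates into an effect, consider the following hypothetical business rule (it is an illustration, not the commission problem itself):

```python
# Causes: member, large_order, coupon.
# Effect: discount_applies = (member AND large_order) OR coupon
# An AND gate and an OR gate feed a single effect node.
def discount_applies(member, large_order, coupon):
    return (member and large_order) or coupon

# One test case per cause combination that changes the effect:
print(discount_applies(True, True, False))   # True
print(discount_applies(True, False, False))  # False
print(discount_applies(False, False, True))  # True
```

Each row of such a truth table corresponds to one test case derived from the graph.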

Q3 b. Define equivalence class. Explain systematic approaches for selecting equivalence classes.


Ans:
• Equivalence class testing is based on creating partitions. It removes the redundancy and gaps that appear in boundary value analysis. Input or output data is grouped or partitioned into sets of data that are expected to behave similarly, using an equivalence relation.
• An equivalence relation describes how data is going to be processed when it enters a
function.
• The equivalence class testing requires to test only one condition from each partition.
This is because all the conditions in one partition will be treated in the same way by
the software.
• If one condition in a partition works, then all the conditions in that partition will work,
and so there is little point in testing any of these others.
• Conversely, if one of the conditions in a partition does not work, then we assume that
none of the conditions in that partition will work so again there is little point in testing
any more in that partition
• Consider a function of two variables x1 and x2 having the following boundaries and intervals within the boundaries:
• a ≤ x1 ≤ d, with intervals [a,b), [b,c), [c,d]
• e ≤ x2 ≤ g, with intervals [e,f), [f,g]
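The partitioning idea above can be sketched by picking one representative test value per class. The concrete boundaries 0, 10, 20, 30 below are assumed purely for illustration:

```python
# Hypothetical partitions for a variable x1: [0,10), [10,20), [20,30)
partitions_x1 = [(0, 10), (10, 20), (20, 30)]

def representatives(partitions):
    # One value per equivalence class suffices, since all values in a
    # class are expected to be treated the same way by the software.
    return [(lo + hi) // 2 for lo, hi in partitions]

print(representatives(partitions_x1))  # [5, 15, 25]
```

Three test cases cover what would otherwise be thirty individual input values.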

Q3 c. What is boundary value testing? Explain robust boundary value testing.


Ans: Boundary Value Testing (BVA)
BVA depends on the observation that errors tend to occur near the extremities of the input variables; it is based on testing at the boundaries between partitions.
Range checking is an example of using the boundary value analysis technique. BVA concentrates on the boundary of the input space to identify test cases. Most programs can be viewed as a function F. The input variables of F will have boundaries, where [a,b] and [c,d] are the ranges of x1 and x2 respectively:
a ≤ x1 ≤ b
c ≤ x2 ≤ d

Robust Boundary Value Testing

Robustness testing can be seen as an extension of boundary value analysis. The idea behind robustness testing is to test variables that lie in the legitimate input range as well as values that fall just outside this input domain. In addition to the five standard testing values (min, min+, nom, max-, max), two more values (min-, max+) are added for each variable, chosen to fall just outside the input range.
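The seven robust test values per variable can be generated mechanically; a minimal sketch, assuming an integer-valued range:

```python
def robust_bva_values(lo, hi):
    """Return min-, min, min+, nominal, max-, max, max+ for range [lo, hi]."""
    nom = (lo + hi) // 2  # any in-range value works as the nominal
    return [lo - 1, lo, lo + 1, nom, hi - 1, hi, hi + 1]

# For a variable with the (hypothetical) range 1..100:
print(robust_bva_values(1, 100))  # [0, 1, 2, 50, 99, 100, 101]
```

The two out-of-range values (0 and 101 here) check how gracefully the program rejects invalid input.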

Q3 d. Explain slice-based testing with an example.


Ans: Slice-based testing
Definition:
Given a program P and a set V of variables in P, a slice on the variable set V at statement n, written S(V, n), is the set of all statement fragments in P that contribute to the values of the variables in V at node n.
P-use: used in a predicate (decision)
C-use: used in a computation
O-use: used for output
L-use: used for location (pointers)
I-use: used for iteration
Example (the slices below refer to a small program listing, not reproduced here, in which line 5 defines the variable price, line 6 is a loop condition, and line 8 redefines price inside the loop):
• S(price, 5) = {5}
• S(price, 6) = {5, 6, 8, 9}
• S(price, 7) = {5, 6, 8, 9}
• S(price, 8) = {8}
• Lines 1 to 4 have no bearing on the value of the variable at line 7 (and, for that matter, on any other variable at any point), so they are not added to the slice.
• Line 5 contains a defining node of the variable price that can affect the value at line 7, so 5 is added to the slice.
• Line 6 can affect the value of the variable, as it can affect the flow of control of the program; therefore, 6 is added to the slice.
• Line 7 itself is not added to the slice, as it cannot affect the value of the variable at line 7 in any way.
• Line 8 is added to the slice, even though it comes after line 7 in the program listing. This is because of the loop: after the first iteration, line 8 executes before control reaches line 7 again, so it can affect the value of price there.
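As a hedged illustration of the definition S(V, n), here is a tiny static backward slicer for straight-line code. It handles only simple assignments, with no loops or branches, so it cannot reproduce the loop-carried slice in the example above; it only shows the def-use chasing at the core of slicing:

```python
import ast

def backward_slice(lines, var, n):
    """Return the set of 1-based line numbers contributing to `var` at line n.

    `lines` is a list of source lines, each a simple statement.
    """
    result = set()
    wanted = {var}
    for i in range(n, 0, -1):  # walk backward from the slicing criterion
        stmt = ast.parse(lines[i - 1]).body[0]
        if isinstance(stmt, ast.Assign):
            targets = {t.id for t in stmt.targets if isinstance(t, ast.Name)}
            if targets & wanted:
                result.add(i)
                wanted -= targets
                # Every variable read on the right-hand side is now wanted too.
                wanted |= {node.id for node in ast.walk(stmt.value)
                           if isinstance(node, ast.Name)}
    return result

prog = ["a = 5", "b = 2", "c = a + b", "d = a * 2"]
print(sorted(backward_slice(prog, "c", 3)))  # [1, 2, 3]
```

Line 4 (`d = a * 2`) is correctly excluded: it cannot affect the value of `c` at line 3.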



Q3 e. Explain DD-paths and basis path testing.
Ans: DD-path testing
• The best-known form of structural testing is based on a construct known as the decision-to-decision path.
• Program graphs play such an important role in structural testing because they form the basis of a number of testing methods, including one based on decision-to-decision paths (DD-Paths).
• The idea is to use DD-Paths to create a condensation graph of a piece of software's program graph, in which a number of constructs are collapsed into single nodes known as DD-Paths.
• DD-Paths are chains of nodes in a directed graph that adhere to certain definitions.
• Each chain can be broken down into a different type of DD-Path, the result being a graph of DD-Paths. The length of a chain corresponds to the number of edges that the chain contains.

Basis Path Testing

• A basis is always defined in terms of a vector space: a set of elements together with operations corresponding to multiplication and addition defined for the vectors.
• The basis of a vector space contains a set of vectors that are independent of one another and have a spanning property; this means that everything within the vector space can be expressed in terms of the elements of the basis. Applied to a program graph, a set of basis paths spans all possible execution paths.
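One standard way to count basis paths is McCabe's cyclomatic complexity, V(G) = E - N + 2P; a sketch on a small hypothetical program graph:

```python
# Program graph of a simple if/else: node 2 branches to nodes 3 and 4,
# which both rejoin at node 5.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]
nodes = {n for edge in edges for n in edge}

P = 1  # number of connected components
v_of_g = len(edges) - len(nodes) + 2 * P  # V(G) = E - N + 2P
print(v_of_g)  # 2 -> two basis paths, one per branch of the if/else
```

Every execution path of the if/else can be expressed as a combination of those two basis paths, mirroring the spanning property described above.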

Q3 f. Write a note on decision table technique


Ans: Decision Table Technique
• A decision table is a good way to deal with combinations of things (e.g. inputs).
• This technique is sometimes also referred to as a 'cause-effect' table. If different
combinations of inputs result in different actions being taken, this can be more difficult
to show using equivalence partitioning and boundary value analysis, which tend to be
more focused on the user interface.

Advantages
• Decision tables can be used in test design whether or not they are used in
specifications, as they help testers explore the effects of combinations of different
inputs and other software states that must correctly implement business rules
• Helping the developers do a better job can also lead to better relationships with them
• Decision tables aid the systematic selection of effective test cases and can have the
beneficial side-effect of finding problems and ambiguities in the specification.
• It is a technique that works well in conjunction with equivalence partitioning
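A decision table maps each combination of conditions to an action; a minimal executable sketch (the login rule below is hypothetical):

```python
# Conditions: (valid_user, valid_password) -> Action
DECISION_TABLE = {
    (True,  True):  "grant access",
    (True,  False): "show error",
    (False, True):  "show error",
    (False, False): "show error",
}

def decide(valid_user, valid_password):
    # Every combination of inputs is covered by exactly one rule,
    # so the table doubles as a complete set of test cases.
    return DECISION_TABLE[(valid_user, valid_password)]

print(decide(True, True))    # grant access
print(decide(False, False))  # show error
```

Writing the table out like this often exposes missing or contradictory rules in the specification before any test is run.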

Question 4
Q4 a. Explain the concept of workbench.
Ans: The following basic things are required for a workbench:

Input: There must be entry criteria checked when inputs enter the workbench, and they should match the output criteria of the earlier workbench.

Output: There must be exit criteria from the workbench, which should match the input criteria of the next workbench. Output must include review comments.

Verification process: It must describe the step-by-step activities to be conducted in each workbench.

Check process: It describes how the verification process has been checked. The quality plan must describe the objectives to be achieved.

Standard tools and guidelines: There may be tools, coding guidelines, or standards for verification.

Q4 b. List all the methods of verification. Explain all.


Ans: Self Review: It is not an official way of review. One must capture self-review records and the defects found in self review to improve the process. It is a kind of self-learning and defect-prevention method.
Peer Review: It is the most informal type of review, where an author and a peer are involved. The review is done by the peer, and review records are maintained. The organization defines a checklist for doing peer review.
Walkthrough: It is a semi-formal review, more formal than a peer review but less formal than an inspection; only related people are involved. Some members of a project team examine an artifact under review.
Inspection: It is a very formal way of reviewing the product. A presenter presents the work product, and a recorder makes notes of the comments given by the inspectors.

Q4 c. Discuss different types of reviews in verification.


Ans:
In-process review: Reviews conducted during different phases of the software development life cycle. They are intended to check whether the inputs to the phase are correct and whether all applicable processes are followed.

Milestone review: It is conducted on a periodic basis, depending on the completion of a particular phase, a defined time frame, or a milestone achieved. These reviews confirm that the output from the phase matches the predefined quality.

Phase-end review: It is conducted at the end of the development phase under review, such as requirements, design, coding, or testing. The waterfall life cycle is suitable for phase-end reviews, where distinctions between different phases are clearly defined.

Periodic review: In reality, when one phase ends, another phase may already be halfway through. In such cases one must conduct reviews on a periodic basis, such as weekly, monthly, or quarterly.

Percent-completion review: It is a combination of a periodic review and a phase-end review, where the project or product development activities are assessed on the basis of percent completion.



Q4 d. Explain the V model for software.

Ans: V model
• The test policy and test strategy for performing verification and validation activities are documented beforehand to avoid any problem in the final deliverables.
• Testing activities are referred to in the project plan, but they are detailed in the quality plan for verification.
• The plan of activities should decide the 5 Ws (What, When, Where, Why, and Who) and H (How) with respect to people, process, training, etc.
• The activities prepared and documented should be analyzed for coverage, relationships with different entities, structure, and traceability.
• Functional test scenarios and structural test scenarios are developed from the design specification.
• High-level and low-level design must ensure that the requirements are completely covered, so that software development covers everything.
• The output of one phase must match the input criteria of the next phase.
• The test artifacts that are developed must be reviewed and updated.

Q4 e. Describe V & V activities during design.

Ans: V & V during design

Design may include high-level architectural design and low-level (detailed) design, which are created by an architect.

Design verification: Verification of a design may be a walkthrough of the design document by design experts, team members, and stakeholders of the project. The project team, along with the architect, may walk through the design to check the completeness of the given component. Specific tools or methodologies like UML are used to create the design.

Design validation: Validation of the design can happen at two or more stages during the software development life cycle. The first stage of validation happens when a data flow diagram is created by referring to the design document; if the flow of data is complete, the design is considered complete. The second stage happens at integration testing and interface testing.

Q4 f. Explain the different roles and responsibilities of the development group.

Ans:

Manager :
The development manager selects the objects to be reviewed and confirms that the base
documents, as well as the necessary resources, are available. They also choose the participating
people.

Moderator
The moderator is responsible for: the administrative tasks pertaining to the review, planning
and preparation, ensuring that the review is conducted in an orderly manner and meets its
objectives, collecting review data, and issuing the review report.

Author
The author is the creator of the document that is the subject of a review. If several people have
been involved in the creation, one person with lead responsibility should be appointed; this
person takes over the role of the author.

Reviewer
The reviewers, sometimes also called inspectors, are several (usually a maximum of five)
technical experts that shall check the review object after individual preparation.

Recorder
The recorder (or scribe) shall document the findings (problems, action items, decisions, and
recommendations) made by the review team. The recorder must be able to record in a short
and precise way, capturing the essence of the discussion.

Question 5

Q5a. Explain the characteristic of design testing.

Ans: Clarity: A design must define all functions, components, tables, stored procedures, and reusable components very clearly.

Complete: It must define the parameters to be passed/received, the formats of data handled, etc.

Traceable: A design must be traceable to requirements. The project manager must check whether there is any requirement which does not have a corresponding design.

Implementable: A design must be made in such a way that it can be implemented easily with the selected technology and system.

Testable: Testers make structural test cases on the basis of the design. Thus, a good design must help in creating structural test cases.

Q5b. Discuss bottom-up and top-down testing with an example.

Ans: Top Down Testing:


• In top-down testing approach, the top level of the application is tested first and then
it goes downward till it reaches the final component of the system. All top-level
components called by tested components are combined one by one and tested in the
process.
• Drivers may not be required as we go downward as earlier phase will act as driver for
latter phase while one may have to design stubs to take care of lower-level components
which are not available at that time.
• Top-level components are the user interfaces which are created first to elicit user
requirements or creation of prototype. Agile approaches like prototyping, formal proof
of concept, and test-driven development use this approach for testing.
Bottom-up testing
• It focuses on testing the bottom-level individual units and modules first, and then goes upward by integrating tested and working units and modules toward system testing.
• It is a mirror image of the top-down approach, with the difference that stubs are replaced by driver modules that emulate units at the next level up in the tree. In bottom-up integration, we start with the leaves of the decomposition tree and test them with specially coded drivers.
• Each component and unit is tested first for its correctness. Only if it is found to be working correctly does it go for further integration. This makes the system more robust, since individual units are tested and confirmed as working.
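The helper artifacts of the two approaches can be sketched as follows (all names are hypothetical): top-down testing substitutes a stub for a lower-level unit that does not exist yet, while bottom-up testing uses a driver to exercise a low-level unit whose real caller is not yet tested.

```python
# Top-down: the high-level report() is tested before the real
# fetch_data() exists, by injecting a stub in its place.
def fetch_data_stub():
    return [1, 2, 3]  # canned answer standing in for the real module

def report(fetch=fetch_data_stub):
    return "total=%d" % sum(fetch())

# Bottom-up: total() is the low-level unit; the driver emulates
# its (not yet integrated) caller.
def total(values):
    return sum(values)

def driver():
    assert total([1, 2, 3]) == 6
    assert total([]) == 0
    return "driver passed"

print(report())  # total=6
print(driver())  # driver passed
```

When the real `fetch_data` arrives, it simply replaces the stub; when `total`'s real caller is ready, the driver is discarded.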

Q5c. What is acceptance testing? Explain its different forms.

Ans: The focus of acceptance testing is on the customer's perspective and judgment. The acceptance test might be the only test the customer is actually involved in, or the only one they can understand. Acceptance tests can even be executed within lower test levels or distributed over several test levels.

Typical forms of acceptance testing include the following:


1. Testing to determine if the contract has been met: If customer specific software was
developed, the customer (in cooperation with the vendor) will perform acceptance testing
according to the contract. On the basis of the results of these acceptance tests the customer
considers whether the ordered software system is free of (major) deficiencies and whether the
development contract or the service defined by the contract has been accomplished.
2. User acceptance testing: Another aspect concerning acceptance as the last phase of
validation is the test for user acceptance. Such a test is especially recommended if the customer
and the user are different individuals.



3. Operational (acceptance) testing: Operational (acceptance) testing assures the acceptance
of the system by the system administrators. It may include the testing of backup/restore cycles,
disaster recovery, user management, maintenance tasks, and checks of security vulnerabilities.
4. Field test (alpha and beta testing): The objective of the field test is to identify influences
from users' environments that are not entirely known or that are not specified, and to eliminate
them if necessary

Q5d. Explain GUI testing with its advantages and disadvantages.

Ans: GUI Testing


GUI testing is defined as the process of testing the system's Graphical User Interface of the
Application Under Test. GUI testing involves checking the screens with the controls like menus,
buttons, icons, and all types of bars - toolbar, menu bar, dialog boxes, and windows, etc.
GUI is what the user sees; the user does not see the source code, only the interface. The focus is especially on the design structure and on whether images and other visual elements are working properly.
Advantages:
• Tests the user interface from the user's perspective.
• Efficiently reduces the number of risks towards the end of development life cycle.
• Offers developers and testers ease of use and learning.
• Helps validate the compliance of various icons and elements with their design
specifications.
• Increases the reliability and improves quality of the product.
Disadvantages
• It requires more memory resources, which leads the system to perform slowly.
• The process of testing is time consuming and may require extra software for running
GUIs.
• Since the interface of an application changes frequently, the team might have to
refactor recorded test script to improve its accuracy.
• Limited access or no access to the source code makes the process of testing difficult.

Q5e. Write a short note on smoke testing.

Ans: Smoke Testing


• Smoke testing, also known as "build verification testing", is a type of software testing that comprises a non-exhaustive set of tests.
• It aims at ensuring that the most important functions work. The result of this testing is used to decide whether a build is stable enough to proceed with further testing.
• Installation, navigation through the application, and invoking or accessing some major functionalities are involved in this testing.
• If smoke testing fails, the user will not be able to work with the application, and it may result in rejection of the build.
• The test manager and senior testers perform smoke testing, and the entry criteria for further testing depend on its outcome.
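A minimal build-verification script might look like the following. The checks here are hypothetical stand-ins; a real one would import the application and exercise its critical paths:

```python
import sys

def check_startup():
    # Stand-in for "does the application even start?"
    return True

def check_core_feature():
    # Stand-in for invoking one major functionality.
    return 1 + 1 == 2

CHECKS = [check_startup, check_core_feature]

def run_smoke():
    for check in CHECKS:
        if not check():
            print("SMOKE FAIL:", check.__name__)
            sys.exit(1)  # build rejected; no deeper testing attempted
    return "smoke passed"

print(run_smoke())
```

The key property is failing fast: a single failed check rejects the build before any expensive test cycle begins.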

Q5 f. Explain compatibility testing in details.

Ans: Compatibility Testing


• Compatibility testing means testing the software on multiple configurations to check the behavior of different system components and their combinations.
• The variables can be operating systems, browsers, databases, and languages.
• The hardware can be machines, servers, routers, and printers.
• Integration with other communication systems can involve mailing and messaging software.
• Friend compatibility: occurs when the application behaves on a new platform as if it were working on its base platform. Cost-benefit analysis has to be done to determine how much friend compatibility is required.
• Neutral compatibility: the application has its own utilities and services and uses them as if nothing has been provided by the platform.
• Enemy compatibility: if the application is not compatible with the targeted platform, this may be termed enemy compatibility.
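The configuration-matrix idea can be sketched as running one suite over many environments (the entries below are illustrative, not a real matrix):

```python
# Hypothetical configuration matrix: the same checks run under each entry.
CONFIGS = [
    {"os": "linux",   "browser": "firefox"},
    {"os": "windows", "browser": "chrome"},
    {"os": "macos",   "browser": "safari"},
]

def run_suite(config):
    # A real harness would launch the application under `config`;
    # here we only record which environment was exercised.
    return "ran suite on %(os)s/%(browser)s" % config

results = [run_suite(c) for c in CONFIGS]
print(len(results))  # 3 environments covered
```

Adding a new operating system or browser then means adding one row to the matrix, not writing new tests.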

-----------------------------x---------------------------



BSc.(Information Technology)
(Semester VI)
2019-20

Software Quality
Assurance
(USIT 601 Core)
University Paper Solution

By
Ms. Snehal Tandale

Ms. Snehal Tandale Page 1


Question 1

Q1a. Define the term Quality and elaborate different views on quality.

Ans: To some users, a quality product may be one which has few or no defects, works exactly as expected, and matches their concept of cost and delivery schedule along with the services offered. "Quality is fitness for use."

Quality is also defined as "conformance to specifications": even a very small change to the design matters, and the product must be made exactly as per the design, so that it best suits the user's other expectations like lower cost, fast delivery, and good service support.

The definitions of quality from different perceptions are as follow: -

1) Customer-Based definition of Quality- A quality product must have “fitness for use”
and must meet customer needs and expectations, helping to achieve customer
satisfaction and possibly customer delight. Any product can be considered a quality
product if it satisfies its purpose of existence.
2) Manufacturing- Based definition of Quality-This definition is mainly derived from
engineering product manufacturing where it is not expected that the customer knows
all the requirements of the product, and many product requirements are defined by
architects and designers on the basis of customer feedback/survey. Market research
may have to generate requirement statement on the basis of perception of probable
customers about what features and characteristics of a product are expected by the
market. This approach gives the definition of “Conformance to requirements”.
3) Product- Based definition of Quality- The product must have something that other
similar products do not have. These attributes must add value for the customer/user
so that they can appreciate the product in comparison to competing products. The
product must be distinguishable from similar products in the market.
4) Value- Based definition of Quality- A product is the best combination of price and
features or attributes expected by or required by the customers. The customer must
get value for his investment by buying the product. The cost of the product has direct
relationship with the value that the customer finds in it. More value for the customer
helps in better appreciation of a product. Many times it is claimed that “People do
not buy products, they buy benefits”.
5) Transcendent Quality- To many users/customers, it is not clear what is meant by a
“quality product”, but as per their perception it is something good, and they want to
purchase it because of some quality they perceive in the product.

Q1b. Explain the lifecycle of quality improvements.

Ans: Quality improvement includes the following steps: -

a) Identifying areas in which quality can be improved, depending upon process
capability measurements and organisational priorities. The organisation must
prioritise the improvements depending upon the resources available, the
efforts/investments required, and the benefits derived from such improvements.

b) Improving the quality of the processes of development, testing, management, etc. is
team work led by management directives. Improvements in processes automatically
improve the product and customer satisfaction.

c) Setting measurable goals in each area of an organisation can help in improving
processes at all levels. Goals may be set with reference to customer expectations or
something which may give a competitive advantage to the organisation in the market.

d) Giving recognition to achievers of quality goals will boost their morale and set a
positive competition among the teams leading to organisational improvements. This
may lead to dramatic improvements in all areas.

e) Repeating the quality improvement cycle continuously, by stretching goals further for
the next phase of improvements, is required to maintain and improve the status further.
The organisation must evaluate the goals to be achieved in the short term, the long
term, and a combination of both to realise the organisational vision.

Q1c. What are the quality principles of Total Quality Management(TQM)?

Ans: Following are the quality principles of Total Quality Management(TQM): -

i. Develop constancy of purpose in the definition and deployment of various initiatives.

ii. Adopt a new philosophy of managing people/stakeholders by building
confidence and relationships.

iii. Declare freedom from mass inspection of incoming/produced output.

iv. Stop awarding of lowest price tag contracts to suppliers.

v. Improve every process used for development and testing of products.

vi. Institutionalize training across the organization for all people.

vii. Institutionalize leadership throughout organization at each level.

viii. Drive out fear of failure from employees.

ix. Break down barriers between functions/departments.

x. Eliminate exhortations by numbers, goals, targets.

xi. Eliminate arbitrary numerical targets which are not supported by processes.

xii. Permit pride of workmanship for employees.

xiii. Encourage education of new skills and techniques.



xiv. Top management commitment and action to improve continually.

Q1d. Explain the structure of quality management system.

Ans:

Generic view of quality management includes three tiers and three pillars.

Following are the three tiers: -

1) 1st Tier (Quality Policy)


Quality policy sets the wish, intent and directions by the management about how
activities will be conducted by the organization.

2) 2nd Tier (Quality Objective)


Quality objectives are the measurements established by the management to define
progress and achievements in a numerical way.

3) 3rd Tier (Quality Manual)


Quality manual also termed as policy manual is established and published by the
management of the organization.

Following are the three pillars: -

1) Quality processes/ Quality procedures/ Work Instructions


They are defined at an organization level by the functional area experts, and at
project and function level by the experts in those areas separately.

2) Guidelines and Standards



They are used by an organization’s project team for achieving quality goals for the
products and the services delivered to customers.

3) Format and templates


They are used for tracking a project, function, and department information within an
organization.

Q1e. How the quality and productivity are related with each other?

Ans: Quality improvement does not concern product quality alone but also the quality of the
process used for making the product. If the processes of development and testing are good, a
bad product will not be manufactured in the first place. This reduces inspection, testing,
rework and cost/price. Thus quality improvement raises productivity by reducing wastage.

Following shows how quality and productivity are related to each other:-

1) Improvement in quality directly leads to improved productivity.


2) The hidden factory producing scrap, rework, sorting, repair and customer complaint is
closed.
3) Quality improvements lead to cost reduction.
4) Employee involvement in quality improvement.
5) Proper communication between management and employee is essential.
6) Employees participate and contribute in improvement process.
7) Employees share responsibility for innovation and quality improvement.

Q1f. Write a short note on continual improvement cycle.

Ans: Continual (continuous) improvement cycle is based on systematic sequence of Plan-Do-


Check-Act activities representing a never ending cycle of improvements.

Stages of Continual (Continuous) improvement through PDCA are: -

1) Plan: - An organization must plan for improvements on the basis of its vision and
mission definition. Planning includes answering all questions like who, when, where,
why, what, how etc.

2) Do: - A plan is not everything but a roadmap. Actual execution of the plan determines
whether the expected results are achieved or not. The Do stage needs inputs like
resources, hardware, software, training, etc. for execution of the plan.

3) Check: - An organisation must compare the actual outcome of the Do stage with the
reference or expected results, which are the planned outcomes. This must be done
periodically to assess whether progress is in the proper direction and whether the
plan is right or not.

4) Act: - If any deviations (positive or negative) are observed in actual outcome with
respect to planned results, the organization may need to decide actions to correct the
situations.

Question 2

Q2a. Explain the lifecycle of software testing.

Ans: Except for small programs, systems should not be tested as a single, monolithic unit.
Large systems are built out of sub-systems that are built out of modules, which are
composed of procedures and functions. The testing process should therefore proceed in
stages where testing is carried out incrementally in conjunction with system implementation.
The most widely used process consists of five stages:

1. Unit Testing: Individual components are tested to ensure that they operate correctly.
Each component is tested independently without other system components.

2. Module Testing: This involves the testing of independent components such as


procedures and functions. A module encapsulates related components so it can be
tested without other system modules.

3. Subsystem Testing: This phase involves testing collections of modules which have
been integrated into sub-systems. Sub-systems may be independently designed. The
most common problems which arise in large software systems are sub-system
interface mismatches. The sub-system test process should therefore concentrate on
the detection of interface errors by rigorously exercising the interfaces.

4. System Testing: Subsystems are integrated to make up the entire system. The
testing process is concerned with finding errors that result from unanticipated
interactions between sub-systems and system components. It is also concerned with
validating that the system meets its functional and non-functional requirements.

5. Acceptance Testing: This is the final stage in the testing process before the system is
accepted for operational use. The system is tested with data supplied by the system
procurer rather than simulated test data. Acceptance testing may reveal errors and
omissions in the system requirements definition because the real data exercises the
system in different ways from the test data. It may also reveal requirements problems
where the system's facilities do not really meet the user's needs or the system's
performance is not acceptable.
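As an illustration of the unit testing stage above, here is a minimal sketch using Python's unittest module. The function apply_discount and its test values are hypothetical, invented for this example; they are not part of the syllabus answer.

```python
import unittest

# Hypothetical unit under test: a small, independent function
def apply_discount(price, percent):
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestApplyDiscount(unittest.TestCase):
    """Unit tests exercise the component in isolation,
    without any other system components."""

    def test_typical_value(self):
        self.assertEqual(apply_discount(200, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(80, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100, 150)
```

Such tests can be run with `python -m unittest`; at the later integration and system stages the same function would be exercised only through the interfaces of the modules that use it.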



Q2b. Write a note on requirement traceability matrix.

Ans: Requirement Traceability Matrix or RTM captures all requirements proposed by the
client or software development team and their traceability in a single document delivered at
the conclusion of the life-cycle.

In other words, it is a document that maps and traces user requirement with test cases. The
main purpose of Requirement Traceability Matrix is to see that all test cases are covered so
that no functionality should miss while doing Software testing.

Following parameters should be included in requirement traceability matrix: -

a. Requirement ID
b. Requirement Type and Description
c. Test Cases with Status

Types of Traceability Test Matrix

Traceability matrix can be divided into three major components as: -

 Forward traceability: This matrix is used to check whether the project progresses in
the desired direction and for the right product. It makes sure that each requirement is
applied to the product and that each requirement is tested thoroughly. It maps
requirements to test cases.
 Backward or reverse traceability: It is used to ensure whether the current product
remains on the right track. The purpose behind this type of traceability is to verify
that we are not expanding the scope of the project by adding code, design elements,
test or other work that is not specified in the requirements. It maps test cases to
requirements.
 Bi-directional traceability (Forward Backward): This traceability matrix ensures that
all requirements are covered by test cases. It analyzes the impact of a change in
requirements affected by the Defect in a work product and vice versa.
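The traceability directions above can be sketched with a simple mapping. The requirement and test-case IDs below are hypothetical examples, not taken from any real project.

```python
# Hypothetical RTM: requirement ID -> test cases covering it
rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no test case yet: a coverage gap
}

def uncovered_requirements(rtm):
    """Forward traceability: flag requirements with no mapped test case."""
    return [req for req, tcs in rtm.items() if not tcs]

def requirements_for_test(rtm, tc_id):
    """Backward traceability: which requirements a test case maps to,
    guarding against tests that silently expand the project scope."""
    return [req for req, tcs in rtm.items() if tc_id in tcs]

print(uncovered_requirements(rtm))           # ['REQ-003']
print(requirements_for_test(rtm, "TC-101"))  # ['REQ-001']
```

Maintaining both lookups together gives the bi-directional view: every requirement maps to tests, and every test maps back to a requirement.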

Q2c. State and explain any 5 principles of software testing.

Ans: Following are the principles of software testing: -



 Define the expected output or result for each test case executed, to understand
whether the expected and actual outputs match or not.

 Developers should not test their own programs. Development teams must not test
their own products. Blindfolds cannot be removed in self testing.

 Inspect the results of each test completely and carefully. It would help in root cause
analysis and can be used to find weak processes.

 Include test cases for invalid or unexpected conditions which are feasible during
production. Testers need to protect the users from any unreasonable failure so that
one can ensure that the system works properly.
 Test the program to see if it does what it is not supposed to do, as well as what it is
supposed to do.

 Avoid disposable test cases unless the program itself is disposable. Reusability of test
case is important for regression.

 Do not plan tests assuming that no errors will be found. There must be a targeted
number of defects for testing.

 The probability of locating more errors in any one module is directly proportional to
the number of errors already found in that module.

 Initiate actions for correction, corrective action and preventive actions.

Q2d. Explain the relationship between error, defect and failure with a proper example.

Ans: Error is a human action that produces an incorrect result. It is a deviation between the
actual and the expected value. A mistake made by a programmer is known as an ‘Error’. This
could happen because of the following reasons: -
 Some confusion in understanding the requirement of the software
 Some miscalculation of the values
 Or/And Misinterpretation of any value, etc.

A Defect is a deviation from the Requirements. A Software Defect is a condition in a software


product which does not meet a software requirement (as stated in the requirement
specifications) or end-user expectations. In other words, a defect is an error in coding or
logic that causes a program to malfunction or to produce incorrect/unexpected result. This
could be hardware, software, network, performance, format, or functionality.

Failure is a deviation of the software from its intended purpose. It is the inability of a system
or a component to perform its required functions within specified performance
requirements. Failure occurs when fault executes.
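The chain can be made concrete with a small hypothetical program; the averaging function below is invented purely for illustration.

```python
# Error: the programmer misreads "average of three marks"
# and divides by 2 instead of 3 (a human mistake).
def average_marks(m1, m2, m3):
    return (m1 + m2 + m3) / 2   # Defect: the wrong divisor now sits in the code

# Failure: when the defect executes, the observed behaviour
# deviates from the intended one.
result = average_marks(60, 70, 80)
print(result)   # prints 105.0, while the user expects 70.0
```

The error (the misunderstanding) introduced a defect (the wrong divisor in the code); the defect lies dormant until the line executes, at which point the user observes a failure.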

Q2e. Discuss the challenges in software testing.

Ans: Challenges in software testing arise on different fronts. On one front, the test team
needs to tackle problems associated with the development team; on another, it has the
customer to deal with.
Management may have problems with understanding the testing approach and may
consider it an obstacle to be crossed before delivering the product to the customer.
There may be problems related to the customer, and there may be problems related to the
testing process as well as the development process.

Major challenges faced by test teams are as follows: -



 Requirements are not clear, complete, consistent, measurable and testable

 Requirements are wrongly documented and interpreted by business analyst


and system analyst

 Code logic may be difficult to capture

 Error handling may be difficult to capture

Other challenges in testing

 Badly written code introduces many defects

 Bad architecture of software cannot implement good requirement statement


 Testing is considered as a negative activity

 Testers find themselves in a lose-lose situation.

Q2f. Describe the structure of a testing team.

Ans:

A common organizational structure is one where both the test group and the development
group report to the manager of the project.
In this arrangement, the test group often has its own lead or manager whose interest and
attention is focused on the test team and their work. This independence is a great
advantage when critical decisions are made regarding the software's quality.
The test team's voice is equal to the voices of the programmers and other groups
contributing to the product.
The downside, however, is that the project manager is making the final decision on
quality. This may be fine, and in many industries and types of software, it's perfectly
acceptable.
In the development of high-risk or mission-critical systems, however, it's sometimes
beneficial to have the voice of quality heard at a higher level.



Question 3

Q3a. Explain boundary value testing and its guidelines.

Ans: Boundary value analysis is applied to see if there are any bugs at the boundaries of the
input domain. It helps in testing the values at the boundary between valid and invalid
partitions. With this technique, the boundary values are tested by creating test cases for a
particular input field.

Boundary Value Testing is also called as “Input Domain Testing” and is the best-known
specification-based testing technique.

Guidelines for boundary value testing are: -

 The test methods based on the input domain of a function are the most rudimentary
of all specification-based testing methods.
 The common assumption about Boundary Value analysis is that the input variables
are independent and when this assumption is wrong, the methods generate
unsatisfactory test cases.
 The tester should develop test cases to check that error messages are generated
when they are appropriate, and are not falsely generated.
 Boundary value analysis can also be used for internal variables, such as loop control
variables, indices and pointers.
 Robustness testing is a good choice for testing internal variables.
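The standard boundary picks can be generated mechanically. A minimal sketch follows; the 1..100 input range is a hypothetical example.

```python
def boundary_values(lo, hi):
    """Classic boundary value analysis for one variable: min, min+1,
    a nominal value, max-1 and max. The robust variant adds the
    invalid values min-1 and max+1."""
    nominal = (lo + hi) // 2
    valid = [lo, lo + 1, nominal, hi - 1, hi]
    robust = [lo - 1] + valid + [hi + 1]
    return valid, robust

# Hypothetical input field accepting values 1..100
valid, robust = boundary_values(1, 100)
print(valid)   # [1, 2, 50, 99, 100]
print(robust)  # [0, 1, 2, 50, 99, 100, 101]
```

The robust list is what drives the error-message checks mentioned above: the values just outside the range should be rejected cleanly, and the values inside should never trigger a false error.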

Q3b. Write a note on improved equivalence class testing.

Ans: The key of equivalence class testing is the choice of the equivalence classes.

Consider a function, F, of two variables x1 and x2. When F is implemented as a program, the
input variables x1 and x2 will have the following boundaries, and intervals within the
boundaries:
a ≤ x1 ≤ d, with intervals [a, b), [b, c), [c, d]
e ≤ x2 ≤ g, with intervals [e, f), [f, g]
where square brackets and parentheses denote, respectively, closed and open interval
endpoints.

The equivalence classes of valid values are: -


V1 = {x1: a ≤ x1 < b}, V2 = {x1: b ≤ x1 < c}, V3 = {x1: c ≤ x1 ≤ d}, V4 = {x2: e ≤ x2 < f },
V5 = {x2: f ≤ x2 ≤ g}

The equivalence classes of invalid values are: -


NV1 = {x1: x1 < a}, NV2 = {x1: d < x1}, NV3 = {x2: x2 < e}, NV4 = {x2: g < x2}
The equivalence classes V1, V2, V3, V4, V5, NV1, NV2, NV3, and NV4 are disjoint, and
their union is the entire plane.
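The classes for x1 above can be sketched as a classifier. The numeric endpoints a=0, b=10, c=20, d=30 below are hypothetical stand-ins for the interval boundaries.

```python
def classify_x1(x1, a, b, c, d):
    """Map a value of x1 to its equivalence class:
    x1 < a -> NV1, a <= x1 < b -> V1, b <= x1 < c -> V2,
    c <= x1 <= d -> V3, x1 > d -> NV2."""
    if x1 < a:
        return "NV1"
    if x1 < b:
        return "V1"
    if x1 < c:
        return "V2"
    if x1 <= d:
        return "V3"
    return "NV2"

# Hypothetical endpoints a=0, b=10, c=20, d=30
print([classify_x1(v, 0, 10, 20, 30) for v in (-5, 0, 10, 25, 30, 31)])
# ['NV1', 'V1', 'V2', 'V3', 'V3', 'NV2']
```

Because the classes are disjoint and their union covers the whole line, every value falls in exactly one class, which is what makes a single representative per class sufficient for test selection.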

Q3c. Describe the decision table testing technique in detail

Ans: To identify test cases with decision tables, we interpret conditions as inputs and actions
as outputs.
Examples of don’t care entries and impossible rule usage is shown in the table. If the integers
a, b, and c do not constitute a triangle, we do not even care about possible equalities, as
indicated in the first rule. In rules 3, 4, and 6, if two pairs of integers are equal, by transitivity,
the third pair must be equal; thus, the negative entry makes these rules impossible.

c1: a, b, c form a triangle?   F  T  T  T  T  T  T  T  T   (rules 1-9)
c2: a = b?                     —  T  T  T  T  F  F  F  F
c3: a = c?                     —  T  T  F  F  T  T  F  F
c4: b = c?                     —  T  F  T  F  T  F  T  F
a1: Not a triangle             X (rule 1)
a2: Scalene                    X (rule 9)
a3: Isosceles                  X (rules 5, 7, 8)
a4: Equilateral                X (rule 2)
a5: Impossible                 X (rules 3, 4, 6)
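The rules of this decision table can also be expressed as code. A minimal sketch follows; the function name is ours, invented for illustration.

```python
def triangle_type(a, b, c):
    # c1: do a, b, c form a triangle at all?
    if not (a + b > c and b + c > a and a + c > b):
        return "Not a triangle"      # action a1 (rule 1)
    eq_ab, eq_ac, eq_bc = a == b, a == c, b == c
    if eq_ab and eq_ac:              # by transitivity eq_bc holds too
        return "Equilateral"         # action a4 (rule 2)
    if eq_ab or eq_ac or eq_bc:
        return "Isosceles"           # action a3 (rules 5, 7, 8)
    return "Scalene"                 # action a2 (rule 9)

print(triangle_type(3, 4, 5))   # Scalene
print(triangle_type(2, 2, 3))   # Isosceles
print(triangle_type(1, 2, 5))   # Not a triangle
```

Note that the impossible rules 3, 4 and 6 never arise at run time, for exactly the transitivity reason stated above; the decision table makes that explicit before any code is written.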

Q3d. Write a note on DD path testing.

Ans: DD-Path is also known as a decision-to-decision path.

A DD-path is a sequence of nodes in a program graph such that: -

Case 1: It consists of a single node with indeg = 0.
Case 2: It consists of a single node with outdeg = 0.
Case 3: It consists of a single node with indeg ≥ 2 or outdeg ≥ 2.
Case 4: It consists of a single node with indeg = 1 and outdeg = 1.
Case 5: It is a maximal chain of length ≥ 1.

Given a program written in an imperative language, its DD -path graph is the directed graph
in which nodes are DD-paths of its program graph, and edges represent control flow
between successor DD-paths.

DD-Paths are also known as segments.

Example

Nodes   DD-Path   Case of Definition
4       First     1
5-8     A         5
9       B         3
10      C         4
11      D         4
12      E         3
13      F         3
14      H         3
15      I         4
16      J         3
17      K         4
18      L         4
19      M         3
20      N         3
21      G         4
22      O         3
23      Last      2



Q3e. Explain the concept and significance of cause and effect graphing technique.

Ans: In the early years of computing, the software community borrowed many ideas from
the hardware community. In some cases, this worked well, but in others, the problems of
software just did not fit well with established hardware techniques. Cause-and-effect
graphing is a good example of this.

Cause-and-effect graphs show unit inputs on the left side of a drawing and use AND, OR,
and NOT “gates” to express the flow of data across stages of a unit.

In Cause-and-effect graphs if there is any problem at an output, the path(s) back to the
inputs that affected the output can be retraced.

Q3f. Compare weak robust and strong robust equivalence class testing.

Ans: Weak Robust Equivalence class testing

The word ‘weak’ means ‘single fault assumption’, while ‘robust’ means that invalid values
are also considered.

This type of testing is accomplished by using one variable from each equivalence class
(valid and invalid) in a test case.



Strong Robust Equivalence class testing

This type of testing is based on the multiple fault assumption theory.

Strong equivalence class testing is based on the Cartesian Product of the partition subsets.

The Cartesian product guarantees that we have a notion of “completeness” in two senses:

1)Covers all equivalence classes.

2) Have one of each possible combination of inputs.
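The Cartesian product can be formed directly with itertools.product. The representative values below are hypothetical picks, one per equivalence class of each variable.

```python
from itertools import product

# One hypothetical representative per class of each variable,
# invalid classes included (the "robust" part)
x1_reps = [-1, 5, 15, 25, 99]   # NV1, V1, V2, V3, NV2
x2_reps = [-1, 3, 8, 99]        # NV3, V4, V5, NV4

# Strong testing: every combination of one class per variable
test_cases = list(product(x1_reps, x2_reps))
print(len(test_cases))   # 5 * 4 = 20 test cases
```

The product covers all classes and every possible combination of classes, which is precisely the "completeness" claim above; weak testing, by contrast, would need only max(5, 4) = 5 test cases.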

Question 4

Q4a. Explain different methods of verification.

Ans: The different methods of verification are as follow: -

Self-Review: - Self-review may not be referred to as an official way of review in most
software verification descriptions, as it is assumed that everybody does a self-check before
giving a work product for further verification.

Peer Review: - Peer review is the most informal type of review where an author and peer are
involved. It is a review done by a peer and review records are maintained. A peer may be a
fellow developer or tester as the case may be. There is also a possibility of superior review
where peer is a supervisor with better knowledge and experience.

Walkthrough: - A walkthrough is conducted by the author of the document under review,
who takes the participants through the document and his or her thought processes, to
achieve a common understanding and to gather feedback. This is especially useful if people
from outside the software discipline are present, who are not used to, or cannot easily
understand, software development documents.



Inspection (Formal review): - It is usually led by a trained moderator (certainly not by the
author). The document under inspection is prepared and checked thoroughly by the
reviewers before the meeting, comparing the work product with its sources and other
referenced documents, and using rules and checklists. In the inspection meeting the defects
found are logged.

Audits: - Audit is a formal review based on samples. Audits are conducted by auditors who
may or may not be an expert in the given work product.

Q4b. Explain the steps involved in management of verification and validation.

Ans: Verification and validation techniques are complementary to each other. The steps
involved in verification and validation are as follow: -

Defining the processes for verification and validation: - The processes involved are:

a) Software quality assurance process


b) Software quality control process
c) Software development process
d) Software life cycle definition

Prepare plan for execution of process: - When a project proposal is made it must contain a
definition of what is meant by a successful delivery, and how quality of deliverables will be
achieved, ensured and tested during life cycle of the project.

Initiate implementation plan: - The plan made at the time of proposal/contract must be
implemented during development life cycle of a project. Whenever required, verification and
validation plans must be changed to accommodate the changes in development activities,
scope, customer expectations and so on.

Monitor Execution plan: - The verification and validation activities must be monitored during
development life cycle execution. Corrective/Preventive actions must be planned when
discrepancies are observed with respective planned arrangements, or defects are logged, or
non-conformances are observed in the processes/work products.

Analyze problems discovered during execution: - Execution of verification and validation
processes may bring out many problems in the product as well as the process. Root cause
analysis of problems and planning for improvement actions are essential parts of continuous
improvement.

Report progress of the process: - The outcome of verification and validation activities as plan
must be formally reported to management, customer, and development team to make them
aware of the project progress.

Ensure product satisfies requirements: - The requirements specified by the customer and
defined by the organization must be tested fully to ensure that they have been achieved in
the product delivered to the customer.

Q4c. Describe the benefits of review technique.

Ans: Review is a way of static testing technique done before dynamic testing. Review is
manual examination of software work product (including code) without execution of software
and make comments about it.

Review can be performed on any of the software works like requirement specification, design
specification, code, test plans, test specifications, test cases, test scripts, user guides or web
pages. Typical defects that are easier to find in review than in dynamic testing are deviations
from standards, requirement defects, design defects, insufficient maintainability and incorrect
interface specifications.

Benefits of Review techniques are as follow: -

a) Early defect detection and correction – it is much cheaper to remove errors
when found during review than to find them by running tests on executable
code.
b) Development productivity improvements and reduced development
timescales.
c) Reduced testing cost and time.
d) Lifetime cost reduction.
e) Fewer defects and improved communication.
f) Can find omissions (e.g. in requirements), which are unlikely to be found in
dynamic testing.

Q4d. List and explain how the formal review is carried out.

Ans: Inspection is a very formal way of reviewing the work product. A formal review is carried
out in following phases: -

a) Planning for Inspection: - Planning for Inspection involves selecting people for
inspection, allocating roles to other people (such as facilitator, recorder, and
presenter). Defining the entry and exit criteria for inspection, and selecting which
parts of artifacts are to be looked at.

b) Kick-off Inspection: - Kick-off Inspection may start by distributing artifacts, explaining


the objectives of inspection, process to be followed for inspection, and checking entry
criteria for the artifacts as well as inspection process.

c) Individual preparation: - Participants must come prepared for the inspection. Work
must be reviewed and comments by each of the participants must be ready before
the inspection meeting.

d) Inspection Meeting: - The participants of the meeting may simply note defects, make
recommendations for handling the defects or make decisions about the defects.
Recorder shall note these comments.

e) Decision on comments: - It is not necessary that all the comments will be accepted.
Comments may be rejected, differed or may undergo another iteration of inspection.
For accepted comments, the fixing of defects is done by the author.

f) Follow-up: - Completing the actions identified in the minutes of the meeting, like
checking that the defects have been addressed and whether the exit criteria have been
met. The findings can be used to gather statistics about the work product, project or
process.



Q4e. Explain the VV model of testing.

Ans:

The VV model, also known as the verification and validation model, associates verification
and validation activities with software development across the entire life cycle.

Business Requirement Analysis


This is the first phase in the development cycle where the product requirements are
understood from the customer’s perspective. This phase involves detailed communication
with the customer to understand his expectations and exact requirement. The acceptance
test design planning is done at this stage as business requirements can be used as an input
for acceptance testing.

System Design
Once you have the clear and detailed product requirements, it is time to design the
complete system. The system test plan is developed based on the system design. Doing this
at an earlier stage leaves more time for the actual test execution later.

Architectural Design
Architectural specifications are understood and designed in this phase. Usually more than
one technical approach is proposed and based on the technical and financial feasibility the
final decision is taken. This is also referred to as High Level Design (HLD).

Module Design
In this phase, the detailed internal design for all the system modules is specified, referred to
as Low Level Design (LLD). It is important that the design is compatible with the other
modules in the system architecture and the other external systems. The unit tests are an
essential part of any development process and helps eliminate the maximum faults and
errors at a very early stage. These unit tests can be designed at this stage based on the
internal module designs.



Coding Phase
The actual coding of the system modules designed in the design phase is taken up in the
Coding phase. The best suitable programming language is decided based on the system
and architectural requirements.

Validation Phases
The different Validation Phases in a V-Model are explained in detail below.

Unit Testing
Unit testing is the testing at code level and helps eliminate bugs at an early stage, though
all defects cannot be uncovered by unit testing.

Integration Testing
Integration testing is associated with the architectural design phase. Integration tests are
performed to test the coexistence and communication of the internal modules within the
system.

System Testing
System testing is directly associated with the system design phase. Most of the software and
hardware compatibility issues can be uncovered during this system test execution.

Acceptance Testing
Acceptance testing is associated with the business requirement analysis phase and involves
testing the product in the user environment. It also discovers non-functional issues, such as
load and performance defects, in the actual user environment.

Q4f. What are the roles and responsibilities of a reviewer?

Ans: Role and responsibilities of a reviewer includes: -

a) Manager: - Manager is the person responsible for getting the work product
inspected. For a project he may be the project manager. Manager decides on the
execution of inspection, defines the schedule, allocates time, defines the objectives to
be met, and determines if the inspection objectives have been met or not at the end
of inspection process.

b) Moderator: - Moderator is the person who leads the inspection of the artifacts,
including planning the inspection, running the meeting and follow-up after the
meeting. Moderator is also called ‘Facilitator’ as he facilitates the entire process.

c) Author: - Author is the writer or the person with chief responsibility of the artifact to
be inspected. He is the person who has created the artifacts, and will be taking action
based on the outcome of the inspection.

d) Reviewers (Checkers/Inspectors): - Individuals with specific technical or business


background who, after necessary preparation, identify and describe findings in the
work product under inspection in the form of comments. Reviewers must be chosen
to represent different perspectives and roles in the inspection process and must take
part in meetings.

e) Recorder: - The recorder is the person who documents all the issues, problems, and
open points identified during the meeting. The presence of a recorder is essential
because when reviewers or inspectors discuss a finding, the collectively agreed
comment must be captured. The recorder is also expected to write the minutes of
the meeting.
Question 5

Q5a. What is integration testing? Explain the Big bang approach.

Ans:

Integration testing involves integration of units to make a module, then integration of
modules to make a system.

Integration testing may start at module level where different units and components come
together to form a module, and go up to system level.

It is considered a structural testing technique.

Integration testing also tests the functionality of the software, but its main aim is the
interfaces between different modules and systems.

Integration testing mainly focuses on input/output protocols and parameter passing
between different units, modules, and/or systems.

There are various approaches to integration testing, depending upon how the system is
integrated: -
a) Big Bang approach
b) Bottom-Up Testing
c) Top-Down Testing

In the Big Bang approach, all units and modules are first developed and tested
individually, then integrated in a single step and tested as a whole; there is no
incremental integration. This approach is simple and needs no stubs or drivers, but
when a defect is found it is hard to isolate the interface responsible for it, so fault
localization and debugging become costly for large systems.
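As an illustration of the interface focus of integration testing, the sketch below combines two small units into a module and tests them together. The function names and the record format are invented for this example; they are not taken from the text.

```python
# Two hypothetical units, integrated and tested together. The interest of
# the integration test is the interface: does the value produced by one
# unit reach the other unit in the expected form?

def parse_record(line):
    """Unit 1: parse a 'name,amount' line into a (name, amount) pair."""
    name, amount = line.split(",")
    return name.strip(), float(amount)

def apply_discount(amount, percent):
    """Unit 2: apply a percentage discount to an amount."""
    return round(amount * (1 - percent / 100), 2)

def billed_total(lines, percent):
    """Integrated module: parse each record, then discount its amount."""
    return sum(apply_discount(parse_record(l)[1], percent) for l in lines)

# Integration test: exercise the parameter passing between the units.
assert parse_record("pen, 10") == ("pen", 10.0)
assert billed_total(["pen, 10", "book, 90"], 10) == 90.0
```

Here each unit may pass its own unit tests, yet the integration test is still needed to confirm that the amount flows correctly across the interface between them.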


Q5b. What is the need of a Security Testing?
Ans: Security testing is a special type of testing intended to check the level of
security and protection offered by an application to its users against
unfortunate incidents.

The incidents could be loss of privacy, loss of data, etc.

Some definitions associated with security are given below: -

a) Vulnerability: - The weaker parts of the system represent the vulnerabilities in
the system. These parts of the system are less protected. One must take
precaution not to expose these weak points to outsiders.

b) Threats: - Threats represent the possible attacks on the system from outsiders
with malicious intentions. A threat is defined as an exploitation of the
vulnerabilities of the system.

c) Perpetrators: - Perpetrators are the entities who are unwelcome guests in the
system. They can create problems in the system by doing something undesirable,
such as causing loss of data or making changes in the system. Perpetrators can
be people, other systems, viruses, etc.

d) Points of penetration: - The points where the system can be penetrated, or
where the system is least guarded, represent the points of penetration. These
points represent the vulnerabilities in the system.
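As a small illustration of probing a point of penetration, the sketch below tests a hypothetical input validator against malicious inputs of the kind a perpetrator might try. The validator, its whitelist rule, and the sample inputs are all invented for this example.

```python
# A naive whitelist validator for user-supplied search text, and
# security-style checks probing it with hostile inputs. Real security
# testing is far broader; this only illustrates the idea.
import re

def is_safe_search_term(term):
    """Accept only short strings of letters, digits, spaces, '_' and '-'."""
    return bool(re.fullmatch(r"[A-Za-z0-9 _-]{1,50}", term))

# A perpetrator would try inputs like these to exploit vulnerabilities:
assert is_safe_search_term("laptop bags")            # normal use
assert not is_safe_search_term("x'; DROP TABLE--")   # SQL-injection style
assert not is_safe_search_term("<script>alert(1)</script>")  # XSS style
assert not is_safe_search_term("")                   # empty input
```

Security test cases deliberately supply the inputs an attacker would, and pass only when the system rejects them.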

Q5c. What is performance testing? List different types of performance testing.


Ans: Performance testing is intended to find whether the system meets its performance
requirements under normal load or normal level of activities.

Normal load must be defined by the requirement statement.

Performance criteria must be expressed in numerical terms.

Performance criteria must be measurable in quantitative terms, such as time in
‘milliseconds’.

Some of the examples of performance testing are as below: -

a) Adding a new record in database must take maximum five milliseconds. It means
that when the record is added in database, it may take time lesser or equal to five
milliseconds.
b) Searching of a record in a database containing one million records must not take
more than one second. One must add one million records in the system, and then
use the search criteria to test this.
c) Sending information of 1 MB size across the system over a 512 Kbps network
must not take more than one minute.


Generally, automated tools are used where the performance requirements are very stringent
and human senses may not be capable of capturing them exactly.
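An automated performance check of the kind described above can be sketched as follows. The 5 ms limit is the example criterion from the text; sqlite3 stands in for a real database, so the whole setup is illustrative rather than a production harness.

```python
# Automated check that a database insert meets a numeric performance
# criterion (here: at most five milliseconds), measured with a
# high-resolution timer rather than human observation.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER, name TEXT)")

start = time.perf_counter()
conn.execute("INSERT INTO records VALUES (1, 'first record')")
elapsed_ms = (time.perf_counter() - start) * 1000

# The criterion is expressed in measurable, quantitative terms:
assert elapsed_ms <= 5.0, f"insert took {elapsed_ms:.2f} ms, limit is 5 ms"
```

The key point is that the pass/fail threshold is a number taken from the requirement statement, so the test result is objective and repeatable.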

Q5d. Explain the concept of inter system testing and its Importance
Ans: No system works alone. The system developed may have to interact with many other
supporting systems.

Testing of interfaces between two or more systems is essential to make sure that they work
correctly and information is transferred between different systems.

Inter-system testing is designed to ensure that: -

a) Parameters and data are correctly passed between the application and other
systems.

b) Documentation for the involved systems is accurate and complete, and matches
the expected inputs and outputs.

c) Inter-system testing is conducted whenever there is a change in the parameters.

d) A representative set of test transactions is prepared in one system and passed on
to another system.

e) Manual verification of documentation is done to understand the relationship
between different systems.
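A representative test transaction passed between two systems can be sketched as below. The two systems (billing and shipping), their function names, and the JSON interface are all hypothetical stand-ins for this example.

```python
# A representative transaction is prepared in one hypothetical system
# (billing), passed across a JSON interface to another (shipping), and
# the test asserts that the parameters survive the transfer intact.
import json

def billing_export(order_id, items):
    """System A: serialize an order for the shipping system."""
    return json.dumps({"order_id": order_id, "items": items})

def shipping_import(payload):
    """System B: read the order produced by the billing system."""
    data = json.loads(payload)
    return data["order_id"], data["items"]

# Inter-system check: parameters and data correctly passed across systems.
sent = billing_export(1001, ["pen", "book"])
order_id, items = shipping_import(sent)
assert order_id == 1001 and items == ["pen", "book"]
```

The same pattern, with real payloads, is what item d) above describes: one system produces a known transaction, the other consumes it, and the test verifies the handover.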

Q5e. Explain the significance of Usability testing.

Ans: Usability testing is done to check the ‘ease of use’ of an application for a common user
who will use the application in the production environment.

It involves using the user guides and help manuals available with the application. It is
applied to determine whether: -

a) It is simple to understand the application's usage through its look, feel, and the
support available, like online help.
b) It is easy to execute an application process from the user interface provided.

Usability testing is done by: -

a) Direct observation of people using the system.
b) Conducting usability surveys.
c) Beta testing or a business pilot of the application in the user environment.

Usability testing checks for human factor problems such as: -

a) Whether outputs from the system, such as printouts and reports, are meaningful.
b) Whether error diagnostics are straightforward.
c) Whether error messages help common users of the system.
d) Whether the user interface conforms to syntax, format, and style conventions.
e) Whether the application is easy to use.
f) Whether there is an exit option in all choices, so that the user can exit the system
at any moment.
g) The system must not annoy the intended user.
h) The system taking control from the user without indicating when it will be
returned can be a problem.
i) The system must provide online help or a user manual.
j) The system must be consistent in its function and overall design.

Q5f. Explain Commercial off-the-shelf software testing.

Ans: ‘COTS’ stands for ‘commercial off-the-shelf’ software. Such software is readily available
in the market, and users can buy and use it directly.

The reasons software organizations use COTS are: -

 Line of Business: - An organization may not be in the line of business of making the
software which it requires.

 Cost-Benefit Analysis: - Sometimes it is very costly to develop software in-house due
to various limitations like knowledge, skills, resources, etc.

 Expertise/Domain knowledge: - An organization may have knowledge about how to
use the software that it needs, but it may not have the knowledge to build such
software in-house.

 Delivery Schedule: - COTS are available off the shelf by paying a price, while
developing such software in-house may take a very long time and huge effort.

Following are the features of COTS: -

 COTS are developed with general requirements.
 Some COTS may need changes in business processes to suit the COTS
implementation.
 Sometimes COTS may need configuration of the software or system.

---------------------------x---------------------------
