Software Engineering MCA - Unit - 2
Syllabus:
Software Engineering Practice: Software Engineering Practice, communication practices,
Planning Practices, Modeling Practices, Construction Practices, Deployment.
Testing Tactics: Software Testing Fundamentals, Black Box and White Box Testing, White Box
Testing, Basis Path Testing, Control Structure Testing, Black Box Testing.
INTRODUCTION
Software Engineering is a rapidly evolving field, and new tools and technologies are
constantly being developed to improve the software development process. By following the
principles of software engineering and using the appropriate tools and methodologies, software
developers can create high-quality, reliable, and maintainable software that meets the needs of
its users.
In this part you’ll learn about the principles, concepts, and methods that comprise software
engineering practice.
People who create computer software practice the art, craft, or discipline that is software
engineering. But what is software engineering “practice”?
What is it? Practice is a broad array of concepts, principles, methods, and tools that you must
consider as software is planned and developed. It represents the details (the technical considerations
and how-tos) that lie below the surface of the software process: the things you need to actually
build high-quality computer software.
Who does it? The practice of software engineering is applied by software engineers and their
managers.
Why is it important? The software process provides everyone involved in the creation of a
computer-based system or product with a road map for getting to a destination successfully.
Practice provides you with the detail you’ll need to drive along the road. It tells you where the
bridges, the roadblocks, and the forks are located. It helps you understand the concepts and
principles that must be understood and followed to drive safely and rapidly. It instructs you on
how to drive, where to slow down, and where to speed up. In the context of software
engineering, practice is what you do day in and day out as software evolves from an idea to a
reality.
What are the steps? Three elements of practice apply regardless of the process model that is
chosen: concepts, principles, and methods. A fourth element of practice, tools, supports the
application of methods.
What is the work product? Practice encompasses the technical activities that produce all
work products that are defined by the software process model that has been chosen.
We introduced a generic software process model composed of a set of activities that establish
a framework for software engineering practice.
The essence of Practice
In a classic book, How to Solve It, written before modern computers existed, George Polya
outlined the essence of problem solving and, consequently, the essence of software
engineering practice:
1. Understand the problem (communication and analysis).
2. Plan a solution (modeling and software design).
3. Carry out the plan (code generation).
4. Examine the result for accuracy (testing and quality assurance).
In the context of software engineering, these common-sense steps lead to a series of essential
questions.
The dictionary defines the word principle as “an important underlying law or assumption
required in a system of thought.” Throughout this book we discuss principles at many different
levels of abstraction. Some focus on software engineering as a whole, others consider a specific
generic framework activity (e.g., customer communication), and still others focus on software
engineering actions (e.g., architectural design) or technical tasks (e.g., write a usage scenario).
David Hooker has proposed seven core principles that focus on software engineering practice
as a whole. They are reproduced below.
Every software system we build is used by someone else. So always specify, design, and implement
knowing someone else will have to understand what you are doing.
Design, by keeping the implementers in mind. Code with concern for those who must maintain
and extend the system. Someone may have to debug the code you write, and that makes them
a user of your code. Making their job easier adds value to the system.
A system with a long lifetime has more value. In today’s computing environments, where
specifications change on a moment’s notice and hardware platforms are obsolete after just a
few months, software lifetimes are typically measured in months instead of years. However,
true “industrial-strength” software systems must endure far longer. To do this successfully,
these systems must be ready to adapt to these and other changes. This could very possibly
lead to the reuse of an entire system.
COMMUNICATION PRACTICES
Before customer requirements can be analysed, modelled, or specified, they must be gathered
through a communication (also called requirements elicitation) activity. A customer has a
problem that may be amenable to a computer-based solution. A developer responds to the
customer’s request for help. Communication has begun. But the road from communication to
understanding is often full of potholes.
Effective communication (among technical peers, with the customer and other
stakeholders, and with project managers) is among the most challenging activities that
confront a software engineer. In this context, we discuss communication principles that apply
equally to all forms of communication that occur within a software project.
Principle #8: (a) Once you agree to something, move on; (b) if you can’t agree to
something, move on; (c) if a feature or function is unclear and cannot be clarified at the
moment, move on. Communication, like any software engineering activity, takes time.
Rather than iterating endlessly, the people who participate should recognize that many topics
require discussion and that “moving on” is sometimes the best way to achieve
communication agility.
Principle #9: Negotiation is not a contest or a game. It works best when both parties
win.
There are many instances in which the software engineer and the customer must negotiate
functions and features, priorities, and delivery dates. If the team has collaborated well, all
parties have a common goal. Therefore, negotiation will demand compromise from all
parties.
PLANNING PRACTICES
The communication activity helps a software team to define its overall goals and
objectives. However, understanding these goals and objectives is not the same as defining a
plan for getting there. The planning activity encompasses a set of management and technical
practices that enable the software team to define a road map as it travels towards its strategic
goals and technical objectives.
Regardless of the rigor with which planning is conducted, the following principles always
apply.
Principle #1: Understand the scope of the project. It’s impossible to use a road map if you
don’t know where you’re going. Scope provides the software team with a destination.
Principle #2: Involve the customer in the planning activity. The customer defines priorities
and establishes the project constraints.
Principle #3: Recognize that planning is iterative. As work begins, it is very likely that
things will change. As a consequence, the plan must be adjusted to accommodate these
changes. In addition, iterative and incremental process models dictate re-planning based on
feedback received from users.
Principle #4: Estimate based on what you know. The intent of estimation is to provide an
indication of effort, cost, and task duration, based on the team’s current understanding of the
work to be done.
Principle #5: Consider risk as you define the plan. If the team has defined risks that have
high impact and high probability, contingency planning is necessary.
Principle #6: Be realistic. People don’t work at 100 percent capacity every day. Noise always enters
into any human communication. Omission and ambiguity are facts of life. Change will occur.
Even the best software engineers make mistakes. These and other realities should be
considered as a project plan is established.
Principle #7: Adjust granularity as you define the plan. Granularity refers to the level of
detail that is introduced as a project plan is developed. A “fine granularity” plan provides
significant work detail that is planned over relatively short time increments.
Principle #8: Define how you intend to ensure quality. The plan should identify how the
software team intends to ensure quality. If formal technical reviews are to be conducted, they
should be scheduled.
Principle #9: Describe how you intend to accommodate change. Even the best planning
can be obviated by uncontrolled change. The software team should identify how changes are
to be accommodated as software engineering work proceeds.
Principle #10: Track the plan frequently and make adjustments as required. Software
projects fall behind schedule one day at a time. Therefore, it makes sense to track progress on
a daily basis, looking for problem areas and situations in which scheduled work does not
conform to the actual work conducted. When slippage is encountered, the plan is adjusted
accordingly. The following is called the W5HH principle.
Why is the system being developed? All parties should assess the validity of business
reasons for the software work. Stated in another way, does the business purpose justify the
expenditure of people, time, and money?
What will be done? Identify the functionality to be built, and by implication, the task
required to get the job done.
When will it be accomplished? Establish a workflow and timeline for key project tasks and
identify the milestones required by the customer.
Who is responsible for a function? The role and responsibility of each member of the
software team must be defined.
Where are they organizationally located? Not all roles and responsibilities reside within
the software team itself. The customer, users, and other stakeholders also have responsibilities.
How will the job be done technically and managerially? Once product scope is
established, a management and technical strategy for the project must be defined.
How much of each resource is needed? The answer to this question is derived by
developing estimates based on answers to earlier questions.
The answers to the above questions are important regardless of the size or complexity of a
software project. But how does the planning process begin?
MODELING PRACTICE
Models are created to gain a better understanding of the actual entity to be built. When
the entity is a physical thing, we can build a model that is identical in form and shape but smaller
in scale. However, when the entity is software, our model must take a different form. It must
be capable of representing the information that the software transforms, the architecture and
functions that enable the transformation to occur, the features that users desire, and the
behaviour of the system as the transformation is taking place.
Two classes of models are created:
1. Analysis models represent the customer requirements by depicting the software in three
different domains: the information domain, the functional domain, and the behavioural
domain.
2. Design models represent characteristics of the software that help practitioners to
construct it effectively.
A large number of analysis modelling methods have been developed. Each analysis method
has a unique point of view.
Principle #1: The information domain of a problem must be represented and understood.
The information domain encompasses the data that flow into the system and the data stores that
collect and organize persistent data objects.
Principle #2: The functions that the software performs must be defined.
Software functions provide direct benefit to end-users. Some functions transform data
that flow into the system; in other cases, functions effect some level of control over internal
software processing or external system elements.
Principle #3: The behaviour of the software (as a consequence of external events) must be
represented.
The behaviour of computer software is driven by its interaction with the external environment.
Input provided by end-users, control data provided by an external system, or monitoring data
collected over a network all cause the software to behave in a specific way.
Principle #4: The models that depict information, function, and behaviour must be
partitioned in a manner that uncovers detail in a layered fashion.
Analysis modelling is the first step in software engineering problem solving. It allows the
practitioner to understand the problem better and establishes a basis for the solution (design).
Complex problems are difficult to solve in their entirety. For this reason, we use a divide and
conquer strategy. A large, complex problem is divided into sub-problems until each sub-
problem is relatively easy to understand. This concept is called partitioning, and it is a key
strategy in analysis modelling.
Principle #5: The analysis task should move from essential information toward
implementation detail.
Analysis modelling begins by describing the problem from the end-user’s perspective. The
“essence” of a problem is described without any consideration of how a solution will be
implemented.
The design model created for software provides a variety of different views of the system. There
is no shortage of methods for deriving the various elements of a software design.
Some methods are data-driven, allowing the data structure to dictate the program architecture
and the resultant processing component.
Others are pattern-driven, using information about the problem domain (the analysis model)
to develop architectural styles and processing patterns. Regardless of the method that is used,
a set of design principles can be applied.
The analysis model describes the information domain of the problem, user-visible functions,
system behaviour, and a set of analysis classes that package business objects with the
methods that service them. The design model translates this information into an
architecture, a set of subsystems that implement major functions, and a set of component-
level designs that realize analysis classes.
Software architecture is the skeleton of the system to be built. It affects interfaces, data
structures, program control flow behaviour, the manner in which testing can be conducted and
the maintainability of resultant system.
Data design is an essential element of architectural design. The manner in which data objects
are realized within the design cannot be left to chance. A well-structured data design helps to
simplify program flow, makes design and implementation of software components easier, and
makes overall processing more efficient.
Principle #4: Interfaces (both internal and external) must be designed with care.
The manner in which data flow between the components of a system has much to do with
processing efficiency, error propagation, and design simplicity. A well-designed interface makes
integration easier and assists the tester in validating component functions.
Principle #5: User interface design should be tuned to the needs of the end-user.
However, in every case, it should be stress free and easy to use. The user interface is the visible
manifestation of the software. A poor interface design often leads to the perception that the
software is “bad”.
Principle #6: Component-level design should be functionally independent.
Note: Cohesion means the degree to which elements within a module work together to fulfil a
single, well-defined purpose; that is, a component should focus on one and only one function or
sub-function.
Principle #7: Components should be loosely coupled to one another and to the external
environment.
Coupling is achieved in many ways: via component interfaces, by messaging, and through global
data. As the level of coupling increases, the likelihood of error propagation also increases and
the overall maintainability of the software decreases. Therefore, component coupling should
be kept as low as is reasonably possible.
Note: Coupling means the degree of interdependency between modules.
High coupling + low cohesion => can make a system difficult to change and test.
Low coupling + high cohesion => can make a system easier to maintain.
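As a hypothetical Python sketch of these definitions (the function names are invented for illustration), the first function below couples to its callers through shared global data, while the second is cohesive and communicates only through its interface:

```python
# Tightly coupled: reads and writes a module-level global, so any caller
# that touches `total` can change this function's behaviour (global-data coupling).
total = 0

def add_to_total(value):
    global total
    total += value
    return total

# Loosely coupled and cohesive: all data flows through the parameter and the
# return value, and the function does one well-defined job (summing a sequence).
def running_total(values):
    result = 0
    for v in values:
        result += v
    return result

print(running_total([10, 20, 30]))  # 60
```

The second version can be tested in isolation, which is exactly the testability benefit that low coupling buys.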
The purpose of design is to communicate information to practitioners who will generate code,
to those who will test the software, and others who may maintain the software in the future. If
the design is difficult to understand, it will not serve as an effective communication medium.
Like most creative activities, design occurs iteratively. With each iteration, the designer
should strive for greater simplicity. Early iterations work to refine the design and correct
errors; later iterations should strive to make the design as simple as possible.
CONSTRUCTION PRACTICE
The construction activity encompasses a set of coding and testing tasks that lead to operational
software that is ready for delivery to the customer or end-user.
Preparation principles:
Coding principles:
Validation principles:
Testing Principles:
In a classic book on software testing, Glen Myers states a number of rules that can
serve well as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
Davis suggests a set of testing principles that have been adapted for use in this book:
The objective of software testing is to uncover errors. It follows that the most severe defects
(from the customer’s point of view) are those that cause the program to fail to meet its
requirements/goals.
Test planning can begin as soon as the analysis model is complete. Detailed definition of test
cases can begin as soon as the design model has been solidified. Therefore, all tests can be
planned and designed before any code has been generated.
Principle #4: Testing should begin “in the small” and progress toward testing “in the
large”.
The first tests planned and executed generally focus on individual components. As testing
progresses, focus shifts in an attempt to find error in integrated clusters of components and
ultimately in the entire system.
The number of path permutations for even a moderately sized program is exceptionally large.
For this reason, it is impossible to execute every combination of paths during testing. It is
possible, however, to adequately cover program logic and to ensure that all conditions in the
component-level design have been exercised.
Deployment practices
The deployment activity encompasses three actions: delivery, support, and feedback.
Because modern software process models are evolutionary in nature, deployment happens not
once, but a number of times as software moves towards completion. Each delivery cycle
provides the customer and end-users with an operational software increment that provides
usable functions and features. The delivery of a software increment represents an important
milestone for any software project.
A number of key principles should be followed as the team prepares to deliver an increment:
Principle #3: A support regime must be established before the software is delivered.
An end-user expects responsiveness and accurate information when a question or problem
arises. Support should be planned, support material should be prepared, and appropriate
record-keeping mechanisms should be established so that the software team can conduct a
categorical assessment of the kinds of support requested.
Testing Tactics:
Software testability is measured with respect to the efficiency and effectiveness of testing.
Efficient software architecture is very important for software testability. Software testing is
a time-consuming, necessary activity in the software development lifecycle, and making
this activity easier is one of the important tasks for software companies as it helps to reduce
costs and increase the probability of finding bugs. There are certain metrics that could be
used to measure testability in most of its aspects. Sometimes, testability is used to mean
how adequately a particular set of tests will cover the product.
• Testability helps to determine the effort required to execute test activities.
• The lower the testability, the greater the effort required for testing, and vice versa.
3. Controllability: “The better we can control the software, the more the testing can be
automated and optimized.”
• All possible outputs can be generated through some combination of inputs. Software
and hardware states and variables can be controlled directly by the test engineer. Tests
can be conveniently specified, automated & reproduced.
• Input and output formats are consistent and structured.
4. Decomposability: “By controlling the scope of testing, we can more quickly isolate
problems and perform smarter retesting.”
• The software system is built from independent modules.
• Software modules can be tested independently.
5. Simplicity: “The less there is to test, the more quickly we can test it.”
• Functional simplicity (e.g., the feature set is the minimum necessary to meet
requirements).
• Structural simplicity (e.g., architecture is modularized to limit the propagation of
faults).
• Code simplicity (e.g., a coding standard is adopted for ease of inspection and
maintenance).
6. Stability: “The fewer the changes, the fewer the disruptions to testing.”
• Changes to the software are infrequent.
• Changes to the software are controlled.
• Changes to the software do not invalidate existing tests.
• The software recovers well from failures.
8. Availability: “The more accessible the objects are, the easier it is to design test cases.”
It is all about the accessibility of objects or entities for performing the testing, including
bugs, source code, etc.
Test Characteristics.
1.A good test has a high probability of finding an error.
To achieve this goal, the tester must understand the software and attempt to develop
a mental picture of how the software might fail. Ideally, the classes of failure are
probed. For example, one class of potential failure in a graphical user interface is
the failure to recognize proper mouse position. A set of tests would be designed to
exercise the mouse in an attempt to demonstrate an error in mouse position
recognition.
2.A good test is not redundant.
Testing time and resources are limited. There is no point in conducting a test that
has the same purpose as another test. Every test should have a different purpose
(even if it is subtly different).
3.A good test should be “best of breed”.
In a group of tests that have a similar intent, time and resource limitations may
militate toward the execution of only a subset of these tests. In such cases, the test that has
the highest likelihood of uncovering a whole class of errors should be used.
4.A good test should be neither too simple nor too complex.
Although it is sometimes possible to combine a series of tests into one test case, the
possible side effects associated with this approach may mask errors. In general, each test
should be executed separately.
Software testing is a type of investigation to find out whether any defect or error is present in
the software, so that the errors can be reduced or removed to increase the quality of the
software, and to check whether it fulfils the specified requirements or not.
White-box Testing
White-box testing is a testing technique in which software’s internal structure, design, and
coding are tested to verify input-output flow and improve design, usability, and security.
In white box testing, code is visible to testers, so it is also called Clear box testing, Open
box testing, Transparent box testing, Code-based testing, and Glass box testing.
White box Testing is Performed in 2 Steps:
1. Tester should understand the code well
2. Tester should write some code for test cases and execute them.
1. Control Flow Graph – A control flow graph (or simply, flow graph) is a directed graph
which represents the control structure of a program or module. A control flow graph (V, E)
has V nodes/vertices and E edges. A control flow graph can also have:
• Junction Node – a node with more than one arrow entering it.
• Decision Node – a node with more than one arrow leaving it.
• Region – area bounded by edges and nodes (area outside the graph is also counted as a
region.).
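These definitions can be sketched in Python. The flow graph below is invented for illustration (one decision, one junction); V(G) = E - N + 2 is the cyclomatic-complexity count used later in basis path testing:

```python
# Hypothetical flow graph for a module with one if-then-else:
# node 1 -> 2 (decision) -> 3 or 4 -> 5 (junction/exit).
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]
nodes = {n for edge in edges for n in edge}

# Decision node: more than one arrow leaving it.
# Junction node: more than one arrow entering it.
out_deg = {n: sum(1 for a, _ in edges if a == n) for n in nodes}
in_deg = {n: sum(1 for _, b in edges if b == n) for n in nodes}
decisions = sorted(n for n in nodes if out_deg[n] > 1)
junctions = sorted(n for n in nodes if in_deg[n] > 1)

# Cyclomatic complexity V(G) = E - N + 2, which also equals the number of regions.
v_g = len(edges) - len(nodes) + 2

print(decisions, junctions, v_g)  # [2] [5] 2
```

With one decision node, V(G) = 2, matching the two regions of this graph.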
The notations used while constructing a flow graph cover sequential statements and the
if-then-else, while, until, and case constructs; the corresponding figures are given in the notes.
Graph Matrices:
➢ A graph matrix is a square matrix whose size (i.e., number of rows and columns) is
equal to the number of nodes on the flow graph.
➢ Each row and column corresponds to an identified node, and matrix entries
correspond to connections (an edge) between nodes.
➢ Each node on the flow graph is identified by a number, while each edge is identified
by a letter. A letter entry is made in the matrix to correspond to a connection between
two nodes.
➢ The graph matrix is nothing more than a tabular representation of a flow graph.
However, by adding a link weight to each matrix entry, the graph matrix can
become a powerful tool for evaluating program control structure during testing.
➢ The link weight provides additional information about control flow.
➢ In its simplest form, the link weight is 1 (a connection exists) or 0 (a connection
does not exist).
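As a sketch of this idea, the same kind of flow graph can be written as a graph matrix with link weights of 1 or 0; summing (entries - 1) over rows with two or more entries, plus 1, yields the cyclomatic complexity (the matrix here is hypothetical):

```python
# Graph matrix (link weight 1 = a connection exists) for a 5-node flow graph.
# Rows/columns are nodes 1..5; entry [i][j] = 1 means an edge from node i+1 to node j+1.
matrix = [
    [0, 1, 0, 0, 0],  # 1 -> 2
    [0, 0, 1, 1, 0],  # 2 -> 3 and 2 -> 4 (decision node: two entries)
    [0, 0, 0, 0, 1],  # 3 -> 5
    [0, 0, 0, 0, 1],  # 4 -> 5
    [0, 0, 0, 0, 0],  # 5 (exit)
]

# Connections per row = (number of entries) - 1, counted only for rows with
# two or more entries; cyclomatic complexity = sum of connections + 1.
connections = sum(sum(row) - 1 for row in matrix if sum(row) > 1)
v_g = connections + 1
print(v_g)  # 2
```

This agrees with V(G) = E - N + 2 for the same graph, which is why the matrix form is convenient for tool support.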
CHECK NOTES FOR EXAMPLE FOR BASIS PATH TESTING & GRAPH
MATRICES
1. Condition Testing:
Condition testing is a test case design method which ensures that the logical conditions
and decision statements in a program are free from errors.
The errors present in logical conditions can be incorrect Boolean operators, missing
parentheses in a Boolean expression, errors in relational operators, arithmetic expressions,
and so on.
The common types of logical conditions that are tested using condition testing are-
1. A relational expression, like E1 op E2, where ‘E1’ and ‘E2’ are arithmetic expressions and
‘op’ is a relational operator.
2. A simple condition, like any relational expression preceded by a NOT (~) operator. For
example, (~E1), where ‘E1’ is an arithmetic expression and ‘~’ denotes the NOT operator.
3. A compound condition consists of two or more simple conditions, Boolean operators,
and parentheses. For example, (E1 & E2) | (E2 & E3), where E1, E2, E3 denote arithmetic
expressions and ‘&’ and ‘|’ denote the AND and OR operators.
4. A Boolean expression consists of operands and Boolean operators like AND, OR, and
NOT. For example, ‘A|B’ is a Boolean expression where ‘A’ and ‘B’ denote operands
and ‘|’ denotes the OR operator.
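These condition types can be exercised exhaustively for a small compound condition. The sketch below uses the condition from item 3 above and prints the outcome for every combination of truth values, which is how a wrong Boolean or relational operator would be exposed:

```python
from itertools import product

# Compound condition from item 3 in the text: (E1 & E2) | (E2 & E3).
def compound(e1, e2, e3):
    return (e1 and e2) or (e2 and e3)

# Condition coverage: drive every simple condition to both True and False
# and record the outcome of the compound condition for each combination.
for e1, e2, e3 in product([True, False], repeat=3):
    print(e1, e2, e3, "->", compound(e1, e2, e3))
```

For three simple conditions this is only eight cases, so exhaustive enumeration is practical; for larger expressions, condition-testing strategies select a subset.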
2. Data Flow Testing:
The data flow test method chooses the test paths of a program based on the locations of the
definitions and uses of all the variables in the program. The data flow test approach assumes
that each statement in a program is assigned a unique statement number and that no
function can modify its parameters or global variables.
1. All defs:
For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including the def-clear path from node i to some node j having a use of x.
2. All c-uses:
For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including the def-clear path from node i to all nodes j having a global c-
use of x in j.
3. All p-uses:
For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including the def-clear path from node i to all edges (j,k) having p-use of
x on edge (j,k).
4. All p-uses/Some c-uses:
It is similar to the all p-uses criterion except that when variable x has no global p-use, it
reduces to the some c-uses criterion as given below.
5. Some c-uses:
For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including the def-clear path from node i to some nodes j having a global
c-use of x in node j.
6. All c-uses/Some p-uses:
It is similar to the all c-uses criterion except that when variable x has no global c-use, it
reduces to the some p-uses criterion as given below:
7. Some p-uses:
For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including def-clear paths from node i to some edges (j,k) having a p-use
of x on edge (j,k).
8. All uses:
it is a combination of all p-uses criterion and all c-uses criterion.
9. All du-paths:
For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including all du-paths from node i to all nodes j having a global use of x
in j.
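The criteria above rely on a per-statement table of definitions and uses. The following sketch enumerates the def-use pairs that the “all uses” criterion would require covering; the statement numbers and the classification of uses are invented for illustration:

```python
# Hypothetical def/use table for one variable x, keyed by statement (node) number.
# 'def'   = x is assigned at that node;
# 'c-use' = x is read in a computation;
# 'p-use' = x is read in a predicate (the use sits on the node's outgoing edges).
defs   = {1, 6}        # x defined at nodes 1 and 6
c_uses = {6, 7}        # x read in computations at nodes 6 and 7
p_uses = {2, 4, 5}     # x read in predicates at nodes 2, 4 and 5

# "All uses" asks for a def-clear path from each def to every reachable use;
# here we simply enumerate the def-use pairs a test set would have to cover.
du_pairs = [(d, u, kind)
            for d in sorted(defs)
            for kind, uses in (("c-use", c_uses), ("p-use", p_uses))
            for u in sorted(uses)]
for pair in du_pairs:
    print(pair)
```

A real tool would additionally check that each listed path is def-clear (no intervening redefinition of x); that reachability analysis is omitted from this sketch.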
There are 8 statements in this code. We cannot cover all 8 statements in a single path: if
statement 2 is true then statements 4, 5, 6, and 7 are not traversed, and if statement 4 is true
then statement 3 is not traversed.
Hence we consider two paths so that we can cover all the statements.
Set x = 1
Path: 1, 2, 3, 8
Output = 2
Set x = -1
Path: 1, 2, 4, 5, 6, 5, 6, 5, 7, 8
Output = 2
When x is set to -1, step 1 assigns x as -1 and control moves to step 2, which is false as x is
smaller than 0 (the condition is x > 0 and here x = -1). It then jumps to step 4; as step 4 is true
(x <= 0 and here x is less than 0) it proceeds to step 5 (x < 1), which is true, so it moves to
step 6 (x = x + 1), where x is increased by 1.
So,
x=-1+1
x=0
x becomes 0 and control goes back to step 5 (x < 1); as it is true, it jumps to step 6 (x = x + 1):
x = x + 1
x = 0 + 1
x = 1
x is now 1 and control jumps to step 5 (x < 1); now the condition is false, so it jumps to step 7
(a = x + 1) and sets a = 2, as x is 1. At the end the value of a is 2, and at step 8 we get the
output 2.
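The eight-statement program itself is not reproduced in these notes; the following Python reconstruction is a plausible sketch consistent with the two paths and the walkthrough above:

```python
def program(x):                 # statement 1: x is set on entry
    if x > 0:                   # statement 2
        a = x + 1               # statement 3
    elif x <= 0:                # statement 4
        while x < 1:            # statement 5
            x = x + 1           # statement 6
        a = x + 1               # statement 7
    return a                    # statement 8: output a

print(program(1))   # path 1, 2, 3, 8 -> 2
print(program(-1))  # path 1, 2, 4, 5, 6, 5, 6, 5, 7, 8 -> 2
```

Together the two calls traverse every statement, which is exactly what the two chosen paths were designed to achieve.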
3) Loop Testing:
• Loops are widely used and are fundamental to many algorithms; hence, their testing
is very important. Errors often occur at the beginnings and ends of loops.
1. Simple loops: For simple loops of size n, test cases are designed that make:
➢ Zero passes (skip the loop entirely)
➢ Only one pass through the loop
➢ 2 passes
➢ m passes, where m < n
➢ n-1, n and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum count and we
start from the innermost loop. Simple loop tests are conducted for the innermost loop
and this is worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests are
applied for each. If they’re not independent, treat them like nesting.
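For simple loops, the checklist above can be generated mechanically. The sketch below (the function name is invented) returns the pass counts to exercise for a loop with a maximum of n iterations:

```python
# Pass counts to exercise for a simple loop with a maximum of n iterations,
# following the simple-loop checklist: skip, 1, 2, m (typical), n-1, n, n+1.
def simple_loop_cases(n, m):
    # m is any typical value strictly between 2 and n-1.
    assert 2 < m < n - 1, "m must lie strictly between 2 and n-1"
    return [0, 1, 2, m, n - 1, n, n + 1]

print(simple_loop_cases(10, 5))  # [0, 1, 2, 5, 9, 10, 11]
```

The n+1 case deliberately probes whether the loop bound can be exceeded, which is where off-by-one errors tend to hide.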
Black-Box Testing:
Black-box testing, also called behavioural testing, focuses on the functional requirements of
the software. That is, black-box testing techniques enable you to derive sets of input
conditions that will fully exercise all functional requirements for a program. Black-box
testing is not an alternative to white-box techniques.
Black-box testing attempts to find errors in the following categories:
(1) incorrect or missing functions,
(2) interface errors,
(3) errors in data structures or external database access,
(4) behavior or performance errors, and
(5) initialization and termination errors.
Black-Box Testing techniques:
The symbolic representation of a graph is shown in Figure 18.8a. Nodes are represented as
circles connected by links that take a number of different forms. A directed link
(represented by an arrow) indicates that a relationship moves in only one direction. A
bidirectional link, also called a symmetric link, implies that the relationship applies in both
directions. Parallel links are used when a number of different relationships are established
between graph nodes.
2) Equivalence Partitioning
This technique is also known as Equivalence Class Partitioning (ECP). In this technique,
input values to the system or application are divided into different classes or groups based on
their similarity in the outcome.
Hence, instead of using each and every input value, we can now use any one value from the
group/class to test the outcome. This way, we can maintain test coverage while we can reduce
the amount of rework and most importantly the time spent.
For Example:
Suppose an “AGE” text field accepts only numbers from 18 to 60. There will be three sets of
classes or groups: below 18 (invalid), 18 to 60 (valid), and above 60 (invalid).
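A minimal sketch of this partitioning, assuming the 18-60 AGE field above: each class gets one representative value, which stands in for every other value in that class:

```python
# Equivalence classes for an AGE field that accepts 18..60 (inclusive):
# one invalid class below, one valid class, one invalid class above.
def age_class(age):
    if age < 18:
        return "invalid: below 18"
    if age <= 60:
        return "valid: 18-60"
    return "invalid: above 60"

# One representative per class is enough to stand in for the whole class.
for representative in (10, 35, 75):
    print(representative, "->", age_class(representative))
```

Three test inputs thus replace testing every possible age value, which is the coverage-versus-effort trade the technique is built on.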
3) Boundary Value Analysis:
The basic principle behind this technique is to choose input data values:
• Just below the minimum value
• Minimum value
• Just above the minimum value
• A Normal (nominal) value
• Just below the maximum value
• Maximum value
• Just above the maximum value
Example #1: Input box (say for accepting age) accepts values between 21 and 55.
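Applying the checklist above to this input box, a small sketch (the choice of nominal value is an assumption) generates the seven boundary-value test inputs:

```python
# Boundary values for an input box accepting 21..55, following the checklist:
# just below min, min, just above min, a nominal value, just below max, max,
# just above max. The nominal value is taken as the midpoint (an assumption).
def boundary_values(minimum, maximum):
    nominal = (minimum + maximum) // 2
    return [minimum - 1, minimum, minimum + 1,
            nominal,
            maximum - 1, maximum, maximum + 1]

print(boundary_values(21, 55))  # [20, 21, 22, 38, 54, 55, 56]
```

The out-of-range values (20 and 56) should be rejected by the application; the rest should be accepted.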
4) Orthogonal Array Testing (OATS):
• Runs: the number of rows, which represents the number of test conditions to be
performed. As the rows represent the test conditions (experimental tests) to be performed,
the goal is to minimize the number of rows as much as possible.
• Factors: the number of columns, which is the number of variables.
• Levels: the maximum number of values for a factor (0 to Levels - 1).
Together, Levels and Factors name the array: LRuns(Levels^Factors).
Example 1
We provide personal information such as Name, Age, Qualification, etc., in various
registration forms, for example during a first-time app installation or on government websites.
The following example is based on this kind of application form. Consider a registration form
(webpage) with four fields, each of which has certain sub-options.
Age field
• Less than 18
• More than 18
• More than 60
Gender field
• Male
• Female
• NA
Highest Qualification
• High School
• Graduation
• Post-Graduation
Mother Tongue
• Hindi
• English
• Other
Step 1: Determine the number of independent variables. There are four independent variables
(Fields of the registration form) = 4 Factors.
Step 2: Determine the maximum number of values for each variable. There are three values
(There are three sub-options under each field) = 3 Levels.
Step 3: Determine the orthogonal array with 4 Factors and 3 Levels. From the standard
orthogonal-array tables, the number of rows required is 9.
The orthogonal array follows the pattern LRuns(Levels^Factors). Hence, in this example, the
orthogonal array will be L9(3^4).
Run   Factor 1  Factor 2  Factor 3  Factor 4
Run 1    0         0         0         0
Run 2    0         1         2         1
Run 3    0         2         1         2
Run 4    1         0         2         2
Run 5    1         1         1         0
Run 6    1         2         0         1
Run 7    2         0         1         1
Run 8    2         1         0         2
Run 9    2         2         2         0
Step 4: Map the Factors and Levels onto the array generated. After mapping, each 0/1/2
entry is replaced by the corresponding sub-option of its field (for example, in the Age
column, 0 = Less than 18, 1 = More than 18, 2 = More than 60).
Step 5: Each run in the table above represents a test scenario to be covered in testing; each
run is converted into a test condition.
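An L9(3^4) array like the one above can also be generated programmatically. The sketch below uses one standard construction over integers mod 3 (the column formulas are one valid choice; orthogonal arrays are not unique, so the rows may be ordered differently from the table above) and verifies the defining property: every pair of columns contains each of the nine ordered level pairs exactly once.

```python
from itertools import product, combinations

def l9_array():
    """Build an L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each."""
    rows = []
    for a, b in product(range(3), repeat=2):  # 3 x 3 = 9 runs
        rows.append((a, b, (a + b) % 3, (a + 2 * b) % 3))
    return rows

def is_orthogonal(rows):
    """Check that every pair of columns contains all 9 ordered level pairs."""
    n_cols = len(rows[0])
    for i, j in combinations(range(n_cols), 2):
        pairs = {(row[i], row[j]) for row in rows}
        if len(pairs) != 9:
            return False
    return True

array = l9_array()
assert is_orthogonal(array)
for run, row in enumerate(array, start=1):
    print(f"Run {run}: {row}")
```

Mapping the 0/1/2 levels of each column onto the sub-options of the corresponding field (Age, Gender, Qualification, Mother Tongue) then yields the nine concrete test conditions.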
Limitations of OATS
None of the testing techniques guarantees 100% coverage; each technique has its own way of
selecting test conditions. Along similar lines, there are some limitations to using this
technique:
• Testing will fail if we fail to identify the important pairs of factors.
• There is a probability of missing the most important combination, which can result in
losing a defect.
• This technique will fail if we do not know the interactions between the pairs.
• Applying only this technique will not ensure complete coverage.
• It can find only those defects that arise from interactions between pairs of input
parameters.
Difference between Black Box Testing and White Box Testing:

Black Box Testing | White Box Testing
It is a way of testing software in which the internal structure of the program or code is hidden and nothing is known about it. | It is a way of testing software in which the tester has knowledge of the internal structure, code, or program of the software.
Implementation of code is not needed for black box testing. | Code implementation is necessary for white box testing.
No knowledge of implementation is needed. | Knowledge of implementation is required.
This testing can be initiated based on the requirement specification document. | This type of testing is started after the detailed design document is available.
It is the behavior testing of the software. | It is the logic testing of the software.
Can be done by trial-and-error methods. | Data domains along with inner or internal boundaries can be better tested.
Graph-Based Testing | Orthogonal Array Testing
Test Coverage: Graph-based testing aims to ensure that all possible paths and transitions within the graph are tested, including valid and invalid scenarios. | Test Coverage: Orthogonal testing ensures that various combinations of factors are tested efficiently, providing good coverage without testing every possible combination.
Complexity: It is particularly useful for complex systems but can be time-consuming to design and execute test cases. | Simplicity: It simplifies the test case design process and often leads to a smaller set of test cases compared to exhaustive testing.
Automation: Automation tools are often used to generate and execute test cases based on the graph model. | Manual or Automated: Orthogonal testing can be performed manually or using specialized tools to generate test cases based on orthogonal arrays.