
UNIT – II

Syllabus:
Software Engineering Practice: Software Engineering Practice, communication practices,
Planning Practices, Modeling Practices, Construction Practices, Deployment.
Testing Tactics: Software Testing Fundamentals, Black Box and White Box Testing, White Box
Testing, Basis Path Testing, Control Structure Testing, Black Box Testing.

INTRODUCTION
Software Engineering is a rapidly evolving field, and new tools and technologies are
constantly being developed to improve the software development process. By following the
principles of software engineering and using the appropriate tools and methodologies, software
developers can create high-quality, reliable, and maintainable software that meets the needs of
its users.

SOFTWARE ENGINEERING PRACTICE

In this part you’ll learn about the principles, concepts, and methods that comprise software
engineering practice.

People who create computer software practice the art or craft or discipline that is software
engineering. But what is software engineering “practice”?

What is it? Practice is a broad array of concepts, principles, methods, and tools that you must
consider as software is planned and developed. It represents the details (the technical considerations and
how-tos) that are below the surface of the software process: the things that you’ll need to actually
build high-quality computer software.

Who does it? The practice of software engineering is applied by software engineers and their
managers.

Why is it important? The software process provides everyone involved in the creation of a
computer-based system or product with a road map for getting to a destination successfully.
Practice provides you with the detail you’ll need to drive along the road. It tells you where the
bridges, the roadblocks, and the forks are located. It helps you understand the concepts and
principles that must be understood and followed to drive safely and rapidly. It instructs you on
how to drive, where to slow down, and where to speed up. In the context of software
engineering, practice is what you do day in and day out as software evolves from an idea to a
reality.

What are the steps? Three elements of practice apply regardless of the process model that is
chosen. They are: concepts, principles, and methods. A fourth element of practice—tools—
supports the application of methods.

What is the work product? Practice encompasses the technical activities that produce all
work products that are defined by the software process model that has been chosen.

We introduced a generic software process model composed of a set of activities that establish
a framework for software engineering practice.
The essence of Practice

In a classic book, How to Solve It, written before modern computers existed, George Polya
outlined the essence of problem solving, and consequently, the essence of software
engineering practice:

1. Understand the problem (communication and analysis).


2. Plan a solution (modelling and software design).
3. Carry out the plan (code generation).
4. Examine the result for accuracy (testing and quality assurance).

In the context of software engineering, these common sense steps lead to a series of essential
questions.

Understand the problem:


• Who has a stake in the solution to the problem?
That is, who are the stakeholders?
• What are the unknowns?
What data, functions, features, and behaviour are required to properly solve the
problem?
• Can the problem be compartmentalized?
Is it possible to divide problems into smaller units that may be easier to understand?
• Can the problem be represented graphically?
Can an analysis model be created to provide a better view of the problem?

Plan the solution:


• Have you seen similar problems before?
Are there patterns that are recognizable in a potential solution?
Is there existing software that implements the data, functions, features, and behaviour
that are required?
• Has a similar problem been solved?
If so, are elements of the solution reusable?
• Can sub-problems be defined?
If so, are elements of solutions readily apparent for the sub-problems?
• Can you represent a solution in a manner that leads to effective implementation?
Can a design model be created?

Carry out the plan:


• Does the solution conform to the plan?
Is source code traceable to the design model?
• Is each component part of the solution provably correct?
Have the design and code been reviewed, or better, have correctness proofs been applied
to the algorithm?

Examine the result:


• Is it possible to test each component part of the solution?
Has a reasonable testing strategy been implemented?
• Does the solution produce results that conform to the data, functions, features, and behaviour
that are required?
Has the software been validated against all stakeholder requirements?
CORE PRINCIPLES

The dictionary defines the word principle as “an important underlying law or assumption
required in a system of thought.” Throughout this book we discuss principles at many different
levels of abstraction. Some focus on software engineering as a whole, others consider a specific
generic framework activity (e.g., customer communication), and still others focus on software
engineering actions (e.g., architectural design) or technical tasks (e.g., write a usage scenario).

David Hooker has proposed seven core principles that focus on software engineering practice
as a whole. They are reproduced below.

The First Principle: The Reason It All Exists


A software system exists for one reason: to provide value to its users. All decisions should be made
with this in mind. Before specifying a system requirement, before noting a piece of system
functionality, before determining the hardware platforms or development processes, ask
yourself questions such as: Does this add real value to the system? If the answer is no, don’t
do it. All other principles support this one.

The Second Principle: KISS (Keep It Simple, Stupid)


Software design is not a haphazard process. There are many factors to consider in any design
effort. All design should be as simple as possible, but no simpler. This facilitates a more
easily understood and easily maintained system.
In the name of simplicity, we don’t need to discard essential features. Simple also does not
mean “quick and dirty.” The pay-off is software that is more maintainable and less error-prone.

The Third Principle: Maintain the Vision


A clear vision is essential to the success of a software project.
Compromising the architectural vision of a software system weakens and will eventually break
even a well-designed system. Having an empowered architect who can hold the vision and
enforce compliance helps ensure a very successful software project.

The Fourth Principle: What You Produce, Others Will Consume

The software system you build will be used by someone else. So always specify, design, and implement
knowing someone else will have to understand what you are doing.
Design, by keeping the implementers in mind. Code with concern for those who must maintain
and extend the system. Someone may have to debug the code you write, and that makes them
a user of your code. Making their job easier adds value to the system.

The Fifth Principle: Be Open to the Future

A system with a long lifetime has more value. In today’s computing environments, where
specifications change on a moment’s notice and hardware platforms become obsolete after just a
few months, software lifetimes are typically measured in months instead of years. However,
true “industrial-strength” software systems must endure far longer. To do this successfully,
these systems must be ready to adapt to these and other changes. This could very possibly
lead to the reuse of an entire system.

The Sixth Principle: Plan Ahead for Reuse


Reuse saves time and effort. Achieving a high level of reuse is arguably the hardest goal to
accomplish in developing a software system. The reuse of code and design has been proclaimed
as a major benefit of using object-oriented technologies. There are many techniques to realize
reuse at every level of the system development process, and they are well known and documented.
New literature is addressing the reuse of design in the form of software patterns.
Planning ahead for reuse reduces the cost and increases the value of both the reusable
components and the systems into which they are incorporated.

The Seventh Principle: Think!


This last principle is probably the most overlooked. Placing clear, complete thought before
action almost always produces better results.
When you think about something, you are more likely to do it right. You also gain
knowledge about how to do it right again. If you do think about something and still do it
wrong, it becomes valuable experience.
When clear thought has gone into a system, value comes out. Applying the first Six
Principles requires intense thought, for which the potential rewards are enormous.
If every software engineer and every software team simply followed Hooker’s seven
principles, many of the difficulties we experience in building complex computer-based
systems would be eliminated.

COMMUNICATION PRACTICES

Before customer requirements can be analysed, modelled, or specified they must be gathered
through a communication (also called requirements elicitation) activity. A customer has a
problem that may be amenable to a computer- based solution. A developer responds to the
customer’s request for help. Communication has begun. But the road from communication to
understanding is often full of potholes.
Effective communication (among technical peers, with the customer and other
stakeholders, and with project managers) is among the most challenging activities that
confront a software engineer. In this context, we discuss communication principles that apply
equally to all forms of communication that occur within a software project.

Principle #1: Listen.


Try to focus on the speaker’s words, rather than formulating your response to those words.
Ask for clarification if something is unclear, but avoid constant interruptions. Never become
contentious in your words or actions (e.g., rolling your eyes or shaking your head) as a person
is talking.

Principle #2: Prepare before you communicate.


Spend the time to understand the problem before you meet with others. If necessary, do some
research to understand business domain jargon. If you have responsibility for conducting a
meeting, prepare an agenda in advance of the meeting.

Principle #3: Someone should facilitate the activity.


Every communication meeting should have a leader (facilitator) (1) to keep the conversation
moving in a productive direction; (2) to mediate any conflict that does occur; and (3) to ensure
that other principles are followed.

Principle #4: Face–to-face communication is best.


But it usually works better when some other representation of the relevant information is
present. For example, a participant may create a drawing or a “strawman” document that
serves as a focus for discussion.
Principle #5: Take notes and document decisions:
Things have a way of falling into the cracks. Someone participating in the communication
should serve as a “recorder” and write down all important points and decisions.

Principle #6: Strive for collaboration.


Collaboration occurs when the collective knowledge of the members of a team is combined to describe
product or system functions or features. Each small collaboration serves to build trust among
team members and creates a common goal for the team.

Principle #7: Stay focused; modularize your discussion.


The more people involved in any communication, the more likely it is that discussion will
bounce from one topic to the next. The facilitator should keep the conversation modular,
leaving one topic only after it has been resolved.

Principle #8: If something is unclear, draw a picture.


Verbal communication goes only so far. A sketch or drawing can often provide clarity when
words fail to do the job.

Principle #9: (a) Once you agree to something, move on; (b) If you can’t agree to
something, move on; (c) If a feature or function is unclear and cannot be clarified at the
moment, move on. Communication, like any software engineering activity, takes time.
Rather than iterating endlessly, the people who participate should recognize that many topics
require discussion and that “moving on” is sometimes the best way to achieve
communication agility.

Principle #10: Negotiation is not a contest or a game. It works best when both parties
win.
There are many instances in which the software engineer and the customer must negotiate
functions and features, priorities, and delivery dates. If the team has collaborated well, all
parties have a common goal. Therefore, negotiation will demand compromise from all
parties.

PLANNING PRACTICES

The communication activity helps a software team to define its overall goals and
objectives. However, understanding these goals and objectives is not the same as defining a
plan for getting there. The planning activity encompasses a set of management and technical
practices that enable the software team to define a road map as it travels towards its strategic
goal and technical objectives.
Regardless of the rigor with which planning is conducted, the following principles always
apply.

Principle #1: Understand the scope of the project. It’s impossible to use a road map if you
don’t know where you’re going. Scope provides the software team with a destination.

Principle #2: Involve the customer in the planning activity. The customer defines priorities
and establishes the project constraints.

Principle #3: Recognize that planning is iterative. As work begins, it is very likely that
things will change. As a consequence, the plan must be adjusted to accommodate these
changes. In addition, iterative and incremental process models dictate re-planning based on
feedback received from users.

Principle #4: Estimate based on what you know. The intent of estimation is to provide an
indication of effort, cost, and task duration, based on the team’s current understanding of the
work to be done.

Principle #5: Consider risk as you define the plan. If the team has defined risks that have
high impact and high probability, contingency planning is necessary.

Principle #6: Be realistic. People don’t work 100 percent every day. Noise always enters
into any human communication. Omission and ambiguity are facts of life. Change will occur.
Even the best software engineers make mistakes. These and other realities should be
considered as a project plan is established.

Principle #7: Adjust granularity as you define the plan. Granularity refers to the level of
detail that is introduced as a project plan is developed. A “fine granularity” plan provides
significant work detail that is planned over relatively short time increments.

Principle #8: Define how you intend to ensure quality. The plan should identify how the
software team intends to ensure quality. If formal technical reviews are to be conducted, they
should be scheduled.

Principle #9: Describe how you intend to accommodate change. Even the best planning
can be obviated by uncontrolled change. The software team should identify how changes are
to be accommodated as software engineering work proceeds.

Principle #10: Track the plan frequently and make adjustments as required. Software
projects fall behind schedule one day at a time. Therefore, it makes sense to track progress on
a daily basis, looking for problem areas and situations in which scheduled work does not
conform to actual work conducted. When slippage is encountered, the plan is adjusted
accordingly. The following questions make up what is called the W5HH principle:

Why is the system being developed? All parties should assess the validity of business
reasons for the software work. Stated in another way, does the business purpose justify the
expenditure of people, time, and money?

What will be done? Identify the functionality to be built, and by implication, the task
required to get the job done.

When will it be accomplished? Establish a workflow and timeline for key project tasks and
identify the milestones required by the customer.

Who is responsible for a function? The role and responsibility of each member of the
software team must be defined.

Where are they located organizationally? Not all roles and responsibilities reside within
the software team itself. The customer, users, and other stakeholders also have responsibilities.

How will the job be done technically and managerially? Once product scope is
established, a management and technical strategy for the project must be defined.

How much of each resource is needed? The answer to this question is derived by
developing estimates based on answers to earlier questions.
The answers to the above questions are important regardless of the size or complexity of a
software project. But how does the planning process begin?

MODELING PRACTICE

Models are created to gain a better understanding of the actual entity to be built. When
the entity is a physical thing, we can build a model that is identical in form and shape but smaller
in scale. However, when the entity is software, our model must take a different form. It must
be capable of representing the information that software transforms, the architecture and
functions that enable the transformation to occur, the features that users desire, and the
behaviour of the system as the transformation is taking place.
Two classes of models are created:

1. Analysis models 2. Design models.

1. Analysis models represent the customer requirements by depicting the software in three
different domains: the information domain, the functional domain, and the behavioural domain.

2. Design models represent characteristics of the software that help
practitioners to construct it effectively.
A large number of analysis modelling methods have been developed. Each analysis method
has a unique point of view.

However, all analysis methods are related by a set of operational principles.

Principle #1: The information domain of a problem must be represented and understood.

The information domain encompasses the data that flow into the system and the data stores that
collect and organize persistent data objects.

Principle #2: The functions that the software performs must be defined.

Software functions provide direct benefit to end-users. Some functions transform data
that flow into the system; in other cases, functions effect some level of control over internal
software processing or external system elements.

Principle #3: The behaviour of the software must be represented.

The behaviour of computer software is driven by its interaction with the external environment.
Input provided by end-users, control data provided by an external system, or monitoring data
collected over a network all cause the software to behave in a specific way.

Principle #4: The models that depict information, function, and behaviour must be
partitioned in a manner that uncovers detail in a layered fashion.

Analysis modelling is the first step in software engineering problem solving. It allows the
practitioner to understand the problem better and establishes a basis for the solution (design).
Complex problems are difficult to solve in their entirety. For this reason, we use a divide and
conquer strategy. A large, complex problem is divided into sub-problems until each sub-
problem is relatively easy to understand. This concept is called partitioning, and it is a key
strategy in analysis modelling.
Principle #5: The analysis task should move from essential information toward
implementation detail.

Analysis modelling begins by describing the problem from the end-user’s perspective. The
“essence” of a problem is described without any consideration of how a solution will be
implemented.

Design Modelling Principles

The design model created for software provides a variety of different views of system. There
is no shortage of methods for deriving various elements of a software design.

Some methods are data-driven, allowing the data structure to dictate the program architecture
and the resultant processing component.

Others are pattern-driven, using information about the problem domain (the analysis model)
to develop architectural styles and processing patterns. However, there is a set of design principles
that can be applied regardless of the method that is used.

Principle #1: Design should be traceable to the analysis model.

The analysis model describes the information domain of the problem, user-visible functions,
system behaviour, and a set of analysis classes that package business objects with the
methods that service them. The design model translates this information into an
architecture, a set of subsystems that implement major functions, and a set of component-
level designs that realize analysis classes.

Principle #2: Always consider the architecture of the system to be built.

Software architecture is the skeleton of the system to be built. It affects interfaces, data
structures, program control flow behaviour, the manner in which testing can be conducted and
the maintainability of the resultant system.

Principle #3: Design of data is as important as design of processing functions.

Data design is an essential element of architectural design. The manner in which data objects
are realized within the design cannot be left to chance. A well-structured data design helps to
simplify program flow, makes design and implementation of software components easier, and
makes overall processing more efficient.

Principle #4: Interfaces (both internal and external) must be designed with care.

The manner in which data flow between the components of a system has much to do with
processing efficiency, error propagation, and design simplicity. A well-designed interface makes
integration easier and assists the tester in validating component functions.

Principle #5: User interface design should be tuned to the needs of the end-user.

However, in every case, it should be stress free and easy to use. The user interface is the visible
manifestation of the software. A poor interface design often leads to the perception that the
software is “bad”.

Principle #6: Components should be functionally independent.


Functional independence is a measure of the “single-mindedness” of a software component.
The functionality that is delivered by a component should be cohesive.

Note: Cohesion means the degree to which the elements within a module work together to fulfil a
single, well-defined purpose; that is, the module should focus on one and only one function or sub-
function.

Principle #7: Components should be loosely coupled to one another and to the external
environment.

Coupling is achieved in many ways: via component interfaces, by messaging, and through global
data. As the level of coupling increases, the likelihood of error propagation also increases and
the overall maintainability of the software decreases. Therefore, component coupling should
be kept as low as is reasonably possible.

Note: Coupling means the degree of interdependency between modules.
High coupling + low cohesion => can make a system difficult to change and test.
Low coupling + high cohesion => can make a system easier to maintain.

Principle #8: The design representation (model) should be easily understandable.

The purpose of design is to communicate information to practitioners who will generate code,
to those who will test the software, and others who may maintain the software in the future. If
the design is difficult to understand, it will not serve as an effective communication medium.

Principle #9: The design should be developed iteratively.

With each iteration, the designer should strive for greater simplicity. Like most of the creative
activities, design occurs iteratively. The first iteration works to refine the design and correct
errors, but later iterations should strive to make the design as simple as is possible.

CONSTRUCTION PRACTICE

The construction activity encompasses a set of coding and testing tasks that lead to operational
software that is ready for delivery to the customer or end-user.

In modern software engineering work, coding may be:


(1) the direct creation of programming language source code;
(2) the automatic generation of source code using an intermediate design-like representation of
the component to be built;
(3) the automatic generation of executable code using a fourth generation programming
language.

Coding Principle and Concepts


The principles and concepts that guide the coding task are closely aligned with
programming style, programming languages, and programming methods. However, there are
a number of fundamental principles that can be stated:

Preparation principles:

Before you write one line of code, be sure you:


1. Understand the problem you’re trying to solve.
2. Understand basic design principles and concepts.
3. Pick a programming language that meets the needs of the software to be built and the
environment in which it will operate.
4. Select a programming environment that provides tools that will make your work easier.
5. Create a set of unit tests that will be applied once the component you code is completed.
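As a sketch of principle 5, the unit tests for a component can be written before the component itself. The example below is illustrative only; the billing module and its Invoice class are hypothetical names, and Python's unittest module is used.

    import unittest
    from billing import Invoice   # hypothetical module and class, to be coded after these tests

    class InvoiceTotalTests(unittest.TestCase):
        def test_empty_invoice_totals_zero(self):
            self.assertEqual(Invoice([]).total(), 0)

        def test_total_sums_line_items(self):
            self.assertEqual(Invoice([10, 15, 5]).total(), 30)

    if __name__ == "__main__":
        unittest.main()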

Coding principles:

As you begin writing code, be sure you:


1. Constrain your algorithms by following structured programming practice.
2. Select data structures that will meet the needs of the design.
3. Understand the software architecture and create interfaces that are consistent.
4. Keep conditional statements as simple as possible.
5. Create nested loops in a way that makes them easily testable.
6. Select meaningful variable names and follow other local coding standards.
7. Write code that is self-documenting.
8. Create a visual layout that aids understanding.
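A minimal sketch of several of the coding principles above (meaningful names, simple conditional statements, and self-documenting code); the names and values are invented purely for illustration.

    MAX_RETRIES = 3   # named constant instead of a "magic number"

    def should_retry(attempt_count, last_error):
        """Retry transient failures, but never more than MAX_RETRIES times."""
        is_transient = last_error in ("timeout", "connection_reset")
        has_attempts_left = attempt_count < MAX_RETRIES
        return is_transient and has_attempts_left   # each condition kept simple and named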

Validation principles:

After you’ve completed your first coding pass, be sure you:


1. Conduct a code walkthrough when appropriate.
2. Perform unit tests and correct the errors you’ve uncovered.
3. Refactor the code.

Testing Principles:
In a classic book on software testing, Glen Myers states a number of rules that can
serve well as testing objectives:

• Testing is a process of executing a program with the intent of finding an error.


• A good test case is one that has a high probability of finding an as-yet undiscovered error.
• A successful test is one that uncovers an as-yet undiscovered error.
These objectives imply a dramatic change in viewpoint for some software developers.
They run counter to the commonly held view that a successful test is one in which no errors
are found. Our objective is to design tests that systematically uncover different classes of
errors and to do so with a minimum amount of time and effort. These errors can be corrected
subsequently.

Davis suggests a set of testing principles that have been adapted for use in this book:

Principle #1: All tests should be traceable to customer requirements.

The objective of software testing is to uncover errors. It follows that the most severe defects
(from the customer’s point of view) are those that cause the program to fail to meet its
requirements/goals.

Principle #2: Tests should be planned long before testing begins.

Test planning can begin as soon as the analysis model is complete. Detailed definition of test
cases can begin as soon as the design model has been solidified. Therefore, all tests can be
planned and designed before any code has been generated.

Principle #3: The Pareto principle applies to software testing.


Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during
testing will likely be traceable to 20 percent of all program components. The problem, of
course, is to isolate these suspect components and to thoroughly test them.

Principle #4: Testing should begin “in the small” and progress toward testing “in the
large”.

The first tests planned and executed generally focus on individual components. As testing
progresses, focus shifts in an attempt to find error in integrated clusters of components and
ultimately in the entire system.

Principle #5: Exhaustive testing is not possible.

The number of path permutations for even a moderately sized program is exceptionally large.
For this reason, it is impossible to execute every combination of paths during testing. It is
possible, however, to adequately cover program logic and to ensure that all conditions in the
component- level design have been exercised.

Deployment practices

The deployment activity encompasses three actions: delivery, support, and feedback.
Because modern software process models are evolutionary in nature, deployment happens not
once, but a number of times as software moves towards completion. Each delivery cycle
provides the customer and end-users with an operational software increment that provides
usable functions and features. The delivery of a software increment represents an important
milestone for any software project.
A number of key principles should be followed as the team prepares to deliver an increment:

Principle #1: Customer expectations for the software must be managed.


Too often, the customer expects more than the team has promised to deliver, and disappointment occurs
immediately. This results in feedback that is not productive and which ruins team morale.

Principle #2: A complete delivery package should be assembled and tested.


A CD-ROM or other media containing all executable software, support data files, support
documents, and other relevant information must be assembled and thoroughly beta-tested
with actual users.
All installation scripts and other operational features should be thoroughly exercised in all
possible computing configurations (i.e., hardware, operating systems, peripheral devices,
networking arrangements).

Principle #3: A support regime must be established before the software is delivered.
An end-user expects responsiveness and accurate information when a question or problem
arises. Support should be planned, support materials should be prepared, and appropriate
record-keeping mechanisms should be established so that the software team can conduct a
categorical assessment of the kinds of support that will be required.

Principle #4. Appropriate instructional materials must be provided to end-users.


The software team delivers more than the software itself. Appropriate training aids should be
developed, troubleshooting guidelines should be provided, and a “what’s different about this
software increment” description should be published.

Principle #5: Buggy software should be fixed first, delivered later.


Under time pressure, some software organizations deliver low-quality increments with a
warning to the customer that bugs “will be fixed in the next release”. This is a mistake. There’s
a saying in the software business: “Customers will forget you delivered a high-quality product
a few days late, but they will never forget the problems that a low-quality product caused them. The
software reminds them every day.”

Testing Tactics:

Software testing fundamentals

Software testability is measured with respect to the efficiency and effectiveness of testing.
Efficient software architecture is very important for software testability. Software testing is
a time-consuming, necessary activity in the software development lifecycle, and making
this activity easier is one of the important tasks for software companies as it helps to reduce
costs and increase the probability of finding bugs. There are certain metrics that could be
used to measure testability in most of its aspects. Sometimes, testability is used to mean
how adequately a particular set of tests will cover the product.
• Testability helps to determine the efforts required to execute test activities.
• Less the testability larger will be efforts required for testing and vice versa.

Factors of Software Testability

Below are some of the metrics to measure software testability:


1. Operability: “The better it works, the more efficiently it can be tested.”
• If a system is designed & implemented with quality in mind, relatively few bugs will
block execution of tests, allowing testing to progress without delay.

2. Observability: “What you see is what you test.”


• Distinct output is generated for each input.
System states and variables are visible or queriable during execution. All factors
affecting the output are visible.
• Incorrect output is easily identified.
• Internal errors are automatically detected & reported. Source code is accessible.

3. Controllability: “The better we can control the software, the more the testing can be
automated and optimized.”
• All possible outputs can be generated through some combination of inputs. Software
and hardware states and variables can be controlled directly by the test engineer. Tests
can be conveniently specified, automated & reproduced.
• Input and output formats are consistent and structured.

4. Decomposability: “By controlling the scope of testing, we can more quickly isolate
problems and perform smarter retesting.”
• The software system is built from independent modules.
• Software modules can be tested independently.

5. Simplicity: “The less there is to test, the more quickly we can test it.”
• Functional simplicity (e.g., the feature set is the minimum necessary to meet
requirements).
• Structural simplicity (e.g., architecture is modularized to limit the propagation of
faults).
• Code simplicity (e.g., a coding standard is adopted for ease of inspection and
maintenance).

6. Stability: “The fewer the changes, the fewer the disruptions to testing.” Changes to the
software are infrequent.
• Changes to the software are controlled.
• Changes to the software do not invalidate existing tests. The software recovers well
from failures.

7. Understandability: “The more information we have, the smarter we will test.”


• Dependencies between internal, external, and shared components are well understood.
• Changes to the design are communicated. Technical documentation is instantly
accessible, well organized, specific, accurate and detailed.

8. Availability: “The more accessible the objects are, the easier it is to design test cases.”
It is all about the accessibility of objects or entities for performing the testing, including
bugs, source code, etc.

Test Characteristics.
1. A good test has a high probability of finding an error.
To achieve this goal, the tester must understand the software and attempt to develop
a mental picture of how the software might fail. Ideally, the classes of failure are
probed. For example, one class of potential failure in a graphical user interface is
the failure to recognize proper mouse position. A set of tests would be designed to
exercise the mouse in an attempt to demonstrate an error in mouse position
recognition.
2. A good test is not redundant.
Testing time and resources are limited. There is no point in conducting a test that
has the same purpose as another test. Every test should have a different purpose
(even if it is subtly different).
3. A good test should be “best of breed”.
In a group of tests that have a similar intent, time and resource limitations may
mitigate toward the execution of only a subset of these tests. In such cases, the test that has
the highest likelihood of uncovering a whole class of errors should be used.
4. A good test should be neither too simple nor too complex.
Although it is sometimes possible to combine a series of tests into one test case, the
possible side effects associated with this approach may mask errors. In general, each test
should be executed separately.

Software Testing Strategies


Software testing is the process of evaluating a software application to identify if it meets
specified requirements and to identify any defects. The following are common testing
strategies:
1. Black box testing – Tests the functionality of the software without looking at the internal
code structure.
2. White box testing – Tests the internal code structure and logic of the software.
3. Unit testing – Tests individual units or components of the software to ensure they are
functioning as intended.
4. Integration testing – Tests the integration of different components of the software to
ensure they work together as a system.
5. Functional testing – Tests the functional requirements of the software to ensure they are
met.
6. System testing – Tests the complete software system to ensure it meets the specified
requirements.
7. Acceptance testing – Tests the software to ensure it meets the customer’s or end-user’s
expectations.
8. Regression testing – Tests the software after changes or modifications have been made to
ensure the changes have not introduced new defects.
9. Performance testing – Tests the software to determine its performance characteristics
such as speed, scalability, and stability.
10. Security testing – Tests the software to identify vulnerabilities and ensure it meets
security requirements.

Software testing is a type of investigation to find out whether there is any defect or error present in
the software, so that the errors can be reduced or removed to increase the quality of the
software, and to check whether it fulfils the specified requirements or not.

According to Glen Myers, software testing has the following objectives:


• The process of investigating and checking a program to find whether there is an error or
not, and whether it fulfils the requirements or not, is called testing.
• When the number of errors found during testing is high, it indicates that the testing
was good and is a sign of a good test case.
• Finding an unknown error that wasn’t discovered yet is a sign of a successful and good
test case.

White-box Testing
White-box testing is a testing technique in which software’s internal structure, design, and
coding are tested to verify input-output flow and improve design, usability, and security.
In white box testing, code is visible to testers, so it is also called Clear box testing, Open
box testing, Transparent box testing, Code-based testing, and Glass box testing.
White box Testing is Performed in 2 Steps:
1. The tester should understand the code well.
2. The tester should write some code for test cases and execute them.

Reasons to perform WBT?


➢ To ensure:
(1) that all independent paths within a module have been exercised at least once,
(2) that all logical decisions are exercised on their true and false sides,
(3) that all loops are executed at their boundaries and within their operational bounds, and
(4) that internal data structures are exercised to ensure their validity.
➢ To discover the following types of bugs:
• Logical errors tend to creep into our work when we design and implement functions,
conditions, or controls that are out of the mainstream of the program
• Design errors due to the difference between the logical flow of the program and the
actual implementation
• Typographical and syntax errors

White-Box Testing Techniques:


Here we discuss two white-box testing techniques:
1. Basis Path Testing
2. Control Structure Testing

Basis Path Testing:


Basis Path Testing is a white-box testing technique based on the control structure of a
program or a module. Using this structure, a control flow graph is prepared and the various
possible paths present in the graph are executed as a part of testing. Since this testing is
based on the control structure of the program, it requires complete knowledge of the
program’s structure.
To design test cases using this technique, four steps are followed:

1. Construct the Control Flow Graph


2. Compute the Cyclomatic Complexity of the Graph
3. Identify the Independent Paths
4. Design Test cases from Independent Paths

1. Control Flow Graph – A control flow graph (or simply, flow graph) is a directed graph
which represents the control structure of a program or module. A control flow graph G = (V, E)
has V nodes/vertices and E edges. A control flow graph can also have:
• Junction Node – a node with more than one arrow entering it.
• Decision Node – a node with more than one arrow leaving it.
• Region – an area bounded by edges and nodes (the area outside the graph is also counted as a
region).
Standard flow graph notations are used for sequential statements and for the decision and loop
constructs (refer to the class notes for the notation diagrams).

2. Cyclomatic complexity is a software metric that provides a quantitative measure of


the logical complexity of a program. When used in the context of the basis path testing
method, the value computed for Cyclomatic complexity defines the number of
independent paths in the basis set of a program and provides you with an upper bound
for the number of tests that must be conducted to ensure that all statements have been
executed at least once. Cyclomatic complexity has a foundation in graph theory and
provides you with an extremely useful software metric. Complexity is computed in one
of three ways:
1. The number of regions of the flow graph corresponds to the Cyclomatic complexity.
V(G) = number of regions in the graph
2. Cyclomatic complexity V(G) for a flow graph G is defined as
V(G) = E – N + 2
where E is the number of flow graph edges and N is the number of flow graph
nodes.
3. Cyclomatic complexity V(G) for a flow graph G is also defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
Hence, using all the three formulae above, the Cyclomatic complexity obtained remains the
same. All these three formulae can be used to compute and verify the Cyclomatic
complexity of the flow graph.
Note –
1. For one function [e.g. Main ( ) or Factorial( ) ], only one flow graph is constructed.
2. If in a program, there are multiple functions, then a separate flow graph is constructed
for each one of them. Also, in the Cyclomatic complexity formula, the value of ‘p’ is set
depending on the number of graphs present in total.
3. If a decision node has exactly two arrows leaving it, then it is counted as one decision
node. However, if there are more than 2 arrows leaving a decision node, it is computed
using this formula:
d = k – 1, where k is the number of arrows leaving the decision node.
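As a sketch, the two formulae V(G) = E – N + 2 and V(G) = P + 1 can be computed for any flow graph once its edges and predicate nodes are listed; the node numbering below is hypothetical (a routine with one if and one while loop).

    def cyclomatic_complexity(edges, predicate_nodes):
        # edges: list of (from_node, to_node); predicate_nodes: decision nodes of the flow graph
        nodes = {n for edge in edges for n in edge}
        v_from_edges = len(edges) - len(nodes) + 2    # V(G) = E - N + 2
        v_from_predicates = len(predicate_nodes) + 1  # V(G) = P + 1
        assert v_from_edges == v_from_predicates      # both formulae agree for a well-formed graph
        return v_from_edges

    # hypothetical flow graph: node 2 is an if decision, node 5 is a while decision
    edges = [(1, 2), (2, 3), (2, 4), (4, 5), (5, 6), (6, 5), (5, 7), (3, 8), (7, 8)]
    print(cyclomatic_complexity(edges, predicate_nodes={2, 5}))   # -> 3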

3) Independent Program Paths


An independent path is any path through the program that introduces at least one new
set of processing statements or a new condition.
When stated in terms of a flow graph, an independent path must move along at least one
edge that has not been traversed before the path is defined.
4) Design Test Cases:
Finally, after obtaining the independent paths, test cases can be designed where each test
case represents one or more independent paths.

Graph Matrices:
➢ A graph matrix is a square matrix whose size (i.e., number of rows and columns) is
equal to the number of nodes on the flow graph.
➢ Each row and column corresponds to an identified node, and matrix entries
correspond to connections (an edge) between nodes.
➢ Each node on the flow graph is identified by numbers, while each edge is identified
by letters. A letter entry is made in the matrix to correspond to a connection between
two nodes.
➢ The graph matrix is nothing more than a tabular representation of a flow graph.
However, by adding a link weight to each matrix entry, the graph matrix can
become a powerful tool for evaluating program control structure during testing.
➢ The link weight provides additional information about control flow.
➢ In its simplest form, the link weight is 1 (a connection exists) or 0 (a connection
does not exist).
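A minimal sketch of a graph matrix with 0/1 link weights (the node numbering and edges are assumed for illustration). Summing (connections – 1) over each row that has more than one connection, and adding 1, again yields the cyclomatic complexity.

    nodes = [1, 2, 3, 4, 5]
    edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]   # hypothetical flow graph

    # square matrix: entry [i][j] is 1 when an edge connects node i+1 to node j+1
    matrix = [[0] * len(nodes) for _ in nodes]
    for src, dst in edges:
        matrix[src - 1][dst - 1] = 1                   # link weight 1: a connection exists

    row_connections = [sum(row) for row in matrix]
    v_g = sum(c - 1 for c in row_connections if c > 1) + 1
    print(v_g)   # -> 2 (node 2 is the only decision node)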

CHECK NOTES FOR EXAMPLE FOR BASIS PATH TESTING & GRAPH
MATRICES

Basis Path Testing can be applicable in the following cases:


1. More Coverage – Basis path testing provides the best code coverage as it aims to
achieve maximum logic coverage instead of maximum path coverage. This results in an
overall thorough testing of the code.
2. Maintenance Testing – When software is modified, it is still necessary to test the
changes made in the software, which as a result requires path testing.
3. Unit Testing – When a developer writes the code, he or she tests the structure of the
program or module themselves first. This is why basis path testing requires enough
knowledge about the structure of the code.
4. Integration Testing – When one module calls other modules, there are high chances of
interface errors. In order to avoid such errors, path testing is performed to
test all the paths on the interfaces of the modules.
5. Testing Effort – Since the basis path testing technique takes into account the
complexity of the software (i.e., program or module) while computing the Cyclomatic
complexity, therefore it is intuitive to note that testing effort in case of basis path testing
is directly proportional to the complexity of the software or program.

Control structure testing


Control structure testing is used to increase the coverage area by testing various control
structures present in the program. The different types of testing performed under control
structure testing are as follows-
1. Condition Testing
2. Data Flow Testing
3. Loop Testing

1.Condition Testing:
Condition testing is a test case design method which ensures that the logical conditions
and decision statements are free from errors.
The errors present in logical conditions can be incorrect Boolean operators, missing
parentheses in a Boolean expression, errors in relational operators or arithmetic expressions,
and so on.
The common types of logical conditions that are tested using condition testing are-
1. A relational expression, like E1 op E2, where ‘E1’ and ‘E2’ are arithmetic expressions and
‘op’ is a relational operator.
2. A simple condition like any relational expression preceded by a NOT (~) operator. For
example, (~E1) where ‘E1’ is a relational expression and ‘~’ denotes the NOT operator.
3. A compound condition consists of two or more simple conditions, Boolean operator,
and parenthesis. For example, (E1 & E2)|(E2 & E3) where E1, E2, E3 denote arithmetic
expression and ‘&’ and ‘|’ denote AND or OR operators.
4. A Boolean expression consists of operands and a Boolean operator like ‘AND’, OR,
NOT. For example, ‘A|B’ is a Boolean expression where ‘A’ and ‘B’ denote operands
and | denotes OR operator.
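A minimal sketch of condition testing (the function and the inputs are hypothetical): the test inputs are chosen so that every simple condition inside the compound condition takes both its true and false outcomes at least once.

    def eligible(age, has_id, blocked):
        # compound condition: (age >= 18 AND has_id) OR (NOT blocked)
        return (age >= 18 and has_id) or (not blocked)

    test_cases = [
        # (age, has_id, blocked) -> expected result
        ((20, True,  True),  True),    # relational condition true, has_id true, NOT blocked false
        ((20, False, True),  False),   # has_id false drives the AND to false
        ((15, True,  False), True),    # NOT blocked alone makes the OR true
        ((15, False, True),  False),   # every simple condition takes its false outcome
    ]
    for args, expected in test_cases:
        assert eligible(*args) == expected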

2. Data Flow Testing:

The data flow testing method selects test paths of a program based on the locations of the
definitions and uses of the variables in the program. To illustrate the data flow testing approach,
suppose that each statement in a program is assigned a unique statement number and
that each function cannot modify its parameters or global variables.

For example, for a statement with S as its statement number:


DEF (S) = {X | Statement S has a definition of X}
USE (S) = {X | Statement S has a use of X}
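For instance, the DEF and USE sets can be listed statement by statement for a small routine. The function and statement numbers below are assumed purely for illustration.

    def running_total(limit):      # S1: DEF(S1) = {limit}
        total = 0                  # S2: DEF(S2) = {total}
        i = 1                      # S3: DEF(S3) = {i}
        while i <= limit:          # S4: USE(S4) = {i, limit}  (predicate use)
            total = total + i      # S5: DEF(S5) = {total}, USE(S5) = {total, i}
            i = i + 1              # S6: DEF(S6) = {i}, USE(S6) = {i}
        return total               # S7: USE(S7) = {total}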
Steps of Data Flow Testing

• Creation of a data flow graph.


• Selecting the testing criteria.
• Classifying paths that satisfy the selection criteria in the data flow graph.
• Develop path predicate expressions to derive test input.

The life cycle of data in programming code

• Definition (DEF): It includes the defining, creation, and initialization of data variables and
the allocation of memory to the data object.
• Usage (USE): It refers to the use of the data variable in the code. Data can be used in
two ways: as a predicate (P-USE) or in computational form (C-USE).
• Deletion (KILL): Deletion of the memory allocated to the variables.

Types of Data Flow Testing

• Static Data Flow Testing


No actual execution of the code is carried out in Static Data Flow testing. Generally,
the definition, usage and kill pattern of the data variables is scrutinized through a
control flow graph.

• Dynamic Data Flow Testing


The code is executed to observe the transitional results. Dynamic data flow
testing includes:

• Identification of definition and usage of data variables.


• Identifying viable paths between definition and usage pairs of data variables.
• Designing & crafting test cases for these paths.
Advantages of Data Flow Testing

• Variables used but never defined,


• Variables defined but never used,
• Variables defined multiple times before actually being used,
• Deallocating variables before using them.

Data Flow Testing Limitations

• Testers require good knowledge of programming.


• Time-consuming
• Costly process.

Data Flow Testing Coverage

Following are the test selection criteria


1. All-defs:

For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including the def-clear path from node i to

• Edge (j,k) having a p-use of x or


• Node j having a global c-use of x

2. All c-uses:

For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including the def-clear path from node i to all nodes j having a global c-
use of x in j.
3. All p-uses:

For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including the def-clear path from node i to all edges (j,k) having p-use of
x on edge (j,k).
4. All p-uses/Some c-uses:

It is similar to the all p-uses criterion except that when variable x has no global p-use, it reduces to
some c-uses criterion as given below
5. Some c-uses:

For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including the def-clear path from node i to some nodes j having a global
c-use of x in node j.
6. All c-uses/Some p-uses:

It is similar to the all c-uses criterion except that when variable x has no global c-use, it reduces to
some p-uses criterion as given below:
7. Some p-uses:

For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including def-clear paths from node i to some edges (j,k) having a p-use
of x on edge (j,k).
8. All uses:
It is a combination of the all p-uses criterion and the all c-uses criterion.
9. All du-paths:

For every variable x and node i in a way that x has a global declaration in node i, pick a
comprehensive path including all du-paths from node i

• To all nodes j having a global c-use of x in j and


• To all edges (j,k) having a p-use of x on (j,k).

Let us understand this with the help of an example.
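The numbered listing that this example refers to is not reproduced in this material; the following Python sketch is a reconstruction that is consistent with the conditions and paths described below (the statement numbering 1–8 is assumed).

    x = int(input("x = "))   # statement 1: x is defined
    if x > 0:                # statement 2: predicate use of x
        a = x + 1            # statement 3: computational use of x, definition of a
    elif x <= 0:             # statement 4: predicate use of x
        while x < 1:         # statement 5: predicate use of x
            x = x + 1        # statement 6: computational use and redefinition of x
        a = x + 1            # statement 7: computational use of x, definition of a
    print(a)                 # statement 8: computational use of a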

There are 8 statements in this code. In this code we cannot cover all 8 statements in a single
path: if the condition at statement 2 is true then statements 4, 5, 6, and 7 are not traversed, and if
the condition at statement 4 is true then statement 3 is not traversed.
Hence we will consider two paths so that we can cover all the statements.

x= 1

Path – 1, 2, 3, 8

Output = 2

If we consider x = 1: in statement 1, x is assigned the value 1; then we move to statement 2, and since x > 0
we move to statement 3 (a = x + 1); at the end, control goes to statement 8 and prints a = 2.
For the second path, we assign x as -1.

Set x= -1

Path = 1, 2, 4, 5, 6, 5, 6, 5, 7, 8

Output = 2

x is set to -1: statement 1 assigns x as -1, and then control moves to statement 2, which is false as x
is smaller than 0 (the condition is x > 0 and here x = -1). It will then skip statement 3 and jump to statement 4; as
statement 4 is true (x <= 0, and here x is less than 0) it will jump to statement 5 (x < 1), which is true, and it will move
to statement 6 (x = x + 1), where x is increased by 1.
So,

x=-1+1

x=0
x becomes 0 and it goes to statement 5 (x < 1); as it is true, it will jump to statement

6 (x=x+1)

x=x+1

x= 0+1

x=1

x is now 1; control jumps to statement 5 (x < 1), and now the condition is false, so it will jump to statement 7
(a = x + 1) and set a = 2, as x is 1. At the end, the value of a is 2, and at statement 8 we get the output
as 2.

3) Loop Testing:

• Loops are widely used and are fundamental to many algorithms; hence, their testing
is very important. Errors often occur at the beginnings and ends of loops.
1. Simple loops: For simple loops of size n, test cases are designed that:
➢ Skip the loop entirely
➢ Only one pass through the loop
➢ 2 passes
➢ m passes, where m < n
➢ n-1, n, and n+1 passes (a sketch of these cases follows this list)

2. Nested loops: For nested loops, all the loops are set to their minimum count and we
start from the innermost loop. Simple loop tests are conducted for the innermost loop
and this is worked outwards till all the loops have been tested.

3. Concatenated loops: Independent loops, one after another. Simple loop tests are
applied for each. If they’re not independent, treat them like nesting.

4. Unstructured loops – This type of loop should be redesigned, whenever possible, to
reflect the use of the structured programming constructs.
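A minimal sketch of simple-loop testing (the function and data are hypothetical): the loop is exercised with zero passes, one pass, two passes, m passes (m < n), and n-1, n, and n+1 passes.

    def sum_first(values, n):
        total = 0
        for i in range(n):        # the simple loop under test
            total += values[i]
        return total

    data = list(range(1, 11))     # loop size n = 10 for these tests
    expected = {0: 0, 1: 1, 2: 3, 5: 15, 9: 45, 10: 55}   # skip, 1, 2, m, n-1, n passes
    for passes, result in expected.items():
        assert sum_first(data, passes) == result
    try:
        sum_first(data, 11)       # n+1 passes: the loop should fail in a controlled way
    except IndexError:
        pass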

Black-Box Testing:
Black-box testing, also called behavioural testing, focuses on the functional requirements of
the software. That is, black-box testing techniques enable you to derive sets of input
conditions that will fully exercise all functional requirements for a program. Black-box
testing is not an alternative to white-box techniques.
Black-box testing attempts to find errors in the following categories:
(1) incorrect or missing functions,
(2) interface errors,
(3) errors in data structures or external database access,
(4) behavior or performance errors, and
(5) initialization and termination errors.
Black-Box Testing techniques:

1)Graph-Based Testing Methods


Every application is built up of some objects. All such objects are identified and
the graph is prepared. From this object graph, each object relationship is identified and test
cases are written accordingly to discover the errors.
To accomplish these steps, you begin by creating a graph—a collection of nodes that
represent objects, links that represent the relationships between objects, node weights that
describe the properties of a node (e.g., a specific data value or state behavior), and link
weights that describe some characteristic of a link.

The symbolic representation of a graph is shown in Figure 18.8a. Nodes are represented as
circles connected by links that take a number of different forms. A directed link
(represented by an arrow) indicates that a relationship moves in only one direction. A
bidirectional link, also called a symmetric link, implies that the relationship applies in both
directions. Parallel links are used when a number of different relationships are established
between graph nodes.
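A minimal sketch of the node/link model (the objects, relationships, and link weights here are invented for illustration): each node/link relationship suggests at least one behavioural test case.

    graph = {
        "nodes": ["new_file_menu_select", "document_window", "document_text"],
        "links": [
            # (from_object, to_object, link weight describing the relationship)
            ("new_file_menu_select", "document_window", "generates (< 1.0 sec)"),
            ("document_window", "document_text", "contains"),
        ],
    }

    # Derive one test per relationship, e.g. verify that selecting "new file"
    # generates a document window in less than 1 second.
    for src, dst, weight in graph["links"]:
        print(f"test: {src} -> {dst} [{weight}]")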

2) Equivalence Partitioning
This technique is also known as Equivalence Class Partitioning (ECP). In this technique,
input values to the system or application are divided into different classes or groups based on
their similarity in the outcome.
Hence, instead of using each and every input value, we can now use any one value from the
group/class to test the outcome. This way, we can maintain test coverage while reducing
the amount of rework and, most importantly, the time spent.

For example:
Suppose the “AGE” text field accepts only numbers from 18 to 60.
There will be three sets of classes or groups.

Two invalid classes will be:


a) Less than or equal to 17.
b) Greater than or equal to 61.
A valid class will be anything between 18 and 60.
We have thus reduced the test cases to only 3 test cases based on the formed classes
thereby covering all the possibilities. So, testing with any one value from each set of the class
is sufficient to test the above scenario.
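A minimal sketch of the AGE example (the validation function is hypothetical): one representative value is drawn from each equivalence class.

    def validate_age(age):
        return 18 <= age <= 60          # the field accepts only 18-60

    assert validate_age(10) is False    # representative of the "17 or less" invalid class
    assert validate_age(35) is True     # representative of the valid 18-60 class
    assert validate_age(70) is False    # representative of the "61 or more" invalid class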

3) Boundary Value Analysis (BVA)


This is one of the most widely used test case design techniques, because input values at the
extreme ends (boundaries) of the input domain tend to cause more errors in the system. These
boundary values (extreme ends) could be maximum–minimum, lower–upper, start–end, or just
inside–just outside.
More application errors occur at the boundaries of the input domain, meaning more
failures occur at lower and upper limit values of input data.
The ‘Boundary Value Analysis’ testing technique is used to identify errors at boundaries
rather than finding those that exist in the center of the input domain. This kind of testing of
boundary values (or boundaries) is also referred to as ‘boundary testing’.

The basic principle behind this technique is to choose input data values:
• Just below the minimum value
• Minimum value
• Just above the minimum value
• A Normal (nominal) value
• Just below the maximum value
• Maximum value
• Just above the maximum value

Example #1: Input box (say for accepting age) accepts values between 21 and 55.

Valid input values: 21, 22, 35, 54, and 55


• Minimum boundary value: 21
• Maximum boundary value: 55
• Value at the center (nominal value): 35
Invalid input values: 20, 56
Test cases for an input box accepting values between 21 and 55, using boundary value analysis
(a small code sketch follows the list):
1. Invalid value test case: Enter value = 20
2. Valid value test case: Enter value = 21
3. Valid value test case: Enter value = 22
4. Valid value test case: Enter value = 35 (additional test case to check value almost at
the center, also called nominal value)
5. Valid value test case: Enter value = 54
6. Valid value test case: Enter value = 55
7. Invalid value test case: Enter value = 56
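The following minimal sketch assumes a hypothetical accepts_age function for the 21–55 field
and runs exactly these seven boundary test cases:

# A minimal sketch: boundary-value test cases for a 21-55 input field.
def accepts_age(age):
    return 21 <= age <= 55

bva_cases = [
    (20, False),  # just below the minimum
    (21, True),   # minimum
    (22, True),   # just above the minimum
    (35, True),   # nominal value near the center
    (54, True),   # just below the maximum
    (55, True),   # maximum
    (56, False),  # just above the maximum
]
for value, expected in bva_cases:
    assert accepts_age(value) == expected, f"age {value}"
print("All boundary-value cases behave as expected.")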

4) Orthogonal Array Testing


Orthogonal Array Testing (OAT) is a software testing technique that uses orthogonal arrays to
create test cases. It is a statistical testing approach that is especially useful when the
system to be tested has a huge number of data inputs. Orthogonal array testing helps to
maximize test coverage by pairing and combining the inputs, testing the system with a
comparatively small number of test cases and thereby saving time.
Terminologies for Orthogonal Array Testing
Before understanding the actual implementation of Orthogonal Array Testing, it is essential
to understand the terminologies related to it.
Term      Description

Runs      The number of rows, which represents the number of test conditions to be performed.

Factors   The number of columns, which represents the number of variables to be tested.

Levels    The maximum number of values that a single Factor can take.

• As the rows represent the number of test conditions (experimental tests) to be performed,
the goal is to minimize the number of rows as much as possible.
• Factors indicate the number of columns, which is the number of variables.
• Levels represent the maximum number of values for a factor (0 to Levels − 1).
Together these give the array notation LRuns(Levels^Factors).
Example 1
We provide personal information such as Name, Age, and Qualification in various registration
forms, for example during a first-time app installation or on Government websites.
The following example is based on this kind of application form. Consider a registration form
(web page) with four fields, each of which has certain sub-options.

Age field
• Less than 18
• More than 18
• More than 60
Gender field
• Male
• Female
• NA
Highest Qualification
• High School
• Graduation
• Post-Graduation
Mother Tongue
• Hindi
• English
• Other
Step 1: Determine the number of independent variables. There are four independent variables
(Fields of the registration form) = 4 Factors.
Step 2: Determine the maximum number of values for each variable. There are three values
(There are three sub-options under each field) = 3 Levels.
Step 3: Determine the Orthogonal Array with 4 Factors and 3 Levels. From a standard orthogonal
array table, the number of rows required is 9.
The orthogonal array follows the pattern LRuns(Levels^Factors). Hence, in this example, the
Orthogonal Array will be L9(3^4).

Thus the Orthogonal Array will look as given below.

Runs     Factor 1   Factor 2   Factor 3   Factor 4
Run 1    0          0          0          0
Run 2    0          1          2          1
Run 3    0          2          1          2
Run 4    1          0          2          2
Run 5    1          1          1          0
Run 6    1          2          0          1
Run 7    2          0          1          1
Run 8    2          1          0          2
Run 9    2          2          2          0
Step no. 4: Map the Factors and Levels of the Array generated.

Runs     AGE            Gender   Highest Qualification   Mother Tongue
Run 1    Less than 18   Male     High School             Hindi
Run 2    Less than 18   Female   Post-Graduation         English
Run 3    Less than 18   NA       Graduation              Other
Run 4    More than 18   Male     Post-Graduation         Other
Run 5    More than 18   Female   Graduation              Hindi
Run 6    More than 18   NA       High School             English
Run 7    More than 60   Male     Graduation              English
Run 8    More than 60   Female   High School             Other
Run 9    More than 60   NA       Post-Graduation         Hindi

• “Factor 1” will be replaced by AGE.
• “Factor 2” will be replaced by Gender.
• “Factor 3” will be replaced by Highest Qualification.
• “Factor 4” will be replaced by Mother Tongue.
• 0, 1, 2 will be replaced by each sub-option under the respective Factor (field).

After mapping the Factors and Levels, the Orthogonal Array will look as shown above.
Step no. 5:
Each Run in the above table represents a test scenario to be covered in testing; each run is
converted into a test condition, as shown in the sketch below.
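As a minimal sketch of steps 3 to 5 in Python, the standard L9(3^4) array is hard-coded here
(rather than generated) and mapped onto the four registration-form fields from the example:

# A minimal sketch: the L9(3^4) orthogonal array mapped onto the example's four fields.
l9_array = [
    (0, 0, 0, 0), (0, 1, 2, 1), (0, 2, 1, 2),
    (1, 0, 2, 2), (1, 1, 1, 0), (1, 2, 0, 1),
    (2, 0, 1, 1), (2, 1, 0, 2), (2, 2, 2, 0),
]
levels = {
    "AGE":                   ["Less than 18", "More than 18", "More than 60"],
    "Gender":                ["Male", "Female", "NA"],
    "Highest Qualification": ["High School", "Graduation", "Post-Graduation"],
    "Mother Tongue":         ["Hindi", "English", "Other"],
}
factors = list(levels)

# Each run becomes one test condition covering a distinct combination of levels.
for run_no, run in enumerate(l9_array, start=1):
    condition = {factors[i]: levels[factors[i]][level] for i, level in enumerate(run)}
    print(f"Run {run_no}: {condition}")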

Advantages of Orthogonal Array Testing


This technique is beneficial when we have to test with a huge amount of data that has many
permutations and combinations.
• Fewer test conditions, which require less implementation time.
• Less execution time.
• Easier analysis of test conditions, because there are fewer of them.
• High code coverage.
• Increased overall productivity, while ensuring that a quality test is performed.

Limitations of OATS
None of the testing techniques provides a guarantee of 100% coverage. Each technique has
its way of selecting the test conditions. On similar lines, there are some limitations to using
this technique:
• Testing will fail if we fail to identify the good pairs.
• There is a probability of not identifying the most important combination, which can result
in missing a defect.
• This technique will fail if we do not know the interactions between the pairs.
• Applying only this technique will not ensure complete coverage.
• It can find only those defects that arise from interactions between pairs of input
parameters.

Differences between Black Box Testing and White Box Testing:

Black Box Testing                              White Box Testing

It is a way of software testing in which       It is a way of testing the software in
the internal structure, the program, or        which the tester has knowledge about the
the code is hidden and nothing is known        internal structure, the code, or the
about it.                                      program of the software.

Implementation of the code is not needed       Code implementation is necessary for
for black box testing.                         white box testing.

It is mostly done by software testers.         It is mostly done by software developers.

No knowledge of implementation is needed.      Knowledge of implementation is required.

It can be referred to as outer or external     It is the inner or internal software
software testing.                              testing.

It is a functional test of the software.       It is a structural test of the software.

This testing can be initiated based on the     This type of testing is started after the
requirement specification document.            detailed design document.

No knowledge of programming is required.       It is mandatory to have knowledge of
                                               programming.

It is the behavior testing of the software.    It is the logic testing of the software.

It is applicable to the higher levels of       It is generally applicable to the lower
software testing.                              levels of software testing.

It is also called closed testing.              It is also called clear box testing.

It is the least time consuming.                It is the most time consuming.

It is not suitable or preferred for            It is suitable for algorithm testing.
algorithm testing.

It can be done by trial-and-error ways         Data domains along with inner or internal
and methods.                                   boundaries can be better tested.

Example: searching for something on Google     Example: supplying inputs to check and
using keywords.                                verify loops.

It is less exhaustive as compared to white     It is comparatively more exhaustive than
box testing.                                   black box testing.
Differences between Graph-Based Testing and Orthogonal Array Testing

Graph-Based Testing                            Orthogonal Array Testing

Focus: Graph-based testing focuses on          Focus: Orthogonal testing focuses on
modeling the software's behavior or            systematically combining different test
requirements as a graph or network of          inputs (parameters) to design test cases
states, transitions, or dependencies.          that cover various combinations.

Usage: It is commonly used for systems         Usage: It is applied when the software's
with complex state-based behavior, such        behavior depends on multiple factors and
as user interfaces, control systems, and       testing all possible combinations of these
protocols.                                     factors is impractical.

Modeling: Testers create a graphical           Modeling: Testers define orthogonal arrays,
representation of the system's behavior,       which specify the factors and their
often using tools like finite state            possible values. These arrays guide the
machines (FSMs) or state transition            creation of test cases.
diagrams.

Test Coverage: Graph-based testing aims        Test Coverage: Orthogonal testing ensures
to ensure that all possible paths and          that various combinations of factors are
transitions within the graph are tested,       tested efficiently, providing good coverage
including valid and invalid scenarios.         without testing every possible combination.

Complexity: It is particularly useful for      Simplicity: It simplifies the test case
complex systems but can be time-consuming      design process and often leads to a smaller
to design and execute test cases.              set of test cases compared to exhaustive
                                               testing.

Automation: Automation tools are often         Manual or Automated: Orthogonal testing can
used to generate and execute test cases        be performed manually or using specialized
based on the graph model.                      tools to generate test cases based on
                                               orthogonal arrays.
