Software Testing Summary

Uploaded by mrdogan86

SOFTWARE TESTING

SUMMARY
• Software testing : the process of evaluating and verifying that a software
product or application functions correctly, securely, and efficiently according to
its specified requirements
SOFTWARE TESTING ADVANTAGES
• Software testing helps determine whether there are any bugs or errors in the software

• bugs can be identified early and solved before delivery of the software
product

• ensures a high-quality, bug-free product is delivered to customers

• supports applying multiple types of testing [Ex- API testing]

• ensures that the software meets the user specifications

• we can check the security of the software against unauthorized access


• Software testing principles :

1- Testing shows presence of defects

2- Exhaustive testing is impossible

3- Early testing

4- Pesticide paradox

5- Defect clustering

6- Testing is context dependent

7- Absence of error fallacy


SHOW PRESENCE OF DEFECTS

• testing helps in early detection of defects, which reduces the probability of undiscovered defects


EXHAUSTIVE TESTING IS IMPOSSIBLE

• Exhaustive testing is not possible; instead we apply the optimal amount of testing based on
the risk assessment of the application.
EARLY TESTING

• testing should start as early as possible; this helps reduce [1- testing effort, 2- defect
fixing cost, 3- testing time estimation]
PESTICIDE PARADOX

• Pesticide Paradox : means that the test cases need to be reviewed & revised regularly

• adding new & different test cases helps find more defects
DEFECT CLUSTERING

• Defect Clustering : means that a small number of modules contains most
of the defects detected

• focusing testing on these modules helps increase the probability of finding new bugs


TESTING IS CONTEXT DEPENDENT

• testing is done differently depending on the [1- context, 2- resources, 3- types of testing]
ABSENCE OF ERROR – FALLACY

• testing doesn't help if the software doesn't fulfill the user needs


ERROR V.S DEFECT

• reason — Error: invalid code, spelling mistakes | Defect: variation in the outcome of the software against the expected result

• when — Error: during code development | Defect: during test execution

• who — Error: QA team, developer | Defect: QC team
QA V.S QC

• goal — QA: a procedure to ensure the quality of the software products or services provided to the customers by an organization | QC: a procedure used for finding and fixing defects in the software product, building a high-quality product

• focus — QA: process-oriented | QC: product-oriented

• when — QA: before the product is built | QC: after the product is built

• who — QA: QA team | QC: QC team, developers


RISK ASSESSMENT

• risk assessment : means identifying, analyzing, and prioritizing potential failures or issues
in a system so testing efforts focus on what matters most
SOFTWARE TESTING LIFE CYCLE (STLC)

• STLC : a sequence of testing activities used to ensure the quality of the
software

• STLC involves both verification and validation activities

• It shows how testing should be carried out in a well-structured way


Software testing life cycle phases :

1- Requirement analysis

2- Test planning

3- Test design

4- Test environment setup

5- Test execution

6- Test cycle closure


REQUIREMENT ANALYSIS

• requirement analysis activities :

1- gather the requirements

2- identify the testable requirements

3- identify the required environment for testing

4- prepare the requirement traceability matrix (RTM)
TEST PLANNING

• Test lead/manager creates the test plan

• test lead/manager assigns roles/responsibilities to test team members

• test team makes a test effort estimation for each task included in the test plan

• we identify the deliverables [1- before, 2- during, 3- after] testing

• Test plan : a document used to determine the scope & objectives
of the applied software testing process

• we use the test plan to determine the entry & exit criteria

• Test plan is created by the test lead/manager


• definition — entry criteria: the set of preconditions that must exist to be able to start testing | exit criteria: the set of postconditions that must be reached to consider testing completed

• examples — entry criteria: 1- requirements are gathered, 2- test environment is set up, 3- test data is prepared | exit criteria: 1- all test cases are executed, 2- no critical bugs are open, 3- test coverage is above 90%, 4- all test results are documented
• test plan template :

1- introduction

2- in scope & out of scope

3- roles/responsibilities

4- entry & exit & suspension criteria

5- test methodology

6- resources & environment setup

7- risks & mitigations

8- deliverables
TEST DESIGN

• we create test scenarios [1- positive, 2- negative] based on the created test plan to
test the software

• we use test data [1- valid, 2- invalid] to help in designing test cases

• we design test cases either manually or automatically


• test scenario : high-level document, used to describe how testing will be applied

• test scenario includes positive and negative scenarios

• we also determine the number of test cases required for each test scenario
TEST SCENARIO TEMPLATE

test scenario id — test scenario description — test case id

(example row: requirement number — requirement — test case)


• Test case : a set of conditions that are used to verify the software against the
expected outcome

• we use test cases to apply positive & negative testing


TEST CASE TEMPLATE

• test case id : identifies the id of the included test case

• test case description : a description of the test case to help identify its purpose

• preconditions : conditions that must be met before the test case can start

• postconditions : conditions that must be met to set the test case as finished

• test data : data [1- valid, 2- invalid] needed for testing

• test steps : detailed steps for how testing will be applied

• expected result : the result expected after test execution

• actual result : the result actually obtained after test execution

• status : set based on the comparison between the expected and actual results

• Test case status : passed / failed
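The template fields above map naturally onto an automated test case. A minimal sketch in Python — the login() function and its credentials are hypothetical:

```python
# Hypothetical system under test: accepts exactly one known user.
def login(username, password):
    return username == "admin" and password == "secret123"

def test_login_valid_credentials():
    # test data (valid)
    username, password = "admin", "secret123"
    # test step: attempt to log in
    actual_result = login(username, password)
    # expected result: login succeeds
    expected_result = True
    # status: passed if actual matches expected, failed otherwise
    assert actual_result == expected_result
```

A negative test case would use invalid test data (e.g., a wrong password) and expect login() to return False.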
TEST ENVIRONMENT SETUP

• we define the tools & resources required for testing

• we define the required [1- hardware ,2- software] conditions to start testing

• we set up these required conditions


TEST EXECUTION

• we execute the test cases either manually or automatically

• we report the test execution results [1- number of test cases passed, 2- number of test cases failed]

• we assign the result to each executed test case

• we report & document the defects detected during test execution


TEST CYCLE CLOSURE

• we make test summary reports for all test results [1- number of executed test cases,
2- number of test cases passed, 3- number of test cases failed]

• we create metrics to help evaluate the test coverage of our testing process

• we make a defect summary report for all defects detected during test execution

• we assign priority & severity to each detected defect in the defect summary report
VERIFICATION V.S VALIDATION

• definition — verification: used to ensure that the software meets the specifications | validation: used to validate the software based on the customer requirements

• focus — verification: process-oriented | validation: product-oriented

• testing type — verification: static testing | validation: dynamic testing

• methods — verification: reviews, walkthroughs, inspections | validation: dynamic analysis (code execution)

• purpose — verification: are we building the product right? | validation: are we building the right product?
TDD V.S BDD

• definition — TDD: a software development approach in which test cases are developed to specify and validate what the code will do | BDD: a software development approach in which test cases are developed to validate the behavior of the software

• how it works — TDD: test cases are written first, then we write and refactor the code to pass these test cases | BDD: encourages collaboration between developers, QA, and non-technical stakeholders by using human-readable language

• concern — TDD: white box, black box | BDD: the behavior of the software

• advantages — TDD: avoids duplication of code, reduces bugs

• tools — TDD: manual & automation tools | BDD: Cucumber
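The TDD flow above (write a failing test first, then just enough code to pass it) can be sketched in Python; the add() function is a made-up example:

```python
# Step 1 (TDD): write the test first; it fails until add() is implemented.
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

# Step 2: write just enough production code to make the test pass,
# then refactor while keeping the test green.
def add(a, b):
    return a + b
```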


• Software testing types :

Functional — non-functional — API testing — UAT testing — Maintenance testing (regression, sanity)
FUNCTIONAL TESTING

• Functional testing : a type of software testing where we test the software against the
functional requirements

• Functional testing can be applied using manual or automated test cases

• Functional testing is used to ensure that each function in the software is working
correctly
FUNCTIONAL TESTING

Q: How is Functional testing applied?

Sol:-

1- we understand the functional requirements

2- we define test data needed for test cases

3- we design test cases to test each feature in the functional requirements

4- we compare the expected results with the actual results


NON-FUNCTIONAL TESTING

• non-Functional testing : a type of software testing where we test the software
against the non-functional requirements

• non-Functional testing includes [1- performance, 2- security, 3- reliability]


NON-FUNCTIONAL TESTING

Q: How is non-functional testing applied?

Sol:-

1- we understand the non-functional requirements

2- we decide which type of non-functional testing to be applied based on the requirements

3- we take the actual results after applying chosen non-functional testing type

4- we compare the expected results with the actual results


• common non-functional software testing types :

Performance — Security — Usability — Stability — Recovery


PERFORMANCE TESTING

Performance testing : a type of non-functional software testing used for testing the performance
of the software

we can apply performance testing using tools such as [1- JMeter, 2- LoadRunner]


• Importance of performance testing :-

1- we test the response time

2- supports ensuring (stability, reliability, scalability)

3- supports applying load and stress testing


• Performance testing types

Load — Stress — Spike — Endurance testing


• Load testing : checks the performance of the software application under specific
user loads

• Stress testing : involves testing an application under extreme/heavy workloads to
see how it handles high traffic or data processing.

• Stress testing helps identify the breaking point of an application

• we can apply stress testing by [1- increasing the number of users, 2- increasing the loop count]

• Spike testing : makes sure the software can handle unexpected
increases/decreases in user load

• Endurance testing : makes sure the software can handle the expected load over
a long period of time
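The load-testing idea can be sketched without a tool like JMeter: fire a number of concurrent simulated users at a target and record response status and latency. The target_request() function below is a stand-in for a real HTTP call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def target_request():
    # Stand-in for a real request to the system under test.
    time.sleep(0.01)  # simulate server processing time
    return 200

def run_load_test(num_users=20):
    latencies = []  # (status, seconds) recorded per simulated user
    def one_user():
        start = time.perf_counter()
        status = target_request()
        latencies.append((status, time.perf_counter() - start))
    # Run all simulated users concurrently.
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        for _ in range(num_users):
            pool.submit(one_user)
    return latencies
```

Raising num_users until responses degrade or fail turns this load sketch into a stress test that probes the breaking point.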
SECURITY TESTING

Security testing : a type of non-functional software testing used to ensure that the software is secure against
unauthorized access
USABILITY TESTING

Usability testing : a type of non-functional software testing used to test how easy the software is to use
STABILITY TESTING

Stability testing : a type of non-functional software testing used to test how stable the software is against
any crashes
RECOVERY TESTING

Recovery testing : a type of non-functional software testing used to test how well the software is able to
recover after any failures
API TESTING

API testing is used to validate API responses (response status, response data, cookies, response headers)

API testing is used to validate the performance & security of the API server

API testing allows us to test multiple API request methods [1- GET, 2- POST, 3- PUT, 4- PATCH, 5- DELETE]
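The kinds of checks API testing applies can be sketched against a stubbed response; a real test would obtain the response from an HTTP client such as requests, and the fields and values below are made up:

```python
# Stubbed API response standing in for the result of a GET request.
stub_response = {
    "status_code": 200,
    "headers": {"Content-Type": "application/json"},
    "body": {"id": 7, "name": "widget"},
}

def validate_response(response):
    # validate response status
    assert response["status_code"] == 200
    # validate response headers
    assert response["headers"]["Content-Type"] == "application/json"
    # validate response data
    assert "id" in response["body"] and "name" in response["body"]
    return True
```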
UAT TESTING

UAT testing : it's a type of software testing where we validate whether the software meets the user
requirements

UAT testing is usually done by Client or End users

UAT testing is normally done using Beta testing


MAINTENANCE TESTING

Maintenance testing : a type of software testing where we verify the software after any changes

the changes after which we apply maintenance testing :-

1- code changes

2- adding a new feature

3- bug fixing

• maintenance testing types

regression — sanity
REGRESSION TESTING V.S SANITY TESTING

• definition — regression testing: a type of software testing that is done after any changes to ensure that all existing functionalities are working fine | sanity testing: a subset of regression testing that is done after any changes to ensure that the related functionalities are working fine

• scope — regression testing: all existing functionalities | sanity testing: new functionalities and related functionalities
LEVELS OF SOFTWARE TESTING

• Levels of software testing :

1- Unit testing

2- Integration testing

3- System testing

4- Acceptance testing
UNIT TESTING

Unit testing : a level of software testing where each individual unit is tested separately

unit testing is usually done by developers
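A unit test exercises one unit in isolation. A minimal sketch in Python, with is_even() as a hypothetical unit:

```python
# Unit under test.
def is_even(n):
    return n % 2 == 0

# Unit test: checks this single unit separately, including an edge case.
def test_is_even():
    assert is_even(4) is True
    assert is_even(7) is False
    assert is_even(0) is True  # edge case: zero is even
```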


INTEGRATION TESTING

Q: how is integration testing applied?

Sol:-

1- units are integrated as modules

2- we test the data flow between these modules


• Integration testing approaches

Big Bang Incremental


BIG BANG

• modules are integrated together as one complete unit, then we test this complete unit

• Big bang is useful for small systems


• incremental integration testing

Top-down Bottom-up sandwich


• Q: how is Top-down applied?

Sol:-

1- higher-level modules are tested first to facilitate testing the lower-level modules

2- higher-level modules are broken into smaller modules

3- we test these smaller modules


• Q: how is Bottom-up applied?

Sol:-

1-lower level modules are tested first to facilitate testing the higher level modules

2- lower level modules are integrated together into higher modules

3-we test these higher-level modules


SYSTEM TESTING

• System testing : it's a level of software testing , where we test the whole integrated system

Q: how is system testing applied ?

Sol:-
1- we verify that the whole integrated system works as expected

2- we can use [1- Black Box, 2- White Box] techniques

3- system testing is usually done by the QA team


• System testing types

Performance security Usability


• System testing techniques

White box Black box gray box


WHITE BOX V.S BLACK BOX TESTING

• definition — white-box testing: a type of software testing that's used to test the internal structure of the software | black-box testing: a type of software testing that's used to test the behavior of the software

• access — white-box testing: source code, architecture, internal documentation | black-box testing: inputs and expected outputs

• goal — white-box testing: test the structure and data flow | black-box testing: validate the software based on the requirements

• internal knowledge — white-box testing: required | black-box testing: not required
• White-box testing techniques :

statement coverage — branch coverage — decision coverage — path coverage


• Black-box testing techniques :

equivalence partitioning — boundary value analysis — decision table — state transition — error guessing
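As a sketch of two of these black-box techniques, consider a hypothetical age field whose valid equivalence partition is 18–60; boundary value analysis picks values on and just around the partition edges:

```python
# Hypothetical system under test: validates an age input (valid: 18..60).
def accepts_age(age):
    return 18 <= age <= 60

# Boundary values around each edge of the valid partition.
boundary_cases = {
    17: False,  # just below the lower boundary (invalid partition)
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    59: True,   # just below the upper boundary
    60: True,   # upper boundary
    61: False,  # just above the upper boundary (invalid partition)
}

def run_boundary_tests():
    return all(accepts_age(age) == expected
               for age, expected in boundary_cases.items())
```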
GRAY-BOX TESTING

• Gray-box testing : it's a combination of both white & black box testing
ACCEPTANCE TESTING

• Acceptance testing : a level of software testing where we test the whole software against
the customer requirements

Q: why is acceptance testing applied ?

Sol:-
1- we can involve the user to verify whether the developed software meets their requirements

2- using acceptance testing, we can catch errors that developers might miss before deploying the
product

3- acceptance testing gives stakeholders confidence that the product is ready for production
• Acceptance testing types

Alpha Beta
ALPHA V.S BETA

• definition — alpha testing: used to identify bugs before releasing the software product to real users | beta testing: used to test the software by clients or end users who are not employees of the organization

• advantage — alpha testing: helps find defects that real users might miss | beta testing: tests the product in the customer's environment

• testing types — alpha testing: white box, black box | beta testing: black box

• environment — alpha testing: development | beta testing: production

• carried out by — alpha testing: testers | beta testing: end users


RTM

• Requirement Traceability Matrix (RTM) : a document that maps and traces user
requirements to test cases

• the Requirement Traceability Matrix is used to validate that all requirements are checked
with test cases, such that no functionality is left unchecked during software testing
RTM TEMPLATE

Req No — Req description — test case ID — status

(example row: requirement number — requirement — test case — pass/fail)
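The RTM idea can be sketched as a mapping from requirements to their covering test cases, so uncovered requirements are easy to flag (all ids below are hypothetical):

```python
# RTM sketch: each requirement maps to the test cases that check it.
rtm = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],  # no covering test case -> unchecked functionality
}

def uncovered_requirements(matrix):
    # Requirements with no test cases are not validated by testing.
    return [req for req, cases in matrix.items() if not cases]
```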


CODE COVERAGE

• Code coverage : a measure that describes how much of the source code of the
program has been tested

• It is one form of white box testing

• code coverage includes [1- statement coverage, 2- branch coverage, 3- decision
coverage, 4- path coverage]

• code coverage is important for finding the areas of the program not exercised by test
cases
• code coverage methods :

statement coverage — branch coverage — decision coverage — path coverage
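The difference between statement and branch coverage shows up even on a tiny function; in practice a tool such as coverage.py measures this automatically:

```python
def classify(n):
    result = "non-negative"
    if n < 0:
        result = "negative"
    return result

# classify(-1) alone executes every statement (100% statement coverage)
# but exercises only the True branch of the if (50% branch coverage);
# classify(1) is also needed to cover the False branch.
```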
DEFECT MANAGEMENT

• Defect management : a set of activities for handling a detected defect until
the defect is successfully fixed

Defect management cycle :

1- Discovery

2- Categorization

3- Defect resolution

4- Verification

5- Closure

6- Defect reporting
DISCOVERY

• Q: what do we do in this phase ?

sol:-

we discover defects as early as possible, before the end-user discovers them


CATEGORIZATION

• Q: what do we do in this phase ?

sol:-

1- we sit with the test manager to categorize the detected defects

2- we assign a priority to each detected defect


DEFECT RESOLUTION

• Q: what do we do in this phase ?

sol:-

1- we assign the defects to developers

2- developers start working on the assigned defects till they are fixed

3- developers send a report to the test manager when the defects are successfully fixed
VERIFICATION

• Q: what do we do in this phase ?

sol:-

1- we take the reports from the developers

2- we verify that the defects have been successfully fixed


CLOSURE

• Q: what do we do in this phase ?

sol:-

• we change the status of defects to closed


DEFECT REPORTING

• Q: what do we do in this phase ?

sol:-

1- we make a detailed defect report for all detected defects with their status

2- the test manager takes this defect report and sends it to the management team to review
the testing team's effort in handling the defects
DEFECT DENSITY

• Defect Density : the number of defects confirmed in a software/module during a
specific period

• it helps decide whether the software is ready to be released or not

• Defect Density = Defect count/size of the release
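Plugging made-up numbers into the formula above:

```python
# Defect density sketch: defects per unit of release size
# (commonly per KLOC, i.e. per thousand lines of code).
defect_count = 30
release_size_kloc = 15  # hypothetical release of 15,000 lines of code

defect_density = defect_count / release_size_kloc
```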


DEFECT REPORT TEMPLATE
• defect id : identifies the id of the detected defect

• defect description : a description of the defect

• assigned to : the developer the defect is assigned to

• detected by : the name of the tester who detected the defect

• steps to reproduce : the steps that cause the defect to arise

• expected result : the result expected if the defect were not present

• attachments : includes [1- screenshots, 2- files] capturing the detected defect

• severity : determines the impact of the defect on the software

• priority : determines how important it is to solve the defect

• status : determines whether the defect is fixed or not


• Priority levels

high medium low


• Severity levels

critical major minor


DEFECT STATUS

• new : means that the defect has just been detected

• assigned : means that the defect is assigned to a developer

• opened : means that the assigned developer has started working on the defect

• fixed : means that the defect is fixed by the developer

• reopen : means that the defect is reopened again

• pending retest : means that the defect is waiting for approval to retest again to ensure it's fixed successfully

• verified : means that the defect is reviewed by the test lead after retest and it is successfully fixed

• closed : means that the defect is fixed successfully and closed by the QA team

• duplicated : means that the defect has already been detected before by another tester in the QA team
POSITIVE TESTING V.S NEGATIVE TESTING

• definition — positive testing: a type of software testing where we test the software using positive scenarios | negative testing: a type of software testing where we test the software using negative scenarios

• test data — positive testing: valid | negative testing: invalid

• goal — positive testing: ensure that the software is working fine under normal conditions | negative testing: ensure that the system can handle unexpected conditions
SMOKE TESTING V.S SANITY TESTING

• definition — smoke testing: a type of software testing where we test the most critical functionalities of the software | sanity testing: a type of software testing that is done after any changes to ensure that the related functionalities are working fine

• scope — smoke testing: critical functionalities | sanity testing: new functionalities

• goal — smoke testing: stability of the software | sanity testing: rationality of the software

• carried out by — smoke testing: developers or testers | sanity testing: testers

RETESTING V.S CONFIRMATION TESTING

• definition — retesting: a type of software testing that's used to ensure that a bug is successfully fixed | confirmation testing: a type of software testing that's used to ensure that a bug is successfully fixed and other existing functionalities are still working fine

• scope — retesting: failed test cases | confirmation testing: failed test cases, existing functionalities

• goal — retesting: bug fixes | confirmation testing: bug fixes & checking other functionalities after the bug fix

• when — retesting: after bug fix | confirmation testing: after bug fix

MONKEY TESTING V.S GORILLA TESTING

• definition — monkey testing: a type of software testing where the tester tests the software randomly | gorilla testing: a type of software testing where the tester tests a specific module repeatedly to ensure that it is working correctly and there is no bug in that module

• how it works — monkey testing: we use random inputs as test data | gorilla testing: the tester tests a specific module repeatedly

• concern — monkey testing: all existing modules | gorilla testing: a specific module

• purpose — monkey testing: find the bugs and errors in the software application using experimental techniques | gorilla testing: ensure that the module is working correctly and there is no bug in it
MANUAL TESTING V.S AUTOMATION TESTING

• definition — manual testing: a type of software testing where the tester tests the software manually by himself | automation testing: a type of software testing where the tester tests the software using automation scripts

• how it works — manual testing: the tester executes the test cases manually | automation testing: the tester executes the test cases using automation tools

• skills — manual testing: doesn't require technical or programming skills | automation testing: requires technical or programming skills

• types of testing — manual testing: functional & exploratory testing | automation testing: repetitive & regression testing
PRIORITY V.S SEVERITY

• definition — priority: determines how important it is to solve the bug | severity: determines the impact of the bug on the software

• values — priority: high, medium, low | severity: critical, major, minor

• how it works — priority: set based on the customer requirements | severity: set based on the technical aspect of the product
ADHOC V.S EXPLORATORY

• definition — ad-hoc testing: a type of software testing where we test the software without any documentation | exploratory testing: a type of software testing where the tester uses his own experience in testing the software

• how it works — ad-hoc testing: random inputs | exploratory testing: tester experience

• programming skills — ad-hoc testing: not required | exploratory testing: required

• structure — ad-hoc testing: informal | exploratory testing: semi-formal
