The document outlines the differences between Software Quality Assurance (SQA), Quality Control (QC), and Testing, emphasizing their distinct roles in the software development lifecycle. It also discusses the importance of software quality, McCall's and Boehm's Quality Models, and the Cost of Quality (COQ) components. Additionally, it covers test case creation, including examples and key considerations for effective test planning.

R22 SQT-MINOR-1–QUESTIONS

Unit-1

1. Differentiate between SQA, Quality Control (QC), and Testing?


Ans) Difference Between SQA, Quality Control (QC), and Testing
Software Quality Assurance (SQA), Quality Control (QC), and Testing are three different but
closely related processes in software development. While they all aim to ensure software quality,
their scope, timing, and responsibilities differ.

1. Software Quality Assurance (SQA):-


• Definition: SQA is a proactive process that ensures quality is built into the software from the
very beginning of the development life cycle.
• Focus: Preventing defects by improving processes and following quality standards.
• Nature: Process-oriented.
• Activities:
• Defining quality policies and procedures.
• Conducting audits and process reviews.
• Ensuring compliance with industry standards like ISO, CMMI.
• When Performed: Throughout the entire software development life cycle (SDLC).

2. Quality Control (QC):-


• Definition: QC is a reactive process that focuses on detecting defects in the final product
before release.
• Focus: Identifying defects after the product is developed.
• Nature: Product-oriented.
• Activities:
• Inspection of deliverables.
• Verifying that the product meets specified requirements.
• Running acceptance checks.
• When Performed: After development, but before product delivery to the customer.
3. Testing:-
• Definition: Testing is a subset of QC that involves executing the software with the intent to
find defects.
• Focus: Validating that the software functions correctly under specified conditions.
• Nature: Activity-based.
• Activities:
• Designing test cases.
• Executing test scripts.
• Reporting and tracking bugs.
• When Performed: Usually in the testing phase of SDLC, but can also be integrated in Agile
through continuous testing.

4. Key Differences Table:-

| Aspect | SQA | Quality Control (QC) | Testing |
|---|---|---|---|
| Definition | Process to ensure quality in the whole SDLC | Process to verify the quality of the product | Activity to execute software and find defects |
| Nature | Process-oriented | Product-oriented | Activity-based |
| Focus | Prevent defects | Identify defects | Identify defects by execution |
| Timing | Throughout the SDLC | After development is complete | During/after development |
| Goal | Improve process for long-term quality | Ensure final product meets requirements | Verify functionality and correctness |
| Example | Code review guidelines, process audits | Final inspection of software build | Unit testing, integration testing |
2. Define Software Quality. Why is it important in SDLC?
Ans) 1. Definition of Software Quality:-
Software Quality refers to the degree to which a software
product meets the specified requirements, customer expectations, and industry standards.
It is often defined as “conformance to requirements and fitness for use.”
In simple terms, a high-quality software product is:
• Free from defects.
• Reliable and performs consistently.
• User-friendly and meets customer needs.
• Maintainable and easy to enhance in the future.
Software quality is generally assessed based on attributes like functionality, performance, security,
reliability, usability, maintainability, and portability.

2. Importance of Software Quality in SDLC:-


Ensuring software quality is critical throughout the Software Development Life Cycle
(SDLC) because it directly affects customer satisfaction, business success, and maintenance costs.

Reasons why software quality is important:-


1. Customer Satisfaction
• High-quality software meets or exceeds user expectations.
• Reduces complaints and improves brand reputation.
2. Cost Reduction
• Detecting and fixing defects early in the SDLC is cheaper than after deployment.
• Poor quality leads to expensive rework and long-term maintenance costs.
3. Reliability and Performance
• Quality software performs well under expected conditions.
• Reduces downtime and increases user trust.
4. Compliance with Standards
• Many industries require adherence to standards like ISO 9001, CMMI, or IEEE for
safety and reliability.
• Non-compliance can lead to legal and financial penalties.
5. Security
• High-quality software includes strong security measures to prevent data breaches and
cyber-attacks.
6. Maintainability and Scalability
• Quality code is easier to update, scale, and adapt to future needs.

3. Role of Quality in Different SDLC Phases:-


• Requirement Phase – Ensuring requirements are clear, complete, and testable.
• Design Phase – Creating efficient, secure, and maintainable architecture.
• Development Phase – Writing clean, well-documented, and optimized code.
• Testing Phase – Detecting and fixing defects before release.
• Deployment Phase – Delivering a stable and reliable product.
• Maintenance Phase – Providing smooth updates without breaking existing functionality.

3. Describe McCall’s Quality Model with its categories?


Ans) McCall's Quality Model is a hierarchical model for software quality that was developed in 1977. It was
one of the first models to bridge the gap between software developers and users by defining software quality
from both perspectives. The model is based on a hierarchy of three levels: quality factors, quality criteria, and
metrics. It organizes 11 quality factors into three main categories.

Categories of McCall's Quality Model:-


The model's three categories represent different aspects of the software's life cycle and are viewed from
the perspective of different stakeholders:
1. Product Operation:-
This category focuses on the day-to-day operation of the software. These factors are most important
to the end-users.
• Correctness: The extent to which the software meets its specified requirements.
• Reliability: The ability of the software to perform its intended functions without failure under
specified conditions.
• Efficiency: The amount of computing resources (e.g., CPU, memory, storage) required for the
software to perform its functions.
• Integrity: The extent to which the software controls access to data and resources to prevent
unauthorized use.
• Usability: The effort required to learn, operate, and understand the software.

2. Product Revision:-
This category addresses the software's ability to be changed or modified. These factors are most
important to developers and maintenance staff.
• Maintainability: The effort required to locate and fix a defect in the software.
• Flexibility: The effort required to modify or adapt the software to new requirements.
• Testability: The ease with which the software can be tested to ensure it meets its requirements.

3. Product Transition
This category deals with the software's ability to adapt to new environments. These factors are
important for those who manage the deployment and integration of the software.
• Portability: The effort required to transfer the software from one platform or environment to
another.
• Reusability: The extent to which parts of the software can be used in other applications.
• Interoperability: The effort required to couple the software with another system.

4. Describe Boehm’s Quality Model (1978). Compare it with McCall’s Model?
Ans) Boehm’s Quality Model (1978) and Comparison with McCall’s Model
1. Introduction:-
Barry W. Boehm proposed his Software Quality Model in 1978 as an
enhancement of earlier models, including McCall’s. While McCall’s model focused more on operational
and maintenance perspectives, Boehm’s model emphasized quality characteristics based on user
needs and maintainability.
Boehm organized software quality into three levels:
1. High-Level Characteristics (Basic quality aspects).
2. Intermediate-Level Characteristics (Quality attributes).
3. Primitive Characteristics (Measurable metrics).
2. Boehm’s Quality Model Structure
A. High-Level Characteristics
These define the overall goals for software quality:
1. As-is Utility – How well the software meets operational requirements.
2. Maintainability – How easy it is to modify and improve the software.
3. Portability – How easily the software can be adapted to a different environment.

B. Intermediate-Level Characteristics
These are specific quality attributes linked to high-level goals:
• Portability: Device independence, self-containedness, data independence.
• As-is Utility: Reliability, efficiency, human engineering (usability).
• Maintainability: Testability, understandability, modifiability.

C. Primitive Characteristics
These are measurable metrics used to evaluate the quality attributes, such as:
• Consistency.
• Accuracy.
• Completeness.
• Simplicity.
• Execution efficiency.

3. Key Features of Boehm’s Model


• Focuses on user-oriented quality attributes.
• Links abstract quality goals to measurable metrics.
• Emphasizes maintainability and adaptability along with operational quality.
4. Comparison: Boehm’s Model vs McCall’s Model

| Aspect | Boehm’s Model | McCall’s Model |
|---|---|---|
| Year Introduced | 1978 | 1977 |
| Focus | User needs, maintainability, adaptability | Product operation, revision, and transition |
| Structure | Three levels: High-level → Intermediate → Primitive | Three categories: Product Operation, Revision, Transition |
| Metrics | Uses primitive characteristics as measurable indicators | Links quality attributes to measurable factors |
| Maintainability | Strong emphasis | Moderate emphasis |
| Adaptability | Explicitly addressed under portability | Addressed indirectly |
| Perspective | User-centric | Balanced between user and developer perspectives |

5. Summary:-
Both Boehm’s and McCall’s models aim to define and measure software quality but
differ in structure and focus. McCall’s model organizes quality factors into three broad categories,
while Boehm’s model uses a hierarchical structure linking high-level quality goals to measurable
metrics, with a stronger focus on maintainability and portability.
5. Explain Cost of Quality (COQ) and its components?
Ans) Definition:-
Cost of Quality (COQ) is the total cost incurred to ensure a software product meets the required quality
standards. It includes all expenses related to preventing defects, measuring quality, and fixing
defects found during or after development.

Importance of COQ:-
• Identifies where resources are spent in quality management.
• Balances prevention costs with failure costs.
• Helps in process improvement and defect reduction.
• Reduces long-term maintenance costs by focusing on prevention.

Components of COQ:-
COQ consists of four main components:-
1. Prevention Costs – Costs to prevent defects before they occur.
• Quality planning.
• Training and skill development.
• Process documentation.
• Purchase of quality tools.
2. Appraisal Costs – Costs of measuring and monitoring quality.
• Inspections and reviews.
• Testing activities.
• Quality audits.
• Verification and validation.
3. Internal Failure Costs – Costs from defects found before release.
• Rework of code.
• Debugging and fixing issues.
• Retesting after bug fixes.
4. External Failure Costs – Costs from defects found after release.
• Customer complaints.
• Patch releases and updates.
• Warranty claims.
• Loss of reputation.

Formula (Conceptual):-
COQ = Prevention Costs + Appraisal Costs + Internal Failure Costs + External Failure Costs

Key Point:-
A well-managed COQ strategy shifts spending from failure costs to prevention costs, improving
customer satisfaction and reducing overall development expenses.
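The conceptual formula can be sketched as a small Python helper; the cost figures below are hypothetical, for illustration only:

```python
def cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """COQ = Prevention + Appraisal + Internal Failure + External Failure."""
    return prevention + appraisal + internal_failure + external_failure

# Hypothetical annual figures for one project (any currency unit)
coq = cost_of_quality(prevention=20_000, appraisal=15_000,
                      internal_failure=10_000, external_failure=5_000)
assert coq == 50_000
```

Shifting spend toward the `prevention` term typically shrinks the two failure terms, lowering total COQ over time.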

Unit-2

1. What is a Test Case? Write an example for ATM withdrawal?


Ans) Definition of a Test Case:-
A test case is a set of conditions, inputs, actions, and expected results
created to verify whether a software application functions correctly and meets its requirements.

A test case contains details such as:-

• Test Case ID – Unique identifier for the test case.


• Test Description – What the test case is intended to verify.
• Preconditions – Conditions that must be met before execution.
• Test Steps – The sequence of actions to perform.
• Test Data – Input values used during the test.
• Expected Result – The anticipated outcome.
• Actual Result – The observed outcome after execution.
• Status – Pass/Fail.
Example Test Case for ATM Withdrawal

| Field | Details |
|---|---|
| Test Case ID | ATM-TC-001 |
| Test Description | Verify successful cash withdrawal from ATM with sufficient balance. |
| Preconditions | 1. User has a valid ATM card. 2. ATM is operational and connected to the bank network. 3. User has a sufficient account balance. |
| Test Steps | 1. Insert the ATM card. 2. Enter valid PIN. 3. Select "Withdrawal" option. 4. Enter the withdrawal amount (e.g., ₹5000). 5. Confirm the transaction. 6. Collect cash and receipt. |
| Test Data | Withdrawal Amount: ₹5000; PIN: 1234 |
| Expected Result | ATM dispenses ₹5000; account balance is updated correctly; transaction receipt is printed. |
| Actual Result | (To be filled after execution) |
| Status | Pass/Fail |
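As an illustration, the manual test case above could be automated as a Python check against a toy `Account` class. The class and its `withdraw` method are assumptions made for this sketch, not a real banking API:

```python
class Account:
    """Toy account model for the ATM-TC-001 sketch; amounts in rupees."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("Insufficient balance")
        self.balance -= amount
        return amount  # cash dispensed

# ATM-TC-001: precondition is a sufficient balance
account = Account(balance=10_000)
dispensed = account.withdraw(5_000)
assert dispensed == 5_000          # expected: ATM dispenses ₹5000
assert account.balance == 5_000    # expected: balance updated correctly
```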

2. List and explain 5 key points to be considered while creating a test plan?
Ans) A test plan is a formal document that outlines the scope, approach, resources, and schedule for software
testing activities. It acts as a roadmap for the testing team, ensuring the testing process is structured and
effective. When creating a test plan, several important factors must be considered to ensure it
is clear, comprehensive, and achievable.
1. Scope of Testing:-
• Definition: The boundaries of testing — what features will be tested and what will not.
• Importance: Prevents unnecessary testing efforts and focuses resources on critical areas.
• Example: For an e-commerce application, testing might include login, product search, and
checkout, but exclude admin panel features.

2. Test Objectives:-
• Definition: The specific goals of the testing process.
• Importance: Helps the team understand what success looks like for the project.
• Example: “Verify that the payment gateway processes transactions correctly under high load.”

3. Resource Planning:-
• Definition: Identifying the team members, tools, and infrastructure needed for testing.
• Importance: Ensures the availability of skilled testers and appropriate tools before testing
starts.
• Example: Assigning automation testers for regression testing and manual testers for
exploratory testing.

4. Test Environment Setup:-


• Definition: The configuration of hardware, software, and network conditions under which tests
will run.
• Importance: Ensures that test results are accurate and reflect real-world usage.
• Example: Creating a staging environment that mirrors the production setup.

5. Risk Management:-
• Definition: Identifying and planning for potential issues that could affect testing.
• Importance: Allows the team to prepare mitigation strategies for possible failures.
• Example: If the integration API is not stable, the plan should include mock API testing as a
backup.
3. Differentiate between Positive and Negative Test Cases?
Ans) In software testing, test cases can be broadly categorized into positive and negative types.
• Positive Test Cases confirm that the system works as expected for valid input and correct
usage.
• Negative Test Cases verify that the system handles invalid input or unexpected user behavior
gracefully, without crashing or producing incorrect results.

1. Positive Test Cases:-

• Definition: Designed to validate that the application behaves correctly with valid inputs and
under normal conditions.
• Objective: Confirm that the system meets the specified requirements.
• Example: In a login page, entering a correct username and password should successfully log
the user in.
Characteristics:-
1. Based on valid scenarios.
2. Tests intended/expected use of the system.
3. Focuses on confirming correct functionality.

2. Negative Test Cases:-

• Definition: Designed to validate that the application can handle invalid inputs and abnormal
conditions without failure.
• Objective: Ensure the system is robust and error-handling mechanisms work correctly.
• Example: In a login page, entering an invalid password should display an error message
without granting access.

Characteristics:-
1. Based on invalid scenarios.
2. Tests unintended or incorrect usage.
3. Focuses on system stability and error messages.
3. Key Differences Table

| Aspect | Positive Test Cases | Negative Test Cases |
|---|---|---|
| Definition | Validate correct behavior for valid inputs | Validate error handling for invalid inputs |
| Objective | Confirm the system meets requirements | Ensure robustness and stability |
| Focus | Expected functionality | Unexpected or error scenarios |
| Example | Withdraw ₹500 from an ATM when balance is sufficient | Attempt withdrawal greater than available balance |
| Outcome | Successful operation | Error message or rejection without crash |
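A minimal Python sketch of the two categories, using a hypothetical `login` function and credential store (both are assumptions for illustration):

```python
VALID_USERS = {"alice": "Secret@1"}  # hypothetical credential store

def login(username, password):
    """Return True on valid credentials, False otherwise (no crash)."""
    return VALID_USERS.get(username) == password

# Positive test case: valid input, expected success
assert login("alice", "Secret@1") is True
# Negative test case: invalid password, expected graceful rejection
assert login("alice", "wrong") is False
# Negative test case: unknown user, still no crash
assert login("mallory", "Secret@1") is False
```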

4. Write 3 positive and 3 negative test cases for user registration?


Ans) Scenario: Testing the user registration functionality of a web application.

Positive Test Cases (Valid scenarios ensuring correct functionality):-

| Test Case ID | Description | Test Data | Expected Result |
|---|---|---|---|
| PTC-001 | Register with valid details | Username: john123, Email: [email protected], Password: Pass@123 | User account created successfully, confirmation email sent |
| PTC-002 | Register with a strong password and unique email | Username: mary_user, Email: [email protected], Password: Test@456 | User registered successfully, redirected to login page |
| PTC-003 | Register using all required mandatory fields only | Username: alex99, Email: [email protected], Password: Alex@789 | Registration successful without optional fields |

Negative Test Cases (Invalid scenarios ensuring error handling):-

| Test Case ID | Description | Test Data | Expected Result |
|---|---|---|---|
| NTC-001 | Register with already registered email | Username: john_new, Email: [email protected], Password: Pass@123 | Error message: "Email already exists" |
| NTC-002 | Register with weak password | Username: weakpass, Email: [email protected], Password: 12345 | Error message: "Password must contain uppercase, lowercase, numbers, and special characters" |
| NTC-003 | Register with invalid email format | Username: noemail, Email: abc@com, Password: Pass@123 | Error message: "Invalid email format" |

5. Explain the IEEE 829 Test Plan Format with its components?
Ans) IEEE 829 Test Plan Format and Its Components
1. Introduction:-
The IEEE 829 Standard for Software Test Documentation is a globally
recognized framework that defines the various documents used in software testing. One of the most
important documents under this standard is the Test Plan, a structured document that defines
the scope, approach, resources, and schedule of the testing process. It ensures clarity, standardization,
and efficiency in test execution.
2. Components of the IEEE 829 Test Plan

| Component | Description |
|---|---|
| 1. Test Plan Identifier | A unique ID or reference number to track the test plan. Helps in version control. |
| 2. Introduction | Overview of the project, testing objectives, and purpose of the document. |
| 3. Test Items | List of software modules, features, or systems to be tested, with version details. |
| 4. Features to be Tested | Specifies the functionalities and features within the testing scope. |
| 5. Features Not to be Tested | Lists functionalities excluded from the testing cycle, with reasons for exclusion. |
| 6. Approach | Describes the testing strategy, types of testing (Unit, Integration, System, Acceptance), and testing techniques (Black-box, White-box, Automation). |
| 7. Item Pass/Fail Criteria | Defines the conditions under which a test is marked as passed or failed. |
| 8. Suspension Criteria and Resumption Requirements | Specifies when to pause testing and the conditions required to resume. |
| 9. Test Deliverables | Lists all outputs from the testing process, such as test cases, defect reports, and summary reports. |
| 10. Testing Tasks | Details all testing-related activities and assigns responsibilities. |
| 11. Environmental Needs | Specifies the required hardware, software, network, and tools for testing. |
| 12. Responsibilities | Defines the roles and duties of each team member in the testing process. |
| 13. Staffing and Training Needs | Identifies required personnel and any specialized training needed. |
| 14. Schedule | Provides the timeline for each testing activity. |
| 15. Risks and Contingencies | Lists potential risks and the contingency plans to address them. |
| 16. Approvals | Contains the names and signatures of stakeholders approving the plan. |

Unit-3
1. A) What is Boundary Value Analysis (BVA) in software testing, & what is its primary objective?
Ans) Boundary Value Analysis (BVA) in Software Testing

Definition:-
Boundary Value Analysis (BVA) is a black-box testing technique used to identify defects
at the boundaries of input ranges rather than within the middle of those ranges.
The idea is that errors often occur at the “edge cases” where input changes from one valid value to
another, or from valid to invalid.

Primary Objective:-
The main objective of BVA is to:-
• Detect defects at the boundaries of input domains.
• Ensure that the software works correctly for the minimum, maximum, just inside, and just
outside boundary values.
• Reduce the number of test cases while maintaining high defect detection efficiency.
Key Points:-
• Type: Black-box testing technique.
• Focus: Minimum & maximum input values and their neighbors.
• Benefit: Higher chance of catching boundary-related errors with fewer test cases.

Example:-

If a form accepts age between 18 and 60, BVA test cases will be:
• Just below minimum: 17 (invalid)
• Minimum: 18 (valid)
• Just above minimum: 19 (valid)
• Just below maximum: 59 (valid)
• Maximum: 60 (valid)
• Just above maximum: 61 (invalid)
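A minimal Python sketch of these six BVA cases, assuming a hypothetical `is_valid_age` validator for the 18–60 rule:

```python
def is_valid_age(age):
    """Hypothetical form rule: age must be between 18 and 60 inclusive."""
    return 18 <= age <= 60

# BVA values: just below, at, and just above each boundary
boundary_cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}
for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected
```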

B) Why is BVA considered an effective test design technique?


Ans) Reasons for Effectiveness:-

1. High Defect Detection Rate-


• Many software defects occur at the boundaries of input ranges.
• BVA specifically targets these critical points, increasing the chance of finding bugs.
2. Covers Edge Cases-
• Edge values are often missed during random testing.
• BVA ensures that minimum, maximum, and just-outside values are tested.
3. Optimized Number of Test Cases-
• Tests only selected boundary points instead of the entire range.
• Reduces testing effort while maintaining strong coverage.
4. Applicable to a Wide Range of Inputs-
• Works for numeric ranges, dates, sizes, text length, and more.
• Makes it versatile across different types of software systems.

5. Early Detection of Critical Failures-


• Identifies issues before they affect normal operations, improving software reliability.

2. What is Equivalence Partitioning (EP) in software testing, and what is its primary goal?
Ans) Equivalence Partitioning (EP) in Software Testing
1. Definition:-
Equivalence Partitioning (EP) is a black-box test design technique in which the input
data of a software application is divided into partitions (or classes) of equivalent data. The idea is
that test cases from one partition are expected to behave the same way, so testing just one value from
each partition is sufficient to represent the entire group.

2. Primary Goal:-
The main goal of Equivalence Partitioning is to:
• Minimize the number of test cases while still achieving maximum coverage.
• Ensure that all possible input scenarios are covered by testing only one representative
value from each partition.
• Reduce redundancy in testing and save time without sacrificing quality.

3. Key Points
• Type: Black-box testing technique.
• Focus: Dividing inputs into valid and invalid partitions.
• Benefit: Fewer test cases with broad coverage.
4. Example
If a form accepts age between 18 and 60, the partitions would be:

| Partition Type | Values in Partition | Representative Test Value |
|---|---|---|
| Invalid (Below range) | Age < 18 | 15 |
| Valid | 18 ≤ Age ≤ 60 | 30 |
| Invalid (Above range) | Age > 60 | 75 |
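The three partitions can be exercised with one representative value each; `is_valid_age` below is a hypothetical validator for the 18–60 rule:

```python
def is_valid_age(age):
    """Hypothetical form rule: 18 <= age <= 60."""
    return 18 <= age <= 60

# One representative value per equivalence partition
partitions = [
    (15, False),  # invalid partition: age < 18
    (30, True),   # valid partition: 18 <= age <= 60
    (75, False),  # invalid partition: age > 60
]
for value, expected in partitions:
    assert is_valid_age(value) == expected
```

Three test values stand in for the whole input domain, which is exactly the reduction EP aims for.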

3. A) What is Decision Table Testing, and when is it particularly useful?


Ans) Decision Table Testing in Software Testing
1. Definition:-
Decision Table Testing is a black-box test design technique that uses a table
format to represent different combinations of inputs (conditions) and their corresponding system
actions (outputs). It is sometimes called a Cause-Effect Table because it maps causes (inputs) to effects
(outputs).

2. Purpose:-
The purpose of decision table testing is to:
• Identify and test all possible combinations of conditions and their results.
• Ensure that the software behaves correctly for every unique rule or decision.

3. When It Is Particularly Useful:-

Decision Table Testing is most effective when:


• The system has complex business rules with multiple conditions.
• The output depends on different combinations of inputs.
• You need to ensure that no combination of rules is missed.
• You are testing logical decision-making processes (e.g., loan approvals, insurance claims,
discount calculations).
4. Example:-
A bank’s loan approval rule:
• Condition 1: Customer has a stable income.
• Condition 2: Customer has no outstanding debt.
• Action: Loan is approved.
Decision Table:-

| Condition | Rule 1 | Rule 2 | Rule 3 | Rule 4 |
|---|---|---|---|---|
| Stable Income | Y | Y | N | N |
| No Debt | Y | N | Y | N |
| Action | Approve | Reject | Reject | Reject |
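The decision table maps directly onto code, one assertion per rule; `loan_decision` below is a hypothetical implementation of the bank's rule:

```python
def loan_decision(stable_income, no_debt):
    """Approve only when both conditions hold (Rule 1); otherwise reject."""
    return "Approve" if stable_income and no_debt else "Reject"

# One check per rule in the decision table
assert loan_decision(True, True) == "Approve"   # Rule 1: Y, Y
assert loan_decision(True, False) == "Reject"   # Rule 2: Y, N
assert loan_decision(False, True) == "Reject"   # Rule 3: N, Y
assert loan_decision(False, False) == "Reject"  # Rule 4: N, N
```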

B) What are the main components of a Decision Table? Explain each briefly.
Ans) Main Components of a Decision Table:-
A Decision Table is divided into specific sections to
clearly represent conditions, possible actions, and rules.

The four main components are:-

1. Conditions Stub:-
• Definition: A list of all conditions (inputs or decision factors) that can influence the outcome.
• Purpose: Clearly defines what factors are being tested.
• Example: “Customer has stable income”, “Customer has no debt”.

2. Condition Entries:-
• Definition: Possible values (usually Yes/No, True/False, or ranges) for each condition under
different rules.
• Purpose: Specifies how each condition is set for a particular rule.
• Example: For “Stable income” → Rule 1: Yes, Rule 2: No.

3. Actions Stub:-
• Definition: A list of all possible actions or outcomes the system can take based on the
conditions.
• Purpose: Describes what will happen when a specific combination of conditions is met.
• Example: “Approve loan”, “Reject loan”.

4. Action Entries:-
• Definition: The specific actions that correspond to each rule in the table.
• Purpose: Shows the decision/output for the combination of condition entries in that rule.
• Example: Rule 1 → Approve loan; Rule 2 → Reject loan.

4. List and briefly describe the common states in a typical Defect Life Cycle (Bug Life Cycle)?
Ans) Common States in a Typical Defect Life Cycle (Bug Life Cycle):-

The Defect Life Cycle (or Bug Life Cycle) describes the series of
states a defect goes through from its discovery to its closure. Each state indicates the current status of
the defect in the testing and fixing process.

1. New:-
• The defect is reported for the first time and logged into the defect tracking system.
• Awaiting review and verification.

2. Assigned:-
• The defect is assigned to a developer or responsible team for analysis and fixing.

3. Open:-
• The developer starts working on the defect to fix it.
• The bug is active and under investigation.
4. Fixed:-
• The developer has applied a code change to resolve the defect.
• The fix is ready for testing.

5. Pending Retest:-
• The defect is waiting for the tester to retest it after the fix is applied.

6. Retest:-
• The tester verifies if the defect is fixed by executing the relevant test cases.

7. Verified:-
• The tester confirms that the defect no longer exists and the fix works as intended.

8. Closed:-
• The defect is marked as closed when it is successfully verified and no longer reproducible.

9. Reopened (if applicable):-


• If the defect still exists after being fixed, it is moved back to an active state for further
investigation.

10. Deferred (if applicable):-


• The defect is postponed to be fixed in a future release due to lower priority or time constraints.

11. Rejected (if applicable):-


• The defect is invalid, not reproducible, or works as intended; hence no fix is needed.
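The states above can be sketched as a small state machine. The transition map below is a simplified assumption for illustration; real defect trackers define their own workflows:

```python
# Hypothetical allowed transitions between defect states
TRANSITIONS = {
    "New": {"Assigned", "Rejected", "Deferred"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Rejected", "Deferred"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Verified", "Reopened"},
    "Verified": {"Closed"},
    "Reopened": {"Assigned"},   # defect goes back to an active state
}

def move(state, new_state):
    """Advance a defect to a new state, rejecting illegal jumps."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"Illegal transition: {state} -> {new_state}")
    return new_state

# Happy path: New -> Assigned -> Open -> Fixed -> ... -> Closed
state = "New"
for nxt in ["Assigned", "Open", "Fixed", "Pending Retest",
            "Retest", "Verified", "Closed"]:
    state = move(state, nxt)
assert state == "Closed"
```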
5. What is the difference between "Severity" and "Priority" in bug tracking?
Ans) In bug tracking, both severity and priority describe important aspects of a defect, but they focus
on different perspectives — technical impact vs. business urgency.

1. Severity:-
• Definition: Indicates the impact of the defect on the system’s functionality.
• Focus: Measures the seriousness of the defect from a technical point of view.
• Decided by: Testers / QA team based on how badly the bug affects the software.
• Example: A system crash when clicking “Save” is High Severity.

2. Priority:-
• Definition: Indicates the order in which the defect should be fixed.
• Focus: Measures the urgency of resolving the defect from a business point of view.
• Decided by: Project managers / Product owners based on delivery deadlines and customer
needs.
• Example: A spelling mistake on the homepage before a product launch may be High
Priority even if it’s Low Severity.

3. Key Differences Table

| Aspect | Severity | Priority |
|---|---|---|
| Meaning | Impact of the defect on the system | Urgency of fixing the defect |
| Focus | Technical seriousness | Business urgency |
| Set By | Tester / QA team | Project Manager / Product Owner |
| Time Factor | Independent of deadlines | Dependent on delivery schedule |
| Example | App crashes on login → High Severity | Typo in logo before launch → High Priority |
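A short Python sketch of the distinction: each defect record (hypothetical data) carries both fields, but the fix order is sorted by priority, not severity:

```python
# Hypothetical defect records; severity and priority are tracked separately
bugs = [
    {"id": "BUG-1", "summary": "App crashes on login",
     "severity": "High", "priority": "High"},
    {"id": "BUG-2", "summary": "Typo in logo before launch",
     "severity": "Low", "priority": "High"},
    {"id": "BUG-3", "summary": "Rare report glitch",
     "severity": "Medium", "priority": "Low"},
]

RANK = {"High": 0, "Medium": 1, "Low": 2}

# Business urgency (priority) decides what gets fixed first, so the
# low-severity typo is scheduled ahead of the medium-severity glitch.
fix_order = sorted(bugs, key=lambda b: RANK[b["priority"]])
assert [b["id"] for b in fix_order[:2]] == ["BUG-1", "BUG-2"]
```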
