Software testing interview questions
part 1
1.What is the purpose of software testing? Why do we
test software?
The purpose of software testing is to:
1. Verify that the application meets functional and non-
functional requirements – it should do what it's
supposed to and perform efficiently.
2. Ensure the software is of high quality – identifying
defects or bugs that could affect functionality,
performance, or user experience.
3. Validate that it meets user expectations – usability,
reliability, and security are checked from the end-user
perspective.
4. Reduce risks – testing helps identify critical issues
early, reducing the cost and effort of fixing problems
later.
5. Support continuous improvement – repeated testing
ensures the software remains stable and dependable
as changes are introduced.
2.Can you explain the difference between functional
testing and non-functional testing, with an example for
each?
Functional Testing:
Functional testing verifies that each feature of the
software works according to the requirements and
specifications. It focuses on what the system does — its
functions and workflows.
Example: Testing a login page by entering valid or invalid
credentials (username and password) to ensure that the
user is authenticated and redirected appropriately.
Non-Functional Testing:
Non-functional testing evaluates how the system
performs under various conditions. It checks aspects like
performance, usability, reliability, and security, rather
than specific behaviors.
Example: Performance testing the application by
simulating multiple users logging in at the same time to
check how the system responds and whether it maintains
speed and stability.
3.What are the different levels of testing? Briefly explain
each one with an example.
1. Unit Testing:
Unit testing is the process of testing individual
components or functions of the software in isolation to
ensure they work correctly.
Example: Testing the addition function separately to
confirm it returns correct results before integrating it
with other functions.
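As a minimal sketch, a unit test for such an addition function might look like this in Python (the add function and test names are assumed for illustration):

```python
# Hypothetical unit under test (assumed for illustration)
def add(a, b):
    return a + b

# Unit tests: the function is exercised in isolation, with no
# other modules or dependencies involved
def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, -4) == -5

test_add_positive()
test_add_negative()
```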
2. Integration Testing:
Integration testing focuses on verifying that multiple
modules or components interact and work together as
expected. It ensures that interfaces and data flow
between units are correct.
Example: Testing how the addition and subtraction
functions work together in a calculator application.
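An integration check for that scenario might be sketched as follows, assuming a hypothetical Calculator class that wires the two units together (all names here are illustrative):

```python
# Hypothetical units, already unit-tested in isolation
def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

# A simple calculator that integrates both units through shared state
class Calculator:
    def __init__(self):
        self.result = 0

    def apply(self, op, value):
        # Integration point: data flows from each unit into self.result
        if op == "+":
            self.result = add(self.result, value)
        elif op == "-":
            self.result = subtract(self.result, value)
        return self.result

# Integration test: verify the units cooperate through the interface
calc = Calculator()
calc.apply("+", 10)
calc.apply("-", 4)
assert calc.result == 6
```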
3. System Testing:
System testing tests the entire software as a whole,
validating both functional requirements (features) and
non-functional requirements (performance, usability,
etc.).
Example: Testing the complete calculator app, including
UI elements, data validation, performance, and
responsiveness.
4. User Acceptance Testing (UAT):
UAT is the final stage of testing where the software is
tested in a real-world environment by the clients or end
users to ensure it meets business requirements and user
expectations.
Alpha Testing: Performed internally with testers and
business clients before the software is released
externally. Focuses on identifying bugs and improving
stability.
Beta Testing: Performed externally with selected end
users along with clients. The goal is to gather feedback
on usability, performance, and overall experience
before final release.
4.What is the difference between severity and priority in
a defect? Explain with an example.
Severity:
Severity indicates how serious or critical a defect is in
terms of the system’s functionality or stability. It reflects
the impact on the application’s operation.
Example: If the login button doesn’t work, users cannot
access the system, blocking further testing. This is a high
severity defect.
Priority:
Priority refers to how soon the defect should be fixed
based on business requirements and user expectations. It
reflects how urgently it should be addressed.
Example: A mismatch in the company’s logo doesn’t stop
users from using the application, but it affects branding
and customer perception. Therefore, it may be given a
high priority by the client.
5.What is regression testing and when is it performed?
Explain why it is important.
Regression Testing:
Regression testing is performed after changes are made
to the software — such as bug fixes, enhancements, or
updates — to ensure that the new changes haven’t
adversely affected existing functionalities. It involves
retesting not only the modified module but also other
related or interconnected modules to verify that the
application as a whole still works as expected.
Example:
In an e-commerce application, if the pricing display is
changed from US dollars to Indian rupees, regression
testing would involve verifying that:
The order page reflects the new pricing correctly.
The cart section updates the total price based on the
new currency.
Payment methods and checkout functionalities still
work as before.
Importance:
Regression testing is crucial because even small changes
in code can unintentionally affect other parts of the
application. By thoroughly testing both the updated
module and connected functionalities, it helps:
Maintain software stability.
Prevent new bugs from being introduced.
Ensure a seamless experience for users even after
updates or fixes.
6.What is the difference between smoke testing and
sanity testing? Provide examples to support your answer.
Smoke Testing:
Smoke testing is a preliminary test that checks whether
the critical and core functionalities of the application are
working after a new build or deployment. It’s like a “build
verification test” to ensure that the software is stable
enough for further detailed testing.
Example: In a login page, checking that the username
field, password field, and login button are functioning
correctly before proceeding with more in-depth tests.
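A smoke check for that login page could be sketched like this; the page dictionary below is a stand-in for the real UI (a real smoke test would use a UI driver such as Selenium):

```python
# Stand-in for a freshly deployed build's login page
# (True means the element is present and responsive)
login_page = {
    "username_field": True,
    "password_field": True,
    "login_button": True,
}

def smoke_test(page):
    # Only critical, core elements are checked; if any is missing,
    # the build is too unstable for detailed testing to proceed
    critical = ["username_field", "password_field", "login_button"]
    return all(page.get(element) for element in critical)

assert smoke_test(login_page)  # build is stable enough to continue
```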
Sanity Testing:
Sanity testing is a focused test conducted after changes
or bug fixes to ensure that the specific functionalities
affected by the changes are working as expected, without
testing the entire system. It confirms that the
modifications haven’t broken related functionalities.
Example: After adding a middle name field in a
registration form, checking that the first name and last
name fields are still accepting input and working
correctly.
7.What is a test case? What are the essential components
of a test case? Provide an example of a simple test case
and explain each part.
Test Case:
A test case is a documented set of conditions, inputs,
steps, and expected results designed to verify that a
particular functionality or feature of an application works
as intended. It helps testers systematically check
different parts of the software and ensure consistency in
testing.
Essential Components of a Test Case:
Test Case ID: A unique identifier for the test case (e.g.,
TC_001).
Test Scenario: The high-level functionality or feature
being tested (e.g., login functionality).
Test Case Name: A descriptive title for the test case
(e.g., Login Page Functionality Testing).
Module Name: The part of the application being tested
(e.g., Login Page).
Pre-requisites: Conditions or setup required before
executing the test (e.g., open the URL in a browser and
ensure the login page loads).
Test Case Description: A brief explanation of what the
test case is aiming to validate.
Test Steps: A sequence of actions that need to be
performed (e.g., open login page → enter valid
username → enter password → click submit).
Test Data (optional but important): Specific input data
like username and password.
Expected Results: What the system should do after
executing the test steps (e.g., user is redirected to the
home page).
Obtained Results: The actual result after running the
test case.
Priority: Indicates the importance or urgency of this
test case (e.g., high priority if it's a core function like
login).
Comments/Attachments: Additional notes, bugs,
screenshots, or observations found during testing.
Example Test Case:
Test Case ID: TC_001
Test Scenario: Login Testing
Test Case Name: Login Page Functionality Testing
Module Name: Login Page
Pre-requisites: Open the given URL in the Google Chrome
browser. Ensure the login page is displayed correctly.
Test Case Description: Verify that the login functionality
works properly when valid credentials are provided.
Test Steps:
1. Open the login page.
2. Enter a valid username.
3. Enter a valid password.
4. Click the submit button.
Expected Result: The user should be redirected to the
home page.
Obtained Result: (To be filled after testing)
Priority: High – login is a critical feature.
Comments: Any issues found during testing, such as
error messages or page loading issues, should be
documented here with screenshots.
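The same test case could also be captured as a structured record, for example a Python dictionary whose fields mirror the components above (the field names and evaluate helper are illustrative, not a standard format):

```python
test_case = {
    "id": "TC_001",
    "scenario": "Login Testing",
    "name": "Login Page Functionality Testing",
    "module": "Login Page",
    "prerequisites": "Open the given URL in Chrome; login page is displayed",
    "description": "Verify login works with valid credentials",
    "steps": [
        "Open the login page",
        "Enter a valid username",
        "Enter a valid password",
        "Click the submit button",
    ],
    "expected_result": "User is redirected to the home page",
    "obtained_result": None,  # filled in after execution
    "priority": "High",
}

# Simple pass/fail check once the obtained result has been recorded
def evaluate(tc):
    return tc["obtained_result"] == tc["expected_result"]
```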
8.What is the difference between verification and
validation in software testing? Explain with examples for
each.
Verification:
Verification is the process of evaluating the design,
documentation, and intermediate products of the
software development lifecycle to ensure that the system
is being built according to the specified requirements and
standards. It answers the question, “Are we building the
product right?”
Example: In a login page, verification would involve
checking that the username and password fields are
present, correctly labeled, and aligned as per the design
specifications before actual user interaction.
Validation:
Validation is the process of testing the final product to
ensure that it meets the user’s needs and requirements. It
answers the question, “Are we building the right
product?”
Example: In the login page, validation would involve
entering valid and invalid credentials to check if the
authentication works and users are redirected
appropriately.
9.What is test data and why is it important in software
testing? Provide examples of how test data is used in
testing a login page.
Test Data:
Test data refers to the set of input values that are used
to execute test cases in order to verify the functionality,
performance, and behavior of the software. It is essential
for simulating real-world scenarios and validating how the
system responds to various inputs.
Importance of Test Data:
1. Helps Validate Different Scenarios
2. Ensures Comprehensive Coverage
3. Supports Automation and Regression Testing
4. Improves Accuracy
Examples in Login Page Testing:
1. Valid Test Data:
Username: [email protected]
Password: Password123!
➤ Used to verify that users can log in successfully.
2. Invalid Test Data:
Username: [email protected]
Password: wrongpass
➤ Used to ensure that users cannot log in with
incorrect credentials.
3. Boundary Test Data:
Username: Empty field
Password: Empty field
➤ Used to check how the system handles missing
input and displays error messages.
4. Special Character Test Data:
Username: user!@#
Password: pass!@#
➤ Used to verify how the system handles special
characters.
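The four categories above can drive a single data-driven check. In this sketch the registered email, password, and validate_login function are hypothetical stand-ins, since the real credentials and authentication logic aren’t given:

```python
# Hypothetical registered credentials (stand-ins for real test data)
REGISTERED = {"testuser@example.com": "Password123!"}

def validate_login(username, password):
    # Stand-in for the real authentication logic
    if not username or not password:
        return "error: missing input"            # boundary data
    if REGISTERED.get(username) == password:
        return "success"                         # valid data
    return "error: invalid credentials"          # invalid/special data

# Each row pairs one category of test data with its expected outcome
test_data = [
    ("testuser@example.com", "Password123!", "success"),
    ("testuser@example.com", "wrongpass",    "error: invalid credentials"),
    ("",                     "",             "error: missing input"),
    ("user!@#",              "pass!@#",      "error: invalid credentials"),
]

for username, password, expected in test_data:
    assert validate_login(username, password) == expected
```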
10.Can you explain what a defect is, and how it differs
from a bug? Also, why is it important to log defects
properly during testing?
Defect:
A defect is any flaw, error, or deviation from the
expected requirements or design in the software that can
cause it to behave incorrectly. Defects can occur in
various stages like requirements gathering, design,
development, or even configuration.
Example: A mismatch between the expected layout of the
login page and the actual design during development is a
defect.
Bug:
A bug is a defect that is discovered during testing, where
the software behaves in an unintended or incorrect way.
It’s the tester’s job to find these bugs by executing test
cases.
Example: If the login page allows a user to enter an
empty password and still logs them in, it’s a bug
identified by the tester.
Key Difference:
A defect refers to any issue in the product, regardless
of when or where it is found.
A bug specifically refers to an issue discovered during
testing.
Importance of Logging Defects Properly:
1. Tracking & Prioritization: It helps developers
understand which defects need urgent fixing.
2. Accountability: Provides a record of issues and who
reported them.
3. Quality Improvement: Ensures that all known issues
are addressed before release, reducing the risk of
failure in production.
4. Regression Prevention: Helps ensure the same issue
doesn’t occur in future builds by providing a history.
11.What is test coverage, and why is it important in
software testing? How can you measure or improve test
coverage?
Test Coverage:
Test coverage is a metric that measures how much of the
application’s functionality, code, or requirements have
been tested through executed test cases. It helps to ensure
that the software is thoroughly tested and that no
important feature or area is left unchecked.
How It’s Measured:
Test Coverage = (Number of Test Cases Executed / Total Number of Test Cases) × 100
For example, if there are 100 test cases and 80 have been
executed, the test coverage is 80%.
Test coverage can also be measured in different ways:
Requirement coverage: Ensuring all requirements are
covered by test cases.
Code coverage: Checking how much of the actual code
has been executed during testing.
Functionality coverage: Making sure every feature has
been tested.
Why Test Coverage is Important:
1. Ensures completeness
2. Identifies gaps
3. Improves quality
4. Supports decision-making
How to Improve Test Coverage:
1. Review requirements thoroughly and create test cases
for all scenarios.
2. Use tools to track which parts of the application have
been tested.
3. Include boundary, edge, and negative test cases.
4. Perform exploratory testing to find gaps not covered by
formal test cases.
5. Automate tests where possible to increase execution
speed and frequency.
12.What is the difference between a test plan and a test
case? Explain their purpose and how they are used in the
testing process.
Test Plan:
A test plan is a comprehensive document that outlines the
strategy, objectives, scope, resources, schedule, and
activities involved in the testing process. It defines what
will be tested, how it will be tested, who will test it, and
when. It serves as a guide for the testing team and helps
coordinate testing efforts.
Purpose of a Test Plan:
Define the scope and objectives of testing.
Identify resources, timelines, and roles.
Provide information about testing tools, environments,
and risks.
Ensure everyone involved is aligned on expectations and
processes.
Example Elements in a Test Plan:
Overview of the project.
Test objectives and scope.
Testing approach and methodologies.
Resources and team responsibilities.
Risks and mitigation plans.
Test schedule and deliverables.
Test Case:
A test case is a detailed document that describes the exact
steps, inputs, and expected results needed to verify a
specific functionality or feature of the software. It focuses
on how to test and ensures consistency when tests are
executed.
Purpose of a Test Case:
Guide testers to perform specific tests.
Document test inputs and expected outcomes.
Ensure repeatability and accuracy of tests.
Help identify defects when actual results differ from
expected ones.
13.What is exploratory testing and when would you use it
instead of scripted testing? Explain its advantages and
limitations.
Exploratory Testing:
Exploratory testing is a technique where testers actively
explore the application without predefined test cases or
scripts. Instead of following step-by-step instructions,
testers interact with the system dynamically, using their
experience, intuition, and creativity to discover defects,
gaps, or unusual behaviors.
When to Use It:
When time is limited and there’s no detailed
documentation available.
During early phases of testing to understand the system.
For usability testing to see how a user might navigate
the application.
To uncover hidden defects that structured tests might
miss.
Advantages of Exploratory Testing:
1. Flexibility: Testers can adapt their approach as they
learn more about the system.
2. Uncovers Hidden Issues: It helps identify bugs not
covered by scripted test cases.
3. User-Centric: Testers simulate real-world usage, helping
assess usability and user experience.
4. Encourages Creativity: Testers can experiment with
scenarios that may not have been anticipated during
planning.
Limitations of Exploratory Testing:
1. Lack of Documentation: It can be hard to reproduce
bugs if proper notes aren’t kept.
2. Time-Consuming: Since it’s unscripted, it may take
longer to explore thoroughly.
3. Incomplete Coverage: Some areas may be overlooked
without a structured approach.
4. Depends on Tester’s Skill: Effectiveness is highly reliant
on the tester’s experience and knowledge.
14.Can you explain what boundary value analysis (BVA) is
and how it is useful in testing? Provide an example related
to testing a field that accepts age between 18 and 60.
Boundary Value Analysis (BVA):
Boundary Value Analysis is a test design technique that
focuses on testing the values at the boundaries or edges of
input ranges. This is because errors are more likely to occur
at boundary conditions rather than in the middle of the
range. By testing values just below, at, and just above the
boundaries, testers can effectively identify defects without
testing every possible input.
Why It’s Useful:
Reduces the number of test cases while still ensuring
thorough testing.
Helps catch errors that occur at limits or transitions.
Ensures both valid and invalid edge cases are tested.
Example – Age Field:
Suppose an input field accepts age values between 18 and
60 inclusive. Using boundary value analysis, the following
test cases are created to test edge cases:
✔ One below lower boundary: 17 (invalid input)
✔ At lower boundary: 18 (valid input)
✔ At upper boundary: 60 (valid input)
✔ One above upper boundary: 61 (invalid input)
This ensures that the application correctly handles
boundary inputs where most errors are likely to occur.
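The four boundary cases can be checked directly against a validator; the is_valid_age function here is a stand-in for the real field validation, assuming the 18–60 inclusive range above:

```python
# Stand-in validator for the age field (18-60 inclusive, assumed)
def is_valid_age(age):
    return 18 <= age <= 60

# Boundary value analysis: just below, at, and just above each boundary
bva_cases = {
    17: False,  # one below lower boundary -> invalid
    18: True,   # at lower boundary        -> valid
    60: True,   # at upper boundary        -> valid
    61: False,  # one above upper boundary -> invalid
}

for age, expected in bva_cases.items():
    assert is_valid_age(age) == expected
```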
15.What is equivalence partitioning, and how is it different
from boundary value analysis? Provide an example using
the same age field between 18 and 60.
Equivalence Partitioning (EP):
Equivalence Partitioning is a test design technique that
divides the input data into partitions or groups where the
system behavior is expected to be the same. Instead of
testing every possible input, one representative value from
each partition is chosen to validate the functionality.
Why It’s Useful:
Reduces the number of test cases by grouping inputs
with similar behavior.
Ensures that all scenarios are covered without
exhaustive testing.
Helps in identifying defects efficiently.
Example – Age Field (18 to 60):
The valid range is between 18 and 60 inclusive, so we can
define partitions as:
1. Invalid range below lower limit: Less than 18 → Example
input: 10
2. Valid range: Between 18 and 60 → Example input: 36
3. Invalid range above upper limit: Greater than 60 →
Example input: 65
So, in equivalence partitioning, the test inputs would be:
10 → Invalid
36 → Valid
65 → Invalid
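The three partitions map to three representative checks; as in the boundary value sketch, is_valid_age is a stand-in for the real validator of the 18–60 inclusive range:

```python
# Stand-in validator for the age field (18-60 inclusive, assumed)
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition
partitions = [
    (10, False),  # invalid partition: below 18
    (36, True),   # valid partition: 18 to 60
    (65, False),  # invalid partition: above 60
]

for representative, expected in partitions:
    assert is_valid_age(representative) == expected
```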