Manual Testing Questions and Answers

The document is a comprehensive guide on manual testing concepts aimed at freshers, covering various topics such as regression testing, retesting, and the software testing life cycle. It includes definitions, differences between testing types, and methodologies like boundary value analysis and equivalence partitioning. Additionally, it provides insights into testing techniques, bug reporting, and ensuring test coverage.


Manual Testing Q&A for Freshers

Contents

1. What is Regression Testing?
2. What is Retesting?
3. What is the difference between Verification and Validation?
4. Smoke and Sanity Testing
5. Difference between Sanity Testing and Retesting
6. Difference between Regression Testing and Sanity Testing
7. What is Boundary Value Analysis (BVA)?
8. What is Equivalence Partitioning?
9. What is Compatibility Testing?
10. What is the Software Testing Life Cycle (STLC)?
11. What is a Defect Life Cycle?
12. What are the entry and exit criteria for a test cycle?
13. What is the difference between a bug, defect, error, and failure?
14. What if a test case fails during execution?
15. How do you ensure complete test coverage?
16. How do you prioritize test cases in manual testing?
17. What if the developer is not accepting the bug you reported?
18. What is the difference between Alpha and Beta Testing?
19. What is Usability Testing?
20. What is Monkey Testing?
21. What is the role of a Requirement Traceability Matrix (RTM)?
22. What is a Bug Report?
23. Testing Techniques in Software Testing
24. What is Shift Left Testing?
25. What is Accessibility Testing?

How to Use This Document


This document is designed to be interactive. Use the hyperlinked table of contents to
navigate to specific questions. Each section includes a "Back to Contents" link for easy
navigation. In Word, apply "Heading" styles to section titles to enable collapsible sections
(View > Navigation Pane > check "Show Navigation Pane").

1. What is Regression Testing?


Answer: Regression testing ensures that new code changes haven't adversely affected existing functionalities.

• Purpose: To ensure that new code changes haven’t introduced new defects or
negatively impacted existing functionality.
• Scope: Broader than retesting, potentially covering the entire application or specific modules impacted by the changes.
• Trigger: Performed after new features are added, code is modified, or bugs are
fixed.
• Focus: Detects unexpected side effects of changes, ensuring the software remains
stable.
• Test Cases: May involve re-executing existing test cases (including those that
previously passed) and potentially new test cases.
Back to Contents

2. What is Retesting?
Answer: Retesting verifies that a previously identified bug has been fixed and the functionality is working as expected.

• Purpose: To verify that a specific bug has been fixed and the functionality is
working as intended.
• Scope: Focused on the specific area or functionality where the bug was found.
• Trigger: Occurs after a bug has been reported and fixed by a developer.
• Focus: Ensures the original issue is resolved, not looking for new issues.
• Test Cases: Usually involves re-executing the test cases that originally failed due
to the bug.
Back to Contents

3. What is the difference between Verification and Validation?


Answer: Verification examines whether the product is being built right according to requirements, while validation examines whether the right product is being built to meet user needs.

• Verification: Checks if the product is being built correctly (based on requirements/design).
• Validation: Checks if the right product is being built (meets user needs).
Back to Contents

4. Smoke and Sanity Testing


Answer:

• Smoke Testing: Like a general health check-up for the application, ensuring that
the essential features work and the build is ready for further, detailed testing.
• Sanity Testing: More focused, like a specialized check-up, verifying that specific
new features or bug fixes work correctly and haven’t caused issues elsewhere in the
software.
Smoke testing is broad and shallow, executed frequently after builds to ensure overall stability. Sanity testing is narrow and deep, focused on verifying specific functionality after targeted changes have been made to the code.
Back to Contents

5. Difference between Sanity Testing and Retesting
Answer: Sanity testing and retesting serve different purposes in software testing: sanity testing verifies that specific functionalities or core modules work as expected after minor changes or bug fixes, whereas retesting specifically checks that previously identified defects, now claimed as fixed by developers, have actually been resolved.
Back to Contents

6. Difference between Regression Testing and Sanity Testing


Answer: Regression testing is broader, ensuring that new changes haven't broken existing functionalities across the application, while sanity testing is narrower, focusing on verifying specific functionalities or bug fixes after minor changes. Regression testing often involves a larger set of test cases, whereas sanity testing targets a subset for quick validation.
Back to Contents

7. What is Boundary Value Analysis (BVA)?


Answer: BVA is a black-box test design technique where tests are designed at the edges of input values. For example, for an input range of 1 to 100, you test 0, 1, 100, and 101.
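The idea above can be sketched in code. This is a minimal illustration, assuming a hypothetical accepts() validator for the 1-to-100 range; it is not tied to any particular application.

```python
def accepts(value: int) -> bool:
    """Hypothetical validator: accepts integers from 1 to 100 inclusive."""
    return 1 <= value <= 100

# BVA selects values at each boundary and just beyond it.
boundary_cases = {0: False, 1: True, 100: True, 101: False}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"boundary case {value} failed"
```

Defects cluster at boundaries (off-by-one comparisons such as < vs <=), which is why these four values find more bugs than values from the middle of the range.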
Back to Contents

8. What is Equivalence Partitioning?


Answer: It divides input data into valid and invalid partitions. You test only one value
from each partition, assuming others will behave similarly.
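As a sketch, using the same hypothetical 1-to-100 validator as in the BVA example, one representative value is tested from each partition instead of every value:

```python
def accepts(value: int) -> bool:
    return 1 <= value <= 100  # hypothetical validator

# One representative per partition; any other value in the same
# partition is assumed to behave the same way.
partitions = {
    "invalid_below": (-5, False),  # any value < 1
    "valid": (50, True),           # any value in 1..100
    "invalid_above": (150, False), # any value > 100
}

for name, (representative, expected) in partitions.items():
    assert accepts(representative) == expected, f"partition {name} failed"
```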
Back to Contents

9. What is Compatibility Testing?


Answer: Compatibility testing checks how the software behaves across different:

• Browsers
• Operating Systems
• Devices

• Network environments
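In practice these dimensions are often combined into a test matrix, where each pair (or tuple) becomes one configuration to execute. A small sketch with illustrative browser and OS names:

```python
from itertools import product

browsers = ["Chrome", "Firefox", "Safari"]
systems = ["Windows 11", "macOS 14", "Ubuntu 22.04"]

# Every browser/OS pair is one compatibility configuration.
matrix = list(product(browsers, systems))
assert len(matrix) == 9  # 3 browsers x 3 operating systems
for browser, os_name in matrix:
    print(f"run suite on {browser} / {os_name}")
```

Teams usually prune this matrix by market share or risk, since it grows multiplicatively with each new dimension.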
Back to Contents

10. What is the Software Testing Life Cycle (STLC)?


Answer: STLC includes the following phases:

1. Requirement Analysis
2. Test Planning
3. Test Case Development
4. Environment Setup
5. Test Execution
6. Test Closure
Back to Contents

11. What is a Defect Life Cycle?


Answer: The Defect Life Cycle is the journey of a defect through different states like:

• New → Assigned → Open → Fixed → Retest → Verified → Closed


• If not fixed, it may go to: Reopen → Deferred → Rejected → Duplicate
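The life cycle above can be viewed as a state machine. The transition map below mirrors the states listed and is illustrative only; real trackers (Jira, Bugzilla) each define their own workflows.

```python
# Allowed state transitions for a defect (illustrative).
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Deferred", "Rejected", "Duplicate"},
    "Fixed": {"Retest"},
    "Retest": {"Verified", "Reopen"},
    "Verified": {"Closed"},
    "Reopen": {"Assigned"},
}

def can_move(current: str, target: str) -> bool:
    """True if the defect may move directly from `current` to `target`."""
    return target in TRANSITIONS.get(current, set())

assert can_move("Retest", "Reopen")   # fix did not hold: defect reopens
assert not can_move("New", "Closed")  # a defect cannot be closed unverified
```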
Back to Contents

12. What are the entry and exit criteria for a test cycle?
Answer:
Entry Criteria: Conditions that must be met before testing begins.

• Test environment is ready


• Test cases are reviewed
• Requirements are finalized
Exit Criteria: Conditions that must be met to stop testing (e.g., 95% test cases passed).

• All critical test cases passed

• No high severity bugs open
• Test summary is completed
Back to Contents

13. What is the difference between a bug, defect, error, and failure?
Answer:

• Error: Mistake by a developer.


• Defect/Bug: Deviation found during testing.
• Failure: Deviation observed by the end-user in production.
Back to Contents

14. What if a test case fails during execution?


Answer:

• Analyze the failure.


• Check for environment or data issues.
• Log a bug if it’s a genuine defect.
• Retest once fixed.
Back to Contents

15. How do you ensure complete test coverage?


Answer:

• Use the Requirement Traceability Matrix (RTM).


• Review requirements and test cases.
• Perform peer reviews.
• Validate edge cases and alternate flows.
Back to Contents

16. How do you prioritize test cases in manual testing?
Answer: Based on:

• Business impact
• Frequently used functionality
• Areas affected by recent changes (for regression)
• Risk of failure
Back to Contents

17. What if the developer is not accepting the bug you reported?
Answer: You should:

• Reproduce the bug with clear steps.


• Provide evidence (logs/screenshots/videos).
• Map it to the requirement.
• If disagreement continues, escalate to the lead or use defect triage for resolution.
Back to Contents

18. What is the difference between Alpha and Beta Testing?


Answer:

• Alpha Testing: Done by internal teams at the developer's site.


• Beta Testing: Done by end-users in the real world before final release.
Back to Contents

19. What is Usability Testing?


Answer: Usability testing checks how easy, efficient, and user-friendly the application is
for end users.
Back to Contents

20. What is Monkey Testing?
Answer: Monkey testing involves inputting random data and performing random actions
to check if the application crashes or behaves unexpectedly.
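A minimal sketch of the idea, assuming a hypothetical parse_quantity() function under test: random input is generated and the only check is that the function never crashes, not that its output is meaningful.

```python
import random
import string

def parse_quantity(text: str) -> int:
    """Hypothetical function under test: parse a quantity, default to 0."""
    try:
        return max(0, int(text))
    except ValueError:
        return 0

random.seed(42)  # seed makes the random run reproducible
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    result = parse_quantity(junk)  # must not raise on any input
    assert isinstance(result, int)
```

Seeding the random generator is a common refinement: a crash found by a seeded monkey run can be reproduced exactly for debugging.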
Back to Contents

21. What is the role of a Requirement Traceability Matrix (RTM)?


Answer: An RTM maps each requirement to its test cases, ensuring that all requirements are covered by test cases and helping track test coverage.
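As an illustrative sketch (the requirement and test-case IDs are made up), an RTM can be modeled as a mapping from requirements to covering test cases, with uncovered requirements flagged as coverage gaps:

```python
# Requirement -> covering test cases (illustrative IDs).
rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # gap: no test case covers this requirement
}

uncovered = [req for req, cases in rtm.items() if not cases]
print(f"uncovered requirements: {uncovered}")
```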
Back to Contents

22. What is a Bug Report?


Answer: A bug report documents the defect with details like:

• Summary
• Steps to reproduce
• Expected vs actual results
• Severity/Priority
• Attachments (logs/screenshots)
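The fields above can be sketched as a simple record; the field names are illustrative and not tied to any particular tracking tool.

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    summary: str
    steps_to_reproduce: list
    expected: str
    actual: str
    severity: str   # impact on the system (e.g. High)
    priority: str   # urgency of the fix (e.g. P1)
    attachments: list = field(default_factory=list)

report = BugReport(
    summary="Login fails with valid credentials",
    steps_to_reproduce=["Open login page", "Enter valid credentials", "Click Login"],
    expected="User is logged in",
    actual="Error 500 is shown",
    severity="High",
    priority="P1",
)
```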
Back to Contents

23. Testing Techniques in Software Testing


Answer:

1. Black Box Testing


Black box testing evaluates software functionality without knowledge of internal
code structure. Testers focus on inputs and expected outputs, simulating end-user
interactions.
Techniques:

• Equivalence Partitioning: Dividing input data into valid and invalid partitions
• Boundary Value Analysis: Testing at the boundaries of input ranges

• State Transition Testing: Testing system behavior through different states
• Decision Table Testing: Using tables to represent business logic combinations
2. White Box Testing
White box testing examines internal code structure, logic, and algorithms. It requires programming knowledge to design test cases based on code paths and internal mechanisms.
Techniques:

• Statement Coverage: Ensuring all code statements are executed


• Branch Coverage: Testing all decision branches in the code
• Path Coverage: Testing all possible execution paths
• Condition Coverage: Testing all logical conditions
• Loop Testing: Focusing on loop constructs and iterations
3. Grey Box Testing
Grey box testing combines elements of both black box and white box approaches.
Testers have partial knowledge of internal workings while focusing on functionality.
Applications:

• Integration testing
• Penetration testing
• API testing
• Security testing
4. Functional Testing
Functional testing verifies that software features work according to specified requirements. It focuses on what the system does rather than how it performs.
Types of Functional Testing:

• Unit Testing: Testing individual components or functions


• Integration Testing: Testing interaction between integrated modules
• System Testing: Testing the complete integrated system
• User Acceptance Testing: Validating software meets user expectations
• Smoke Testing: Basic functionality verification after builds
• Sanity Testing: Focused testing after minor changes
• Regression Testing: Ensuring changes don’t break existing functionality
5. Non-Functional Testing
Non-functional testing evaluates software attributes like performance, security, and usability. It focuses on how well the system performs rather than specific functionality.
Types of Non-Functional Testing:

• Performance Testing: Evaluating system speed and responsiveness


• Load Testing: Testing system behavior under expected load
• Stress Testing: Assessing performance under extreme conditions
• Security Testing: Identifying vulnerabilities and security flaws
• Usability Testing: Evaluating user-friendliness and user experience
• Compatibility Testing: Ensuring software works across different environments
• Scalability Testing: Testing system’s ability to scale up or down
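To make branch coverage (a white-box technique from above) concrete, here is a minimal sketch with an illustrative discount function: covering both branches of the `if` requires one test where the condition is true and one where it is false.

```python
def apply_discount(total: int, is_member: bool) -> int:
    """Illustrative function: members get 10 off the total."""
    if is_member:        # branch taken when the condition is true
        return total - 10
    return total         # branch taken when the condition is false

assert apply_discount(100, True) == 90    # exercises the true branch
assert apply_discount(100, False) == 100  # exercises the false branch
```

With only the first test, statement coverage of the `if` line would be achieved but the false branch would remain untested; branch coverage demands both.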
Back to Contents

24. What is Shift Left Testing?


Answer: Shift Left Testing is a strategic software development approach that emphasizes
moving testing activities earlier in the Software Development Life Cycle (SDLC), rather
than waiting until the traditional end-phase testing. The term "shift left" comes from
the visual representation of project timelines, where earlier phases appear on the left side
and later phases on the right side.
Back to Contents

25. What is Accessibility Testing?


Answer: Accessibility testing evaluates whether digital products (websites, mobile applications, and software) are usable by people with disabilities. This testing ensures that applications comply with accessibility standards and provide equal access to users with various impairments, including visual, auditory, motor, and cognitive disabilities.
Back to Contents

