Manual Testing Interview Q&A - Corrected & Enhanced
Q1. What do you mean by Software Testing?
Answer: Software testing is a systematic process of evaluating and verifying that a software application
or system functions as intended and meets specified requirements. It involves executing software
components using manual or automated tools to identify defects, gaps, or missing requirements. Testing
ensures the software is reliable, secure, performant, and provides a good user experience before it
reaches end users.
Q2. Why is testing required?
Answer: Testing is essential because:
1. Quality Assurance - Ensures the product meets business and user requirements
2. Early Bug Detection - Identifies defects early in development, reducing cost and time to fix
3. Risk Mitigation - Prevents potential revenue loss, security breaches, and reputation damage
4. Compliance - Ensures adherence to industry standards and regulations
5. User Satisfaction - Delivers a reliable and user-friendly product
6. Cost Effectiveness - Fixing bugs post-release is significantly more expensive than during
development
Q3. When should we stop testing?
Answer: Testing can be stopped when:
1. All planned test cases are executed with an acceptable pass rate (typically 95-98%)
2. Critical and high-priority defects are resolved and retested
3. Testing deadlines are met with risk assessment approval
4. Code coverage and functional coverage reach defined thresholds
5. Exit criteria defined in test plan are satisfied
6. Stakeholders formally approve test completion based on risk analysis
7. Budget and time constraints are reached (with proper risk documentation)
Note: Exhaustive (100%) testing is impossible for any non-trivial system, and pursuing it is not cost-effective.
Q4. What are the different levels of testing?
Answer:
1. Unit Testing - Testing individual components or modules in isolation
2. Integration Testing - Testing interfaces between integrated components
3. System Testing - Testing complete integrated system
4. Acceptance Testing - Validating system meets business requirements
Q5. What are the different types of testing?
Answer:
Functional Testing: Smoke, sanity, regression, user acceptance
Non-Functional Testing: Performance, load, stress, volume, security, usability, compatibility
By code knowledge: Black box, white box, and gray box testing
Maintenance Testing: Regression testing after system modifications
Q6. What is the difference between Verification and Validation?
Answer:
Aspect | Verification | Validation
Definition | "Are we building the product right?" | "Are we building the right product?"
Focus | Process compliance and standards | Meeting user requirements
Method | Static testing (reviews, inspections, walkthroughs) | Dynamic testing (executing code)
When | Throughout development phases | After development completion
Purpose | Ensures quality of development process | Ensures end product meets user needs
Q7. What is the difference between Functional and Non-Functional testing?
Answer:
Functional Testing: Validates what the system does - verifies specific functions and features work as
expected
Non-Functional Testing: Validates how the system performs - tests performance, security, usability,
reliability, scalability
Q8. What is Black Box Testing?
Answer: Black box testing is a testing technique where the tester evaluates software functionality without
knowledge of internal code structure, implementation details, or internal paths. Testers focus on input-
output behavior based on requirements and specifications.
Q9. What is White Box Testing?
Answer: White box testing (also called clear box or glass box testing) is a testing technique where the
tester has complete knowledge of internal code structure, logic, and implementation. It involves testing
internal paths, conditions, and loops within the code.
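As a minimal white-box sketch in Python: because the tester can see the three branches in the code, one test is written per branch. The classify_age function is hypothetical, invented here purely to demonstrate branch coverage.

```python
import unittest

def classify_age(age: int) -> str:
    # Hypothetical function under test, with three branches.
    if age < 0:
        raise ValueError("age cannot be negative")
    elif age < 18:
        return "minor"
    return "adult"

class TestClassifyAge(unittest.TestCase):
    # One test per branch: white-box knowledge of the code's structure
    # tells us exactly which inputs exercise each path.
    def test_negative_branch(self):
        with self.assertRaises(ValueError):
            classify_age(-1)

    def test_minor_branch(self):
        self.assertEqual(classify_age(17), "minor")

    def test_adult_branch(self):
        self.assertEqual(classify_age(18), "adult")

if __name__ == "__main__":
    unittest.main()
```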
Q10. What is Gray Box Testing?
Answer: Gray box testing combines both black box and white box testing approaches. Testers have
limited knowledge of internal code structure and use this information along with black box techniques to
design test cases. It's commonly used in integration testing.
Q11. What is the Software Testing Life Cycle (STLC)?
Answer: STLC is a systematic approach to testing with defined phases:
1. Requirement Analysis - Understanding and analyzing requirements
2. Test Planning - Creating test strategy and test plan
3. Test Case Development - Writing detailed test cases and test scripts
4. Test Environment Setup - Preparing test environment and test data
5. Test Execution - Running test cases and reporting defects
6. Test Closure - Test completion activities and lessons learned
Q12. What are some advantages of automation testing?
Answer:
1. Speed and Efficiency - Faster execution compared to manual testing
2. Reusability - Test scripts can be reused across different versions
3. Accuracy - Eliminates human errors in repetitive tasks
4. Cost-Effective - Reduces long-term testing costs for repeated test cycles
5. CI/CD Integration - Supports continuous integration and continuous deployment
6. Detailed Reporting - Provides comprehensive test reports and logs
7. 24/7 Execution - Can run unattended during off-hours
8. Better Test Coverage - Can execute more test scenarios in less time
Q13. What are the limitations of automation testing?
Answer:
1. High initial setup cost and time investment
2. Requires skilled resources for script development and maintenance
3. Not suitable for usability, exploratory, and ad-hoc testing
4. Maintenance overhead when application changes frequently
5. Cannot test user experience and look-and-feel aspects
6. Limited effectiveness for new features without established requirements
Q14. What is a Test Case?
Answer: A test case is a detailed document that specifies input data, execution conditions, testing
procedure, and expected results for testing a particular requirement or functionality. It typically includes:
- Test Case ID
- Test Description
- Pre-conditions
- Test Steps
- Expected Results
- Actual Results
- Pass/Fail Status
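As a rough sketch, the same fields can be captured as a structured record; the login scenario and field names below are illustrative assumptions, not a standard schema.

```python
test_case = {
    "id": "TC_LOGIN_001",
    "description": "Verify login with valid credentials",
    "preconditions": ["User account exists", "Login page is reachable"],
    "steps": [
        "Navigate to the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    "expected_result": "User is redirected to the dashboard",
    "actual_result": None,   # filled in during execution
    "status": "Not Run",     # Pass / Fail / Not Run
}
```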
Q15. What is a Test Plan?
Answer: A test plan is a comprehensive document that outlines the testing strategy, objectives, schedule,
resources, and scope for a testing project. It includes:
- Test objectives and scope
- Testing approach and strategy
- Entry and exit criteria
- Resource allocation
- Risk assessment
- Test deliverables and timelines
Q16. What is the Bug Life Cycle?
Answer: The bug (defect) life cycle represents the stages a defect goes through from discovery to closure:
1. New - Bug is reported
2. Assigned - Assigned to developer
3. Open - Developer starts working on fix
4. Fixed - Developer completes the fix
5. Retest - Tester verifies the fix
6. Verified - Fix is confirmed working
7. Closed - Bug is officially closed
8. Reopened - If fix doesn't work, bug is reopened
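One way to make these stages concrete is to model them as an allowed-transition map, as in the Python sketch below. Real trackers such as Jira let teams customize their workflows, so this exact set of transitions is an assumption.

```python
# Allowed transitions between the stages listed above.
ALLOWED_TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed"},
    "Fixed": {"Retest"},
    "Retest": {"Verified", "Reopened"},
    "Verified": {"Closed"},
    "Closed": {"Reopened"},
    "Reopened": {"Assigned"},
}

def move(current: str, target: str) -> str:
    """Return the target status if the transition is legal, else raise."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {target}")
    return target

print(move("Retest", "Reopened"))  # a failed retest reopens the bug
```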
Q17. What is Severity and Priority?
Answer:
Severity: Impact of the bug on system functionality (Critical, High, Medium, Low)
Priority: Urgency of fixing the bug from business perspective (P1, P2, P3, P4)
A bug can have high severity but low priority (e.g., an application crash in a rarely used legacy feature) or
low severity but high priority (e.g., a spelling mistake in the company name on the homepage).
Q18. What are different test design techniques?
Answer:
1. Equivalence Partitioning - Dividing input data into equivalent classes
2. Boundary Value Analysis - Testing at boundary values (techniques 1 and 2 are combined in the sketch after this list)
3. Decision Table Testing - Testing different combinations of conditions
4. State Transition Testing - Testing state changes in application
5. Use Case Testing - Testing based on user scenarios
6. Error Guessing - Testing based on experience and intuition
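A minimal pytest sketch of equivalence partitioning plus boundary value analysis, assuming a hypothetical discount rule invented for illustration (orders of 100-999 get 10% off): the partitions are below, inside, and above the range, and the boundaries are 99/100 and 999/1000.

```python
import pytest

def discount(amount: int) -> float:
    # Hypothetical rule under test: 10% off for amounts in [100, 999].
    return amount * 0.9 if 100 <= amount <= 999 else float(amount)

@pytest.mark.parametrize("amount, expected", [
    (99, 99.0),      # boundary just below the discount partition
    (100, 90.0),     # lower boundary of the discount partition
    (500, 450.0),    # representative value inside the partition
    (999, 899.1),    # upper boundary of the discount partition
    (1000, 1000.0),  # boundary just above the partition
])
def test_discount_boundaries(amount, expected):
    assert discount(amount) == pytest.approx(expected)
```

Note how five test cases cover three partitions and four boundaries; equivalence partitioning keeps the case count small while boundary value analysis targets the values most likely to be wrong.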
Q19. What is Smoke Testing?
Answer: Smoke testing (also called build verification testing) is a subset of test cases that covers basic
and critical functionalities. It's performed to verify that the most crucial functions of a system work
correctly after a new build deployment. If smoke tests fail, the build is considered unstable and returned
to development.
Q20. What is Sanity Testing?
Answer: Sanity testing is a narrow form of regression testing focused on a specific functionality or bug fix. It's
performed after receiving a relatively stable software build to verify that specific functionality works as
expected. It's more focused than smoke testing and typically unscripted.
Q21. What is the difference between Smoke and Sanity Testing?
Answer:
Aspect | Smoke Testing | Sanity Testing
Scope | Wide and shallow | Narrow and deep
Purpose | Build stability verification | Specific functionality verification
When | After new build deployment | After minor changes or bug fixes
Documentation | Usually documented | Usually undocumented
Performed by | Developers or testers | Typically testers
Q22. What is Regression Testing?
Answer: Regression testing is the process of testing existing functionality after code changes to ensure
that previously working features still function correctly. It helps detect if new code changes have broken
existing functionality.
Q23. What is Retesting?
Answer: Retesting is the process of testing specific functionality that previously failed to verify that
defects have been fixed. It involves re-executing the same test cases that initially failed.
Q24. What is the difference between Regression and Retesting?
Answer:
Aspect | Regression Testing | Retesting
Purpose | Verify existing functionality after changes | Verify specific bug fixes
Scope | Broader - tests multiple functionalities | Narrow - tests specific failed areas
Test Cases | May include new and existing test cases | Same test cases that previously failed
Timing | After any code changes | After bug fixes
Also known as | Maintenance testing | Confirmation testing
Q25. What is User Acceptance Testing (UAT)?
Answer: UAT is the final phase of testing where actual users test the system to verify it meets business
requirements and is ready for production use. Types include:
- Alpha Testing - Internal testing by organization's users
- Beta Testing - Testing by limited external users
- Business Acceptance Testing - Validation against business requirements
- Contract Acceptance Testing - Validation against contract criteria
Q26. What is API Testing?
Answer: API testing involves testing Application Programming Interfaces (APIs) to verify data exchange,
functionality, performance, and security between different software systems. It focuses on:
- Data exchange accuracy
- Response time and performance
- Error handling
- Security and authentication
- Integration between systems
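A minimal sketch of such a check using Python's requests library; the endpoint URL and response fields are assumptions made up for illustration.

```python
import requests

def test_get_user_returns_expected_fields():
    # Hypothetical endpoint; replace with the API under test.
    response = requests.get("https://api.example.com/users/1", timeout=5)
    assert response.status_code == 200             # basic functionality
    assert response.elapsed.total_seconds() < 2.0  # response-time check
    body = response.json()
    assert "id" in body and "name" in body         # data exchange accuracy
```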
Q27. What is Database Testing?
Answer: Database testing involves validating database operations, data integrity, performance, triggers,
and procedures. Key areas include:
- Data validity and integrity
- CRUD operations (Create, Read, Update, Delete)
- Database performance
- Transaction management
- Data security and access controls
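A small sketch of CRUD validation against an in-memory SQLite database; the users table and its columns are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# Create, then Read back and verify integrity.
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
assert conn.execute("SELECT name FROM users WHERE id = 1").fetchone() == ("alice",)

# Update and verify.
conn.execute("UPDATE users SET name = ? WHERE id = 1", ("bob",))
assert conn.execute("SELECT name FROM users WHERE id = 1").fetchone() == ("bob",)

# Delete and verify.
conn.execute("DELETE FROM users WHERE id = 1")
assert conn.execute("SELECT COUNT(*) FROM users").fetchone() == (0,)
conn.close()
```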
Q28. What is Cross-browser Testing?
Answer: Cross-browser testing ensures web applications function correctly across different browsers
(Chrome, Firefox, Safari, Edge), browser versions, and operating systems. It validates:
- Functionality consistency
- UI/UX compatibility
- Performance across browsers
- JavaScript and CSS rendering
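A rough cross-browser sketch using Selenium WebDriver, assuming Chrome and Firefox plus their drivers are installed locally; the URL and title check are placeholders. The key idea is that the identical assertion runs against each browser.

```python
from selenium import webdriver

for make_driver in (webdriver.Chrome, webdriver.Firefox):
    driver = make_driver()
    try:
        driver.get("https://example.com")
        # The same functional check must pass in every browser.
        assert "Example" in driver.title, f"title mismatch in {driver.name}"
    finally:
        driver.quit()
```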
Q29. What is Compatibility Testing?
Answer: Compatibility testing verifies software works correctly across different:
- Browser Compatibility - Different browsers and versions
- Operating System Compatibility - Windows, macOS, Linux
- Device Compatibility - Desktop, mobile, tablet
- Backward/Forward Compatibility - Different software versions
Q30. What is Performance Testing?
Answer: Performance testing evaluates system performance under various load conditions. Types
include:
- Load Testing - Normal expected load (see the sketch after this list)
- Stress Testing - Beyond normal capacity
- Volume Testing - Large amounts of data
- Spike Testing - Sudden load increases
- Endurance Testing - Extended periods
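A toy load-testing sketch: fire concurrent calls at a stand-in operation and check the slowest response time. Real load tests would target the actual system with a dedicated tool such as JMeter or Locust; the worker count and threshold here are arbitrary assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_system(_):
    # Stand-in for a real request to the system under test.
    start = time.perf_counter()
    time.sleep(0.05)
    return time.perf_counter() - start

# Simulate 200 calls from 50 concurrent "users" and check the slowest one.
with ThreadPoolExecutor(max_workers=50) as pool:
    durations = list(pool.map(call_system, range(200)))

assert max(durations) < 1.0, f"slowest call took {max(durations):.2f}s"
print(f"avg {sum(durations) / len(durations):.3f}s, max {max(durations):.3f}s")
```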
Q31. What is Security Testing?
Answer: Security testing identifies vulnerabilities and security flaws in applications. Key areas:
- Authentication and authorization
- Data protection and encryption
- SQL injection and XSS vulnerabilities (see the sketch after this list)
- Session management
- Input validation
- Access control mechanisms
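A minimal sketch of a SQL-injection check: a classic attack payload must not change the meaning of a query. The in-memory SQLite table is invented for illustration; the point is that parameterized queries treat the payload as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

malicious = "' OR '1'='1"
# The placeholder binds the payload as data, so no row should match.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ? AND password = ?",
    ("alice", malicious),
).fetchall()
assert rows == [], "injection payload must not bypass authentication"
conn.close()
```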
Q32. What is Usability Testing?
Answer: Usability testing evaluates user experience and interface design to ensure the application is
user-friendly, intuitive, and meets user expectations. It focuses on:
- Ease of use and navigation
- User interface design
- Content clarity and readability
- Task completion efficiency
- User satisfaction
Q33. What is Exploratory Testing?
Answer: Exploratory testing is an unscripted testing approach where testers simultaneously learn, design,
and execute test cases. Testers explore the application to discover defects through investigation and
learning rather than following predetermined test cases.
Q34. What is Ad-hoc Testing?
Answer: Ad-hoc testing is informal testing performed without specific test cases, documentation, or
requirements. Testers randomly test the application based on their understanding and intuition to find
defects that might be missed in formal testing.
Q35. What is Monkey Testing?
Answer: Monkey testing involves providing random inputs to the system to check system behavior and
identify crashes or unexpected behaviors. It can be:
- Dumb Monkey - Completely random inputs (see the sketch after this list)
- Smart Monkey - Inputs based on user behavior patterns
- Brilliant Monkey - Inputs based on application knowledge
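A "dumb monkey" sketch in Python: throw random inputs at a hypothetical function and only check that it never crashes. The parse_quantity function is invented for illustration.

```python
import random
import string

def parse_quantity(text: str) -> int:
    # Hypothetical function under test: should return 0 on junk, never crash.
    try:
        return max(0, int(text))
    except ValueError:
        return 0

# Dumb monkey: 10,000 bursts of random printable characters.
for _ in range(10_000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    assert isinstance(parse_quantity(junk), int)  # only checks "no crash"
```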
Q36. What is the difference between Release and Build?
Answer:
Aspect | Build | Release
Definition | Compiled version of software for internal testing | Stable version deployed to end users
Audience | Internal team (developers, testers) | End users/customers
Quality | May contain bugs and incomplete features | Thoroughly tested and stable
Frequency | Multiple builds per day/week | Less frequent, major versions
Documentation | Build notes for internal use | Release notes with features, fixes, known issues
Environment | Test/staging environment | Production environment
Q37. What is Configuration Management?
Answer: Configuration management is the process of managing and controlling changes to software,
hardware, documentation, and other project artifacts throughout the software development lifecycle. It
ensures consistency and traceability of all project components.
Q38. What is Risk-based Testing?
Answer: Risk-based testing prioritizes testing efforts based on risk assessment of different application
areas. High-risk areas receive more testing attention, while low-risk areas receive minimal testing. Factors
considered include:
- Business impact
- Technical complexity
- Change frequency
- Historical defect data
Q39. What is Test Data Management?
Answer: Test data management involves creating, maintaining, and managing data required for testing. It
includes:
- Test data creation and preparation
- Data privacy and security compliance
- Data refresh and synchronization
- Synthetic data generation
- Data masking for sensitive information
Q40. What are Entry and Exit Criteria?
Answer:
Entry Criteria: Conditions that must be met before testing can begin (requirements finalized, test
environment ready, test cases prepared)
Exit Criteria: Conditions that must be met before testing can be considered complete (all test cases
executed, critical bugs fixed, coverage targets met)
Q41. What is a Traceability Matrix?
Answer: A requirements traceability matrix (RTM) is a document that maps requirements to test cases,
ensuring all requirements are covered by testing. It helps track:
- Requirement coverage
- Test case mapping
- Impact analysis of requirement changes
- Testing completeness
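An RTM can be as simple as a mapping from requirement IDs to the test cases that cover them; the IDs below are invented for illustration, and an empty entry immediately exposes a coverage gap.

```python
rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no covering test case yet
}

uncovered = [req for req, tests in rtm.items() if not tests]
print("Requirements without test coverage:", uncovered)  # ['REQ-003']
```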
Q42. What is Static Testing?
Answer: Static testing involves examining code, requirements, and design documents without executing
the software. Methods include:
- Code reviews and walkthroughs
- Requirements review
- Design document inspection
- Static analysis tools
- Desk checking
Q43. What is Dynamic Testing?
Answer: Dynamic testing involves executing software code with test data to validate functionality. It
includes:
- Functional testing
- Non-functional testing
- Running test cases
- Observing system behavior
- Comparing actual vs. expected results
Q44. What is Mutation Testing?
Answer: Mutation testing is a white-box testing technique that introduces small changes (mutations) into
the source code, producing "mutants", in order to evaluate the quality of the test suite. If the tests fail on
a mutant, the mutant is said to be killed and the tests are considered effective; mutants that survive reveal
gaps in the suite.
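A hand-rolled illustration of the idea in Python: mutate one comparison operator and check whether a boundary test notices. Real mutation tools (e.g., mutmut for Python, PIT for Java) automate this across a whole codebase.

```python
def is_adult(age):           # original code under test
    return age >= 18

def is_adult_mutant(age):    # mutant: '>=' mutated to '>'
    return age > 18

def suite_passes(fn):
    # A boundary test at exactly 18 is what separates the two versions.
    return fn(18) is True and fn(17) is False

assert suite_passes(is_adult)             # suite passes on the original
assert not suite_passes(is_adult_mutant)  # suite fails on the mutant: killed
```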
Q45. What is Accessibility Testing?
Answer: Accessibility testing ensures applications are usable by people with disabilities. It validates
compliance with accessibility standards (WCAG, Section 508) and checks:
- Screen reader compatibility
- Keyboard navigation
- Color contrast and visual elements
- Alternative text for images
- Font size and readability
This document covers essential manual testing concepts and interview questions. Each answer is designed to
provide comprehensive understanding while remaining concise and interview-appropriate.