What is Software Quality?
Quality is conformance to requirements (Producer View); quality is fitness for use (Customer View). What is Software Testing? Software testing is the process of evaluating a software application to identify defects and ensure it meets quality standards.
Importance of Software Testing for Quality - Helps detect defects early, Ensures software meets business and user needs, Improves reliability, security, and maintainability, Reduces post-release failures and maintenance costs. How is Software Testing Done? Manual Testing: Performed without automation tools / Automated Testing: Uses scripts and tools / Static Testing: Reviews and inspections / Dynamic Testing: Executing test cases. Basic Terminology in Software Testing - Bug/Defect: A flaw causing incorrect results / Test Case: Set of conditions
for testing / Test Plan: Strategy for testing / Validation: Ensures software meets user needs / Verification: Ensures software meets requirements. Types of Software Testing - Functional Testing: Unit Testing, Integration Testing, System Testing, User Acceptance Testing (UAT) / Non-Functional Testing: Performance Testing, Security Testing, Usability Testing, Compatibility Testing. Software Testing Principles - Testing shows presence of defects, not absence, Exhaustive testing is impossible, Early testing saves time & cost, Defect clustering, Pesticide paradox, Testing is context-dependent, Absence of errors is a fallacy. Software Testing Process - Requirement Analysis, Test Planning, Test Case Development, Test Execution, Defect Reporting, Test Closure. Common Challenges in Software Testing - Changing requirements, Time constraints, Complexity in large software systems, Defect leakage, Need for skilled testers.
Introduction to Specification-Based Testing Key Characteristics: • Focuses on what the system should do (not how). • Independent of internal structure (Black Box Testing / Behaviour-Based Testing). • Uses functional requirements as a reference. Why Use Specification-Based Testing? • Ensures correctness. • Covers diverse inputs and outputs. • Helps catch missing or unclear requirements early. • Doesn’t require coding knowledge. Types of Specification-Based Test Case Design Techniques • Equivalence partitioning • Boundary value analysis • Decision table. Equivalence partitioning works on certain assumptions: • The system will handle all the test input variations within a partition in the same way. • If one of the input conditions passes, then all other input conditions within the partition will also pass. • If one of
the input conditions fails, then all other input conditions within the partition will also fail. How to do Equivalence Partitioning • Valid partitions are values that the component or system under test should accept. This partition is called a “Valid Equivalence Partition.” • Invalid partitions are values that the component or system under test should reject. This partition is called the “Invalid Equivalence Partition.” Drawbacks of Equivalence Partitioning • Depends on Correct Partitioning • Limited to Stated Requirements • No Insight into Code
Implementation. Drawbacks of Boundary Value Analysis • Boundary value analysis and equivalence partitioning assume that the application will not allow you to enter any other characters or values. This assumption is, however, not valid for all applications. • Boundary value analysis cannot handle situations where the decision depends on more than one input value.
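The two techniques above can be illustrated with a small sketch. Assume a hypothetical input field that accepts ages from 18 to 60 inclusive (the field and its range are illustrative assumptions, not from the notes); the partitions and boundary values then fall out directly:

```python
# Hypothetical requirement: an input field accepts ages 18-60 (inclusive).
# Equivalence partitions: invalid low (<18), valid (18-60), invalid high (>60).
# Boundary values: 17, 18, 60, 61 (one value on each side of each boundary).

def is_valid_age(age: int) -> bool:
    """Accept ages from 18 to 60 inclusive; reject everything else."""
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
assert is_valid_age(30) is True    # valid partition
assert is_valid_age(5) is False    # invalid low partition
assert is_valid_age(75) is False   # invalid high partition

# Boundary value analysis: values at and just outside each boundary.
assert is_valid_age(17) is False
assert is_valid_age(18) is True
assert is_valid_age(60) is True
assert is_valid_age(61) is False
```

Note how the boundary cases (17/18 and 60/61) are exactly where off-by-one defects would hide, which is why BVA complements rather than replaces partitioning.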
Functional Testing • A type of testing that verifies that each function of the software application operates in conformance with the requirement specification. • Each functionality of the system is tested by providing appropriate input, verifying the output and comparing the
actual results with the expected results. Functional Testing • Unit Testing • Smoke Testing • Regression Testing • Sanity Testing • System Testing • User Acceptance Testing. Unit Testing • A process of testing the individual subprograms, subroutines, classes, or procedures in
a program. • Unit testing is a way of managing the combined elements of testing. • Unit testing eases the task of debugging. Smoke Testing • Testing the core functionality of a program. • The term smoke test in technology is broadly used to test product features in a limited
time. • Smoke testing is a subset of all defined test cases that cover the main functionality of a system. • Examples of capabilities tested during the smoke test: ➢ Access to the application ➢ Logging in with a set of users ➢ Main modules of a particular application. Regression
Testing • Regression testing is testing an existing software application to ensure that a change or addition has not caused any errors with existing functionality. • Regression testing re-runs the testing scenarios that were originally scripted. • Regression testing typically requires
an automated testing tool. Sanity Testing • Performed after receiving a software build, with minor changes in code or functionality. • Checks whether the planned functionality is working as expected. • Sanity testing is typically non-scripted. • Sanity testing is a subset of
regression testing. System Testing • System testing compares the entire system or program to its original objectives. • Attempting to demonstrate how the entire system fails to meet its objectives. • Requires a set of measurable objectives for the product. User Acceptance
Testing • Process of comparing the program to its initial requirements and the current needs of the end users. • Usually performed by the customer or end user. • The developer will conduct user tests during the development cycle prior to delivering the finished product. Non-Functional Testing • A type of testing to check non-functional aspects of a software application. • Examples of non-functional aspects of software: ➢ Performance ➢ Usability ➢ Reliability • Explicitly designed to test the readiness of a system as per non-functional parameters
which are never addressed by functional testing. Performance Testing • Designed to test whether the program satisfies its performance objectives. • Measure the performance of each component to identify which components cause the system to perform poorly. • Performance
testing involves quantitative tests done in a laboratory environment. • Can compare the performance of two systems. Load Testing • Test a system with constantly increasing load until the “time to load” reaches its threshold value. • Used to distinguish the performance between two different systems. • Monitor the response time and staying power of an application when it is performing under a heavy load. Stress Testing • Validate an application’s behavior when it is pushed beyond normal or peak load conditions. • Checks
the stability of software when the hardware resources are insufficient. • Determines failures of a system and identifies how the system recovers from the failures. This quality is known as recoverability. Spike Testing • Performed by suddenly increasing the number of users by a very large amount. • The main aim is to determine whether the system will be able to sustain the sudden heavy workload. Endurance Testing • Testing a system with an expected amount of load over a long period of time. • Test cases are executed to check the behavior of a system. • Considers factors such as memory leaks, system failures, or random behavior. Scalability Testing • Testing of a software application to determine its capability of scaling up in terms of any non-functional requirement. • Determines the peak of a system, when it has reached a level which prevents further scaling. Volume Testing • Testing a software application with a large amount of data. • Monitor the performance of the application under varying database volumes. Uses of Performance Testing • Improve user experience. • Gather
metrics useful for tuning the system. • Identify bottlenecks such as database configuration. • Determine if a new release is ready for production. • Provide reporting to business stakeholders regarding performance against expectations. Top Performance Testing Tools•
LoadRunner • Apache JMeter • NeoLoad • Rational Performance Tester • Loadster • QEngine (ManageEngine) • Testing Anywhere • Loadstorm. Compatibility Testing • Test an application to ensure that it is compatible across operating systems, hardware platforms, web
browsers. • Validation of compatibility requirements that have been set at the planning stage of the software. • Validates that the application runs properly across different versions. Types of Compatibility Testing • Backward Compatibility Testing: Verify the behavior of the developed application with the older versions of the application. • Forward Compatibility Testing: Verify the behavior of the developed application with the newer (upcoming) versions of the application. Security Testing • Security testing is the process of testing an application to check whether it meets its specified security objectives. • Security testing test cases can be derived by studying known security problems in similar systems. • Web-based applications often need a higher level of security testing than most applications. Usability Testing • Test the user-friendliness of an application and identify usability defects. • Testing is performed by using a small set of target end users. • Mainly focuses on the user's ease of use of the application, flexibility in handling controls, and the ability of the system to meet its objectives.
Localization Testing• Test whether the software behaves according to the local culture or settings. • Test whether the application has appropriate linguistic and cultural aspects for a particular locality. • Localization testing of a globalized application is the process of identifying
whether all components of the application are designed according to the local culture of target countries and regions.
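The load- and performance-testing ideas described above can be sketched in a few lines. This is a minimal illustration only: `handle_request` is a stand-in for the system under test and the user counts are arbitrary assumptions; in practice you would drive a real endpoint with a tool such as JMeter or LoadRunner and record response times per request.

```python
# Minimal load-testing sketch: drive a (stand-in) request handler with a
# steadily increasing number of concurrent users and measure elapsed time.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    time.sleep(0.01)          # simulate 10 ms of work per request
    return "ok"

def run_load(n_users: int) -> float:
    """Fire n_users concurrent requests; return total elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        results = list(pool.map(handle_request, range(n_users)))
    assert all(r == "ok" for r in results)   # every request must succeed
    return time.perf_counter() - start

# Constantly increase the load and watch how the total time grows,
# as in load testing; pushing n_users far beyond capacity would be
# the starting point for a stress or spike test.
for users in (10, 50, 100):
    print(f"{users:>3} users -> {run_load(users):.3f} s")
```

The same skeleton extends naturally to endurance testing (loop for a long period while watching memory) and spike testing (jump from a small to a very large `n_users` in one step).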
Introduction to STLC STLC is a systematic approach to testing a software application to ensure that it meets the requirements and is free of defects. Follows a series of steps or phases with specific objectives Fundamental part of SDLC Why STLC is Important? ❑ Helps
in finding defects early. ❑ Ensures a systematic approach to testing. ❑ Makes testing measurable and repeatable. ❑ Supports better planning, coverage, and quality control. Key Characteristics of STLC Phased Approach Goal-Oriented Process-Driven Early Defect
Detection Improves Quality Traceability Reusability. Challenges in STLC Unclear or Changing Requirements Time Constraints Lack of Collaboration Environment Issues Tooling and Automation Challenges Frequent Scope Creep or Last-Minute Changes Knowledge
Gaps / Inexperienced Testers.
What is Test-Driven Development? A software development technique which reiterates the importance of testing. Promotes writing software requirements as tests as the initial step in developing code. Initially writes tests and then moves forward with the least amount of code needed to get through the tests. TDD Cycle (Red-Green-Refactor) Here, the “write a failing test” phase denotes that the code is not yet working. The “make the test pass” phase is an indication that everything is working, though not necessarily optimally. The “refactor” phase denotes that the code is being restructured for quality. Benefits of TDD Improve code quality Code is written to pass specific tests Immediate Feedback Finds out immediately if something is broken. Better Design & Maintainability Encourages writing loosely coupled, highly cohesive
code Documentation Through Tests Tests act as live documentation of how the system should behave Confidence to Refactor You can improve or change code structure without fear, because tests will catch regressions. Facilitates Debugging When something breaks, you
already have unit tests that isolate logic. Challenges of TDD Initial Learning Curve Beginners may struggle with writing tests first or designing testable code Slows Down Development at First Writing tests before writing any code consumes time Maintenance Overhead A large
suite of outdated or redundant tests becomes a burden. UI, Legacy, or Non-Deterministic Code TDD works best with deterministic, modular code Over-Testing / Rigid Code Writing too many detailed tests can make refactoring painful Mindset Change TDD requires a
fundamental shift in how developers think about writing code. SOLID Principles for TDD Single Responsibility Principle (SRP) Principle: A class should have one and only one reason to change. TDD Implication: As you write unit tests first, you tend to split responsibilities to
make the code more testable. Open/Closed Principle Principle: Software entities should be open for extension but closed for modification. TDD Implication: TDD encourages you to write code that can evolve via extension - because changing tested code is risky. Liskov
Substitution Principle Principle: Subclasses should be substitutable for their base classes without altering behavior. TDD Implication: TDD helps spot LSP violations early - when a test for a base class fails with a subclass. Interface Segregation Principle Principle: Clients
shouldn’t be forced to depend on interfaces they don’t use. TDD Implication: TDD naturally leads to smaller, focused interfaces, because large ones are hard to test. Dependency Inversion Principle Principle: Depend on abstractions, not on concrete implementations. TDD
Implication: You often start writing tests by mocking dependencies, which encourages depending on interfaces, not classes. TDD Impact on Developer’s Life Reversing the Usual Workflow, Thinking in Small, Testable Units, Letting Tests Drive Design, Immediate vs. Long-Term
Payoff, Team and Organizational Culture.
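One red-green-refactor cycle can be sketched with Python's unittest. The `add` function and its two-case spec are hypothetical, chosen only to show the workflow: the tests exist (and fail) before the implementation does.

```python
# Sketch of one red-green-refactor cycle using Python's unittest.
import unittest

# Step 1 (Red): write the tests first. Running them before `add` exists
# fails with a NameError -- that failure is the "red" state.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negatives(self):
        self.assertEqual(add(-1, -1), -2)

# Step 2 (Green): write the least code needed to make the tests pass.
def add(a, b):
    return a + b

# Step 3 (Refactor): improve names, remove duplication, restructure --
# the passing tests catch any regression introduced while doing so.

if __name__ == "__main__":
    unittest.main(exit=False)   # run the suite without exiting the interpreter
```

Note how the tests double as live documentation of the requirement ("add two numbers, including negatives"), which is the documentation benefit listed above.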
What is Automation? Making an apparatus, a process, or a system operate automatically. What is Test Automation? Test automation is the use of software tools to automatically run and validate test cases, enhancing testing efficiency. What/When to Automate? Regression
Testing Run frequently to ensure that new code hasn’t broken existing functionality (e-com app). Smoke Testing / Sanity Checks Quick checks to see if the basic functionalities work before deeper testing High Volume / Repetitive Tests Saves time and reduces human error
(load testing) Data-Driven Testing Same test logic with multiple input data sets (500 combinations of inputs like names, emails, and contact numbers) Cross-Browser / Cross-Device Testing Tools can quickly test web apps on different browsers/devices Stable Features Automate
tests for features that don’t change often (Login/Logout). What/When Not to Automate? Exploratory Testing Requires human intuition, creativity, and observation (to find visual bugs) Short-Lived Features Not worth it for features that will be removed soon (a holiday sale)
Unstable or Frequently Changing UI Tests will break often, causing more maintenance than value (Promotional page) Usability Testing Needs human feedback on look, feel, and ease of use (New color scheme) Tests That Run Only Once The effort to automate it outweighs
the benefit (One-time data migration) Complex Logic Involved Some things are just too delicate to automate easily (Image comparison, CAPTCHA). Benefits of Test Automation Application-wise: Improved Quality, Improved Accuracy, Enhanced Test Coverage, Early Bug
Detection, Reusable Test Scripts. Principles of Test Automation Tests should improve quality. Tests should reduce the risk of introducing failures. Testing helps to understand the code. Tests must be easy to write. A test suite must be easy to run. A test suite should need
minimal maintenance. Automation Tools Best for Web Automation: Playwright, Selenium, Cypress Best for Mobile Testing: Appium, Katalon Studio Beginner Friendly: Katalon, TestCafe, Cypress Enterprise-Grade Tools: TestComplete, UFT One, Ranorex Open-Source
Leaders: Selenium, Appium, Robot Framework, Playwright. Key Features of Automation Frameworks ❑ Reusability – Common functions and utilities used across tests. ❑ Maintainability – Easier to update tests as the app changes. ❑ Scalability – Supports adding more tests
without breaking. ❑ Consistency – Standardizes test writing, naming, and structure. ❑ Reporting – Provides logs, pass/fail summaries, screenshots, etc. ❑ Integration – Can connect with CI/CD tools, bug trackers. Linear Scripting Framework Advantages: ❑ Coding knowledge
is not required ❑ A quick way to generate test scripts Disadvantages ❑ Hard to maintain as project grows ❑ Lack of reusability Example: Selenium IDE – record browser actions and play them back. Modular Testing Framework Advantages: ❑ Better scalability and easier to
maintain ❑ Can write test scripts independently Disadvantages ❑ Requires more initial effort to develop scripts ❑ Requires coding skills to set up the framework. Data-driven Framework Advantages: ❑ It supports multiple data sets ❑ Modifying the test scripts won’t affect
the test data Disadvantages ❑ Require coding skills ❑ Setting up the framework and test data takes more time Example: Use an Excel or CSV file to test a form with 100 inputs. Keyword-driven Framework Advantages: ❑ No need to be an expert to write test scripts ❑ It is
possible to reuse the code. We can point the different scripts to the same keyword Disadvantages: ❑ Take more time to design ❑ The initial cost is high. Integrating Test Automation with CI/CD What is CI/CD? ❑ Continuous Integration: Automating the integration of code
changes into the main codebase. ❑ Continuous Delivery: Automating the deployment process to ensure new changes are automatically released. Automation in CI/CD: ❑ Integrating automated tests into the CI/CD pipeline to run tests automatically on every code commit.
❑ Benefits: Faster feedback loops, early bug detection, and seamless delivery process. Challenges in Test Automation High Initial Investment Significant time and cost to choose tools, develop frameworks, and train staff Test Maintenance Overhead Changes in UI or
application logic require frequent updates. Unstable Tests (Flaky Tests) Sometimes pass, sometimes fail; cause confusion and reduce trust Partial Test Coverage Not all test cases can be automated Choosing the Right Tool Matching with tech stack, team skill level, and
project needs is difficult Lack of Skilled Resources Requires both testing knowledge and programming skills.
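The data-driven framework described above can be sketched as one test script driven by many data rows. In a real framework the rows would come from an external Excel/CSV file; here the CSV text and the toy `validate_email` function are inlined, hypothetical stand-ins so the example is self-contained.

```python
# Sketch of the data-driven idea: one test script, many data sets.
import csv
import io
import unittest

# Stand-in for an external CSV data file (hypothetical form-validation data).
CSV_DATA = """email,expected
alice@example.com,valid
bob@example.org,valid
not-an-email,invalid
@missing-user.com,invalid
"""

def validate_email(email: str) -> str:
    """Toy validator used as the system under test (not a real RFC check)."""
    user, _, domain = email.partition("@")
    return "valid" if user and "." in domain else "invalid"

class TestEmailForm(unittest.TestCase):
    def test_all_rows(self):
        # The same test logic is re-run for every data row; adding a new
        # case means adding a row, not writing a new test.
        for row in csv.DictReader(io.StringIO(CSV_DATA)):
            with self.subTest(email=row["email"]):
                self.assertEqual(validate_email(row["email"]), row["expected"])

if __name__ == "__main__":
    unittest.main(exit=False)
```

This is also the shape a CI/CD pipeline exercises: every commit re-runs the whole suite automatically, so the test data, not the test code, grows with the application.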
Shift-Left Testing - Integrate testing early in the development lifecycle. Benefits: Faster bug detection, reduced debugging time, lower costs (errors cost 640x more if fixed late vs. 10x in traditional testing). Risks of delay: Missed flaws, project delays, insufficient resources. AI/ML in Testing - Applications: Auto-generate test cases, prioritize tests, predict defects, analyze results. Techniques: NLP (requirements → test cases), predictive analytics (defect hotspots), reinforcement learning (optimized test paths). Benefits: Efficiency, accuracy, cost reduction. Challenges: Data requirements, integration complexity, ethical bias. QAOps - Embed QA continuously into DevOps pipelines. Core Practices: Automated testing in CI/CD, shift-left, production monitoring. Vs. DevOps: QAOps prioritizes quality assurance; DevOps prioritizes deployment speed. Crowdtesting - Leverage global testers via platforms (e.g., Mechanical Turk). Benefits: Scalability, diverse test coverage, real-user feedback. Vs. Outsourcing: Flexible, output-based pricing, 24/7 global workforce. Security Testing (DevSecOps) - Integrate security early (design → development). Methods: Vulnerability scanning, penetration testing, API security. Driver: Cybercrime costs projected to hit $10.5T annually by 2025. IoT Testing - Focus on security, data integrity, performance, scalability for connected devices. Market Growth: Driven by rising IoT adoption. Mobile Test Automation - Automate testing across devices/platforms using cloud labs (e.g., BrowserStack). Advantage: No physical device labs needed; supports scalability. API Test Automation - Critical for microservices architectures. Vs. GUI Testing: Faster, easier to automate, uses JSON/XML. Benefit: Enables high-frequency testing in Agile/DevOps. Accessibility Testing - Ensure compliance with standards (ADA, WCAG 2.1). Methods: Automation (screen readers), crowdtesting (disability perspectives). Focus Areas: Assistive tech, legal requirements, inclusive design.
Stages of STLC (Software Testing Life Cycle) - Requirement Analysis: Understand what to test, Test Planning: Plan how to test, Test Case Development: Write test steps, Test Environment Setup: Get systems ready, Test Execution: Run tests, find bugs, Test Closure: Wrap up and report. Why is it called a Cycle? Testing is rarely a one-way street. Involves: Feedback loops, Rework, Iterations, Continuous improvement. These aspects make it cyclic. STLC vs SDLC SDLC > Develop Software Goal: Build a functioning, high-quality application. Focus Area: Entire development (requirements to deployment). Who Performs It: Developers, Architects, Business Analysts, DevOps, etc. Starts When: At the beginning of the software project. Output/Deliverables: Working software, system architecture, code, documentation. End Result: A product ready for release. STLC > Test Software Goal: Ensure the software works as expected and is bug-free. Focus Area: Only testing-related activities. Who Performs It: Testers / QA team. Starts When: When requirements are ready. Output/Deliverables: Test cases, bug
reports, test summary reports. End Result: Verified and validated software. Test-Driven Development (TDD) vs Traditional Testing - TDD (Test-Driven Development): Tests are written before code implementation, Focuses on small, incremental tests for specific features, Tests are written by developers based on requirements, Encourages high code coverage, especially on critical paths, Provides immediate feedback on code changes, Helps identify bugs early in development. Traditional Testing: Testing is performed after code implementation, Focuses on comprehensive testing of the entire application, Tests are created by QA testers, Code coverage may vary, depending on test cases, Feedback is usually delayed until the testing phase, Bugs may be detected later in the development cycle. Manual vs Automated Testing - Manual Testing: Execution Time: Slower due to sequential execution; high resource usage, Initial Setup: Less effort needed (tests done manually), Reliability: Prone to human error; less precise, Programming: Non-programmable; limited in creating complex tests, Reusability: Often requires new test cases for each function, Reporting: Varies due to manual implementation, Flexibility: More adaptable to changing requirements and scenarios. Automated Testing: Execution Time: Faster with automation tools; less resource consumption, Initial Setup: Requires more effort for scripts and tools, Reliability: High reliability; consistent execution, Programming: Supports complex test scripting to detect hidden defects, Reusability: Test scripts can be reused across software cycles, Reporting: Standardized and consistent tracking
and reporting, Flexibility: Less flexible to unexpected changes (pre-programmed). Test Automation Lifecycle - Stage 1: Deciding the scope of Test Automation Stage 2: Choosing the Right Automation Tool Stage 3: Plan, Design, and Strategy Stage 4: Set-Up Test Environment Stage 5: Test Script & Execution Stage 6: Test Analysis and Reporting.
WCC Measure – Computation Guidelines 1. Identification of the tokens begins after the class declaration. 2. In general, all the operators, keywords (except access flags such as public, static, etc.), strings, identifiers, and numerical values (including zero) are identified as separate tokens. However, there are exceptional cases. 3. All the characters inside a pair of single or double quotes are identified as a single token. 4. Along with the array name, the square brackets of an array are considered as one token. 5. Each comma operator that separates
two program components from one another is identified as a separate token. 6. Brackets are not identified as separate tokens. 7. In a program statement that contains a variable declaration, the variable name is not identified as a token. Hence, only the data type of the variable and the comma operator that separates two variables are identified as tokens. On the other hand, if a program statement contains a variable definition, in addition to the data type of the variable and the comma operator, the variable name is also identified as a token. 8. In a method
declaration or invocation, the round brackets and the method name are identified as one token. However, the components inside the round brackets of user-defined methods are not identified as tokens. Similarly, the components inside the round brackets of the constructors of user-defined classes are also not identified as tokens. 9. In a decisional statement, along with the keyword that defines the decisional type, the round brackets are identified as one token. Thus, if( ), if-else( ), else-if( ), for( ), while( ), do-while( ), and switch( ) are identified as one token. However, the words ‘else’ and ‘do’ are not considered for the complexity calculation. 10. The ‘case :’ and ‘default :’ in a switch statement are identified as separate tokens. 11. The keyword ‘catch’ and the round brackets are identified as one
token. However, the word ‘try’ is not considered for the complexity calculation. 12. The '.' operator that is used to connect classes, fields and/or methods is identified as a separate token. The class, method, or field names which are connected by '.' operators are also identified as separate tokens. 13. The statement terminator ( ; ) is not identified as a token. 14. Manipulators such as endl and "\n" are considered as tokens. 15. The “*” sign used in the declaration of a pointer is not a dereference operator; it is just a notation that creates a pointer. Thus, it is not considered as a token. 16. The keyword ‘return’ is not considered as a token. 17. For a program which does not have a built-in root class, the weight allocation of the Wi attribute begins at 1.
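As a hedged illustration of guidelines 2, 5, 7, and 13, consider counting the tokens of a single hypothetical variable-definition statement (the statement itself is invented for the walkthrough):

```
int total = a + b, count = 0;

Tokens (variable definition, so rule 7 applies):
  int   total   =   a   +   b   ,   count   =   0
Count: 10 tokens
- `total` and `count` are counted because this is a definition with
  initial values, not a bare declaration (rule 7)
- the comma separating the two variables is a token (rule 5)
- zero is counted as a numerical value (rule 2)
- the terminator ';' is not a token (rule 13)
```

Had the statement been the bare declaration `int total, count;`, rule 7 would drop the variable names, leaving only `int` and the comma as tokens.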