3-Day Performance Testing Training
• Theoretical Foundations of Performance Testing
• Advanced JMeter and Practical Performance Testing
• Analyzing, Reporting, CI/CD Integration, and Final Assessment
Day 1 Agenda
• Welcome and Introduction
• Introduction to Performance Testing
• Real-life examples of load-related issues
• Evolution of Infrastructure & Performance Testing
• Performance Testing Types
• Break
• Performance Metrics and KPIs
• Steps to Identify Performance Bottlenecks
• Common Performance Bottlenecks
• Performance Testing in the SDLC
• Common Performance Testing Tools
• Break
• Introduction to JMeter
• Q&A and Day 1 Recap
Welcome and Introduction
• Overview of Training Objectives: Performance testing ensures that
your software can handle a specific workload without crashing or
slowing down. The objective of this course is to teach you how to
simulate real-world usage of software to find performance issues
before they affect actual users.
• Introduction to Trainers and Participants: We’ll introduce
ourselves and understand your current knowledge of performance
testing. This will help us tailor the sessions to your needs.
Introduction to Performance Testing
• What is Performance Testing?: Performance testing is a
type of software testing used to determine how a
system behaves under
various loads. Its primary purpose is to assess the
system's speed, scalability, reliability, and stability.
Performance testing is not focused on finding
functional bugs but on identifying performance
bottlenecks and ensuring that the system meets
specific performance criteria.
Introduction to Performance Testing
• Why is Performance Testing Important?: Think about websites like Amazon
or Netflix. If they become slow during high-traffic periods, users might
leave and never come back. Performance testing is crucial for several
reasons, especially when it comes to delivering high-quality software or
systems. Here are the key reasons why performance testing is important.
• 1. Ensures System Stability and Reliability
• 2. Improves User Experience
• 3. Validates Scalability
• 4. Identifies System Limits
• 5. Prevents Downtime and Outages
• 6. Enhances System Efficiency
• 7. Compliance with SLAs (Service Level Agreements)
Real-life examples of load-related issues
• 1. Amazon Prime Day Outage (2018)
• 2. Ticketmaster (Taylor Swift's Tour, 2022)
• 3. Walmart Black Friday Crash (2017)
• 4. Google Cloud Outage (2020)
• 5. Shopify Outage During Black Friday (2019)
Evolution of Infrastructure & Performance Testing
• Mainframe Era (1960s–1970s)
• Early computer systems, particularly mainframes, had limited resources.
Performance testing in this era was primarily focused on ensuring that these
machines could handle the computational load for specific tasks
• Client-Server Systems (1980s)
• As computing shifted from centralized mainframes to distributed client-server
architectures, performance testing gained more importance.
• Internet Boom (1990s)
• With the rise of the internet, web-based applications had to serve potentially
millions of users at once. E-commerce, online banking, and social media platforms
created new demands for scalability and performance.
Evolution of Infrastructure & Performance Testing
• Agile Development (2000s)
• As agile development practices took hold, the need for frequent performance
testing grew. Continuous integration (CI) practices required automated
performance testing to be embedded into the software development lifecycle
(SDLC).
• Cloud & Mobile Era (2010s)
• With the advent of cloud computing and mobile applications, performance testing
had to adapt to highly dynamic environments.
• Modern-Day (2020s)
• Modern applications, built using microservices and containers (e.g., Docker and
Kubernetes), add another layer of complexity to performance testing.
Types of Performance Testing (Load)
• Load/Capacity Testing: a type of performance
testing that focuses on assessing how a system
behaves under a specific load, usually simulating
expected user traffic and transaction volumes.
The primary goal of load testing is to determine if
a system can handle a given number of users,
transactions, or requests within acceptable
response time limits.
• Key Objectives of Load Testing:
• Determine System Capacity
• Identify Performance Bottlenecks
• Measure Response Time
• Validate Stability and Reliability
Types of Performance Testing (Stress)
• Stress Testing: a type of performance testing designed to
evaluate how a system behaves under extreme or
"stressful" conditions, typically beyond its normal
operational capacity. Unlike load testing, which focuses
on determining how the system performs under expected
traffic, stress testing pushes the system to its breaking
point to observe how it handles excessive load, and more
importantly, how it recovers from failure.
• Key Objectives of Stress Testing:
• Identify Breaking Points
• Evaluate System Recovery
• Understand System Behavior Under Extreme Load
• Test Robustness and Stability
Types of Performance Testing (Spike)
• Spike Testing: a type of performance testing that
focuses on how a system reacts to sudden, extreme
surges (spikes) in load. Unlike stress testing, where
load increases gradually until the system reaches its
limit, spike testing introduces abrupt and often
unpredictable spikes in load to assess the system's
stability and recovery capabilities.
• Key Objectives of Spike Testing:
• Evaluate System Behavior During Sudden Load
Increases
• Assess System Stability
• Test Recovery Ability
• Identify Weaknesses
Types of Performance Testing (Endurance)
• Endurance Testing: also known as soak testing, is a
type of performance testing that evaluates how a
system performs under a sustained load over an
extended period. The main objective is to ensure that
the system can handle prolonged usage without
performance degradation, memory leaks, or other
issues that may only become apparent after
continuous operation.
• Key Objectives of Endurance Testing:
• Detect Memory Leaks
• Evaluate Long-Term Stability
• Monitor System Performance Over Time
• Test Resource Utilization
• Ensure Durability Under Realistic Conditions
Types of Performance Testing (Volume)
• Volume Testing: a type of performance testing that evaluates a
system's ability to handle large volumes of data. The primary
objective is to assess how the system performs when faced
with a significant amount of data in databases, file systems,
or network traffic, and to ensure it can manage, process, and
retrieve that data without performance degradation.
• Key Objectives of Volume Testing:
• Assess System Behavior with Large Data Volumes
• Identify Performance Bottlenecks
• Test Data Processing Capacity
• Evaluate Storage and Database Performance
Types of Performance Testing (Scalability)
• Scalability Testing: a type of performance testing that focuses on
evaluating a system's ability to scale in terms of performance,
capacity, and efficiency as the workload increases or decreases.
The primary goal is to determine whether the system can handle
increasing amounts of data, users, or transactions without
experiencing significant degradation in performance. Scalability
testing helps ensure that a system can grow (or shrink) to meet
demand in a predictable manner.
• Key Objectives of Scalability Testing:
• Determine Maximum Capacity
• Evaluate Performance at Scale
• Identify Bottlenecks
• Test Horizontal and Vertical Scaling
• Ensure Efficient Use of Resources
BREAK
Performance Metrics and KPIs
• Response Time is a common Key Performance Indicator (KPI) used
to measure how long a system, service, or
application takes to respond to a request. It's often used in
various fields like IT, customer service, and project management.
In the context of software or IT systems, it generally refers to the
time taken from when a request is made to when the response is
received.
• Example:
• Imagine you are using an e-commerce website to search for a
product.
• Request: You type in the product's name and press "Search."
• Response Time: The time it takes for the website to show you the
list of results.
If the search results appear in 2 seconds, the response time would be
2 seconds.
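A minimal Java sketch of how a load tool measures this KPI, assuming Java 11+ and a hypothetical search endpoint (replace the URL with your system under test):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResponseTimeDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical search request; point this at your own endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/search?q=laptop"))
                .GET()
                .build();

        // Response time = request sent -> full response received.
        long start = System.nanoTime();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.printf("Status %d, response time: %d ms%n",
                response.statusCode(), elapsedMs);
    }
}
```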
Performance Metrics and KPIs
• Throughput is another important Key Performance Indicator (KPI),
often used to measure how much data or how
many requests a system can handle within a specific period. It
refers to the number of transactions or operations that a system
can process in a given amount of time, often expressed in terms
like requests per second, transactions per second, or bits per
second.
• Example:
• Let's say you are running a website where users can log in.
• Throughput: If your website can handle 100 login requests per
second, the throughput of the system is 100 requests per second
(RPS).
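The same arithmetic in a tiny Java sketch, using hypothetical test results that match the scenario above:

```java
public class ThroughputDemo {
    public static void main(String[] args) {
        // Hypothetical result: 6,000 login requests completed in 60 seconds.
        long completedRequests = 6_000;
        double elapsedSeconds = 60.0;

        // Throughput = completed operations / elapsed time.
        double throughputRps = completedRequests / elapsedSeconds;
        System.out.printf("Throughput: %.1f requests/second%n", throughputRps); // 100.0 RPS
    }
}
```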
Performance Metrics and KPIs
• Hits per Second is a performance metric used in web or server testing
to measure the number of "hits" (or requests) made to a server per
second. A "hit" refers to a request made to the server, which could be
for any type of resource, such as an image, CSS file, HTML page, or an
API request. This KPI is crucial for understanding how often a server is
being accessed and how well it can handle multiple requests
simultaneously.
• Example:
• Let’s say you are testing a website that serves multiple resources
(HTML, images, CSS, etc.) for every page load.
• When a user visits the homepage, their browser makes 10 individual
requests (hits) to load all the resources (e.g., images, scripts, stylesheets).
• If there are 50 users accessing the homepage every second, the server
would need to handle 500 hits per second (50 users * 10 requests per
user).
• In this scenario, Hits per Second = 500.
Performance Metrics and KPIs
• Resource Utilization refers to the measurement of how much of a system's
available resources (such as CPU, memory, disk, or network bandwidth) are
being used during the operation of an application or process. It's a key
KPI because it helps determine how efficiently a system is
using its hardware and software resources and whether there is potential
for optimization or overuse.
• Example:
• Imagine you're running a web application on a server.
• CPU Utilization: The percentage of the CPU being used by the application. If the
application uses 40% of the CPU during peak load, the CPU utilization is 40%.
• Memory Utilization: How much RAM is being consumed. If your web application
uses 3 GB out of 8 GB of available memory, the memory utilization is 37.5%.
• Disk Utilization: The amount of disk input/output (I/O) activity. If the application
writes 100 MB of data per second to disk, that can be a measure of disk
utilization.
• Network Utilization: The bandwidth usage for data transfer. If the server has a
bandwidth limit of 100 Mbps, and the application uses 70 Mbps, the network
utilization is 70%.
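A small Java sketch of how a test harness might sample its own utilization. The cast to com.sun.management.OperatingSystemMXBean is a JDK-specific extension (an assumption about your JVM), and the first CPU reading may not be available yet:

```java
import java.lang.management.ManagementFactory;

public class ResourceUtilizationDemo {
    public static void main(String[] args) {
        // Memory utilization of this JVM: used heap as a share of max heap.
        Runtime rt = Runtime.getRuntime();
        long usedBytes = rt.totalMemory() - rt.freeMemory();
        double memoryUtilization = 100.0 * usedBytes / rt.maxMemory();
        System.out.printf("JVM memory utilization: %.1f%%%n", memoryUtilization);

        // Process CPU load via a JDK-specific bean (HotSpot JVMs).
        var osBean = (com.sun.management.OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        double cpuLoad = osBean.getProcessCpuLoad(); // 0.0-1.0; negative if not yet available
        System.out.printf("Process CPU utilization: %.1f%%%n", cpuLoad * 100);
    }
}
```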
Performance Metrics and KPIs
Identifying Bottlenecks in a system refers to finding the points where
performance is hindered, causing delays or failures when processing a
high volume of tasks. Bottlenecks are often caused by limited resources
(like CPU, memory, or disk), slow network speeds, or inefficient code,
which prevent the system from achieving its maximum potential.
Example:
• Let’s say you are testing a web application that serves 1,000 users.
• During a load test, you notice that when 500 users log in simultaneously,
the response time increases significantly. Monitoring shows the CPU
utilization is 95%, while other resources (memory, disk, network) are
within normal limits. This indicates a CPU bottleneck.
• To resolve it, you might optimize your code to reduce CPU usage or
upgrade the server's CPU.
Steps to Identify Performance Bottlenecks
1. Monitor Key Performance Indicators (KPIs): Start by tracking
important performance metrics like Response Time, Throughput,
and Resource Utilization (a small aggregation sketch follows
these steps).
2. Analyze System Logs: Review system and application logs for
errors, slowdowns, or unusual patterns. Logs can help pinpoint
specific areas where delays occur, such as slow database
queries or timeouts.
3. Use Performance Monitoring Tools: Tools like New Relic,
Dynatrace, or CloudWatch can give detailed insights into
resource utilization and system performance. These tools often
have dashboards that help visually identify bottlenecks.
4. Isolate and Test Individual Components: If possible, test
individual components separately (e.g., database, API, or front-
end). This can help you determine whether a specific part of
the system is the bottleneck.
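As a minimal illustration of step 1, this sketch aggregates hypothetical response-time samples into average, 95th percentile (nearest-rank), and maximum; a p95 far above the average is a common first hint of an intermittent bottleneck:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class KpiAggregator {
    public static void main(String[] args) {
        // Hypothetical response-time samples (ms) from a load test.
        List<Long> samplesMs = new ArrayList<>(List.of(120L, 135L, 110L, 900L, 140L, 125L));
        Collections.sort(samplesMs);

        double avg = samplesMs.stream().mapToLong(Long::longValue).average().orElse(0);
        long p95 = samplesMs.get((int) Math.ceil(samplesMs.size() * 0.95) - 1); // nearest-rank
        long max = samplesMs.get(samplesMs.size() - 1);

        System.out.printf("avg=%.1f ms, p95=%d ms, max=%d ms%n", avg, p95, max);
    }
}
```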
Common Performance Bottlenecks
1. CPU Bottleneck: If CPU utilization is consistently high (above 85%-90%)
and the system is slow, it could indicate a CPU bottleneck. This may
happen if the application is performing complex computations or if
there are too many concurrent users.
2. Memory Bottleneck: When memory usage is too high, the system
might start using disk swap space, slowing it down. This could be due
to memory leaks or excessive data caching (a minimal leak sketch
follows this list).
3. Disk I/O Bottleneck: If the system is reading/writing data to disk too
often or too slowly, this can slow down the overall performance,
especially in data-intensive applications.
4. Network Bottleneck: Slow network response times or high bandwidth
usage can indicate a network bottleneck, which occurs when data
takes too long to transfer between systems.
5. Database Bottleneck: If database queries are taking too long, the
system might be waiting on the database to return results, causing a
bottleneck.
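To make the memory bottleneck (item 2) tangible, here is a deliberately broken Java sketch of the classic leak pattern that endurance tests expose. Run it only in a throwaway JVM; it will eventually throw OutOfMemoryError:

```java
import java.util.ArrayList;
import java.util.List;

public class MemoryLeakDemo {
    // Classic leak: a static collection that only ever grows.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void main(String[] args) {
        // Each simulated "request" caches 1 MB and never evicts it. Under
        // sustained load the heap fills, GC pauses grow, and the JVM dies.
        while (true) {
            CACHE.add(new byte[1024 * 1024]);
            System.out.println("Cached " + CACHE.size() + " MB");
        }
    }
}
```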
Performance Testing in the SDLC
• Integration into the Software Development Lifecycle (SDLC): Performance
testing shouldn’t be done at the end of development but integrated into
the SDLC. We’ll explain how to include performance tests at different
stages (e.g., after coding, during integration).
• CI/CD Pipelines: In modern development, Continuous Integration (CI) and
Continuous Deployment (CD) ensure that any new code is tested
automatically. You’ll learn how to set up performance tests that run
automatically whenever code is updated (a minimal sketch follows this
slide).
• Agile Challenges: In agile development (where work is done in small, fast-
paced sprints), performance testing can be tricky. We’ll discuss how to
overcome these challenges.
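As a minimal sketch of the CI/CD idea, a build step can launch JMeter in non-GUI mode and fail the pipeline on a non-zero exit code. The Java wrapper below is illustrative: it assumes the jmeter binary is on the PATH, and the test-plan file name is hypothetical. Many teams use a shell step or a Maven/Gradle plugin instead:

```java
import java.io.IOException;

public class RunJMeterInCi {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Launch JMeter headless, as a CI job step might.
        ProcessBuilder pb = new ProcessBuilder(
                "jmeter", "-n",         // -n: non-GUI mode
                "-t", "login-test.jmx", // -t: test plan to execute (hypothetical file)
                "-l", "results.jtl");   // -l: results file for later analysis
        pb.inheritIO();                 // stream JMeter's output into the build log
        int exitCode = pb.start().waitFor();
        System.exit(exitCode);          // non-zero exit fails the pipeline stage
    }
}
```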
Performance Testing in Agile vs. Waterfall
Aspect | Waterfall | Agile
Timing | At the end of development | Integrated throughout sprints
Testing Frequency | Once or twice before deployment | Ongoing, frequent testing after each sprint
Scope | Entire system tested at once | Incremental, smaller scopes per feature
Test Planning | Rigid, predefined | Flexible, adaptive
Collaboration | Minimal, isolated from developers | High, constant feedback loops
Tooling & Automation | Less emphasis on automation | Strong focus on automation and integration
Cost & Risk of Fixing Issues | High, issues discovered late | Lower, issues found early
Performance Testing Challenges in Agile
Challenge | Solution
Limited Time for Testing | Shift-left testing, smaller tests, test parallelization
Integration with CI/CD | Automate performance tests, run tests in stages, use continuous monitoring
Inconsistent Test Environments | Use production-like environments, cloud-based testing, service virtualization
Changing Requirements | Collaborate, modular test design, set performance goals early
Balancing Functional & Performance | Combine tests, include performance in Definition of Done, schedule testing
Realistic Load Testing | Create realistic scenarios, cloud-based load testing, varying load tests
Resource Constraints | Cross-train team, use automated tools, consider outsourcing
BREAK
Common Performance Testing Tools
Tool | Type | Protocols Supported | Scripting Language | Real-time Monitoring | CI/CD Integration | Scalability | Cost
Apache JMeter | Open-source | HTTP, FTP, JDBC, Web services, JMS, LDAP, TCP | GUI-based, supports Java/Groovy scripts | Yes | Yes (Jenkins, etc.) | Distributed testing support | Free (Open-source)
LoadRunner | Enterprise | HTTP/HTTPS, FTP, WebSocket, SAP, Citrix, Oracle, PeopleSoft | C-based scripting | Yes | Yes | Enterprise-grade, supports millions of users | Commercial (Expensive)
Gatling | Open-source | HTTP, WebSocket, JMS, MQTT | Scala | Yes | Yes | High concurrency, highly scalable | Free (Open-source)
k6 | Open-source | HTTP/HTTPS, WebSocket, gRPC | JavaScript | Yes | Yes | Lightweight, cloud or local execution | Free (Open-source), paid cloud version available
BlazeMeter | Cloud-based | JMeter, Gatling, Selenium integration | Supports JMeter, Gatling scripts | Yes (via cloud) | Yes | Simulates thousands of users | Paid (Cloud-based)
NeoLoad | Enterprise | HTTP, HTTPS, FTP, SAP, Citrix, Oracle | JavaScript, Groovy | Yes | Yes | Supports large-scale distributed load testing | Commercial (Expensive)
Introduction to JMeter
• Overview of JMeter: This is one of the most widely used tools for
performance testing. We’ll introduce its capabilities and features.
• Installation and Setup: We’ll guide you step-by-step through the installation
and basic setup so you’re ready to start building your tests (download
JMeter from the official Apache site).
• JMeter Interface: We’ll explore the JMeter interface, showing you how to
create and run simple tests, so you get comfortable with the tool.
Basic JMeter Components
• Test Plan
The top-level container for a JMeter script that defines everything about
the performance test. It includes all the other components and settings
for the test execution.
• Thread Group
Defines the number of users (threads), the ramp-up period (time taken to
start all users), and the loop count (number of times the test will run). It's
the core component for simulating user load (see the plain-Java sketch
after this list).
• Samplers
These are used to send different types of requests to the server (e.g.,
HTTP Request, FTP Request, JDBC Request). Samplers define what action
is performed.
• Listeners
Collect and present test results. Examples include View Results Tree,
Aggregate Report, and Summary Report. Listeners are essential for
analyzing and visualizing performance data.
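To ground the Thread Group idea, here is a plain-Java sketch (not JMeter's API) of what its three main settings mean in practice; the target URL is a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MiniThreadGroup {
    static final int USERS = 5;           // number of threads (virtual users)
    static final int RAMP_UP_SECONDS = 5; // time taken to start all users
    static final int LOOPS = 2;           // number of times each user runs the test

    public static void main(String[] args) throws InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        long startDelayMs = (RAMP_UP_SECONDS * 1000L) / USERS;

        for (int u = 0; u < USERS; u++) {
            final int user = u;
            new Thread(() -> {
                for (int i = 0; i < LOOPS; i++) {
                    try {
                        HttpRequest req = HttpRequest.newBuilder()
                                .uri(URI.create("https://example.com/")) // placeholder target
                                .GET().build();
                        long t0 = System.nanoTime();
                        HttpResponse<Void> resp =
                                client.send(req, HttpResponse.BodyHandlers.discarding());
                        long ms = (System.nanoTime() - t0) / 1_000_000;
                        System.out.printf("user %d loop %d: status %d (%d ms)%n",
                                user, i, resp.statusCode(), ms);
                    } catch (Exception e) {
                        System.err.println("user " + user + " failed: " + e.getMessage());
                    }
                }
            }).start();
            Thread.sleep(startDelayMs); // stagger starts across the ramp-up window
        }
    }
}
```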
Basic JMeter Components
• Timers: Introduce pauses or delays between requests to simulate real-world user
interactions. Examples include Constant Timer, Gaussian Random Timer, etc.
• Controllers: Logic Controllers: Control the flow of execution. Examples include If
Controller, Loop Controller, Transaction Controller, etc.
• Assertions: Used to verify that the server response meets the expectations.
Common assertions include Response Assertion (checks the content), Duration
Assertion (checks response time), etc.
• Configuration Elements: Set up defaults and variables for the test. Examples include
HTTP Request Defaults, CSV Data Set Config (for parameterization), and User-
Defined Variables.
• Pre-Processors: These are actions that are executed before the Sampler request.
Common examples include HTTP URL Re-writing Modifier and User Parameters.
• Post-Processors: Actions executed after a Sampler request. Examples include
Regular Expression Extractor and JSON Extractor, which are used to capture
and manipulate responses (illustrated in the sketch after this list).
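The plain-Java sketch below mirrors what a Response Assertion, a Duration Assertion, and a Regular Expression Extractor each do with a sampler's result; the response body and timing are hypothetical:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AssertionAndExtractorDemo {
    public static void main(String[] args) {
        // Hypothetical response body and measured time from a login sampler.
        String body = "{\"status\":\"ok\",\"token\":\"abc123\"}";
        long responseTimeMs = 180;

        // Response Assertion: the body must contain the expected content.
        if (!body.contains("\"status\":\"ok\"")) {
            throw new AssertionError("Response assertion failed: status not ok");
        }

        // Duration Assertion: the response time must stay under a threshold.
        long maxMs = 500;
        if (responseTimeMs > maxMs) {
            throw new AssertionError(
                    "Duration assertion failed: " + responseTimeMs + " ms > " + maxMs + " ms");
        }

        // Regular Expression Extractor: capture a value for a later request.
        Matcher m = Pattern.compile("\"token\":\"([^\"]+)\"").matcher(body);
        String token = m.find() ? m.group(1) : null;
        System.out.println("Extracted token: " + token);
    }
}
```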
Recap
• Introduction to Performance Testing: Basics of performance testing and its importance for system stability
and scalability.
• Real-life Examples of Load-related Issues: Case studies of system failures due to poor load testing.
• Evolution of Infrastructure & Performance Testing: How cloud, microservices, and modern infrastructure have
changed performance testing.
• Performance Testing Types: Overview of load, stress, spike, endurance, volume, and scalability testing.
• Performance Metrics and KPIs: Key performance indicators like response time, throughput, hits per second, and resource utilization.
• Steps to Identify Performance Bottlenecks: Process for diagnosing and identifying bottlenecks in system
performance.
• Common Performance Bottlenecks: Typical issues: CPU overload, memory leaks, disk I/O, network latency.
• Performance Testing in the SDLC: Importance of integrating performance testing into the software
development lifecycle.
• Common Performance Testing Tools: Overview of tools like JMeter, LoadRunner, Gatling…etc.
• Introduction to JMeter: Basic JMeter setup, creating test plans, and GUI walkthrough.