Performance Testing Process: from Requirements to Optimization
In this article I share my Miro performance testing map and walk through each step, from requirements to optimization, with practical tips from my experience.
📥 Download the map: PDF • Image
Introduction
This article is built around the performance testing process map I created in Miro. The map shows the main building blocks and how they connect. In the text I go through each block and explain what it means in practice. The goal is not just to look at the diagram but to understand it and apply it in real projects.
Contents
- 1. Requirements
- 2. Statistics & Scenarios
- 3. Environment & Monitoring
- 4. Testing Cycle & Analysis
- Step-by-Step Checklist
- Conclusion
1. Requirements
In the map this is the red block. Everything starts with defining goals. Without them, tests turn into random runs without real meaning.
- Define the response time: how many seconds users are willing to wait.
- Set a target for concurrent users or RPS (requests per second).
- Establish limits: acceptable error rate, CPU and memory usage.
- Align with the business so the whole team shares the same understanding.
💡 Tip: create a table with "acceptable / critical" values for each metric.
📌 Outcome: a list of goals and criteria to compare against.
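To make the tip concrete, here is a minimal sketch of such an "acceptable / critical" table expressed in code; the metric names and numbers are illustrative placeholders, not real project SLAs.

```python
# A sketch of an "acceptable / critical" threshold table. The metrics and
# numbers are illustrative placeholders, not real project SLAs.
THRESHOLDS = {
    #  metric                (acceptable, critical)
    "p95_response_time_s":   (1.0, 3.0),
    "error_rate_pct":        (0.5, 2.0),
    "cpu_usage_pct":         (70, 90),
    "memory_usage_pct":      (75, 90),
}

def classify(metric: str, value: float) -> str:
    """Return 'ok', 'warning', or 'critical' for a measured value."""
    acceptable, critical = THRESHOLDS[metric]
    if value <= acceptable:
        return "ok"
    return "warning" if value <= critical else "critical"

print(classify("p95_response_time_s", 1.4))  # -> warning
```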
2. Statistics & Scenarios
These are the orange blocks on the map. Here we gather data about how the system is actually used.
- Collect logs, APM reports, and analytics from tools like Google Analytics or Kibana.
- Identify the most common user journeys and the heaviest operations.
- Build a mix: percentage distribution of scenarios. For example: login 10%, catalog view 50%, order 20% (see the Locust sketch at the end of this section).
- Prepare test data with enough variety to avoid the "cache effect."
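To keep requests varied enough that caches do not mask real latency, the test data can be generated up front. Below is a minimal sketch; the file name, fields, and volumes are made-up examples.

```python
# A sketch of generating varied test data so repeated identical requests
# don't just hit a warm cache. File name, fields, and volumes are examples.
import csv
import random
import uuid

with open("test_users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["username", "search_term", "product_id"])
    for _ in range(10_000):
        writer.writerow([
            f"user_{uuid.uuid4().hex[:8]}",                       # unique login per row
            random.choice(["laptop", "phone", "desk", "chair"]),  # varied searches
            random.randint(1, 50_000),                            # spread over the catalog
        ])
```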
💡 Tip: make sure the scenarios cover both business-critical and frequently used functions.
📌 Outcome: a set of scenarios and their share in overall traffic.
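One way to turn the scenario mix into an executable profile is with weighted tasks in the load tool. Below is a minimal sketch using Locust (my tooling assumption; the host, endpoint paths, and payloads are hypothetical), with task weights roughly matching the percentages above and the remainder grouped as other browsing.

```python
# A sketch of the scenario mix as Locust task weights. The host, endpoints,
# and payloads are hypothetical placeholders, not a real application.
from locust import HttpUser, task, between

class ShopUser(HttpUser):
    host = "https://test-env.example.com"   # hypothetical test environment
    wait_time = between(1, 5)               # think time between actions

    @task(10)   # ~10% of traffic
    def login(self):
        self.client.post("/login", json={"user": "demo", "password": "demo"})

    @task(50)   # ~50% of traffic
    def view_catalog(self):
        self.client.get("/catalog")

    @task(20)   # ~20% of traffic
    def place_order(self):
        self.client.post("/orders", json={"item_id": 42, "qty": 1})

    @task(20)   # the remaining traffic: other journeys
    def browse_home(self):
        self.client.get("/")
```

Locust picks tasks in proportion to their weights, so the 10:50:20:20 ratio reproduces the mix without hard-coding exact percentages.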
3. Environment & Monitoring
These are the green blocks on the map. The test environment should be as close to production as possible.
- Infrastructure: servers, containers, and databases with the same configurations as in production.
- Application: the same versions of services and dependencies.
- Load generators: distributed, so they don't become the bottleneck themselves.
- Monitoring: metrics such as CPU, memory, disk, network, database, and cache, taken from the application and infrastructure.
Response time, throughput, and errors are recorded from the performance test results.
💡 Tip: add tags to each run (for example, date and build number) so you can quickly find metrics later in APM and logs.
📌 Outcome: a stable environment and a complete dataset for analysis.
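As an illustration of the tagging tip, here is a small sketch that stamps every request of a run with a unique run identifier; the header name, tag format, and URL are assumptions, not a standard convention.

```python
# A sketch of tagging every request in a run so it can be traced in APM and
# logs later. The header name, tag format, and URL are assumptions.
import datetime
import requests

BUILD_NUMBER = "1234"            # e.g. taken from the CI environment
RUN_TAG = f"perf-{datetime.date.today():%Y%m%d}-build{BUILD_NUMBER}"

session = requests.Session()
session.headers.update({"X-Test-Run-Id": RUN_TAG})

# Every request made through this session now carries the run tag,
# so it can be filtered in access logs or APM traces by that value.
response = session.get("https://test-env.example.com/health")
print(RUN_TAG, response.status_code)
```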
4. Testing Cycle & Analysis
These are the purple blocks on the map. Here we run different types of tests and analyze them right afterwards.
- Smoke – make sure scripts and the environment work correctly.
- Baseline – a reference point for comparison, under minimal stable load.
- Load – check SLA compliance under real-world load (a staged ramp-up sketch follows this list).
- Capacity/Stress – find the system's breaking point.
- Scalability – verify linear growth when resources are increased.
- Soak – a long run to uncover memory leaks and performance degradation.
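The early part of this progression can be scripted as one staged run. Below is a sketch of a smoke, baseline, and load profile expressed as a custom Locust load shape (again an assumption; the user counts and durations are illustrative).

```python
# A sketch of a staged profile (smoke -> baseline -> load) as a custom
# Locust load shape. User counts and durations are illustrative only; this
# class lives in the same locustfile as a user class (e.g. ShopUser above).
from locust import LoadTestShape

class StagedShape(LoadTestShape):
    # (stage end in seconds, target users, spawn rate per second)
    stages = [
        (120,  1,   1),    # smoke: one user, verify scripts and environment
        (420,  10,  2),    # baseline: minimal stable load as a reference point
        (1620, 100, 10),   # load: target concurrency to check the SLA
    ]

    def tick(self):
        run_time = self.get_run_time()
        for end_time, users, spawn_rate in self.stages:
            if run_time < end_time:
                return users, spawn_rate
        return None  # stop the test after the last stage
```

Stress, scalability, and soak runs would reuse the same idea with higher user counts, more load-generator nodes, or much longer durations.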
After each run:
- Compare the results with the baseline.
- Look at trends: is response time increasing, are errors appearing?
- Pinpoint bottlenecks: in code, database, or configuration.
- Separate fixes into quick wins (e.g., add an index) and long-term improvements (architecture changes, logic refactoring).
💡 Tip: write a short 5–7 point report after each run; it helps the team react faster.
📌 Outcome: a series of test runs and a list of improvements for the team.
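To make the comparison against the baseline concrete, here is a minimal sketch that flags metrics degrading beyond a tolerance. The file names, field names, and the 10% threshold are assumptions about how results might be stored and judged.

```python
# A sketch of the post-run check against the baseline. It assumes "higher is
# worse" metrics (latency, error rate); throughput would need the opposite check.
import json

TOLERANCE = 0.10  # flag anything more than 10% worse than the baseline

def load_metrics(path: str) -> dict:
    with open(path) as f:
        return json.load(f)   # e.g. {"p95_ms": 850, "error_rate_pct": 0.4}

baseline = load_metrics("baseline.json")
current = load_metrics("current_run.json")

for metric, base_value in baseline.items():
    value = current.get(metric)
    if value is None or base_value == 0:
        continue
    change = (value - base_value) / base_value
    if change > TOLERANCE:
        print(f"REGRESSION {metric}: {base_value} -> {value} (+{change:.0%})")
```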
Step-by-Step Checklist
Requirements
- Gathered SLA and business expectations.
- Created a table with metrics (acceptable / critical).
Statistics & Scenarios
- Logs, APM data, and analytics reviewed.
- Most common and business-critical actions identified.
- Scenario mix with percentages built.
- Unique test data prepared.
Environment
- Infrastructure deployed and its configuration frozen.
- Application versions aligned with production.
- Load generators distributed and verified.
Monitoring
- Dashboards set up in APM/monitoring tools.
- CPU, memory, network, DB, and cache metrics visible.
- Response time, throughput, and errors collected from test results.
- Tags added for each run.
Testing Cycle & Analysis
- All runs executed: Smoke → Baseline → Load → Stress → Scalability → Soak.
- Parameters of each test captured.
- Results compared against baseline.
- Bottlenecks identified, quick wins and long-term fixes listed.
Optimization
- Report with clear recommendations created.
- Task owners and timelines assigned.
- Criteria for re-running tests defined.
Conclusion
The Miro map gives a big-picture overview, and the steps in this article fill it with practical meaning. Performance testing is not a one-time run but a continuous cycle: tests → analysis → improvements. Each run makes the system stronger and helps you see where it could break tomorrow if you don't fix it today.
📥 The full version of the map is available as PDF and image.