Performance Starts with Requirements

21.10.2025

Why NFRs aren't just bureaucracy but the foundation of every performance process. Here's how to define real performance criteria, who should set them, and what to do when a client says: "Just make it faster."

📋 Introduction

Most performance testers have faced this situation at least once: after a detailed report and neat graphs, the client asks the same question:

“So… is it good or bad?”

And that’s when it becomes obvious: there’s no baseline. The system might handle hundreds of users, but if no one defined what success looks like, the results mean nothing. One engineer sees stability, another sees degradation, and the business just sees pretty charts. In the end, the performance engineer becomes a translator between numbers and the business.

Without clear criteria, the team doesn’t know what to aim for, and the client doesn’t understand what they’re paying for. That’s why defining NFRs isn’t bureaucracy. It’s a key step in building a mature engineering culture.

🧠 This article is based on a discussion and shared reflections with Ivan Zarubin, Nadzeya Tamela and Sergiy Rudenko.


🔍 What Are Performance Requirements and Why They Matter

Performance requirements are the shared language of engineers, business, and users. They’re a set of measurable characteristics that define the conditions under which the system can be considered to be performing well.

When there are no criteria, the loudest “it’s fine” wins.

NFRs (non-functional requirements):

  • define the boundaries of normal operation;
  • form the basis for meaningful analysis;
  • capture the business and user expectations.

Good NFRs answer not just how fast, but also how stable and how predictable performance should be. For example: “checkout completes within 2 seconds at the 95th percentile under 500 concurrent users, with an error rate below 0.1%.” Without them, it’s easy to fall into the illusion of success: the app flies on QA but dies with the first thousand real users.


🧭 How to Define Requirements: From Observation to Metrics

Performance starts not with JMeter but with understanding the business.

  1. Who are the users?
  2. What do they actually do?
  3. Which operations are critical for the business?

Only then do you move to numbers.

Good requirements aren’t invented. They’re found in the data.

📊 Sources for defining NFRs:

  • Production logs and monitoring — real statistics on scenarios.
  • User behavior observation — where and when peak loads occur.
  • Business context — seasonality, peak hours, client types.
  • Analytics and expert knowledge — predictions for new systems.
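
To illustrate the first two sources, here is a minimal sketch that derives peak requests per minute and per-endpoint 95th-percentile latency from an access log. The log format, file name, and the `load_profile` helper are hypothetical, chosen only for the example; the point is that the raw material for NFRs already sits in production data.

```python
import re
from collections import defaultdict
from statistics import quantiles

# Hypothetical log line format (an assumption, not any real system's log):
# 2025-10-21T10:15:03 GET /checkout 412ms
LINE = re.compile(r"^(\S+T\d{2}:\d{2}):\d{2} (\S+) (\S+) (\d+)ms$")

def load_profile(log_path: str):
    """Derive peak requests-per-minute and per-endpoint p95 latency from an access log."""
    per_minute = defaultdict(int)    # minute -> request count (load profile)
    latencies = defaultdict(list)    # endpoint -> response times in ms

    with open(log_path) as log:
        for line in log:
            match = LINE.match(line.strip())
            if not match:
                continue
            minute, _method, endpoint, ms = match.groups()
            per_minute[minute] += 1
            latencies[endpoint].append(int(ms))

    peak_rpm = max(per_minute.values(), default=0)
    # quantiles(..., n=20)[18] is the 95th percentile; skip endpoints with too few samples
    p95 = {ep: quantiles(ms, n=20)[18] for ep, ms in latencies.items() if len(ms) >= 20}
    return peak_rpm, p95

if __name__ == "__main__":
    peak, p95_by_endpoint = load_profile("access.log")
    print(f"Peak load: {peak} requests/minute")
    for endpoint, p95_ms in sorted(p95_by_endpoint.items()):
        print(f"{endpoint}: p95 = {p95_ms:.0f} ms")
```

Numbers pulled this way give the discussion a starting point: peak throughput for the load model and per-endpoint latency baselines to negotiate into NFRs.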

If it’s a local project, two seconds of response time might be fine. If it’s a global one, every millisecond counts.

For large companies, every 100 ms of delay can mean a 1% revenue loss.

NFRs are not carved in stone. They’re living documents that need to evolve as the product grows, infrastructure changes, or new scenarios appear.


🧩 Who’s Responsible for NFRs

In mature teams, requirements are defined collaboratively: architects, developers, QA, and business work together. But in reality, it’s often the performance engineer who writes them. Simply because no one else does.

“I just want it to be better” isn’t a requirement. It’s wishful thinking.

A good engineer doesn’t wait for perfect instructions. They propose metrics, analyze data, and formalize expectations. That’s how testing becomes an engineering practice, not chaos.

Sometimes NFRs appear after an incident: something crashed, the business lost money, and only then does the team define a rule like “let’s make sure it never happens again.” Mature teams do the opposite: they set boundaries before things go wrong.


⚠️ Common Mistakes

  1. “Make it faster” without numbers. That’s not a goal.
  2. ⚖️ Using averages. The mean response time means nothing. Look at percentiles (see the sketch after this list).
  3. 🔍 No prioritization. Not all scenarios are equal. Checkout matters more than profile editing.
  4. 🧱 Ignoring architecture. Sometimes the issue isn’t slow code, but poor design.
  5. 🔄 Static numbers. Requirements should evolve along with the system.
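
To see why mistake 2 matters, here is a tiny illustration with made-up numbers: a handful of slow requests barely moves the mean but shows up clearly in the 95th percentile.

```python
from statistics import mean, quantiles

# Made-up sample: most requests are fast, a few hit a slow path (e.g. a cold cache)
response_times_ms = [120] * 95 + [4000] * 5

avg = mean(response_times_ms)                  # ~314 ms, looks acceptable on its own
p95 = quantiles(response_times_ms, n=20)[18]   # 95th percentile exposes the slow tail

print(f"mean = {avg:.0f} ms, p95 = {p95:.0f} ms")
# Roughly every twentieth user waits about four seconds, yet the average alone would hide it.
```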

Bad NFRs are worse than no NFRs at all.

They’re dangerous because they create a false sense of security. The team sees green graphs and assumes everything’s fine, until production collapses under real load.


📈 Metrics That Really Matter

Performance isn’t a single number. It’s an ecosystem of metrics that describe how the system behaves under different loads.

  • Load metrics: users, sessions, transactions per second.
  • Timing metrics: median, 90th and 95th percentiles, maximums.
  • Resource metrics: CPU, memory, IO, network, database.
  • Behavioral metrics: error rate, recovery time, GC pauses, timeouts.
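
To make these metrics actionable, each one can be pinned to an NFR threshold so a test run fails automatically when a budget is exceeded. A minimal sketch, assuming hypothetical threshold values and a metrics dictionary already collected by your load tool or monitoring:

```python
# Hypothetical NFR budget; the numbers are illustrative, not prescriptive
NFR_BUDGET = {
    "p95_latency_ms": 2000,     # timing
    "max_latency_ms": 5000,     # timing
    "error_rate_pct": 0.1,      # behavioral
    "cpu_utilization_pct": 80,  # resource
}

def check_nfrs(measured: dict[str, float]) -> list[str]:
    """Return a list of violated NFRs; an empty list means the run passed."""
    violations = []
    for metric, limit in NFR_BUDGET.items():
        value = measured.get(metric)
        if value is not None and value > limit:
            violations.append(f"{metric}: {value} > {limit}")
    return violations

# Example: numbers as they might come out of a load-test run
results = {"p95_latency_ms": 2350, "max_latency_ms": 4100,
           "error_rate_pct": 0.05, "cpu_utilization_pct": 92}

for violation in check_nfrs(results):
    print("NFR violated ->", violation)
```

Wired into CI, a check like this turns NFRs from a document into a gate.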

If everything looks perfect at 80% load, you simply didn’t test hard enough.

It’s not enough to measure; you need to understand cause and effect. A CPU spike might be caused by inefficient caching, and network delays might point to a misconfigured load balancer. A performance engineer should think systemically, looking for patterns rather than isolated bad metrics.

A good system not only survives stress but also recovers gracefully: freeing memory, closing connections, clearing queues.


🔧 Main Challenges

  1. 📉 No data. Without logs, any discussion is guesswork.
  2. 🗣 Different languages. Devs talk about TPS, analysts about users, business about profit.
  3. 🕒 Late involvement. Performance gets remembered a month before release.
  4. 🧩 Disconnection. Everyone thinks it’s someone else’s responsibility.
  5. 💸 Underestimation. As long as the system doesn’t crash, performance feels optional.

Performance is an investment in the future. We don’t test for today, we insure tomorrow.


☁️ Production-Like Environment

Performance tests make sense only in a production-like environment.

Testing performance on QA is like checking if a plane flies in a garage. It moves, but it won’t take off.

An ideal environment:

  • the same number of servers and cores;
  • identical database and cache configurations;
  • the same JVM, drivers, and service versions.

Cloud makes this easier: Terraform, Helm, CloudWatch, Grafana, Prometheus. The main thing is observability and stability.
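
One practical habit is to diff the performance environment against production before every test cycle, using exactly this kind of checklist. A minimal sketch with hypothetical, hardcoded environment descriptions; in practice they would be pulled from Terraform state, Helm values, or a CMDB:

```python
# Hypothetical environment descriptions (illustrative values only)
PROD = {"app_nodes": 6, "cores_per_node": 8, "db": "postgres 15.4",
        "cache": "redis 7.2", "jvm": "temurin 21.0.2"}
PERF = {"app_nodes": 2, "cores_per_node": 8, "db": "postgres 15.4",
        "cache": "redis 7.2", "jvm": "temurin 17.0.9"}

def environment_drift(prod: dict, perf: dict) -> dict:
    """Return every setting where the performance environment diverges from production."""
    return {key: (prod[key], perf.get(key)) for key in prod if perf.get(key) != prod[key]}

for setting, (prod_value, perf_value) in environment_drift(PROD, PERF).items():
    print(f"{setting}: prod={prod_value}, perf env={perf_value}")
```

Anything the diff reports is a caveat that belongs right next to the test results.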

Tests should simulate real user flows: logins, navigation, purchases, reports. Otherwise, you’re not testing performance, you’re testing randomness.
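
For instance, such a flow might be sketched as a Locust scenario (Locust is used here purely as an illustration, since the article doesn’t prescribe a tool; the endpoints, payloads, and task weights are hypothetical):

```python
from locust import HttpUser, task, between

class ShopUser(HttpUser):
    # Think time between actions, like a real user pausing to read the page
    wait_time = between(1, 3)

    def on_start(self):
        # Every simulated user logs in once before doing anything else
        self.client.post("/login", json={"user": "demo", "password": "demo"})

    @task(5)
    def browse_catalog(self):
        self.client.get("/catalog")

    @task(2)
    def open_report(self):
        self.client.get("/reports/monthly")

    @task(1)
    def purchase(self):
        # The business-critical operation gets its own task so its latency is tracked separately
        self.client.post("/checkout", json={"item_id": 42})
```

Weighted like this, the scenario reproduces a realistic mix of logins, navigation, reports, and purchases instead of hammering a single endpoint.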


💼 Investment, Not Bureaucracy

The absence of NFRs turns performance discussions into opinion wars. When NFRs exist, there’s a shared language, measurable goals, and transparency.

In mature teams, NFRs aren’t just paperwork. They’re part of the project’s DNA.

Requirements make performance testing meaningful. Developers understand their limits, business sees risks clearly, and engineers have a target to aim for. It’s not about reports. It’s about control over the system’s growth.

Well-defined NFRs let you plan ahead: forecast scaling, budget infrastructure, and prevent incidents, instead of firefighting them.


🧭 Final Thoughts

Performance without requirements is like flying without instruments. You can fly by intuition, but sooner or later, the fog comes and the system loses direction.

Requirements are the map and compass. Without them, everyone optimizes their own piece, but no one moves forward.