07.11.25

From side project to system

This text is not really about a side project. It is about how systems behave once they exist for real: under time, load, and everyday use.

GetMeOne started as a small script to make my life easier. I was tired of manually checking apartment listings, so I wrote something simple that pushed new offers into a spreadsheet. No database. No users. No expectations.

Over time, the script grew. This happened naturally, as new needs appeared. Filters were added. Notifications followed. Data had to be normalized. State became unavoidable. At that moment, the script crossed a line and became a real system.

At that point, I stopped treating it as a hobby. Not because it suddenly became critical, but because systems behave the same way regardless of why they were built.


Why I decided to test it

Most side projects are never tested under load. They are tested for correctness and then left alone. That works until time and volume start to matter.

I am a performance engineer. So instead of asking whether the project works, I asked a different question:

What happens when this system runs continuously and is forced to deal with more work than usual?

This was not about proving that the system is fast. It was about understanding where it bends, where it slows down, and where it might quietly break.


Thinking in terms of behavior, not components

It is easy to describe the architecture of GetMeOne as a set of services: parsers, filters, notifier, web app, database.

That description is correct, but it is also incomplete. What matters more is how work moves through the system. How often parsers produce data. How much work the filter creates. How delays in one place affect everything downstream.

From a performance perspective, this looks less like a collection of services and more like a flow. Seen this way, testing individual endpoints brings little value. The behavior of the full chain over time becomes the main concern.
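
To make that concrete, here is a small sketch of the chain as a single Python script. The real parsers, filters, and notifier run as separate services, and the stage names and timings below are invented, but the shape is the same: each stage feeds the next through a queue, and a slow stage delays everything behind it.

```python
import asyncio
import random
import time

# A minimal model of the chain parser -> filter -> notifier. In the real
# system these are separate services; here they are coroutines sharing
# queues, which is enough to see how work moves through the flow.

async def parser(out_q):
    """Emit a new listing every couple of seconds, stamped with its arrival time."""
    listing_id = 0
    while True:
        await asyncio.sleep(2)                      # how often the source is polled
        listing_id += 1
        await out_q.put({"id": listing_id, "seen_at": time.monotonic()})

async def filter_stage(in_q, out_q):
    """Match a listing against saved filters; deliberately slower than the parser."""
    while True:
        listing = await in_q.get()
        print(f"filter backlog: {in_q.qsize()} listings waiting")
        await asyncio.sleep(random.uniform(1, 4))   # per-listing filtering cost
        await out_q.put(listing)

async def notifier(in_q):
    """'Send' the message and report the end-to-end delay for this listing."""
    while True:
        listing = await in_q.get()
        delay = time.monotonic() - listing["seen_at"]
        print(f"listing {listing['id']}: notified after {delay:.1f}s")

async def main():
    parsed, matched = asyncio.Queue(), asyncio.Queue()
    await asyncio.gather(parser(parsed), filter_stage(parsed, matched), notifier(matched))

asyncio.run(main())   # runs until interrupted; watch the backlog and the delay grow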


Defining performance in user terms

Users do not care about internal latency numbers. They care about outcomes. In this system, the outcome is simple: how long it takes for a new listing to appear in a Telegram notification after it shows up on the source website.

This became the main performance goal. Everything else exists to support it.

Instead of starting with infrastructure limits, I started with this end-to-end delay and worked backwards. How much time can each step consume before the experience becomes useless? This approach makes trade-offs visible. If one part slows down, something else has to give.
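
One way to work backwards is to turn the end-to-end target into a budget per stage. The numbers and stage names below are illustrative, not the actual limits of GetMeOne, but the exercise looks roughly like this:

```python
# Illustrative numbers only; the real targets and stage names are assumptions.
# The idea: start from the user-facing delay and split it into a budget per stage.

TARGET_END_TO_END_S = 5 * 60        # listing visible on the site -> Telegram message

budget = {
    "source polling interval": 120,  # worst case: listing appears right after a poll
    "parsing and normalization": 30,
    "filter matching": 60,
    "notification dispatch": 60,     # deliberate spacing between outgoing messages
}

used = sum(budget.values())
slack = TARGET_END_TO_END_S - used

for stage, seconds in budget.items():
    print(f"{stage:28s} {seconds:4d}s  ({seconds / TARGET_END_TO_END_S:4.0%} of target)")
print(f"{'total':28s} {used:4d}s, slack: {slack}s")

# If one stage has to grow (say, polling less often to avoid being blocked),
# the slack shows how much the other stages must shrink to keep the promise.
```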


Load and behavior over time

The system does not follow a simple request-response pattern. Most work happens in the background. Parsers collect data. Filters process it asynchronously. Notifications are sent with deliberate delays.

In this setup, load comes less from request rate and more from accumulated state. As users and filters increase, each new listing creates more work. A single input can trigger hundreds or thousands of checks and messages.
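
A back-of-the-envelope calculation shows why. All numbers here are invented, but the shape of the math is what matters:

```python
# A rough model of how accumulated state, not request rate, drives the load.
# All numbers are made up for illustration.

new_listings_per_hour = 50          # input rate from the parsers
active_filters = 2_000              # saved searches accumulated over time
match_rate = 0.03                   # fraction of filters a typical listing matches

checks_per_hour = new_listings_per_hour * active_filters
messages_per_hour = checks_per_hour * match_rate

print(f"filter checks per hour: {checks_per_hour:,}")        # 100,000
print(f"notifications per hour: {messages_per_hour:,.0f}")   # 3,000

# The input (50 listings per hour) looks tiny; the work it triggers does not.
# Doubling the number of saved filters doubles both results with no change in input.
```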

Because of this, short tests only show part of the picture. They can reveal obvious issues, but they rarely show slow degradation. What mattered more here was observing the system over time.

Long-running tests made it possible to notice memory growth, queue buildup, and gradual increases in delay. Monitoring became the main way to understand what the system was doing when no one was actively interacting with it.
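
In practice this means sampling a few simple signals on a schedule and writing them somewhere a chart can be drawn from later. A minimal sketch, assuming the pipeline can expose its queue depth and its last observed end-to-end delay, and using the third-party psutil library to read memory:

```python
import csv
import time

import psutil  # third-party; used here only to read the process's memory

# A minimal long-running sampler: every minute, record the signals that only
# become interesting over hours. The queue-depth and delay callbacks are
# assumptions; in the real system they would come from the pipeline itself.

def sample_forever(get_queue_depth, get_last_delay_s, path="soak_metrics.csv"):
    proc = psutil.Process()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if f.tell() == 0:                       # new file: write the header once
            writer.writerow(["timestamp", "rss_mb", "queue_depth", "last_delay_s"])
        while True:
            writer.writerow([
                int(time.time()),
                round(proc.memory_info().rss / 1_000_000, 1),
                get_queue_depth(),
                round(get_last_delay_s(), 1),
            ])
            f.flush()
            time.sleep(60)

# Started once, in a daemon thread next to the pipeline, for example:
# threading.Thread(target=sample_forever,
#                  args=(lambda: parsed.qsize(), lambda: last_delay),
#                  daemon=True).start()
```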

Charts were treated as questions rather than proof. Why does this line keep rising? Why does delay increase even when input stays stable?


What this experiment shows

This experiment applies to a small system the same discipline that is usually reserved for large ones: clear goals, explicit limits, and continuous observation.

The most valuable outcome is not only whether the system survives a certain load, but what the system reveals about earlier design decisions. Unexpected behavior often points to assumptions that were never questioned.

This is only an early step. Further work (you can find it here) will focus on sustained load, visible limits, and design choices that begin to hurt first. Some changes will be minor. Others may require revisiting core decisions. That is expected.

Performance testing, even for small systems, is a way to learn how systems behave once assumptions meet reality.