What DoS is
DoS stands for denial of service. The service may still be technically alive, but it spends so much effort handling excess load that it stops responding normally to real users.
In everyday terms, the site opens slowly, the form does not submit, the API returns errors, and support gets the classic “your site is down” message. It is not always an attack. But if this happens suddenly and without an obvious cause, DoS is one of the first suspects.
How DoS differs from DDoS
DoS usually means a single source of pressure or a very narrow source of load. That can be one node, one script, or one extremely expensive action that makes the system choke.
DDoS is distributed denial of service. Here the pressure comes from many devices at once, often through a botnet. Blocking it is harder because you do not have one address to stop. You have many, and they can change quickly.
Short version: DoS is “the entrance got clogged.” DDoS is “the entrance got clogged from many directions at the same time.”
Why it matters
The problem is not just what the graphs look like. When a service loses availability, it starts hurting the business:
- orders or payments do not go through;
- user dashboards and forms break;
- support ticket volume rises;
- trust in the service drops;
- the team spends hours fighting the fire instead of doing normal work.
The worst part is that DoS often looks like a normal traffic spike. From the outside it can seem like “we got lucky with a campaign” or “something broke after the release.” That is why you need signals, not guesses.
Illustration
The picture shows the simple idea: normal traffic arrives evenly, while an attack turns the flow into a wave that pushes the service into errors and delays.
How to recognize the problem
Start with facts, not panic.
1. Look at the symptoms
Typical signs:
- a sudden rise in 5xx;
- slow responses even for simple requests;
- CPU or RAM spikes;
- more connections than usual;
- the same routes repeat over and over.
If only one page or one endpoint is slow, it may just be heavy code. If everything starts choking at once, DoS becomes a much stronger suspect.
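A quick way to check two of those signs is to skim the access log for the share of 5xx responses and the routes that repeat the most. This is a rough sketch, not a monitoring tool: it assumes a combined-format log, and the path and field positions are illustrative and will differ per setup.

```python
# Sketch: count status codes and the most-repeated routes in an access log.
# Assumes the common combined format; the log path is a hypothetical example.
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # adjust to your setup

statuses = Counter()
routes = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        parts = line.split('"')
        if len(parts) < 3:
            continue
        request = parts[1].split()        # e.g. GET /api/orders HTTP/1.1
        status_fields = parts[2].split()  # status code follows the request
        if len(request) >= 2:
            routes[request[1]] += 1
        if status_fields:
            statuses[status_fields[0]] += 1

total = sum(statuses.values()) or 1
errors_5xx = sum(n for code, n in statuses.items() if code.startswith("5"))
print(f"5xx share: {errors_5xx / total:.1%} of {total} requests")
print("Top routes:", routes.most_common(5))
```

If the 5xx share is climbing and the same handful of routes dominates, you have facts to work with instead of a feeling.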
2. Compare it with the normal baseline
Every service has its own rhythm. It helps to know:
- how many requests arrive on a quiet day;
- which routes are the heaviest;
- where public access exists without login;
- what must stay available even under pressure.
Without that baseline, any spike looks like either a disaster or “just traffic.” Both readings are bad.
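One way to keep that baseline usable is to write quiet-day numbers down somewhere simple and compare against them when things look strange. A minimal sketch, assuming you already collect per-route request counts; the baseline.json file and the 3x threshold are only illustrative examples.

```python
import json

def load_baseline(path: str = "baseline.json") -> dict:
    """Load a quiet-day snapshot like {"/": 1200, "/api/orders": 300} per hour."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def unusual_routes(current: dict, baseline: dict, factor: float = 3.0) -> list:
    """Return routes whose current count exceeds the baseline by `factor`."""
    flagged = []
    for route, count in current.items():
        normal = baseline.get(route, 0)
        if normal and count > normal * factor:
            flagged.append((route, count, normal))
    return flagged

if __name__ == "__main__":
    # Inline example numbers instead of a real baseline.json, so the sketch runs as-is.
    baseline = {"/": 1200, "/api/orders": 300}
    current = {"/": 1500, "/api/orders": 4000}
    for route, now, usual in unusual_routes(current, baseline):
        print(f"{route}: {now} requests now vs ~{usual} on a quiet day")
```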
3. Separate attack from a normal spike
Not every spike is DDoS. Sometimes it is:
- an email or social campaign;
- a release that suddenly increased interest;
- cache that stopped helping;
- one expensive request that hundreds of clients started repeating.
But if the load comes from many addresses, hits the same routes, and shifts quickly, it looks much more like DDoS than like a successful marketing day.
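A rough heuristic for that last check is to count how many distinct client addresses hit each route. A newsletter landing page getting popular looks different from thousands of unrelated addresses hammering the same endpoint. Again a sketch against a combined-format log, with an illustrative path.

```python
# Sketch: distinct source addresses per route, as a crude spike-vs-DDoS signal.
from collections import defaultdict

LOG_PATH = "/var/log/nginx/access.log"  # adjust to your setup

ips_per_route = defaultdict(set)

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        parts = line.split('"')
        if len(parts) < 2:
            continue
        ip = parts[0].split()[0]          # client address is the first field
        request = parts[1].split()
        if len(request) >= 2:
            ips_per_route[request[1]].add(ip)

# Routes hit by huge numbers of different addresses deserve a closer look.
for route, ips in sorted(ips_per_route.items(), key=lambda kv: -len(kv[1]))[:5]:
    print(f"{route}: {len(ips)} distinct source addresses")
```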
How to protect the service
The best defense is not one magic switch. It is several layers.
1. Put a simple barrier at the edge
For public routes, WAF and rate limiting are useful. They do not make the service invulnerable, but they can cut away a lot of noise before it eats all the resources.
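In practice rate limiting usually lives at the edge (a CDN, a WAF, or something like nginx's limit_req), but the idea fits in a few lines. Below is a minimal in-process sketch, not a production limiter; the window and limit values are arbitrary examples you would tune to your own baseline.

```python
# Minimal sliding-window rate limiter per client address (in-memory sketch).
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS = 100  # illustrative limit, tune to your traffic

_hits = defaultdict(list)  # client ip -> timestamps of recent requests

def allow(client_ip: str) -> bool:
    """Return True if this client is still under the per-window limit."""
    now = time.monotonic()
    # Drop timestamps that fell out of the window, then check the count.
    recent = [t for t in _hits[client_ip] if now - t < WINDOW_SECONDS]
    _hits[client_ip] = recent
    if len(recent) >= MAX_REQUESTS:
        return False
    recent.append(now)
    return True

if __name__ == "__main__":
    for i in range(105):
        if not allow("203.0.113.7"):
            print(f"request {i + 1}: rejected, limit reached")
            break
```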
2. Make expensive things harder to abuse
Do not let costly operations run unless they really need to. If a request triggers a lot of work, add an extra check or move it to a separate route. Caching also helps wherever the data does not change every second.
If you already have a CDN, it can absorb part of the noise before the app even sees it. Caching helps in the same spirit: fewer full computations, fewer chances to choke.
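To make the “fewer full computations” point concrete, here is a tiny TTL-cache sketch around a hypothetical expensive function called load_report. A real setup would more likely lean on the CDN, a reverse-proxy cache, or Redis, but the principle is the same: identical requests within a short window should not redo the heavy work.

```python
# Sketch: a small time-to-live cache for an expensive read path.
import time
from functools import wraps

def ttl_cache(seconds: float):
    def decorator(func):
        store = {}  # args tuple -> (timestamp, cached value)

        @wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and now - hit[0] < seconds:
                return hit[1]          # fresh enough, skip the work
            value = func(*args)
            store[args] = (now, value)
            return value
        return wrapper
    return decorator

@ttl_cache(seconds=30)
def load_report(report_id: str) -> str:
    time.sleep(1)                      # stand-in for a heavy query
    return f"report {report_id}"

if __name__ == "__main__":
    load_report("42")   # slow, does the work
    load_report("42")   # fast, served from cache
```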
3. Keep a simple path for the most important things
The homepage, health checks, and critical screens should survive longer than everything else. If you have to shed load, do it in a way that keeps the service understandable and manageable.
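A health check is the clearest example: it should answer without touching the database or any other heavy dependency, so monitoring keeps getting a clear signal even while the rest of the service struggles. A minimal sketch using only the Python standard library; the /healthz path and the port are just common conventions, not requirements.

```python
# Sketch: a health endpoint that stays cheap no matter what else is going on.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            body = b'{"status": "ok"}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```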
4. Limit what is easy to exhaust
Watch large files, long requests, retries, and very expensive routes. These are often the things that turn a small spike into a real incident.
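Two of the easiest wins are capping the size of incoming payloads and putting a timeout on downstream calls, so a hung dependency cannot hold workers forever. The limits below are illustrative examples, not recommendations.

```python
# Sketch: cap payload size and fail fast on slow downstream calls.
import urllib.request

MAX_BODY_BYTES = 1 * 1024 * 1024   # reject anything over roughly 1 MB
DOWNSTREAM_TIMEOUT = 2.0           # seconds; better to fail fast than pile up

def check_body_size(content_length: int) -> None:
    if content_length > MAX_BODY_BYTES:
        raise ValueError(f"payload of {content_length} bytes exceeds limit")

def call_downstream(url: str) -> bytes:
    # A hung dependency should not hold a worker open indefinitely.
    with urllib.request.urlopen(url, timeout=DOWNSTREAM_TIMEOUT) as resp:
        return resp.read(MAX_BODY_BYTES)

if __name__ == "__main__":
    check_body_size(512_000)            # fine
    try:
        check_body_size(5_000_000)      # too big, rejected before any work
    except ValueError as err:
        print(err)
```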
5. Have an incident plan
You should already know:
- who watches the metrics;
- who makes the decision;
- where protective limits are enabled;
- who to call at the hosting or infrastructure provider;
- how to roll back if the change makes things worse.
The first 15 minutes of suspicion
If the service starts choking, do not change ten things at once.
- Check whether the issue hits the whole service or just one route.
- Look at 5xx, CPU, RAM, and connection count.
- Compare that with the last release or config change.
- Enable or strengthen WAF and rate limiting where it is safe.
- Gather logs and write down exactly what happened.
The order matters: stabilize first, then analyze, then tune.
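For the “look at the numbers and write them down” part, even a tiny snapshot script helps, because memory gets unreliable during an incident. The sketch below uses the third-party psutil package, and listing connections may need elevated privileges on some systems.

```python
# Sketch: a timestamped snapshot of CPU, memory, and open connections.
import datetime
import psutil  # third-party: pip install psutil

def snapshot() -> None:
    now = datetime.datetime.now().isoformat(timespec="seconds")
    cpu = psutil.cpu_percent(interval=1)
    mem = psutil.virtual_memory().percent
    conns = len(psutil.net_connections(kind="inet"))
    print(f"{now}  cpu={cpu}%  mem={mem}%  connections={conns}")

if __name__ == "__main__":
    snapshot()
```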
Common mistakes
- thinking DoS only counts when it is huge and dramatic;
- confusing an attack with a normal traffic spike;
- scaling everything blindly without baseline protection;
- turning protection off too early after the first improvement;
- not having graphs and logs when you really need them.
Conclusion / action plan
DoS targets availability. DDoS is the same problem, but coming from many sources at once, which makes it much harder to block.
A sane order of action looks like this:
- understand what exactly is broken;
- separate attack from normal spike;
- enable WAF, rate limiting, and other simple barriers;
- protect the most important routes;
- write down a plan for next time.
DoS is not a reason to panic. It is a reason to have layered defense and a calm response plan.