Contents
- 1 PHP worker processes explained: the quiet machinery behind your app
- 2 Two worlds of PHP workers
- 3 The anatomy of a web request worker
- 4 How many web workers do you really need?
- 5 The other half: background and queue workers
- 6 Web workers vs queue workers: the quiet dependency
- 7 Supervising workers like adults
- 8 Observability: reading the workers’ mood
- 9 Worker problems that look like something else
- 10 A quiet skill that makes you a better PHP developer
PHP worker processes explained: the quiet machinery behind your app
There’s a moment most of us remember.
It’s late. Production is slow.
Your monitoring pings you with that nervous orange color: latency up, queue depth rising.
Someone mutters, “Maybe we just need more PHP workers?”
And then the room goes quiet, because nobody wants to admit what they’re actually thinking:
“What is a PHP worker, really? And how do we know we’re doing this right?”
We throw the word around casually—“workers”, “FPM processes”, “queue workers”, “supervisors”—like we’re talking about actual people. Which is not entirely wrong. Underneath all the config files and dashboards, PHP worker processes are basically the people in your back-end factory.
If you’re building or running PHP systems, especially at work where production incidents have real cost, understanding workers is not optional anymore. It’s the difference between “works fine locally” and “keeps working when your traffic doubles overnight.”
Let’s unpack it together, in human terms, and then get practical.
Two worlds of PHP workers
When people say “PHP worker”, they often mean one of two things:
- Web request workers
  The processes that handle HTTP requests: php-fpm via FastCGI (behind Nginx, Caddy, etc.), Apache with mod_php, and so on.
- Background/queue workers
  Long-running CLI processes that consume jobs from a queue: Laravel Horizon workers, Symfony Messenger workers, custom php worker.php loops.
They behave differently, they’re tuned differently, and they fail in different ways. But they share the same core idea:
A PHP worker is just a PHP process, configured to do some kind of work, within some limits.
If that sounds underwhelming, good. It means we can reason about them without mysticism.
Let’s walk through both worlds, then talk about how they collide in real systems.
The anatomy of a web request worker
Picture a busy day on your application. A hundred users hit “Checkout” within the same second.
What happens?
- The request comes in: Nginx or Apache or Caddy receives the HTTP request.
- The web server hands it off to PHP: via FastCGI or embedded module.
- A PHP worker process accepts that request, spins up your framework, hits database/cache, runs your business logic.
- The worker returns the response and:
- either dies (traditional CGI, rare today), or
- gets ready for the next request (php-fpm usual case).
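That handoff from web server to PHP is usually just a few lines of config. A minimal Nginx sketch (the socket path is an assumption; yours may differ):
# hand .php requests to the FPM pool over FastCGI
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # assumed socket path
}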
The worker as a person on a support line
Think of an FPM worker like a support agent on the phone.
- If you have 5 workers, it’s like having 5 people to answer calls.
- If 50 people call at once:
- 5 get through,
- 45 sit in a queue (backlog),
- and some will eventually time out or hang up.
Setting pm.max_children to 5 when you need 50 is not “optimizing infrastructure”. It’s like opening a call center with one intern and a folding chair.
Key php-fpm settings that secretly run your life
If you’re on PHP-FPM (which is what most modern setups use), your worker world is shaped by a few settings.
In www.conf (or similar):
- pm = static|dynamic|ondemand
  How workers are managed:
  - static: a fixed number of workers.
  - dynamic: FPM adds and removes workers between a min and a max, based on load.
  - ondemand: workers start when needed and die when idle.
- pm.max_children
  The maximum number of workers (i.e. the maximum number of simultaneously handled requests).
- pm.start_servers, pm.min_spare_servers, pm.max_spare_servers
  Only for dynamic. These decide how many workers are kept warm and ready.
- pm.process_idle_timeout
  For ondemand: kill idle workers after this much idle time.
- pm.max_requests
  How many requests a single worker handles before being recycled.
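Put together, a dynamic pool fragment might look like this (every value is an illustrative assumption, and a real pool also needs listen, user, and friends):
; www.conf — illustrative values, not recommendations
pm = dynamic
pm.max_children = 20       ; hard ceiling on simultaneous requests
pm.start_servers = 6       ; workers launched at startup
pm.min_spare_servers = 4   ; keep at least this many idle and warm
pm.max_spare_servers = 8   ; reap idle workers above this
pm.max_requests = 500      ; recycle each worker after 500 requests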
That last one is the unsung hero.
Why pm.max_requests matters more than you think
PHP has a thing: long-running processes tend to leak memory. Frameworks add more state. Libraries add more. Over time, a worker that started at 40 MB might creep up to 200–300 MB.
pm.max_requests gives each worker a lifespan. After that many requests, it calmly exits, and FPM spawns a fresh worker. Like asking a tired support agent to go home and sending in a rested one.
Typical numbers people use in real life:
- low-traffic site: pm.max_requests = 500–1000
- heavier apps with memory leaks: 100–300
- super memory-tight systems: even lower
Is there a perfect value? No.
But there is a boring, reliable habit:
Monitor per-process memory, and adjust pm.max_requests until memory flattens out instead of climbing forever.
This is the kind of tedious tuning that saves you from 3 a.m. incidents. It’s not glamorous. It’s effective.
How many web workers do you really need?
Here’s the question that haunts people who administer PHP:
“How many php-fpm workers should I configure?”
It’s like asking how many chairs to put in a restaurant without knowing how many customers you’ll have, how long they stay, or how fast the kitchen is.
But we can approximate.
The CPU and RAM reality check
Each worker has two main costs:
- CPU: how much compute it uses while processing a request.
- RAM: how much memory it occupies.
Let’s say:
- Your VM has 4 vCPUs and 8 GB RAM.
- A typical PHP worker uses 150 MB RSS under real load.
- You want some RAM left for MySQL client buffers, cache, system, etc.
Rough math:
- 8 GB ≈ 8000 MB
- Leave ~2 GB for system & others → 6000 MB for PHP
- 6000 / 150 = 40 workers max from a RAM standpoint
CPU side:
- On 4 vCPUs, squeezing 40 CPU-heavy workers is a good way to melt your box.
- But most PHP requests are a mix of I/O waits and bursts of CPU.
A common pragmatic rule-of-thumb:
- Start near 2–4 × CPU cores if requests are moderately heavy.
- Watch CPU saturation. If you’re pegged at 100% all the time, you’re oversubscribed.
So with 4 cores:
- Start with 8–16 workers.
- Monitor:
- response times,
- CPU load,
- queue/backlog,
- worker memory growth.
Then adjust gradually.
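If you want the estimate to be explicit, the whole thing fits in a few lines of PHP. A sketch where every input is an assumption to replace with your own measurements:
<?php
// Back-of-the-envelope worker sizing — replace inputs with real numbers.
$totalRamMb  = 8000;  // the 8 GB box from above
$reservedMb  = 2000;  // OS, caches, everything that isn't PHP
$workerRssMb = 150;   // measured per-worker RSS under real load

$ramCeiling = intdiv($totalRamMb - $reservedMb, $workerRssMb); // 40

$cpuCores = 4;
$cpuGuess = 3 * $cpuCores; // midpoint of the 2–4 × cores rule of thumb

// Start at the smaller bound; let monitoring move the number from there.
echo min($ramCeiling, $cpuGuess) . " workers\n"; // 12 workers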
Configuration is an ongoing conversation with reality, not a one-time ceremony.
The invisible queue: where user experience really lives
Between incoming requests and workers, there is always a queue.
We don’t see it directly. We see symptoms:
- requests stuck in “waiting for PHP” in the browser dev tools;
- slow TTFB (time to first byte);
- health checks flapping.
You can usually see queue length via:
- web server status (Nginx status, Apache server-status),
- the FPM status page (pm.status_path),
- APM tools (New Relic, Datadog, etc.).
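Enabling the FPM status page is a one-line change in the pool config (the paths below are assumptions); you then route those paths to FPM in your web server and restrict them to internal IPs. The page's "listen queue" line is exactly the backlog we're talking about.
; in the FPM pool config
pm.status_path = /fpm-status
ping.path = /ping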
Low queue, stable latency?
You’re fine.
Growing queue, rising latency, CPU not saturated?
Maybe you simply don’t have enough workers.
Growing queue, rising latency, CPU already at 90–100%?
Throwing more workers at it is like adding more cars to a traffic bottleneck. It feels like action, but it just deepens the jam.
And this is where we drift into the other kind of workers.
The other half: background and queue workers
Most modern PHP apps don’t just respond to web requests. They:
- send emails,
- process images,
- update search indices,
- sync data to external services,
- calculate reports.
Do you want someone waiting with a spinning loader while you:
- generate a PDF,
- call three remote APIs,
- reindex 10,000 records?
Of course not. That’s where queue workers come in.
The long-lived PHP process
Unlike FPM workers, which serve one request at a time and reset per request lifecycle, queue workers are usually long-running CLI processes.
Laravel’s typical command:
php artisan queue:work --queue=emails,default --sleep=1 --tries=3
Symfony Messenger:
php bin/console messenger:consume async --limit=100 --time-limit=3600
Or your own loop:
while (true) {
    $job = $queue->pop();   // fetch the next job from the backend, if any

    if ($job) {
        handle_job($job);   // your business logic
    } else {
        sleep(1);           // queue is empty: back off instead of busy-looping
    }
}
These workers:
- connect to a queue backend (Redis, RabbitMQ, SQS, etc.),
- wait for jobs,
- process them,
- repeat.
They are your night shift. They keep working after the user has already moved on.
The three numbers that define a queue worker
When tuning background workers, almost everything revolves around:
- Concurrency: how many workers you run in parallel.
- Job time: how long each job takes.
- Arrival rate: how many jobs per second/minute arrive.
If:
- each job takes ~2 seconds,
- you get 50 jobs per second,
- and you only run 10 workers…
…your backlog will explode.
A rough mental model:
Workers needed ≈ (jobs per second × average job time)
If you get 5 jobs/second, and each job takes 2 seconds:
- 5 × 2 = 10 workers to keep up on average.
Plus some buffer for spikes.
This is not exact math. Real systems are noisy. Jobs vary in complexity. Queues spike. But even such back-of-the-envelope thinking is better than “we just started 3 workers because… reasons.”
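Queueing folks will recognize this as Little's law in disguise; with a spike buffer it's a one-liner (a sketch with assumed numbers):
<?php
// Little's-law-style sizing — all inputs are assumptions.
$jobsPerSecond = 5.0;
$avgJobSeconds = 2.0;
$headroom      = 1.5;  // 50% buffer for bursts and retries

echo (int) ceil($jobsPerSecond * $avgJobSeconds * $headroom) . "\n"; // 15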
The dark side of long-lived workers
Long-lived processes come with trade-offs:
- Memory leaks accumulate over hours.
- New code deploys don’t automatically restart workers (unless supervised).
- Connections to the database/cache can go stale.
- Bugs that would only manifest on the 500th job suddenly matter.
Good habits:
- Limit job count or time per worker: e.g. Laravel's --max-jobs=1000 --max-time=3600.
- Graceful restarts via Supervisor, systemd, Kubernetes, or Laravel Horizon.
- Watch memory over time: if your worker creeps from 100 MB to 800 MB in an hour, something is wrong.
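In a hand-rolled loop, those habits look something like this (a sketch, reusing the hypothetical $queue and handle_job() from earlier; all limits are assumptions):
<?php
$handled   = 0;
$startedAt = time();

while (true) {
    $tooManyJobs = $handled >= 1000;                            // job cap
    $tooOld      = (time() - $startedAt) >= 3600;               // time cap
    $tooFat      = memory_get_usage(true) >= 256 * 1024 * 1024; // 256 MB cap

    if ($tooManyJobs || $tooOld || $tooFat) {
        exit(0); // exit cleanly and let the supervisor start a fresh worker
    }

    $job = $queue->pop();
    if ($job) {
        handle_job($job);
        $handled++;
    } else {
        sleep(1);
    }
}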
An ugly truth: a lot of PHP shops learn this only after the first incident where the worker box dies in the middle of a marketing campaign because nobody noticed memory climbing for weeks.
The postmortem root cause line is always simple and brutal:
“Workers ran forever, no recycling strategy, memory leak in job X.”
It’s rarely something exotic.
Web workers vs queue workers: the quiet dependency
Here’s where things get interesting.
In real life, these two kinds of workers are not separate worlds. They depend on each other in subtle ways.
Consider this chain:
- User submits a heavy operation.
- HTTP request enqueues a job, responds “OK, we’re on it”.
- Queue workers pick it up and do the hard work.
- User receives an email or sees updated data later.
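In Laravel terms, step 2 of that chain is a one-liner (a sketch; GenerateInvoice is a hypothetical job class):
// in a controller: enqueue the heavy work, answer immediately
GenerateInvoice::dispatch($order);                    // hypothetical job class
return response()->json(['status' => 'queued'], 202); // "OK, we're on it"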
If queue workers are down or overwhelmed:
- the queue grows,
- downstream effects are delayed,
- user “feels” the lag indirectly.
If web workers are too few or starved:
- jobs may not even be enqueued,
- people can’t trigger new work,
- your app feels frozen before the queue even sees a job.
You can’t scale one side in isolation.
You’re tuning a system, not two disconnected components.
Supervising workers like adults
At some point, every team hits the same wall:
“We’ve got 5 different PHP worker commands, 3 queues, 2 environments, and nobody knows which one is running where.”
Then someone SSHs into a random server and finds a screen session named worker1 running for 147 days.
If that sounds familiar, you’re not alone.
We all start there.
But real stability starts when you treat workers as first-class citizens.
Use an actual supervisor
Depending on your stack:
- Supervisor (the classic)
  Good for bare-metal or simple VM setups.
- systemd
  If you're on a modern Linux, it's already there.
- Kubernetes
  Jobs, Deployments, HPA based on queue depth, etc.
- Laravel Horizon
  Designed specifically for Laravel queues.
Example Supervisor config for queue workers:
[program:laravel-worker]
; process_name with %(process_num)s is required when numprocs > 1
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app/artisan queue:work --sleep=1 --tries=3 --max-jobs=500
numprocs=5
autostart=true
autorestart=true
user=www-data
redirect_stderr=true
stdout_logfile=/var/log/laravel-worker.log
stopwaitsecs=3600
Key ideas here:
- numprocs: how many workers you run in parallel.
- autostart and autorestart: make them resilient to crashes.
- --max-jobs (or equivalent): forces periodic recycling to avoid leaks.
The how doesn’t matter as much as the principle:
Workers should never depend on a human remembering to run php artisan queue:work in a random shell.
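If you'd rather lean on systemd, the equivalent is a template unit (a sketch; the unit name and paths are assumptions):
# /etc/systemd/system/app-worker@.service
[Unit]
Description=Queue worker %i
After=network.target

[Service]
User=www-data
ExecStart=/usr/bin/php /var/www/app/artisan queue:work --sleep=1 --tries=3 --max-jobs=500
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
Running systemctl enable --now app-worker@{1..5} then gives you five supervised workers.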
Observability: reading the workers’ mood
You can’t manage what you can’t see.
That’s not a slogan. It’s brutal physics.
Good worker setups expose:
- Throughput: jobs/sec, requests/sec.
- Latency: average and percentile response times.
- Queue length / backlog: how many pending jobs.
- Error rates: exceptions, failed jobs, timeouts.
- Resource usage: CPU, memory per worker.
For web workers:
- enable pm.status_path in FPM,
- scrape metrics into Prometheus, Datadog, New Relic, etc.,
- track slow requests separately (the FPM slow log).
For queue workers:
- expose internal metrics if possible (Prometheus client, stats to Redis),
- log processing time per job,
- have a view (like Laravel Horizon’s dashboard) to see queue depth and failures.
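Per-job timing doesn't need a framework; wrapping the handler is enough (a sketch, reusing the hypothetical handle_job() from earlier):
$start = microtime(true);
handle_job($job);
$durationMs = (microtime(true) - $start) * 1000;

// one structured log line per job makes dashboards and alerting trivial
error_log(json_encode([
    'event'       => 'job_processed',
    'job'         => get_class($job),
    'duration_ms' => round($durationMs, 1),
]));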
Why does this matter for a platform like Find PHP?
Because when someone is hiring a PHP developer or joining a new team, “knows PHP” is meaningless without “understands how this thing behaves in production when it’s 3× traffic and nobody is calm anymore.”
Workers are where that knowledge shows up in real life.
Worker problems that look like something else
A fun and painful thing about worker issues: they rarely present as “the workers are misconfigured.”
They show up as:
- “The site is slow at random times.”
- “Emails take 20 minutes to arrive.”
- “Sometimes jobs are just… stuck.”
- “Memory usage keeps climbing and we don’t know why.”
- “CPU is low but requests are still timing out.”
Often, underneath:
- not enough FPM workers,
- too many FPM workers for the available CPU,
- queue workers stuck on a poisoned job,
- workers not restarted on deploy,
- one gigantic job blocking everything else,
- missing backpressure (no limits on job volume per user).
The fix is rarely heroic. It’s mostly about:
- matching worker counts to load,
- giving them sane lifecycles (max_requests, max-jobs, timeouts),
- monitoring their health,
- isolating heavy work into separate queues or worker pools.
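That last fix is often just two supervised commands instead of one (a sketch, assuming Laravel-style queues):
# fast lane: plenty of workers, short jobs only
php artisan queue:work --queue=default --max-jobs=1000

# heavy lane: a few workers, so big jobs can't starve the fast lane
php artisan queue:work --queue=heavy --max-jobs=50 --timeout=600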
It’s boring. It’s also the difference between “we’re always firefighting” and “we kind of trust our system.”
A quiet skill that makes you a better PHP developer
If you work in PHP, especially in the world that overlaps with Find PHP—jobs, hiring, teams, long-lived products—worker literacy is a quiet career multiplier.
Understanding PHP worker processes means:
- you can talk confidently to DevOps/SRE folks without bluffing,
- you can design features that don’t accidentally DDoS your own app,
- you can reason about scaling limits before they hurt you,
- you’re not scared of php-fpm.conf anymore,
- when a recruiter asks “how did you make your system scale?”, you have a real story, not buzzwords.
For people hiring PHP specialists, it’s one of the most reliable signals.
Anyone can say “I know Laravel” or “I’ve used Symfony.” It hits differently when someone can say:
- “We tuned pm.max_children because we saw queue time spike under load.”
- “We added max-jobs and memory metrics to our workers to kill a leak.”
- “We split heavy jobs into a separate worker pool so normal traffic stayed fast.”
Those are the fingerprints of someone who’s wrestled with the system, not just the syntax.
We all start somewhere, usually staring at a log file at 1 a.m. wondering why FPM keeps crashing or why the queue is silently backing up. The difference is whether we decide to actually understand what those workers are doing for us, day in and day out.
If you take the time to really see them—not as black boxes, but as simple, configurable processes with limits and trade-offs—you’ll notice something subtle: your systems feel calmer, your incidents get shorter, and your own work starts to carry a quiet, earned confidence that doesn’t need big words to explain itself.