- Exit code 137 = OOM kill: raise the container memory limit or fix the leak
- Exit code 1 = process crash: the queue worker threw an unhandled exception (often a dead API key, payment gateway error, or bad Claude API response)
- Supervisor not restarting it? Check that autorestart=true is set and that your Dockerfile runs supervisord in the foreground, not as a daemon
- SaaS billing problems and payment gateway errors that hit the worker without retry logic will crash the process: the container dies, but the fault is in your code, not Docker
Docker Queue Worker Container Exits Randomly in 2026: Here's Why and How to Fix It
I'm in Encinitas and I debug these exact issues for operators and founders in San Diego every week. Whether it's a supervisor misconfiguration, a SaaS billing problem crashing your worker, or a Claude API enterprise integration throwing unhandled errors, I'll tell you what's wrong in one text.
💬 Text PJ: "My queue worker keeps dying"

Questions people are actually searching
- Why does my Docker queue worker container die randomly or exit unexpectedly?
- How do I stop supervisor from letting a Docker container exit randomly?
- Docker queue worker exits with code 137: what does that mean?
- SaaS billing problems in 2026 causing worker crashes: is this a known issue?
- Claude API enterprise integration challenges: unhandled errors killing my queue?
- Payment gateway problems causing my background job container to crash?
- Why does my container exit immediately when I run it with supervisor?
What's actually happening: 6 real answers
Exit code 137: OOM Kill
Docker sent SIGKILL because the container hit its memory limit. Run docker inspect --format='{{.State.OOMKilled}}' <container> to confirm. Fix: raise the --memory limit or find the leak with a profiler. Queue workers that accumulate job payloads in memory are common offenders.
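If you run the worker via Compose, you can make the cap explicit and let Docker restart the process after a crash. A minimal sketch, with placeholder service and image names:

```yaml
# docker-compose.yml fragment -- service and image names are placeholders
services:
  worker:
    image: my-worker:latest   # your worker image
    mem_limit: 512m           # exit code 137 means this cap was hit
    restart: unless-stopped   # Docker restarts the worker after a crash
```

Raising mem_limit buys time; if OOMKilled keeps coming back at any limit, you have a leak, not a sizing problem.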
Supervisor daemon mode kills the container
If your Dockerfile has CMD ["supervisord"] without -n or nodaemon=true, supervisord forks to the background and PID 1 exits, so Docker stops the whole container immediately. Fix: add the -n flag or set nodaemon=true in supervisord.conf.
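The foreground fix is one line, assuming a standard supervisord.conf layout:

```ini
; supervisord.conf -- keep supervisord in the foreground so it stays PID 1
[supervisord]
nodaemon=true
```

Equivalently, leave the config alone and use CMD ["supervisord", "-n"] in the Dockerfile. Either way, PID 1 must never exit while the container should be running.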
API errors crashing the worker process
SaaS billing problems, payment gateway errors, and Claude API enterprise integration failures all return 4xx or 5xx status codes. If your worker doesn't handle those with retry logic, it raises an unhandled exception and exits with code 1. The container dies. The fix is in your worker code, not Docker.
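A minimal retry sketch in Python. The call convention (a function returning a status code and a body) and the set of retryable statuses are illustrative, not tied to any specific gateway SDK:

```python
import time

# Transient statuses worth retrying; 4xx auth/billing errors are not.
RETRYABLE = {429, 500, 502, 503, 504}

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Run an API call, retrying transient failures with exponential backoff.

    `call` returns (status_code, body). Non-retryable errors raise
    immediately so the job can be dead-lettered instead of killing the worker.
    """
    for attempt in range(max_attempts):
        status, body = call()
        if status < 400:
            return body
        if status not in RETRYABLE or attempt == max_attempts - 1:
            raise RuntimeError(f"API call failed with status {status}")
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

The point is that the exception becomes something your job runner catches and requeues, instead of an unhandled crash that takes down PID 1.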
Supervisor autorestart not configured
Default supervisord behavior is autorestart=unexpected: it only restarts on unexpected exit codes. If your process exits with 0 after draining a queue, it won't restart. Set autorestart=true and startsecs=2 to give the process time to actually start before supervisor counts a crash.
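Putting those two settings together, a program section might look like this (the program name and command are placeholders for your own worker):

```ini
; supervisord.conf -- worker program section; command is a placeholder
[program:queue-worker]
command=python worker.py
autorestart=true          ; restart on every exit, including a clean exit 0
startsecs=2               ; process must survive 2s to count as started
startretries=10           ; give up after 10 rapid failures (FATAL state)
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
```

Routing stdout to the container's stdout keeps worker logs visible in docker logs instead of buried in a file inside the container.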
Environment variable missing at runtime
Works locally, dies in Docker? A missing DATABASE_URL, REDIS_URL, or API key causes the worker to crash on first use. Run docker exec -it <container> env and compare against your local environment. Pass secrets via --env-file or your orchestrator's secret store.
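One way to surface a missing variable immediately at boot instead of on first use is a fail-fast check before the worker loop starts. The variable names here are examples; list your own:

```python
import os
import sys

REQUIRED_ENV = ["DATABASE_URL", "REDIS_URL"]  # example names; list yours

def check_env(required, environ=os.environ):
    """Return the required variables that are missing or empty."""
    return [name for name in required if not environ.get(name)]

if __name__ == "__main__":
    missing = check_env(REQUIRED_ENV)
    if missing:
        # Crash loudly at startup with a clear message, not mid-job.
        sys.exit(f"Missing required env vars: {', '.join(missing)}")
```

A startup failure with a named variable in the logs is a ten-second fix; the same crash on first database access can look like anything.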
Payment gateway or billing limit hit
Payment gateway problems in 2026 often show up as workers dying after processing a specific job: the one that hits a rate limit, a declined-card handling path, or an expired webhook secret. Add structured logging around every external API call in your worker so you can see exactly which job killed it.
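A small wrapper is enough. This sketch logs one JSON line before and after each external call, so the last line before a crash names the API and the job (the function and field names are illustrative):

```python
import json
import logging
import time

log = logging.getLogger("worker")

def logged_call(name, job_id, call):
    """Wrap an external API call with structured before/after log lines."""
    start = time.monotonic()
    log.info(json.dumps({"event": "api_call_start", "api": name,
                         "job_id": job_id}))
    try:
        result = call()
        log.info(json.dumps({"event": "api_call_ok", "api": name,
                             "job_id": job_id,
                             "ms": round((time.monotonic() - start) * 1000)}))
        return result
    except Exception:
        # log.exception also records the traceback for the crashed call.
        log.exception(json.dumps({"event": "api_call_failed", "api": name,
                                  "job_id": job_id}))
        raise
```

With this in place, "the worker died" becomes "job 8412's gateway charge call failed", which is the difference between guessing and fixing.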
Still not sure why your container keeps dying?
Send me the exit code and the last 20 lines of logs. I'll tell you what's wrong, usually in one reply. No charge, no pitch.
💬 Text PJ Now: 858-461-8054

⭐ Helpful? Leave PJ a Google review. Takes 30 seconds.