JOURNAL FOR SATURDAY 25TH APRIL, 2026
______________________________________________________________________________
SUBJECT: Replacing Mountains with Ant Hills (tiny webserver setup)
DATE: Sat 25 Apr 23:37:03 BST 2026
Imagine you want a website. What are you thinking now? Next.js and Tailwind
for the frontend? Maybe Next.js Server actions or Node.js for the backend?
What about a database? Maybe PostgreSQL with Prisma or Drizzle. Maybe you want
AI features — Pinecone or Weaviate for the AI’s memory? Maybe put some Python
into the backend as well. Everybody wants to use yet another chatbot to find
stuff instead of exploring a site and actually clicking on links…
Maybe you just love JavaScript and have MongoDB, Express, React and Node.js?
Or how about PostgreSQL, Express, React and Node.js?
Maybe you want to go all enterprisey instead. How does Spring Boot, Angular
and MySQL/Oracle sound? Maybe .NET, C#, SQL Server and Azure? Go, Vue.js and
PostgreSQL?
Then you just need to bundle everything up in Docker containers made in CI/CD
pipelines that deploy onto Kubernetes clusters. After all you do need to find
time to get coffee, have a chat, maybe lunch. Job done! But… how much of your
technology stack do you really understand? How much do you even own?
How much is a server, maybe with dedicated GPUs for the AI, going to cost you?
How much technical debt is buried under a fragile heap of complexity? All of
this to do what exactly? Serve a document that is mostly text? Maybe images?
Ouch! That’s a lot of stuff. Is your heart sinking yet? We’ve traded understanding
for abstraction, and efficiency for “convenience”, but at what cost? What
happens when a single, vital component in that complex machinery fails, is no
longer compatible due to new features, or is no longer supported? What happens
when your “architect” moves on? What happens in time when people forget…
What if there was a simpler, easier way? A way built on simple tools decades
old. Each tool battle-tested, doing one specific job, all tools seamlessly
working together. Each tool maintainable and yours to use how you want.
Let’s start at the end and show you some results of my little setup. This is
from a heavy stress test using ApacheBench (ab). The server and benchmark were
both running on the same desktop PC, an Intel i9-12900T with 64GB of RAM:
ab -k -c 1000 -n 1000000 \
https://phreaks1.wolfmud.dev/annex/building-kubernetes-clusters.html
Server Hostname:        phreaks1.wolfmud.dev
Server Port:            443
SSL/TLS Protocol:       TLSv1.3,TLS_AES_256_GCM_SHA384,2048,256
Server Temp Key:        ECDH prime256v1 256 bits
TLS Server Name:        phreaks1.wolfmud.dev

Document Path:          /annex/building-kubernetes-clusters.html
Document Length:        21199 bytes

Concurrency Level:      1000
Time taken for tests:   686.346 seconds
Complete requests:      1000000
Failed requests:        0
Keep-Alive requests:    0
Total transferred:      21413000000 bytes
HTML transferred:       21199000000 bytes
Requests per second:    1456.99 [#/sec] (mean)
Time per request:       686.346 [ms] (mean)
Time per request:       0.686 [ms] (mean, across all concurrent requests)
Transfer rate:          30467.32 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        9  474  543.7    183    5337
Processing:    47  212   40.6    210    1201
Waiting:        5   64   22.4     64    1074
Total:         63  686  535.1    413    5627

Percentage of the requests served within a certain time (ms)
  50%    413
  66%    460
  75%   1308
  80%   1358
  90%   1419
  95%   1465
  98%   2369
  99%   2443
 100%   5627 (longest request)
That’s one million successful requests and over 21GB of data transferred
using one thousand concurrent TLS (HTTPS) connections — delivered in a mean
time of 686ms per request. The “server” software was running at under 30% CPU
and using 10MB of RAM while `ab` was trying to take over the rest of the
machine.
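As a quick sanity check, ab’s headline figure follows directly from the raw
numbers above: requests per second is just complete requests divided by the
total time taken.

```shell
# Requests per second = complete requests / time taken for tests.
awk 'BEGIN { printf "%.2f requests/sec\n", 1000000 / 686.346 }'
```

Which agrees with the 1456.99 [#/sec] that ab reported.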
What’s the secret? Well, I will show you the entire configuration I used, in
its gory entirety:
H:./public
I:index.html
E404:/public/not-found.html
.ttf:application/font-sfnt
.woff:application/font-woff
.js:application/javascript
.wasm:application/wasm
That’s it! One file with an embarrassingly meagre 7 lines, 157 bytes… that’s
because this server just knows how to be a server. Nothing more, nothing less.
No complexity, no megabyte files of JSON, YAML or (shudder) XML to deal with.
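For the curious: H: sets the home (document root) directory, I: names the
index file, E404: points at a custom not-found page, and the .ext:type lines
add MIME types busybox doesn’t know out of the box. The whole file is small
enough to recreate from a here-document — a sketch, using the same paths as
above:

```shell
# Recreate the 7-line httpd.conf shown above, byte for byte.
cat > httpd.conf <<'EOF'
H:./public
I:index.html
E404:/public/not-found.html
.ttf:application/font-sfnt
.woff:application/font-woff
.js:application/javascript
.wasm:application/wasm
EOF
wc -c httpd.conf  # 157 bytes, as advertised
```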
Some of you may recognise the configuration file from above as belonging to
busybox’s httpd server. You would be right. This is how I am running it, ready
for more complex “magic”:
busybox httpd -c ./httpd.conf -p 127.0.0.1:8080 &
Ok, now this is getting embarrassing: that’s it. Now you may be asking: why
is it bound to localhost, 127.0.0.1, where nobody can talk to it? And where
is the TLS (HTTPS) coming from?! httpd only handles plain HTTP traffic! Well, you
can’t get safer than binding to localhost can you? As for the TLS? We have
some more “magic”. While busybox’s httpd is quietly running we front-end it
with another tool. In another shell I run:
socat -T1 OPENSSL-LISTEN:443,reuseaddr,fork,max-children=1000,\
cert=./combined.pem,verify=0 TCP:127.0.0.1:8080
Yes, that is socat the multi-purpose relay tool. It runs on port 443 and it
handles all of the TLS heavy lifting for us. In order to handle one thousand
concurrent requests socat forks itself and creates clones to handle the
requests. This has a number of benefits including predictable RAM usage and
stability. A clone may, possibly, fail only to be replaced with a fresh clone.
The number of clones and concurrent connections can be controlled with the
`max-children` setting. Any idle clients squatting on a connection are
discarded after 1 second of inactivity via the -T1 flag.
We also have “zero-trust” security of a sort: only processes on the machine
itself, socat among them, can talk to httpd on 127.0.0.1:8080. Hrm…
`verify=0` does not look safe? Well, that just means we are not using mTLS
(mutual TLS) and visiting clients do not need to produce a certificate
(enterprisey stuff). What if someone tries something nasty? Whatever they
might do disappears with the clone at the end of the request, and a new one
takes its place with a nice new, clean environment.
One last comment on the socat clones. Each clone will read the certificate
./combined.pem when it starts. This means the certificate can be replaced at
any time with a simple `mv new-combined.pem combined.pem` and it will be used
automatically. No complex changeover, no downtime, not a single missed
request.
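What goes into combined.pem? socat’s cert= option wants the certificate and
its private key together in one PEM file. A minimal sketch using a throwaway
self-signed certificate — a real site would concatenate its CA-issued
certificate chain and key instead:

```shell
# Generate a throwaway self-signed certificate and key for testing...
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" -keyout key.pem -out cert.pem 2>/dev/null

# ...then bundle certificate + key into the single file socat expects.
cat cert.pem key.pem > combined.pem
```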
That’s it. A high-performance, self-healing, TLS-encrypted fortress. All in a
7-line (157-byte) configuration file and two commands. It survived a huge
million-request onslaught without exceeding 10MB of RAM.
Heckler from the audience: But, but… ahha! What if the httpd server or socat
dies? Website down and all hands on deck! It’s a disaster! (smug look)…
Hrm, well that’s where the onslaught of mighty automation comes into play.
This script runs as a cron job, just in case:
#!/bin/sh
# Watchdog: restart socat and/or busybox httpd if either has died.
if ! pgrep -f "OPENSSL-LISTEN:443" > /dev/null; then
socat -T1 OPENSSL-LISTEN:443,reuseaddr,fork,max-children=1000,\
cert=./combined.pem,verify=0 TCP:127.0.0.1:8080 > /dev/null 2>&1 &
fi
if ! pgrep -f "busybox httpd" > /dev/null; then
busybox httpd -c ./httpd.conf -p 127.0.0.1:8080 &
fi
The script also runs via an @reboot cron job for when the server is rebooted.
Automation can be horribly complex, can’t it? ;)
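The pair of cron entries might look something like this — the script path and
the every-minute schedule are made up for illustration, adjust to taste:

```
# m h dom mon dow  command
* * * * *  /home/www/watchdog.sh    # check every minute
@reboot    /home/www/watchdog.sh    # and once at boot
```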
Now I know these tools are “ancient”: openssl has been around since 1998,
busybox since 1999 and socat since 2001 - but they just work. They are simple,
stable and don’t need constant maintenance and attention. I can just leave
them running on their own knowing they will do their jobs. With them I only
have to manage a 7-line configuration file and a 9-line script, all of which I
can probably recreate from memory, to run multiple websites. Pair that up with
a nice new static site generator creating everything from Markdown and we have
a winner. How’s your site doing?
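Sticking with the theme, even the “static site generator” can start out as a
toy. A deliberately naive sketch — nothing like a real generator, and every
path and file name here is made up — that wraps Markdown files in just enough
HTML for httpd to serve them from ./public:

```shell
# Create a sample source file so this sketch is self-contained.
mkdir -p src public
printf '# Hello\nAnt hills beat mountains.\n' > src/hello.md

# "Convert" every Markdown file: wrap it, unprocessed, in minimal HTML.
for f in src/*.md; do
  page="public/$(basename "${f%.md}").html"
  {
    printf '<!DOCTYPE html><html><body><pre>\n'
    cat "$f"
    printf '</pre></body></html>\n'
  } > "$page"
done
```

A real setup would of course render the Markdown properly, but the shape of
the pipeline - plain files in, plain files out, served by httpd - stays the
same.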
There is a little more to this: redirects for people landing using ‘HTTP’,
logging, virtual hosting for multiple sites, serving git repositories and
more. But I’m saving that for a full how-to guide in the Annex!
--
Diddymus