Me and Kubernetes are not friends

We are not enemies either. These days I'm mostly indifferent, that's all. I used to oppose K8S with every inch of my body; not anymore. Frankly, it's not K8S itself that bothers me, it's everything that comes along and around it.
All of my "professional" exposure to K8S was always very specific to wherever K8S happened to be running. As an on-prem kind of guy, I found the learning curve quite steep: not only did I not fully comprehend the difference between it and Docker, or how it is better (if at all) than Docker Swarm, but there were also cloud components coming along. Ingresses, load balancers, object storage, all of the crap that is quite specific to whichever cloud provider you are using. I felt lost and confused.
For a very long time, click was my only way of coping with all things Kubernetes. It was still very confusing with all the contexts, namespaces, pods, and containers, but at the very least, I was able to reason about some stuff that was running there. Only to a certain degree—my understanding was shallow at best, non-existent if I'm to be completely honest. I knew how to check some things and perform some basic tasks, but otherwise, if anything went south, I was completely lost.
That's not how I do stuff at all. I require an understanding of the things at hand, and the deeper I can get, the better. With Kubernetes it felt like an uphill battle, though, maybe even one impossible to win. I didn't want to render myself completely obsolete, and I had already skipped all the cloud native shit, so I knew that, at the very least, I needed to deepen my understanding of container orchestration.
Inspired by some introductory YouTube videos on the subject, I decided I needed to spin it up in a local environment where I could dissect it and break it apart. There were some points I knew I'd want to follow, though:
- No cloud—bare metal, deployed by hand if need be
- No wrappers, frameworks, or anything like it—I want it raw
- Components used by cloud providers preferred
- No clustering
This very quickly rendered things like k3s unacceptable. That last point might be controversial, I know, but initially I only wanted to see what all of the components are, how they are connected, and what the overall "flow" looks like.
I already had an Intel NUC7i3BNK running Debian Sid (as I tend to do with non-mission-critical stuff) with a bunch of containers maintained via Docker Compose, so I decided it might serve as a good starting point: I would move them into K8S once I had it in place. Going through the docs, I quickly realized I was going to kick it all off using kubeadm.
At that time I was experimenting with my own kernel builds, which caused me loads of headaches. K8S expected a lot of modules to be present in the kernel, so I had to go back and forth: recompile and install a new kernel, reboot the box, attempt the kubeadm deployment again, and so on. Daunting for sure, but it also gave me a good understanding of cgroups and all the "spaces" it required for smooth running.
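For reference, on a stock kernel the prerequisites boil down to what the kubeadm and container runtime docs list anyway; something along these lines (conventional paths, not necessarily what I used back then):
# kernel modules the container runtime and kube-proxy rely on
sudo modprobe overlay
sudo modprobe br_netfilter
printf 'overlay\nbr_netfilter\n' | sudo tee /etc/modules-load.d/k8s.conf
# networking sysctls kubeadm's preflight checks look for
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system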
It's been five years since I initially deployed my single-node server. I'm quite sure it started off using Docker as my container runtime, but later on I switched to containerd. I also recall switching at some point to systemd as my cgroup driver.
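If memory serves, the containerd side of that switch is a single toggle in /etc/containerd/config.toml (section path as per the containerd 1.x docs; treat this as a sketch, not my exact file), followed by a restart of containerd and a matching cgroupDriver: systemd in the kubelet configuration:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true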
Once all the required dependencies and kernel modules were in place, the deployment was not too difficult:
sudo kubeadm init --upload-certs
Of course, there are more steps involved to get it actually going. Because I'm running a single node, I also needed to remove the taint from the only node I'd be able to run pods on:
kubectl taint nodes --all node-role.kubernetes.io/master-
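One of those extra steps, for completeness, is the kubeconfig bit that kubeadm prints at the end of init; roughly:
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
Also worth noting: on current releases the taint to remove is node-role.kubernetes.io/control-plane- rather than the master- one above.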
For better or worse, I really wanted to also learn Helm at that time, so I was trying to use it for everything. That included Cilium, which I chose for my networking. The main idea was to use something that is widely adopted, and I had zero preference at that time (I still don't).
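Cilium ships an official Helm chart anyway, so the install boils down to something along these lines (version pinning and values omitted; a sketch, not my exact invocation):
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --namespace kube-system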

Is it overkill for a single node? Obviously. Is it cool? No, no it's not. But it works, and that's what counts.
Two additional things I knew from the get-go I'd want to have were cert-manager, to handle a wildcard certificate for the subdomain I delegated to K8S, and ingress-nginx, for serving all the things running on it. The former was rather straightforward to set up and there were no surprises; the latter got a bit more involved. For one, I had to use hostNetwork: true, but also, because I was exposing some "raw" TCP and UDP services, I eventually had to stop using Helm directly for deploying it. These days I simply generate the deployment with Helm and then use kubectl apply directly:
helm template -n ingress-nginx ingress-nginx ingress-nginx/ingress-nginx -f ingress-nginx/howl-values.yaml > ingress-nginx/howl-deploy.yaml
kubectl apply -f ingress-nginx/howl-deploy.yaml
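The relevant bits of howl-values.yaml look more or less like this. hostNetwork and the top-level tcp/udp maps are standard ingress-nginx chart values; the mapping below is a guess for illustration (AdGuard's DNS port), not necessarily what I actually expose:
controller:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
tcp:
  "53": "default/adguard-home:53"
udp:
  "53": "default/adguard-home:53"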
So, what am I running here? Honestly, not that much.
NAME                                   READY   STATUS      RESTARTS       AGE
adguard-home-79b95d7788-4w2pr          1/1     Running     0              22d
ghost-568687d44d-zvggq                 1/1     Running     0              99m
hedgedoc-deployment-64b94989c5-nd4qn   1/1     Running     0              99m
memos-666b8f8bff-cftff                 1/1     Running     0              100m
minio-7874ccdbfd-7qtph                 1/1     Running     13 (31d ago)   185d
nocodb-7788675d7c-k7trg                1/1     Running     0              100m
percona-57df5d487b-kxmqd               1/1     Running     27 (31d ago)   413d
redis-5bd54f6848-vdqbh                 1/1     Running     14 (31d ago)   210d
shorty-deployment-854c67bd74-sqk6k     1/1     Running     10 (31d ago)   130d
tootly-cronjob-29122980-8vsvc          0/1     Completed   0              152m
tootly-cronjob-29123040-bhlgf          0/1     Completed   0              92m
tootly-cronjob-29123100-p8cxt          0/1     Completed   0              32m
vault-96c85d8c9-xqrpf                  1/1     Running     0              99m
The majority of these services are not something I care much about. Sure, my Tootly is nice to have, and the same goes for Shorty; I could just as well run them as bare services anywhere else. The only mission-critical service up there is AdGuard, which I use as my internal DNS. My OPNsense firewall is configured to use it and to hand it out to all clients in my LAN.
In a separate namespace (monitoring) I'm also running kube-prometheus-stack. I have mixed feelings about this one. It's been causing me quite a bit of trouble lately, to the point where I had to manually bind PVs and PVCs for some pods. It's not annoying enough to rip it out completely, but I dread each new release. Sure, helm rollback is quite handy here, but it was not always able to recover from the PV and PVC woes. I like how well integrated it is with all things Kubernetes, but I'm not sure the effort to keep it is justified. Time will tell.
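For the curious, "manually binding" mostly meant poking the PV's claimRef with kubectl patch. The names below are made up, but the fields are the real ones:
# let a Released PV be claimed again by dropping the stale claim UID
kubectl patch pv prometheus-db-pv --type merge -p '{"spec":{"claimRef":{"uid":null,"resourceVersion":null}}}'
# or pre-bind a PV to a specific claim up front
kubectl patch pv prometheus-db-pv --type merge -p '{"spec":{"claimRef":{"apiVersion":"v1","kind":"PersistentVolumeClaim","namespace":"monitoring","name":"prometheus-db"}}}'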

One of the things that definitely improved my overall understanding was forcing myself to use kubectl until I puke inside each time I type it. This came as a strong recommendation from a good friend of mine who actually knows how to properly use Kubernetes, including in cloud environments. I did that, and eventually wrote my own wrappers on top of it to go faster, but they are still very much kubectl dependent. I prefer it this way these days.
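My wrappers are nothing fancy and very much tailored to my own habits, but the general shape is a handful of shell functions along these lines (an illustration, not my actual code):
alias k=kubectl
# shell into the first pod whose name matches a pattern
kex() {
  local pod
  pod=$(kubectl get pods -o name | grep -m1 "$1") || return 1
  kubectl exec -it "$pod" -- /bin/sh
}
# follow logs the same way
klog() {
  kubectl logs -f "$(kubectl get pods -o name | grep -m1 "$1")"
}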
There are some things I happily discovered still being simple in K8S. For example, spinning up an ephemeral container to fool around in:
kubectl run test --image=debian --rm -it -- /bin/bash
This is cool and not much different from using Docker, right? Right. The same goes for debugging containers:
kubectl debug -it memos-666b8f8bff-p76k5 --image=ghcr.io/hadret/debug:latest --target=memos
Sure, a bit more involved, but I feel it might get even more convoluted with Docker, especially when you need to deal with capabilities.
Storage has got to be my biggest pet peeve of K8S. It's so unnecessarily convoluted. Why can I easily start a container, but can't just as easily mount a file or folder into it? How is this good?
kubectl run who-thought-this-is-good --image=debian --rm -it --overrides='
{
  "apiVersion": "v1",
  "spec": {
    "volumes": [
      {
        "name": "my-empty-dir",
        "emptyDir": {}
      }
    ],
    "containers": [
      {
        "name": "debian",
        "image": "debian",
        "command": ["/bin/bash"],
        "stdin": true,
        "tty": true,
        "volumeMounts": [
          {
            "mountPath": "/mnt/data",
            "name": "my-empty-dir"
          }
        ]
      }
    ]
  }
}'
Carnage. The same goes for overall deployment handling: tons of YAML files to define everything, or value overrides when Helm is in place. Call me crazy, but it's not something I enjoy doing. It still feels so much faster to experiment with things by just using Docker Compose.
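For contrast, the Compose take on the monstrosity above is a handful of lines (again, just an illustration):
services:
  debian:
    image: debian
    command: /bin/bash
    stdin_open: true
    tty: true
    volumes:
      - ./data:/mnt/data
A docker compose run --rm debian and you're in, with ./data mounted where you want it.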
Of course, my use case is very specific. But this entire experiment, which has been going on for quite a while now, was not too bad. Even though I'm running it all on top of Debian Sid, I was able to regularly bump the K8S version and am currently rocking v1.32. It was mostly trouble-free.
OK, only recently I was going through some serious shit, and, you won't believe it, it was storage-related. Long story short, upgrading util-linux to version 2.41 wreaked havoc with storage handling in K8S. You can read more about it here: Possible mount regression with 2.41 when using Kubernetes. I think this should be a solved problem by now, but I still have the following packages downgraded and kept on hold:
bsdextrautils
libblkid-dev
libblkid1
libmount1
libsmartcols1
libuuid1
mount
util-linux-extra
util-linux
uuid-dev
uuid-runtime
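Keeping them pinned is just apt-mark doing its thing, roughly:
sudo apt-mark hold bsdextrautils libblkid-dev libblkid1 libmount1 libsmartcols1 libuuid1 mount util-linux util-linux-extra uuid-dev uuid-runtime
apt-mark showhold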
Perks of running an unstable rolling distro, I suppose.
Overall, Kubernetes reminds me of OpenStack back in the day: of course you can use it, but unless you are a cloud provider, Proxmox is going to serve you just fine. If you hit a certain scale, I can imagine something as complex as Kubernetes actually being easier than maintaining things any other way.
These days I'm mostly fine dealing with stuff running in Kubernetes and with Kubernetes itself. Debugging (so many things are interconnected and intertwined that it's really hard to reason about anything when something goes wrong), convoluted deployments, and storage woes aside, it's not too bad. Depending on your requirements and needs, I can see the appeal.
We are not friends. We are not enemies.