For a number of months now, I have been going down the self-hosting rabbit hole. This long-winded, probably incoherent ramble is an account of my ongoing progress.

The hardware

In spring 2023 I came into possession of an old Lenovo workstation that was gonna go to the landfill, a ThinkStation E32: it has an i5-4570, 32GB of DDR3, nothing special overall.

From the beginning, I left it in a corner of my apartment with the intention of someday using it as a home server, but for a long time I could never be bothered to actually start working on the project.

There is a lot of OEM bullshittery going on with this machine, just like with any mass-market enterprise OEM box:
The PSU is non-standard and only has two cables: a standard CPU power connector, and a 14-pin (?!) motherboard power connector. As a consequence, most devices are powered off the motherboard, which looks pretty whacky. As a side-effect, it would be kind of difficult to upgrade this machine piecemeal, so I am kinda stuck on the Haswell platform: a relic from the dark past when no one made CPUs with more than 4c/8t, and Intel had no interest in giving anyone any more. Even if you look at rack-mount servers from this era, you will notice they usually have dual Xeons with 4 or 6 cores each. High core counts just were not a thing back then.

Closing out the hardware section: early on in the project I had bought an NVIDIA Tesla P4, a data-center GPU roughly equivalent to a GTX 1080 (-ish). I thought to myself that whatever I ended up with software-wise, I would like to be able to run a Stable Diffusion model: I was quite disappointed to find out there were no good hosted versions of this that were free, and you had to jump through a lot of hoops to use Midjourney and stuff like that.
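
To give an idea of what "running a Stable Diffusion model" means in practice, a minimal sketch with Hugging Face's diffusers library looks something like this (the checkpoint and settings here are just an illustrative example, not a definitive setup):

```python
# Minimal sketch: running Stable Diffusion locally with the diffusers library.
# The checkpoint and settings are illustrative assumptions, not a fixed recipe.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example checkpoint
    torch_dtype=torch.float16,          # fp16 so the model fits in the P4's 8GB of VRAM
)
pipe = pipe.to("cuda")

image = pipe("a thinkstation gathering dust in a corner").images[0]
image.save("out.png")
```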

So I needed a graphics card, but Lenovo's bullshit got in the way yet again: I had no cables to power a GPU, so it needed to be small and power-efficient enough to survive off the 75W from the PCIe slot (unless I wanted to change the PSU, and the motherboard).

Hence, the Tesla P4: small, power-efficient, and quite cheap at ~$120 + shipping from China. Just like any good bargain, however, the P4 came with some downsides:

It had no monitor outputs, as it was intended for compute workloads in a data center. That was fine; I intended to use it for compute anyway.
It had no fans or cooling system, because it was intended to be used in a rack with insane airflow: this one's a bit trickier, but I managed to solve it with some horrible hacks involving fan splitters and some 3D-printed parts.

The software stack

Looking around online, it seemed to me like there are two main approaches to home-labbing: either you use virtual machines, like with Proxmox, or you use containers.
In my case, I didn't feel I had the cores to spare for a bare-metal hypervisor, even if it was a more familiar workflow, so I thought it would be more efficient to stand up a Kubernetes cluster for my homelab instead.

This, as it turns out, was not a great idea.

My chosen Kubernetes distribution ended up being K3s, because the internet said it was lightweight, so I just installed RHEL 9 on the ThinkStation (using a paid enterprise distro is objectively funny) and got my cluster up. At the time I didn't understand a lot about k8s, having only ever used it as a managed service in the cloud, and it quickly dawned on me just how different this was: it is quite incredible how much complexity AWS or Azure hide from you as a user.
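
As an aside: once K3s is up, poking at the cluster programmatically is simple enough. Here is a minimal sketch using the official kubernetes Python client, assuming K3s's default kubeconfig path:

```python
# Minimal sketch: listing the nodes of a K3s cluster with the official
# kubernetes Python client. Assumes K3s's default kubeconfig location.
from kubernetes import client, config

config.load_kube_config(config_file="/etc/rancher/k3s/k3s.yaml")

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```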

Regardless, I pressed on: K3s was actually quite easy to set up and manage, but then I hit a roadblock: I had some idea of the stuff I wanted in my homelab, but to manage applications sanely I needed Helm charts for everything.
Initially I thought writing my own Helm charts would be good enough, but then it dawned on me just how much work that would have been.

So, off I went to look for a good source of the Helm charts I wanted.

Eventually I found a project called TrueCharts, which seemed like the thing I needed: it distributed Helm charts, and it had most of the applications I wanted.

Among their supported platforms were TrueNAS SCALE and Rancher.

Initially I had considered going with Rancher, since it's SUSE's orchestration UI/platform and K3s is also made by SUSE. But eventually I caved: I gave up on the whole "building a Kubernetes cluster from scratch" thing and moved over to TrueNAS SCALE.

More to come in the next section.