
I'm late to the distributed system game, and I have a handful of old PCs which I've been wanting to connect into some sort of "supercomputer" or cluster as a learning exercise. Looking at the docs, it seems to me like I could accomplish that in theory using minikube, by treating the PCs as a local environment and managing them all with k8s. Does that sound reasonable, am I way off base, or is there a simpler method?


Using minikube will be a complete waste of time for this, IMHO. I'm working on doing the same thing, and minikube really doesn't work outside of its narrow happy path of Kubernetes-in-a-VM. You can probably do multi-node, but only with all the nodes on one host.
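For what it's worth, minikube's multi-node mode just starts extra containers or VMs on the same machine, so it won't span separate physical PCs. A minimal sketch of what it actually gives you:

  # Three "nodes", but all of them are containers on ONE host;
  # this does not join separate physical PCs into a cluster.
  minikube start --nodes 3 --driver=docker
  kubectl get nodes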

Best to just bite the bullet and install Kubernetes itself, I think. Currently I'm at the point of figuring out how to customize a Fedora CoreOS image to boot from PXE or USB and auto-install itself on bare metal with Kubernetes already in the image.

The current build environment for that is a Fedora Server install on one of those old PCs, with just enough manually bootstrapped to have the tools to customize Fedora CoreOS in a VM.
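For anyone following along, the usual Fedora CoreOS customization flow is a Butane config transpiled to Ignition. A minimal sketch, with a placeholder SSH key:

  # config.bu -- minimal Butane config (the SSH key is a placeholder)
  variant: fcos
  version: 1.5.0
  passwd:
    users:
      - name: core
        ssh_authorized_keys:
          - ssh-ed25519 AAAA...your-key-here

  # Transpile to Ignition with the official Butane container:
  podman run --rm -i quay.io/coreos/butane:release \
    --pretty --strict < config.bu > config.ign

The resulting config.ign is what you hand to the PXE/USB installer.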


If you end up not liking (or not wanting to learn) k8s, check out HashiCorp's Nomad. I set it up recently; it's great for a homelab, and a bit more flexible than k8s because it can also run raw executables and VMs.
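To illustrate the raw-executables point, a minimal sketch of a Nomad job using the raw_exec driver (names are made up; the driver has to be enabled in the client config first):

  # hello.nomad
  job "hello" {
    datacenters = ["dc1"]
    group "app" {
      task "date-loop" {
        driver = "raw_exec"   # runs a plain binary, no container
        config {
          command = "/bin/sh"
          args    = ["-c", "while true; do date; sleep 30; done"]
        }
      }
    }
  }

  # In the client config, raw_exec is off by default:
  plugin "raw_exec" {
    config { enabled = true }
  }

Then `nomad job run hello.nomad` schedules it.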


I'll strongly second this.

Consul + Nomad makes for an excellent home lab setup, using Docker, Podman, raw binaries, etc., as you observe. It strongly recommends a 3-node cluster, but works fine on a single host if you don't need the distributed/HA aspect. We've been running it in production at work for a couple of years and it's been rock solid.
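For context, the 3-node recommendation is about Raft quorum: with three servers, the cluster survives losing one. The relevant stanzas are small (paths are illustrative):

  # /etc/nomad.d/server.hcl -- same on each of the three servers
  server {
    enabled          = true
    bootstrap_expect = 3   # wait for 3 servers before electing a leader
  }

  # /etc/consul.d/server.hcl
  server           = true
  bootstrap_expect = 3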

The big problem with Nomad is that it's not as popular as k8s -- so while you can leverage Docker Hub, there are fewer oven-ready packs for more complex systems, e.g. Cortex or Mimir, which are current challenges for us.

HashiCorp is building up a public registry [1], which is great to see, but it's a long way from matching the scope of the collected Helm chart repositories.

[1] https://github.com/hashicorp/nomad-pack-community-registry
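If it helps, the rough workflow (command shapes from memory, so double-check against the nomad-pack docs) looks like:

  # Add the community registry, then run a pack from it
  nomad-pack registry add community \
    github.com/hashicorp/nomad-pack-community-registry
  nomad-pack run hello_world --registry=community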


The biggest problem with Nomad is that it just punts on networking (so it really is just a scheduler), and it's quite a bit of work to get it set up right for dynamic workloads.


I haven't played extensively with k8s, so I've not much to compare to - do you mean inter-workload comms?

Between CNI, Consul Connect, and Traefik, we have hit some stumbling blocks, but nothing we can't do, yet.

As to dynamic load, yeah, we've mostly been using Nomad for HA -- are there some surprises in store when we start playing with the autoscaler?


>> is there a simpler method?

I think kind wins as the simplest, but microk8s might be a close second:

https://ubuntu.com/tutorials/install-a-local-kubernetes-with...
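For reference, a multi-node kind cluster is just a small config file, though like minikube every "node" is a container on one host, so it's for learning the API rather than clustering real PCs:

  # kind-config.yaml
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
    - role: control-plane
    - role: worker
    - role: worker

  # The cluster name is just an example
  kind create cluster --name homelab --config kind-config.yaml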


Avoid minikube for anything long-lived.

If part of your objective is to learn how to build a k8s cluster then I’d recommend building it with kubeadm. If you just want to get something working, k3s or microk8s could fulfill your needs.
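For the kubeadm route, the skeleton looks roughly like this (the CIDR matches flannel's default; the IP, token, and hash placeholders are printed for you by kubeadm init):

  # On the control-plane PC
  sudo kubeadm init --pod-network-cidr=10.244.0.0/16
  mkdir -p $HOME/.kube
  sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

  # Install a pod network, e.g. flannel
  kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

  # On each worker PC, paste the join command from kubeadm init
  sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>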


I would use K3s to set it up
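A sketch of what that looks like across a few PCs (server IP and token are placeholders; the server writes its token to /var/lib/rancher/k3s/server/node-token):

  # On the server PC
  curl -sfL https://get.k3s.io | sh -

  # On each additional PC, join as an agent
  curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 \
    K3S_TOKEN=<token-from-server> sh -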


Or k0s! I used it to set up a nice 4-node cluster in Oracle Cloud and it really doesn't get any easier.
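For anyone curious, the single-binary flow is roughly this (from memory, so verify against the k0s docs):

  # On the controller (--enable-worker lets it run workloads too)
  curl -sSLf https://get.k0s.sh | sudo sh
  sudo k0s install controller --enable-worker
  sudo k0s start

  # Generate a join token on the controller...
  sudo k0s token create --role=worker > worker-token
  # ...then on each worker:
  sudo k0s install worker --token-file ./worker-token
  sudo k0s start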


Wow! Thanks for sharing, I hadn't heard of this. Nice that it's designed for bare metal.


K3s and MicroK8s are both decent choices; K3s appears more widely adopted and 'production grade'.


The more I use microk8s, the less I like it. It seems neat and magic, it "just works" -- until snap upgrades your k8s and it "just stops working". The quality control on their plugins is also pretty low, e.g. the GPU plugin was completely broken for an entire major version. Also, way too often it gets into a confused state where various parts of the system are or aren't running, inconsistent with what the microk8s command-line tools think they should be doing, and the best recourse is to wipe the cluster and start over.
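One mitigation, if you stick with it, is stopping snap from upgrading underneath you (needs a reasonably recent snapd; the channel is just an example):

  # Hold automatic refreshes for microk8s
  sudo snap refresh --hold microk8s
  # Or track a fixed channel instead of latest
  sudo snap refresh microk8s --channel=1.28/stable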


I do the exact same thing with some mini PCs as a hobby.



