Minikube quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows (k8s.io)
195 points by gjvc on May 25, 2022 | hide | past | favorite | 133 comments


I used minikube for a long time, it's pretty good but I found kind to be a lot nicer of an experience and eventually switched.

https://kind.sigs.k8s.io/
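For anyone who hasn't tried it, the basic workflow is just a couple of commands (the cluster name here is illustrative):

```shell
# Create a throwaway cluster; this also writes a kubeconfig
# context named kind-dev that kubectl can use immediately.
kind create cluster --name dev

# Point kubectl at it and poke around.
kubectl cluster-info --context kind-dev

# Tear the whole thing down when you're done.
kind delete cluster --name dev
```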


Yep, kind is IMHO the way to go for most of the use cases. Really fast to set up new clusters and low resource consumption due to its nature.


Have you tried using k3s? How do the two compare?


I can second this. Kind seems so much nicer. With minikube I was somehow constantly messing with virtualisation frameworks. Kind just works.


Agree, Minikube occupies a strange middle ground of being ephemeral but heavyweight. It's slow enough to make it seem worth maintaining the same cluster over time, but that turned out to be a bad approach when I tried it.

Kind is very clearly ephemeral, and it starts up quickly enough that it’s not annoying to deal with frequent restarts while developing setup scripts.


I just wish kind had a similar addon ecosystem as minikube. I'm sure if it gets popular enough then it will spawn stuff like that too though. The one thing that keeps me using minikube is the ease of spinning up an ingress, internal registry, etc. that usually just works.
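For reference, the addon flow being described is roughly:

```shell
# See what's available, then enable the common pieces.
minikube addons list
minikube addons enable ingress    # nginx ingress controller
minikube addons enable registry   # in-cluster image registry
```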


Not a bad tool, but you get ~90% of the same functionality with podman; it doesn't need to run as root, and it supports user-scoped service definitions and control in systemd.


You can use kind with podman, and also rootless

https://kind.sigs.k8s.io/docs/user/rootless/


You’re comparing apples and oranges.


I'm a huge fan of KinD for development and testing clusters. It's also relatively easy to customize cluster configuration or node image, so you can create clusters for custom scenarios.
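As a sketch of that customization, a multi-node cluster with a host port mapped through to the control-plane node looks something like this (the schema is kind's own; the file name is arbitrary):

```shell
# kind reads its cluster topology from a config file.
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80   # e.g. for an ingress controller
    hostPort: 8080
- role: worker
- role: worker
EOF

kind create cluster --config kind-config.yaml
```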


I run a startup that lets folks attach clusters and I’ll just say, for development or on windows/OSX, I highly recommend k3s and Rancher Desktop (which wraps k3s). Dramatically fewer issues and cpu usage compared to minikube.

Minikube does try to be more “correct”, so it’s possibly better strictly for learning kubernetes or hacking on k8s itself.


I've tried several of the kubes, and k3s works well for my personal projects. I'd use it for work too if I had the option. I think Rancher Desktop might cost money though, so I use kubectl to do everything.


(Disclaimer: Rancher Desktop developer)

Rancher Desktop is fully Apache 2 licensed, so I think that shouldn't be an issue — even if it somehow ends up closed source at one point, at worst somebody can start a fork from the last public release and go from there.

Our release process for Windows & Linux literally involves pulling the build from GitHub, attaching signatures on top, and pushing the bits back up. (There's some QA in the middle, but that doesn't change the bits.) The only reason we don't do that for Mac is because we haven't sorted out the signing there using this flow.

Note that Docker Desktop wasn't open source (but the core Docker engine was), which is why they can start charging money for the binaries and not worry about somebody making a fork.

Having said that — feel free to continue using kubectl to do everything; whatever floats your boat is good :)


I’m pretty happy with kind on a mac.


I'm pretty happy with kind on Rancher Desktop on a mac!


What does the Rancher Desktop give you that you don't get with just plain KinD?


KinD requires some docker engine to run it on. That's all I'm using from Rancher. It allows me to switch between containerd and docker-engine.

My options on this physical machine as I see it are: lima, rancher desktop, or Docker Desktop itself. I don't actually use Rancher Desktop for its Kubernetes integration, and in earlier versions it wasn't even possible to do this without weird vague workarounds, but now it has an "Off" switch, and I use it for the rare occasion that I need to run container images locally, or a kind cluster.

I ruled out Docker Desktop because I can't use it alongside of other VMs (not because it doesn't work, which I understood to be true for Windows users, but because it is composed of a virtual machine, with a memory footprint, and one shouldn't have too many of those unless you have a beast of a laptop...) and because it requires paid volume licensing. Don't get me wrong, I like to pay for things, but I also like to evaluate alternatives!

I used this for a while: https://github.com/lima-vm/lima/blob/master/examples/k8s.yam...

It wasn't quite perfectly stable, and it's maintained externally rather than by the Kubernetes core team, so it has to play catch-up a bit when releases are done, whereas KinD gets a release on the same schedule as Kubernetes.

I am a kubeadm baseline user and I find that KinD is actually the best way to do experiments (I don't use Minikube at all so can't compare). Some experiments require a bit more control of the VM that runs your CRI, and for those there are raw nodes (physical machines) or a Lima VM for a convincing approximation of one.

When there's a need to go to production, I'm generally not using any of these choices, but I am using KinD quite extensively (even on remote nodes!) through tailscale. This is actually really nice.


I'm still not sure what's the benefit. The only thing I got out of this is that Docker Desktop requires payment for volume licensing.


Yeah. So if you don't want to pay Docker/Mirantis for the privilege of using their open source implementation of the open container initiative standard on the MacOS, what else do you do with your Mac to run docker containers?

Are you using Docker Desktop? (Do you pay?) That is the benefit and that's literally all I use Rancher Desktop for. A Docker Desktop replacement. It also works with KinD, that was my whole point. Since one might assume it doesn't work (or maybe they tried in the past and found it didn't work at some point in the past) – today it works great.

If you have another alternative to suggest for running KinD on macOS, I'm all ears. I'm literally using a full-stack open source approach; paying a gatekeeper in order to access it from my macOS workstation seems like an antithesis to me.


Have you checked out cycle.io?

Started using it for container management, and have been pleased.


I like Minikube, and it was a real godsend when I was trying to learn Kubernetes. It just "worked" and I was able to play around with deployments and whatnot.

After a while, I really found it limiting. Especially when I wanted to get into the internals of K8s. I then started to spin up VMs in VirtualBox and deployed bare metal K8s using Ansible and tutorials like this - https://www.tecmint.com/install-kubernetes-cluster-on-centos...

Creating a 2 node cluster was super helpful to understand how K8s worked. I was able to get onto the nodes themselves and see what was going on. It was an amazing journey.

Adding MetalLB to the mix really was the cherry on top. Allowed me to play around with ingress controllers.

So while I really do appreciate Minikube for what it is, it just doesn't go the distance when you want to dig in deep.


Would you say MetalLB is the most straightforward load balancer?

Really I just want an easy to configure load balancer and an easy way to configure subdomains and let's encrypt. That's always been weirdly the most annoying part of dealing with k8s for me. I found it pretty hard to mess around with traefik...


Do you have the extra IP space to do service type: LoadBalancer? Most people don't, so it's easier to run something like traefik as a DaemonSet with hostPort: 80/443.
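A rough sketch of that DaemonSet approach (image tag and namespace are placeholders, and a real deployment needs RBAC and Traefik config on top of this):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik
  namespace: kube-system
spec:
  selector:
    matchLabels: {app: traefik}
  template:
    metadata:
      labels: {app: traefik}
    spec:
      containers:
      - name: traefik
        image: traefik:v2.6        # placeholder tag
        ports:
        # hostPort binds directly on every node, so no
        # LoadBalancer service (or extra IP space) is needed.
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
EOF
```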


My preferred installation method on Linux is via brew [1] as opposed to the official installation instructions [2]. Additionally, brew contains formulae for kubectl, k9s, helm, and many other Kubernetes-related tools. It's the easiest way of keeping everything up to date on a developer machine.

[1] https://formulae.brew.sh/formula/minikube [2] https://minikube.sigs.k8s.io/docs/start/
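i.e. something along the lines of:

```shell
# One-time setup on Linux or macOS.
brew install minikube kubectl k9s helm

# Later, a single command refreshes the whole toolchain.
brew upgrade
```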


Wow, I did not realise that you can use brew outside of Mac. Thanks.


I knew, but am surprised anyone actually uses it. Are there more of you?


I use it when working on Ubuntu/Debian LTS systems, since the Homebrew packages tend to be significantly more up-to-date than whatever's packaged in the LTS Apt repositories.

As an added benefit, unlike apt, you don't need root to install (most) brew packages.


Have you tried https://mpr.makedeb.org/ ?

It seems like it’s made for Debian based systems, much like the AUR for Arch.


Absolutely use and love homebrew on linux. All of the fun rust-based tools that get posted here on hackernews (exa, fd, ripgrep, fzf, etc.) are all there and much more up to date than apt. Heck, I find brew is more up to date than nix in some cases.


I have been using it on a WSL/Ubuntu instance just over the last few days... for some things it's pretty handy (e.g. in the context of k8s I get a more up-to-date version of the excellent kubectx utility via brew than via apt). But for others it's a bit of a pain, e.g. building Python through pyenv and using OpenSSL from brew, etc.


I always use brew on Linux/OSX. I wish they supported shipping binaries/casks on Linux like they do on OSX (instead of only recompiling).


I use it all of the time! It has a plethora of packages and they're often more up-to-date than the package manager of whatever distro I'm on (usually Debian or Amazon Linux 2).


It's been there for a while, but it's done as an afterthought, which can lead to cascading problems, and it isn't a great idea to use it. It stomps all over the distro's Python and is generally insecure; it's much safer to use the distro's own package managers.

I believe it came about to cater to Mac developers who don't want to bother with Linux-based package managers; it becomes an easy out, but as mentioned previously, it's more of an afterthought.


Is there a reason these aren't just in the native package managers? Licensing? It blows my mind I can't install a RHEL-alike and run `dnf install kubernetes` and just, like, have what I need. Sure, maybe I have to pick a container runtime, fine. This is even more astonishing when you realize Fedora CoreOS has no actual RPM for kubernetes even though it is ostensibly a container hosting OS. How do I let rpm-ostree run wild and keep kubernetes at a compatible version?

The installation steps for Kubernetes and the rest of the ecosystem like minikube and helm are all essentially `wget install.sh | sh`. What am I missing? How do people do this in production environments?


> Is there a reason these aren't just in the native package managers?

Some major Linux distros do ship with official k8s packages. For instance, Ubuntu even dedicates an article on how to deploy multiple k8s choices.

https://ubuntu.com/kubernetes/install


That only has options to install microk8s with snap.


Why not use your distribution's native package installer?


k8s ecosystem tools are fast changing and native package repos are woefully out of date. Minikube in particular has a habit of just straight up breaking and requires a lot of reading long github issues only to realize there's a fix in a version just pushed 24 hours ago.


> k8s ecosystem tools are fast changing and native package repos are woefully out of date. Minikube in particular has a habit of just straight up breaking and requires a lot of reading long github issues only to realize there's a fix in a version just pushed 24 hours ago.

I don't understand what point you tried to make. If what you want is a stable install to avoid breaking changes pushed in unstable releases, why not stick with a working install just like what you get with brew?

I don't see how using exotic non-standard package managers makes releases more stable.


Brew doesn't give you a stable or repeatable install. You can't pin tools to specific versions; they'll get updated whenever you update brew. This is by design and kinda the core ethos of brew. The k8s tooling is generally all meant to have you on the bleeding edge--every tool's readme usually says to get the latest build from a GitHub release, and brew effectively just does that automatically.


Brew will, by default, update everything it manages when you run it.

Depending on your perspective, this is either horrifying and annoying or the best thing since sliced bread.


Huh. I guess that makes sense? But can't the tools just host their own rpm and deb repos? I guess that's too much operational overhead? Seems like an operational nightmare to run this in prod without the os-level package managers. Are production environments rolling their own RPMs?


At this point it seems to me that making Kubernetes nearly impossible to self-host is the whole point. You either pay money to dedicated experts or pay money to clouds.


As someone late to the party, what's the local development auto-reload, auto-deployment, etc. story in k8s? I can set up a k8s instance with minikube, k3s, microk8s, or whatever relatively easily, but how do I get my code in there without having to build and push images after the smallest change? Short googling points to skaffold [0], but is that state of the art?

[0] https://skaffold.dev/


I'm currently working on creating a local dev with https://tilt.dev/ and having a good time


Tilt is excellent, and unlike other tools it does not use YAML; it uses Starlark (a Python dialect), a real programming language, so you can deal with any strange scoping, pathing, or arbitrary logic like 'look for this file in this folder, but only on the second Tuesday after a full moon'.

And it is declarative. Try it, and you will never look back


enjoyed your comment!


+1 on Tilt. It just works.


Check out Devspace: https://devspace.sh/

Gives you a fairly slick "hot reload" experience in k8s (minikube-created or not).


Skaffold is in my experience the best tool in this space, but there are a ton of promising options. There's a really nice VS Code plugin from Google that basically adds a whole skaffold-based workflow to the IDE: https://cloud.google.com/code/docs/vscode or https://marketplace.visualstudio.com/items?itemName=GoogleCl... Despite a lot of focus on Google's hosted k8s offerings, the Cloud Code extension happily works with minikube and other local clusters.


Google PM for Skaffold & Cloud Code here.

Glad to hear Skaffold & Cloud Code are working well for you.

I wanted to note for others that there is also a Cloud Code JetBrains plugin that adds a Skaffold-based workflow to IDEs such as IntelliJ, PyCharm, etc. Just like Skaffold and the VS Code plugin, it works with any K8s cluster.

Documentation: https://cloud.google.com/code/docs/intellij JetBrains Marketplace listing: https://plugins.jetbrains.com/plugin/8079-cloud-code


Google PM for Skaffold here.

As others have mentioned, there are many great solutions in the market for Kubernetes local iterative development. If you do decide to give Skaffold a try, I recommend checking out the File Sync feature (https://skaffold.dev/docs/pipeline-stages/filesync/) which enables you to continue making changes during a Skaffold iterative development session and have that code synced over to your running Kubernetes instance without having to rebuild or push images. When using Jib or Buildpacks to containerize your application, File Sync just works for Go, NodeJS, and Java but requires a little configuration for Dockerfiles.

FWIW, Skaffold is under active development and we're continuing to expand support for iterative development, CI and CD use cases as well as simplifying configs and general ease of use.

We also have a bunch of ways you can reach out to the team (https://skaffold.dev/docs/resources/) if you have any questions. We'd be happy to hear from you.


(Note: I work on Telepresence)

There's a bunch of semi-opinionated tools like Skaffold: Garden, Tilt (just bought by Docker), Docker Compose, and of course, the trusty ol' shell script powered by kustomize.

We wrote Telepresence because we found that everyone has their own snowflake-ish workflow once you get beyond the base case of "build container, kubectl apply", so we decided to focus just on the inner dev loop, and then integrate with your preferred workflow. YMMV.


I appreciate the comments on this because it's confirming my own experience with k8s local dev best practices. As I write this, there are 5 comments naming 4 different technologies.

For a less cheeky answer, I'm currently using skaffold but I'm not in love with it. It does the job but has some rough edges.


For webdev, my team has the build output directory mounted as hostPath volumes, so you change the files on your disk and the changes are reflected in the pod


Here's a way to avoid having to push the image (but you do still have to build it).

If you're using k3s and containerd for the container runtime, you can build the image with docker and then load it directly to the place containerd expects it to be stored. Then if the pod's imagePullPolicy is IfNotPresent, simply deleting/restarting the pod will be enough to get your new code running.

The magic incantation is:

  docker save the-image-name:latest | sudo ctr -a /var/run/k3s/containerd/containerd.sock -n k8s.io image import -

But for front end development where auto-reload is super useful, I don't use containers at all. Just expose whatever services you need outside the cluster using NodePort or host mode networking.


My standard trick here is to mount your code directory into the container. Pretty easy with k8s using `hostPath`. You can still build the container with the production-worthy process, but whatever directory your code ends up in gets overridden with the live directory you're editing from the host machine. Then if you're using an application server that can run in a dev mode and notice when file changes, changes show up instantly just as if you're not using containers at all.
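A minimal sketch of that hostPath trick (the image name and paths are placeholders):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dev-app
spec:
  containers:
  - name: app
    image: my-app:dev          # built with the normal production process
    volumeMounts:
    - name: src
      mountPath: /app          # wherever the code lives in the image
  volumes:
  - name: src
    hostPath:
      path: /home/me/project   # live checkout on the host machine
      type: Directory
EOF
```

With a dev-mode application server watching /app, edits on the host show up in the pod immediately.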


Skaffold is great, I'm hearing good things about Tilt though so I was planning on giving it a whirl soon.



I usually opt for k3d, because it gives you a cluster (or multiple) that runs inside a docker container. If you already have docker running, it's a quick and easy solution for setting up and destroying local clusters.

https://k3d.io/
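The workflow is about as short as it gets:

```shell
# One server + two agents, all running as docker containers.
k3d cluster create dev --servers 1 --agents 2

# The kubectl context is wired up automatically.
kubectl get nodes

# Throw it all away again.
k3d cluster delete dev
```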


K3D also mimics a multi-node cluster. K0s, K3s, Rancher Desktop, Minikube, and Docker all run (by default) a single-node cluster. This can be useful when learning about Kubernetes and how it would function in a "real" cluster.

K3s and K0s both operate with control plane components not running in containers. Again, when learning how Kubernetes functions, hiding these control plane components can be counterproductive. However, this also means K0s and K3s require fewer resources than the others, giving you more room to run workloads. Additionally, K3s now supports multi-node clusters and multiple options for etcd, including several options that allow for an HA control plane.

Personally, I am a big fan of Rancher Desktop (K3s under the covers), and on my M1 Mac I find Docker to be the most complete and easiest to use. Docker Desktop is for-pay except for small companies and for open-source projects.


I’m now in an uphill battle with the pointy-haired ones that all this is a waste of entropy.

I cannot believe that folks think k8s is a good thing.


I can't believe folks still want to manage bespoke distributed computing clouds with chef/ansible/shell scripts, homegrown monitoring, deployment, etc. IMHO you know kubernetes is long term good when everyone has complaints of some sort about it but no one can think of a better replacement.


Yeah, I don't have all that. I have a few on prem services, and trouble is very rare. Yet they hired some guy that spent a year trying to move our shit into k8s (and failing). He's long gone, and everything is still working fine.


The problem is that most people advocating Kubernetes don’t have enough experience with use cases where it doesn’t fit, of which there are many.


> I cannot believe that folks think k8s is a good thing.

If you are ever in a position to do professional work on web apps, you'll quickly learn why container orchestration systems are invaluable.


Being a part time devops engineer in my previous org, I genuinely do believe that k8s is a good thing. Automating everything is a really good thing.


I think it's the best thing that's happened in the last 10 years in infrastructure and software.

All my stuff runs on one very flexible, easy to maintain k8s cluster.

App deployment is super easy.

It already feels much better and safer than all the snowflake systems I built before.

If you don't get it, you should really try it out before having a battle with 'the pointy-haired ones', whoever you are referencing here.

I run a big k8s cluster at work (200-400 nodes), a private single-node one, and one for a startup with 2 nodes.


it's hard to appreciate before your apps get to a certain scale and it's a pain to manage yourself... but it does solve problems that used to require special snowflake deployment/sysadmin/devops scripting in a standard way.


Why do you think so?


It's actually bad that it sets it up quickly, IMHO.

I learned with MicroK8s on a Pi cluster and it was super painful and complex to get it all working, I found myself troubleshooting obscure logs and having to delete and recreate the cluster many times. But once I got it all working it was exactly the pain I needed to prepare myself for working on a production k8s cluster.


As if Docker wasn't enough overhead for a poor laptop. What next - a local virtual data centre for development? Honestly, the infrastructure layers we have to fight with to get shit done these days.


I've yet to meet a dev who can coherently explain why s/he uses a laptop instead of a proper pc.


I wonder what devs you've met that can't explain this.

Personally, I work in an open-plan office. I have lots of Zoom meetings, which I have to do in separate conference rooms, and require having my computer with me.

This week I've been at a conference, and I had my laptop with me to get some work done while half-listening to presentations and panels. I also work from home, of course.

Also, sometimes I sit in a coffee shop to work.

I also sometimes travel to get a change of scenery, and work remotely with my laptop.

Also, I sometimes bring my laptop on vacations if there's any chance that I have to attend to an emergency.

If I only had a stationary machine, none of the above would be possible.

Of course, my MacBook is a "proper PC". When I'm at a real desk, I can plug it into an external monitor and have the full stationary experience.

Could I have a "real" stationary Mac for desk work, and a separate laptop for mobility? Sure, but that sounds like a huge pain, as I'd have to keep everything in sync.


Err ... mobility?


I'm late to the distributed system game, and I have a handful of old PCs which I've been wanting to connect into some sort of "supercomputer" or cluster as a learning exercise. Looking at the docs, it seems to me like I could accomplish that in theory using minikube, by treating the PCs as a local environment and managing them all with k8s. Does that sound reasonable, am I way off base, or is there a simpler method?


Using minikube will be a complete waste of time for this IMHO. I am working on doing the same thing and minikube really doesn't work outside of the narrow happy path of kubernetes-on-a-vm. You can probably do multi-node but only on one host.

Best to just bite the bullet and install kubernetes itself I think. Currently I'm at the point of figuring out how to customize a Fedora CoreOS image to boot from PXE or USB and auto-install itself on bare metal with kubernetes already installed in the image.

The current environment for that is a Fedora Server install on one of those old PCs with enough environment manually bootstrapped to have the tools to customize Fedora CoreOS in a VM.


If you end up not liking (or not wanting to learn) k8s, check out Hashicorp's Nomad. I set it up recently, it's great for a homelab, and a bit more flexible than k8s because it can also run raw executables and VMs.


I'll strongly second this.

Consul + Nomad makes for an excellent home lab setup, using docker, podman, raw binaries, etc., as you observe. It strongly recommends a 3-node cluster, but works fine on a single host if you don't need the distributed / HA aspect. We've been running it in production at work for a couple of years and it's been rock solid.

The big problem with Nomad is that it's not as popular as k8s -- so while you can leverage Docker Hub, there are fewer oven-ready packs for more complex systems, e.g. Cortex metrics or Mimir, which are current challenges for us.

Hashicorp is building up a public repository [1] which is great to see, but it's a long way from having the same scope as the collected repositories of helm charts.

[1] https://github.com/hashicorp/nomad-pack-community-registry


the biggest problem with nomad is it just punts on networking (so it really is just a scheduler) and it's quite a bit of work to get it set up right for dynamic workloads


I haven't played extensively with k8s, so I've not much to compare to - do you mean inter-workload comms?

Between CNI, Consul Connect, and Traefik, we have hit some stumbling blocks, but nothing we can't do, yet.

As to dynamic load, yeah, we've mostly been using Nomad for HA -- are there some surprises in store when we start playing with the autoscaler?


>> is there a simpler method?

I think kind wins on simplicity, but microk8s might be a close second.

https://ubuntu.com/tutorials/install-a-local-kubernetes-with...


Avoid minikube for anything long-lived.

If part of your objective is to learn how to build a k8s cluster then I’d recommend building it with kubeadm. If you just want to get something working, k3s or microk8s could fulfill your needs.
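The kubeadm path, roughly (the pod CIDR is an example; kubeadm prints the exact join command for you at the end of init):

```shell
# On the control-plane node:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Copy the admin kubeconfig so kubectl works as your user:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config

# Then, on each worker node, run the join command kubeadm printed:
# sudo kubeadm join <control-plane-ip>:6443 --token ... --discovery-token-ca-cert-hash ...
```

You still need to install a CNI plugin afterwards before nodes go Ready, which is exactly the kind of thing the turnkey distributions hide from you.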


I would use K3s to set it up
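For a handful of old PCs, the documented k3s flow is essentially two commands (the token path below is where k3s writes it):

```shell
# On the first machine (becomes the server / control plane):
curl -sfL https://get.k3s.io | sh -

# Grab the join token it generated:
sudo cat /var/lib/rancher/k3s/server/node-token

# On every other PC, join as an agent:
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```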


Or k0s! I used it to set up a nice 4-node cluster in Oracle Cloud and it really doesn't get any easier.


Wow! Thanks for sharing, I hadn't heard of this. Nice that it's designed for bare metal.


K3s and microk8s are both decent choices; K3s appears more widely adopted and 'production grade'.


The more I use microk8s the less I like it. It seems neat and magically "just works". Until snap upgrades your k8s and it "just stops working". The quality control on their plugins is also pretty low, e.g. the GPU plugin was just completely broken for an entire major version. Also, way too often it gets into a confused state where various parts of the system are or aren't running, inconsistent with what the microk8s command-line tools think they should be doing, and the best recourse is just to wipe the cluster and start over.


I do the exact same thing with some mini pcs as a hobby.


I use a Mac, and I have tried Minikube, KinD, Rancher, and also running a VM. And of course, I have Docker Desktop. All of these have failed for me as a long-term solution, for the simple reason that they consume way too much CPU and RAM. I would not recommend them on macOS at all. The Kubelet and API server processes are designed around very inefficient polling that easily consumes a whole core even when completely idle.

These days, I use hosted Kubernetes (via Google Kubernetes Engine), which means my poor laptop isn't burdened by any of this stuff. It's great; I can just create a namespace and load it up with whatever I want to run, and the cluster will automatically provision everything. I have to be online, but I've never been in a situation where this has been a problem.

GCP also has some powerful features you can't get locally, like load balancers, automatic DNS provisioning, automatic TLS cert provisioning, virtually unlimited CPU/RAM/disk, and so on. Another added benefit of this approach is that it's trivial to give shared, public access to everything. Sharing a process with colleagues is also easy.

Using Google's Kubernetes is particularly smooth because the cluster is set up with GCP's "autopilot" mode, which is basically a smarter control plane; it's preconfigured to dynamically scale the cluster according to the workload, including CPU/memory requests. Need to run a pod with 16 dedicated cores and 32GB RAM in order to do some temporary stress testing, for example? Just declare that in the "requests" part of the pod resource, and you'll have a dedicated node in a minute or two.
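That "declare it in requests" step is plain Kubernetes YAML; with Autopilot the cluster provisions a node sized to match (the image name is a placeholder):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: stress-test
spec:
  containers:
  - name: worker
    image: my-stress-image   # placeholder
    resources:
      requests:
        cpu: "16"        # 16 dedicated cores
        memory: 32Gi     # 32 GB RAM
EOF
```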

Admittedly I don't run a "live" develop-compile-run-debug cycle of any application this way. Maybe it's possible, but I haven't researched how. Personally, when I want to develop on a single app, I just run the code locally, and set it up to connect to the test cluster using a VPN.


Which of the various options would people recommend for setting up a single node cluster on a spare laptop (i7, 16gb ram)?

I currently use a collection of docker-compose files, but am toying with the idea of switching it out for k8s


This is exactly what I’m after right now. My server at home runs off 2 big docker compose files and I’m looking at other options.


Anyone else experiencing DNS lookup failures on Minikube-managed clusters running under VirtualBox? The host OS or host DNS server doesn't seem to matter. I've noticed this when trying to build Alpine Linux-based images, an issue I reported here:

https://github.com/kubernetes/minikube/issues/13968

I'm really at a loss as to how to troubleshoot this as I'm still learning how to use Kubernetes.



Unfortunately, that doesn't appear to be relevant to this problem.


There was a "use minikube as a drop-in replacement for Socker Desktop" post not too long ago... which is a pretty nice look if it works OOB.


That was my post. I'm one of the ex-maintainers of minikube and skaffold as well.

[0] https://matt-rickard.com/docker-desktop-alternatives/


Great post.

Also ofc typo S->D for Docker


Google PM for Minikube here.

Here's a link [0] to the Minikube documentation for this use case.

[0] https://minikube.sigs.k8s.io/docs/tutorials/docker_desktop_r...


There are two more that can be added: 1. https://k0sproject.io/ 2. https://microk8s.io/

The problem with minikube is when you have a low-resource system. It would definitely be a headache if you run any heavy workloads.


How does one automate the minikube (or any other flavor) provisioning and setting up the dev environment matched to prod so there would be as little friction as possible?

Talking of something, which idempotently installs minikube if necessary, pulls images from the remote registry, sets up the network with DNS... Is there such a tool?


I don't think there's anything that does exactly what you describe, particularly that last few yards of getting the tools, container images, etc. in a dev environment to be consistent across everyone's machines. Maybe a dedicated VM image that's updated and pushed to everyone's dev machines would be best?


Rancher Desktop, a wonderful tool on OS X and Windows when I've used it.


Ansible?


No thanks


I'm not particularly involved with Kubernetes, but last I heard (about 2 years ago), Minikube was on the way out (because it was going into maintenance mode or something like that), and Rancher's k3s was taking the mantle of "tiny local Kubernetes server."

What happened? Was active maintenance on Minikube resumed?


Where did you hear that? Minikube never ceased development.


K3s started being super inefficient, using up to 20% of CPU for an idle cluster.

A lot of people decided to ditch it due to that.


It's peaking at 8% CPU time on my 1-shared-vCPU digital ocean droplet, spending most of its time below about 3% - I can't imagine any modern laptop would blink at that usage.


Search for the Raspbian GitHub issues. The Raspberry Pi community was and is a big share of lightweight Kubernetes installations.


Keep in mind that a Pi is like an Intel Atom in performance, and an old one to boot.

If you have anything with a 'real' CPU from the last 5 years, this problem is not going to exist for you.


OpenShift Local (formerly CodeReady Containers) sets up a local-machine OpenShift (k8s) cluster for development etc.

https://developers.redhat.com/products/openshift-local/overv...


I've noticed that minikube can also set up a multi-node cluster, which can be nice for simulating some things.

https://minikube.sigs.k8s.io/docs/tutorials/multi_node/
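Per the linked tutorial, it's a single flag (the profile name is illustrative):

```shell
# Start a three-node cluster (one control plane, two workers)
# under a dedicated minikube profile.
minikube start --nodes 3 -p multinode-demo

# All three should show up:
kubectl get nodes
```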


We switched from minikube to microk8s for local development at work. We found minikube to be bloated and (at least at early releases) sometimes the vm just broke for no reason and we had to deploy everything from scratch.


What I have never understood about running K8s locally: do I need to run a separate container registry, or can it read my containers from disk?


You can push images into it, or use it as your docker/podman daemon, so the images are built directly in there, at least with minikube. This command does the trick for the current shell session:

  eval $(minikube docker-env)


You can run minikube image load image:tag to pull images from your container runtime: https://minikube.sigs.k8s.io/docs/handbook/pushing/#7-loadin...


Not sure here, but the local k8s setup included with Docker Desktop uses your local registry. The downside is you have to use Docker Desktop...


A long time ago I remember Minikube destroying hard drives (too many small writes, which is bad for SSDs). Is that fixed now?


As someone who learned k8s through Minikube examples, k3s was a much better experience looking back.


For Win10 Pro users, Minikube can run directly on Hyper-V.


We use minikube for local development, very useful.


minikube is the best kept secret in the container industry. Use minikube to get your local development machine closer to prod.


Yes, and being decoupled from Docker Machine is super underrated in the community IMHO. I switched seamlessly between VirtualBox-based and Docker Machine-based clusters in no time when Docker went rogue with their licensing.

EDIT: we're talking dev environments only. Not prod. DO NOT put minikube in prod.


> DO NOT put minikube in prod.

I'm watching this happen in real time at the current gig. I brought up my concerns to the senior in charge of this. He's doing it anyway.

I'll let you know how that goes.


What are the concerns? I keep hearing the same "don't use it in prod" about docker, yet I know quite a few (small-ish) operations successfully running on it.


Minikube isn’t intended to run production services. You’ll encounter a lot of constraints that don’t exist on a real cluster, and you’ll likely lose important data if you use it very long.

Docker made sense to use in production because people wanted to run containers.

What could possibly be the justification for running production workloads on minikube?


Hm, I never heard anything against Docker in the cloud (since the early days of Docker). Only about keeping your database in Docker -- yes, that was a recommendation some 5+ years ago and maybe still is.


What do you mean by "best kept secret"?


It's secretly used in almost every kubernetes tutorial on the internet


Totally agree.


[flagged]


[flagged]


What if they're not, and it is to them?


Give me a break... How did you decide on posting this? Did you accidentally stumble upon Minikube one day? Was it news to you?

I bet it wasn't.



