
Among these "tiny changes" there are also plenty of Big Things. Kernel updates and python3 migration to name two. Not to mention the sometimes conflicting packages as a result of staying on the bleeding edge.

Don't get me wrong, I've used Arch Linux as my primary distro for the last two years and I like their model a lot, but it's more than "nothing to it".

EDIT: Sorry about the somewhat gloomy reply. I agree with your comment in spirit and think the model is, despite its inevitable problems, a great one.



This model sounds like fun if it's your hobby.

In my world, however, spontaneously upgrading a library or a language runtime from one major version to the next will cause a bunch of websites to break. This is exactly what customers pay our company to avoid.
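To make that concrete: the Python 2 to 3 jump mentioned elsewhere in this thread is exactly this kind of break. A hypothetical sketch of how code can change meaning silently, with no error raised, when the runtime is upgraded underneath it (the function name is made up for illustration):

```python
# Under Python 2, "/" between ints is floor division: 7 / 2 == 3.
# Under Python 3, "/" is true division:              7 / 2 == 3.5.
# A site computing pagination offsets this way breaks silently
# after an unattended runtime upgrade -- no exception, just wrong output.

def page_start(item_index, page_size):
    # Intended (Python 2 semantics): index of the first item on the page.
    return (item_index / page_size) * page_size

print(page_start(7, 2))  # Python 2 prints 6; Python 3 prints 7.0
```

The fix (using // or importing division from __future__) is trivial once you know to look, but nothing forces you to look until a customer notices.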

Because, in fact, the "continuous" model is a poor fit for software ecosystems unless your time scale is sufficiently long. If you don't care about downtime as measured in hours or days, go ahead and sync up every day. But if you care about downtime as measured in minutes or seconds, the continuous model isn't good enough, because at that scale software doesn't evolve continuously at all. It lurches ahead in sudden jolts, and each component lurches ahead on its own timescale, and only when you stand way back does the process look continuous.

The other problem with the "continuous" model is bugs. Software only lurches ahead on average; in practice any given patch is as likely to move you backwards as forwards. Three steps forward, two steps back. If I could be assured that every change in the Linux kernel was permanent and perfect, it might be worthwhile to run my websites off the latest kernel source HEAD, sync up every day, and fix bugs as they arrived. But in the real world, if someone commits a bug to the kernel source tree on Monday, and it breaks my site, so I work around the bug, and then someone fixes the bug in the kernel on Friday, and then I roll back my workarounds, I have just wasted hours of my life that I will never see again. I have churned my code for no net gain.

(Of course, you may ask: Why didn't I just fix the kernel bug on Monday instead of working around it in my site's code? Well, the Linux universe is too complicated for any of us to hold all of it in our heads: I can't fix kernel bugs, at least not on a timescale of days. So "this isn't the Apple II anymore; the universe is too big to patch the whole thing at once" is yet another reason why the continuous model doesn't work.)

It's just not that simple.


Obviously one should not use this frivolous upgrading scheme on a server - I don't think anyone thinks otherwise.

At my own computer I don't see the problem. Python 3 replaced Python 2 as the default python in Arch a couple of weeks ago, but it's possible to keep using Python 2 (which I do, professionally) without too much hassle. As for kernel upgrades, I can always wait a couple of days to see whether any major bugs crop up (obviously this requires keeping up with the news, which is itself a small time investment).
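For what it's worth, both workarounds are supported by stock pacman. A sketch, assuming the standard Arch setup (package names as they were at the time of the python3 switch):

```shell
# To wait out a fresh kernel release, list it in the IgnorePkg
# directive of /etc/pacman.conf:
#
#   IgnorePkg = linux
#
# Then a full upgrade skips it (pacman prints a warning instead):
pacman -Syu

# Python 2 stays installed alongside the new default:
python2 --version   # the python2 package
python --version    # now Python 3
```

Removing the IgnorePkg line once the news settles picks the kernel back up on the next -Syu.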

Since I'm not too experienced I guess there are situations where I would wholeheartedly agree with you, but I've yet to come across them.



