Namely, that by default systemd doesn't allow processes started from the shell to persist, preferring to terminate them when the user logs out.
This would include processes like "screen", whose entire raison d'être is to persist after the user logs out. (Well, it has other uses, but this is the main one IMO.)
The stated workarounds - fiddling with some options like "KillUserProcesses=no" in logind.conf &co. - have so far failed.
I don't know whether this situation is a problem with systemd or with the distro, but it seems very much a problem with the culture summarised by the top commenter in the above thread, of (paraphrasing) glibly breaking existing workflows and then casually brushing away criticism with arguments that often boil down to: "this is the right way, I don't care about tradition or protecting 'incorrect' usage."
This is yet another example of pointless incidental complexity in systemd. The whole point of "nohup"-based tools like tmux and screen is to cleanly separate the management of user sessions from the incidental mechanism of whether a remote connection is being closed (the 'HUP' in nohup is short for "hang up", i.e. close a [possibly remote] connection). Systemd should simply acknowledge this fact and keep the user session going when a program has been launched under nohup; instead it tightly couples its own "session" concept to the remote connection and then adds a totally ad-hoc, hacked-together feature called "lingering" (loginctl enable-linger) to somehow make nohup work anyway. It's amazing.
The trouble is that 'nohup' is not a particular state that specifically marks processes that want to survive logout. Instead, the rule is that processes running inside a particular terminal are killed when that terminal closes, and processes running outside any terminal run as they please. If you SSH to a server, or telnet, or login on the text console, or via a serial port, you get a terminal and everything you run runs inside it, and the system works.
However, if you log in to a graphical desktop, that is not a text terminal, and therefore everything is effectively nohup'd: your browser, your chat program, your media player, all the miscellaneous helper processes they spawn, etc. Most people do want that stuff to be automatically shut down when they log out.
Since the kernel's idea of "a login session" is wrong (it only includes text-terminals), and it can't easily be changed for compatibility reasons, systemd implements its own idea of "a login session" that works the way most people (who aren't experts on POSIX job control) expect.
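You can see the kernel-level rule directly. A sketch, assuming a Linux box with util-linux's setsid(1) and procps's ps(1) installed:

```shell
# A process inside a terminal session reports its controlling TTY to ps;
# after setsid(1) puts it into a fresh session, there is no controlling
# terminal and ps prints "?".
setsid -w sh -c 'ps -o tty= -p $$'
```

That "?" is the state every process in a graphical session is effectively in, which is why the kernel's terminal-hangup cleanup never touches them.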
> However, if you log in to a graphical desktop, that is not a text terminal, and therefore everything is effectively nohup'd
Systemd's revamped notion of "session" includes graphical ones. Yes, that's different from the legacy sense of a mere text-terminal window. But since they've had to reimplement the whole concept of graphical sessions anyway, they could have made them work like textual ones. If you're not running a session manager as part of your windowed environment (almost everyone is these days), the whole thing might break, but then you can just enable the hacky "lingering" mode to make it work and you're no worse off than before.
> Since the kernel's idea of "a login session" is wrong (it only includes text-terminals), and it can't easily be changed for compatibility reasons
This is a CADT attitude. Improving existing interfaces while maintaining compatibility is hard; it's also what makes the difference between a serious software professional and an incompetent vandal.
I agree it would be nice to fix this in the kernel instead, but as the article said, it might not be possible due to compatibility constraints.
The fact that the logind developers came to a different solution than you, after spending much more time thinking about it and actually implementing it, doesn't exactly imply that the logind developers are the ones with an attention deficit, or that logind is bad. ("CADT" apparently means "Cascade of Attention-Deficit Teenagers")
Also, never call out others as "incompetent vandals" if you think of yourself as a "serious software professional". This is the kind of toxic behavior that makes communities non-inclusive and leads to impostor syndrome.
>The fact that the logind developers came to a different solution than you, after spending much more time thinking about it and actually implementing it, doesn't exactly imply that the logind developers are the ones with an attention deficit, or that logind is bad. ("CADT" apparently means "Cascade of Attention-Deficit Teenagers")
The logind developers went against the established wisdom of experience (that big rewrites are generally a bad idea) with the predictable outcome: high costs (both in migration and in handling outright bugs) for nebulous benefits, leaving desktop Linux flakier and (understandably) less popular than ten years ago.
> This is the kind of toxic behavior that makes communities non-inclusive and leads to impostor syndrome.
Is that supposed to be a bad thing? We should be less "inclusive" of people who want to rewrite everything. They should feel like impostors. You can't produce good quality if you're not willing to call out bad quality; Linux succeeded (for a time) because Torvalds had high standards and was willing to maintain them.
>We should be less "inclusive" of people who want to rewrite everything. They should feel like impostors. You can't produce good quality if you're not willing to call out bad quality; Linux succeeded (for a time) because Torvalds had high standards and was willing to maintain them.
I'm making a humble request, please do not bring this attitude in open source projects. Really I mean it. It's not helpful and it only makes people angry. You are also misinterpreting the behavior of Mr. Torvalds and confusing things. The kernel developers have actually been some of the most adamant about rewriting major parts of the kernel and breaking internal APIs over and over again (not syscalls), because it's known that the only way to thoroughly improve on the code is to aggressively iterate on it like this. This is actually a major strength of open source: anyone who wants to try to rewrite something can pick up the code and just do it. If it's bad then you throw it away and forget about it. If it's good then you keep it. This is precisely how the "high standard" even gets maintained.
> I'm making a humble request, please do not bring this attitude in open source projects. Really I mean it. It's not helpful and it only makes people angry.
I respect your position but I disagree. Projects shying away from criticism in the name of inclusiveness has gone hand in hand with a drop in quality, not just in terms of unwise rewrites but in terms of plain old bad/buggy code - which should not be surprising.
> You are also misinterpreting the behavior of Mr. Torvalds and confusing things.
My point is that Torvalds - historically - used language on the level of "incompetent vandals" freely where appropriate. In a serious software project people are, and should be, willing to state those kind of views very clearly and directly.
> The kernel developers have actually been some of the most adamant about rewriting major parts of the kernel and breaking internal APIs over and over again (not syscalls)
They have; at the same time they've been adamant about the need to avoid regressions, both in terms of maintaining external interfaces and in not ripping things out before a replacement offers feature parity and there's a reasonable migration plan in place. The bottom line is that they caused nowhere near the level of user-facing breakage that the systemd/gnome folks have, and that speaks to higher standards and better judgement.
> Also, never call out others as "incompetent vandals" if you think of yourself as a "serious software professional". This is the kind of toxic behavior that makes communities non-inclusive and leads to impostor syndrome.
I want to exclude incompetent vandals from the software community; they are impostors.
Some people consistently make bad decisions. Some of them can change, but others cannot. I do not want the ones who cannot or do not learn to make good decisions to make decisions which affect me (I acknowledge that I myself may be one of those people!).
Quality matters. Reckless vandalism matters too. Breaking nohup was and is indefensible.
> However, if you log in to a graphical desktop, that is not a text terminal, and therefore everything is effectively nohup'd
No, it's not.
I think you're looking at this in the wrong way. The concept here is one of sessions and the parent-child relationship between them, as well as the decisions parent processes make when they create and manage (or don't manage) child processes.
If you log in to a text console, you end up with a process (like bash) controlling the terminal. If you run normal programs in the foreground, or even in the background (as long as you don't disconnect them from the controlling terminal), they are all children of that bash process. When you quit bash, all its children get terminated as well.
When you log into a graphical session, the login manager (or whatever) will start your desktop's session (which might just be a plain shell script, maybe just your window manager, or might be a full-blown session manager, or something else entirely). Whatever that is, it starts other applications (say your window manager, a panel or dock, desktop manager, etc.), which then can start other applications (browser, chat, media player). If you log out of your desktop, ultimately what happens is that original session-starter (script, WM, session manager, whatever) quits, and it takes all its children with it.
And regardless, the graphical session still runs in a TTY, just not in text mode!
> Since the kernel's idea of "a login session" is wrong
The kernel has no concept of login sessions at all (it does have the concept of "sessions", but they are unrelated to user login). It just starts something as root (init, as PID 1), and from there userspace takes over and does whatever it wants, including the possibility of starting a getty, which can run login, which can setuid() and launch your shell if you put in the right password. (Or run a display manager that does something analogous with graphical sessions.)
So back to the beginning:
> The trouble is that 'nohup' is not a particular state that specifically marks processes that want to survive logout. Instead, the rule is that processes running inside a particular terminal are killed when that terminal closes, and processes running outside any terminal run as they please.
Those two sentences would seem to be contradictory, no? "nohup" is of course not a particular state, though it does put a process into one: it runs its command with SIGHUP set to ignored, so the hangup delivered when the controlling TTY goes away is simply dropped. Which is (greatly simplified) what determines whether or not a process keeps running once the terminal closes.
> If you log in to a text console, you end up with a process (like bash) controlling the terminal. If you run normal programs in the foreground, or even in the background (as long as you don't disconnect them from the controlling terminal), they are all children of that bash process. When you quit bash, all its children get terminated as well.
> When you log into a graphical session, the login manager (or whatever) will start your desktop's session (which might just be a plain shell script, maybe just your window manager, or might be a full-blown session manager, or something else entirely). Whatever that is, it starts other applications (say your window manager, a panel or dock, desktop manager, etc.), which then can start other applications (browser, chat, media player). If you log out of your desktop, ultimately what happens is that original session-starter (script, WM, session manager, whatever) quits, and it takes all its children with it.
This isn't how it works though: killing a parent does not kill its children, which is probably one of the bigger design flaws in the original Unix (and probably persists because, through another design flaw, it's the only way to reparent processes, as the insane double-fork 'daemonisation' routine demonstrates). This is one of the things systemd aims to fix: by keeping track of process relationships with cgroups it can kill all of the processes spawned in a session.
And I'm not sure about your assertion about graphical sessions: pretty much all processes on my PC have no controlling terminal.
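The parent/child point is simple to demonstrate without systemd involved at all (plain POSIX sh; the temp file is just a way to pass the child's PID out of the short-lived parent):

```shell
# The parent shell exits immediately after backgrounding the child;
# the orphaned child keeps running (and is reparented, classically to PID 1).
pidfile=$(mktemp)
sh -c "sleep 30 & echo \$! > $pidfile"
sleep 1                                  # the parent sh has long since exited
child=$(cat "$pidfile")
kill -0 "$child" 2>/dev/null && echo "orphan still running"
kill "$child" 2>/dev/null
rm -f "$pidfile"
```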
> Should any user on a shared university computer be able to spawn processes to run for all eternity?
That's for the university IT to decide, of course. Software should focus on providing general mechanism, not policy.
> Should your desktop environment crashing lead to all software continuing to run, for all eternity, leaking memory like there’s no tomorrow?
AIUI, that's pretty much what might happen if you're forced to enable the lingering option to make nohup work. Systemd does improve session management under *ix-like systems, but the fact that it doesn't manage to interoperate cleanly with the likes of tmux and screen is a pretty blatant papercut.
> the fact that it doesn't manage to interoperate cleanly with the likes of tmux and screen is a pretty blatant papercut.
Interoperating with broken hacky solutions is nothing admirable. At some point, you have to introduce a new API, which handles this properly, and introduce a way to give/take permissions for these things (so e.g. zoom doesn’t just run their dataminer forever through the same mechanism).
You can grant or remove the linger state permission from users and applications.
The default is usually that either any user can have any software lingering, or that any user can authenticate (as you'd do with e.g. sudo) to set this state. This works via PolicyKit and elevated permissions, similar to the UAC prompt on Windows.
You can also white- or blacklist individual applications and services from this :)
That's what I meant by a "proper" API, in contrast to the old tmux/nohup approach.
And as service management is the init system's task, it's clear this is something where you have to interface with the init system (as the init system is actually even supposed to reap any process reparented to it to reduce zombie processes).
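For reference, the per-user knobs being described live in logind.conf. A hedged fragment (the option names are real logind options; the user names are illustrative):

```ini
# /etc/systemd/logind.conf
[Login]
KillUserProcesses=yes
# exempt specific accounts from the cleanup
KillExcludeUsers=root alice
# or invert it: only these users get their processes killed on logout
#KillOnlyUsers=student1 student2
```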
Slackware doesn't use it, and Patrick Volkerding recently got a Patreon account set up so Slackware 15.0 or 14.3 will hopefully be out soonish with updated packages. I'm writing this on 14.2 and have been running it since it came out with basically no issues. Also have it on a backup server, runs great.
Use Alpine or something then. I use Alpine as well on a server, OpenBSD on a server, Slackware on a server, Slackware on my laptop... There are some options. We even use Devuan to host OP's essay, although we'll eventually move to OpenBSD, probably for other reasons.
In terms of package management... why do you need it? I have no problem maintaining everything with slackpkg and sbopkg along with slackbuilds.org. Sometimes it takes a while to find all the requirements for an application and add them to a queue, but once it's set up it's just sbopkg, click on update, upgrade, and you're good. It's pretty much rock solid once I get everything installed and I haven't missed package management much at all. My main gripe is the old packages in 14.2 but -current has a lot newer stuff. I don't mind waiting though.
I still don't entirely get the point of Devuan. I put Debian buster on a low-RAM embedded box, and since systemd eats up precious RAM I'd rather keep, I just switched to OpenRC with
apt install openrc && apt purge systemd
If I needed it, Debian packages elogind as well.
Rebooted and it worked perfectly. Now, I get that Debian doesn't really support OpenRC[0] (or sysvinit), and it could break in horrible ways when bullseye goes stable, or get removed entirely, but... I don't see why we need a fork before that happens? It seems like it's a lot of work to maintain a distro fork, when I feel like that effort could be more productively redirected to stronger maintenance and advocacy of OpenRC and/or sysvinit in Debian itself?
[0] Debian's openrc package hasn't been updated in a little over a year, which is indeed concerning. sysvinit does seem to be more actively maintained, though.
An iron grip, aggression, and pushiness are needed to lobby systemd out.
Another important initiative is to not let Poettering, Sievers and co. continue pushing new systemd releases onto distributions. Preemptive action is needed.
systemd uses cgroups (kernel control groups), and if you want something to remain after you log out, then start it in a separate cgroup.
cgroups are control groups, you know, for controlling processes; a facility that was simply missing from Linux for decades.
nohup should be enhanced to support systemd, or systemd should provide a nohup wrapper, and just start a new scope (cgroup) for whatever the user launches with nohup.
...
Now, that said, I have no idea why distros and systemd did what they did without much communication, but ... that's usually the Linux way :/
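The wrapper being asked for more or less exists already as systemd-run. A hedged sketch (requires a running systemd user manager, so it won't work everywhere; the unit and session names are illustrative): launching the job as a transient unit moves it out of the session scope that logind tears down.

```shell
# Run tmux in its own transient scope instead of inside the login session's scope.
systemd-run --user --scope tmux new-session -d -s persistent

# Or run it as a transient service under the user manager; combined with
# `loginctl enable-linger $USER`, it keeps running across logouts.
systemd-run --user --unit=my-tmux tmux new-session -d -s persistent
```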
> systemd uses cgroups (kernel control groups), and if you want something to remain after you log out, then start it in a separate cgroup.
Why should the user be concerned with how systemd happens to implement its session management? If you start a screen or tmux instance, it's unambiguous that you want that part of the session to persist after logoff and to be reachable upon logging on to the system again. That's what screen and tmux were designed to do.
The user shouldn't be concerned. That's why the distro should solve this. But as we all know, abstractions are leaky, and encapsulation only goes so far.
How should anyone know what the user really intends? When I start a tmux in a graphical session and click log out, I want the system to stop my things completely, (almost) like I haven't even logged in. After all, my session was never set up to be an always-on server session thingie. The distro never advertised this, etc.
Of course users who grew up on UNIX/POSIX/nohup and on distros that worked in a specific way (ie. without cleaning up processes after log out) expected this to continue.
It either did not bother that many people or no one really did anything to document/address it.
Distros did not care apparently. systemd maintainers were aware that it breaks all and every double-forking self-backgrounding resident stuff, but they accepted the trade off, and even documented it: https://github.com/systemd/systemd/commit/65eb37f8fcf0c82db0...
> When I start a tmux in a graphical session and click log out I want the system to stop my things completely. [...] The distro never advertised this, etc.
The graphical environment could advertise that you have launched something that will persist upon disconnecting the session, and give you an option to terminate it entirely instead. (It's after all easier for graphical environments to offer these sorts of "user friendly" hints than for terminal-based workflows.)
Have you tried `sudo loginctl enable-linger [username]`? This used to be well-documented, but I don't see much about it now. It's possible things have changed since the last time I had to deal with this?
> Enable/disable user lingering for one or more users. If enabled for a specific user, a user manager is spawned for the user at boot and kept around after logouts. This allows users who are not logged in to run long-running services. Takes one or more user names or numeric UIDs as argument. If no argument is specified, enables/disables lingering for the user of the session of the caller. See also KillUserProcesses= setting in logind.conf(5).
Mainly I meant that I no longer see it recommended on the Arch wiki, which I think is where I first learned it. But it looks like that might be because Arch now compiles systemd with lingering on by default (or something similar to that, I'm probably using the wrong term).
Linger does more than just keeping the session open after logout. It also means that the session gets started automatically on boot, which is necessary if you want to have systemd units for that user autostart. Case in point, the PulseAudio setup on my homeserver: https://github.com/majewsky/system-configuration/blob/2f93ff...
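The kind of user unit being referred to looks roughly like this; a hedged sketch modeled on a typical per-user PulseAudio setup (the path and ExecStart line are illustrative for one distro's layout):

```ini
# ~/.config/systemd/user/pulseaudio.service
[Unit]
Description=PulseAudio sound server

[Service]
ExecStart=/usr/bin/pulseaudio --daemonize=no

[Install]
# default.target is what the per-user manager reaches when the user "session"
# starts; with lingering enabled, that happens at machine boot, before any login.
WantedBy=default.target
```

Enable it with `systemctl --user enable pulseaudio` after `loginctl enable-linger`.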
Given the insanely vitriolic level of personal criticism leveled at the systemd people on an ongoing basis, some degree of glibness is an unfortunate byproduct.
How about we give them some benefit of the doubt, that they are mostly trying to make things better, and instead ask them "what is the technically correct way, in the systemd world, of implementing this functionality?"
I suspect the answer, for screen, would probably require some degree of messing with the user manager daemon.
> "what is the technically correct way, in the systemd world, of implementing this functionality?"
In the case of terminating programs left running in an ssh shell by a logged out user, it's a pretty simple answer IMO: just leave it alone - if the admin of the ssh server wants to terminate user processes on logout I think that's where the burden should lie and there's no good reason for systemd to get involved, that I can see.
May 04 03:02:54 [host] polkitd(authority=local)[879]: Unregistered Authentication Agent for unix-process:1504:595444 (system bus name :1.25, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_GB.UTF-8) (disconnected from bus)
----------------------------
I changed my user and hostname to "[user]" and "[host]" respectively, the rest is verbatim.
It's interesting to note that to even see the error report (such as it is) I'm instructed to invoke a special command rather than just look at logs where I might expect to find it, and that when I do so I am locked into some dysfunctional pager requiring me to pipe through "useless cat" in order to cut and paste the uninformative details on long lines.
While the consistent boorishness of my systemd install continues to amuse, the details of this issue and the fact of its resolvability or otherwise by invoking recondite special systemd functionality are beside the point.
Why should I need to know how to run normal and trusted software in special systemd compatible ways that I've previously had no issues with for decades?
As I opine below, the point is the toxic "you're doing it wrong" mentality that infests the systemd project and its adherents. I've run into this in so many ways since ceding defeat and allowing it onto my systems. This is just the latest in a long line of examples.
I know that there is a separate command that can be used to tell systemd to allow a program to live. I know that there are systemd libraries that an executable can link against in order to opt out of the new behavior. These do not matter, because they show that systemd is willing to break existing programs, and to break specified conventions. Systemd developers cannot be trusted to provide a foundation to build upon.
I see this kind of talk about stuff like the Unix way and existing conventions and programs, and I almost have to ask myself when we ended up in a mausoleum. The Unix way is just a guideline from a time when some folks wrote some code. A seminal and important time, but just one (perhaps of many) piece of anecdata. Old code is old code. It's useful, but the patterns and conventions it was built on may no longer be relevant. It is not obvious a priori that it is worthwhile to preserve existing conventions and not break existing code.
The conventions "Do one thing and do it well" and "Don't break the user experience" are not threadbare or shabby. Neither UNIX nor its derivatives sprang fully formed from the foreheads of K̶n̶u̶t̶h̶ Thompson or Ritchie; UNIX was written as a series of counterpoints to the prevailing designs and implementations of the day.
To us the ashes of our ancestors are sacred and their resting place is hallowed ground. You wander far from the graves of your ancestors and seemingly without regret.
The way Linux works now seems sane. You might very well have seen a change in the way Linux works. But when you've seen it change, multiple times, and each time 50% of what you know has to be tossed out and relearned, it gets tiresome. At times it just seems like it's change just for change's sake.
I want to have fun with the computer, not run madly just to stay in place like the Red Queen. Remember to re-read your own comment 30 years from now and see if you feel the same way.
It's a good question and infects all of our software "sacred cows."
The underlying thing is that we keep driving ourselves to "forward progress" in the sense of a collaborative hegemony, and only in those terms. Either a business wants to own the platform, or the developer wants to build that platform. To do that they have to achieve buy-in from existing stakeholders, but simultaneously reinvent incompatible things. Thus through repeated application of this approach the world of professional software development has aggregated itself into conformance to standards that barely make sense, are poorly specified, and have limited proof of concept, but tick whatever buzzword boxes are relevant to the immediate climate.
If you want to take a real stand, invest yourself in "dead" technologies. Then you can choose whatever you want, and if other people want to follow you on it it's implicit that they are working on a similar problem, and not trying to play the platforms game(else they would be looking for an angle to "modernize")
Healthy skepticism is necessary for any technologist. However, you do have to use it in both directions. I know my biases are that I prefer new technology to old. So I pay attention to that. However, it is just as bad to reflexively consign something as premature or ill-considered. Sometimes the collective hegemony gets it right. They may have missed on microservices but they didn't miss on the cloud really. I do try to be eclectic in my technology choices if only for cognitive reasons.
I'm actually very happy they chose to break this particular behaviour, based on their explanations about security and not allowing long-running services to hog up a machine when its user isn't logged in.
This is severe scope creep for an init replacement. I don't exactly know what "the service manager of the calling user" is meant to actually do (that's way the hell outside POSIX), but if a user wants one they should be able to choose, and it shouldn't matter to them how the sysadmin chose to create routing tables and mount filesystems.
Why should I be forced to prepend all of that to every command I want to survive terminal hang ups? The old mechanism was more user friendly, and better documented.
It’s just another example of systemd making everything on the system worse.
I read this comment and, having recently installed Debian to replace Slackware, thought I'd wipe it again because that's unacceptable. So I ssh'd into the machine, started a screen session, detached it, killed the ssh session, ssh'd back in, and the session was still there. Is this an older problem?
So far, after coming from Slackware, systemd hasn't been in the way so much, thankfully.
I'm not sure why you're getting downvoted when it's the stated position of the systemd developers that they default to breaking those things. They've acknowledged it publicly. Hell, there was even a thread on the exact issue with tmux 4 years ago: https://news.ycombinator.com/item?id=11797075
Top comment on that link:
"""Salient comment: "Or somebody could go find the actual problem @keszybz saw here - systemd/systemd#3005 - which is: In particular, for my gnome session, if I log out, without KillUserProcesses=yes I get some processes which are obviously mistakes. Even if I log in again, I'm much better starting those again cleanly. fix that, and stop trying to make systemd break the world because somebody's gnome session doesn't currently exit cleanly."""
Or Nicholas Marriott's 9 year old as-yet unanswered questions to the systemd developers, that I copy from a post by hn user JdeBP:
> "Shouldn't this code be part of glibc instead of tmux?"
> -- Nicholas Marriott, 2011
> If you want to change how processes daemonize, why don't you change how daemon() is implemented [...] ?
> -- Nicholas Marriott, 2016
> * https://news.ycombinator.com/item?id=11798173
> * https://news.ycombinator.com/item?id=11798328
Do you have decent examples of it failing to work? If so, then that sounds like a bug against screen. The link you provide is absolutely huge and from three years ago.
Please give me a simple "steps to reproduce ..." style report. Please keep it simple and short and I'll fill in the blanks if I can and only trouble you for stuff I'm too daft to work out.
remote login, start screen, ctl-A d (detach), log out, log back in, screen -r (resume) - there is no screen session because it was killed by systemd immediately after log out.
(syslog entry: May 3 09:01:25 $HOSTNAME systemd[1]: session-6.scope: Killing process 3290 (screen) with signal SIGTERM. )
But that's hardly the point, the point is the toxic "you're doing it wrong" mentality that infests the systemd project and its adherents. I've run into this in so many ways since ceding defeat and allowing it onto my systems. This is just the latest in a long line of examples.
Is it that weird that, if you want to run a job in the background outside your session, you ask your service manager to spawn the job?
I mean all of this is just a difference of opinion between, "I want a process supervisor to manage all my background jobs" and "I just want to double fork and throw processes at init."
> remote login, start screen, ctl-A d (detach), log out, log back in, screen -r (resume)
I just tried it on a fully up-to-date CentOS 8 box (which uses systemd), and it worked perfectly fine: after logging back in through ssh, `screen -r` restored the screen session as expected.
From what I have read, that screen session would only be killed if I had set KillUserProcesses=yes in /etc/systemd/logind.conf, which is not the default.
That's most interesting, given that both systemd and Fedora are Red Hat products. If there's one distro that would use the systemd defaults, I would expect it to be Fedora. Who exactly are these defaults for, then?
Switching to CentOS in order to use screen is not an appealing option, and the config appears not to work on my current distro (that might be my poor testing, but it's certainly not the default anyhow.)
I understand Slackware still lacks systemd, so maybe that's a better choice for me :)
The box in question is in use as a media server, but reloading logind logs out the X session, so it wasn't really possible to test properly until today. Having the opportunity to restart the machine (probably I just needed to systemctl daemon-reload?) the "don't kill my processes bro" options are working.
The distro is KDE neon.
The point isn't that I can't (or the distro maintainers can't) get it to work, the point is that the aggressive systemd default breaks established practice, and that this is typical systemd behaviour: arrogant and uncaring. IOW user-hostile in the fullest sense of the term. It's not like this is an isolated example.
This is such a classic example of the type of argument I see from most anti-systemd proponents that it made me laugh out loud when I clicked your link.
For the lazy, here's the context of that cherry-picked sentence:
> In order not to break screen we currently do not set kill-user=1 or kill-session=1.
> Note that in some cases it might be a good thing to kill screen sessions when the user otherwise logs out (think university, where students not logged in on a workstation should not be able to waste CPU), in other cases its a bad thing however (i.e. in yours). That means it must be configurable whether screen is considered an independent session or not.
1. Systemd did not solve any problems I actually had.
2. Systemd introduced problems that I did not have previously.
3. Systemd did not provide me any net benefit, that is, the few benefits it did provide over Upstart/SysV/etc. (easier service configuration and ordering) did not overshadow the issues it caused. Its introduction into my home and professional computing life has been a net negative.
My first experience with systemd (Ubuntu 16.04 IIRC) was a frustrating one, mostly around trying to mount an NFS volume in /etc/fstab, defined with the 'soft' and 'bg' mount options. systemd would often fail to mount the volume, and as a result would never fully boot to the login prompt. Additionally, on shutdown systemd would hang forever trying to unmount that volume. No amount of mount-option tweaking seemed to make much of a difference, so I simply removed the NFS volume from fstab and mounted it manually when I needed it.
Happily running MX Linux today and back to mounting NFS in /etc/fstab.
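The fstab options usually suggested for this failure mode (boot hanging on an unavailable NFS mount) are `nofail`, so the mount is not treated as required for boot; `_netdev`, so it is ordered after the network comes up; and `x-systemd.automount`, which defers the actual mount until first access. A hypothetical line, with the server and paths as placeholders:

```
nas:/export/media  /mnt/media  nfs  soft,bg,nofail,_netdev,x-systemd.automount  0  0
```

Whether this would have fixed the commenter's 16.04 problem is unknowable at this distance; it's just the usual checklist.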
systemd did make package maintainers' lives a lot easier. We don't often see the shift to systemd from their perspective. They are the ones making the distros, so I think they are the ones who really get to say what goes and doesn't go in a distro. They no longer have to maintain long init scripts for the hundreds of packages they are responsible for. Unit files are so much easier to deal with.
Yes, my systems are primarily operated for my benefit. This should not be a controversial stance, and certainly not one worth hurling insults about.
>Seriously - don’t use it if you hate it so much.
Oh how I WISH that were a viable option. Few things would make me happier than that overcomplicated, opinionated, arrogant pile of unsafe code being excised viscera ex machina from any system I have the misfortune of managing.
Sadly I'm stuck with it if I want to run any mainstream Linux distros.
What benefit are you really getting from using mainstream distros if you find their choices to be so objectionable and don't agree with their decisions? And have you looked at the state of the kernel or at your CPU lately? I would really suggest not using Linux or any modern x86_64 processor if you are worried about "overcomplicated, opinionated, arrogant piles of unsafe code" at the lowest possible levels of your system.
>What benefit are you really getting from using mainstream distros if you find their choices to be so objectionable and don't agree with their decisions?
Well, that's an easy one: popularity = support. I dislike Ubuntu these days, but I run LTS on my work machines because no matter what obscure program I might need in the course of my job, it will almost certainly run on Ubuntu. Build instructions will target Ubuntu. Prebuilt .debs will target Ubuntu. Static binaries, or AppImages, will have been tested on Ubuntu. I get a choice between "install the snap, or build from source?" I can install the snap and get on with my day.
It's not inherently better in a technical sense, it's just overwhelmingly easier to swim with the current.
I mean that is their position yes. It’s a good default too since it’s weird that “logging out” doesn’t imply “and end all my programs.” This is basically the same behavior as Windows where anything not going through the task scheduler ends when you log out.
This is a feature that sysadmins have been asking for. On any multi-user system you run into this crap where background processes for users who are long-since gone just hang around forever because they do weird things or hang and ignored the signal. We implemented it ourselves with PAM but systemd’s solution is a lot cleaner.
And they’re not breaking any POSIX behavior. Nowhere does it say when the system isn’t allowed to kill a process.
> it’s weird that “logging out” doesn’t imply “and end all my programs.”
Weirdness is in the eye of the beholder. I regularly run programs whose lifecycles are not in sync with my login session. Why do I need to stick around to see a batch job complete?
I can understand that different folks have different backgrounds which changes expectations... but come on... somebody bringing up tmux, screen, etc. should simply end the conversation. "Oh, that is a common and historical use case that I have not considered, today I learned something."
Yes but that model conflicts with the human notion of being “logged in” to a system.
And the current behavior is literally what you describe. It’s just the quirk of subreapers being implemented recently that daemonizing a process wasn’t local to your session.
It makes zero sense that a process that double-forks is reparented to init instead of your session leader. If the way things currently work were proposed today, it would sound crazy.
>It’s a good default too since it’s weird that “logging out” doesn’t imply “and end all my programs.”
I have to disagree, I don't think I've ever run something where what I intend is "kill this mid-run if my connection drops".
>This is basically the same behavior as Windows
My recollection from when I used Windows Server (admittedly some time ago) is that the default behavior on connection loss or closing an RDP window is indeed the tmux-like behavior. The user has to click a separate "Log out" button, and even then, has to manually confirm that it's okay to kill anything left running.
> It’s a good default too since it’s weird that “logging out” doesn’t imply “and end all my programs.”
It may be a good default if you're starting from scratch, but GNU Screen came out in 1987. (Poettering was 7 years old then.)
> This is basically the same behavior as Windows […]
Comparing it to Windows' behaviour is not going to win you any positive points with me. :)
> This is a feature that sysadmins have been asking for.
As a sysadmin this breaks my own daily workflow. I have 400 VMs that are treated as pets and I probably have screen sessions on at least a third of them. (I have another 500 that are cattle-like.)
> And they’re not breaking any POSIX behavior. Nowhere does it say when the system isn’t allowed to kill a process.
Aka, malicious compliance. See also: nowhere does it say that the US President cannot fire the FBI Director. Also: nowhere does it say that the US Senate has to hold hearings when a US President nominates a judge for the US Supreme Court.
> This is basically the same behavior as Windows where anything not going through the task scheduler ends when you log out.
But when using remote connections, e.g. RDP, the default is that your session is disconnected. You have to actively choose to log out of an RDP session.
When using SSH I typically want the same. If I put my laptop to sleep or a networking issue causes the TCP connection to end, I want my terminal session to be restored when I reconnect via SSH.
Why exactly should students not be able to use CPU cycles while not logged in? I certainly did this as a student and would probably not have complained if it decided to kill my processes when doing so.
I don’t know if these even exist anymore, but I’m imagining a class that meets in a room of workstations. Students can login to any workstation in the room. In this kind of shared environment, you may not want to have a student start a screen session and use the resources of a workstation when they logout. They could be sucking up cpu time from the next person to log in (from the next class).
It’s a scenario that might make sense here. But then again, having a watchdog script just kill leftover processes has been doing this same job for forever. Including this use case in systemd seems like overreach.
Yeah, these still exist, though they're becoming more rare. I think what you actually want, though, is for processes to be killed at the end of the class/lab period (even then, I think this should be done by students manually to avoid losing unsaved work). For example, say a student is running some big batch job as part of a lab. Now, if he wants to, say, go to the bathroom or go to the whiteboard to consult with other students, he has to either leave the computer unlocked (possibly giving neighbors access to private student account data) or kill the running job.
Perhaps. I have worked half my career in academia & research, going back to when Solaris (SPARC) workstations were a thing, and this was never a problem.
You don’t deserve the downvotes. This is precisely the use-case this feature is good for. Multi-user systems where persistent background tasks for unprivileged users doesn’t make sense and is more indicative of a misbehaving program than a user trying to run a job.
Oh my god the number of times we had to bounce systems because the unintended background processes of long gone users overloaded the system.
While it's a valid argument, it's also a prime example of a special case that should not be the default - especially so because in my university, people were instructed to use the Linux workstations remotely for long-running tasks.
Sounds like you had junior admins at best leading your university's *NIX department.
A mid-level admin in 1995 could have easily made a 100 line script to kill background processes from non-logged in users that were running over three hours (or whatever).
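The sort of script being described might look something like the following. This is an illustrative sketch only, not anyone's actual script; the threshold, the `DRY_RUN` flag, and the UID cutoff are all invented for the example:

```shell
#!/bin/sh
# Sketch: report (or kill) long-running processes owned by real users
# who no longer have an active login session.

MAX_HOURS=3   # arbitrary threshold, per the comment above
DRY_RUN=1     # set to 0 to actually send SIGTERM

# Users currently logged in, one per line, de-duplicated.
logged_in=$(who | awk '{print $1}' | sort -u)

# Walk all processes with their owner and elapsed run time in seconds.
ps -eo pid=,user=,etimes= | while read -r pid user etimes; do
    # Skip system accounts (UID < 1000 on most Linux distros).
    uid=$(id -u "$user" 2>/dev/null) || continue
    [ "$uid" -ge 1000 ] || continue
    # Skip users who still have an active session.
    echo "$logged_in" | grep -qx "$user" && continue
    # Skip processes younger than the threshold.
    [ "$etimes" -ge $((MAX_HOURS * 3600)) ] || continue
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would kill $pid ($user, ${etimes}s old)"
    else
        kill -TERM "$pid"
    fi
done
```

As the rest of the thread goes on to argue, a script like this is subject to PID-reuse races between the `ps` and the `kill`; it is the 90%-solution cron job being described, not a process manager.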
I don't understand this angle. You acknowledge the functionality could have been useful even 25 years ago; it makes perfect sense for a daemon developer to integrate the functionality into a session manager and put it behind a config flag which is exactly what they did.
But that’s such a useless definition, because then all software “provides nothing”, since its implementation proves that you could have written it yourself.
logind is far more robust than any of the janky shell scripts I’ve seen over the years to accomplish this.
The functionality to gracefully end long running background processes has been a part of standard killall, pkill, etc. for quite some time.
Even then, yes, the functionality of using ps and kill together has existed for 20+ years. Those tools are already implemented, providing the functionality for decades.
In no universe is logind considered robust, and basic scripts from middling UNIX admins have provided this functionality for decades, untouched. Even the "janky" ones.
See my comment below. These tools are fundamentally broken for this use case and have never provided the functionality needed. Bash scripts are not a process manager and it is incredibly wrong to try and make it work like one.
I responded to the comments below. These tools are not fundamentally broken, you're just ignorant in this space. They have always provided the functionality needed. Scripting can easily manage processes at a higher level, and it's basic functionality to make it work like that.
I've already been in a forest of bash scripts and I would not go back there again. I have no comment on systemd's implementation but the implementation you're talking about is also incorrect. It has never been safe to kill random processes using a bash script running in the background, on most Unixes (and Linux) it is 100% impossible to do that without race conditions due to the limitations of procfs. Doing "ps | grep" and "killall" is a footgun. You would need to implement this in C for it to have a chance of being safe at all, and even then, you would need to rely on system-specific functionality because there is no portable way in POSIX to actually do this.
>I've already been in a forest of bash scripts and I would not go back there again. I have no comment on systemd's implementation but the implementation you're talking about is also incorrect.
In certain cases I don't disagree, but systemd does not implement this feature correctly, so using functionality that's easily reviewable from decades past makes sense. If systemd could properly implement the feature, there would be no need for the scripting.
>It has never been safe to kill random processes using a bash script running in the background, on most Unixes (and Linux) it is 100% impossible to do that without race conditions due to the limitations of procfs.
This is hilariously wrong. Lots of UNIX OSes don't even implement the ps suite of tools by using procfs.
Even then, there were no race conditions in this use case anyway.
>You would need to implement this in C for it to have a chance of being safe at all, and even then, you would need to rely on system-specific functionality because there is no portable way in POSIX to actually do this.
It's already implemented this way on plenty of UNIXen. ps and kill is the POSIX portable way to approach this, so you're wrong about that as well.
Almost everything you said above is incorrect, or misunderstanding basic UNIX.
If you really want systemd to add this then I'm sure they will look at your feature request or PR.
I would urge you to actually check with your system instead of blindly dismissing me as wrong just because your bash script happened to work without error. In my experience BSD-based Unixes get it right and don't use procfs for ps or pkill. They don't have the problem because they use special syscalls for this. But SysV-based systems have historically used procfs to implement ps. Linux also still does. Try unmounting proc and running ps or kill and see what happens. If you can't do it then your system suffers from the problem, which is that you can't safely send a signal to the process after reading it because there is no guarantee that the actual PID will persist in between calls to read() and kill(). POSIX says nothing about this because it doesn't specify procfs, or how pkill should actually be implemented. This is all fair game as far as compatibility is concerned.
There is also the other more obvious race condition in your bash script which can also be pre-empted in between the calls to ps and kill. This can happen on any Unix and is not some big mystery either. PID reuse has been a known problem for decades and Linux finally got a solution to it a couple years ago with pidfd_send_signal. There is also the matter of cgroups but I am not going to get into that because I doubt you will want to hear about it.
>If you really want systemd to add this then I'm sure they will look at your feature request or PR.
The less systemd touches, the better.
>I would urge you to actually check with your system instead of blindly dismissing me as wrong just because your bash script happened to work without error.
Who said this was bash? The script from all those years ago worked perfectly without race conditions.
>They don't have the problem because they use special syscalls for this. But SysV-based systems have historically used procfs to implement ps.
Makes no difference, procfs works fine for this.
>Try unmounting proc and running ps or kill and see what happens.
You're effectively never unmounting procfs. Also, if you managed to, systemd would crash!
>If you can't do it then your system suffers from the problem, which is that you can't safely send a signal to the process after reading it because there is no guarantee that the actual PID will persist in between calls to read() and kill().
There's maybe 10 lines of code to ensure that logic. Again, middling sysadmin work.
>POSIX says nothing about this because it doesn't specify procfs, or how pkill should actually be implemented. This is all fair game as far as compatibility is concerned.
POSIX does say something about this; read about signals. You stop the process before killing it, and ensure that the start time is the same for the stopped process before the kill. That completely eliminates the race condition, using basic POSIX signals.
>There is also the other more obvious race condition in your bash script which can also be pre-empted in between the calls to ps and kill.
That race condition is eliminated with the logic above. That eliminates the PID reuse race condition, even if it's very rare.
>reuse has been a known problem for decades and Linux finally got a solution to it a couple years ago with pidfd_send_signal.
That functionality now lives where it belongs, so instead of coding that logic yourself every time you need to verify a PID, it's handled for you.
>There is also the matter of cgroups but I am not going to get into that because I doubt you will want to hear about it.
Correct, I don't want to hear it from someone who doesn't understand POSIX and basic/intermediate sysadmin work.
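The stop/verify/kill dance described a few comments up can be sketched roughly like this. A hedged sketch, not anyone's actual code: `safe_kill` is a made-up name, and note that a narrow race window still exists between the first start-time read and the SIGSTOP, which is the thread's point about `pidfd_send_signal`:

```shell
#!/bin/sh
# safe_kill PID: record the process start time, SIGSTOP it, verify the
# start time is unchanged (i.e. the PID was not recycled in between),
# then deliver SIGTERM and resume it so the signal is handled.
safe_kill() {
    pid=$1
    start=$(ps -o lstart= -p "$pid") || return 1
    kill -STOP "$pid" 2>/dev/null || return 1
    # Re-read: if the PID was reused, the start time will differ.
    now=$(ps -o lstart= -p "$pid")
    if [ "$start" = "$now" ]; then
        kill -TERM "$pid"
        kill -CONT "$pid" 2>/dev/null   # resume so SIGTERM is delivered
    else
        kill -CONT "$pid" 2>/dev/null   # not our process; release it
        return 1
    fi
}
```

This narrows the classic ps-then-kill window considerably, but only Linux's pidfd mechanism (or BSD-specific syscalls) closes it entirely.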
At university I remember working around the auto-killing of logged-out users' processes by running screen, then inside that screen ssh'ing into localhost.
https://www.reddit.com/r/programming/comments/4ldewx/systemd...