The same way that Element does - they host a service for you that relays push notifications to Firebase Cloud Messaging for Android or the Apple Push Notification service (APNs) for iOS. I believe ntfy's hosted option is how they offset the costs of running this, even though self-hosted instances can take advantage of those relay servers free of charge.
I think it's reasonable for Zulip to ask for compensation for access to these gateways, since Apple and Google do not make them available to end users free of charge, and the burden of responsibility to ensure that these systems aren't abused is on them. Also, the fact that they offer mobile push notifications for any self-hosted server of up to 10 users is pretty generous, and there seems to be a Community plan option for larger servers that includes "groups of friends" as a qualifier. It really seems they're offering quite a bit.
This isn't true; self-hosted Android push notifications in ntfy are provided using a "foreground service" by default (i.e., the app keeps a websocket open and listens), unless you set up Firebase yourself and build a custom version of the app with the cert baked in.
I think you misread; the delays only apply if you don't use instant delivery. I use it, and notifications are consistently delivered instantly, which makes sense: it's a websocket.
As to battery drain, I'm sure it technically does consume more, but according to my phone it's an insignificant amount: <1% of usage, which is the lowest stat it gives you. Their docs suggest the same thing:
> the app has to maintain a constant connection to the server, which consumes about 0-1% of battery in 17h of use (on my phone). There has been a ton of testing and improvement around this. I think it's pretty decent now.
Honestly, it's a good solution that works well with few downsides. The only real one is that iOS doesn't support it, but personally I don't have any Apple phones, so I get an essentially free lunch.
Google doesn't have any magic way to do instant notifications that nobody else has access to. The only thing they have in this regard is the ability to disable battery optimisations without triggering warnings.
Notification and battery performance is on par with Google's solution, except when an Android build does dumb things to prevent the background activity, in which case notification performance gets worse and battery draw increases (not sure why exactly; it's just a common issue).
Well, there is an advantage: if everything uses the one service, then you only need to keep one thing alive to check it, so each new app is "free" if you already have push enabled (assuming push notifications are rare enough that the activity itself isn't the cost). Each app doing it for itself, by contrast, is going to cause more battery use, so it isn't directly equivalent.
However, in my experience it also isn't a big deal, at least for ntfy.sh.
Listening on a socket doesn't drain any battery when no data arrives, unless the app does other things that actually use CPU. That's just what Google/Apple want you to believe so you depend on their proprietary lock-in services.
Also like, how else would the Google / Apple services do it? Probably via sockets right? I guess you could do it in a pull-based approach on a timer, but that doesn't seem more efficient to me.
A single process waiting on multiple sockets is basically no more expensive than waiting on a single socket, but each app running its own background process is more expensive. So for best performance you really want to delegate all the push-notification listening for all the apps on a device to a single background process owned by the OS. It would be fine for each app to use its own push server, though of course most apps don't actually want to self-host this.
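To make the multiplexing point concrete, here's a minimal sketch using Python's stdlib `selectors` module (the "app" labels and socketpairs are made up to stand in for real push connections): one process blocks on several sockets in a single epoll/kqueue wait, consuming no CPU until data actually arrives on any of them.

```python
import selectors
import socket

# One selector multiplexes any number of sockets; the process sleeps
# in a single epoll/kqueue wait until one of them has data.
sel = selectors.DefaultSelector()

# Simulate three independent "push" connections with socketpairs.
pairs = [socket.socketpair() for _ in range(3)]
for i, (server_end, app_end) in enumerate(pairs):
    app_end.setblocking(False)
    sel.register(app_end, selectors.EVENT_READ, data=f"app-{i}")

# A "push server" sends a message on one connection only.
pairs[1][0].send(b"new message")

# This call blocks (at ~0% CPU) until something is readable, then wakes.
events = sel.select(timeout=5)
for key, _ in events:
    print(key.data, "->", key.fileobj.recv(1024).decode())
# prints: app-1 -> new message
```

Registering a fourth or fortieth socket adds essentially nothing to the idle cost, which is why a single OS-owned push daemon scales where per-app background processes don't.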
From a platform risk perspective, each tenant has dedicated resources, so it's their platform to blow up. If a customer with root access blows up their own system, then the resources from the MSP to fix it are billable, and the after-action meetings would likely include a review of whether that access is appropriate, whether additional training is needed to prevent those issues in the future (also billable), or whether the customer-provider relationship is the right fit. Will the on-call resource be having a bad time fixing someone else's screw-up? Yeah, and having been that guy before, I empathize. The business can and should manage this relationship, however, so that it doesn't become an undue burden on their support teams. A customer platform that is always getting broken at 4pm on a Friday, because an overzealous customer admin decided to run arbitrary kubectl commands, takes support capacity away from other customers when a major incident happens, regardless of how much you're making in support billing.
This is essentially how it is. Additionally, the reality is that our customers don't often even need to think about using root access, but they have it if they want it. They are putting a lot of trust in us, so we also put trust in them.
Musicians who are being threatened by AI impersonating them, flooding the market with music like theirs, and otherwise actually harmed by this would disagree with you. Benn Jordan speaks at length about it in this video: https://www.youtube.com/watch?v=QVXfcIb3OKo
Lutris will use an older WINE version by default (something based on WINE 8, IIRC) for reasons I don't quite understand. You can, however, configure Lutris to use proton-cachyos by default, with which I was able to get Battle.net to install and work correctly without issues. Not sure what feature was implemented in later WINE to make that work better, but it works.
"EAC supports Linux, but devs just won't turn it on" is the clickbait answer, but the details are more nuanced. EAC has multiple security levels that a title can set based on the threat model of the game, and most games with heavy MTX that use EAC shy away from it, largely because Fortnite doesn't do it. EAC is owned by Epic, and if Tim Sweeney says that you can't do MTX on Linux safely, then any AAA live services game with in-game MTX is going to shy away from it, regardless of how true the statement actually is.
I don't know if this is a fever dream or if it actually happened, but I seem to recall reading something about Tim Sweeney using Linux for a week to see how it compared; if he liked it, Epic MegaGames would publish titles with Linux support. He ended up complaining about some irrelevant things in KDevelop, and it was pretty clear what his intentions were before he even tried things.
I can't find any reference to this online, but I'm pretty sure that it happened. This would have been ~1998.
For use cases like attaching to an SBC or really any other computer, I'm sure this is great, but there are also USB crash cart consoles that can be had pretty cheaply, like the NanoKVM-USB[0] or Cytrence's KIWI[1]. These get you video, keyboard, and mouse.
This is my current pick: simple, works exactly as expected, very small. The only thing I ever fight with is remembering to accept macOS's warning about connecting a USB device.
For just video (or w/ separate keyboard/mouse), the Genki Shadowcast devices work really well.
Is there anywhere I can buy a NanoKVM-USB? The page you linked has a 'preorder' page linked, but I'm not sure how long I'd have to wait and whether it's an actual product that people have successfully used.
I use the Comet in the field by plugging the Ethernet connection it expects into my laptop. Both ends set up link-local IPs, and it's accessible in the browser without internet.
Is there a VGA "story" for these devices? Most of the Dell and HP servers I'm physically proximate to don't have HDMI video. VGA connectors abound on the gear I work with.
I've had poor luck with the couple of VGA-to-HDMI adapters I've used over the years (latency, poor video quality), so I guess my question was more "Are there any known-working good VGA adapters for these?"
Those both look very nice, but I am disappointed that neither lists support for DP alt-mode as an input despite having a type-c port on the input side. If I were to buy such a device, I'd want it to be future-proof while also supporting legacy video input like HDMI, but these are legacy-only. Good for my old raspberry pis and my ancient sandybridge NAS, but these days I only buy computers capable of single-cable operation (with exceptions for power cables for power-hungry devices like desktops).
I feel like this is kind of looking a gift horse in the mouth, especially for the cost of these units. Certainly not impossible to add, but an increase in the BOM vs. the loads of off-the-shelf super cheap HDMI capture chips available, and questionable compatibility (DP Alt Mode is getting better, but plenty of devices still have interesting quirks with it depending on implementation). These devices aren't made with daily driving a system in mind so much as for installation and recovery of a system.
Would it be handy to have this all in one cable on both ends? Sure, absolutely, that'd be killer. I personally don't think it's too big of an ask to use two cables in an installation or recovery case though, and if your devices only have USB-C ports for video out, an active USB-C to HDMI via DP-Alt cable can be had to meet that need.
This follows the Obsidian model, which I love and support: give folks the best part of the product, offer a paid option to enhance it, but allow folks to use alternatives as first-class options.
This doesn't address everything, but I thought I'd chime in specifically on the chat history question. It's still early days for support from most IRCds, but IRCv3 has been slowly bringing protocol-level support for many of the same features that Slack, Teams (chat), Mattermost, etc. have, including chat history. It's likely not reasonable for the public IRC networks to ever support history, but for a self-hosted IRC server serving your team/company/community/whatever, it would be totally feasible to connect and receive scrollback.
I did not know about IRCv3! These are the HN insights I love. I wonder if IRCv3 is still semi-usable from a raw telnet session like old IRC is? I remember using that in the early 2000s when I wanted to get on IRC but didn't have a real client.
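For what it's worth, IRCv3 is still the same line-based text protocol, just with capability negotiation bolted on, so a raw telnet/openssl session still works. A hypothetical session against a server supporting the draft chathistory extension might look roughly like this (the server name, nick, and channel are made up, and the exact capability list varies by server):

```
CAP LS 302
NICK alice
USER alice 0 * :alice
:irc.example.org CAP * LS :sasl server-time batch draft/chathistory
CAP REQ :server-time batch draft/chathistory
:irc.example.org CAP alice ACK :server-time batch draft/chathistory
CAP END
...
CHATHISTORY LATEST #team * 50
```

The `CHATHISTORY` command then returns the last 50 messages for `#team` as a batch; clients that never send `CAP REQ` just see a plain old IRC server.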
Hack Club is a non-profit community, so the bulk of their user count isn't non-profit employees or even volunteers or mentors, it's a bunch of kids hanging out and making cool stuff.
Maybe that doesn't move the needle on whether they're a small non-profit or not for you, but it's different than a massive non-profit like, say, the Prevent Cancer Foundation, which also receives millions of dollars per year to facilitate their mission.
This is a good point. I'm not too sure how non-profits are categorized as "small" or "large", but typically when we're talking about SaaS costs, it depends on the number of seats or licenses. So for example, the Prevent Cancer Foundation might take in millions of dollars per year, but they only have 26 employees[0], so in a way they are a "small" nonprofit compared to others that might have hundreds of employees.