Uh, you know who’s so patiently answering your questions, right? My guess is that he’s been world famous at this since you were a twinkle in your daddy’s eye. He’s had a minute to think about it...
Blind appeals to authority are not appropriate here. It’s pretty clear that Walter has staked out an unreasonable position, regardless of how renowned (and rightly so!) he is for his work on D.
If my position is unreasonable, why hasn't the C Standard endorsed garbage collection? Or the C++ Standard? Rust's entire reason for existing is to find a way to make memory management safe without a GC.
ISO C++11 introduced support for GC APIs, thereby endorsing garbage collection in C++.
Those APIs were removed again in ISO C++23, but only because the biggest C++ GC customers, like Epic and Microsoft, never made use of them and carried on using their own implementations.
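For reference, here's roughly what those hooks looked like. A minimal sketch against a C++11/14/17 standard library (the declarations lived in <memory>; no shipping compiler ever attached an actual collector to them):

    #include <memory>  // std::declare_reachable and friends (C++11)

    int main() {
        int* p = new int(42);

        // Promise a hypothetical collector that this object stays live
        // even if no traceable pointer to it remains.
        std::declare_reachable(p);

        // Ask which pointer-safety model the implementation provides.
        // Mainstream compilers all answered "relaxed", i.e. no collector.
        std::pointer_safety s = std::get_pointer_safety();
        (void)s;

        p = std::undeclare_reachable(p);  // back to normal lifetime rules
        delete p;
    }

Since implementations never did anything beyond no-op stubs for these, removing them (P2186, "Removing Garbage Collection Support") broke essentially no one.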
Your position is unreasonable because I think you know better than to espouse it. You don't have to be an expert on everything, of course, but given the knowledge you already have, you should at least be able to avoid the claims you're making in this thread, if not arrive at what I lay out here.
Your specific claim at the top of this chain is in response to someone asking "why don't you do this thing [that has a 1% performance impact]?", to which you respond "I cannot bundle a 1% performance hit into my language, because that would not make it a systems language". Putting aside the usual debate over what a systems language even is: if we consider the ones that are typically not up for debate (C, C++, Rust), they differ in performance by far more than 1% on typical workloads. As some commenters have mentioned, compiler optimizations alone cause pessimizations of greater than 1%, so using a number like this as a qualifier of how feasible something is doesn't make sense.
Taking a step back, I feel like you are missing what people actually mean when they talk about "1%". Like, yes, 1% of Facebook's server load is $$$. Making a dozen 1% improvements to SQLite is a good improvement. But you're conflating that with what you're doing, and it's not at all related. There are companies using Python in production right now trying to save 1%! The reason this is a "rational" decision is that performance work is not really a function of what percentage you can shave off your workload; it's a question of whether a couple of people wringing a couple of percent out of your existing code is worth their cost.

The unfortunate truth is that pretty much all code, even the stuff running billions of CPU hours in datacenters, is leaving tens of percent on the table at the very least, just by using a high-level language, having poor data access patterns, and so on. The reason this is OK is that rewriting all of that code into perfect assembly or whatever is not a feasible task: it would be super tedious, error prone, and require huge amounts of effort. So it falls to specific people to shave off a few percent here and there around an inherently inefficient codebase, because that's where the balance lies. Compared to the effort it took to write or migrate the code, the 1% win is always going to be a small fraction of the engineering cost. Otherwise, you'd just replace the thing altogether.
So, circling back to your point: a 1% loss isn't actually catastrophic. If you added 1% of overhead for no reason and it was easy to get rid of with something a little smarter, people would rightfully be up in arms. But if you bring actual benefit that is very hard to get any other way, it's usually going to be welcomed. I mean, people are putting in specialized hardware that slows down their general C++ application code by "just" 5-10% to get a fraction of the benefits of real memory safety. There's a limit to how much people will tolerate, and it's definitely nothing like 30%, but if the only penalty were 1% at runtime and you never had to worry about freeing memory again, that would probably be a good tradeoff.
(To answer your bit about why the C standard doesn't have garbage collection: multiple reasons. One is that people who use C have very picky views about what garbage collection should look like on their platform. The other is that garbage collection is typically not "just" 1% CPU overhead; it interacts with latency and memory usage too, which I think other people have actually pointed out about this 1% figure. The reason I specifically called you out is that you accepted the number and argued for it, rather than saying "oh, well, there's actually more to it than just that 1%".)
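For concreteness, the way GC actually shows up in C and C++ today is as an opt-in library rather than a standard feature. A minimal sketch, assuming the Boehm-Demers-Weiser collector is installed (link with -lgc):

    #include <gc.h>     // Boehm-Demers-Weiser conservative collector
    #include <cstdio>

    int main() {
        GC_INIT();  // must run before the first GC allocation

        for (int i = 0; i < 1000000; ++i) {
            // Allocated on the collected heap and never freed by hand;
            // the collector reclaims it once it becomes unreachable.
            int* p = static_cast<int*>(GC_MALLOC(sizeof(int)));
            *p = i;
        }

        // The cost is not CPU alone: heap growth, pause times, and
        // conservative stack scanning all vary by platform and workload.
        std::printf("GC heap size: %zu bytes\n", GC_get_heap_size());
    }

Whether that tradeoff is acceptable depends entirely on the platform and the workload, which is exactly why a one-size-fits-all collector has never made it into the C standard.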
> Blind appeals to authority are not appropriate here.
True! Duly upvoted. Although I feel his arguments are closely reasoned, and I don’t agree with your point. And the person I responded to brought nothing to the table.
Where is it that Walter is obviously wrong, by the way? Not trying to be argumentative. I just did not see the holes in his argument that you do.