"I’m less sold on ensuring the safety of memory shared between threads because I tend to prefer message passing"
Memory safety, at least to my eyes, has not traditionally encompassed that as a requirement. I don't consider this a solved problem, in that it has a lot of solutions and consensus about them is still developing. (e.g., I still expect async as it has been implemented in Node & Rust to eventually be considered a gigantic mistake but clearly that is not an uncontroversial opinion in 2023; check in with me in 2033 or 2043). So I'd advise trying to use one of the better solutions but I'm not quite to "there's no reason to not use one of these things".
So my passion is mostly about out-of-bounds access and use-after-free. If it costs you performance... take the hit. It's not a lot. And if you do need unsafe approaches, they are almost always in some tight loop somewhere, or something where you can selectively take the gloves off and drop down to assembler or something. You don't need your entire language to be unsafe just so you don't have to wrap "unsafe { }" around your tight inner loop.
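To make that concrete, here's a minimal sketch in Rust of what "taking the gloves off" locally looks like: the invariant is checked once up front, and the `unsafe` opt-out is confined to the hot loop instead of infecting the whole program. The function name is just illustrative.

```rust
// Hypothetical hot-loop example: safe code everywhere, with a small,
// audited unsafe region that skips per-element bounds checks.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len()); // invariant established once, outside the loop
    let mut sum = 0.0;
    for i in 0..a.len() {
        // SAFETY: i < a.len() == b.len(), guaranteed by the loop bound
        // and the assert above.
        sum += unsafe { a.get_unchecked(i) * b.get_unchecked(i) };
    }
    sum
}

fn main() {
    println!("{}", dot(&[1.0, 2.0, 3.0], &[4.0, 5.0, 6.0])); // prints 32
}
```

Everything outside those few lines stays checked by the compiler, which is the whole point: the unsafe surface you have to audit is a handful of lines, not the codebase.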
> So my passion is mostly about out-of-bounds access and use-after-free.
Yeah, those are the big ones indeed, and I am willing to take a performance hit to get there. If that's the only hit I take, I'll still be much better off than paying the Electron tax.
I do, however, still feel some discomfort about use-after-free because, to be honest, I just don't know enough about the relevant use cases, compilation techniques, and runtime checks. So far my only relevant experiences have been GC, RAII, and stack-only allocation. They all solve my problem (or at least I can see how I could write a compiler that would solve each use case for me). But I know those aren't the only use cases, and I'm not familiar enough with the other allocation patterns (pool, arena…) to have a relevant opinion.
But perhaps I’m just stressing over nothing? The problem is easily stated after all: no object should be accessed after its backing memory has been freed. One way to do that is to make sure the object (and any reference to it) goes out of scope before the backing storage is freed. Which sounds doable enough if the backing storage itself follows a stack discipline…
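That "references go out of scope before the storage is freed" discipline can be sketched in Rust, where the borrow checker enforces it at compile time. `with_scratch` is a made-up helper name, not a standard API:

```rust
// Sketch of scoped allocation: the backing buffer is owned by with_scratch,
// and the borrow checker guarantees no reference escapes the closure, so
// use-after-free is ruled out statically.
fn with_scratch<R>(len: usize, f: impl FnOnce(&mut [u8]) -> R) -> R {
    let mut buf = vec![0u8; len]; // backing storage, freed when we return
    f(&mut buf) // the caller may use the buffer, but cannot keep it
}

fn main() {
    let checksum = with_scratch(4, |buf| {
        buf.copy_from_slice(&[1, 2, 3, 4]);
        buf.iter().map(|&b| b as u32).sum::<u32>()
    });
    println!("{}", checksum); // prints 10
    // A closure that tried to smuggle `buf` itself out would not compile.
}
```

The nice property is that "stack discipline" here is a type-system fact, not a convention: the lifetime of every borrow handed to the closure is pinned inside the call.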
Hey, I can glimpse here a way to allow allocations while statically guaranteeing a limit on memory usage (barring input-dependent allocation amounts). Perhaps even avoiding fragmentation, which would be terrific for embedded use cases.
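One hedged way to picture that guarantee is a fixed-capacity bump arena: the total budget is a compile-time constant, allocation is a pointer bump, and everything is released at once, so there is nothing to fragment. The names below are illustrative, not a real crate:

```rust
// Hypothetical fixed-capacity bump arena. N is the static memory budget.
struct Arena<const N: usize> {
    buf: [u8; N], // the entire backing store, known at compile time
    used: usize,  // bump pointer
}

impl<const N: usize> Arena<N> {
    fn new() -> Self {
        Arena { buf: [0; N], used: 0 }
    }

    // Hands out `len` bytes, or None once the static budget is exhausted.
    fn alloc(&mut self, len: usize) -> Option<&mut [u8]> {
        if self.used + len > N {
            return None; // over budget: no fallback heap allocation
        }
        let start = self.used;
        self.used += len;
        Some(&mut self.buf[start..start + len])
    }
}

fn main() {
    let mut arena: Arena<16> = Arena::new();
    assert!(arena.alloc(8).is_some());
    assert!(arena.alloc(8).is_some());
    assert!(arena.alloc(1).is_none()); // the 16-byte limit is enforced
    println!("arena exhausted as expected");
}
```

Because `alloc` borrows the arena, the returned slices cannot outlive it either, so this sketch also keeps the use-after-free guarantee from upthread.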