They didn't explain why they've chosen Rust. There are a lot of memory-safe languages besides Rust, especially at the application level (as opposed to the systems level, where Rust sits).
There are a lot of memory-safe languages; there are fewer that have (1) marginal runtime requirements, (2) transparent interop/FFI with existing C codebases, (3) both spatial and temporal memory safety without GC, and (4) significant development momentum behind them. Rust doesn't have to be unique among these qualifications, but it's currently preeminent.
Yes, but you assume all their projects need all 4 of these. I like Rust, but it's a bad choice for many areas (e.g. aforementioned application-level code). I'd expect serious decisions to at least take that into account.
I’m not assuming anything of the sort. These are just properties that make Rust a nice target for automatic translation of C programs; there are myriad factors that mean nowhere close to 100% of programs (C, application level, or otherwise) will be suitable for translation.
Apart from runtime/embedded requirements, there's the big question of how you represent what C is doing in other languages that don't have interior pointers and pointer casting. For example, in C I might have a `struct foo*` that aliases the 7th element of a `struct foo[]` array. How do you represent that in Java or Python? I don't think you can use regular objects or regular arrays/lists from either of those languages, because you need assignments through the pointer (of the whole `struct foo`, not just individual field writes) to affect the array. Even worse, in C I might have a `const char*` that aliases the same element and expects every write to affect its bytes. To model all this you'd need some Frankenstein, technically-Turing-complete, giant-bytestring-that-represents-all-of-memory thing that wouldn't really be Java or Python in any meaningful sense, wouldn't be remotely readable or maintainable, and wouldn't be able to interoperate with any existing libraries.
In Rust you presumably do all of that with raw pointers, which leaves you with a big unsafe mess to clean up over time, and I imagine a lot of the hard work of this project is trying to minimize that mess. But at least the mess that you have is recognizably Rust, and incremental cleanup is possible.
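A minimal sketch of what that raw-pointer translation might look like, using a hypothetical `Foo` struct standing in for the `struct foo` above (not from any actual translation tool's output):

```rust
// A stand-in for the C `struct foo`; Copy so whole-struct assignment
// through a pointer mirrors C's struct assignment semantics.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Foo {
    x: i32,
    y: i32,
}

fn main() {
    let mut arr = [Foo { x: 0, y: 0 }; 10];

    // C: `struct foo *p = &arr[7];` — an interior pointer aliasing element 7.
    let p: *mut Foo = &mut arr[7];

    // C: `const char *c = (const char *)&arr[7];` — a byte-level alias of
    // the same element, the "even worse" case from the comment above.
    let c: *const u8 = p as *const u8;

    unsafe {
        // A whole-struct assignment through the interior pointer...
        *p = Foo { x: 1, y: 2 };

        // ...is visible through the byte alias (low byte of `x` is 1
        // on a little-endian target).
        if cfg!(target_endian = "little") {
            assert_eq!(*c, 1);
        }
    }

    // ...and through the array itself, just as in C.
    assert_eq!(arr[7], Foo { x: 1, y: 2 });
}
```

None of this would typecheck with safe references, since `&mut arr[7]` and `&arr` can't coexist; the raw pointers are exactly the "unsafe mess" that incremental cleanup would later replace with indices or slices.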
I’ve spent the past few months translating a C library heavy in pointer arithmetic to TypeScript. Concessions have to be made here and there, but I ended up building utility classes to capture most of the functionality.

Structs can be represented as TypeScript types, since types can also be expressed as unions in a way that resembles structs. These const types can have fields updated in place, and they inherit properties from other variables much like pass-by-reference; JS gives you pass-by-sharing, or you can deep-clone to copy. As for affecting the underlying bytes of a typed value, I came up with something I call byte type reflection: a union type that does self-inference on the object's properties in order to flatten itself into a byte array, so the usual object indexing and length properties automatically apply only to the byte array as it has been expressed (the underlying object remains as well). C does this automatically, so there's some overhead here that can't be removed.

Pointer arithmetic can be applied with an iterator class that keeps track of the underlying data object, though sadly that counts as another copy. Array splicing can substitute for creating a view of a pointer array, which isn't optimal, but there are some Kotlin-esque utilities that create array views which can be used. Surprisingly, the floating-point values, which I expected to be way off since I can only express them as `number`, are close enough. I use Deno FFI, so there's plenty of room to go back to unmanaged code for optimizations, and WASM can be tapped into easily.

For me those values are what's important, and it does the job adequately. The code is also way more resilient to runtime errors than the C library, which has a tendency to just blow up. TL;DR: don't let it stop you until you try, because you might just be surprised at how it turns out. If the function calls of a library are only 2-3 levels deep, how much "performance" are you really gaining by keeping it that way?
Marshalling code is the usual answer and Deno FFI does an amazing job at that.
Naah, I believe that in some areas, like DARPA's, a lot of folks still do C purely out of tradition. Same as in banking, where they still use COBOL: way too many existing systems and integrations are already in COBOL.
At DARPA I think a lot of control software is written in C, even though some of their controllers can even run Java.
So a large-scale effort is needed to refactor all the infrastructure and processes.