But even proficient C and C++ programmers continue to produce code with memory safety issues leading to remote code execution exploits. This argument doesn’t hold up to the actual experience of large C and C++ projects.
They aren't trying to prevent them. It's trivial to prevent them if you actually put effort into it; if you don't, it's going to be vulnerable. This is true of all security concerns.
"You aren't trying hard enough" isn't a serious approach to security: if it was, we wouldn't require seatbelts in cars or health inspections in restaurants.
(It's also not clear that they aren't trying hard enough: Google, Apple, etc. have billions of dollars riding on the safety of their products, but still largely fail to produce memory-safe C and C++ codebases.)
In the case of OpenSSL, Big Tech clearly neglected proper support until after the Heartbleed vulnerability. Prior to Heartbleed, the OpenSSL Software Foundation received only about $2K annually in donations and employed just one full-time employee [1]. Given the project's critical role in internet security, Big Tech's neglect raises concerns about their quality assurance practices for less critical projects.
The OpenSSL Foundation is not exempt from criticism despite the inadequate funding. Heartbleed was discovered by security researchers using fuzz testing, but proactive fuzz testing should have been a standard practice from the start.
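For concreteness, "proactive fuzz testing" doesn't have to mean anything elaborate. A minimal libFuzzer-style harness run continuously under AddressSanitizer would surface out-of-bounds reads like Heartbleed's as crashes. A rough sketch follows; parse_tls_record is a made-up stand-in for whichever real parsing entry point you'd target:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical target: stands in for the real record-parsing routine. */
    int parse_tls_record(const uint8_t *buf, size_t len);

    /* libFuzzer entry point: called repeatedly with mutated inputs.
       Built with ASan, any out-of-bounds read or write aborts with a report. */
    int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
        parse_tls_record(Data, Size);
        return 0;
    }

Build with something like "clang -g -O1 -fsanitize=fuzzer,address harness.c" (plus the library's sources) and leave it running in CI.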
OpenSSL is not a great example, either before or after funding — it’s a notoriously poorly architected codebase with multiple layers of flawed abstractions. I meant things more like Chromium, WebKit, etc.: these have dozens to hundreds of professional top-bracket C and C++ developers working on them, and they still can’t avoid memory corruption bugs.
Good True C Programmers had guard rails, canary bytes, etc. to detect and avoid actual buffer overflow (into unallocated memory) rather than technical buffer overflow (reading or writing past the end of a char or byte array).
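To make the canary idea concrete, here is a minimal sketch; guarded_buf, guarded_alloc, guarded_intact, and CANARY are names invented for this example, and compilers and sanitizers do the same thing more systematically (e.g. -fstack-protector for stack buffers, AddressSanitizer's redzones for heap buffers):

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    #define CANARY 0xDEADBEEFu

    typedef struct {
        size_t len;            /* usable payload size */
        unsigned char *data;   /* payload followed by a canary word */
    } guarded_buf;

    /* Allocate len usable bytes plus a trailing canary (error handling omitted). */
    static guarded_buf guarded_alloc(size_t len) {
        guarded_buf b = { len, malloc(len + sizeof(unsigned)) };
        unsigned canary = CANARY;
        memcpy(b.data + len, &canary, sizeof canary);
        return b;
    }

    /* Returns 0 if something wrote past the end of the payload. */
    static int guarded_intact(const guarded_buf *b) {
        unsigned canary;
        memcpy(&canary, b->data + b->len, sizeof canary);
        return canary == CANARY;
    }

    int main(void) {
        guarded_buf b = guarded_alloc(16);

        memset(b.data, 'A', 16);        /* in bounds: canary survives */
        assert(guarded_intact(&b));

        memset(b.data, 'A', 16 + 4);    /* past the logical end (still inside the
                                           raw allocation): clobbers the canary */
        assert(!guarded_intact(&b));    /* detected after the fact, not prevented */

        free(b.data);
        return 0;
    }

Note that this only detects the "technical" overflow after it has already happened; it doesn't stop the out-of-bounds write itself, which is why it's a mitigation rather than memory safety.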