The problem with benchmarks is that I never see any that estimate the impact of the extra code size on programs the size of, say, Photoshop. It takes annoyingly long to load such a program. Is code size part of that problem? Probably. Is the bloat added by exceptions significant? I'd like to know.
When a program takes too long to load, it is because the program is doing too much non-exception work. The exception-handling code isn't even loaded unless something throws while the program is loading, which would just be bad design.
Interesting, GCC 7.x seems to simply put the cold branch on a separate, nop-padded cache line.
GCC 9 [1] instead moves the exception-throwing branch into a cold clone of the exitWithMessageException function. The behaviour seems to have changed starting from GCC 8.x.
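For reference, here's a minimal sketch of the kind of code being discussed (the function name, check, and message are made up for illustration). Compiled with -O2, GCC 7.x keeps the throwing branch at the end of the function on its own padded cache line, while GCC 8.x and later typically split it into a .cold part placed in .text.unlikely:

    #include <stdexcept>
    #include <string>

    // Hot function whose only throwing branch is the range check.
    int process(int value) {
        if (value < 0) {
            // Cold path: building the message and throwing gets outlined
            // away from the hot code (shows up in the disassembly as
            // something like "process(int) [clone .cold]").
            throw std::out_of_range("negative value: " + std::to_string(value));
        }
        // Hot path stays compact and cache-friendly.
        return value * 2;
    }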
Ooo, fancy. It's still a long way from that to actually getting the savings in a real program running on a real operating system, though. For example, if I have thousands of source files, each with a few hundred bytes of cold exception handlers, do they get coalesced into a single block for the whole program?
Code paths introduced in order to execute any potential stack unwinding are inefficient and make your code slow, especially in tight loops. This was common knowledge back in the 2000s.
Common knowledge, but not correct. Code to destroy objects has to be generated for regular function returns anyway, and the exception handler jumps into that same code. Managing resources by hand would also require code, except that you have to write it yourself, and its expense arises from its fragility.
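A small sketch of that shared cleanup, with made-up names (File, parse): the destructor body is emitted once and is reached both from the normal return and from the unwinder's landing pad, whereas the hand-written version has to repeat the cleanup on every early-exit path.

    #include <cstdio>
    #include <stdexcept>

    // RAII wrapper: the destructor *is* the cleanup code.
    struct File {
        std::FILE* f;
        explicit File(const char* path) : f(std::fopen(path, "r")) {
            if (!f) throw std::runtime_error("open failed");
        }
        ~File() { if (f) std::fclose(f); }
        File(const File&) = delete;
        File& operator=(const File&) = delete;
    };

    void parse(std::FILE* f) {
        // Stand-in for real work that may throw.
        if (std::fgetc(f) == EOF) throw std::runtime_error("empty file");
    }

    void use_file(const char* path) {
        File file(path);
        parse(file.f);
        // No explicit cleanup: ~File() runs on normal return *and* when
        // parse() throws, via the same generated destructor code.
    }

    // Manual equivalent: the cleanup must be repeated (or reached by goto)
    // on every early-exit path, which is where the fragility comes from.
    int use_file_by_hand(const char* path) {
        std::FILE* f = std::fopen(path, "r");
        if (!f) return -1;
        if (std::fgetc(f) == EOF) {
            std::fclose(f);   // easy to miss on one of the exit paths
            return -1;
        }
        std::fclose(f);
        return 0;
    }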