Hacker News

The problem with benchmarks is that I never see any that estimate the impact of the extra code size on programs the size of, say, Photoshop. Such a program takes annoyingly long to load. Is code size part of that problem? Probably. Is the bloat added by exceptions significant? I'd like to know.


When it takes a program too long to load, it is because the program is doing too much non-exception work. The exception-handling code is not even loaded unless something throws during loading, which would just be bad design.


I think the exception code _is_ loaded. It is only a theoretical possibility that loading it could be avoided.

I just built the following code with g++ v7.4 (from MSYS64 on Windows):

    #include <stdlib.h>  // random() is declared here, not in <math.h>
    #include <stdexcept>

    void exitWithMessageException() {
        if (random() == 4321)
            throw std::runtime_error("Halt! Who goes there?");
    }

    int main() {
        exitWithMessageException();
        return 1234;
    }
The generated code mixed the exception handlers with the hot-path code. Here are the address ranges of relevant chunks:

    100401080 - Hot path of exitWithMessageException
    100401099

    10040109a - Cold path of exitWithMessageException
    10040113f

    100401140 - Start of main


Interesting, GCC 7.x seems to simply put the cold branch on a separate nop-padded cache line.

GCC 9 [1] instead moves the exception-throwing branch into a cold clone of the exitWithMessageException function. The behaviour seems to have changed starting from GCC 8.x.

[1] https://godbolt.org/z/PKKZ8m


Ooo, fancy. There is still a long way from just that to actually getting the savings in a real program running on a real operating system. For example, if I have thousands of source files, each with a few hundred bytes of cold exception handlers, do they get coalesced into a single block for the whole program?


Coalescing functions in the same section should be the linker's job, yes.


Code paths introduced in order to execute any potential stack unwinding are inefficient and make your code slow, especially in tight loops. This was common knowledge back in the 2000s.


Common knowledge, but not correct. Code to destroy objects has to be generated for regular function returns anyway, and the exception handler jumps into it too. Managing resources by hand would also require code, except you would have to write it yourself, and its expense shows up in its fragility.


What I meant was the extra stack-frame bookkeeping the compiler inserts into the assembly, which is not the same thing as a call to free, and which slows things down.



