Hacker News

How does he know that the performance improvement wasn't just a side effect of altering the alignment of his code? Here's a guy getting a 48% performance improvement just by randomizing the alignment of his functions: https://youtu.be/Ho3bCIJcMcc?t=351 or slides if you prefer https://github.com/dendibakh/dendibakh.github.io/blob/master...


I am very familiar with this effect. We could of course use effects like this to cast doubt on any benchmark anyone has ever run, unless they specifically mention that they tested for this and a hundred other benchmarking gotchas. We can, if we like, assert that all things are unknowable, while at the same time asserting that the thing we want to be true is surely true.

However, it appears to be fairly common for having exceptions enabled, even when you aren't using them, to cause performance problems; it isn't just a one-off in Tim's case. Also, Tim has quite a bit of experience working on bleeding-edge C and C++ performance code, so there is a good chance he did account for this. You can ask him.


How common is it, actually? And on which platforms/architectures? It was a common problem back when most code was compiled for 32-bit x86, since exceptions weren't designed to be zero-cost in the common ABIs for that architecture.

For what it's worth, the article itself has this bit:

"Thanks to the zero-cost exception model used in most C++ implementations (see section 5.4 of TR18015), the code in a try block runs without any overhead."


It has been at least a 10% effect in the last two things I profiled, which were a simple software rasteriser and a distributed key-value store. The other significant benchmarking gotchas I see are: 1) the CPU being in a sleep state when the test starts, so it takes a while to ramp up to full speed, and 2) other things running on the machine causing interference. But those two are easy to work around compared to the alignment-sensitivity problem.


I didn't mean to disagree with the conclusion. My point was more that it is hard to be confident about the causes of results like this. It would be great if we had tools that could randomise alignments and so on, so we could take lots of samples and gain more confidence. As far as I know those tools don't exist, so we just have to rely on experience.




