I think this is kind of funny. Using floats and rounding to some fixed precision before rounding to penny precision is exactly the same as using integer multiples of whatever unit you're first rounding the float to. So you're not really using floating point anymore; you're just using a float type to represent fixed-point integers.

The problem, as I see it, is that you have to be careful to round to the fixed precision in the right places. The easiest way not to miss one is to round after every operation, and the best way to enforce that is to abstract your money type into a class. So now the real comparison is between a fixed-point class holding a float and one holding an int.

In my opinion, holding an int is the better option, because it's slightly more obvious when the values overflow the integer's range than when the float's precision drops below your fixed-point rounding delta. In either case you need to add some error handling, and I also prefer branching on ints over floats (mostly because of a big perf difference on the CPU I used to work on).
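To make the comparison concrete, here's a minimal sketch of the int-backed version: a class that stores amounts as a signed 64-bit count of fixed-precision units and rounds back to that precision after every operation. The name `Money`, the `SCALE` of 1/10000 of a currency unit, and the overflow checks are my own illustrative choices, not anything from the code under discussion.

```cpp
// A sketch of a fixed-point money class backed by int64_t.
// build: g++ -std=c++17 money_sketch.cpp
#include <cstdint>
#include <cmath>
#include <iostream>
#include <limits>
#include <stdexcept>

class Money {
public:
    // Resolution: 1/10000 of a currency unit, i.e. hundredths of a cent.
    static constexpr int64_t SCALE = 10000;

    static Money from_units(double units) {
        // Round exactly once on the way in; after this the value is integral.
        return Money(to_raw(units * SCALE));
    }

    Money plus(const Money& other) const {
        // Explicit overflow check: the failure is obvious and cheap to branch
        // on, unlike quietly running out of float precision.
        if ((other.raw_ > 0 && raw_ > std::numeric_limits<int64_t>::max() - other.raw_) ||
            (other.raw_ < 0 && raw_ < std::numeric_limits<int64_t>::min() - other.raw_)) {
            throw std::overflow_error("Money addition overflowed");
        }
        return Money(raw_ + other.raw_);
    }

    // Multiply by a rate (tax, interest, ...) and immediately round back to
    // the fixed precision -- the "round after every operation" rule.
    Money times(double rate) const {
        return Money(to_raw(static_cast<double>(raw_) * rate));
    }

    double units() const { return static_cast<double>(raw_) / SCALE; }

private:
    explicit Money(int64_t raw) : raw_(raw) {}

    static int64_t to_raw(double scaled) {
        // Guard against values that don't fit in int64 before rounding.
        if (!(std::fabs(scaled) < 9.2e18)) {
            throw std::overflow_error("Money value out of range");
        }
        return static_cast<int64_t>(std::llround(scaled));
    }

    int64_t raw_;  // amount in 1/SCALE currency units
};

int main() {
    Money subtotal = Money::from_units(19.99).plus(Money::from_units(5.01));
    Money total = subtotal.plus(subtotal.times(0.0825));  // add 8.25% tax
    std::cout << total.units() << "\n";                   // 27.0625
}
```

The float-backed version would look identical from the outside; the only difference is that its internal failure mode (precision loss once the magnitude gets large) is harder to detect than the explicit range check above.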