How are you going to do linear interpolation, let alone exponential growth, without floating-point? On August 1 you made $1000 and on August 7 you made $1100; assuming that growth is linear, how much are you going to make on August 20?
If you have some routines for dividing fixed-point numbers: first, why do you believe they have more accuracy than floating point (especially if you're dividing by numbers that don't evenly divide a power of 10, as in the example above - don't you have the same problem as with floating point?), and second, why do you believe they're more correct than floating point? Did you write a test suite? Do you know what needs to be tested? What prevented you from writing a test suite for the floating-point calculations you were originally going to do?
> How are you going to do linear interpolation, let alone exponential growth, without floating-point?
Fixed point is one answer, but I see you know that already. I don’t know what the banks use for interest, but I can guarantee that it’s not float32.
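For illustration, here is a minimal sketch of that interpolation done entirely in integer cents (Python; the helper names and the half-even rounding choice are my own assumptions, not anything the GP described, and it handles non-negative values only for brevity). The single division is the only place rounding happens, and the policy is explicit:

```python
def div_round_half_even(num: int, den: int) -> int:
    # Integer division with round-half-to-even; assumes num >= 0 and den > 0.
    q, r = divmod(num, den)
    if 2 * r > den or (2 * r == den and q % 2 == 1):
        q += 1
    return q

def lerp_cents(y0: int, y1: int, x0: int, x1: int, x: int) -> int:
    # Linear interpolation/extrapolation with amounts in integer cents and
    # x values in whole days; exact until the one explicit rounding step.
    return div_round_half_even(y0 * (x1 - x0) + (y1 - y0) * (x - x0), x1 - x0)

# Aug 1 -> $1000.00, Aug 7 -> $1100.00, linear growth, projected Aug 20:
print(lerp_cents(100_000, 110_000, 1, 7, 20))  # 131667 cents, i.e. $1316.67
```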
> If you have some routines for dividing fixed-point numbers, one, why do you believe they have more accuracy than floating point
Fixed point routines do not have more best-case accuracy than float, given the same number of bits ... but float32 definitely has a worst-case accuracy that is very very bad compared to a fixed point number.
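To make that worst case concrete (using numpy.float32 here only because Python has no native 32-bit float; the dollar figures are made up): float32 carries roughly 7 significant decimal digits, so around $100 million it can no longer represent individual cents at all, while 64-bit integer cents stay exact far beyond that.

```python
import numpy as np

balance = np.float32(100_000_000.00)   # $100,000,000.00 as a 32-bit float
balance = balance + np.float32(0.01)   # try to credit one cent
print(balance)                         # prints 1e+08: the cent is gone

# The same operation in integer cents (fits comfortably in 64 bits) is exact:
cents = 10_000_000_000 + 1             # $100,000,000.00 plus one cent
print(cents / 100)                     # 100000000.01
```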
> why do you believe they’re more correct than floating point?
Can’t speak for the GP, but I think asking about correctness is a straw man. The issue is really about safety, predictability, and controllability. Floating point can be very accurate, but guaranteeing that accuracy is notoriously difficult, and it depends very much on the unknown ranges of your intermediate calculations. Fixed point, on the other hand, never changes accuracy as you go, so you don’t get surprises.
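One way to quantify that predictability point: the absolute gap between adjacent floating-point values (the ULP) grows with the magnitude of the value, so the rounding error you can incur depends on how large your intermediate results happen to get, whereas integer cents have the same $0.01 granularity at every magnitude. A quick sketch using Python's math.ulp (the dollar amounts are arbitrary, and it uses 64-bit floats just to show the effect exists even for doubles):

```python
import math

# Gap between adjacent representable float64 values at various magnitudes:
for dollars in (1.0, 1e6, 1e12, 1e15):
    print(f"near ${dollars:,.2f} the float64 grid spacing is {math.ulp(dollars):.3g}")

# Near $1e15 the spacing is already 0.125 (coarser than a cent), while
# integer cents keep a fixed 0.01 granularity regardless of magnitude.
```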
Yeah, definitely. My list certainly isn’t exhaustive; I’m just trying to express that use of floating point isn’t only a matter of correctness, and that there are lots of other factors. The dev time cost of using floats correctly for financial calculations is higher than the dev time cost of using ints or fixed-point numbers.