No, we aren’t using computers with logic gates that have 10 discrete states. Even if you’re programming in decimal, the underlying representation is still binary. But there are numerical differences between representing decimal values as rationals with binary-integer numerators and denominators, versus as floating point with a binary mantissa and exponent.
> No, we aren’t using computers with logic gates that have 10 discrete states. Even if you’re programming to decimal, the underlying representation is still binary.
That is not the sense of "computers use binary" that justifies the conclusions drawn from the phrase in the post being responded to. So while true on its own, in context it is an example of the fallacy of equivocation.
> it doesn't follow that you have binary fractions
Is that in reference to something I said earlier, or are we just falling deeper down a rabbit hole of hair-splitting?
The point I was driving at is that a given number we think about in decimal, like 12.3456, can be represented in binary in more than one way: one is rationals backed by integers for numerator and denominator, another is floating point with a mantissa and exponent. Which way is chosen ultimately affects the outcomes of computations. I was trying to explain why OP was seeing unexpected results. I don't think I've been mistaken in the explanations...
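A quick sketch of the difference, using Python's standard-library `fractions.Fraction` as a stand-in for the rational representation (the language choice is just for illustration):

```python
from fractions import Fraction

# 12.3456 as a rational: exact, since it is just 123456/10000, reduced.
r = Fraction(123456, 10000)          # == Fraction(7716, 625)

# 12.3456 as a binary float: the nearest representable double, not exact,
# because the reduced denominator 625 is not a power of two.
f = 12.3456
print(Fraction(f) == r)              # False: the float only approximates it

# The choice of representation shows up in computed results:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
print(0.1 + 0.2 == 0.3)                                      # False
```

Same nominal inputs, different representations, different answers to the same comparison, which is the kind of surprise OP was hitting.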
The issue is that you are missing the possibility of decimal floating point, which is also defined in IEEE 754 but which is typically not exposed as a primitive in programming languages. Imagine a BCD mantissa. That can encode 12.3456 exactly.
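To make that concrete: Python's standard-library `decimal` module implements decimal floating point (per the General Decimal Arithmetic spec, aligned with the IEEE 754-2008 decimal formats), and it holds 12.3456 exactly when you construct it from the digits rather than from a binary float:

```python
from decimal import Decimal

# Constructed from the digit string, the decimal mantissa is exact.
exact = Decimal("12.3456")
print(exact.as_tuple())    # digits (1, 2, 3, 4, 5, 6) with exponent -4

# Constructed from a binary float, we instead capture the nearest
# binary double, which is not exactly 12.3456.
via_float = Decimal(12.3456)
print(exact == via_float)  # False
```

The second constructor isn't a bug in `decimal`; it's faithfully showing what the binary double actually held.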
So it's not "since computers use binary", but "since we use binary floating point". There are fair arguments that we made that choice because of efficiency concerns driven by the fact that "computers use binary", but I don't think that really helps the explanation.
Note that the original objection was picking at a nit that I am not sure I would have picked at, I am just seeking to clarify.