Superstitions about decimal floating point are far more prevalent, even, than superstitions about binary floating point.
Generally, whatever you think you are getting from decimal floating point, you very, very likely are not.
There is exactly one place where decimal floating point is the Right Thing: when interoperating with another system that is already using it. Those exist because others' superstitions have been locked in, sometimes even into regulatory frameworks. Decimal floating point is inherently less accurate than binary on any lossy computation: at the same storage width, decimal digits encode less information and the worst-case spacing between adjacent representable values is coarser, so you need a lot more digits to maintain the same result accuracy. This is why 128-bit decimal is common.
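That spacing argument is easy to check directly. Here is a minimal sketch in Python (the text doesn't prescribe a language; a 16-digit decimal context stands in for decimal64) comparing the gap between adjacent representable values just above 1.0. Every lossy operation has to round its result onto that grid, so the coarser grid loses more.

```python
import math
from decimal import Decimal, getcontext

# Emulate decimal64's working precision: 16 significant digits.
getcontext().prec = 16

# Spacing between adjacent representable values just above 1.0.
# binary64 spaces [1, 2) uniformly at 2**-52; a 16-digit decimal format
# spaces all of [1, 10) at 1e-15, so its worst-case rounding error near
# the bottom of a decade is roughly 4.5x larger for comparable storage.
print(math.ulp(1.0))                        # 2.220446049250313e-16
print(Decimal(1).next_plus() - Decimal(1))  # 1E-15
```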
It is generally pointless to argue with anybody who thinks decimal computations are better. If reason mattered to them, they wouldn't be stuck on the idea. So, just roll your eyes and, if they have any authority, use a library. Performance will suck, but not so badly as you might guess, and you can spend the time until release circulating your CV without panic.
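If you do land in that interop corner, "use a library" can be as mundane as Python's standard decimal module. The sketch below is illustrative only: the function name, figures, and round-half-even-to-cents rule are made up; whatever system you're interoperating with dictates the actual precision and rounding rules.

```python
from decimal import Decimal, ROUND_HALF_EVEN

CENT = Decimal("0.01")

def line_total(unit_price: str, quantity: int, tax_rate: str) -> Decimal:
    """Hypothetical invoice line where the counterparty mandates decimal
    arithmetic rounded half-even to whole cents."""
    price = Decimal(unit_price)           # construct from strings, never from binary floats
    subtotal = price * quantity
    total = subtotal * (1 + Decimal(tax_rate))
    return total.quantize(CENT, rounding=ROUND_HALF_EVEN)

print(line_total("19.99", 3, "0.0825"))   # 64.92
```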