
Many of your questions are discussed in detail in "The 8087 Primer" [1] but I'll give a quick summary. (I'm not defending their design decisions, just providing what I've read.)

> The 80-bit format has an explicit 1, unlike the normal IEEE float and double.

Apparently the explicit 1 made the hardware much simpler, and with 80 bits it doesn't cost you much to have an explicit 1.

(To explain for others: in the normal float format, the first bit of the mantissa is assumed to be a 1, so this bit is not explicitly stored. This gains you a "free" bit of precision. But then you need special handling for zero and denormalized numbers, because their first bit isn't a 1. The 8087 stores numbers internally in the 80-bit format for increased accuracy. The 80-bit format stores the first bit explicitly, whether it is a 0 or a 1.)
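To make the implicit bit concrete, here's a short Python sketch (mine, not from the 8087 Primer) that pulls apart the IEEE single-precision encoding of 1.5 and reconstructs the value by re-attaching the hidden leading 1:

```python
import struct

# Inspect the IEEE single-precision encoding of 1.5. The stored
# fraction holds only the ".5" part; the leading 1 is implicit.
bits = struct.unpack("<I", struct.pack("<f", 1.5))[0]
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF   # biased by 127
fraction = bits & 0x7FFFFF       # 23 stored bits, implicit leading 1

# Reconstruct: (-1)^sign * (1 + fraction/2^23) * 2^(exponent-127)
value = (-1) ** sign * (1 + fraction / 2 ** 23) * 2.0 ** (exponent - 127)
print(sign, exponent, hex(fraction), value)  # 0 127 0x400000 1.5
```

The 80-bit extended format simply stores that leading bit as the 64th significand bit, so zero and denormals need no special case.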

> The chip has BCD support.

BCD was a big thing in the 1970s; look at all the effort in the 6502 for BCD support, for instance. My hypothesis is that cheap memory killed off the benefit of packing two digits in a byte.
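For anyone who hasn't seen it, "packing two digits in a byte" looks like this in Python (an illustration only, not anything the 6502 or 8087 actually runs):

```python
def to_packed_bcd(n: int) -> bytes:
    """Pack a non-negative integer, two decimal digits per byte."""
    digits = str(n)
    if len(digits) % 2:           # pad to an even number of digits
        digits = "0" + digits
    return bytes((int(digits[i]) << 4) | int(digits[i + 1])
                 for i in range(0, len(digits), 2))

print(to_packed_bcd(1234).hex())  # '1234' -- the hex dump reads as decimal
```

The nice property is visible in the output: each nibble is one decimal digit, so dumps, displays, and decimal round-off all come for free.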

> We got an 80-bit format, but no 16-bit or 128-bit format.

They did a lot of mathematical analysis to decide that 80 bits for the internal number format would result in accurate 64 bit results. Something about losing lots of bits of accuracy during exponentiation, so you want extra bits the size of the exponent.

> Don't we like powers of two?

Looking at old computers has shown me that word sizes that are powers of two are really just a custom, not a necessity. In the olden days, if your missile needed 19 bits of accuracy to hit its target, you'd build a computer with 19 bit words. And if your instruction set fit in 11 bits, you'd use 11 bit instructions. Using bytes and larger powers of two for words became popular after Stretch and the IBM 360, but other sizes work just fine.

[1] https://archive.org/details/8087primer00palm



> They did a lot of mathematical analysis to decide that 80 bits for the internal number format would result in accurate 64 bit results. Something about losing lots of bits of accuracy during exponentiation, so you want extra bits the size of the exponent.

I'm not sure if it was the reason, but 80-bit intermediate values let you compute pow(x,y) as exp(y*log(x)) to full precision for the full range of 64-bit floats.
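You can see the problem in ordinary 64-bit arithmetic: the half-ulp rounding error in log(x) gets multiplied by y, so exp(y*log(x)) computed entirely in doubles typically drifts a few ulps from the true value. A rough Python sketch, using the decimal module as a high-precision reference (the values of x and y are arbitrary):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50            # plenty of guard digits for the reference
x, y = 1.0001, 500_000            # arbitrary example values

naive = math.exp(y * math.log(x))  # every intermediate is a 64-bit double
ref = float(Decimal(x) ** y)       # near-exact reference, rounded at the end

rel_err = abs(naive - ref) / ref
print(naive, ref, rel_err)
```

With 80-bit intermediates the extra 11 significand bits absorb that amplification, which is roughly why "extra bits the size of the exponent" is the rule of thumb.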


This is likely a big part of it, as the OP has indicated elsethread that a table of log constants is used by the 8087 for calculating logarithms and exponentiations.


My hypothesis — perhaps less informed than yours — is that BCD is a huge efficiency win (a couple of orders of magnitude) for conversion to and from decimal, and a slight loss for internal arithmetic, say about 10% inefficiency. And it avoids worries about fraction roundoff.

So if your data comes from humans and ends up with humans, and in between you have fewer than a couple hundred calculations, your program is more efficient with BCD, because you don't have to do a dog-slow repeated long division by 0xa to convert to decimal at the end. Somewhere around 1970 this ceased to be a big enough inefficiency to matter, but tradition and backward-compatibility kept BCD hardware alive for another 10 or 20 years.
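The dog-slow conversion in question is the classic repeated divide-by-ten loop, one hardware division per output digit; with packed BCD the digits are already sitting in the nibbles. A Python sketch of both:

```python
def binary_to_decimal_digits(n: int) -> list:
    """Convert a binary integer to decimal digits by repeated
    division by 10 (0x0A) -- one division per output digit."""
    digits = []
    while True:
        n, d = divmod(n, 10)
        digits.append(d)
        if n == 0:
            break
    return digits[::-1]

def packed_bcd_digits(bcd: bytes) -> list:
    """With packed BCD the digits are already there: just split nibbles."""
    return [d for b in bcd for d in (b >> 4, b & 0x0F)]

print(binary_to_decimal_digits(1234))  # [1, 2, 3, 4]
print(packed_bcd_digits(b"\x12\x34"))  # [1, 2, 3, 4]
```

On a machine where division is dozens of cycles, the first loop dominates any short calculation; the second is just shifts and masks.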


I've come around to the idea that using binary floats in most cases is and was a mistake. Anything that deals with human readable numbers should be a decimal float not binary.


Maybe. It would get rid of some issues but might make people complacent about other issues.

Even then it probably wouldn't be BCD. Too inefficient. The digit-packing versions of IEEE decimals use 10 bits each for blocks of 3 digits. 99.7% efficiency rather than 83% efficiency.
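Those efficiency figures are just information content over bits used, which a couple of lines of Python confirm:

```python
import math

# Storage efficiency = information content / bits spent.
bcd = math.log2(10) / 4      # BCD: one digit in 4 bits   -> ~83.0%
dpd = math.log2(1000) / 10   # DPD: three digits in 10 bits -> ~99.7%
print(f"BCD {bcd:.1%}, DPD {dpd:.1%}")
```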


Oh hey, I had no idea about this, thanks! I wrote a short essay about this idea in July ("BCM: binary-coded milial") but I didn't know it was already widely implemented, much less standardized by the IEEE! Do they use 1000s-complement for negative numbers? How does the carry-detection logic work?

I also thought about excess-12 BCM, which simplifies negation considerably, but complicates multiplication.


There's a sign bit, just like binary floating point. The details about how to do the math are up to the implementer, but I'm sure any bignum algorithm would work fine.

https://en.wikipedia.org/wiki/Decimal_floating_point

I can't say I'm a huge fan of the standard defining two separate encodings, one that uses a single binary field, and one that uses a series of 10-bit fields. There's no way to distinguish them either.


I do remember an ON (original nerd) mentioning tracking down a problem (a blown NAND gate) in a floating point unit. The computer worked fine and passed all the tests, but the accounting department was complaining that their numbers were coming out wrong.

Problem was with calculating decimal floats which only the accounting department programs used because they used BCD for money.


BCD support was more important for languages that are no longer quite as popular. Having native support made the CPUs look that much better on benchmarks for those particular languages resulting in more business.


re: BCD.

In my limited understanding, there are a variety of approaches to storing decimal numbers exactly. Some of the newer ones are apparently less likely to lead to errors in representation but are more computationally intensive than BCD.


Maybe BCD because otherwise converting from binary to decimal is too painful? Lots of these were used in instrumentation with the classic 7-segment displays.




