What is the binary floating-point format used by C++ on Intel-based systems?
I am curious about the binary format of the single and double types used by C++ on Intel-based systems.
I have avoided using floating-point numbers in cases where the data must potentially be read or written by another system (i.e. files or the network). I realize that I can use fixed-point numbers instead, and that fixed point is more precise, but I'm curious about the floating-point format.
The floating-point format is determined by the processor, not the language or compiler. Almost all processors these days (including all Intel desktops) either have no floating-point unit or have one that is IEEE 754 compliant. You get two or three different sizes (Intel with SSE offers 32, 64, and 80 bits), and each one has a sign bit, an exponent, and a significand. The number represented is usually given by the following formula:

sign * (2**(E-k)) * (1 + S / (2**k'))

where k' is the number of bits in the significand, and k is a bias constant near the middle of the exponent range. There are special representations for zero (plus and minus zero), as well as for infinities and not-a-number (NaN) values.
There are certain quirks; for example, the fraction 1/10 cannot be represented exactly as an IEEE binary floating-point number. For this reason, the IEEE standard also defines decimal floating-point formats, but those are used primarily by pocket calculators rather than general-purpose computers.
Recommended reading: David Goldberg, What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Wikipedia has a reasonable summary; see http://en.wikipedia.org/wiki/IEEE_754 .
But if you want to transfer numbers between systems, you should avoid doing it in binary. Either use middleware like CORBA (just kidding, folks), Tibco, etc., or fall back on that old favorite, a textual representation.
Intel's representation is IEEE 754 compliant. You can find information at http://download.intel.com/technology/itj/q41999/pdf/ia64fpbf.pdf .
As other posters have pointed out, there is plenty of information about the IEEE format used by every modern processor, but that is not where your problems end.
You can rely on any modern system using the IEEE format, but you will need to keep an eye on endianness. Look up "endianness" on Wikipedia (or elsewhere). Intel systems are little-endian; many RISC processors are big-endian. Converting between them is trivial, but you need to know which kind you are dealing with.
Traditionally, people use big-endian format for transmission. Sometimes people include a header indicating the byte order they are using.
If you want absolute portability, the simplest approach is to use a text representation. However, this can be quite verbose for floating-point numbers if you want full precision, e.g. 0.1234567890123456e+123.
Note that decimal floating-point constants can convert to different binary floating-point values on different systems (or even with different compilers on the same system). The difference would be negligible, perhaps a couple of parts in 2^54 for a double, but it is a difference nonetheless.
Use hexadecimal floating-point constants when you want to guarantee the same binary floating-point value on every platform.