Recently I showed that programming languages vary in how much precision they allow in printed floating-point fractions. Not only do they vary, but most don't meet my standard: printing, to full precision, decimal values that have exact floating-point representations. Here I'll present a similar study for floating-point integers, which produced similar results.
For each of the same language implementations I tested before, I ran tests to determine the largest integer each can print. I tested integers of two forms: those with only one 1 bit in their significand, and those with 53 1 bits in their significand (I am counting the hidden 1 bit as part of the significand). These represent the two extremes of the number of significant bits in a floating-point value.
An integer with only one 1 bit in its significand is a power of two. An integer with all 1 bits in its significand is either a Mersenne number (a number of the form 2^n - 1), if its floating-point exponent is 52 or less, or it's what I'll call a shifted Mersenne number (a Mersenne number times a power of two), if its exponent is between 53 and 1023.
The largest power of two that fits in a double is 2^1023, a 308-digit number.
The largest integer that fits in a double is 2^1024 * (1 - 2^-53), or equivalently, 2^1024 - 2^971. This is the decimal value of the floating-point number with an exponent of 1023 and a significand of 53 1 bits. It can be rewritten as (2^53 - 1) * 2^971, which shows it is the largest representable Mersenne number shifted left 971 bits; in other words, it is the largest representable shifted Mersenne number. It is a 309-digit number.
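Here's a minimal C sketch (not one of my test programs) that constructs and prints both extremes; ldexp() builds 2^1023 directly, and DBL_MAX is exactly (2^53 - 1) * 2^971:

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double largest_power = ldexp(1.0, 1023); /* 2^1023 */
    double largest_int   = DBL_MAX;          /* (2^53 - 1) * 2^971 */

    printf("%.0f\n", largest_power); /* 308 digits when printed in full */
    printf("%.0f\n", largest_int);   /* 309 digits when printed in full */
    return 0;
}
```

Compiled with GCC on Linux (gcc extremes.c -lm), both values print to full precision; under most of the environments in the table below, you'll get rounded approximations instead.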
Ideally, all languages should be able to print both of these integers. But as you’ll see, most can’t.
To keep this article short, I'll present just the results, not the code samples I used to determine them (as in the floating-point fraction study, I used "%f" formatting or its equivalent):
| Language | OS | Largest Power of Two | Largest (Shifted) Mersenne Number |
|---|---|---|---|
| GCC (C) | Linux | 2^1023 | (2^53 - 1) * 2^971 |
| Visual C++ | Windows | 2^56 | 2^53 - 1 |
| Java | Windows | 2^57 | 2^53 - 1 |
| Visual Basic | Windows | 2^49 | 2^49 - 1 |
| PHP | Windows | 2^1023 | (2^53 - 1) * 2^971 |
| VBScript (IE) | Windows | 2^49 | 2^49 - 1 |
| ActivePerl | Windows | 2^56 | 2^53 - 1 |
| Perl | Linux | 2^1023 | (2^53 - 1) * 2^971 |
| Python | Linux | 2^166 | 2^53 - 1 |
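To give a flavor of what such a test looks like (this is only a sketch, not the harness behind the table, and the function name is mine), here's one way to probe a C runtime. A power of two never ends in 0 in decimal, so a trailing zero in the printed value betrays rounding:

```c
#include <math.h>
#include <stdio.h>
#include <string.h>

/* Returns nonzero if the runtime prints 2^e with a trailing zero,
   which a power of two can never actually have. */
static int prints_with_trailing_zero(int e)
{
    char buf[400]; /* 2^1023 needs 308 digits plus formatting slack */
    snprintf(buf, sizeof buf, "%.0f", ldexp(1.0, e));
    return buf[strlen(buf) - 1] == '0';
}

int main(void)
{
    for (int e = 1; e <= 1023; e++)
        if (prints_with_trailing_zero(e))
            printf("2^%d is not printed exactly\n", e);
    return 0;
}
```

A clean run is necessary, though not by itself sufficient, evidence of exact printing; on GCC/Linux this prints nothing.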
As I anticipated based on their handling of floating-point fractions, GCC and Perl on Linux print all integers to full precision. A surprise is that PHP does as well, which makes its lack of precision for dyadic fractions all the more curious.
Conclusion from the Two Studies
If you want to study binary numbers, you'll want to run your programs on Linux, using either GCC or Perl. Both will print exactly what's stored in your floating-point variables: integer, fraction, or both.
You could use this capability to write a program that prints all 2,098 powers of two that fit in a double. You could study the program's output knowing the numbers are printed faithfully. There'd be no anomalies like those in languages that print approximations: for example, no positive powers of two ending in '0', and no negative powers of two ending in '3'. You will have removed one barrier to understanding floating-point binary numbers. A sketch of such a program appears below.
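Here's what that program might look like, again in C and assuming glibc's printf (which rounds correctly at any requested precision). The exponents run from -1074, the smallest subnormal power of two, through 1023; a negative power 2^-n has exactly n fractional digits:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* All 2,098 powers of two representable in a double:
       exponents -1074 (smallest subnormal) through 1023. */
    for (int e = -1074; e <= 1023; e++) {
        double p = ldexp(1.0, e);
        if (e >= 0)
            printf("2^%d = %.0f\n", e, p);
        else
            printf("2^%d = %.*f\n", e, -e, p); /* 2^-n needs n fractional digits */
    }
    return 0;
}
```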