Interestingly, programming languages vary in how much precision they allow in printed floating-point fractions. You would think they’d all be the same, allowing you to print as many decimal places as you ask for. After all, a floating-point fraction is a dyadic fraction, and a dyadic fraction a/2^n equals a·5^n/10^n; its decimal expansion terminates, with exactly as many decimal places as it has bits in its fractional representation.
Consider the dyadic fraction 5,404,319,552,844,595/2^53. Its decimal expansion is 0.59999999999999997779553950749686919152736663818359375, and its binary expansion is 0.10011001100110011001100110011001100110011001100110011. Both are 53 digits long. The ideal programming language lets you print all 53 decimal places, because all are meaningful. Unfortunately, many languages won’t let you do that; they typically cap the number of decimal places at between 15 and 17, which for our example might be 0.59999999999999998.
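You can check this for yourself: Python is one language that does let you print all 53 places, because its float formatting is correctly rounded at whatever precision you request. A short sketch, using only the standard library:

    from decimal import Decimal

    x = 0.6  # stored as the nearest double: 5404319552844595 / 2**53

    # Correctly rounded formatting: asking for 53 places reproduces
    # the exact value of the stored double.
    print(f"{x:.53f}")
    # 0.59999999999999997779553950749686919152736663818359375

    # Decimal(float) converts the double exactly; same 53 digits.
    print(Decimal(x))

    # Capping at 17 places rounds away the rest of the expansion.
    print(f"{x:.17f}")  # 0.59999999999999998

    # Recover the dyadic fraction itself; the denominator is 2**53.
    print(x.as_integer_ratio())  # (5404319552844595, 9007199254740992)

The last call recovers exactly the fraction above, 5,404,319,552,844,595/2^53, confirming that every one of the 53 decimal digits is meaningful.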