An Hour of Code… A Lifelong Lesson in Floating-Point

The 2015 edition of Hour of Code includes a new blocks-based, Star Wars-themed coding lesson. In one of the exercises — a simple sprite-based game — you are asked to code a loop that adds 100 to your score every time R2-D2 encounters a Rebel Pilot. But instead of 100, I plugged in a floating-point number; I got the expected “unexpected” results.
Floating-point score in Star Wars Hour of Code exercise

(The lesson comes in two forms: a blocks version and a JavaScript version. The JavaScript version, furthermore, has a blocks mode which allows you to assemble JavaScript with blocks. This article is about the JavaScript blocks mode version.)

The Code

Here is a screenshot of puzzle 9, where I’ve coded the value 0.1 for the score increment:
My take on puzzle 9 in the Star Wars Hour of Code lesson (click image to enlarge)

Here specifically is the code block:
My code for puzzle 9

The Results

Here’s the score after the first encounter (looks OK):
Score after first encounter

Here’s the score after the second encounter (looks OK):
Score after second encounter

Here’s the score after the third encounter (what on earth?):
Score after third encounter

Bingo! A floating-point gotcha! (If this is surprising, you’re not alone.)

What’s Going On?

The root of the problem is that 0.1 is not 0.1 in binary floating-point. Specifically, in double-precision binary floating-point, it is

0.1000000000000000055511151231257827021181583404541015625

Decimal numbers such as 0.1 do not have exact binary equivalents.

Furthermore, every time you add this to the score, rounding may occur (double-precision floating-point has a fixed precision of 53 bits). The internal double-precision score after each encounter is as follows (you can do the binary additions and rounding yourself, using my decimal/binary converter and binary calculator):

  1. 0.1000000000000000055511151231257827021181583404541015625
  2. 0.200000000000000011102230246251565404236316680908203125
  3. 0.3000000000000000444089209850062616169452667236328125
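The accumulation is easy to reproduce in plain JavaScript. In this sketch (assuming a modern engine, since `toFixed` with more than 20 digits requires ES2018), `toFixed(55)` reveals the exact internal value of the score after each encounter:

```javascript
// Repeatedly add 0.1, as the game's scoring loop does, and print
// the exact decimal expansion of the internal double after each step.
let score = 0;
for (let i = 1; i <= 3; i++) {
  score += 0.1;
  console.log(`Encounter ${i}: ${score.toFixed(55)}`);
}
// Encounter 1: 0.1000000000000000055511151231257827021181583404541015625
// Encounters 2 and 3 print the values listed above (padded with trailing zeros).
```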

One strategy for printing double-precision floating-point numbers is to round them to 17 significant digits; that way, they are guaranteed to round-trip convert back to the same double-precision number. But that is not what is done here; that would make the scores 0.10000000000000001, 0.20000000000000001, and 0.30000000000000004, respectively.

Another strategy for printing floating-point numbers is to round them to the shortest number that will round-trip. In this case, that would be 0.1, 0.2, and 0.30000000000000004. That’s what’s happening here.

You can show this by using my decimal to floating-point converter. 0.10000000000000001 and 0.1 both convert to the original floating-point number, as do 0.2 and 0.20000000000000001. However, 0.30000000000000004 converts to the original floating-point number, but 0.3 does not; it converts to 0.299999999999999988897769753748434595763683319091796875.

Other “Anomalies” You Can Force

I entered a value of 1e308 as the score increment and the game displayed 1e+308 after the first encounter, as expected. After the second and third encounters, it displayed Infinity, also as expected: adding 1e308 again gives 2e308, which overflows, since the maximum value of a double-precision number is approximately 1.8e308. This is not so much an anomaly as a demonstration of the limits and properties of floating-point.
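The same overflow is trivial to reproduce outside the game; a sketch:

```javascript
// A score increment near the top of the double-precision range
// overflows to Infinity on the second addition.
let score = 0;
for (let i = 1; i <= 3; i++) {
  score += 1e308;
  console.log(score);
}
// 1e+308
// Infinity
// Infinity
```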

In other exercises, where you can run up a higher score, you can see similar printing anomalies using the 0.1 score increment. The next one occurs after the eighth encounter: the score prints as the 16-digit value 0.7999999999999999, the shortest string that round-trips.

How Can You Fix This?

You can’t fix this, because that’s how the underlying JavaScript code works, and because that’s how the underlying floating-point implementation works. I suppose you could prevent entering floating-point numbers — but then where would the life lesson be? 🙂


One comment

  1. Hi Rick,
    The relative DP FP representation error of 1E-5 is even worse than that of 1E-1: 8.18E-17 vs. 5.55E-17. Have you performed numerical experiments on the accumulated error in popular (P)DE solvers containing statements like t = t + dt? Do you remember the confusion caused by the fact that the SP FP value nearest to pi is bigger than the true value of pi, but the nearest DP FP value is smaller? That caused “nice” effects when changing from SP to DP about 20 years ago…
    All the best
