I’ve been teaching my sons Java by watching the Udacity course “Introduction to Programming: Problem Solving with Java” with them. In lesson four, we were introduced to the vagaries of floating-point arithmetic. The instructor talks about how this calculation
double pennies = 4.35 * 100;
produces 434.99999999999994 as its output.
I told my kids “it has to do with binary numbers” and “I write about this all the time on my blog”. Now of course I know this trips people up, but it really struck me to see the reaction firsthand. (I have long since forgotten my own first reaction.) It really hit home that thousands of new programmers are exposed to this every day.
The solution the instructor first proposed was to convert the result to an integer:
int pennies = (int) (4.35 * 100);
But as he notes, this doesn’t work — it produces 434; you must round the result:
int pennies = (int) Math.round(4.35 * 100);
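For reference, here's the whole progression as one runnable program (the class name is mine); note that Math.round(double) returns a long, which is why the cast to int is still needed:

public class Pennies {
    public static void main(String[] args) {
        System.out.println(4.35 * 100);                   // prints 434.99999999999994
        System.out.println((int) (4.35 * 100));           // prints 434 (cast truncates toward zero)
        System.out.println((int) Math.round(4.35 * 100)); // prints 435 (round to nearest, then cast)
    }
}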
At this point my ten-year-old asked “Why can’t Java do that?”
Good question.
Why CAN’T Java Do That?
The reason is largely historical. Java syntax is based on C, and C was invented when computers were primitive and there weren’t millions of mainstream programmers. It was OK — probably even necessary — to expose the native workings of the machine to its sophisticated users.
If we were to design a new programming language today, would we make decimal arithmetic the default? Floating-point is really an optimization, for people who know what they’re doing. Make the experts ask for it explicitly. Make the experts use a floating-point class and code things like
if (num.compareTo(zero) > 0)
and
num = num.add(num1.multiply(two));
Let the rest of us use built-in decimal and instead code
if (num > 0)
and
num = num + num1 * 2;
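Java already has a decimal class, BigDecimal, and using it reads just like the “expert” floating-point code imagined above. A minimal sketch (the class name is mine) that gets the pennies example exactly right:

import java.math.BigDecimal;

public class DecimalPennies {
    public static void main(String[] args) {
        // Construct from strings: new BigDecimal(4.35) would capture
        // the binary double approximation, not the decimal value 4.35.
        BigDecimal price = new BigDecimal("4.35");
        BigDecimal hundred = new BigDecimal("100");
        System.out.println(price.multiply(hundred)); // prints 435.00
    }
}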
By The Way, Why Am I Teaching Them Java?
Java is the first programming language I am teaching them (I’m not counting the hour or so we spent last year with Scratch, which we found too limiting). Don’t take this as an endorsement of Java as a first programming language. I picked it primarily because they love Minecraft and wanted to know how to make mods for it. (I don’t know if they’ll ever get to making mods, but at least they will have learned to program.) They also know that Android apps are written in Java, which was my secondary motivation.
I understand the frustration, but making the syntax horrible and ugly for the experts, who must use the language day in and day out, is the wrong way to go about it. This sort of problem is what operator overloading (featured prominently in many languages, with the notable exception of Java) was designed for. That way, the “ugliness” comes out in the types (for which automatic inference is becoming the norm), rather than in the method names.
Second, decimal arithmetic is not really what you want. It slows down the computation with only the (slight) benefit of matching the pen-and-paper result for certain sets of numbers.
Instead, we should use traditional binary (floating-point) arithmetic with precision tracking, as seen in Mathematica (and, I’m sure, many other math packages). This can still give you a result that’s even BETTER than the pen-and-paper result (albeit with some post-processing), while also giving you an idea of how accurate the answer actually is. Think “significant digits”, but fancier and more accurate.
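Joshua’s precision-tracking idea can be caricatured in a few lines of Java. This is emphatically not Mathematica’s algorithm, just a toy sketch (all names are mine) of carrying an error bound alongside each value and propagating it through every operation:

public class Approx {
    final double value;  // the computed value
    final double error;  // a bound on its absolute error

    Approx(double value, double error) {
        this.value = value;
        this.error = error;
    }

    Approx add(Approx o) {
        double v = value + o.value;
        // The operands' errors add, plus the rounding error of the sum itself.
        return new Approx(v, error + o.error + Math.ulp(v) / 2);
    }

    Approx multiply(Approx o) {
        double v = value * o.value;
        // |x*y - (x+dx)*(y+dy)| <= |x|*dy + |y|*dx + dx*dy, plus rounding.
        double e = Math.abs(value) * o.error + Math.abs(o.value) * error
                 + error * o.error + Math.ulp(v) / 2;
        return new Approx(v, e);
    }

    public String toString() { return value + " +/- " + error; }

    public static void main(String[] args) {
        // 4.35 arrives carrying up to half an ulp of representation error.
        Approx price = new Approx(4.35, Math.ulp(4.35) / 2);
        Approx hundred = new Approx(100.0, 0.0);
        System.out.println(price.multiply(hundred)); // value near 435, with a tiny bound
    }
}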
All tools in this world can surely be abused. Using binary floating-point variable types to solve problems in financial accounting imho clearly abuses a format that was invented to solve problems in science and engineering. I am very happy to notice that there are instructors who mention the problems associated with FP right at the beginning of a programming course, so that nobody gets hurt unexpectedly by a thing Mr. Rumsfeld once called an “unknown unknown”.
At the home-office level, I simply change the currency unit to one hundredth of the smallest officially available coin. Then rounding is necessary only if rates or proportions need to be calculated. But of course, there is good reason for the more appropriate data types used by Oracle, SAP, and so on, as well as the currency sub-type found in VBA.
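Georg’s scheme fits in a few lines of Java. This sketch assumes U.S. currency, so one unit is a hundredth of a cent; the names and the 3.75% rate are mine, for illustration:

public class Microcents {
    static final long UNITS_PER_DOLLAR = 100 * 100; // cents * hundredths of a cent

    public static void main(String[] args) {
        long price = 43_500;      // $4.35 = 43,500 hundredths of a cent
        long total = price * 100; // exact: integer arithmetic never drifts
        System.out.println(total / (double) UNITS_PER_DOLLAR); // prints 435.0

        // Rounding enters only when a rate or proportion is applied:
        long interest = Math.round(total * 0.0375); // a 3.75% rate
        System.out.println(interest);               // 163125 units = $16.3125
    }
}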
@Joshua,
I know what you’re saying, but it would seem that in the present day, binary is something we could shield programmers from. We’ve built layer upon layer to hide just about everything else, so why not binary? Experts could subvert the default decimal layer to get at binary floating-point.
As far as decimal helping only a “certain set” of numbers, I assume you mean decimal values, like 0.45, and not fractions, like 1/3. But I’d say anyone who attempts to program a computer has used a calculator, and knows that 1/3 is only approximately 0.3333…. I would think then they’d not be surprised to see a computer give inexact answers when fractions are used in calculations. Contrast this with 4.35 * 100 = 434, which is completely unexpected.
@Georg,
You and I know that floating-point was not designed for financial applications. But we’re the “experts”. I’m looking at this from the perspective of a novice programmer — a guy who picks up a Java book and starts coding. A newbie can plod along and teach himself how to program fairly easily because it’s very logical and the output makes sense; well, except that 4.35*100 equals 434.
Rick, I’ve always wondered about programming tutorials that treat the effects of integer overflow extensively, but do not mention at all that the single/double/quadruple binary floating-point number corresponding to the numeral string 0.1 does not exactly match its mathematical value. Imho, the big misconception that BFP variable values are representatives of the mathematically well-defined set of real numbers is created during secondary education, and is hard to correct at university.
But there is hope: decimal FP is part of the latest version of the IEEE FP standard, and Intel already provides DFP math libraries. For the record: a few seconds ago “Java” & “BigInteger” yielded 384,000 Google hits, but “Java” & “BigDecimal” won with 531,000 hits. So things seem to be changing for the better…
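The 0.1 mismatch Georg describes is easy to make visible in Java, since BigDecimal’s double constructor converts the binary value exactly (a small sketch; the class name is mine):

import java.math.BigDecimal;

public class PointOne {
    public static void main(String[] args) {
        // The double literal 0.1 is the nearest binary fraction, not 1/10:
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}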
An alien with a base 3 number system, looking at human math, might just as easily say: why, in decimal, can’t they cope with base 3 0.1 (0.3333333… in decimal) + 0.1 + 0.1? They seem to always get 0.9999999. Terrible!
Decimal isn’t fundamentally different from binary (or base 3, or base 4): it has some fractions it can represent exactly (1/10 is representable in decimal but not in binary) and some it can’t (1/3 is representable in base 3 but not in decimal).
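Richard’s analogy is easy to replay from the other side, using binary’s problem fraction in Java (a minimal demo; the class name is mine):

public class AlienMath {
    public static void main(String[] args) {
        // Binary's 0.1 behaves like the alien's base-3 0.1 does in decimal:
        System.out.println(0.1 + 0.1 + 0.1);        // prints 0.30000000000000004
        System.out.println(0.1 + 0.1 + 0.1 == 0.3); // prints false
    }
}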
@Richard,
Maybe the title should have been “Floating-Point Is So Insane Even a Ten-Year-Old Earthling Can See It”. 🙂