Incorrectly Rounded Subnormal Conversions in Java

While verifying the fix to the Java 2.2250738585072012e-308 bug I found an OpenJDK testcase for verifying conversions of edge case subnormal double-precision numbers. I ran the testcase, expecting it to work — but it failed! I determined it fails because Java converts some subnormal numbers incorrectly.

(By the way, this bug exists in prior versions of Java — it has nothing to do with the fix.)

The OpenJDK Testcase

The OpenJDK testcase is a Java class that tries to verify that a decimal number “just above” or “just below” a subnormal power of two rounds to that subnormal power of two (the subnormal powers of two are 2^-1023 through 2^-1074). Specifically, it tests the numbers that lie exactly 2^-1075 above and below each of those powers of two. 2^-1075 is half a unit in the last place (ULP): half of 2^-1074, which is the value of a ULP for all subnormal numbers.

The testcase uses Math.scalb() to generate a double representing each power of two. From each double, it uses the BigDecimal class to generate the decimal strings representing the numbers 1/2 ULP above and 1/2 ULP below the power of two. It then calls Double.parseDouble() to convert each pair of strings, making sure they both convert to the original power of two.
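That check can be sketched as follows (my own reconstruction, not the OpenJDK source; the class and method names are made up):

```java
import java.math.BigDecimal;

// For each subnormal power of two, build the decimal strings 1/2 ULP
// above and below it, and verify both parse back to that power of two.
public class SubnormalPowerTest {
    static final BigDecimal HALF_ULP =
            new BigDecimal(Math.scalb(1.0, -1074)).divide(BigDecimal.valueOf(2)); // exact 2^-1075

    // True if both half-ULP neighbors of 2^exp convert to 2^exp.
    static boolean roundsCorrectly(int exp) {
        double p = Math.scalb(1.0, exp);      // the subnormal power of two
        BigDecimal exact = new BigDecimal(p); // its exact decimal expansion
        String above = exact.add(HALF_ULP).toString();
        String below = exact.subtract(HALF_ULP).toString();
        return Double.parseDouble(above) == p && Double.parseDouble(below) == p;
    }

    public static void main(String[] args) {
        // Skip 2^-1074: by half-even, its half-ULP neighbors round away
        // from it (down to zero and up to 2^-1073), so this check would
        // not apply there.
        for (int exp = -1073; exp <= -1023; exp++) {
            if (!roundsCorrectly(exp)) System.out.println("FAIL at 2^" + exp);
        }
    }
}
```

On a correctly rounding JDK this prints nothing; on Java 6 updates 23 and 24 it flags failures.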

Written out in full, subnormal double-precision numbers have 1023 to 1074 decimal places. The BigDecimal strings, which are expressed in scientific notation, represent numbers with 1075 decimal places. They are one decimal place too long, and must be rounded to 1074 places.
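You can check those digit counts directly with BigDecimal. This sketch of mine rebuilds the Example 1 input shown below, 2^-1047 + 2^-1075, and counts its digits:

```java
import java.math.BigDecimal;

// Rebuild 2^-1047 + 2^-1075 exactly and count its decimal digits.
public class HalfUlpDigits {
    static BigDecimal exampleOne() {
        BigDecimal halfUlp = new BigDecimal(Math.scalb(1.0, -1074))
                .divide(BigDecimal.valueOf(2)); // exact decimal value of 2^-1075
        return new BigDecimal(Math.scalb(1.0, -1047)).add(halfUlp);
    }

    public static void main(String[] args) {
        BigDecimal v = exampleOne();
        System.out.println(v.scale());     // decimal places: prints 1075
        System.out.println(v.precision()); // significant digits: prints 760
    }
}
```

Both BigDecimal constructions are exact: `new BigDecimal(double)` gives the full decimal expansion, and dividing 2^-1074 by 2 is a terminating decimal division.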

In terms of binary, subnormal numbers have 1 to 52 significant bits, unlike normalized numbers, which have 53. The BigDecimal strings, when converted to binary, are 53 significant bits long, with bit 53 always 1; they must be rounded to 52 bits. These are tough halfway cases, which Java sometimes rounds incorrectly.

My Testcase

I modified the OpenJDK testcase to generate 1/2 ULP tests for randomly generated subnormal numbers, not just powers of two. I discovered that any decimal number halfway between any two subnormal numbers may be rounded incorrectly.
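The modified test can be sketched like this (the structure and names here are mine, not the OpenJDK code): generate random subnormal doubles, and check that the decimal values a half ULP above and below each one convert as round-half-to-even requires.

```java
import java.math.BigDecimal;
import java.util.Random;

public class RandomSubnormalTest {
    static final BigDecimal HALF_ULP =
            new BigDecimal(Math.scalb(1.0, -1074)).divide(BigDecimal.valueOf(2)); // exact 2^-1075

    // True if d's two half-ULP neighbors both convert per round-half-to-even.
    static boolean convertsCorrectly(double d) {
        BigDecimal exact = new BigDecimal(d);
        double up = Double.parseDouble(exact.add(HALF_ULP).toString());
        double down = Double.parseDouble(exact.subtract(HALF_ULP).toString());
        long sig = Double.doubleToLongBits(d) & ((1L << 52) - 1); // subnormal significand
        boolean even = (sig & 1) == 0;
        // A halfway value goes to whichever neighbor has the even significand.
        return up == (even ? d : Math.nextUp(d))
            && down == (even ? d : Math.nextDown(d));
    }

    public static void main(String[] args) {
        Random r = new Random(42);
        for (int i = 0; i < 1000; i++) {
            long sig = r.nextLong() & ((1L << 52) - 1); // random 52-bit significand
            if (sig == 0) sig = 2;
            double d = Double.longBitsToDouble(sig);    // exponent bits 0: subnormal
            if (!convertsCorrectly(d)) System.out.println("FAIL: " + new BigDecimal(d));
        }
    }
}
```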

Examples of Incorrect Conversions

I ran my testcase on a 32-bit Windows XP system, using Java SE 6 — both updates 23 and 24. Below, I’ll show four examples of incorrectly rounded conversions.

Example 1: Half-ULP Above a Power of Two

Java converts this 1075-digit decimal number (315 leading zeros plus 760 significant digits) incorrectly:


This is the number 2^-1047 + 2^-1075. Shown in unnormalized binary scientific notation, it looks like

0.00000000000000000000000010000000000000000000000000001 x 2^-1022.

(Bit 53, the rounding bit, is the trailing 1.)

By the round-half-to-even rule, it should round down to 2^-1047, which equals

0.0000000000000000000000001000000000000000000000000000 x 2^-1022.

(I included superfluous trailing zeros to pad everything out to 52 bits so that the alignment is obvious.)

Instead, Java converts it to this number, one ULP above its correctly rounded value:

0.0000000000000000000000001000000000000000000000000001 x 2^-1022.
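You don't need to type the 1075 digits to reproduce this case; the input can be rebuilt with BigDecimal (a sketch of mine):

```java
import java.math.BigDecimal;

// Rebuild the Example 1 input, 2^-1047 + 2^-1075, and convert it back.
public class ExampleOne {
    static double parseExampleOne() {
        BigDecimal input = new BigDecimal(Math.scalb(1.0, -1047))
                .add(new BigDecimal(Math.scalb(1.0, -1074)).divide(BigDecimal.valueOf(2)));
        return Double.parseDouble(input.toString());
    }

    public static void main(String[] args) {
        // A correctly rounding converter returns 2^-1047 itself (half-even);
        // Java 6 updates 23/24 returned the next double up instead.
        System.out.println(parseExampleOne() == Math.scalb(1.0, -1047));
    }
}
```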

Example 2: Half-ULP Below a Power of Two

Java converts this 1075-digit decimal number (318 leading zeros plus 757 significant digits) incorrectly:


This is the number 2^-1058 - 2^-1075:

0.00000000000000000000000000000000000011111111111111111 x 2^-1022.

Again, by half-to-even rounding, it should round up to 2^-1058, which equals

0.0000000000000000000000000000000000010000000000000000 x 2^-1022.

Instead, Java converts it to this number, one ULP below its correctly rounded value:

0.0000000000000000000000000000000000001111111111111111 x 2^-1022.

Example 3: Half-ULP Above a Non-Power of Two

Java converts this 1075-digit decimal number (309 leading zeros plus 766 significant digits) incorrectly:


This is the number (2^-1027 + 2^-1066) + 2^-1075:

0.00001000000000000000000000000000000000000001000000001 x 2^-1022.

It should round down, converting to 2^-1027 + 2^-1066, which equals

0.0000100000000000000000000000000000000000000100000000 x 2^-1022.

Instead, Java converts it to this number, one ULP above its correctly rounded value:

0.0000100000000000000000000000000000000000000100000001 x 2^-1022.

Example 4: Half-ULP Below a Non-Power of Two

Java converts this 1075-digit decimal number (318 leading zeros plus 757 significant digits) incorrectly:


This is the number (2^-1058 + 2^-1063) - 2^-1075:

0.00000000000000000000000000000000000100000111111111111 x 2^-1022.

It should round up, converting to 2^-1058 + 2^-1063, which equals

0.0000000000000000000000000000000000010000100000000000 x 2^-1022.

Instead, Java converts it to this number, one ULP below its correctly rounded value:

0.0000000000000000000000000000000000010000011111111111 x 2^-1022.

Bug Report

I wrote a Java bug report for this problem: Bug ID 7019078: Double.parseDouble() converts some subnormal numbers incorrectly.

A Related Existing Bug

Java Bug 4396272 reports an error in the conversion of 2^-1075: it should round to zero (by half-to-even rounding), but instead rounds to 2^-1074.
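That input is also easy to reconstruct with BigDecimal (a sketch of mine; which result you see depends on your JDK):

```java
import java.math.BigDecimal;

// Rebuild the Bug 4396272 input: the exact decimal value of 2^-1075.
public class HalfMinValue {
    static double parseHalfMin() {
        String halfMin = new BigDecimal(Double.MIN_VALUE)
                .divide(BigDecimal.valueOf(2)).toString(); // exact 2^-1075
        return Double.parseDouble(halfMin);
    }

    public static void main(String[] args) {
        // Half-to-even rounding calls for 0.0; the bug report says Java
        // returned 2^-1074 (Double.MIN_VALUE) instead.
        System.out.println(parseHalfMin());
    }
}
```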


All of the incorrect conversions I found were off by one ULP. This is consistent with other languages, like Visual C++ and GCC C, that don't use David Gay's correctly rounding strtod() function. However, Java's FloatingDecimal class is clearly modeled on David Gay's code, so I assume it was Java's intent to do all of its conversions correctly.



Copyright © 2008-2024 Exploring Binary
