GLIBC strtod() Incorrectly Converts 2^-1075

A reader of my blog, Water Qian, reported a bug to me after reading my article “How GLIBC’s strtod() Works”. I recently tested strtod(), which was fixed to do correct rounding in glibc 2.17, and found no incorrect conversions.

Water tested the conversion of 2^-1075 (in retrospect, an obvious corner case I should have tried) and found that it converted incorrectly to 0x0.0000000000001p-1022. That’s 2^-1074, the smallest positive double-precision value. It should have converted to 0 under round-to-nearest, ties-to-even rounding.

(Update 11/13/13: This bug has been fixed for version 2.19.)

Testcase

I wrote the following program to demonstrate the problem:

#include <stdio.h>
#include <stdlib.h>
int main(void)
{
 /* The 1075-digit string below is the exact decimal expansion of 2^-1075. */
 double d = strtod(
"0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000024703282292062327208828439643411068618252990130716238221279284125033775363510437593264991818081799618989828234772285886546332835517796989819938739800539093906315035659515570226392290858392449105184435931802849936536152500319370457678249219365623669863658480757001585769269903706311928279558551332927834338409351978015531246597263579574622766465272827220056374006485499977096599470454020828166226237857393450736339007967761930577506740176324673600968951340535537458516661134223766678604162159680461914467291840300530057530849048765391711386591646239524912623653881879636239373280423891018672348497668235089863388587925628302755995657524455507255189313690836254779186948667994968324049705821028513185451396213837722826145437693412532098591327667236328125",NULL);
 printf("%.13a\n", d); /* prints 0x0.0000000000001p-1022 on an affected glibc */
 return 0;
}

That 1075-digit fractional string is 2^-1075, which I generated with PARI/GP using the expression 2.^-1075.
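If you don’t have PARI/GP handy, a short program can produce the same digit string. Here’s a minimal sketch (my own illustration, not part of the original test): since 2^-1075 = 5^1075 / 10^1075, it computes 5^1075 in decimal and left-pads it to 1075 fractional digits.

#include <stdio.h>

#define N 1075

int main(void)
{
 static int digit[N]; /* decimal digits, least significant first */
 int len = 1;
 digit[0] = 1;

 /* Multiply by 5, N times, to get 5^1075. */
 for (int i = 0; i < N; i++) {
  int carry = 0;
  for (int j = 0; j < len; j++) {
   int v = digit[j] * 5 + carry;
   digit[j] = v % 10;
   carry = v / 10;
  }
  if (carry)
   digit[len++] = carry; /* carry is at most 4, a single digit */
 }

 /* 2^-1075 = 5^1075 / 10^1075: print "0." plus 1075 fractional digits. */
 printf("0.");
 for (int i = 0; i < N - len; i++)
  putchar('0');
 for (int i = len - 1; i >= 0; i--)
  putchar('0' + digit[i]);
 putchar('\n');
 return 0;
}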

The Bug

In binary, 2^-1075 is 1074 leading zeros followed by a single 1 bit. The bit position at 2^-1075 is the rounding bit for subnormal numbers, which have a significand of 52 bits. For input 2^-1075, the double-precision significand is 0, the rounding bit is 1, and all bits beyond are 0s. This makes it a halfway case, and under ties-to-even it should round to 0. strtod() thinks there are more bits beyond the rounding bit, and thus rounds up.

In the code, this is what happens: after strtod() does the division, it calls round_and_return(), which calls round_away(). The parameters to round_away() are last_digit_odd == false, half_bit == true, and more_bits == true. This causes it to round up (case FE_TONEAREST). more_bits should be false, which would make it round correctly to even.
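To make the decision concrete, here is a small sketch of the ties-to-even test for the FE_TONEAREST case, written in terms of the parameters named above (my own simplified rendering, not the actual glibc source):

#include <stdbool.h>
#include <stdio.h>

/* Round away from zero only if the round bit is set and either some
   nonzero bit lies beyond it or the kept significand is odd. */
static bool round_away_to_nearest(bool last_digit_odd, bool half_bit,
                                  bool more_bits)
{
 return half_bit && (last_digit_odd || more_bits);
}

int main(void)
{
 /* The 2^-1075 case: even (zero) significand, round bit 1, nothing beyond. */
 printf("%d\n", round_away_to_nearest(false, true, false)); /* 0: rounds down to 0 */
 printf("%d\n", round_away_to_nearest(false, true, true));  /* 1: the buggy round up */
 return 0;
}

With half_bit true and last_digit_odd false, the result hinges entirely on more_bits, which is why the incorrect more_bits == true flips the answer.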

I filed a bug report with glibc.

Update 11/13/13: Fixed

This bug was minor and was quickly fixed. In round_and_return(), the computation of more_bits was changed so that the loop no longer includes the topmost limb of retval, which in this underflow case is the limb holding the rounding bit itself:

Bug:

for (i = 0; i < RETURN_LIMB_SIZE; ++i)
  more_bits |= retval[i] != 0;

Fix:

for (i = 0; i < RETURN_LIMB_SIZE - 1; ++i)
  more_bits |= retval[i] != 0;

I tested this fix and it works.

glibc added this input to their list of testcases, but in the much more manageable hexadecimal floating-point constant form: 0x1p-1075. This test is valid because the bug is not in the processing of the decimal string, but in the rounding of the converted result.
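Since strtod() also accepts C99 hexadecimal floating constants, the hex form makes for a much shorter check. Here is a minimal sketch of such a test (my own, not glibc’s):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
 double d = strtod("0x1p-1075", NULL);
 printf("%.13a\n", d);                        /* expect 0x0.0000000000000p+0 when fixed */
 printf("%s\n", d == 0.0 ? "fixed" : "buggy");
 return 0;
}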


3 comments

  1. You can easily make it so that your test program (or any test program with a huge string constant) is both compilable and readable on the blog. Just close off each line with a quote and start each new line with a quote. The strings will be concatenated by the compiler. For example:

    const char* pNumber = "0.00321"
    "71234323";

    This is handy even when you’re not concerned with posting your code because it saves you from having a single line which is over a thousand characters in length.
