# Decimal Precision of Binary Floating-Point Numbers

Copyright © 2008-2016 Exploring Binary

http://www.exploringbinary.com/decimal-precision-of-binary-floating-point-numbers/

How many decimal digits of precision does a binary floating-point number have?

For example, does an IEEE single-precision binary floating-point number, or *float* as it’s known, have 6-8 digits? 7-8 digits? 6-9 digits? 6 digits? 7 digits? 7.22 digits? 6-112 digits? (I’ve seen all those answers on the Web.)

Part of the reason for multiple answers is that there *is* no single answer; the question is not as well defined as it seems. On the other hand, if you understand what it really means to equate decimal floating-point precision with binary floating-point precision, then only some of those answers make sense. In this article, I will argue that there are only three reasonable answers: “6 digits”, “6-8 digits”, and “slightly more than 7 digits on average”.

(For double-precision binary floating-point numbers, or *doubles*, the three answers are “15 digits”, “15-16 digits”, and “slightly less than 16 digits on average”.)

## Before We Start: The Answer Is Not log_{10}(2^{24}) ≈ 7.22

A common answer is that floats have a precision of about 7.22 digits. While this may be true for integers, where the gaps in both bases align and are of size one, it’s not true for floating-point numbers (notwithstanding that it gets you in the ballpark). I can’t say it better than David Matula himself, as he did in his 1970 paper “A Formalization of Floating-Point Numeric Base Conversion”:

“Converting integer and fixed-point data to an “equivalent” differently based number system is generally achieved by utilizing essentially log_{δ}Β times as many digits in the new base δ as were present for representing numbers in the old base Β system. This simplified notion of equivalence does not extend to the conversion of floating-point systems. Actually, conversion between floating-point number systems introduces subtle difficulties peculiar to the structure of these systems so that no such convenient formula for equating the “numbers of significant digits” is even meaningful.”

## Defining Precision

Bruce Dawson has an excellent article on floating-point precision. Here is his main definition of precision, and the one I will adopt:

“For most of our purposes when we say that a format has n-digit precision we mean that over some range, typically [10^k, 10^(k+1)), where k is an integer, all n-digit numbers can be uniquely identified.”

So d-digit precision (d a positive integer) means that if we take all d-digit decimal numbers over a specified range, convert them to b-bit floating-point, and then convert them back to decimal — rounding to nearest to d digits — we will recover all of the original d-digit decimal numbers. In other words, all d-digit numbers in the range will round-trip.
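This definition can be sketched in code. The helpers below (`to_f32` and `round_trips` are my own names, not from the article) round a decimal string to the nearest single-precision float via Python’s `struct` module, then round the float back to d significant digits; going through a Python double first is harmless at these digit counts:

```python
import struct

def to_f32(x):
    """Round a Python float (a double) to the nearest IEEE single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def round_trips(dec, d):
    """Does the decimal string dec survive decimal -> float -> d-digit decimal?"""
    f = to_f32(float(dec))
    return float(f'{f:.{d - 1}e}') == float(dec)

# Every 6-digit decimal in [1, 2) survives the trip (the precision there is 7 digits) ...
print(all(round_trips(f'{n}e-5', 6) for n in range(100000, 200000)))  # True
# ... but 8-digit decimals start to collide: 1.0000003 comes back as 1.0000004.
print(round_trips('1.0000003', 8))  # False
```

The failing example shows what d-digit precision rules out: two 8-digit decimals round to the same float, so one of them cannot round-trip.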

It is the choice of range which leads to the multiple answers. Powers of ten and two interleave to create segments with different relative gap sizes, and it is relative gap size that determines how many decimal digits will round-trip.

So the answer for precision depends on what you are looking for. Do you want to know the precision for one power of ten exponent? For a power of two exponent? For a power of two exponent that crosses a power of ten exponent? For the whole floating-point format?

### My Work Vs. Bruce’s

These are the main differences between my work and Bruce’s:

- I primarily do an analytical analysis based on relative gap sizes instead of running code to check round-trips. (I have run code tests in the past too, but they capture “coincidental” precision, as I’ll explain below.)
- I include analysis for 8-digit precision, not just 7- and 6-digit precision.
- I talk solely about decimal precision of binary numbers so as not to confound precision with floating-point to decimal to floating-point round-trip theory.
- I do a more granular analysis instead of just assigning the maximum guaranteed precision per power of ten exponent.
- I don’t analyze subnormal numbers, where the precision can drop as low as 0 digits.

## Decimal Precision of floats by Segment

I wrote a PARI/GP script to identify all the different segments with different relative gap sizes over the entire single-precision format. For each segment, I calculated its precision — a single number, 6, 7, or 8. Here is the data condensed by power of ten exponent, which results in a range of precisions for most:

| Power | Precision | Power | Precision | Power | Precision |
|---|---|---|---|---|---|
| 10^{-38} | 6-7 | 10^{-12} | 7 | 10^{14} | 7-8 |
| 10^{-37} | 7 | 10^{-11} | 7-8 | 10^{15} | 6-8 |
| 10^{-36} | 7-8 | 10^{-10} | 6-8 | 10^{16} | 7 |
| 10^{-35} | 6-8 | 10^{-9} | 7 | 10^{17} | 7-8 |
| 10^{-34} | 7 | 10^{-8} | 7-8 | 10^{18} | 6-8 |
| 10^{-33} | 7-8 | 10^{-7} | 6-8 | 10^{19} | 7 |
| 10^{-32} | 6-8 | 10^{-6} | 7 | 10^{20} | 7-8 |
| 10^{-31} | 7 | 10^{-5} | 7-8 | 10^{21} | 6-8 |
| 10^{-30} | 7-8 | 10^{-4} | 6-8 | 10^{22} | 7 |
| 10^{-29} | 7-8 | 10^{-3} | 7 | 10^{23} | 7-8 |
| 10^{-28} | 7-8 | 10^{-2} | 7-8 | 10^{24} | 6-8 |
| 10^{-27} | 7-8 | 10^{-1} | 7-8 | 10^{25} | 7 |
| 10^{-26} | 7-8 | 10^{0} | 7 | 10^{26} | 7-8 |
| 10^{-25} | 7-8 | 10^{1} | 7-8 | 10^{27} | 6-8 |
| 10^{-24} | 7-8 | 10^{2} | 7-8 | 10^{28} | 7 |
| 10^{-23} | 7-8 | 10^{3} | 7-8 | 10^{29} | 7-8 |
| 10^{-22} | 6-8 | 10^{4} | 7-8 | 10^{30} | 7-8 |
| 10^{-21} | 7 | 10^{5} | 7-8 | 10^{31} | 7-8 |
| 10^{-20} | 7-8 | 10^{6} | 7-8 | 10^{32} | 7-8 |
| 10^{-19} | 6-8 | 10^{7} | 7-8 | 10^{33} | 7-8 |
| 10^{-18} | 7 | 10^{8} | 7-8 | 10^{34} | 7-8 |
| 10^{-17} | 7-8 | 10^{9} | 6-8 | 10^{35} | 7-8 |
| 10^{-16} | 6-8 | 10^{10} | 7 | 10^{36} | 7-8 |
| 10^{-15} | 7 | 10^{11} | 7-8 | 10^{37} | 6-8 |
| 10^{-14} | 7-8 | 10^{12} | 6-8 | 10^{38} | 7 |
| 10^{-13} | 6-8 | 10^{13} | 7 | | |

There are 77 powers of ten, although being at the extremes, 10^{-38} and 10^{38} are covered only partially.

When the precision shows a range, like 6-8, it means one segment has 6 digits, another has 7 digits, and another has 8 digits. The order is actually the reverse of how the range reads: as the numbers increase, each power of ten range starts with its highest-precision segment and ends with its lowest-precision segment.

A few observations from the table:

- There are 19 powers of ten with a constant precision (and it’s always 7 digits).
- There are 18 powers of ten for which precision dips to 6 digits. (Precision as low as 6 digits may surprise you, especially if you have bought into the log_{10}(2^{24}) ≈ 7.22 argument; 2^{24} = 16,777,216.)
- There are three long runs where the precision is 7-8 digits: 10^{-30} through 10^{-23}, 10^{1} through 10^{8}, and 10^{29} through 10^{36}.

### Beware of Power of Ten Boundaries

A power of ten can get one less digit of precision than advertised — if it converts to a floating-point number less than itself. For example, 1e-4 does not have 8 digits, but rather 7: 9.99999974737875163555145263671875e-5.
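You can check which side of 1e-4 its float lands on directly; here is a small sketch (the `to_f32` helper is my own, rounding through a Python double via `struct`):

```python
import struct

def to_f32(x):
    """Round a Python float (a double) to the nearest IEEE single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_f32(1e-4) < 1e-4)  # True: the nearest float lies just below 1e-4
print(to_f32(1e-4))         # 9.99999974737875...e-05
```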

## Example: Looking at One Power of Ten Range

Let’s look at the precision of decimal floating-point numbers with decimal exponent -4, the range [10^{-4}, 10^{-3}):

Between 10^{-4} and 10^{-3} there are five segments: [10^{-4}, 2^{-13}), [2^{-13}, 2^{-12}), [2^{-12}, 2^{-11}), [2^{-11}, 2^{-10}), and [2^{-10}, 10^{-3}); they have decimal precision of 8, 7, 7, 7, and 6 digits, respectively.
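These per-segment precisions can be derived from gap sizes alone. The following sketch (my own, not the author’s PARI/GP script) grants a segment d-digit precision when the spacing of d-digit decimals in this decade, 10^{-3-d}, exceeds the float gap of the binade containing the segment:

```python
import math

def max_digits(ulp, k):
    """Largest d such that d-digit decimals in [10^k, 10^(k+1)),
    spaced 10^(k+1-d) apart, are spaced wider than the float gap ulp."""
    d = 1
    while 10.0 ** (k - d) > ulp:  # spacing of (d+1)-digit decimals
        d += 1
    return d

# The five segments of [1e-4, 1e-3), delimited by the powers of two inside it.
bounds = [1e-4, 2.0**-13, 2.0**-12, 2.0**-11, 2.0**-10, 1e-3]
for lo, hi in zip(bounds, bounds[1:]):
    e = math.floor(math.log2(lo))  # binade [2^e, 2^(e+1)) containing the segment
    ulp = 2.0 ** (e - 23)          # single-precision gap in that binade
    print(f"[{lo:.6e}, {hi:.6e}): {max_digits(ulp, -4)} digits")
# Prints precisions 8, 7, 7, 7, 6 for the five segments.
```

When the decimal spacing exceeds the float gap, no two d-digit decimals can round to the same float, and each float rounds back to within half a decimal spacing of its source, so every d-digit decimal in the segment round-trips.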

Let’s look at examples of numbers represented to each of the three different levels of precision:

- 1.21e-4 converts to the single-precision floating-point value 1.209999973070807754993438720703125e-4, which has 8 digits of precision: rounded to 8 digits it’s 1.21e-4, but rounded to 9 digits it’s 1.20999997e-4.
- 1.23e-4 converts to 1.2300000526010990142822265625e-4, which has 7 digits of precision: rounded to 7 digits it’s 1.23e-4, but rounded to 8 digits it’s 1.2300001e-4.
- 9.86e-4 converts to 9.860000573098659515380859375e-4, which has 6 digits of precision: rounded to 6 digits it’s 9.86e-4, but rounded to 7 digits it’s 9.860001e-4.
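These three conversions can be verified mechanically; a sketch using my own `struct`-based helpers (rounding through a Python double, which is harmless at these digit counts):

```python
import struct

def to_f32(x):
    """Round a Python float (a double) to the nearest IEEE single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def round_trips(dec, d):
    """Round dec to a float, then back to d significant digits; did it survive?"""
    f = to_f32(float(dec))
    return float(f'{f:.{d - 1}e}') == float(dec)

for dec, d in [('1.21e-4', 8), ('1.23e-4', 7), ('9.86e-4', 6)]:
    print(dec, round_trips(dec, d), round_trips(dec, d + 1))
# Each example survives at its segment's precision but not at one digit more:
# every line prints True False.
```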

(When using my decimal to floating-point converter to compute these values, check the boxes ‘Single’ and ‘Normalized decimal scientific notation’.)

You can’t get less precision than each segment supports, but you can get what looks like more.

### Coincidental Precision

In any given segment, you can find examples of numbers that convert to a higher precision than supported by that segment:

- 1.013e-4 converts to 1.01300000096671283245086669921875e-4, which appears to have 9 digits of precision; but it’s in an 8-digit segment.
- 1.24e-4 converts to 1.23999998322688043117523193359375e-4, which appears to have 8 digits of precision; but it’s in a 7-digit segment.
- 9.8e-4 converts to 9.80000011622905731201171875e-4, which appears to have 7 digits of precision; but it’s in a 6-digit segment.
- 2.3509885615147285834557659820715330266457179855179808553659262368500061299303460771170648513361811637878417968 75e-38 (spaces removed: one 112-digit number), an exactly representable number, converts to itself, so it looks like it has 112 digits of precision!

This is not precision, at least as we have defined it; precision is a property of a *range* of n-digit numbers, not a specific n-digit number. I’ll call the above **coincidental precision**.

Let’s look at 1.013e-4 and its six nearest 9-digit neighbors:

1.01299997e-4

1.01299998e-4

1.01299999e-4

1.013e-4

1.01300001e-4

1.01300002e-4

1.01300003e-4

All seven of those numbers map to the same float, which in turn maps back (to 9 digits) to 1.013e-4. There’s not enough precision to represent all 9-digit numbers.
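The seven-way collapse can be confirmed directly by comparing bit patterns (a sketch with my own `f32_bits` helper, rounding through a Python double):

```python
import struct

def f32_bits(x):
    """Bit pattern of the nearest IEEE single-precision float."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

neighbors = [1.01299997e-4, 1.01299998e-4, 1.01299999e-4, 1.013e-4,
             1.01300001e-4, 1.01300002e-4, 1.01300003e-4]
print(len({f32_bits(x) for x in neighbors}))  # 1: all seven share one float
```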

## Average Precision

Broadly we can say that precision ranges from 6 to 8 digits — but can we say what the *average* precision is?

I computed a simple average from the 329 segments between FLT_MIN and FLT_MAX. There are 254 powers of two in this range, most of which have a single precision value; those that cross a power of ten have two values. There are 77 powers of ten, 75 of which cross powers of two (10^{-38} is less than FLT_MIN, and 10^{0} = 2^{0}). 254 + 75 = 329.

The denominator for my average was 254, the number of powers of two. For those powers of two with a single precision, I assigned a weight of 1. For those powers of two split by powers of ten, I assigned a fractional weight, proportional to where the split occurs.

For example, 2^{115} has 7 digits of precision, and 2^{116} has 7 digits for about 20% of its length (before 10^{35}) and 8 digits for the remaining 80% of its length (after 10^{35}). The average across just those two powers of two would be (7*1 + 7*0.2 + 8*0.8)/2 = 7.4.
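The 20/80 split and the resulting 7.4 average can be reproduced in a couple of lines, measuring the fraction linearly along the binade as described above:

```python
# Fraction of the binade [2^116, 2^117) that lies below 10^35.
frac = (1e35 - 2.0**116) / 2.0**116
print(round(frac, 3))  # 0.204, i.e. about 20%

# Weighted average over 2^115 (all 7 digits) and 2^116 (split between 7 and 8).
avg = (7 * 1 + 7 * frac + 8 * (1 - frac)) / 2
print(round(avg, 2))  # 7.4
```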

With that methodology, I came up with an average decimal precision for single-precision floating-point: **7.09 digits**. 89.27% of the range has 7 digits, 10.1% has 8 digits, and 0.63% has 6 digits.

It’s hard to say what that average would mean in practice, since you will likely be using numbers in a specific range and with a particular distribution. But it does tell you that you are likely to get 7 digits.

(I kind of “linearized” a logarithmic concept, but since I’m talking about integer digit counts, it feels OK.)

## Decimal Precision of Single-Precision Floating-Point

So after that analysis, what is the bottom line?

If you care about the minimum precision you can get from a float, or equivalently, the maximum number of digits *guaranteed* to round-trip through a float, then 6 digits is your answer. (It’s unfortunate we have to be that conservative; only a small percentage of the float range is limited to 6 digits.) If you want to know the *range* of precision, or equivalently, the range of the number of digits that can round-trip through a float (excluding “coincidental” conversions), then 6-8 digits is your answer. If you want to know how much precision you’ll get on average, then your answer is slightly more than 7 digits.

If you could give only one answer, the **safe answer is 6 digits**. That way there will be no surprises (print precision notwithstanding).

## Decimal Precision of Double-Precision Floating-Point

By the same argument above, the precision of a double is not log_{10}(2^{53}) ≈ 15.95.

Doing the same analysis for doubles I computed an average decimal precision of **15.82 digits**. 82.17% of the range has 16 digits and 17.83% has 15 digits.

The three reasonable answers are 15 digits, 15-16 digits, and slightly less than 16 digits on average. The **safe answer is 15 digits**.

## Where are 9 and 17?

Some will say 9 is the upper bound of precision for a float, and likewise, 17 digits is the upper bound for a double (for example, see the Wikipedia articles on single-precision and double-precision). Those numbers come from the theory of round-tripping, from conversions in the opposite direction: floating-point to decimal to floating-point. But you’re not getting that much decimal precision in any range of those IEEE formats. You can determine that based on analyzing gaps, as I did above with my PARI script, or by running some code, as I did with a C program.

For floats, you can iterate through all 9-digit decimal numbers and see if they round-trip. I found that about 97% of 9-digit decimals failed to round-trip. Every float had multiple decimals mapping to it. The minimum count I found was 6; for example, 1.00000004e6 through 1.00000009e6. The maximum count I found was 119, occurring from 9.90353153e27 through 9.90353271e27. (This matches theory, as produced by my PARI/GP script.)
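The six-way collision near 10^6 is easy to confirm (my own sketch in Python, not the author’s C program; `to_f32` rounds through a Python double via `struct`):

```python
import struct

def to_f32(x):
    """Round a Python float (a double) to the nearest IEEE single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Six 9-digit decimals that all round to the same float near 10^6.
decimals = [1.00000004e6, 1.00000005e6, 1.00000006e6,
            1.00000007e6, 1.00000008e6, 1.00000009e6]
floats = {to_f32(d) for d in decimals}
print(len(floats))  # 1: at most one of the six can round-trip
```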

Printing to 9 (or 17) digits is just a way to recover a floating-point number; any one of multiple 9 (or 17) digit numbers will serve that purpose (it doesn’t have to be the closest).

## Comments

June 12th, 2016 at 1:00 pm

You say that 9-digits for floats comes from the “theory of round-tripping”, but I’d say that it’s more practical than that. The 9-digits for floats comes from those who store floats in text formats. This is often done in order to allow human readable/editable data. It is important in those contexts to know:

a) That if you print with enough digits then your floats will be perfectly preserved.

b) How many digits it takes for your floats to be perfectly preserved.

So, I wouldn’t call it the “theory of round-tripping”; I’d call it “the need to support round-tripping from float to text to float”.

But, excellent analysis as always.

June 12th, 2016 at 8:21 pm

Thanks Bruce.

I didn’t mean theory to imply “not useful in practice” — it most certainly is! (I’ve written many an article on the subject.) I was just trying to distinguish limits of precision from limits governing round trips.