I’ve written about the formulas used to compute the number of decimal digits in a binary integer and the number of decimal digits in a binary fraction. In this article, I’ll use those formulas to determine the maximum number of digits required by the double-precision (double), single-precision (float), and quadruple-precision (quad) IEEE binary floating-point formats.

The maximum digit counts are useful if you want to print the full decimal value of a floating-point number (worst case format specifier and buffer size) or if you are writing or trying to understand a decimal to floating-point conversion routine (worst case number of input digits that must be converted).

## Floating-Point Integers

For integers, it’s easy to determine the maximum number of decimal digits — just count the digits of the largest floating-point number. (This may seem obvious, but wait until we discuss fractions.) Smaller numbers may have as many digits, but they can never have more.

### Double-Precision

The largest value of a double, known as DBL_MAX, is a significand of 53 1s starting at the place of the largest power of two exponent, 1023; here it is, expressed in normalized binary scientific notation:

`1.1111111111111111111111111111111111111111111111111111 x 2`^{1023}

Written out longhand in binary it is 1024 bits, 53 1s followed by 971 zeros.

This number can be expressed as (2 – 2^{-52}) · 2^{1023} = (1 – 2^{-53}) · 2^{1024} = 2^{1024} – 2^{971} = (2^{53} – 1) · 2^{971}. In decimal it is

179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368

It has **309 (significant) digits**. You can count them, or use this formula: ⌊log_{10}((2^{53} – 1) · 2^{971})⌋ + 1 = 309.

(Formulas with logarithms can be rewritten so that they are computed more efficiently; for example, the above can be written as ⌊log_{10}(2^{53} – 1) + 971 · log_{10}(2)⌋ + 1. However, to keep things simple, I will not express them that way.)
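Because Python has arbitrary-precision integers, the digit count can also be cross-checked without logarithms at all, by building the exact integer value and counting its digits directly. A quick sanity check of my own (not from the original articles):

```python
from math import floor, log10

# Exact integer value of DBL_MAX: (2^53 - 1) * 2^971
dbl_max = (2**53 - 1) * 2**971

# Count the digits directly, then via the logarithm formula
print(len(str(dbl_max)))          # 309
print(floor(log10(dbl_max)) + 1)  # 309
```

Both approaches agree; the direct count sidesteps any worry about floating-point rounding inside `log10`.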

### Single-Precision

The largest value of a float, known as FLT_MAX, is a significand of 24 1s starting at the place of the largest power of two exponent, 127:

`1.11111111111111111111111 x 2`^{127}

Written out longhand in binary it is 128 bits, 24 1s followed by 104 zeros.

This number can be expressed as (2 – 2^{-23}) · 2^{127} = (1 – 2^{-24}) · 2^{128} = 2^{128} – 2^{104} = (2^{24} – 1) · 2^{104}. In decimal it is

340282346638528859811704183484516925440

It has **39 (significant) digits**: ⌊log_{10}((2^{24} – 1) · 2^{104})⌋ + 1 = 39.
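The same computation generalizes to any binary format, given its precision p and maximum exponent. Here is a small sketch (the function name and parameters are my own, not from the article):

```python
def max_integer_digits(p, emax):
    """Digits in the largest finite value, (2^p - 1) * 2^(emax - p + 1),
    counted exactly with arbitrary-precision integers."""
    return len(str((2**p - 1) * 2**(emax - p + 1)))

print(max_integer_digits(24, 127))   # 39  (float)
print(max_integer_digits(53, 1023))  # 309 (double)
```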

## Floating-Point Fractions

With fractions, finding the maximum required digits is not as simple as counting the digits of the smallest number. Also, we have to specify what we mean by “maximum” — is it the total length of the fraction, which includes leading zeros, or just the length of the significant digits? ^{1} We will be looking for the maximum significant digits, although the maximum number of digits overall will come out in the process.

### Double-Precision

We’ll first look at the smallest values of a double — the smallest *normal* and *subnormal* numbers — and then we’ll look at the numbers with the most significant digits.

#### Doubles With Lots of Significant Digits, But Not The Most

The smallest (positive) normal value of a double, known as DBL_MIN, is 2^{-1022}. In binary it is 1022 bits, 1021 leading zeros followed by a 1. In decimal it is

0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002225073858507201383090232717332404064219215980462331830553327416887204434813918195854283159012511020564067339731035811005152434161553460108856012385377718821130777993532002330479610147442583636071921565046942503734208375250806650616658158948720491179968591639648500635908770118304874799780887753749949451580451605050915399856582470818645113537935804992115981085766051992433352114352390148795699609591288891602992641511063466313393663477586513029371762047325631781485664350872122828637642044846811407613911477062801689853244110024161447421618567166150540154285084716752901903161322778896729707373123334086988983175067838846926092773977972858659654941091369095406136467568702398678315290680984617210924625396728515625

It has **1022 digits**, 307 leading zeros followed by **715 significant digits**. You can count the digits or just use this formula: 1022 + ⌊log_{10}(2^{-1022})⌋ + 1 = 715.

That’s a lot of digits, but it’s neither the maximum total digits nor the maximum significant digits possible.

Let’s look at another number, 2^{-1074}. It’s the smallest subnormal value of a double — the smallest value of a double period. In binary it is 1074 bits, 1073 leading zeros followed by a 1. In decimal it is

0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004940656458412465441765687928682213723650598026143247644255856825006755072702087518652998363616359923797965646954457177309266567103559397963987747960107818781263007131903114045278458171678489821036887186360569987307230500063874091535649843873124733972731696151400317153853980741262385655911710266585566867681870395603106249319452715914924553293054565444011274801297099995419319894090804165633245247571478690147267801593552386115501348035264934720193790268107107491703332226844753335720832431936092382893458368060106011506169809753078342277318329247904982524730776375927247874656084778203734469699533647017972677717585125660551199131504891101451037862738167250955837389733598993664809941164205702637090279242767544565229087538682506419718265533447265625

It has **1074 digits**, 323 leading zeros followed by **751 significant digits**: 1074 + ⌊log_{10}(2^{-1074})⌋ + 1 = 751.
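These expansions can be produced and counted exactly, too: since 2^{-k} = 5^{k} / 10^{k}, the expansion of 2^{-k} has k digits in total, and its significant digits are precisely the digits of the integer 5^{k}. A quick check of my own:

```python
# 2^-k = 5^k / 10^k: k total digits, and the significant digits are
# exactly the digits of the integer 5^k.
for k in (1022, 1074):
    significant = len(str(5**k))
    print(k, k - significant, significant)  # total, leading zeros, significant
```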

There aren’t numbers with more total digits, but there are numbers with more significant digits.

#### Doubles With The Most Significant Digits

That there are numbers with more significant digits than 2^{-1022} and 2^{-1074} is not surprising if you’ve read my article on the number of decimal digits in a binary fraction. The most significant digits come from a binary floating-point number that has a maximum-length, 1s-filled significand ending at the lowest place — place 1074 for doubles.

The double-precision number that fits the bill has a significand of 53 1s starting at the place of the smallest normal power of two exponent, -1022; it is this 53 significant bit number:

`1.1111111111111111111111111111111111111111111111111111 x 2`^{-1022}

Written out longhand in binary it is 1074 bits, 1021 leading zeros followed by 53 1s.

This number can be expressed as (2 – 2^{-52}) · 2^{-1022} = (1 – 2^{-53}) · 2^{-1021} = 2^{-1021} – 2^{-1074} = (2^{53} – 1) · 2^{-1074} = (2^{53} – 1) / 2^{1074}. In decimal it is

0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000044501477170144022721148195934182639518696390927032912960468522194496444440421538910330590478162701758282983178260792422137401728773891892910553144148156412434867599762821265346585071045737627442980259622449029037796981144446145705102663115100318287949527959668236039986479250965780342141637013812613333119898765515451440315261253813266652951306000184917766328660755595837392240989947807556594098101021612198814605258742579179000071675999344145086087205681577915435923018910334964869420614052182892431445797605163650903606514140377217442262561590244668525767372446430075513332450079650686719491377688478005309963967709758965844137894433796621993967316936280457084866613206797017728916080020698679408551343728867675409720757232455434770912461317493580281734466552734375

It has **1074 digits**, 307 leading zeros followed by **767 significant digits**: 1074 + ⌊log_{10}((2^{53} – 1) / 2^{1074})⌋ + 1 = 767.
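This count can likewise be verified exactly: (2^{53} – 1) / 2^{1074} = (2^{53} – 1) · 5^{1074} / 10^{1074}, so its significant digits are exactly the digits of the integer (2^{53} – 1) · 5^{1074}. A check of my own:

```python
# (2^53 - 1) / 2^1074 = (2^53 - 1) * 5^1074 / 10^1074
scaled = (2**53 - 1) * 5**1074
print(len(str(scaled)))         # 767 significant digits
print(1074 - len(str(scaled)))  # 307 leading zeros
```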

This is the most significant digits a double can have, but it’s not the only double with that many; some (relatively speaking) smaller numbers have just as many. In this case, any significand with bits 1 and 53 equal to 1 — that is, half of the doubles between (2^{52} + 1) / 2^{1074} and (2^{53} – 1) / 2^{1074} — will have a decimal value with 767 significant digits.

There are also some subnormal numbers with the same number of significant digits. Consider the number that is one ULP below DBL_MIN. It has a significand of *52* 1s starting at the place of the largest subnormal power of two exponent, -1023; it is this 52 significant bit number:

`1.111111111111111111111111111111111111111111111111111 x 2`^{-1023}

Written out longhand in binary it is 1074 bits, 1022 leading zeros followed by 52 1s.

This number can be expressed as (2 – 2^{-51}) · 2^{-1023} = (1 – 2^{-52}) · 2^{-1022} = 2^{-1022} – 2^{-1074} = (2^{52} – 1) · 2^{-1074} = (2^{52} – 1) / 2^{1074}. In decimal it is

0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000022250738585072008890245868760858598876504231122409594654935248025624400092282356951787758888037591552642309780950434312085877387158357291821993020294379224223559819827501242041788969571311791082261043971979604000454897391938079198936081525613113376149842043271751033627391549782731594143828136275113838604094249464942286316695429105080201815926642134996606517803095075913058719846423906068637102005108723282784678843631944515866135041223479014792369585208321597621066375401613736583044193603714778355306682834535634005074073040135602968046375918583163124224521599262546494300836851861719422417646455137135420132217031370496583210154654068035397417906022589503023501937519773030945763173210852507299305089761582519159720757232455434770912461317493580281734466552734375

It has **1074 digits**, 307 leading zeros followed by **767 significant digits**: 1074 + ⌊log_{10}((2^{52} – 1) / 2^{1074})⌋ + 1 = 767.

Why does it also have 767 digits? Of course, because the logarithm comes out the same. But let’s look at it in terms of the structure of the number. It has one less significant bit than the number described above, which makes it lose a significant digit. But that’s offset by the lowered exponent, which in this case — as is the case about 70% of the time — adds back a significant digit. (This, by the way, explains the difference in significant digits between this number and 2^{-1074}, which has 751; the 51 extra bits add 51 digits, but moving the exponent 51 places higher subtracts about 0.7 * 51 = 36 digits.)

As it turns out, half of the doubles between (2^{51} + 1) / 2^{1074} and (2^{52} – 1) / 2^{1074} will have a decimal value with 767 significant digits.

The range of 767 significant digit numbers even continues into the next subnormal exponent down (one digit lost, one digit gained), although it does not span all its values; it goes from (2^{50} + 898122626230483) / 2^{1074} through (2^{51} – 1) / 2^{1074}. (I determined that big constant in the first numerator through trial and error.)

Overall, the 767 significant digit numbers live in the range (2^{50} + 898122626230483) / 2^{1074} through (2^{53} – 1) / 2^{1074}. (You can verify this with the logarithm formula.)
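You can probe these ranges yourself with a small helper (my own, not the article's code) that counts significant digits of any n / 2^{1074} exactly:

```python
def sig_digits(n):
    # n / 2^1074 = (n * 5^1074) / 10^1074; strip trailing zeros so that
    # even n (whose expansions end in zero digits) are counted correctly.
    return len(str(n * 5**1074).rstrip("0"))

lo = 2**50 + 898122626230483  # the article's trial-and-error lower bound
print(sig_digits(lo), sig_digits(2**52 - 1), sig_digits(2**53 - 1))
```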

### Single-Precision

I’ll do a similar progression for floats, but with less narration.

#### Floats With Lots of Significant Digits, But Not The Most

The smallest normal value of a float, known as FLT_MIN, is 2^{-126}. In binary it is 126 bits, 125 leading zeros followed by a 1. In decimal it is

0.000000000000000000000000000000000000011754943508222875079687365372222456778186655567720875215087517062784172594547271728515625

It has **126 digits**, 37 leading zeros followed by **89 significant digits**: 126 + ⌊log_{10}(2^{-126})⌋ + 1 = 89.

That number has neither the maximum total digits nor the maximum significant digits possible.

Let’s look at another number, 2^{-149}. It’s the smallest subnormal value of a float — the smallest value of a float period. In binary it is 149 bits, 148 leading zeros followed by a 1. In decimal it is

0.00000000000000000000000000000000000000000000140129846432481707092372958328991613128026194187651577175706828388979108268586060148663818836212158203125

It has **149 digits**, 44 leading zeros followed by **105 significant digits**: 149 + ⌊log_{10}(2^{-149})⌋ + 1 = 105.
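The float counts follow from the same 5^{k} identity; a one-line check of my own:

```python
# 2^-k = 5^k / 10^k: significant digits of FLT_MIN (2^-126)
# and of the smallest subnormal float (2^-149)
print(len(str(5**126)), len(str(5**149)))  # 89 105
```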

There aren’t numbers with more total digits, but there are numbers with more significant digits.

#### Floats With The Most Significant Digits

A significand of 24 1s starting at the place of the smallest normal power of two exponent (-126) will give us a float with the most significant digits; it is this 24 significant bit number:

`1.11111111111111111111111 x 2`^{-126}

Written out longhand in binary it is 149 bits, 125 leading zeros followed by 24 1s.

This number can be expressed as (2 – 2^{-23}) · 2^{-126} = (1 – 2^{-24}) · 2^{-125} = 2^{-125} – 2^{-149} = (2^{24} – 1) · 2^{-149} = (2^{24} – 1) / 2^{149}. In decimal it is

0.00000000000000000000000000000000000002350988561514728583455765982071533026645717985517980855365926236850006129930346077117064851336181163787841796875

It has **149 digits**, 37 leading zeros followed by **112 significant digits**: 149 + ⌊log_{10}((2^{24} – 1) / 2^{149})⌋ + 1 = 112.
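As with doubles, this is easy to confirm exactly (a check of mine, not the article's):

```python
# (2^24 - 1) / 2^149 = (2^24 - 1) * 5^149 / 10^149
print(len(str((2**24 - 1) * 5**149)))  # 112 significant digits
```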

This is not the only float with that many significant digits; some smaller numbers have just as many. In this case, any significand with bits 1 and 24 equal to 1 — that is, half of the floats between (2^{23} + 1) / 2^{149} and (2^{24} – 1) / 2^{149} — will have a decimal value with 112 significant digits.

There are also some subnormal numbers with the same number of significant digits. Consider the number that is one ULP below FLT_MIN. It has a significand of 23 1s starting at the place of the largest subnormal power of two exponent (-127); it is this 23 significant bit number:

`1.1111111111111111111111 x 2`^{-127}

Written out longhand in binary it is 149 bits, 126 leading zeros followed by 23 1s.

This number can be expressed as (2 – 2^{-22}) · 2^{-127} = (1 – 2^{-23}) · 2^{-126} = 2^{-126} – 2^{-149} = (2^{23} – 1) · 2^{-149} = (2^{23} – 1) / 2^{149}. In decimal it is

0.00000000000000000000000000000000000001175494210692441075487029444849287348827052428745893333857174530571588870475618904265502351336181163787841796875

It has **149 digits**, 37 leading zeros followed by **112 significant digits**: 149 + ⌊log_{10}((2^{23} – 1) / 2^{149})⌋ + 1 = 112.

There are more 112 significant digit numbers below that, with the same exponent; they range from (2^{22} + 2941935) / 2^{149} through (2^{23} – 1) / 2^{149}.

Overall, the 112 significant digit numbers live in the range (2^{22} + 2941935) / 2^{149} through (2^{24} – 1) / 2^{149}.

## Quadruple-Precision

Here are the maximum number of significant digits required for quadruple-precision:

- *Maximum length integer.* An integer with the maximum number of significant digits is (2^{113} – 1) · 2^{16271}. It has ⌊log_{10}((2^{113} – 1) · 2^{16271})⌋ + 1 = **4,933 digits**.
- *Maximum length fraction.* A fraction with the maximum number of significant digits is (2^{113} – 1) / 2^{16494}. It has 16,494 digits: 4,931 leading zeros followed by **11,563 significant digits**: 16494 + ⌊log_{10}((2^{113} – 1) / 2^{16494})⌋ + 1 = 11,563!
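Both quad counts can be confirmed the same way; a quick check of my own (a little slower, since the integers involved run to thousands of digits):

```python
# Maximum-length quad integer: (2^113 - 1) * 2^16271
print(len(str((2**113 - 1) * 2**16271)))  # 4933 digits

# Maximum-length quad fraction: (2^113 - 1) / 2^16494
print(len(str((2**113 - 1) * 5**16494)))  # 11563 significant digits
```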

You can apply the above analysis to find the maximum number of significant digits in other floating-point formats as well.

## Programming Language Support

Some programming languages let you print all these digits. In Python 3, this line will print all 1074 digits of the worst case double-precision example (2^{53} – 1) · 2^{-1074}, as displayed above:

`print(format((pow(2,53)-1)*pow(2,-1074),".1074f"))`

You can print just the 767 significant digits with this line (uses the ‘g’ presentation type instead of ‘f’):

`print(format((pow(2,53)-1)*pow(2,-1074),".767g"))`

4.4501477170144022721148195934182639518696390927032912960468522194496444440421538910330590478162701758282983178260792422137401728773891892910553144148156412434867599762821265346585071045737627442980259622449029037796981144446145705102663115100318287949527959668236039986479250965780342141637013812613333119898765515451440315261253813266652951306000184917766328660755595837392240989947807556594098101021612198814605258742579179000071675999344145086087205681577915435923018910334964869420614052182892431445797605163650903606514140377217442262561590244668525767372446430075513332450079650686719491377688478005309963967709758965844137894433796621993967316936280457084866613206797017728916080020698679408551343728867675409720757232455434770912461317493580281734466552734375e-308

In C (GCC and Visual Studio), this line will also print all 1074 digits (must also include *math.h*):

`printf("%.1074f\n",(pow(2,53)-1)*pow(2,-1074));`

Substituting ‘g’ for ‘f’ will print only the significant digits:

`printf("%.767g\n",(pow(2,53)-1)*pow(2,-1074));`

(I was happy that all the *pow()* implementations computed 2^{-1074} correctly. It’s a power of two so I expected them to, but you never know.)

Java, PHP, and JavaScript won’t let you print all those digits.

## On Decimal to Floating-Point Conversion

For double-precision, for example, it may seem like 17 significant digits of decimal input should be enough to convert correctly, but it’s not; 17 digits is only what’s needed to round-trip back to a double you’ve already determined.

A decimal to floating-point conversion routine has to consider all input digits up to the maximum, plus one digit for rounding. (Any digits beyond can just be considered “sticky”.) For example, for the input decimal representing 2^{-1022} + 2^{-1074} + 2^{-1075}, 1075 digits (768 significant) must be processed.
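You can see that last digit matter with Python's correctly rounding `float()`. The value 2^{-1022} + 2^{-1074} + 2^{-1075} = (2^{53} + 3) / 2^{1075} lies exactly halfway between two adjacent doubles, so round-half-to-even sends it up to 2^{-1022} + 2^{-1073}; drop the 1075th digit and it rounds down instead. A sketch of my own construction:

```python
# Exact decimal string of 2^-1022 + 2^-1074 + 2^-1075 = (2^53 + 3) / 2^1075:
n = 2**53 + 3
s = "0." + str(n * 5**1075).zfill(1075)  # 1075 fractional digits, 768 significant

# Exact halfway case; round-half-to-even goes up to 2^-1022 + 2^-1073:
assert float(s) == (2**52 + 2) * 2**-1074

# Truncate the final digit and the value now rounds down:
assert float(s[:-1]) == (2**52 + 1) * 2**-1074
```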

## Summary

Here is a summary of the digit counts we derived:

| Format | Max Integer Digits (Total/Significant) | Max Fraction Total Digits | Max Fraction Leading Zeros | Max Fraction Significant Digits |
|---|---|---|---|---|
| float | 39 | 149 | 37 | 112 |
| double | 309 | 1,074 | 307 | 767 |
| quad | 4,933 | 16,494 | 4,931 | 11,563 |
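The whole table can be regenerated from just each format's precision and exponent range; a sketch of mine (the parameter names are my own):

```python
# (precision p, minimum exponent emin, maximum exponent emax) per format
formats = [("float", 24, -126, 127),
           ("double", 53, -1022, 1023),
           ("quad", 113, -16382, 16383)]

for name, p, emin, emax in formats:
    max_int = len(str((2**p - 1) * 2**(emax - p + 1)))  # digits of largest value
    total = -emin + p - 1           # lowest bit place, e.g. 1074 for double
    sig = len(str((2**p - 1) * 5**total))
    print(name, max_int, total, total - sig, sig)
```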

It is the maximum number of digits in a fraction that determines the maximum number of digits for a given IEEE format.

Notice the near symmetry between the number of integer digits and the number of fractional leading zeros. (If the absolute values of the minimum and maximum exponents of each format were equal, and if we listed the starting place of the significant digits instead of the count of leading zeros, it’d be symmetric.) For the fractions, we’ve put the biggest significand at the lowest place.

^{1} I’ll use the definition that *significant digits* are all the digits following the leading zeros; don’t think of them as digits of precision.

Update 12/8/16: I added a section called “Programming Language Support” to show some languages that let you print all the digits (I also included links to their documentation to show the use of the term “significant digits” in the way I have).