Pi and e In Binary

Some people are curious about the binary representations of the mathematical constants pi and e. Mathematically, they’re like every other irrational number — infinite strings of 0s and 1s (with no discernible pattern). In a computer, their representations are finite, making them only approximations to their true values. I will show you what their approximations look like in five different levels of binary floating-point precision.

[Figure: The first 43 bits of pi and e]

Binary Floating-Point Formats

In binary floating-point, infinitely precise values are rounded to finite precision. Here’s how rounding works in five different levels of precision:

  • In half-precision, values are rounded to 11 significant bits.
  • In single-precision, values are rounded to 24 significant bits.
  • In double-precision, values are rounded to 53 significant bits.
  • In extended-precision, values are rounded to 64 significant bits.
  • In quadruple-precision, values are rounded to 113 significant bits.

The rounding rule used most often in practice is round-to-nearest, round-half-to-even; that’s the rule I will use. For pi and e, there are no “half to even” cases: a tie would require the rounding bit to be 1 and every bit after it to be 0, which can’t happen since their binary expansions are infinite. This makes the rounding rule simple: if the rounding bit is 0, round down; if the rounding bit is 1, round up.
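
To make that concrete, here is a minimal C sketch (my own illustration, using the single-precision value derived later in this article): pi’s 25th significant bit is 1, so rounding up gives a 24-bit value that is closer to pi than truncation is.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* pi's first 24 significant bits: truncated (rounded down) vs. rounded up */
    float down = 0x1.921fb4p+1f;   /* 1.10010010000111111011010 x 2^1 */
    float up   = 0x1.921fb6p+1f;   /* 1.10010010000111111011011 x 2^1 */
    double pi  = 3.14159265358979323846;   /* accurate well beyond single precision */

    printf("down: %.10f (error %.3e)\n", down, fabs(pi - down));
    printf("up:   %.10f (error %.3e)\n", up,   fabs(pi - up));   /* smaller error */
    return 0;
}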

I will show the correctly rounded approximations of pi and e in these five formats.

Pi (π)

Here are the first 50 decimal digits of pi:

3.1415926535897932384626433832795028841971693993751…

Here are the first 128 bits of pi:

11.001001000011111101101010100010001000010110100011000010001101001100010011000110011000101000101110000000110111000001110011010001...

The rounding bits for these five levels of precision are bits 12, 25, 54, 65, and 114, respectively, counting significant bits starting with the leading 1.

Here are the correctly rounded values of pi in each of the five levels of precision, shown in normalized binary scientific notation and as hexadecimal floating-point constants:

Half-Precision

pi = 1.1001001 x 2^1
   = 0x1.92p+1

This equals 3.140625 in decimal (all binary floating-point numbers have exact decimal representations), which approximates pi accurately to about 4 decimal digits.

(You can verify this conversion by hand, by adding the powers of two corresponding to the positions of the 1 bits: 11.001001 = 2^1 + 2^0 + 2^-3 + 2^-6 = 3.140625.)
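
If you’d rather let the machine do that arithmetic, here is a tiny C sketch (my own check, not part of the conversion itself) that sums those powers of two and compares the result to the hexadecimal constant above:

#include <stdio.h>

int main(void)
{
    /* 2^1 + 2^0 + 2^-3 + 2^-6, the powers of two for the 1 bits of 11.001001 */
    double sum = 2.0 + 1.0 + 0.125 + 0.015625;
    printf("%.6f %d\n", sum, sum == 0x1.92p+1);   /* prints "3.140625 1" */
    return 0;
}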

Single-Precision

pi = 1.10010010000111111011011 x 2^1
   = 0x1.921fb6p+1

This equals 3.1415927410125732421875, which approximates pi accurately to about 8 decimal digits.

Double-Precision

pi = 1.1001001000011111101101010100010001000010110100011 x 2^1
   = 0x1.921fb54442d18p+1

This equals 3.141592653589793115997963468544185161590576171875, which approximates pi accurately to about 16 decimal digits.
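
On typical C implementations (a common convention, though not something the C standard guarantees), the M_PI constant in <math.h> is exactly this correctly rounded double-precision value; a quick hedged check:

#include <stdio.h>
#include <math.h>   /* M_PI is a common extension (POSIX/GNU), not ISO C */

int main(void)
{
    /* On most implementations M_PI is the correctly rounded double-precision pi */
    printf("%d\n", M_PI == 0x1.921fb54442d18p+1);   /* typically prints 1 */
    return 0;
}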

Extended-Precision

pi = 1.100100100001111110110101010001000100001011010001100001000110
     101 x 2^1
   = 0x1.921fb54442d1846ap+1

This equals

3.14159265358979323851280895940618620443274267017841339111328125

which approximates pi accurately to about 20 decimal digits.

Quadruple-Precision

pi = 1.100100100001111110110101010001000100001011010001100001000110
     1001100010011000110011000101000101110000000110111 x 2^1
   = 0x1.921fb54442d18469898cc51701b8p+1

This equals

3.141592653589793238462643383279502797479068098137295573004504331874296718662975536062731407582759857177734375

which approximates pi accurately to about 35 decimal digits.
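
C has no universally available quadruple-precision type, but if you’re using GCC, the __float128 type can hold this value; here is a sketch under that assumption (the ‘q’ literal suffix and quadmath_snprintf are GCC extensions, and you must link with -lquadmath):

#include <stdio.h>
#include <quadmath.h>

int main(void)
{
    /* Quadruple-precision pi, written as the hex constant from the text above */
    __float128 pi_q = 0x1.921fb54442d18469898cc51701b8p+1q;

    char buf[64];
    quadmath_snprintf(buf, sizeof buf, "%.36Qg", pi_q);   /* ~36 significant digits */
    printf("%s\n", buf);
    return 0;
}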

Euler’s Number (e)

Here are the first 50 decimal digits of e:

2.7182818284590452353602874713526624977572470936999…

Here are the first 128 bits of e:

10.101101111110000101010001011000101000101011101101001010100110101010111111011100010101100010000000100111001111010011110011110001...

The rounding bits are in the same positions as they are for pi: bits 12, 25, 54, 65, and 114.

Here are the correctly rounded values of e in each of the five levels of precision:

Half-Precision

e = 1.010111 x 2^1
  = 0x1.5cp+1

This equals 2.71875, which approximates e accurately to about 4 decimal digits.

Single-Precision

e = 1.010110111111000010101 x 2^1
  = 0x1.5bf0a8p+1

This equals 2.71828174591064453125, which approximates e accurately to about 8 decimal digits.

Double-Precision

e = 1.0101101111110000101010001011000101000101011101101001 x 2^1
  = 0x1.5bf0a8b145769p+1

This equals 2.718281828459045090795598298427648842334747314453125, which approximates e accurately to about 16 decimal digits.
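
Decimal strings like the one above are easy to reproduce, since, as noted earlier, every binary floating-point number has an exact decimal representation. A small sketch, assuming a C library (glibc, for example) that does exact binary-to-decimal conversion in printf:

#include <stdio.h>

int main(void)
{
    /* The double-precision value of e, printed to its full 51 decimal places */
    printf("%.51f\n", 0x1.5bf0a8b145769p+1);
    return 0;
}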

Extended-Precision

e = 1.010110111111000010101000101100010100010101110110100101010011
    011 x 2^1
  = 0x1.5bf0a8b145769536p+1

This equals

2.71828182845904523542816810799394033892895095050334930419921875

which approximates e accurately to about 20 decimal digits.

Quadruple-Precision

e =
1.010110111111000010101000101100010100010101110110100101010011010101011111101110001010110001000000010011100111101 x 2^1
  = 0x1.5bf0a8b1457695355fb8ac404e7ap+1

This equals

2.71828182845904523536028747135266231435842186719354886266923086032766716801933881697550532408058643341064453125

which approximates e accurately to about 34 decimal digits.
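
If you want to see constants like these in hex form without working them out by hand, C99’s %a conversion will print any value as a hexadecimal floating-point constant. Here is a sketch for e (my own example; note that long double matches the extended-precision value only on platforms where it is the x87 80-bit format, and that the exact text %a prints can vary between C libraries even though the value printed is exact):

#include <stdio.h>

int main(void)
{
    /* e in single, double, and (on x86) extended precision, from the text above */
    float       e_f = 0x1.5bf0a8p+1f;
    double      e_d = 0x1.5bf0a8b145769p+1;
    long double e_l = 0x1.5bf0a8b145769536p+1L;

    printf("single:   %a\n",  e_f);   /* promoted to double; the value is unchanged */
    printf("double:   %a\n",  e_d);
    printf("extended: %La\n", e_l);
    return 0;
}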

Addendum (July 2016)

I wanted to add a new perspective following the publication of my articles “Number of Digits Required For Round-Trip Conversions” and “Decimal Precision of Binary Floating-Point Numbers”.

For starters, I was inconsistent in how I evaluated decimal precision for the ten conversions. Using the definition that decimal precision is the maximum number of matching digits after rounding, here are the new results (changed values are marked with an asterisk):

Precision of pi and e By Floating-Point Format

Format                 pi (Digits)   e (Digits)
Half-precision              3*            3*
Single-precision            8             7*
Double-precision           16            16
Extended-precision         20            20
Quadruple-precision        34*           34

The four that changed all have one less digit of precision than I had stated.

Furthermore, I’d now like to consider the representations of pi and e in the context of the precision of their formats. For each binary precision format you can compute a range of equivalent decimal precisions:

Range of Precision By Floating-Point Format

Format                 Decimal Precision (Digits)
Half-precision                  3-4
Single-precision                6-8
Double-precision               15-16
Extended-precision             18-20
Quadruple-precision            33-35
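
One simple way to reproduce the endpoints of this table is with the bits-to-digits factor log10(2) ≈ 0.30103. This is a back-of-the-envelope sketch of my own that happens to agree with all five rows; see the two articles mentioned at the start of this addendum for the careful treatment:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Significant bits of the half, single, double, extended, and quad formats */
    const int p[] = { 11, 24, 53, 64, 113 };
    const char *name[] = { "half", "single", "double", "extended", "quadruple" };

    for (int i = 0; i < 5; i++) {
        int lo = (int)floor((p[i] - 1) * log10(2.0));   /* fewest matching digits */
        int hi = (int)ceil(p[i] * log10(2.0));          /* most matching digits   */
        printf("%-10s %d-%d\n", name[i], lo, hi);       /* e.g., "double 15-16"   */
    }
    return 0;
}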

Those values represent the range of decimal precision over the whole format; individual power-of-two segments of the range have their own precision. For example, in the segment [2^1, 2^2), which includes both pi and e, the precision is:

Precision Over [2^1, 2^2) By Floating-Point Format

Format                 Decimal Precision (Digits)
Half-precision                   3
Single-precision                 7
Double-precision                16
Extended-precision              19
Quadruple-precision             34

You can see that some of the conversions of pi and e are more accurate than their segment allows. For example, the single-precision value of pi is accurate to 8 digits, but only 7 digits of precision are provided in the segment in which it resides. I call this coincidental precision. pi’s proximity to a floating-point number — along with some beneficial rounding — makes its precision look greater than that for numbers in its segment in general.
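
You can see the coincidence numerically. The single-precision value differs from pi by only about 8.7e-8, so rounding both to 8 significant digits gives 3.1415927 in each case; a minimal sketch:

#include <stdio.h>
#include <math.h>

int main(void)
{
    float  pi_single = 0x1.921fb6p+1f;         /* correctly rounded single-precision pi */
    double pi_double = 3.14159265358979323846; /* far more accurate reference value     */

    /* The absolute error is about 8.7e-8 (well under half an ulp, which is
       2^-23, about 1.2e-7, in [2, 4)), small enough that 8 significant
       decimal digits match after rounding. */
    printf("single: %.10f\n", pi_single);
    printf("error:  %.3e\n", fabs(pi_double - (double)pi_single));
    return 0;
}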


10 comments

  1. this is pretty pointless…why does it matter how to convert to different floating point precisions?

  2. @Anonymous (sorry for the delay — your comment was marked as spam),

    This article was a response to regular searches on my site for “pi in binary” and “e in binary”. My approach was to show how they look in binary in a computer, in IEEE floating-point in particular. This allowed me to give examples of correct rounding and to show how different levels of binary precision correspond to different levels of decimal precision. And as a side effect, I’ve given hex constants that can be copied and used in code that requires correctly rounded values of these constants.

  3. @vent,

    If I understand your question, just use a formula that calculates e. Any constants in the formula will be automatically converted to binary, and of course calculations will be done in binary.

  4. reply to Anonymous: it might be pointless to someone not having any idea what is truly behind the floating point number representation and what effect it has on correct calculations. To find out, one must do tests for different precisions (single, double, quad). The author made it easier to find out pi and e down to the last bit for each precision mode, rather than for one to reinvent the wheel. I really admire him. He put so much effort into the article. People like him make a difference.

  5. I first learned binary/quinary numeral systems when I was bored by 4th grade schooling. So I joined the Early Bird science club and went to the library a lot to get it right. Since then I have taught many children as well as older people; some could barely read, but I was teaching them binary with ease, and they still show me they kept learning later and used it in daily life to count instead of our base 10. I teach mime, knife throwing, and martial arts but am intrigued by numbers still, and I just turned 70. I continue to search for ways to use binary etc. to ameliorate my life in ways I have not discovered but feel close to something great and fun if not user friendly. I am an explorer and like scientific forms of getting to an answer that is correct. Just want to compliment your approach to teaching. I always use my fingers to teach as there may not be objects or pen and pad to write where I have been. Thank you again, Bernie Bang, Mime.

  6. “this is pretty pointless…why does it matter how to convert to different floating point precisions?”

    Finite precision constraints lead to rounding errors. A better understanding of the floating point representation can help avoid pitfalls like naively comparing floating point numbers, and can help with properly ordering floating point operations. What is presented here is a nice reference for defining constants with the correct precision for the primitive type container.

  7. Nothing is useless; people just want others to do the work for them and when it’s not what they expected they have a tantrum.

    Like a full page of: 11.1,100,1,101,1001 etc, is what I was hoping for so I could see the distinction in each decimal digit, but I’m sure someone else would have been equally displeased with that lol. No worries. Take care, bro and don’t sweat the small stuff.

  8. I believe combining Einstein’s theory, the laws of thermodynamics, and binary code will lead us to a true form of A.I. I do not think we can create true A.I., because no matter how random we try to make any computational system, at its very basic level it still must be programmed by us humans. I do not believe true inspiration is yet understood or able to be mathematically represented.

Comments are closed.

Copyright © 2008-2024 Exploring Binary
