I’ve discovered that decimal floating-point numbers with 15 or fewer significant digits don’t always round-trip through SQLite. Consider this example, executed on version 3.7.3 of the pre-compiled SQLite command shell:

```
sqlite> create table t1(d real);
sqlite> insert into t1 values(9.944932e+31);
sqlite> select * from t1;
9.94493200000001e+31
```

SQLite represents a decimal floating-point number that has *real* affinity as a double-precision binary floating-point number — a double. A decimal number with 15 or fewer significant digits is supposed to be recoverable from its double-precision representation. In SQLite, however, this guarantee is not always met, because its floating-point to decimal conversion routine is implemented in limited-precision floating-point arithmetic.

## 15-Significant-Digit Decimal Numbers Are Supposed to Round-Trip

If you assign a decimal number with 15 or fewer significant digits to a double, you can recover it by printing the double to 15 significant digits. For example, consider this C program, compiled and run using Visual C++:

```
#include <stdio.h>

int main(void)
{
    printf("%.15g\n", 9.87e+31);
    printf("%.15g\n", 9.944932e+31);
    printf("%.15g\n", 8.948471e+15);
    printf("%.15g\n", 1.23456789012345e+300);
    printf("%.15g\n", 1.23456e-300);
    printf("%.15g\n", 0.000001);
}
```

Its output shows that the decimal literals — which are converted to doubles before the calls to printf() — are recovered:

```
9.87e+031
9.944932e+031
8.948471e+015
1.23456789012345e+300
1.23456e-300
1e-006
```

(Significant digits are recoverable; the original formatting of a number is recoverable only by chance.)

Round-trip conversion is possible only when the underlying decimal to floating-point and floating-point to decimal conversions are correct — or at least “close” to correct so that any errors are masked by rounding.

## Windows vs. Linux

The Windows build of SQLite uses double-precision calculations in its floating-point to decimal conversion; the Linux build of the same source code uses extended-precision calculations. As a result, a given number may convert differently in the two environments. The example I showed in the introduction, 9.944932e+31, converts incorrectly on both Windows and Linux. But two different examples, 9.87e+31 and 8.948471e+15, convert correctly in one environment and incorrectly in the other:

On Windows:

```
sqlite> create table t1(d real);
sqlite> insert into t1 values(9.87e+31);
sqlite> insert into t1 values(8.948471e+15);
sqlite> select * from t1;
9.87e+31
8.94847100000001e+15
```

On Linux:

```
sqlite> create table t1(d real);
sqlite> insert into t1 values(9.87e+31);
sqlite> insert into t1 values(8.948471e+15);
sqlite> select * from t1;
9.87000000000001e+31
8.948471e+15
```

(Both examples were run on my Intel Core Duo processor.)

## A Peek At the SQLite Source Code

In the SQLite source code file sqlite3.c, doubles retrieved from the database are printed with this call:

```
sqlite3_snprintf(nByte, pMem->z, "%!.15g", pMem->r);
```

(The hardcoded "15g" means that output is *limited* to 15 significant digits.)

sqlite3_snprintf() calls sqlite3VXPrintf(), which, among other things, performs this calculation on the double, stored in *realvalue*:

```
for(idx=precision, rounder=0.5; idx>0; idx--, rounder*=0.1){}
...
realvalue += rounder;
```

The variable *precision* has a value of 14, so rounder is calculated as approximately 5.0000000000000039e-15; this is slightly above the desired value, which is 5e-15 (5e-15 is not representable in binary floating-point). The error made by rounding with this value, whether by itself or in combination with the imprecise calculations that precede it, causes the incorrect conversions. (Thanks to Michael Black for discovering this.)

## How I Found These Examples

I found these examples by building the SQLite source code with a C test program I wrote. The test program does the following in a loop:

- Generates a random decimal string of 15 digits or less.
- Converts the decimal string to a double using David Gay’s strtod() function (to ensure correct decimal to floating-point conversion).
- Calls sqlite3_snprintf() to print the double using SQLite’s conversion algorithm.
- Compares the output of sqlite3_snprintf() with David Gay’s dtoa() function, which I formatted to match SQLite’s formatting. (I didn’t compare the output to the original decimal string, which was in yet another format; I assume that dtoa() always returns the significant digits of the original decimal string.)

I verified that the examples highlighted in this article produced errors when passed through the SQLite command shell as well as through the SQLite C API. Under the covers, I also verified that the original decimal to floating-point conversions were done correctly by SQLite, to rule that out as the reason for the unexpected results. I did this by printing *realvalue* as a hexadecimal floating-point constant and comparing it to the conversion done by David Gay’s strtod().

## Bug Report

Following SQLite’s policy for reporting bugs, I reported this problem in two threads on the SQLite mailing list. Presumably, the fix will be to do the floating-point to decimal conversion in higher-precision arithmetic — or at least to compute the “rounder” in higher precision.

## Questions

I’m curious about how this works on the Mac, and how people use reals in SQL:

- If you have access to SQLite on a Mac, can you try the examples and let me know what you find?
- If you have used floats in SQLite (or SQL in general), can you tell me what your experience has been, specifically as it relates to correct rounding and round-trip conversions?

Thanks!

## Comments

**Michael Black:** It’s not “15 significant digits”… it’s “15 decimal places”… actually 16 significant digits, as there’s always a digit in front of the decimal point for that format. Don’t confuse it with the fractional portion of the printf formats. For example, %6.3f is up to 6 significant digits, not 3. It could actually be anywhere from 1 to 6 significant digits, depending on the number: .001 is only 1 significant digit; 123.456 is 6 digits. The decimal point doesn’t matter.

Michael,

I believe “15 significant digits” is the right term. I’m not differentiating between digits to the left or right of the decimal point. Consider two integers, one 15 digits and one 16 digits; SQLite prints them both to 15 significant digits:

```
sqlite> create table t1(d real);
sqlite> insert into t1 values(123456789012345);
sqlite> insert into t1 values(1234567890123456);
sqlite> select * from t1;
123456789012345.0
1.23456789012346e+15
```

When SQLite prints in scientific notation, there are only 14 decimal places maximum.

Addendum: Your confusion might be with the ".15g" in the printfs. The printf manual page says this about the "g" format specifier:

“The precision specifies the number of significant digits.”

Contrast this with the "f" format specifier, whose precision counts digits after the decimal point rather than significant digits: for a small enough value, "f" prints 12 leading 0s and only 3 significant digits, while "g" prints all 5 significant digits.

**Michael Black:** Learn something new every day… I never realized the difference between %g and %f… guess I’m so used to people confusing the %f type of problem.

Your example is a good one and explains the concept well for others hopefully too.

Thanks for the edification….