I wrote a simple byte-to-decimal converter app less than two months into learning Jetpack Compose. Now that I have more experience with Compose — by developing a real app and by participating in the #compose channel on Slack (login required) — I wanted to update this demo app to reflect my current understanding of best practices.
IntelliJ IDEA has a code inspection for Kotlin that will warn you if a decimal floating-point literal exceeds the precision of its type (Float or Double). It will suggest an equivalent literal (one that maps to the same binary floating-point number) that has fewer digits, or has the same number of digits but is closer to the floating-point number.
For Doubles, for example, every literal with more than 17 digits should be flagged, since it never takes more than 17 digits to specify any double-precision binary floating-point value. Literals with 16 or 17 digits should be flagged if there is a replacement that is shorter or closer. And no literal with 15 digits or fewer should ever be flagged, since doubles have 15 digits of precision.
But IntelliJ doesn’t always adhere to that, like when it suggests an 18-digit replacement for a 13-digit literal!
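To make the rule concrete, here’s a minimal illustration (the literals are my own, not taken from the inspection’s documentation):

```kotlin
fun main() {
    // 21-digit literal: exceeds a Double's precision, so the inspection
    // should flag it and suggest the shorter equivalent below.
    val a = 3.14159265358979323846

    // 16-digit literal that maps to the same binary double
    val b = 3.141592653589793

    println(a == b) // true: both parse to the same Double
}
```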
Which is bigger, 64! or 2^64? 64! is, since n! > 2^n for every integer n greater than or equal to 4 (a fact provable by induction). It’s also easy to just reason that 64! is bigger: 2^64 is 64 factors of 2, whereas 64! also has 64 factors, all but one of which (the 1) are 2 or greater.
When I saw this problem though, I wondered if I could solve it in another way: Could the factors of two alone in 64! be greater than 2^64? As it turns out, almost.
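By Legendre’s formula, the number of factors of 2 in 64! is ⌊64/2⌋ + ⌊64/4⌋ + ⌊64/8⌋ + ⌊64/16⌋ + ⌊64/32⌋ + ⌊64/64⌋ = 32 + 16 + 8 + 4 + 2 + 1 = 63, so they multiply to 2^63, just one factor of 2 shy of 2^64. A quick sketch to verify the count:

```kotlin
fun main() {
    // Count the factors of 2 in 64! by summing floor(64 / 2^i)
    // over i = 1, 2, 3, ... (Legendre's formula).
    var count = 0
    var power = 2
    while (power <= 64) {
        count += 64 / power
        power *= 2
    }
    println(count) // 63: the factors of two in 64! multiply to 2^63
}
```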
Do you prefer hexadecimal numbers written with uppercase letters (A-F) or lowercase letters (a-f)?
For example, do you prefer the integer 3102965292 written as B8F37E2C or b8f37e2c? Do you prefer the floating-point number 126.976 written as 0x1.fbe76cp6 or 0x1.FBE76Cp6?
I ran this poll in my sidebar, and after 96 responses, about 70% chose “prefer uppercase” and about 9% chose “prefer lowercase”. What do you think? (By the “depends on context” answer I meant contexts other than numeric values, like the memory representation of strings. For the purposes of this article, however, please answer with just numeric values in mind.)
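For reference, here’s a small sketch that prints both casings programmatically (my own illustration, using standard Kotlin/Java formatting and the values from the example above):

```kotlin
fun main() {
    val n = 3102965292L
    println("%x".format(n)) // b8f37e2c
    println("%X".format(n)) // B8F37E2C

    // Hexadecimal floating-point: Java prints lowercase by default
    val hexFloat = java.lang.Float.toHexString(126.976f)
    println(hexFloat)             // 0x1.fbe76cp6
    println(hexFloat.uppercase()) // 0X1.FBE76CP6
}
```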
I see these from time to time, but I don’t always capture them; here’s one I saw recently while playing a podcast:
(According to Castbox, this is an error in the ad and is out of their control.)
I’ve been learning Jetpack Compose and Kotlin (and Android for that matter), so I decided to create a simple binary conversion app to demonstrate how easy it is to create an (at least basic) UI in Compose.
(This app has been updated; see Jetpack Compose Byte Converter App: 2022 Version.)
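As a taste of how little code a basic Compose UI takes, here’s a minimal sketch in the same spirit (my own illustration, not the app’s actual code):

```kotlin
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

// Show an 8-bit binary string alongside its decimal value.
@Composable
fun ByteValue(bits: String) {
    val decimal = bits.toIntOrNull(radix = 2) ?: 0
    Text("$bits = $decimal")
}
```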
For my recent search for short examples of double rounding errors in decimal to double to float conversions I wrote a Kotlin program to generate and test random decimal strings. While this was sufficient to find examples, I realized I could do a more direct search by generating only decimal strings with the underlying double rounding error bit patterns. I’ll show you the Java BigDecimal based Kotlin program I wrote for this purpose.
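The key BigDecimal property is that its double constructor is exact: it captures the precise binary value of a double, so a candidate bit pattern can be rendered as a decimal string to test. A minimal sketch of just that step (not the full generator):

```kotlin
import java.math.BigDecimal

// BigDecimal(Double) is exact: it preserves the double's precise binary
// value rather than its rounded decimal rendering.
fun exactDecimal(d: Double): String = BigDecimal(d).toPlainString()

fun main() {
    println(exactDecimal(0.1))
    // 0.1000000000000000055511151231257827021181583404541015625
}
```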
In my previous exploration of double rounding errors in decimal to float conversions I showed two decimal numbers that experienced a double rounding error when converted to float (single-precision) through an intermediate double (double-precision). I generated the examples indirectly by setting bit combinations that forced the error, using their corresponding exact decimal representations. As a result, the decimal numbers were long (55 digits each). Mark Dickinson derived a much shorter 17-digit example, but I hadn’t contemplated how to generate even shorter numbers — or whether they existed at all — until Per Vognsen wrote me recently to ask.
The easiest way for me to approach Per’s question was to search for examples, rather than try to find a way to construct them. As such, I wrote a simple Kotlin program to generate decimal strings and check them. I tested all float-range (including subnormal) decimal numbers of 9 digits or fewer, and tens of billions of random 10- to 17-digit float-range (normal only) numbers. I found examples of 7- to 17-digit numbers that, when converted to float through a double, suffer a double rounding error.
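The check itself is simple: convert the decimal string to float directly (one rounding) and through an intermediate double (two roundings), then compare. A minimal sketch, using 7.038531e-26, a commonly cited double rounding example (my choice of test value, not necessarily one the program found):

```kotlin
// A double rounding error shows up as a difference between direct
// decimal-to-float conversion and decimal-to-double-to-float conversion.
fun hasDoubleRoundingError(s: String): Boolean =
    s.toFloat() != s.toDouble().toFloat()

fun main() {
    println(hasDoubleRoundingError("7.038531e-26")) // true
}
```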
Google is celebrating Leibniz’s 372nd birthday today, recognizing him for his writings on binary numbers and binary arithmetic:
The drawing shows the binary code for the ASCII characters that spell “Google”:
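Not the doodle itself, but here’s a quick way to reproduce those bits:

```kotlin
fun main() {
    // Print the 8-bit ASCII code for each character of "Google"
    for (c in "Google") {
        println("$c = ${c.code.toString(2).padStart(8, '0')}")
    }
}
```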
I’ve written about the formulas used to compute the number of decimal digits in a binary integer and the number of decimal digits in a binary fraction. In this article, I’ll use those formulas to determine the maximum number of digits required by the double-precision (double), single-precision (float), and quadruple-precision (quad) IEEE binary floating-point formats.
The maximum digit counts are useful if you want to print the full decimal value of a floating-point number (worst case format specifier and buffer size) or if you are writing or trying to understand a decimal to floating-point conversion routine (worst case number of input digits that must be converted).
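One way to see such counts empirically (a sketch, not the formulas from those articles) is to expand a double exactly with BigDecimal and count its digits:

```kotlin
import java.math.BigDecimal

// The exact expansion of a double: precision() counts significant digits,
// scale() counts digits after the decimal point.
fun digitCounts(d: Double) {
    val exact = BigDecimal(d)
    println("${exact.precision()} significant digits, ${exact.scale()} fraction digits")
}

fun main() {
    // The smallest subnormal double, 2^-1074:
    // 751 significant digits, 1074 fraction digits
    digitCounts(java.lang.Double.MIN_VALUE)
}
```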
The binary fraction 0.101 converts to the decimal fraction 0.625; the binary fraction 0.1010001 converts to the decimal fraction 0.6328125; the binary fraction 0.00111011011 converts to the decimal fraction 0.23193359375. In each of those examples, the binary fraction converts to a decimal fraction — that is, a terminating decimal representation — that has the same number of digits as the binary fraction has bits.
One digit per bit? We know that’s not true for binary integers. But it is true for binary fractions; every binary fraction of length n has a corresponding equivalent decimal fraction of length n.
This is the reason why you get all those “extra” digits when you print the full decimal value of an IEEE binary floating-point fraction, and why glibc strtod() and Visual C++ strtod() were once broken.
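To see the one-digit-per-bit rule in action, here’s a sketch that expands a binary fraction exactly; each bit in position i contributes 2^-i = 5^i/10^i, which is why an n-bit fraction (ending in a 1 bit) terminates in exactly n decimal digits:

```kotlin
import java.math.BigDecimal

// Expand a binary fraction string (the digits after the radix point) exactly.
fun binaryFractionToDecimal(bits: String): BigDecimal {
    val half = BigDecimal("0.5")
    var place = half               // value of the current bit position, 2^-i
    var result = BigDecimal.ZERO
    for (b in bits) {
        if (b == '1') result = result.add(place)
        place = place.multiply(half)
    }
    return result
}

fun main() {
    println(binaryFractionToDecimal("101"))         // 0.625 (3 bits, 3 digits)
    println(binaryFractionToDecimal("1010001"))     // 0.6328125 (7 bits, 7 digits)
    println(binaryFractionToDecimal("00111011011")) // 0.23193359375 (11 bits, 11 digits)
}
```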