Excluding 0 and 1, it takes more digits to express an integer in binary than in decimal. How many more? The commonly given answer is log2(10) ≈ 3.32 times as many. But this is misleading; the ratio actually depends on the specific integer. So where does ‘log2(10) bits per digit’ come from? It’s a theoretical limit, a value that’s approached only as integers grow large. I’ll show you how to derive it.
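Before the derivation, here's a quick numerical check. This is a minimal Python sketch (the helper name `digit_ratio` is mine, not part of what follows) showing that the ratio bounces around for small integers and only creeps toward log2(10) as integers get large.

```python
import math

def digit_ratio(n: int) -> float:
    """Ratio of binary digits to decimal digits needed to write n."""
    return n.bit_length() / len(str(n))

# Small integers give ratios well away from 3.32; huge ones approach it.
for n in [2, 9, 99, 1_000, 10**6, 10**20, 10**100]:
    print(f"{len(str(n)):>4} decimal digits, {n.bit_length():>4} bits -> ratio {digit_ratio(n):.4f}")

print(f"log2(10) = {math.log2(10):.4f}")
```

For example, 9 needs four bits but one decimal digit (ratio 4.0), while 99 needs seven bits and two digits (ratio 3.5); a 101-digit number like 10^100 needs 333 bits, a ratio of about 3.297, already close to the limit.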