# Gangnam Style Video Overflows YouTube Counter

On Monday, Psy’s Gangnam Style video exceeded the limit of YouTube’s view counter; this is what Google had to say (hat tip: Digg):

“We never thought a video would be watched in numbers greater than a 32-bit integer (=2,147,483,647 views)…”

2,147,483,647 is 2³¹ – 1, the maximum positive value a 32-bit signed integer can contain.

Google has since fixed the counter, but they didn’t say how (32-bit unsigned integer? 64-bit integer?). (Update: By deduction from this Wall Street Journal article, Google is now using 64-bit signed integers — although the number they cite is 2⁶³, not 2⁶³ – 1.)

The interesting thing is the “Easter egg” Google placed. If you hover your mouse over the counter, it spins like a slot machine; if you hold the mouse there long enough it will show a negative number. But the negative number is not what I expected. Is there a bug in the Easter egg?

Here is a screenshot of the normal count:

Here is a screenshot of the negative count:

The 32-bit unsigned value 2,152,382,740, when treated as a two’s complement signed value, is -2,142,584,556. (In binary, 2,152,382,740 is 10000000010010101100000100010100; that leading 1-bit means it’s a negative number in two’s complement.) Why does the counter show -2,142,584,554?

Here is a little C program to test this yourself.

```
#include <stdio.h>

int main(void)
{
    unsigned int gCount = 2152382740;
    printf("%d\n", (int) gCount); // Prints -2142584556
    return 0;
}
```

(I loaded the video several times; the count is always off by 2.)

1. Pascal says:

Hello, Rick.

It is my time to make you sorry that I am reading your blog, as I bring news that printf("%d\n", gCount); is undefined behavior in C. Clause 7.21.6.1:9 makes this clear: “[…] If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.”

(The correct type for the conversion specifier %d is int.)

The program can be fixed by writing printf("%d\n", (int) gCount), which is longer than the original version, or by declaring gCount as int, which is shorter. Either way an overflow occurs in a conversion (for the usual 32-bit compilers). The result of this overflow is implementation-defined, which is much better than undefined behavior.

2. Pascal,

I wanted to print an unsigned int as an int, so the %d/unsigned int mismatch was intentional. I knew it was otherwise a mistake, but I didn’t know it was undefined; I was getting the answer I expected. (I ran this in Visual Studio.) I updated the code to cast gCount to int in the printf (it still gives me the same answer). In any case, this undefined/implementation-defined behavior illustrates my point.

You are always welcome here. Thanks.

3. Jörg says:

C11 Standard says:

6.2.5 Types

6 For each of the signed integer types, there is a corresponding (but different) unsigned integer type (designated with the keyword unsigned) that uses the same amount of storage (including sign information) and has the same alignment requirement.

9 The range of nonnegative values of a signed integer type is a subrange of the corresponding unsigned integer type, and the representation of the same value in each type is the same.

So converting unsigned to int makes no practical difference for a printf argument (though it does make sense to silence a warning). But the conversion could even raise an implementation-defined signal, as described in:

6.3.1.3 Signed and unsigned integers

1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.

2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.

3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.