Gangnam Style Video Overflows YouTube Counter

On Monday, Psy’s Gangnam Style video exceeded the limit of YouTube’s view counter; this is what Google had to say (hat tip: Digg):

“We never thought a video would be watched in numbers greater than a 32-bit integer (=2,147,483,647 views)…”

2,147,483,647 is 2^31 – 1, the maximum positive value a 32-bit signed integer can contain.

Google has since fixed the counter, but they didn’t say how (32-bit unsigned integer? 64-bit integer?). (Update: By deduction from this Wall Street Journal article, Google is now using 64-bit signed integers — although the number they cite is 2^63, not 2^63 – 1.)

The interesting thing is the “Easter egg” Google placed. If you hover your mouse over the counter, it spins like a slot machine; if you hold the mouse there long enough, it will show a negative number. But the negative number is not what I expected. Is there a bug in the Easter egg?
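For the record, here is a minimal C sketch (mine, not YouTube’s code) of the wraparound a 32-bit signed counter would show; the final conversion back to a signed type is technically implementation-defined in C, but on two’s complement hardware it produces the negative value I was expecting:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int32_t views = INT32_MAX;   /* 2,147,483,647 = 2^31 - 1 */

        /* Add one more view in unsigned arithmetic (signed overflow is
           undefined behavior in C); converting back to int32_t wraps on
           two's complement machines. */
        int32_t wrapped = (int32_t)((uint32_t)views + 1u);

        printf("counter at the limit: %d\n", views);    /* 2147483647 */
        printf("one view later:       %d\n", wrapped);  /* -2147483648 */
        return 0;
    }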

Continue reading “Gangnam Style Video Overflows YouTube Counter”

Decimal/Binary Conversion of Integers in App Inventor

To complete my exploration of numbers in App Inventor, I’ve written an app that converts integers between decimal and binary. It uses the standard algorithms, which I’ve just translated into blocks.
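For readers who prefer text to blocks, here are the same standard algorithms sketched in C (my code, not the app’s): repeated division by two for decimal to binary, and multiply-and-add for binary to decimal.

    #include <stdio.h>

    /* Decimal (machine integer) to binary string: repeated division by 2. */
    void to_binary(unsigned int n, char *buf)
    {
        char tmp[64];
        int i = 0;

        if (n == 0)
            tmp[i++] = '0';
        while (n > 0) {
            tmp[i++] = '0' + (n % 2);   /* the remainder is the next bit */
            n /= 2;
        }
        /* The bits come out least significant first; reverse them. */
        for (int j = 0; j < i; j++)
            buf[j] = tmp[i - 1 - j];
        buf[i] = '\0';
    }

    /* Binary string to machine integer: multiply-and-add. */
    unsigned int from_binary(const char *s)
    {
        unsigned int n = 0;

        for (; *s; s++)
            n = n * 2 + (unsigned int)(*s - '0');
        return n;
    }

    int main(void)
    {
        char buf[64];

        to_binary(214, buf);
        printf("214 in binary: %s\n", buf);                            /* 11010110 */
        printf("11010110 in decimal: %u\n", from_binary("11010110"));  /* 214 */
        return 0;
    }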

Continue reading “Decimal/Binary Conversion of Integers in App Inventor”

App Inventor Computes With Big Integers

I recently wrote that App Inventor represents its numbers in floating-point. I’ve since discovered something curious about integers. When typed into math blocks, they are represented in floating-point; but when generated through calculations, they are represented as arbitrary-precision integers — big integers.
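The distinction matters once integers outgrow a double’s 53-bit significand. Here is a small C illustration (not App Inventor code) of what floating-point alone would lose:

    #include <stdio.h>

    int main(void)
    {
        /* 2^53 + 1 = 9007199254740993 is the first integer a double cannot
           represent; it rounds to 2^53. An arbitrary-precision integer type
           keeps it exactly. */
        double    d = 9007199254740993.0;    /* rounds to 9007199254740992 */
        long long i = 9007199254740993LL;    /* exact */

        printf("as a double:   %.1f\n", d);  /* 9007199254740992.0 */
        printf("as an integer: %lld\n", i);  /* 9007199254740993 */
        return 0;
    }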

Continue reading “App Inventor Computes With Big Integers”

Floating-Point Will Still Be Broken In the Year 2091

Variants of the question “Is floating point math broken?” are asked every day on stackoverflow.com. I don’t think the questions will ever stop, not even by the year 2091 (that’s the year that popped into my head after just reading the gazillionth such question).

A representative floating-point question asked on stackoverflow.com in 2009, projected to be just as popular in 2091
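The canonical example behind most of those questions, shown here in C for concreteness (any language that uses IEEE 754 double-precision shows the same thing):

    #include <stdio.h>

    int main(void)
    {
        double sum = 0.1 + 0.2;

        printf("%.17g\n", sum);                               /* 0.30000000000000004 */
        printf("%s\n", sum == 0.3 ? "equal" : "not equal");   /* not equal */
        return 0;
    }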

Continue reading “Floating-Point Will Still Be Broken In the Year 2091”

Testing Some Edge Case Conversions In App Inventor

After discovering that App Inventor represents numbers in floating-point, I wanted to see how it handled some edge case decimal/floating-point conversions. In one group of tests, I gave it numbers that were converted to floating-point incorrectly in other programming languages (I include the famous PHP and Java numbers). In another group of tests, I gave it numbers that, when converted to floating-point and back, demonstrate the rounding algorithm used when printing halfway cases. It turns out that App Inventor converts all examples correctly, and prints numbers using the round-half-to-even rule.
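If you want to try analogous checks against your own C library’s strtod(), here is a small test harness (my sketch, assuming a correctly rounding library; the first two inputs are the values I believe most people mean by the famous PHP and Java numbers, and the third is an input-side halfway case that should round to the even neighbor):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* The inputs that exposed conversion bugs in PHP and Java; a correct
           conversion rounds the first down to the largest subnormal double
           and the second up to DBL_MIN. */
        printf("%.17g\n", strtod("2.2250738585072011e-308", NULL));
        printf("%.17g\n", strtod("2.2250738585072012e-308", NULL));

        /* 2^53 + 1 lies exactly halfway between two doubles; ties-to-even
           picks 2^53 = 9007199254740992. */
        printf("%.17g\n", strtod("9007199254740993", NULL));
        return 0;
    }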

Continue reading “Testing Some Edge Case Conversions In App Inventor”

Numbers In App Inventor Are Stored As Floating-Point

I am exploring App Inventor, an Android application development environment for novice programmers. I am teaching it to my kids, as an introductory step towards “real” app development. While playing with it I wondered: are its numbers implemented in decimal? No, they aren’t. They are implemented in double-precision binary floating-point. I put together a few simple block programs to demonstrate this, and to expose the usual floating-point “gotchas”.

App Inventor Blocks To Display 0.1 to 17 Digits
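The same display experiment, written in C rather than blocks, for anyone who wants to reproduce it outside App Inventor:

    #include <stdio.h>

    int main(void)
    {
        double d = 0.1;

        /* 17 significant digits are enough to distinguish every double;
           they reveal the value 0.1 actually rounds to. */
        printf("%.17g\n", d);   /* 0.10000000000000001 */
        return 0;
    }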

Continue reading “Numbers In App Inventor Are Stored As Floating-Point”

GCC Avoids Double Rounding Errors With Round-To-Odd

GCC was recently fixed so that its decimal to floating-point conversions are done correctly; it now calls the MPFR function mpfr_strtofr() instead of using its own algorithm. However, GCC still does its conversion in two steps: first it converts to an intermediate precision (160 or 192 bits), and then it rounds that result to a target precision (53 bits for double-precision floating-point). That is double rounding — how does it avoid double rounding errors? It uses round-to-odd rounding on the intermediate result.
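GCC’s real conversion works at 160 or more bits; here is a toy-precision sketch (my code, not GCC’s) of why the trick works. Rounding the 6-bit significand 1.00101 to 3 bits through a 5-bit intermediate gives the wrong answer if both steps use round-to-nearest, but the right answer if the intermediate step uses round-to-odd:

    #include <stdio.h>
    #include <stdint.h>

    /* Drop the low "drop" bits of an integer significand, rounding to
       nearest with ties-to-even. */
    static uint64_t round_nearest_even(uint64_t sig, int drop)
    {
        uint64_t kept = sig >> drop;
        uint64_t rest = sig & ((UINT64_C(1) << drop) - 1);
        uint64_t half = UINT64_C(1) << (drop - 1);

        if (rest > half || (rest == half && (kept & 1)))
            kept++;
        return kept;
    }

    /* Round-to-odd: truncate, then force the low bit to 1 if any of the
       discarded bits were nonzero. */
    static uint64_t round_to_odd(uint64_t sig, int drop)
    {
        uint64_t kept = sig >> drop;

        if (sig & ((UINT64_C(1) << drop) - 1))
            kept |= 1;
        return kept;
    }

    int main(void)
    {
        uint64_t x = 37;  /* binary 100101, i.e., the significand 1.00101 */

        uint64_t direct  = round_nearest_even(x, 3);                        /* 101 = 5 */
        uint64_t twice   = round_nearest_even(round_nearest_even(x, 1), 2); /* 100 = 4: double rounding error */
        uint64_t via_odd = round_nearest_even(round_to_odd(x, 1), 2);       /* 101 = 5: correct */

        printf("rounded directly to 3 bits:            %llu\n", (unsigned long long)direct);
        printf("via 5 bits, nearest-even twice:        %llu\n", (unsigned long long)twice);
        printf("via 5 bits, round-to-odd then nearest: %llu\n", (unsigned long long)via_odd);
        return 0;
    }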

Continue reading “GCC Avoids Double Rounding Errors With Round-To-Odd”

Hour of Code: Binary Numbers and Binary Addition

I want to contribute to the Hour of Code event happening now during Computer Science Education Week.

I don’t write about computer programming, but I do write extensively about how computers work — in particular, about how they do arithmetic with binary numbers. For your “hour of code” I’d like to introduce you to binary numbers and binary addition. I’ve selected several of my articles for you to read, and I’ve written some exercises you can try on my online calculators.
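If you would also like to see the carry mechanism in code, here is one small illustration in C (a supplement of mine, not part of the linked articles): XOR produces the sum bits, AND shifted left produces the carries, and the loop repeats until no carries remain.

    #include <stdio.h>

    /* Add two numbers the way you would on paper in binary: sum the bits,
       then add in the carries, repeating until there are none left. */
    unsigned int binary_add(unsigned int a, unsigned int b)
    {
        while (b != 0) {
            unsigned int carry = (a & b) << 1;  /* positions that carry out */
            a = a ^ b;                          /* bitwise sum, ignoring carries */
            b = carry;                          /* now add the carries back in */
        }
        return a;
    }

    int main(void)
    {
        /* 1011 (11) + 0110 (6) = 10001 (17) */
        printf("%u\n", binary_add(11, 6));
        return 0;
    }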

Continue reading “Hour of Code: Binary Numbers and Binary Addition”

real.c Rounding Is Perfect (GCC Now Converts Correctly)

GCC, the GNU Compiler Collection, recently fixed this eight and a half year old bug: “GCC Bugzilla – Bug 21718: real.c rounding not perfect.” This bug was the cause of incorrect decimal string to binary floating-point conversions. I first wrote about it over three years ago, and then recently in September and October. I also just wrote a detailed description of GCC’s conversion algorithm last month.

This fix, which will be available in version 4.9, scraps the old algorithm and replaces it with a call to MPFR function mpfr_strtofr(). I tested the fix on version 4.8.1, replacing its copy of gcc/real.c with the fixed one. I found no incorrect conversions.
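For reference, here is a minimal example of the MPFR call GCC now relies on (my test snippet, not GCC’s code; build with -lmpfr -lgmp). It converts a decimal string straight to a 53-bit value; GCC itself converts at a wider intermediate precision and then rounds to the target format.

    #include <stdio.h>
    #include <mpfr.h>

    int main(void)
    {
        mpfr_t x;

        mpfr_init2(x, 53);   /* double-precision significand: 53 bits */

        /* Correctly rounded decimal string to binary floating-point conversion */
        mpfr_strtofr(x, "0.1", NULL, 10, MPFR_RNDN);

        printf("%.17g\n", mpfr_get_d(x, MPFR_RNDN));   /* 0.10000000000000001 */

        mpfr_clear(x);
        return 0;
    }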

Continue reading “real.c Rounding Is Perfect (GCC Now Converts Correctly)”

Gay’s strtod() Returns Zero For Inputs Just Above 2^-1075

While running some of GCC’s string to double conversion testcases I discovered a bug in David Gay’s strtod(): it converts some very small subnormal numbers incorrectly. Unlike numbers 2^-1075 or smaller, which should convert to zero under round-to-nearest/ties-to-even rounding, numbers between 2^-1075 and 2^-1074 should convert to 2^-1074, the smallest positive number representable in double-precision binary floating-point. strtod() correctly converts the former to 0, but it incorrectly converts the latter to 0 as well.

(Update 11/25/13: This bug has been fixed.)
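A quick way to check your own C library for the same class of inputs (assuming it converts correctly):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* 3e-324 lies between 2^-1075 and 2^-1074: it should round up to
           2^-1074, the smallest positive double (about 4.94e-324). */
        printf("%.17g\n", strtod("3e-324", NULL));

        /* 2e-324 lies below 2^-1075: it should round to zero. */
        printf("%.17g\n", strtod("2e-324", NULL));
        return 0;
    }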

Continue reading “Gay’s strtod() Returns Zero For Inputs Just Above 2^-1075”

College Notebook: When I Was Taught Floating-Point

In my article “Floating-Point Questions Are Endless on stackoverflow.com” I showed examples of the many questions asked that demonstrate lack of knowledge of the most basic property of floating-point — that not all decimal values are representable in binary. In response to a reader’s comment on my article I wrote:

It would be interesting to know how it’s taught today (it’s been a very long time since I was taught it). I can’t imagine though that the person teaching it wouldn’t say — within a sentence or two of saying “floating-point” — that it “can’t represent all decimal numbers accurately”.

That prompted me to look through my box of thirty-plus-year-old college (undergraduate) notebooks. I found notebooks for four classes in which I was taught floating-point. The notes from three of those classes confirm what I thought — that we were warned early of the decimal/binary mismatch. But in the first class of the four — the beginner’s class — it’s less clear what we were told. I’ll show you images of the relevant excerpts from my notes. (I notice I had some elements of cursive in my handwriting back then.)

Continue reading “College Notebook: When I Was Taught Floating-Point”
