The following section of code was designed to demonstrate a programming mistake, but it has a strange side effect that I cannot explain. It seems that when applying
-= 1.0
to an 'unsigned int', the variable wraps around from zero to the maximum value, as expected, and then continues to decrement. But on an 'unsigned long', the variable oscillates between 0 and the maximum value of an unsigned long.
I suspect this has something to do with the bit representation of the variables, or of -1.0 when converted to an unsigned int or long, but I cannot figure out why. Any ideas what makes the difference in this otherwise wrong code?
My compiler is GNU g++ on a MacBook.
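Here is a minimal version of the code in question (a sketch of what I'm running; the names and iteration count are just illustrative):

    #include <cstdio>

    int main() {
        unsigned int  intMinusOne  = 0;
        unsigned long longMinusOne = 0;

        for (int i = 0; i < 4; ++i) {
            intMinusOne  -= 1.0;  // at -O0 I see 4294967295, 4294967294, 4294967293, ...
            longMinusOne -= 1.0;  // at -O0 I see 18446744073709551615, 0, 18446744073709551615, ...
            std::printf("int: %u   long: %lu\n", intMinusOne, longMinusOne);
        }
    }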
UPDATE: It seems that any optimization level above -O0 causes both the unsigned int and the unsigned long to ignore the -= 1.0 (which is smarter, maybe).
Thanks!
That would fix it, but my question is "What makes the difference in the bad behavior between the unsigned ints and the unsigned longs?" — i.e., why do the two variable types behave differently?
longMinusOne -= 1.0 is equivalent to longMinusOne = longMinusOne - 1.0. In that subtraction, longMinusOne is first converted to double, so longMinusOne - 1.0 is of type double.
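You can see the intermediate double by spelling the compound assignment out (a sketch; the static_casts just make the implicit conversions explicit):

    #include <cstdio>

    int main() {
        unsigned long longMinusOne = 0;

        // What longMinusOne -= 1.0 actually does, one step at a time:
        double tmp = static_cast<double>(longMinusOne) - 1.0;  // tmp == -1.0
        longMinusOne = static_cast<unsigned long>(tmp);        // out of range: formally undefined,
                                                               // but 2^64 - 1 on x86-64 at -O0

        std::printf("%lu\n", longMinusOne);  // prints 18446744073709551615 here
    }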
If longMinusOne == 0, then longMinusOne - 1.0 == -1.0. Converting -1.0 back to unsigned long is out of range and therefore formally undefined, but what you observe at -O0 is the typical x86-64 result: -1.0 becomes a 64-bit integer -1, whose two's-complement representation 0xFFFFFFFFFFFFFFFF is read back as 2^64 - 1. (The fact that this conversion is undefined is also why optimization levels above -O0 are free to change the behavior.)
But if longMinusOne == 2^64 - 1, then longMinusOne - 1.0 == (double)2^64. A double has only 53 significant bits, so at this magnitude it can represent only multiples of 2048: 2^64 - 1 already rounds up to 2^64 when converted to double, and subtracting 1.0 gives the exact result 2^64 - 1, which is not representable either and rounds right back to 2^64. Converting 2^64 to unsigned long is again out of range; in practice only the least significant 64 bits of the value survive, and those are all zero. Thus, longMinusOne returns to zero.
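You can watch the rounding directly (a sketch; std::nextafter is only there to expose the spacing between adjacent doubles near 2^64):

    #include <cstdio>
    #include <cmath>
    #include <limits>

    int main() {
        unsigned long max = std::numeric_limits<unsigned long>::max();  // 2^64 - 1

        double d = static_cast<double>(max);  // rounds up to 2^64
        std::printf("%.1f\n", d);             // 18446744073709551616.0

        std::printf("%.1f\n", d - 1.0);       // still 2^64: the exact result 2^64 - 1
                                              // is not representable and rounds back up

        double below = std::nextafter(d, 0.0);  // largest double below 2^64
        std::printf("%.1f\n", d - below);       // 2048.0, the spacing just below 2^64
    }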