Is this only about the debugger showing a value that is off by a negligible amount while the code itself works fine?
I don't see why you have to worry about something being off by a zillionth when the result is just fine.
(If this isn't only in the debugger, please show your code, because then your scenario doesn't make sense unless there is a badly done conversion going on.)
If you still don't understand: this is pretty much how printf works too. %f uses a default precision of 6, which is also about all the significant digits a float has (FLT_DIG). That precision counts digits after the decimal point, so the more digits there are to the left of the point, the fewer correct digits are left for the fraction, and even a fairly small value can come out looking quite corrupted, for example:
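(The original example isn't shown here, so this is a minimal sketch of the kind of thing that goes wrong; the 98.001 value is my own pick, the variable name fp is from the code below.)

#include <stdio.h>

int main(void)
{
    float fp = 98.001f;                                   /* nearest float is 98.000999... */
    printf( "The number of fields input is %f\n", fp );   /* default precision 6 prints 98.000999 */
    return 0;
}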
And this is fixable by doing this:
// needs <float.h>; lower the precision even more if the number is bigger.
printf( "The number of fields input is %.*f\n", FLT_DIG - 1, fp );
But this is a very limited fix: the bigger the integer part gets, the less precise the float is and the more you have to subtract from the precision. With one hard-coded value it becomes a coin flip whether the output comes out nicely rounded or not; see the sketch below for one way to scale the precision automatically.
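This is a rough sketch of my own (not from your code) that spends FLT_DIG digits between the integer part and the fraction, so you never ask a float for more digits than it actually has; it needs <math.h> as well as <float.h>:

#include <stdio.h>
#include <float.h>
#include <math.h>

static void print_float(float fp)
{
    /* count digits left of the decimal point, then spend the rest of FLT_DIG on the fraction */
    int int_digits = (fabsf(fp) < 1.0f) ? 1 : (int)log10f(fabsf(fp)) + 1;
    int prec = FLT_DIG - int_digits;
    if (prec < 0)
        prec = 0;
    printf("The number of fields input is %.*f\n", prec, fp);
}

int main(void)
{
    print_float(1.001f);    /* prints 1.00100  (5 digits of fraction) */
    print_float(98.001f);   /* prints 98.0010  (4 digits of fraction) */
    return 0;
}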
Or you could just use a double. The code above still works (change the sscanf specifier to "%lf" so it actually reads a double), and it behaves much better than a float because a double carries about 15 significant digits instead of 6, so the integer part can get much larger before the precision runs out.
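A double version could look roughly like this (the "98.001" input string is just my placeholder for whatever you actually read):

#include <stdio.h>
#include <float.h>

int main(void)
{
    const char *input = "98.001";
    double fp;

    if (sscanf(input, "%lf", &fp) == 1)   /* %lf is the sscanf specifier for double */
        printf("The number of fields input is %.*f\n", DBL_DIG - 1, fp);   /* 98.00100000000000 */
    return 0;
}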
Overall, I assume the debugger does something similar, except it shows the maximum precision, printf-style, when it arguably shouldn't (maybe for performance reasons, legacy code, or because showing the exact stored value is genuinely useful when debugging floating point, who knows).
You can also test this with a double by pushing the precision past DBL_DIG (which is 15): ask for 16 or more digits and 1.001 starts showing trailing nines, just like a float shows trailing nines for 98.001 at six digits. But 15-16 significant digits is a lot of headroom compared to a float's 6-7, so calling double worse than float is a long shot.
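Here is a quick side-by-side sketch (the numbers are my own) you can compile to see both effects:

#include <stdio.h>
#include <float.h>

int main(void)
{
    float  f = 98.001f;
    double d = 1.001;

    printf("%.*f\n", FLT_DIG,     f);   /* 98.000999          - float has run out of digits    */
    printf("%.*f\n", FLT_DIG - 1, f);   /* 98.00100           - one digit less rounds cleanly  */
    printf("%.*f\n", DBL_DIG,     d);   /* 1.001000000000000  - double still rounds cleanly    */
    printf("%.*f\n", DBL_DIG + 1, d);   /* 1.0009999999999999 - the stored value shows through */
    return 0;
}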
This mostly comes down to what the posters above already said: a float has 6 or 7 significant digits, so it is something of a coincidence that 1.001 happens to round back to exactly 1.001 when printed at that precision, even though internally the number is stored as a sum of inverse powers of 2 and is not exactly 1.001. For a double, 1.001 just happens to be an unlucky value that shows its error when displayed at full precision, which is what your debugger does. And even if you round it for display, any further operations still use the original off-by-a-zillionth value, because that is the only way it can be represented in memory.
You should also fix the %ld in your sscanf: %ld reads a long int (a long "decimal"), not a floating-point value. Use %f for a float or %lf for a double.
And as other posters have said, you cannot compare floating-point values for exact equality; you have to compare them within a tolerance, usually called an "epsilon". Use a relative tolerance in general, or a simple absolute one if your values stay in a small known range like 0 to 1.0 (and print with a sensible precision), as in the sketch below.
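A minimal comparison sketch (the helper name nearly_equal and the 4x DBL_EPSILON tolerance are my own choices, tune them to your data):

#include <stdio.h>
#include <math.h>
#include <float.h>

static int nearly_equal(double a, double b)
{
    /* relative tolerance: scale DBL_EPSILON by the larger magnitude */
    double scale = fmax(fabs(a), fabs(b));
    return fabs(a - b) <= scale * DBL_EPSILON * 4.0;
}

int main(void)
{
    double x = 0.1 + 0.2;                  /* stored as 0.30000000000000004... */
    printf("%d\n", x == 0.3);              /* 0 - exact comparison fails       */
    printf("%d\n", nearly_equal(x, 0.3));  /* 1 - tolerance comparison passes  */
    return 0;
}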
And remember that in Visual Studio long double is the same size as double (both 8 bytes). As an extra, long there is only 4 bytes, the same size as int; long long is the 64-bit type.
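If you want to confirm the sizes on your own compiler, a quick sizeof printout will tell you:

#include <stdio.h>

int main(void)
{
    /* MSVC typically prints 8/8 and 4/8; gcc on 64-bit Linux usually prints 8/16 and 8/8 */
    printf("double: %zu  long double: %zu\n", sizeof(double), sizeof(long double));
    printf("long:   %zu  long long:   %zu\n", sizeof(long), sizeof(long long));
    return 0;
}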