Hi all,
I have an extensive program package for scientific purposes, too big to post here. It has been in use for a long time. I do not use any special libraries. I want my results to be in double precision.
I have a strong suspicion that there is some operation in my program which makes it effectively single precision after all (changing an important variable to single precision gives the same result).
I have made sure that
- all floating point variables are declared as "double"
- Input for double variables is read in with "%lf" or "%lg", for instance
- floating point literals are "2.0" and not "2", for example.
- the only explicit type casts I use are "double(3)" and the like (see the small example right after this list).
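To make that concrete, something like this made-up fragment shows the idioms I mean (it is not code from the actual package):

    #include <cstdio>

    int main()
    {
        double x = 0.0;                     // declared double, never float
        if (std::scanf("%lf", &x) != 1)     // "%lf" so scanf stores into a double
            return 1;
        double y = 2.0 * x;                 // literal written as 2.0, not 2 or 2.0f
        double z = double(3) / y;           // explicit cast in the "double(3)" style
        std::printf("%.17g %.17g\n", y, z);
        return 0;
    }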
I want to detect the unwanted implicit conversion that is taking place somewhere.
Do you have a suggestion as to what I could look for? Are there tools I could use?
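To illustrate the kind of silent narrowing I am worried about, here is a deliberately broken toy example (not my code):

    #include <cstdio>

    // Deliberately wrong: the parameter type silently narrows every call.
    static float half_of(float v)
    {
        return v / 2.0f;
    }

    int main()
    {
        double x = 1.0 / 3.0;       // a full double value
        float  y = x;               // implicit double -> float, no diagnostic by default
        double z = half_of(x);      // narrowed going in, widened coming back out
        std::printf("%.17g\n%.17g\n%.17g\n", x, (double)y, z);
        return 0;
    }

Compiling this toy with gcc's -Wconversion (or -Wfloat-conversion, where available) does warn about the narrowing lines, so maybe a warning flag of that kind is already the sort of tool I am asking about?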
The only thing I can think of is that there are float literals somewhere (e.g. 12.0f), but even then, in a mixed expression the float operand gets converted to double before the arithmetic is done, so a stray float literal on its own doesn't lose precision. If 'float' doesn't appear anywhere in your program then you can rest assured that everything is double. The <math.h> functions take and return double (C++'s <cmath> adds float overloads, but those are only chosen if you pass floats), so there shouldn't be any precision loss from them either.
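To be precise about that rule, the conversions go like this (a tiny type-only example, nothing project-specific):

    #include <type_traits>

    int main()
    {
        double d = 10.0;
        float  f = 12.0f;

        auto a = d * f;    // f is converted to double first, so a is a double
        auto b = f * f;    // both operands are float, so b stays a float
        float c = d * f;   // computed in double, then truncated back to float

        static_assert(std::is_same<decltype(a), double>::value, "mixed expression is double");
        static_assert(std::is_same<decltype(b), float>::value,  "all-float expression stays float");
        (void)a; (void)b; (void)c;
        return 0;
    }

So a stray 12.0f in a mixed expression is harmless by itself; trouble only starts when both operands are float or when the result is stored back into a float.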
Do you have anything in particular that gives you that suspicion?
The situation is this: depending on the precision of my variables, my result will make sense and be accurate up to some finite order N. It turns out that my simulation blows up for N >= 22 (it returns NaN with Intel icpc, astronomically large numbers with gcc).
Now, if I change one or more of my variables from "double" to "float", the precision of my calculation should drop, provided everything really was "double" before. However, I get identical output, from which I conclude that the initial "double" program was not so "double" after all. I therefore suspect that somewhere something is converted from double to float where I have not thought of it.
Moreover, there is a group that calculates the identical task in Fortran, in double precision. They can easily go up to N=30 and probably beyond, which increases my suspicion.
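Just so it is clear how I compare: a drop from double to float only becomes visible beyond roughly 7 significant figures, so the values have to be printed with full precision. Schematically, this is the kind of output statement I mean (the helper name and the value are made up):

    #include <iomanip>
    #include <iostream>

    // Print one order's result with ~17 significant digits, so a float-level value
    // (~7 correct digits) is distinguishable from a double-level one (~16 digits).
    void print_order(int n, double value)
    {
        std::cout << "N=" << n << "  " << std::setprecision(17) << value << '\n';
    }

    int main()
    {
        print_order(21, 0.12345678901234567);   // placeholder number
        return 0;
    }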
That sounds more like there's a bug in one of the formulas (or whatever they are) that's giving you ludicrous results.
I'd try to find where the results are going bad by zeroing in on the function that's causing it and taking a long hard look at it.
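One way to zero in on it is to assert that intermediate values are still finite, so the run stops at the first NaN or overflow instead of at the final output. A minimal sketch (check_finite and where you call it are of course up to you):

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>

    // Stop at the first non-finite intermediate value and report where it happened.
    void check_finite(double value, const char* where)
    {
        if (!std::isfinite(value))
        {
            std::fprintf(stderr, "non-finite value at %s\n", where);
            std::abort();
        }
    }

    int main()
    {
        double x = 1.0e308;
        check_finite(x, "before doubling");   // passes
        x *= 2.0;                             // overflows to +inf
        check_finite(x, "after doubling");    // aborts here
        return 0;
    }

On Linux with glibc there is also the non-standard feenableexcept() from <fenv.h>, which makes the program trap the moment an invalid operation or overflow occurs, but explicit checks like the above are portable.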
The results up to a given order N are totally determined by the lower orders. You just set the highest order N like a flag; no new formulas come into play, it is all automatic. Up to N=21 the results agree order by order with what other, independent codes produce, for instance that one Fortran code. The N=22 blowup comes abruptly.
So it would surprise me if there were something wrong in the formulas, because then the lower orders would have to be wrong as well.
Is it possible that Fortran might achieve a higher accuracy than equivalent C++ code if both run at double precision for a purely numerical project?
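For what it's worth, the C++ side can at least report what its "double" actually is and how expressions are evaluated: an IEEE double should show 53 mantissa bits (about 15-16 decimal digits), and FLT_EVAL_METHOD == 2 would mean intermediates are kept in x87 extended precision, which can make otherwise identical code round differently between compilers.

    #include <cfloat>
    #include <iostream>
    #include <limits>

    int main()
    {
        std::cout << "sizeof(double)  = " << sizeof(double) << " bytes\n";
        std::cout << "mantissa bits   = " << std::numeric_limits<double>::digits << '\n';
        std::cout << "decimal digits  = " << std::numeric_limits<double>::digits10 << '\n';
        std::cout << "epsilon         = " << std::numeric_limits<double>::epsilon() << '\n';
        // 0: each operation is done in the type of its operands
        // 1: float and double operations are carried out in double
        // 2: everything is carried out in long double (typical for x87)
        std::cout << "FLT_EVAL_METHOD = " << FLT_EVAL_METHOD << '\n';
        return 0;
    }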