Inaccurate division of doubles (Visual C++ 2008)
I have code that converts the time value returned by QueryPerformanceCounter to a double value in milliseconds, since that is more convenient to work with.
The function looks like this:
double timeGetExactTime() {
    LARGE_INTEGER timerPerformanceCounter, timerPerformanceFrequency;
    QueryPerformanceCounter(&timerPerformanceCounter);
    if (QueryPerformanceFrequency(&timerPerformanceFrequency)) {
        return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
    }
    return 0.0;
}
The problem I am facing recently (I don't think I had this problem before, and the code has not changed) is that the result is not very accurate. Not only does it lack decimal places, it is even less accurate than 1 millisecond.
When I enter the expression into the debugger, the result is as accurate as I expected.
I understand that a double cannot hold the full precision of a 64-bit integer, but at this point the performance counter only needs 46 bits (and a double should be able to store 52 bits without loss). It also seems odd that the debugger would use a different format for the division...
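As a quick sanity check (a standalone sketch, not part of the program above, using the counter value shown below), the raw counter value round-trips through a double without any loss, so whatever goes wrong must happen in the division itself:

#include <cstdio>

int main() {
    long long counter = 30270310439445LL;  // needs about 45 bits, well under the 53-bit double mantissa
    double asDouble = (double)counter;
    // Casting back recovers the original value exactly, so the conversion itself is lossless.
    std::printf("exact round-trip: %s\n", ((long long)asDouble == counter) ? "yes" : "no");
    return 0;
}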
Here are some of the results I got. The program was compiled in Debug mode, with the Floating Point model in the C++ options set to the default (Precise, /fp:precise):
timerPerformanceCounter.QuadPart: 30270310439445
timerPerformanceFrequency.QuadPart: 14318180
double perfCounter = (double)timerPerformanceCounter.QuadPart;
30270310439445.000
double perfFrequency = (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
14318.179687500000
double result = perfCounter / perfFrequency;
2114117248.0000000
return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
2114117248.0000000
Result with same expression in debugger:
2114117188.0396111
Result of perfTimerCount / perfTimerFreq in debugger:
2114117234.1810646
Result of 30270310439445 / 14318180 in calculator:
2114117188.0396111796331656677036
Does anyone know why the precision differs between the debugger's Watch window and the output of my program?
Update: I tried subtracting 30270310439445 from timerPerformanceCounter.QuadPart before doing the conversion and division, and it now appears accurate in all cases. Maybe the reason I am only seeing this behavior now is that my computer's uptime is 16 days, so the value is larger than I am used to? So it does look like a division precision issue with large numbers, but that still doesn't explain why the division was correct in the Watch window. Does it use a higher-precision type than double for its result?
Thanks, using a Decimal would probably be a solution too. For now I've taken a slightly different approach that also works well, at least as long as my program doesn't run for more than a week or so without restarting: I just remember the performance counter from when my program started and subtract it from the current counter before converting to double and doing the division.
I'm not sure which solution would be the fastest; I'd probably have to benchmark that first.
bool perfTimerInitialized = false;
double timerPerformanceFrequencyDbl;
LARGE_INTEGER timerPerformanceFrequency;
LARGE_INTEGER timerPerformanceCounterStart;

double timeGetExactTime()
{
    if (!perfTimerInitialized) {
        // Cache the frequency (in ticks per millisecond) and the starting counter once.
        QueryPerformanceFrequency(&timerPerformanceFrequency);
        timerPerformanceFrequencyDbl = ((double)timerPerformanceFrequency.QuadPart) / 1000.0;
        QueryPerformanceCounter(&timerPerformanceCounterStart);
        perfTimerInitialized = true;
    }
    LARGE_INTEGER timerPerformanceCounter;
    if (QueryPerformanceCounter(&timerPerformanceCounter)) {
        // Work with the offset from program start so the value stays small.
        timerPerformanceCounter.QuadPart -= timerPerformanceCounterStart.QuadPart;
        return ((double)timerPerformanceCounter.QuadPart) / timerPerformanceFrequencyDbl;
    }
    // Fall back to timeGetTime() (millisecond resolution) if QueryPerformanceCounter fails.
    return (double)timeGetTime();
}
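For reference, this is how I use it to time something (just a minimal usage sketch):

    double startMs = timeGetExactTime();
    // ... the work being timed ...
    double elapsedMs = timeGetExactTime() - startMs;  // elapsed time in milliseconds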
Adion,
If you don't mind a performance hit, convert the QuadPart numbers to Decimal instead of double before doing the division, then convert the resulting number back to a double.
You're right about the size of the numbers; that's what throws off the floating-point precision.
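C++ has no built-in Decimal type, but a rough sketch of the same idea (a hypothetical helper, not your original code) is to do the whole-millisecond part in 64-bit integer arithmetic and only hand the small remainder to the double, so the double never has to carry the full magnitude:

double counterToMilliseconds(LONGLONG counter, LONGLONG frequency)
{
    LONGLONG scaled    = counter * 1000LL;     // still fits in 64 bits for realistic counter values
    LONGLONG wholeMs   = scaled / frequency;   // exact integer milliseconds
    LONGLONG remainder = scaled % frequency;   // leftover ticks, always smaller than frequency
    return (double)wholeMs + (double)remainder / (double)frequency;
}

The result is still a double, but with a magnitude of only a couple of billion milliseconds there are plenty of mantissa bits left for the fractional part.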
For more on this than you probably ever wanted to know, see:
What Every Computer Scientist Should Know About Floating-Point Arithmetic: http://docs.sun.com/source/806-3568/ncg_goldberg.html