If you insist that time is expressed in units of "seconds", then obviously you will need floating-point math in order to retain millisecond (or microsecond) information. But you can just as well decide to represent time in units of "milliseconds", "microseconds" or even "nanoseconds" – in which case integers provide sufficient resolution (precision) for all practical purposes. For example, Windows typically uses a 64-bit integer counter of "100-nanosecond intervals" (since January 1, 1601) to represent time values:
https://learn.microsoft.com/en-us/windows/win32/api/minwinbase/ns-minwinbase-filetime
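Purely as an illustration (not taken from any Windows source), here is a minimal sketch that reads the current FILETIME and splits it into whole seconds plus a 100-nanosecond remainder, using integer math only:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    FILETIME ft;
    ULARGE_INTEGER ticks;

    // Current system time as a count of 100-nanosecond intervals since
    // January 1, 1601 (UTC), delivered as two 32-bit halves.
    GetSystemTimeAsFileTime(&ft);

    // Combine the halves into a single 64-bit integer.
    ticks.LowPart  = ft.dwLowDateTime;
    ticks.HighPart = ft.dwHighDateTime;

    // Pure integer math: 10,000,000 intervals of 100 ns per second.
    unsigned long long whole_seconds  = ticks.QuadPart / 10000000ULL;
    unsigned long long fraction_100ns = ticks.QuadPart % 10000000ULL;

    printf("%llu seconds + %llu * 100 ns since 1601-01-01\n",
           whole_seconds, fraction_100ns);
    return 0;
}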
Similarly, Linux/Unix uses timespec with a "whole seconds" field plus a "nanoseconds" (remainder) field, both of which are integer values:
https://linux.die.net/man/3/clock_gettime
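Again just a sketch, assuming a POSIX system where CLOCK_MONOTONIC is available: reading such a timespec and folding both integer fields into a single 64-bit nanosecond count:

#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    // CLOCK_MONOTONIC: seconds + nanoseconds since an arbitrary start point;
    // both fields are integers (time_t and long, respectively).
    if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }

    // Combine into a single integer nanosecond count - no floating point needed.
    int64_t nanoseconds = (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;

    printf("%lld ns since the clock's epoch\n", (long long)nanoseconds);
    return 0;
}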
Of course, you need to take care when converting from "CPU cycles" (or "timer ticks") to the desired time unit, but it can be done with integer math just fine! This is how MSVCRT computes the clock() value from the "high precision" timer value and its frequency:
// Scales a 64-bit counter from the QueryPerformanceCounter frequency to the
// clock() frequency defined by CLOCKS_PER_SEC.
// (source_frequency is the QueryPerformanceCounter frequency in counts per
// second, obtained elsewhere via QueryPerformanceFrequency.)
static long long scale_count(long long timer_count)
{
    long long scaled_count = (timer_count / source_frequency) * CLOCKS_PER_SEC;

    // To minimize error introduced by scaling using integer division, separately
    // handle the remainder from the above division by multiplying the left-over
    // counter by the destination frequency, then dividing by the input frequency:
    timer_count %= source_frequency;
    scaled_count += (timer_count * CLOCKS_PER_SEC) / source_frequency;

    return scaled_count;
}
Here you could replace CLOCKS_PER_SEC with whatever you like, e.g. use 1000000 if you want time in "microsecond" units.
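As a sketch of that idea (the name get_microseconds and the inline querying of the frequency are illustrative, not MSVCRT's code), the same two-step scaling applied directly to the QueryPerformanceCounter value to yield microseconds:

#include <windows.h>

// Illustrative only: same remainder-splitting scheme as above, but scaling the
// QueryPerformanceCounter value to microseconds instead of CLOCKS_PER_SEC.
static long long get_microseconds(void)
{
    LARGE_INTEGER frequency, counter;
    QueryPerformanceFrequency(&frequency);   // counts per second
    QueryPerformanceCounter(&counter);       // current count

    long long whole = (counter.QuadPart / frequency.QuadPart) * 1000000LL;
    long long part  = ((counter.QuadPart % frequency.QuadPart) * 1000000LL)
                      / frequency.QuadPart;
    return whole + part;
}

Splitting the division this way keeps the intermediate product (counter % frequency) * 1000000 comfortably within 64-bit range while avoiding the truncation error of doing the whole conversion in a single integer division.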