Looking at the source code, cpuTimePrecision seems to be fixed at 1000000000 independent of hardware. I'm not sure whether it is supposed to be the same on all Windows XP systems, but it's wrong on my machine at least.
getCPUTime always returns a multiple of 15625000000 picoseconds.
I use getCPUTime to measure the start and end times of some code I want to benchmark. I need to know cpuTimePrecision to ensure that time quantisation is not a significant source of error, and to normalise the times to some grokkable number of "ticks".
I don't know what part of the Windows API should be used for this, but 15.625 ms seems far too coarse a resolution (at least four orders of magnitude too coarse, assuming a non-brain-dead OS), and the figure given by cpuTimePrecision is in any case wrong.
If it can't be made right, then it would be better not to have it at all, or to have some function which determines it empirically (you could use unsafePerformIO to keep the existing API, I guess).
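For what it's worth, here is a rough sketch of determining the granularity empirically, by sampling getCPUTime in a loop and taking the GCD of the observed differences (the sample count and the busy loop are arbitrary choices, not anything the library prescribes):

import System.CPUTime (getCPUTime)
import Data.List (foldl')

-- Sample getCPUTime repeatedly while burning a little CPU, then take the
-- GCD of the differences between consecutive distinct samples.
measuredPrecision :: IO Integer
measuredPrecision = do
    samples <- collect 500 []
    let diffs = [ b - a | (a, b) <- zip samples (tail samples), b /= a ]
    return (foldl' gcd 0 diffs)
  where
    collect :: Int -> [Integer] -> IO [Integer]
    collect 0 acc = return (reverse acc)
    collect n acc = do
        t <- getCPUTime
        let x = sum [1 .. 10000 :: Integer]   -- burn some CPU so the counter can advance
        x `seq` collect (n - 1) (t : acc)

main :: IO ()
main = measuredPrecision >>= print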
Unfortunately the mess that is MSDN doesn't make it at all easy to figure out what can or should be done about this. I can't work it out, but perhaps someone else knows.
The QueryPerformanceFrequency and QueryPerformanceCounter functions seem to offer better resolution, but AFAICT they measure "wall clock" time rather than CPU time, and may not be available on all Windows systems (according to MSDN).
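If someone wants to experiment, a minimal sketch of querying the counter frequency directly over the FFI might look like this, assuming a 64-bit Windows build (where ccall and stdcall coincide; the Win32 package also wraps these functions). As noted, this gives a wall-clock resolution, not a CPU-time one:

import Foreign.Marshal.Alloc (alloca)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peek)
import Foreign.C.Types (CInt(..))
import Data.Int (Int64)

-- BOOL QueryPerformanceFrequency(LARGE_INTEGER *lpFrequency)
foreign import ccall unsafe "windows.h QueryPerformanceFrequency"
    c_QueryPerformanceFrequency :: Ptr Int64 -> IO CInt

-- Resolution of the performance counter in picoseconds: 10^12 / (counts per second).
qpcResolutionPicos :: IO Integer
qpcResolutionPicos = alloca $ \p -> do
    _ <- c_QueryPerformanceFrequency p
    freq <- peek p
    return (10 ^ (12 :: Int) `div` fromIntegral freq)

main :: IO ()
main = qpcResolutionPicos >>= print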
The module CPUTime is part of the Haskell 98 report (but not the Haskell 2010 report).
Computation getCPUTime returns the number of picoseconds of CPU time used by the current program. The precision of this result is given by cpuTimePrecision. This is the smallest measurable difference in CPU time that the implementation can record, and is given as an integral number of picoseconds.
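Taken at face value, that contract supports exactly the kind of benchmarking described above. A sketch of the intended usage, reporting elapsed CPU time as a number of cpuTimePrecision-sized ticks (computeSomething is just a placeholder for the code under test):

import System.CPUTime (getCPUTime, cpuTimePrecision)
import Control.Exception (evaluate)

-- Placeholder for the code being benchmarked.
computeSomething :: Integer
computeSomething = sum [1 .. 1000000]

main :: IO ()
main = do
    start <- getCPUTime
    _ <- evaluate computeSomething
    end <- getCPUTime
    -- Normalise the elapsed CPU time to "ticks" of size cpuTimePrecision.
    let ticks = (end - start) `div` cpuTimePrecision
    putStrLn ("elapsed: " ++ show ticks ++ " tick(s) of "
              ++ show cpuTimePrecision ++ " ps")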
getCPUTime
getCPUTime always returns a multiple of 15625000000 picoseconds.
The clock is updated 64 times per second, at the clock tick interrupt. In other words, there are 1 / 64 = 0.015625 seconds between ticks, and 0.015625 seconds = 15625000000 picoseconds.
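The same arithmetic in GHCi, working in picoseconds throughout:

Prelude> (10 ^ 12) `div` 64
15625000000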
Edit: on my Linux system, the precision of getCPUTime seems to be one microsecond. This makes sense, given that the implementation calls getrusage, which returns a timeval structure containing seconds and microseconds as longs.
(to be sure this really is a long:)
/usr/include/x86_64-linux-gnu/bits/typesizes.h:
#if defined __x86_64__ && defined __ILP32__
# define __SYSCALL_SLONG_TYPE   __SQUAD_TYPE
# define __SYSCALL_ULONG_TYPE   __UQUAD_TYPE
#else
# define __SYSCALL_SLONG_TYPE   __SLONGWORD_TYPE
# define __SYSCALL_ULONG_TYPE   __ULONGWORD_TYPE
#endif
/usr/include/x86_64-linux-gnu/bits/types.h:
#define __SLONGWORD_TYPE long int
(why does it always have to be this complicated?)
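For the curious, a sketch of reading ru_utime directly, which shows the microsecond granularity. RUSAGE_SELF = 0, sizeof(struct rusage) = 144 and the field offsets are assumptions about glibc on x86_64, not portable constants; real code would get them from the headers via hsc2hs:

import Foreign.Marshal.Alloc (allocaBytes)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peekByteOff)
import Foreign.C.Types (CInt(..))
import Data.Int (Int64)

-- int getrusage(int who, struct rusage *usage)
foreign import ccall unsafe "sys/resource.h getrusage"
    c_getrusage :: CInt -> Ptr () -> IO CInt

-- User CPU time in picoseconds, from ru_utime (a timeval: seconds + microseconds).
userCpuPicoseconds :: IO Integer
userCpuPicoseconds = allocaBytes 144 $ \ru -> do
    _ <- c_getrusage 0 ru                     -- 0 = RUSAGE_SELF
    sec  <- peekByteOff ru 0 :: IO Int64      -- ru_utime.tv_sec
    usec <- peekByteOff ru 8 :: IO Int64      -- ru_utime.tv_usec
    return (fromIntegral sec * 10 ^ (12 :: Int) + fromIntegral usec * 10 ^ (6 :: Int))

main :: IO ()
main = userCpuPicoseconds >>= print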
cpuTimePrecision
The current implementation of cpuTimePrecision looks at the value of clockTicks = clk_tck() (see #7519 (closed)) and converts the tick interval to picoseconds.
On my Linux system, clk_tck calls sysconf(_SC_CLK_TCK) and always returns 100 (see libraries/base/cbits/sysconf.c; note that CLK_TCK itself is only defined when __USE_XOPEN2K is undefined, see /usr/include/time.h and /usr/include/x86_64-linux-gnu/bits/time.h).
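For reference, the same query can be made directly from Haskell. In this sketch the constant 2 for _SC_CLK_TCK is a glibc-specific assumption (a real implementation would take it from the headers via hsc2hs), and the last line converts the tick interval to picoseconds in the same way cpuTimePrecision does:

import Foreign.C.Types (CInt(..), CLong(..))

-- long sysconf(int name)
foreign import ccall unsafe "unistd.h sysconf"
    c_sysconf :: CInt -> IO CLong

main :: IO ()
main = do
    ticks <- c_sysconf 2                      -- 2 = _SC_CLK_TCK on glibc
    putStrLn ("clock ticks per second: " ++ show ticks)
    putStrLn ("tick interval in picoseconds: "
              ++ show (10 ^ (12 :: Int) `div` fromIntegral ticks :: Integer))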
On Windows, clk_tck seems to always return CLK_TCK = 1000 (see ./opt/ghc-7.10.3/mingw/x86_64-w64-mingw32/include/time.h).
These values don't seem related in any way to the precision of getCPUTime.
From man sysconf:
clock ticks - _SC_CLK_TCK
       The number of clock ticks per second. The corresponding variable is obsolete. It was of course called CLK_TCK. (Note: the macro CLOCKS_PER_SEC does not give information: it must equal 1000000.)
It's the number of times per second that the timer interrupts the CPU for scheduling and other tasks. 100 Hz is a common value; a higher frequency gives higher timer resolution but more overhead.
Solution?
Does anyone happen to know where we can get real values for this for any platforms?
But maybe we should just give up, since CPUTime is not part of the Haskell report anymore. Keep the function cpuTimePrecision for backward compatibility, but change the docstring to say what it really does: return the number of picoseconds between clock ticks.
This sounds reasonable to me.
For the record, I believe the relevant Linux interface here is clock_getres (which itself only has nanosecond resolution at best).
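A minimal sketch of what querying it for the process CPU-time clock might look like; the clock id (2 = CLOCK_PROCESS_CPUTIME_ID) and the timespec layout (two 64-bit fields) are assumptions about glibc on x86_64, not portable constants:

import Foreign.Marshal.Alloc (allocaBytes)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peekByteOff)
import Foreign.C.Types (CInt(..))
import Data.Int (Int64)

-- int clock_getres(clockid_t clk_id, struct timespec *res)
foreign import ccall unsafe "time.h clock_getres"
    c_clock_getres :: CInt -> Ptr () -> IO CInt

main :: IO ()
main = allocaBytes 16 $ \ts -> do
    _ <- c_clock_getres 2 ts                  -- 2 = CLOCK_PROCESS_CPUTIME_ID on Linux
    sec  <- peekByteOff ts 0 :: IO Int64      -- tv_sec
    nsec <- peekByteOff ts 8 :: IO Int64      -- tv_nsec
    -- Report the resolution in picoseconds, for comparison with cpuTimePrecision.
    print (fromIntegral sec * 10 ^ (12 :: Int)
           + fromIntegral nsec * 10 ^ (3 :: Int) :: Integer)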