Thread overview
Phobos: std.c.time.CLOCKS_PER_SEC
Thomas Kuehne (Nov 03, 2004)
Sean Kelly (Nov 03, 2004)
Sean Kelly (Nov 04, 2004)
Anders F Björklund (Nov 05, 2004)
Sean Kelly (Nov 05, 2004)
Anders F Björklund (Nov 05, 2004)
Sean Kelly (Nov 05, 2004)
Anders F Björklund (Nov 05, 2004)
November 03, 2004
On POSIX systems like Linux, std.c.time.CLOCKS_PER_SEC should be 1000000, not 1000.

Thomas
November 03, 2004
In article <cmaptj$1mp4$1@digitaldaemon.com>, Thomas Kuehne says...
>
>On POSIX systems like Linux, std.c.time.CLOCKS_PER_SEC should be 1000000, not 1000.


Changed in http://home.f4.ca/sean/d/stdc.zip

The above is an attempt at full standard C99 library support in D.  I only have a Windows machine to test on (and a FreeBSD machine for reference) and would love input on the Linux/POSIX side as I know more work has to be done with versioning and such.


Sean


November 04, 2004
FYI, I posted some small fixes to the headers this morning.  math.d had some latent references to "long double" and complex.d had a function called "creal," which I commented out.  The full set of headers now compiles just fine on Windows, though unit tests are still forthcoming.


Sean


November 05, 2004
Thomas Kuehne wrote:

> On POSIX systems like Linux, std.c.time.CLOCKS_PER_SEC should be 1000000 and
> not 1000.

And on Darwin BSD (Mac OS X), it should be 100.

/usr/include/time.h:
> #include <machine/limits.h>	/* Include file containing CLK_TCK. */
> 
> #define CLOCKS_PER_SEC  (CLK_TCK)

/usr/include/ppc/limits.h:
/usr/include/i386/limits.h:
> #define	CLK_TCK		100		/* ticks per second */


So the end result becomes something like:

version (Windows)
{
    clock_t CLOCKS_PER_SEC = 1000;
}
else version (darwin)
{
    clock_t CLOCKS_PER_SEC = 100;
}
else
{
    clock_t CLOCKS_PER_SEC = 1000000;
}
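
For context, a minimal C sketch (illustrative only, not taken from Phobos) of how the constant is typically used; this is why a wrong per-platform value silently skews every timing:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();            /* CPU time used so far, in ticks */

    volatile double x = 0;
    for (long i = 0; i < 10000000; i++) /* burn a bit of CPU time */
        x += i;

    /* dividing by CLOCKS_PER_SEC converts the tick count to seconds */
    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("used about %f seconds of CPU time\n", seconds);
    return 0;
}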

--anders

PS. `uname` is "Darwin", but D's versions use inconsistent casing.
     (similarly, `arch` is "ppc" but the version to use is "PPC"...)
November 05, 2004
In article <cmg2mr$8ro$1@digitaldaemon.com>, Anders F Björklund says...
>
>Thomas Kuehne wrote:
>
>> On POSIX systems like Linux, std.c.time.CLOCKS_PER_SEC should be 1000000 and not 1000.
>
>And on Darwin BSD (Mac OS X), it should be 100.

Crazy.  So how does one get an accurate tick count in Darwin?


Sean


November 05, 2004
Sean Kelly wrote:

>>And on Darwin BSD (Mac OS X), it should be 100.
> 
> Crazy.  So how does one get an accurate tick count in Darwin?

Good question, I believe CLOCKS_PER_SEC = 100 is general BSD...

I think one should use "gettimeofday" which has milliseconds?

But I haven't tried it. Anyway, clock() returns 10 ms approx.

--anders
November 05, 2004
In article <cmg78m$1453$1@digitaldaemon.com>, Anders F Björklund says...
>
>Sean Kelly wrote:
>
>>>And on Darwin BSD (Mac OS X), it should be 100.
>> 
>> Crazy.  So how does one get an accurate tick count in Darwin?
>
>Good question, I believe CLOCKS_PER_SEC = 100 is general BSD...
>
>I think one should use "gettimeofday" which has milliseconds?
>
>But I haven't tried it. Anyway, clock() returns 10 ms approx.

Good to know.  I've updated this in my C headers as well (and cleaned up the
version blocks a bit).


Sean


November 05, 2004
>> Crazy.  So how does one get an accurate tick count in Darwin?
> 
> Good question, I believe CLOCKS_PER_SEC = 100 is general BSD...
> 
> I think one should use "gettimeofday" which has milliseconds?
> But I haven't tried it. Anyway, clock() returns 10 ms approx.

Sorry, make that *microseconds* for "gettimeofday"... :-)


Used something like this:

> #include <sys/time.h>
> 
> struct timeval t;
> 
> gettimeofday(&t,NULL);
> 
> double sec = (double) t.tv_sec + (double) t.tv_usec * 0.000001;

I did a small loop, and it seems to count correctly...
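
A sketch of such a loop (illustrative only, not the exact program; it uses double since tv_sec is too large for a float to hold precisely):

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
    struct timeval t0, t;
    gettimeofday(&t0, NULL);                  /* remember the starting point */

    for (int i = 0; i < 5; i++)
    {
        sleep(1);                             /* wait roughly one second */
        gettimeofday(&t, NULL);
        double elapsed = (double)(t.tv_sec - t0.tv_sec)
                       + (double)(t.tv_usec - t0.tv_usec) * 0.000001;
        printf("elapsed: %f\n", elapsed);     /* prints ~1, ~2, ~3, ... */
    }
    return 0;
}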


The main difference is that clock() counts the *CPU* time:

(man page from Darwin)
> DESCRIPTION
>      The clock() function determines the amount of processor time used since
>      the invocation of the calling process, measured in CLOCKS_PER_SECs of a
>      second.
[...]
> STANDARDS
>      The clock() function conforms to ISO/IEC 9899:1990 (``ISO C89'').  How-
>      ever, Version 2 of the Single UNIX Specification (``SUSv2'') requires
>      CLOCKS_PER_SEC to be defined as one million.  FreeBSD does not conform to
>      this requirement; changing the value would introduce binary incompatibil-
>      ity and one million is still inadequate on modern processors.

Whereas the above function counts real-life / calendar time?
(the time returned is seconds since the Epoch, in GMT)


However, "getrusage" is also available for getting at CPU time ?
(it has two similar timeval structs, one each for user/system)

Unfortunately, that resolution also applies to OS X's "getrusage"...
(I do believe it has a somewhat higher resolution in e.g. FreeBSD?)
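
For what it's worth, reading CPU time that way would look roughly like this (an illustrative sketch):

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;

    volatile double x = 0;
    for (long i = 0; i < 10000000; i++)   /* burn some user CPU time */
        x += i;

    if (getrusage(RUSAGE_SELF, &ru) == 0)
    {
        double user = (double)ru.ru_utime.tv_sec
                    + (double)ru.ru_utime.tv_usec * 0.000001;
        double sys  = (double)ru.ru_stime.tv_sec
                    + (double)ru.ru_stime.tv_usec * 0.000001;
        printf("user: %f  system: %f\n", user, sys);
    }
    return 0;
}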


However, as far as I can tell, the *same thing* goes for Linux too?

(man page from Linux)
> DESCRIPTION
>        The clock() function returns an approximation of processor
>        time used by the program.
[...]
> CONFORMING TO
>        POSIX requires that CLOCKS_PER_SEC equals 1000000
>        independent of the actual resolution.

At least when testing in Red Hat Linux 7.3, I got the same results:

> clock: 0.000000         gettimeofday: 0.000002
> clock: 0.090000         gettimeofday: 0.088923
> clock: 0.180000         gettimeofday: 0.177936
> clock: 0.270000         gettimeofday: 0.265852
> clock: 0.360000         gettimeofday: 0.354387
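
A sketch of the kind of test loop that produces output like the above (illustrative only, not the exact program; it busy-waits so that clock() advances too):

#include <stdio.h>
#include <time.h>
#include <sys/time.h>

int main(void)
{
    struct timeval t0, t;
    clock_t c0 = clock();
    gettimeofday(&t0, NULL);

    for (int i = 0; i < 5; i++)
    {
        volatile double x = 0;
        for (long j = 0; j < 20000000; j++)   /* burn CPU so clock() advances */
            x += j;

        gettimeofday(&t, NULL);
        double wall = (double)(t.tv_sec - t0.tv_sec)
                    + (double)(t.tv_usec - t0.tv_usec) * 0.000001;
        double cpu  = (double)(clock() - c0) / CLOCKS_PER_SEC;
        printf("clock: %f\t\tgettimeofday: %f\n", cpu, wall);
    }
    return 0;
}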


So, it seems the real issue here is that clock() has low resolution...
(independent of which "scale factor" is used for CLOCKS_PER_SEC)

--anders


PS. Here are the C declarations from /usr/include/sys/time.h :

> /*
>  * Structure returned by gettimeofday(2) system call,
>  * and used in other calls.
>  */
> struct timeval {
> 	int32_t	tv_sec;		/* seconds */
> 	int32_t	tv_usec;	/* and microseconds */
> };
[...]
> int	gettimeofday (struct timeval *, struct timezone *);

The return code is an error flag: 0 means success and -1 means check errno.