Weird timing issue with Thread.sleep
August 03, 2011
Take a look at this:

import std.stdio;
import core.thread;

void main()
{
    foreach (x; 0 .. 1000)
    {
        Thread.sleep(dur!("usecs")(999));
        writeln(x);
    }

    foreach (x; 0 .. 1000)
    {
        Thread.sleep(dur!("usecs")(1000));
        writeln(x);
    }
}

Compile and run it. The first foreach loop ends in an instant, while the second one takes much, much longer to finish, which is puzzling since I've only increased the sleep time by a single microsecond. What's going on?
August 03, 2011
On Wed, 03 Aug 2011 13:14:50 -0400, Andrej Mitrovic <andrej.mitrovich@gmail.com> wrote:

> Take a look at this:
>
> import std.stdio;
> import core.thread;
>
> void main()
> {
>     foreach (x; 0 .. 1000)
>     {
>         Thread.sleep(dur!("usecs")(999));
>         writeln(x);
>     }
>
>     foreach (x; 0 .. 1000)
>     {
>         Thread.sleep(dur!("usecs")(1000));
>         writeln(x);
>     }
> }
>
> Compile and run it. The first foreach loop ends in an instant, while
> the second one takes much, much longer to finish, which is puzzling
> since I've only increased the sleep time by a single microsecond.
> What's going on?

I can only imagine that the cause is that the implementation is using an OS function that only supports millisecond sleep resolution. So essentially it's like sleeping for 0 or 1 milliseconds. However, without knowing your OS, it's hard to say what's going on. On my Linux install, the timing of the two loops seems equivalent.
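
To put numbers on it, here's a minimal timing harness, assuming std.datetime's StopWatch and its TickDuration-based peek():

import std.stdio;
import std.datetime : StopWatch;
import core.thread : Thread;
import core.time : dur;

void main()
{
    StopWatch sw;

    // Time 1000 sleeps of 999 microseconds each.
    sw.start();
    foreach (x; 0 .. 1000)
        Thread.sleep(dur!("usecs")(999));
    sw.stop();
    writeln("999 usecs x 1000: ", sw.peek().msecs, " ms");

    // Time 1000 sleeps of 1000 microseconds each.
    sw.reset();
    sw.start();
    foreach (x; 0 .. 1000)
        Thread.sleep(dur!("usecs")(1000));
    sw.stop();
    writeln("1000 usecs x 1000: ", sw.peek().msecs, " ms");
}

If the OS truncates sleeps to whole milliseconds, the first figure comes out near zero and the second at a second or more, depending on the scheduler tick.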

-Steve
August 03, 2011
That could be the reason. I'm testing on Windows.

I was using sleep() as a quick hack to slow down the framerate of an OpenGL display. There are better ways to do this, but I haven't had time to find a proper solution yet.
August 03, 2011
On 2011-08-03 19:42, Andrej Mitrovic wrote:
> That could be the reason. I'm testing on Windows.
>
> I was using sleep() as a quick hack to slow down the framerate of an
> OpenGL display. There are better ways to do this, but I haven't had
> time to find a proper solution yet.

Why would you want to slow down the framerate?

-- 
/Jacob Carlborg
August 03, 2011
On Wed, 03 Aug 2011 13:42:34 -0400, Andrej Mitrovic <andrej.mitrovich@gmail.com> wrote:

> That could be the reason. I'm testing on Windows.

Windows only supports millisecond resolution.

A valid solution is probably to have anything > 0 and < 1 ms sleep for at least 1 ms. Or maybe it could round up to the next whole millisecond.

For now, you can simply sleep for 1 ms.
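
A minimal sketch of that rounding idea (the helper name sleepAtLeast is hypothetical, not an existing API):

import core.thread : Thread;
import core.time : Duration, dur;

// Sleep for at least `d`: requests between zero and one millisecond are
// rounded up to a full millisecond so they don't truncate to a no-op.
void sleepAtLeast(Duration d)
{
    immutable oneMs = dur!("msecs")(1);
    if (d > dur!("usecs")(0) && d < oneMs)
        d = oneMs;
    Thread.sleep(d);
}

With this, the 999-microsecond sleep behaves like the 1000-microsecond one instead of degenerating into a zero-length sleep.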

-Steve
August 03, 2011
On 8/3/11, Jacob Carlborg <doob@me.com> wrote:
> Why would you want to slow down the framerate?

Because the examples were written in the 90s and CPUs and graphics cards are so fast these days that the old code runs at an enormous framerate.

Anyway, after a bit of googling I've found a solution:

import std.datetime; // for Clock and tick durations

enum float FPS = 60.0;
auto t_prev = Clock.currSystemTick();
while (!done)
{
    auto t = Clock.currSystemTick();

    if ((t - t_prev).usecs > (1_000_000.0 / FPS))
    {
        t_prev = t;
        DrawGLScene();
    }

    SwapBuffers(hDC);
}

I can also use Clock.currAppTick(), which is similar.

I'm using "enum float" instead of just "enum FPS" because integer truncation bugs keep creeping into my code, i.e. I end up with an expression like "var1 / var2" evaluating to an integer instead of a float because a variable was declared as an integer.

Here's what I mean:
enum FPS = 60;

void main()
{
    auto fraction = (1 / FPS);  // oops: integer division, evaluates to 0
}

Using "enum float FPS = 60;" fixes this. It's a very subtle thing and easily introducable as a bug.
August 03, 2011
On 8/3/11, Andrej Mitrovic <andrej.mitrovich@gmail.com> wrote:
>     if ((t - t_prev).usecs > (1_000_000.0 / FPS))
>     {
>         t_prev = t;
>         DrawGLScene();
>     }
>
>     SwapBuffers(hDC);

My mistake here: SwapBuffers belongs inside the if body. An unrelated keyboard bug made me move it out there, but I've found what's causing it. Anyway, this is off-topic.
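
For the record, the corrected shape:

    if ((t - t_prev).usecs > (1_000_000.0 / FPS))
    {
        t_prev = t;
        DrawGLScene();
        SwapBuffers(hDC); // swap only after drawing a new frame
    }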
August 04, 2011
On 2011-08-03 20:36, Andrej Mitrovic wrote:
> On 8/3/11, Jacob Carlborg <doob@me.com> wrote:
>> Why would you want to slow down the framerate?
>
> Because the examples were written in the 90s and CPUs and graphics
> cards are so fast these days that the old code runs at an enormous
> framerate.

I would say that the correct solution is to rewrite the examples to work with any CPU speed. But as you say, they're just examples, so it may not be worth it.

-- 
/Jacob Carlborg
August 04, 2011
On 8/4/11, Jacob Carlborg <doob@me.com> wrote:
> I would say that the correct solution is to rewrite the examples to work with any CPU speed.
>
> --
> /Jacob Carlborg
>

That's what I did. The framerate isn't clamped, the threads don't sleep, and there's no spinning going on; I've replaced all of that with timers. The old code used spinning in some examples, which of course maxes out an entire core. That's not how things should be done these days. :)
August 15, 2011
On 03.08.2011 at 19:21, Steven Schveighoffer <schveiguy@yahoo.com> wrote:

> On Wed, 03 Aug 2011 13:14:50 -0400, Andrej Mitrovic <andrej.mitrovich@gmail.com> wrote:
>
>> Take a look at this:
>>
>> import std.stdio;
>> import core.thread;
>>
>> void main()
>> {
>>     foreach (x; 0 .. 1000)
>>     {
>>         Thread.sleep(dur!("usecs")(999));
>>         writeln(x);
>>     }
>>
>>     foreach (x; 0 .. 1000)
>>     {
>>         Thread.sleep(dur!("usecs")(1000));
>>         writeln(x);
>>     }
>> }
>>
>> Compile and run it. The first foreach loop ends in an instant, while
>> the second one takes much, much longer to finish, which is puzzling
>> since I've only increased the sleep time by a single microsecond.
>> What's going on?
>
> I can only imagine that the cause is that the implementation is using an OS function that only supports millisecond sleep resolution. So essentially it's like sleeping for 0 or 1 milliseconds. However, without knowing your OS, it's hard to say what's going on. On my Linux install, the timing of the two loops seems equivalent.
>
> -Steve

I would have guessed it comes down to timeslices. If the scheduler works at a rate of 1000 Hz, you get 1 ms delays; if it works at 250 Hz, you get 4 ms. Going down to an arbitrarily small sleep interval may be infeasible. It's just an idea, I haven't actually looked it up.
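
A quick way to probe the effective granularity, assuming std.datetime's StopWatch:

import std.stdio;
import std.datetime : StopWatch;
import core.thread : Thread;
import core.time : dur;

void main()
{
    // Average the real duration of a nominal 1 ms sleep; the result
    // approximates the scheduler tick (~1 ms at 1000 Hz, ~4 ms at 250 Hz).
    StopWatch sw;
    sw.start();
    foreach (i; 0 .. 100)
        Thread.sleep(dur!("msecs")(1));
    sw.stop();
    writeln("average sleep: ", sw.peek().usecs / 100, " usecs");
}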