November 05, 2010
That sounds exactly like what I wanted, thanks!

BTW, when I said I think std.datetime should use core.time.Duration as its duration type, I meant that in terms of where the Duration type was located, not which implementation should be used.  I guess I didn't really make that clear. Definitely, we should use the most complete implementation.

-Steve



----- Original Message ----
> From: Jonathan M Davis <jmdavisProg at gmx.com>
> 
> On Thursday 04 November 2010 12:51:36 Steve Schveighoffer wrote:
> > Without looking at it at all, my firm belief is that std.datetime should use core.time.Duration as its duration type.  One of the big issues I had with Tango before implementing the time types was that you had 3 or 4 different ways to specify time.
> > 
> > In all likelihood you are going to be using std.datetime for most of your code, since it provides mechanisms that work with the local clock.  If you then have to convert your std.datetime structs to core.time structs in order to call core functions, that's going to be a huge turnoff.
> > 
> > In addition, std.datetime should publicly import core.time so it's seamless to the person who wants to work with time structures.
> > 
> > I know the datetime stuff is not final yet, but Jonathan, can we look at what duration type should be moved to core.time?
> 
> Looking at core.time, I'd really suggest just moving over std.datetime.Duration to core.time along with TickDuration, FracSec, and the dur!() function for creating Durations, possibly along with some of the helper functions (which I believe are primarily restricted to template constraints). Most of the unit tests would have to be made to not use my nice unit test functions, so the unit tests would become a bit nastier, and some of the unit tests would probably have to be left in std.datetime, since they rely on Clock.currSystemTick() (or it could be removed from Clock, though I'd prefer not), but it can be done.
> 
> In any case, with some alterations, I think that std.datetime.Duration and the other pieces of std.datetime that it relies on can be moved into core.time fairly easily. I really think that on the whole, std.datetime.Duration is superior to core.time.Duration, so I definitely don't want to replace std.datetime's Duration with core.time's Duration.
> 
> I can spend some time to create a version which could replace what's currently in core.time, and then std.datetime can publicly import core.time as Steve suggests.
> 
> - Jonathan M Davis
> _______________________________________________
> D-runtime mailing  list
> D-runtime at puremagic.com
> http://lists.puremagic.com/mailman/listinfo/d-runtime
> 



November 05, 2010
On Nov 4, 2010, at 6:43 PM, Jonathan M Davis wrote:
> 
> Looking at core.time, I'd really suggest just moving over std.datetime.Duration to core.time along with TickDuration, FracSec, and the dur!() function for creating Durations, possibly along with some of the helper functions (which I believe are primarily restricted to template constraints).

Are there still 4 distinct duration types?
November 05, 2010
On 5-nov-10, at 15:06, Sean Kelly wrote:

> On Nov 4, 2010, at 6:43 PM, Jonathan M Davis wrote:
>>
>> Looking at core.time, I'd really suggest just moving over
>> std.datetime.Duration
>> to core.time along with TickDuration, FracSec, and the dur!()
>> function for
>> creating Durations, possibly along with some of the helper
>> functions (which I
>> believe are primarily restricted to template constraints).
>
> Are there still 4 distinct duration types?

I really think that using a double holding the number of seconds is
better for those "normal" usages.
That is what NSDate does (NeXT->Apple), and also libev.
Qt, on the other hand, uses a structure similar to the *nix timings,
as do Boost/std/tango.
The advantage of a double is that it is a simple type, introduces no
dependencies, and is still flexible enough for most uses.
With integers, one has to expose the time unit in some way, which
makes the interface more complex.
And if different timers have different resolutions, different
structures have to be used, which complicates the interface further.

Converting various native timers to a double might cost a little
bit, but normally that is not a problem.
Still, for the performance timers I did use an integer-based timer
(uniform accuracy, and very small conversion cost).
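
A minimal sketch of the double-as-seconds style, to make the
comparison concrete (sleepSeconds and its conversion are hypothetical,
not existing druntime code):

    // caller passes plain seconds; the native unit appears only at the boundary
    void sleepSeconds(double secs)
    {
        // convert once, when calling the platform wait primitive
        long usecs = cast(long)(secs * 1_000_000);
        // ... pass usecs to the OS sleep/wait call ...
    }

    // usage: no unit helpers or duration type needed
    // sleepSeconds(5.01);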

Fawzi


November 05, 2010
On Nov 5, 2010, at 9:12 AM, Fawzi Mohamed wrote:

> On 5-nov-10, at 15:06, Sean Kelly wrote:
> 
>> On Nov 4, 2010, at 6:43 PM, Jonathan M Davis wrote:
>>> 
>>> Looking at core.time, I'd really suggest just moving over std.datetime.Duration to core.time along with TickDuration, FracSec, and the dur!() function for creating Durations, possibly along with some of the helper functions (which I believe are primarily restricted to template constraints).
>> 
>> Are there still 4 distinct duration types?
> 
> I really think that using doubles with the number of seconds for those "normal" usages is really better.
> That is what NSDate does (Next->Apple), and also libev.
> QT on the other hand uses a structure similar to the *nix timings, and Boost/std/tango
> The advantage of double is that it is a simple type, and introduces 0 dependencies, and is still flexible enough for most uses.

This would render the -nofloat compiler flag useless, but perhaps that isn't a huge issue.  It was one of the arguments for changing double->long for Thread.sleep() in Tango though.

> Using integers one has to somehow give access to the time unit in some way, and so make the interface more complex.
> If different timers have different resolution different structures have to be used.
> Thus they lead to a more complex interface.

I disagree.  With the Duration code, the call looks like this:

    Thread.sleep(seconds(5) + milliseconds(10))

How is that complex?  Though I'll admit that the double version is pretty straightforward too:

    Thread.sleep(5.01)

Or the conversion routines could be changed to return a double instead of a Duration:

    double seconds(T)(T val) {
        return cast(double) val;
    }

etc.  I'll admit that I'm not absolutely crazy about dealing with doubles inside the wait routines themselves, but this could be solved by the addition of some library functions.  I'd probably basically just rip out Duration and replace it with a bunch of free functions that did essentially the same thing: long hours(double), long nanoseconds(double), long totalNanoseconds(double), etc.
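
A rough sketch of what such free functions could look like (the exact
split between totals and components is my guess at the intent, not a
settled API):

    // hypothetical helpers over a double holding a span in seconds
    long totalNanoseconds(double secs)
    {
        // round rather than truncate, so e.g. 5.01s doesn't come out as 5_009_999_999
        return cast(long)(secs * 1_000_000_000 + 0.5);
    }

    long hours(double secs)   // whole-hours component of the span
    {
        return cast(long)(secs / 3600);
    }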

> Converting various native timers to a double might cost a little bit, but normally that is not a problem.

Since this will currently only be used for wait routines, the performance impact is irrelevant.
November 05, 2010
On 5-nov-10, at 18:59, Sean Kelly wrote:

> On Nov 5, 2010, at 9:12 AM, Fawzi Mohamed wrote:
>
>> On 5-nov-10, at 15:06, Sean Kelly wrote:
>>
>>> On Nov 4, 2010, at 6:43 PM, Jonathan M Davis wrote:
>>>>
>>>> Looking at core.time, I'd really suggest just moving over
>>>> std.datetime.Duration
>>>> to core.time along with TickDuration, FracSec, and the dur!()
>>>> function for
>>>> creating Durations, possibly along with some of the helper
>>>> functions (which I
>>>> believe are primarily restricted to template constraints).
>>>
>>> Are there still 4 distinct duration types?
>>
>> I really think that using doubles with the number of seconds for
>> those "normal" usages is really better.
>> That is what NSDate does (Next->Apple), and also libev.
>> QT on the other hand uses a structure similar to the *nix timings,
>> and Boost/std/tango
>> The advantage of double is that it is a simple type, and introduces
>> 0 dependencies, and is still flexible enough for most uses.
>
> This would render the -nofloat compiler flag useless, but perhaps that isn't a huge issue.  It was one of the arguments for changing double->long for Thread.sleep() in Tango though.

I don't know if it was changed to a long at some point, but I did
discuss it and advocated a double for these reasons, and it is
still a double now.
Anyway, is there any platform D could plausibly be ported to that
does not support floating point? Thanks to 3D and DSP, basically
all modern processors support it, even in the embedded field
(read: ARM).
Yes, there are embedded processors that don't support floats, but I
don't think that D will target them.
(By the way, I said NSDate, but I should have said NSTimeInterval,
which is a double.)

>> Using integers one has to somehow give access to the time unit in
>> some way, and so make the interface more complex.
>> If different timers have different resolution different structures
>> have to be used.
>> Thus they lead to a more complex interface.
>
> I disagree.  With the Duration code, the call looks like this:
>
>    Thread.sleep(seconds(5) + milliseconds(10))
>
> How is that complex?  Though I'll admit that the double version is pretty straightforward too:

You are right, it is not complex if you use it only for that.
But if you later want to use the same/similar structure for
different clocks (probably outside the core), then you have problems
if the other clock has a different resolution.

I thought that having a uniform type outside druntime was part of
the reason for choosing Duration.
In that case I find a floating point number (normally a double) a
good choice for a duration.

>    Thread.sleep(5.01)
>
> Or the conversion routines could be changed to return a double instead of a Duration:
>
>    double seconds(T)(T val) {
>        return cast(double) val;
>    }
>
> etc.  I'll admit that I'm not absolutely crazy about dealing with doubles inside the wait routines themselves, but this could be solved by the addition of some library functions.  I'd probably basically just rip out Duration and replace it with a bunch of free functions that did essentially the same thing: long hours(double), long nanoseconds(double), long totalNanoseconds(double), etc.

I remember a discussion where Walter also didn't like floating point, but to tell the truth, I think that shunning it is a thing of the past. (This has nothing to do with the fact that the clock itself is better off using integers internally.)

>> Converting various native timers to a double might cost a little bit, but normally that is not a problem.
>
> Since this will currently only be used for wait routines, the performance impact is irrelevant.


November 05, 2010
On Friday, November 05, 2010 07:06:59 Sean Kelly wrote:
> On Nov 4, 2010, at 6:43 PM, Jonathan M Davis wrote:
> > Looking at core.time, I'd really suggest just moving over std.datetime.Duration to core.time along with TickDuration, FracSec, and the dur!() function for creating Durations, possibly along with some of the helper functions (which I believe are primarily restricted to template constraints).
> 
> Are there still 4 distinct duration types?

No. Just two. Much as I liked the 4, and I thought that they worked quite well (for the most part, you didn't have to care about the types), pretty much everyone else thought that it was overly complicated. So, I reduced it to two. Now it's TickDuration (which was SHOO's Ticks), which is used when getting the time from the system, and Duration (which essentially was HNSecDuration), which holds the number of hnsecs. It's similar to what you have, but it has functionality which yours doesn't, in order to fit in better with std.datetime, and it makes heavier use of templates than yours does. For instance, to create one, you'd use calls like dur!"seconds"(5) or dur!"usecs"(502), and getter functions like seconds are aliases for get - e.g. get!"seconds" - so it's far better suited to generic programming.
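
For reference, this is roughly how the interface described above reads
in code (a sketch based on the description only; exact signatures may
still change during review):

    // building Durations with the templated factory
    auto d = dur!"seconds"(5) + dur!"usecs"(502);

    // generic getter, convenient from templated code
    long secs = d.get!"seconds"();

    // named getters such as d.seconds are aliases for get!"seconds"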

I'll try and have a proposed core.time tonight or tomorrow, though it should be noted that it may need further changes based on how the review of the current datetime code goes, even if it seems entirely acceptable on its own.

- Jonathan M Davis
November 05, 2010
On Nov 5, 2010, at 11:43 AM, Fawzi Mohamed wrote:

> 
> On 5-nov-10, at 18:59, Sean Kelly wrote:
> 
>> On Nov 5, 2010, at 9:12 AM, Fawzi Mohamed wrote:
>>> 
>>> Using integers one has to somehow give access to the time unit in some way, and so make the interface more complex.
>>> If different timers have different resolution different structures have to be used.
>>> Thus they lead to a more complex interface.
>> 
>> I disagree.  With the Duration code, the call looks like this:
>> 
>>   Thread.sleep(seconds(5) + milliseconds(10))
>> 
>> How is that complex?  Though I'll admit that the double version is pretty straightforward too:
> 
> You are right, It is not complex if you use it only for that.
> But if later you want to use the same/similar structure for different clocks (probably outside the core), then you have problems if the other clock has a different resolution.

Resolution is only relevant insofar as the maximum and minimum duration that can be represented.  Using a signed 64-bit value at a nanosecond resolution, the maximum duration that can be represented is roughly 300 years, and bumping the resolution to 100ns increments (as in C#) would make this 30,000 years.  Is there really a need for multiple-resolution durations?  Perhaps duration should use a double internally as well?  One of my concerns about using std.datetime as-is is that I'd need 4 overloads of Thread.sleep(), one for each duration type.  This suggests to me that durations in std.datetime really weren't intended for independent use.
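
For the record, those figures hold up (a quick back-of-the-envelope
check, not proposed API):

    enum nsPerYear    = 365.25 * 24 * 3600 * 1e9;       // ~3.16e16 ns per year
    enum yearsAt1ns   = long.max / nsPerYear;           // ~292 years for a signed 64-bit count of ns
    enum yearsAt100ns = long.max / (nsPerYear / 100);   // ~29_000 years at 100 ns (hnsec) ticks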

> I thought that having a uniform type outside the druntime was part of the reason for choosing Duration. In that case I find using a floating point number (normally a double) as duration is a good choice.
> 
>>   Thread.sleep(5.01)
>> 
>> Or the conversion routines could be changed to return a double instead of a Duration:
>> 
>>   double seconds(T)(T val) {
>>       return cast(double) val;
>>   }
>> 
>> etc.  I'll admit that I'm not absolutely crazy about dealing with doubles inside the wait routines themselves, but this could be solved by the addition of some library functions.  I'd probably basically just rip out Duration and replace it with a bunch of free functions that did essentially the same thing: long hours(double), long nanoseconds(double), long totalNanoseconds(double), etc.
> 
> I remember a discussion where Walter also didn't like floating point, but to tell the truth, I think that shunning it is a thing of the past.
> (This has nothing to do with the fact that the clock itself is better off using integers internally.)

I wouldn't fight the use of double if it came to that.  More important to me is that durations be communicated clearly and that slicing them up is easy and error-free.  This could easily be solved by a set of functions similar to how Duration is used now without stepping on any toes by defining an actual Duration type.
November 05, 2010
On Nov 5, 2010, at 1:04 PM, Jonathan M Davis wrote:

> On Friday, November 05, 2010 07:06:59 Sean Kelly wrote:
>> On Nov 4, 2010, at 6:43 PM, Jonathan M Davis wrote:
>>> Looking at core.time, I'd really suggest just moving over std.datetime.Duration to core.time along with TickDuration, FracSec, and the dur!() function for creating Durations, possibly along with some of the helper functions (which I believe are primarily restricted to template constraints).
>> 
>> Are there still 4 distinct duration types?
> 
> No. Just two. Much as I liked the 4, and I thought that they worked quite well (for the most part, you didn't have to care about the types), pretty much everyone else thought that it was overly complicated. So, I reduced it to two. Now it's TickDuration (which was SHOO's Ticks), which is used when getting the time from the system, and Duration (which essentially was HNSecDuration), which holds the number of hnsecs. It's similar to what you have, but it has functionality which yours doesn't, in order to fit in better with std.datetime, and it makes heavier use of templates than yours does. For instance, to create one, you'd use calls like dur!"seconds"(5) or dur!"usecs"(502), and getter functions like seconds are aliases for get - e.g. get!"seconds" - so it's far better suited to generic programming.
> 
> I'll try and have a proposed core.time tonight or tomorrow, though it should be noted that it may need further changes based on how the review of the current datetime code goes, even if it seems entirely acceptable on its own.

I rolled these changes into druntime now because I figure there's plenty of time before the next release to sort out the details, so no rush :-)
November 05, 2010
On Friday, November 05, 2010 13:09:39 Sean Kelly wrote:
> On Nov 5, 2010, at 11:43 AM, Fawzi Mohamed wrote:
> > On 5-nov-10, at 18:59, Sean Kelly wrote:
> >> On Nov 5, 2010, at 9:12 AM, Fawzi Mohamed wrote:
> >>> Using integers one has to somehow give access to the time unit in some way, and so make the interface more complex. If different timers have different resolution different structures have to be used. Thus they lead to a more complex interface.
> >> 
> >> I disagree.  With the Duration code, the call looks like this:
> >>   Thread.sleep(seconds(5) + milliseconds(10))
> >> 
> >> How is that complex?  Though I'll admit that the double version is pretty straightforward too:
> > You are right, It is not complex if you use it only for that.
> > But if later you want to use the same/similar structure for different
> > clocks (probably outside the core), then you have problems if the other
> > clock has a different resolution.
> 
> Resolution is only relevant insofar as the maximum and minimum duration that can be represented.  Using a signed 64-bit value at a nanosecond resolution, the maximum duration that can be represented is roughly 300 years, and bumping the resolution to 100ns increments (as in C#) would make this 30,000 years.  Is there really a need for multiple-resolution durations?  Perhaps duration should use a double internally as well?  One of my concerns of using std.datetime as-is is that I'd need 4 overloads of Thread.sleep(), one for each duration type.  This suggests to me that durations in std.datetime really weren't intended for independent use.

The idea with the different durations that were in std.datetime was that you'd do calls like Dur.years(1) + Dur.days(2) and have exactly the correct duration types without worrying about what they actually were. The Dur functions returned the correct duration type for that unit, and any arithmetic done on them resulted in the correct duration type. So, for the most part, you didn't have to worry about the duration types. However, any function that took a duration would typically have been templated on the duration type. In the case of sleep, it would only have been able to take an HNSecDuration or TickDuration anyway, since MonthDuration and JointDuration wouldn't have been convertible without a specific date (hence why they existed in the first place rather than just having HNSecDuration for normal duration stuff and TickDuration for the few things that actually cared about clock ticks).

The problem, of course, was that anyone looking at the situation saw multiple duration types and wondered why, what they were for, and how to use them, even though the use case was really simple, and you really didn't have to worry about the types for the most part (passing durations to functions that you wrote was nearly the only place that it would have mattered). So, there was a fair bit of confusion there. It's a bit like how many of the return types in std.algorithm tend to scare people when liberal use of auto (and possibly std.array.array) solves the problem quite easily, and you don't have to worry about them. So, I liked them, but it seemed like the confusion caused was too great to keep them. Years and months now get dealt with separately from durations.

When I was designing them, I certainly wasn't thinking about using them with stuff in core like Thread.sleep() and the like, but I'm not sure that I would have done anything differently if I had. I'd have just made Thread.sleep() take an HNSecDuration, since MonthDuration and JointDuration would have made no sense in that context, and you could always cast TickDuration to an HNSecDuration if you really wanted to use a TickDuration for some reason (though I think that about the only place that a typical programmer would use TickDuration directly would be when using StopWatch). So, that's essentially what I'm going to propose now, except that it's Duration instead of HNSecDuration (it was renamed on the demise of MonthDuration and JointDuration). Thread.sleep() and its friends can take a Duration, and if someone really wants to use a TickDuration, they can cast it.

Now, Duration has 100 ns precision, so if you really want a sleep function that goes to nanosecond resolution, then you'd have to have it also take a TickDuration - if a clock tick is a high enough resolution, TickDuration could be used to get something as precise as nanoseconds - but while it can theoretically hold nanoseconds, it can't actually return them or be assigned them without doing the calculations yourself (hnsecs is the highest resolution that it supports directly, but it holds its length in clock ticks). But Linux doesn't seem to support a resolution higher than microseconds (on any system I've used, anyway), and while Windows does support a higher resolution, it's not much higher than microseconds IIRC, so I'm not sure there's much point in trying to support resolutions finer than hnsecs.
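
Concretely, the intent is that the wait routines take the one Duration
type (a sketch of the intended usage, assuming the API lands as
described above; the unit names and the cast are taken from the
discussion, not final code):

    // the normal case: build a Duration and pass it
    Thread.sleep(dur!"seconds"(5) + dur!"msecs"(10));

    // if you really have a TickDuration (e.g. from timing code), cast it explicitly
    auto start = Clock.currSystemTick();
    // ... do something ...
    Thread.sleep(cast(Duration)(Clock.currSystemTick() - start));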

- Jonathan M Davis
November 05, 2010
On 5-nov-10, at 21:51, Jonathan M Davis wrote:

> On Friday, November 05, 2010 13:09:39 Sean Kelly wrote:
>> On Nov 5, 2010, at 11:43 AM, Fawzi Mohamed wrote:
>>> On 5-nov-10, at 18:59, Sean Kelly wrote:
>>>> On Nov 5, 2010, at 9:12 AM, Fawzi Mohamed wrote:
>>>>> Using integers one has to somehow give access to the time unit
>>>>> in some
>>>>> way, and so make the interface more complex. If different timers
>>>>> have
>>>>> different resolution different structures have to be used. Thus
>>>>> they
>>>>> lead to a more complex interface.
>>>>
>>>> I disagree.  With the Duration code, the call looks like this:
>>>>  Thread.sleep(seconds(5) + milliseconds(10))
>>>>
>>>> How is that complex?  Though I'll admit that the double version is
>>>> pretty straightforward too:
>>> You are right, It is not complex if you use it only for that.
>>> But if later you want to use the same/similar structure for
>>> different
>>> clocks (probably outside the core), then you have problems if the
>>> other
>>> clock has a different resolution.
>>
>> Resolution is only relevant insofar as the maximum and minimum
>> duration
>> that can be represented.  Using a signed 64-bit value at a nanosecond
>> resolution, the maximum duration that can be represented is roughly
>> 300
>> years, and bumping the resolution to 100ns increments (as in C#)
>> would
>> make this 30,000 years.  Is there really a need for multiple-
>> resolution
>> durations?  Perhaps duration should use a double internally as
>> well?  One
>> of my concerns of using std.datetime as-is is that I'd need 4
>> overloads of
>> Thread.sleep(), one for each duration type.  This suggests to me that
>> durations in std.datetime really weren't intended for independent
>> use.
>
> The idea with the different durations that were in std.datetime was
> that you'd do
> calls like Dur.years(1) + Dur.days(2) and have exactly the correct
> duration
> types without worrying about what they actually were. The Dur
> functions returned
> the correct duration type for that unit, and any arithmetic done on
> them
> resulted in the correct duration type. So, for the most part, you
> didn't have to
> worry about the duration types. However, any function that took a
> duration would
> typically have been templated on the duration type. In the case of
> sleep, it
> would only have been able to take an HNSecDuration or TickDuration
> anyway, since
> MonthDuration and JointDuration wouldn't have been convertable
> without a specific
> date (hence why they existed in the first place rather than just
> having
> HNSecDuration for normal duration stuff and TickDuration for the few
> things that
> actually cared about clock ticks).
>
> The problem, of course, was that anyone looking at the situation saw
> multiple
> duration types and was wondering why and what they were for and how
> to use them,
> even though the use case was really simple, and you really didn't
> have to worry
> about the types for the most part (passing durations to functions
> that you wrote
> was nearly the only place that it would have mattered). So, there
> was a fair bit
> of confusion there. It's a bit like how many of the return types in
> std.algorithm tend to scare people when liberal use of auto (and
> possible
> std.array.array) solves the problem quite easily, and you don't have
> to worry
> about them. So, I liked them, but it seemed like the confusion
> caused was too
> great to keep them. years and months now get dealt with separately
> from
> durations.
>
> When I was designing them, I certainly wasn't thinking about using
> them with
> stuff in core like Thread.sleep() and the like, but I'm not sure
> that I would
> have done anything differently if I had. I'd have just made
> Thread.sleep() take
> an HNSecDuration, since MonthDuration and JointDuration would have
> made no sense
> in that context, and you could always cast TickDuration to an
> HNSecDuration if
> you really wanted to use a TickDuration for some reason (though I
> think that
> about the only place that a typical programmer would use
> TickDuration directly
> would be when using StopWatch). So, that's essentially, what I'm
> going to
> propose now, except that it's Duration instead of HNSecDuration (it
> was renamed
> on the demise of MonthDuration and JointDuration). Thread.sleep()
> and its
> friends can take a Duration, and if someone really wants to use a
> TickDuration,
> they can cast it.
>
> Now, Duration has 100 ns precision, so if you really want a sleep
> function that
> goes to nanosecond resolution, then you'd have to have it also take a
> TickDuration - if a clock tick is a high enough resolution,
> TickDuration could
> be used to get something as precise as nanoseconds - but while it can
> theoretically hold nanoseconds, it can't actually return them or be
> assigned
> them without doing the calculations yourself (hnsecs is the highest
> resolution
> that it supports directly, but it holds its length in clock ticks).
> But since
> Linux doesn't seem to support a resolution higher than microseconds
> (on any
> system I've used anyway), and while Windows does support a higher
> resolution,
> it's not much higher than microseconds IIRC, I'm not sure if there's
> much point
> in trying to support resolutions higher than hnsecs.
>
> - Jonathan M Davis
I know that you went for a very different approach, but as I said, I
find something like what was used in NeXTSTEP/OpenStep (which was
cross-platform), and now in Apple's iOS and OS X, a cleaner approach,
and one that doesn't need many templates.
The fact that libev (which is quite performance-minded) also uses
doubles just confirms that it is a good approach even when you want
high performance.

The basic structure is reflected also in the C-based Core Foundation types that are used to implement them nowadays:

http://developer.apple.com/library/ios/#documentation/CoreFoundation/Conceptual/CFDatesAndTimes/Concepts/DataReps.html%23//apple_ref/doc/uid/20001139-CJBEJBHH

(the Objective-C types based on them are here:)

http://developer.apple.com/library/ios/#documentation/cocoa/Conceptual/DatesAndTimes/DatesAndTimes.html%23//apple_ref/doc/uid/10000039i

Note that these are basically lifted from the open specification of OpenStep (which GNUstep, for example, tries to implement):

http://docs.sun.com/app/docs/doc/802-2112?l=en

Fawzi