July 24, 2014
https://issues.dlang.org/show_bug.cgi?id=6725

--- Comment #36 from Sobirari Muhomori <dfj1esp02@sneakemail.com> ---
(In reply to Vladimir Panteleev from comment #34)
> (In reply to Sobirari Muhomori from comment #32)
> > It's meaningful to sleep for 200ms, but not for 0.2s. When you need a better precision, you switch to the appropriate unit. How would you specify 1/60 fraction of a minute?
> 
> I still don't understand this argument. 200ms and 0.2s are the same thing, how can it be meaningful to sleep for 200ms but not for 0.2s?

Not quite the same. A second is an inadequate unit for specifying time that needs sub-second precision. Don't misunderstand me: I'm not forbidding you from using floating point to specify time, I only think it should not be encouraged for wide use. Hence it should not be included in the standard library, but written in your own code, which is trivial: floatDur is a very small function, and it could be included in the docs to help people get it right while warning against and discouraging the practice at the same time. I think that's the optimal choice given the tradeoffs and goals.
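
For illustration, such a helper could look roughly like this (the name floatDur comes from this discussion; the exact signature and the rounding to the nearest hnsec are just a sketch, not anything in std.datetime):

import core.time : Duration, dur;

// Hypothetical user-code helper: convert a floating point number of seconds
// into a Duration with hnsec (100 ns) resolution, rounding to the nearest
// tick (only meant for non-negative inputs).
Duration floatDur(double seconds)
{
    enum ticksPerSecond = 10_000_000L;  // hnsecs per second
    return dur!"hnsecs"(cast(long)(seconds * ticksPerSecond + 0.5));
}

void main()
{
    assert(floatDur(0.2) == dur!"msecs"(200));  // 0.2 s -> 2_000_000 hnsecs
}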

BTW, I just thought of another possible confusion: one can mistake 0.30h for 30 minutes, when it is actually 18 minutes. I use a decimal time system at work, but it takes a considerable amount of time to get used to (it requires thinking about time in quanta of 6 minutes) and can be confusing when you see it for the first time.

> > Digital signature is an important example. Cryptographic security is an important technology enjoying wide use.
> 
> So are thousands and thousands of other technologies being in use on your computer right now.

That was a reply to your assertion that hashing of timestamps is an esoteric example. If the timestamp must be signed, you can't avoid it. If a duration must be specified with sub-second precision, you can use the millisecond unit; if it must be specified with week precision, you can use the week unit. I don't see how seconds alone can address all needs for duration specification. The units+integers interface provided by the standard library is superior to a float interface.
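
For reference, the units+integers interface already covers the whole range (these are existing core.time/std.datetime calls; only the variable names are mine):

import core.time : dur;
import std.stdio : writeln;

void main()
{
    auto fine   = dur!"msecs"(200);   // sub-second precision: use milliseconds
    auto medium = dur!"seconds"(90);  // whole seconds
    auto coarse = dur!"weeks"(1);     // week-level precision
    writeln(fine + medium + coarse);  // exact integer arithmetic throughout
}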

> > Millisecond exists precisely for that purpose. In my experience millisecond precision works fine up to a scale of a minute (even though you don't need milliseconds for durations >2s).
> 
> Are you saying that the program should just accept an integer number at the lowest precision it needs? That's just wrong: 1) it puts abstract technical reasoning before user experience; and 2) it strays from a well-established convention.

A program should accept values in units of appropriate precision. I recommend 7zip as an example of a good interface for specifying values across a wide range, from bytes to megabytes. I haven't checked, but I believe it doesn't support float values, and frankly it doesn't need to.

> > It's again a need for a precision better than a second. Though, I'd still question that 1.5s is much better than 1 or 2 seconds.
> 
> I don't understand this argument. Are you saying that no program should ever need to sleep for 1.5 seconds?

Even if it does, specifying 1500ms is not an issue, but I'd still question its utility: I don't see how a 25% difference can be practically noticeable.

--
July 24, 2014
https://issues.dlang.org/show_bug.cgi?id=6725

--- Comment #37 from Steven Schveighoffer <schveiguy@yahoo.com> ---
(In reply to Jonathan M Davis from comment #31)
> Okay. I was going to say that allowing stuff like seconds(.033) would encourage a lack of precision even in cases where precision was required, and I really didn't like that idea, but it looks like the lack of precision really isn't all that bad. The largest that the error gets is one hnsec:
> 
> import std.algorithm;
> import std.datetime;
> import std.stdio;
> import std.string;
> 
> void main()
> {
>     long i = 0;
>     immutable units = convert!("seconds", "hnsecs")(1);
>     immutable mult = 1.0 / units;
>     for(double d = 0; d < 1; d += mult, ++i)
>     {
>         auto result = cast(long)(d * units);
>         assert(result == i || result == i - 1 || result == i + 1,
>                format("%s %s %s", i, d, result));
>     }
> }

This is not a very good test, because mult as you defined it cannot be represented exactly in floating point. This means you are multiplying the error by quite a bit. A literal or parsed floating point value will not be so prone to error.
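
To make that concrete, compare a value accumulated in 1/10^7 steps with the same value written as a literal (a quick sketch reusing the same units/mult definitions):

import core.time : convert;
import std.stdio : writefln;

void main()
{
    immutable units = convert!("seconds", "hnsecs")(1);  // 10_000_000
    immutable mult = 1.0 / units;                        // not exactly representable

    double accumulated = 0;
    foreach (_; 0 .. 2_000_000)      // sum up what should be exactly 0.2 seconds
        accumulated += mult;

    writefln("accumulated: %.17g", accumulated);  // drifts from repeated rounding
    writefln("literal:     %.17g", 0.2);          // a single rounding of 0.2
}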

(In reply to Walter Bright from comment #35)
> (In reply to Vladimir Panteleev from comment #34)
> > 200ms and 0.2s are the same thing,
> > how can it be meaningful to sleep for 200ms but not for 0.2s?
> 
> 0.2 cannot be represented exactly as a floating point value. Therefore, rounding error starts creeping in if you start adding many 0.2 increments. At the end, the books don't balance.

That only matters if you do all your time calculations in FP, which doesn't make any sense; use Duration for your math.

In other words:

durFromInterval(someFPValue) * 100000 => no error, if we convert the FP value properly.
durFromInterval(someFPValue * 100000) => perhaps some error, and less stable than the above.
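
Roughly (durFromInterval here is the hypothetical conversion above, not an existing std.datetime function; the rounding is just a sketch):

import core.time : Duration, dur;

// Hypothetical conversion: FP seconds -> Duration, rounded to the nearest hnsec.
Duration durFromInterval(double seconds)
{
    return dur!"hnsecs"(cast(long)(seconds * 10_000_000 + 0.5));
}

void main()
{
    auto step = durFromInterval(0.2);        // convert the FP value once...
    auto total = step * 100_000;             // ...then do exact integer math
    assert(total == dur!"seconds"(20_000));  // no accumulated FP error

    // versus scaling in FP first and converting at the end:
    auto total2 = durFromInterval(0.2 * 100_000);
    // depending on the value, this can land a tick off the intended result
}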

> Roundoff errors must be handled by the user, not the core library, because the core library cannot know what the user is doing with their FP calculations.

I disagree; we can add a small epsilon when converting. We know better than the user what the epsilon should be, since we define the discrete step (the hnsec).

Note, the user can ALREADY do this:

dur!"nsecs"(cast(long)(someFPValue * 1_000_000_000));

We can't stop them from using FP.

But keep in mind that this ISN'T the normal way to create a duration; it's a secondary option. And I'd much rather we define a mechanism to convert FP to a duration than leave the user to deal with it.

--