April 03, 2006
Fredrik Olsson wrote:
> I see your point, and will try to explain why I have chosen double as I have.

I have some more thoughts on using double <g>.

> Using double I get the same scale for timestamps as for dates; the integer part is days.
> 
> Having dates as days with times as fractions is also how the astronomers do it; they call it Julian Days, and base it on Monday, January 1, 4713 BCE as the epoch. But the idea is the same.
> 
> It is the datatype used by many database implementations (PostgreSQL, MySQL, MS SQL Server 7 (and beyond?)).

This may be a consequence of Microsoft Basic implementing time as a double. Chicken or egg?
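The days-plus-fraction scheme quoted above is easy to sketch. The following is an illustrative example (Python rather than D, and not std.date's actual code; the function names are invented for the example) of the classic Fliegel-Van Flandern conversion from a Gregorian calendar date to a Julian Day Number, plus the time-of-day fraction:

```python
def tdiv(a, b):
    """Integer division truncating toward zero (C-style), which the
    Fliegel-Van Flandern formula assumes (Python's // floors instead)."""
    return int(a / b)

def julian_day_number(year, month, day):
    """Julian Day Number at noon of the given Gregorian date."""
    a = tdiv(month - 14, 12)
    return (tdiv(1461 * (year + 4800 + a), 4)
            + tdiv(367 * (month - 2 - 12 * a), 12)
            - tdiv(3 * tdiv(year + 4900 + a, 100), 4)
            + day - 32075)

def julian_date(year, month, day, hour, minute, second):
    """Days plus time-of-day fraction, as in the proposal above.
    Julian Days start at noon, hence the -12 hours."""
    frac = (hour - 12) / 24 + minute / 1440 + second / 86400
    return julian_day_number(year, month, day) + frac

print(julian_day_number(2000, 1, 1))   # 2451545
print(julian_day_number(1970, 1, 1))   # 2440588 (the Unix epoch)
```

Midnight of 2000-01-01 comes out as 2451544.5, showing how the fractional part carries the time of day.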

> 
> A double can represent infinity and -infinity, and not-a-number can stand for "not a date".

std.date's d_time offers d_time_nan, which fills the role of nan for times. I don't see a purpose for infinity or -infinity when dealing with calendar dates or file times. There is a purpose for such when doing physics math, but that is way beyond the scope of std.date.
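For what it's worth, a NaN sentinel inherits IEEE 754 comparison semantics, which cuts both ways: it never compares equal to anything, including itself, so every check for "not a date" must be explicit. A small illustration (Python here for brevity; its float is the same IEEE 754 double that D's double is):

```python
import math

not_a_date = float("nan")

# NaN compares false with everything, itself included...
print(not_a_date == not_a_date)   # False
print(not_a_date < 123.0)         # False
print(not_a_date > 123.0)         # False

# ...so sentinel checks need an explicit isnan-style test:
print(math.isnan(not_a_date))     # True
```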

> +-270 years is sort of a limitation :), even a simple genealogy application would hit that limit quite soon. Using a double is based on the idea that the farther away from today, the less relevant precision is.

Double can appear to represent far more precision. But the system clocks give quantized time (usually in millisecond precision). Doubles cannot exactly represent milliseconds, so when you convert from system time to doubles and back to system time, it's very possible that you can get a different system time. This will play havoc with file utilities and programs like make.
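The round-trip problem is easy to demonstrate. A sketch in Python (whose float is an IEEE 754 double), using truncation the way a cast to an integer type would: converting integer milliseconds to fractional days and back does not always return the original value.

```python
MS_PER_DAY = 86_400_000

def to_days(ms):
    """System time (integer milliseconds) -> fractional days in a double."""
    return ms / MS_PER_DAY

def back_to_ms(days):
    """Fractional days -> milliseconds, truncating like an integer cast."""
    return int(days * MS_PER_DAY)

# Scan the first million milliseconds of a day for round-trip failures:
# whenever divide-then-multiply rounds slightly downward, truncation
# drops the result by a full millisecond.
bad = [ms for ms in range(1_000_000) if back_to_ms(to_days(ms)) != ms]
print(len(bad) > 0)   # True: some timestamps come back changed
```

This is exactly the scenario in which a make-like tool comparing file timestamps would see spurious differences after a conversion through doubles.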
April 04, 2006
Walter Bright wrote:
> Double has another problem when used as a date - there are embedded processors in wide use that don't have floating point hardware. This
> means that double shouldn't be used in core routines that are not implicitly related to doing floating point calculations.

Ignoring the issue of date, I have a comment on processors:

IIRC, D will never be found on a processor less than 32 bits. Further, it may take some time before D actually gets used in something embedded.

By that time, IMHO, it is unlikely that a 32b processor would not contain a math unit.

---

Of course this may warrant a discussion here, which is good, because then we might end up with a more clear set of goals, both for library development and for D itself.
April 04, 2006
Georg Wrede wrote:
> Walter Bright wrote:
>> Double has another problem when used as a date - there are embedded processors in wide use that don't have floating point hardware. This
>> means that double shouldn't be used in core routines that are not implicitly related to doing floating point calculations.
> 
> Ignoring the issue of date, I have a comment on processors:
> 
> IIRC, D will never be found on a processor less than 32 bits. Further, it may take some time before D actually gets used in something embedded.
> 
> By that time, IMHO, it is unlikely that a 32b processor would not contain a math unit.
> 
> ---
> 
> Of course this may warrant a discussion here, which is good, because then we might end up with a more clear set of goals, both for library development and for D itself.

While it was decided at the start, for very good reasons, that D wasn't going to accommodate 16 bit processors, there are 32 bit processors in wide use in the embedded market that do not have hardware floating point. There is no reason to gratuitously not run on those systems.
April 05, 2006
Walter Bright wrote:
> Georg Wrede wrote:
>> Walter Bright wrote:
>> 
>>> Double has another problem when used as a date - there are
>>> embedded processors in wide use that don't have floating point
>>> hardware. This means that double shouldn't be used in core
>>> routines that are not implicitly related to doing floating point
>>> calculations.
>> 
>> 
>> Ignoring the issue of date, I have a comment on processors:
>> 
>> IIRC, D will never be found on a processor less than 32 bits.
>> Further, it may take some time before D actually gets used in
>> something embedded.
>> 
>> By that time, IMHO, it is unlikely that a 32b processor would not contain a math unit.
>> 
>> ---
>> 
>> Of course this may warrant a discussion here, which is good,
>> because then we might end up with a more clear set of goals, both
>> for library development and for D itself.
> 
> 
> While it was decided at the start, for very good reasons, that D wasn't
> going to accommodate 16 bit processors, there are 32 bit processors in
> wide use in the embedded market that do not have hardware floating
> point. There is no reason to gratuitously not run on those systems.

Ok, that was exactly the answer I thought I'd get.

Currently, this issue is not entirely foreign to me. I'm delivering a HW + SW solution to a manufacturer of plastics processing machines, where my solution will supervise the process and alert an operator whenever the machine "wants hand-holding".

For that purpose, the choice is between an 8-bit and a 16-bit processor. Very probably a PIC. (So no D here. :-), I'll end up doing it in C.)

Now, considering Moore, and the fact that the 80387 math coprocessor didn't have all that many transistors, the marginal price of math is plummeting. Especially compared with the minimum number of transistors needed for a (general purpose) 32-bit CPU.

Also, since the purveyors of 32-bit processors are keen on showing the ease of use and versatility of their processors, it is likely that even if math is not on the chip, they at least deliver suitable libraries to emulate that in software.

---

As I see it, there are mainly two use cases for D with embedded processors (correct me if I'm wrong): First (and probably the more popular scenario), there either exists a rudimentary (probably even a real-time) OS for the processor (or application domain), delivered (for free) by the HW manufacturer, or, they deliver the necessary libraries to be used either with their compiler or for GCC cross compiling.

Second use case being, one is about to develop the entire SW for an application "from scratch".

Now, in the former case, math is either on-chip, or included in the libraries. In the latter, either we don't use math, or we make (or acquire) the necessary functions from other sources.

---

The second use case worries me. (Possibly unduly?) D not being entirely decoupled from Phobos at least creates an illusion of potential problems for "from-scratch" SW development for embedded HW.

---

We do have to remember the reasons leading to choosing a 32-bit processor in the first place: if the process to be controlled is too complicated or otherwise needs more power than a 16-bit CPU can deliver, only then should one choose a 32-bit CPU. Now, at that time, it is likely that requirements for RAM, address space, speed, and other things are big enough that the inclusion of math (in HW or library) becomes minor. (Oh, and some of the current 16-bit (and even some 8-bit) processors do actually deliver astonishing horsepower already.)

So, assuming D has access to math on _all_ of the processors and HW it'll ever be on, suddenly doesn't seem so arbitrary.
April 05, 2006
Georg Wrede wrote:
> Walter Bright wrote:
>> While it was decided at the start, for very good reasons, that D wasn't
>> going to accommodate 16 bit processors, there are 32 bit processors in
>> wide use in the embedded market that do not have hardware floating
>> point. There is no reason to gratuitously not run on those systems.
> 
> Ok, that was exactly the answer I thought I'd get.
> 
> Currently, this issue is not entirely foreign to me. I'm delivering a HW + SW solution to a manufacturer of plastics processing machines, where my solution will supervise the process and alert an operator whenever the machine "wants hand-holding".
> 
> For that purpose, the choice is between an 8-bit and a 16-bit processor. Very probably a PIC. (So no D here. :-), I'll end up doing it in C.)

So, you're not even using a 32 bit processor, but a 16 bit design. I know for a fact that there are *new* embedded systems designs going on using 32 bit processors that don't have FPUs.


> Now, considering Moore, and the fact that the 80387 math coprocessor didn't have all that many transistors, the marginal price of math is plummeting. Especially compared with the minimum number of transistors needed for a (general purpose) 32-bit CPU.

So why are you using a 16 bit design? I can guess - cost. And that's why embedded systems for 32 bit processors often don't have FPUs. Cost, where even a few cents matter. (Also power consumption.)


> Also, since the purveyors of 32-bit processors are keen on showing the ease of use and versatility of their processors, it is likely that even if math is not on the chip, they at least deliver suitable libraries to emulate that in software.

I have such a library (needed for the DOS-32 support). Although it works fine, it is 100 times slower than hardware floating point. Embedded CPUs are often strapped for speed, so why gratuitously require floating point?


> Now, in the former case, math is either on-chip, or included in the libraries. In the latter, either we don't use math, or we make (or acquire) the necessary functions from other sources.

Or design out unnecessary uses of floating point.
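"Designing out" floating point for timekeeping usually means plain integer arithmetic. A sketch of the idea (Python for illustration; in D this would naturally be a long) showing the same days-plus-time-of-day split using only integer milliseconds:

```python
MS_PER_DAY = 86_400_000

def split_timestamp(ms_since_epoch):
    """Split an integer-millisecond timestamp into (days, ms_of_day).
    divmod does in one step what the double encoding needed a fractional
    part for -- and the integer version round-trips exactly."""
    return divmod(ms_since_epoch, MS_PER_DAY)

def join_timestamp(days, ms_of_day):
    return days * MS_PER_DAY + ms_of_day

ts = 1_234_567_890_123
days, ms = split_timestamp(ts)
print(days, ms)   # 14288 84690123
```

No FPU, no emulation library, and no rounding: `join_timestamp(*split_timestamp(ts))` always equals `ts`.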


> The second use case worries me. (Possibly unduly?) D not being entirely decoupled from Phobos at least creates an illusion of potential problems for "from-scratch" SW development for embedded HW.

Phobos doesn't require floating point support from the processor unless one actually uses floating point in the application code.

I also really don't understand why anyone using D would require not using Phobos. What's the problem?



> We do have to remember the reasons leading to choosing a 32-bit processor in the first place: if the process to be controlled is too complicated or otherwise needs more power than a 16-bit CPU can deliver, only then should one choose a 32-bit CPU. Now, at that time, it is likely that requirements for RAM, address space, speed, and other things are big enough that the inclusion of math (in HW or library) becomes minor.

All I can say is I posed the same question to embedded systems people using 32 bit CPUs sans FPU, and they tell me the costs are not minor - either in money or power consumption.
April 05, 2006
Walter Bright wrote:
[snip]
> Phobos doesn't require floating point support from the processor unless one actually uses floating point in the application code.
> 
> I also really don't understand why anyone using D would require not using Phobos. What's the problem?

Phobos does not suit everyone's ideal of a runtime library. Enforcing its usage as part of the D language is no better than the tight coupling of the Java libraries that you've happily denigrated in the past.

There would be no problem with Phobos at all, if you'd avoid hooking it directly into the language. For example, TypeInfo recently changed to import std.string, which itself imports a slew of otherwise redundant code.

I truly hope you can see the ironic humour in that :)


>> We do have to remember the reasons leading to choosing a 32-bit processor in the first place: if the process to be controlled is too complicated or otherwise needs more power than a 16-bit CPU can deliver, only then should one choose a 32-bit CPU. Now, at that time, it is likely that requirements for RAM, address space, speed, and other things are big enough that the inclusion of math (in HW or library) becomes minor.
> 
> 
> All I can say is I posed the same question to embedded systems people using 32 bit CPUs sans FPU, and they tell me the costs are not minor - either in money or power consumption.

I spend a lot of time with MCUs. The cost issue is not so much the register width, but the pin count. That is, a 32-bit device, perhaps with embedded FPU, is not really such a big cost issue (even for battery life, when you talk about static-cmos design at 10MHz to 100MHz). But you need to feed it with something useful, which tends to increase the trace-count quite quickly (which then leads to other costs, etc, etc).

On the other hand, 8-bit designs are often implemented with as few as 14 pins. That makes an entire system trivial to produce. Heck, there's a Hitachi MCU with 32bit registers on a 64pin package, just to keep the pin-count down (it can address only a few KB though). I'm rather familiar with that one, and can attest to it being able to execute realtime FFTs at 20MHz, via FP emulation using its wide registers. Without those 32bit registers, that just wouldn't be feasible.

Once you get to PDA/Phone land, one is generally talking about 200+ pins on the MCU. Overall costs are up notably at that point, but then the devices support vast address spaces (now heading for the GB range). Such devices are now starting to gain dedicated 3D graphics coprocessors on the board (jeez!), so adding FPU support is surely not a cost issue there?

Still, at both ends of the scale, it's quite likely that one would wind up facing a DSP-oriented design instead of an MCU+FPU design ~ simply because they're readily available and highly competitive (and with formidable libraries available).

I think the upshot is that one probably shouldn't /rely/ on FP support on MCUs, and thus it would be a trifle foolhardy for a DateTime library targeted at such devices to do so ~ especially when the alternatives are typically just fine?

I'm sure this has now gone completely off-topic;
April 05, 2006
Georg Wrede wrote:
> Walter Bright wrote:
>> Double has another problem when used as a date - there are embedded processors in wide use that don't have floating point hardware. This
>> means that double shouldn't be used in core routines that are not implicitly related to doing floating point calculations.
> 
> Ignoring the issue of date, I have a comment on processors:
> 
> IIRC, D will never be found on a processor less than 32 bits. Further, it may take some time before D actually gets used in something embedded.
> 
> By that time, IMHO, it is unlikely that a 32b processor would not contain a math unit.
> 

I have developed for UltraSparc (Sparc V8) CPUs (in millions of PowerTV boxes) that don't have an FPU (or, what's much worse, have a damaged FPU due to a faulty manufacturing process).

We had to force the software FP implementation when compiling code with GCC, otherwise it could just hang at the first FP instruction encountered.

And since we talk of double (64-bit), I think a 64-bit integer would be enough to pack time with microsecond accuracy until at least year 137438.
If we need other (higher) accuracy, a general time/date format is useless anyway.
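(The back-of-envelope arithmetic checks out; as a quick illustrative Python check, a signed 64-bit count of microseconds spans about 292,271 Julian years, i.e. roughly +-146,135 years around the chosen epoch, which comfortably reaches past year 137438.)

```python
# Microseconds in a Julian year: 365.25 days of 86400 seconds each.
US_PER_YEAR = 31_557_600 * 10**6

total_years = 2**63 // US_PER_YEAR
print(total_years)        # 292271: the full signed 64-bit span, in years
print(total_years // 2)   # 146135: the +/- range around the epoch
```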
April 05, 2006
Walter Bright skrev:
> Fredrik Olsson wrote:
>> I see your point, and will try to explain why I have chosen double as I have.
> 
> I have some more thoughts on using double <g>.
> 
>> Using double I get the same scale for timestamps as for dates; the integer part is days.
>>
>> Having dates as days with times as fractions is also how the astronomers do it; they call it Julian Days, and base it on Monday, January 1, 4713 BCE as the epoch. But the idea is the same.
>>
>> It is the datatype used by many database implementations (PostgreSQL, MySQL, MS SQL Server 7 (and beyond?)).
> 
> This may be a consequence of Microsoft Basic implementing time as a double. Chicken or egg?
> 
For Basic I guess that can be true, but why they chose it for OLE/COM/ActiveX components in general is another question. For PostgreSQL I guess they must have a reason. You can choose to use a 64-bit int for timestamps when compiling PostgreSQL, but double is still the default.

>>
>> A double can represent infinity and -infinity, and not-a-number can stand for "not a date".
> 
> std.date's d_time offers d_time_nan, which fills the role of nan for times. I don't see a purpose for infinity or -infinity when dealing with calendar dates or file times. There is a purpose for such when doing physics math, but that is way beyond the scope of std.date.
> 
I find a good infinity to be nice to have, when calling, say, something like: isInRange(aDate, now(), infinity); A date way into the future would be just as good for most purposes, but clean and readable code is nice.
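The readability argument is easy to picture. A sketch (Python for illustration; isInRange is Fredrik's hypothetical helper, not an actual std.date function): every finite value compares below +infinity, so open-ended ranges need no special-casing.

```python
INFINITY = float("inf")

def is_in_range(date, start, end):
    """True if start <= date <= end; works unchanged for open-ended
    ranges because any finite value compares below +infinity."""
    return start <= date <= end

now = 2_453_831.0   # Julian Day of April 5, 2006 -- "today" in this thread

print(is_in_range(now + 1000.0, now, INFINITY))   # True: open-ended future
print(is_in_range(now - 1.0, now, INFINITY))      # False: before the start
```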

> +-270 years is sort of a limitation :), even a simple genealogy application would hit that limit quite soon. Using a double is based on the idea that the farther away from today, the less relevant precision is.
> 
> Double can appear to represent far more precision. But the system clocks give quantized time (usually in millisecond precision). Doubles cannot exactly represent milliseconds, so when you convert from system time to doubles and back to system time, it's very possible that you can get a different system time. This will play havoc with file utilities and programs like make.

This, along with floating point not always being supported, has convinced me though. It is rewritten with d_timestamp as a 64-bit long.

// Fredrik
April 05, 2006
Fredrik Olsson wrote:
>>> A double can represent infinity and -infinity, and not-a-number can stand for "not a date".
>>
>> std.date's d_time offers d_time_nan, which fills the role of nan for times. I don't see a purpose for infinity or -infinity when dealing with calendar dates or file times. There is a purpose for such when doing physics math, but that is way beyond the scope of std.date.
>>
> I find a good infinity to be nice to have, when calling, say, something like: isInRange(aDate, now(), infinity); A date way into the future would be just as good for most purposes, but clean and readable code is nice.

Why not write it as:

	if (now() <= aDate) ...

?
April 06, 2006
Walter Bright skrev:
> Fredrik Olsson wrote:
>>>> A double can represent infinity and -infinity, and not-a-number can stand for "not a date".
>>>
>>> std.date's d_time offers d_time_nan, which fills the role of nan for times. I don't see a purpose for infinity or -infinity when dealing with calendar dates or file times. There is a purpose for such when doing physics math, but that is way beyond the scope of std.date.
>>>
>> I find a good infinity to be nice to have, when calling, say, something like: isInRange(aDate, now(), infinity); A date way into the future would be just as good for most purposes, but clean and readable code is nice.
> 
> Why not write it as:
> 
>     if (now() <= aDate) ...
> 
> ?

Perhaps a better example:
Item[] itemsInRange(Item[] items, d_date start, d_date end) {
  Item[] ret;
  foreach (Item item; items) {
    if (isInRange(item.date, start, end))
      ret ~= item;
  }
  return ret;
}

Introducing itemsBefore() and itemsAfter() could be done, but less code for the same functionality would be to simply send "infinity" as itemsInRange's start or end. And then a set standard for "what infinity is" would be nice.

Best would be if the properties min and max could be defined for typedefs, and maybe ones of your own introduced, such as nad for "not a date".
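With a 64-bit integer timestamp, that effect is usually had by reserving sentinel values at the extremes of the range. A hedged sketch (Python; all names here are invented for the example, not proposed std.date identifiers):

```python
# Reserve the extremes of the signed 64-bit range as sentinels,
# analogous to long.min / long.max in D.
NOT_A_DATE      = -2**63       # the "nad" value
PAST_INFINITY   = -2**63 + 1   # stands in for -infinity
FUTURE_INFINITY = 2**63 - 1    # stands in for +infinity

def is_in_range(ts, start, end):
    # "Not a date" is never inside any range, mimicking NaN semantics.
    if ts == NOT_A_DATE:
        return False
    return start <= ts <= end

some_ts = 1_234_567_890_123
print(is_in_range(some_ts, 0, FUTURE_INFINITY))                  # True
print(is_in_range(NOT_A_DATE, PAST_INFINITY, FUTURE_INFINITY))   # False
```

Unlike floating-point infinity and NaN, these sentinels cost nothing on FPU-less hardware, since they are ordinary integer comparisons.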

// Fredrik