July 09, 2009
Jérôme M. Berger wrote:
> Andrei Alexandrescu wrote:
>> Jérôme M. Berger wrote:
>>> Andrei Alexandrescu wrote:
>>>> Jérôme M. Berger wrote:
>>>>> Andrei Alexandrescu wrote:
>>>>>> Jérôme M. Berger wrote:
>>>>>>> Andrei Alexandrescu wrote:
>>>>>>>> Derek Parnell wrote:
>>>>>>>>> It seems that D would benefit from having a standard syntax format for
>>>>>>>>> expressing various range sets;
>>>>>>>>>  a. Include begin Include end, i.e. []
>>>>>>>>>  b. Include begin Exclude end, i.e. [)
>>>>>>>>>  c. Exclude begin Include end, i.e. (]
>>>>>>>>>  d. Exclude begin Exclude end, i.e. ()
>>>>>>>>
>>>>>>>> I'm afraid this would majorly mess with pairing of parens.
>>>>>>>>
>>>>>>>     I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like:
>>>>>>>  a. Include begin Include end, i.e. [  a .. b  ]
>>>>>>>  b. Include begin Exclude end, i.e. [  a .. b ^]
>>>>>>>  c. Exclude begin Include end, i.e. [^ a .. b  ]
>>>>>>>  d. Exclude begin Exclude end, i.e. [^ a .. b ^]
>>>>>>
>>>>>> I think Walter's message really rendered the whole discussion moot. Post of the year:
>>>>>>
>>>>>> =========================
>>>>>> I like:
>>>>>>
>>>>>>    a .. b+1
>>>>>>
>>>>>> to mean inclusive range.
>>>>>> =========================
>>>>>>
>>>>>> Consider "+1]" a special symbol that means the range is to be closed to the right :o).
>>>>>>
>>>>>     Ah, but:
>>>>>  - This is inconsistent between the left and right limit;
>>>>>  - This only works for integers, not for floating point numbers.
>>>>
>>>> How does it not work for floating point numbers?
>>>>
>>>     Is that a trick question? Depending on the actual value of b, you might have b+1 == b (if b is large enough). Conversely, range a .. b+1 may contain a lot of extra numbers I may not want to include (like b+0.5)...
>>
>> It wasn't a trick question, or only one of sorts. If you iterate with e.g. foreach through a floating-point range that has b == b + 1, you're bound to get into a lot of trouble because the running variable will be incremented.
>>
>     Well:
>  - A floating point range should allow you to specify the iteration step, or else it should allow you to iterate through all numbers that can be represented with the corresponding precision;
>  - The second issue remains: what if I want to include b but not b+ε for any ε>0?
> 
>         Jerome
I'd say that a floating point range requires a lazy interpretation, and should only get evaluated on an as-needed basis.  But clearly open, half-open, and closed intervals aren't the same kind of thing as ranges.  They are more frequently used for making assertions about when something is true (or false).  I.e., they're used as an integral part of standard mathematics, but not at all in computer science (except in VERY peculiar cases).  In math one makes an assertion that, say, a particular equation holds for all members of an interval, and open or closed is only a statement about whether the end-points are included in the interval.  Proof isn't usually by exhaustive calculation, but rather by more abstract reasoning.
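(To make the "lazy interpretation" concrete, here is a minimal sketch assuming an explicit step and a half-open interval; the struct name and layout are made up for illustration, not anything from Phobos:)

    // Minimal sketch of a lazy floating-point range: nothing is computed up
    // front, values are produced one at a time as foreach asks for them.
    struct FloatRange
    {
        double current, limit, step;

        @property bool empty() const { return current >= limit; } // half-open: [a, b)
        @property double front() const { return current; }
        void popFront() { current += step; }
    }

    unittest
    {
        double sum = 0;
        foreach (x; FloatRange(0.0, 1.0, 0.25))  // yields 0, 0.25, 0.5, 0.75
            sum += x;
        assert(sum == 1.5);
    }

Whether the end-point gets included then comes down to whether empty uses > or >=, which is exactly the open/closed distinction from earlier in the thread.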

It would be nice to be able to express mathematical reasoning as parts of a computer program, but it's not something that's likely to be efficiently implementable, and certainly not executable.  Mathematica can do that kind of thing, I believe, but it's a bit distant from a normal computer language.
July 09, 2009
Andrei Alexandrescu wrote:
> Robert Jacques wrote:
>> On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:
>>> Robert Jacques wrote:
>>>>  That's really cool. But I don't think that's actually happening (Or are these the bugs you're talking about?):
>>>>     byte x, y;
>>>>     short z;
>>>>     z = x + y;  // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to short
>>>>     // Repeat for ubyte, bool, char, wchar and *, -, /
>>>
>>> http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add to it.
>>
>> Added. In summary, + * - / % >> >>> don't work for types of 8 bits and under. << is inconsistent (x<<1 errors, but x<<y compiles). All the op-assigns (+= *= -= /= %= >>= <<= >>>=) and pre/post increments (++ --) compile, which is maddeningly inconsistent, particularly when the spec defines ++x as sugar for x = x + 1, which doesn't compile.
>>
>>>> And by that logic shouldn't the following happen?
>>>>     int x, y;
>>>>     int z;
>>>>     z = x + y;  // Error: cannot implicitly convert expression (cast(long)x + cast(long)y) of type long to int
>>>
>>> No. Int remains "special", i.e. arithmetic operations on it don't automatically grow to become long.
>>>
>>>> i.e. why the massive inconsistency between byte/short and int/long? (This is particularly a pain for generic i.e. templated code)
>>>
>>> I don't find it a pain. It's a practical decision.
>>
>> Andrei, I have a short vector template (think vec!(byte,3), etc.) where I've had to wrap the majority of lines of code in cast(T)( ... ), because I support bytes and shorts. I find that both a kludge and a pain.
> 
> Well suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long.
> 
>>>> BTW: this means byte and short are not closed under arithmetic operations, which drastically limit their usefulness.
>>>
>>> I think they shouldn't be closed because they overflow for relatively small values.
>>
>> Andrei, consider anyone who wants to do image manipulation (or computer vision, video, etc). Since images are one of the few areas that use bytes extensively, and have to map back into themselves, they are basically out of luck.
> 
> I understand, but also keep in mind that making small integers closed is the less safe option. So we'd be hurting everyone for the sake of the image manipulation folks.
> 
> 
> Andrei
You could add modular arithmetic types.  They are frequently useful... though I admit that of the common 2^n bases, bytes are the most useful; still, I've often needed base three or others.  (Probably not worth the effort, but modular arithmetic on 2^n for n from 1 to, say, 64 would be reasonably easy.)
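(As a rough sketch of what such a type could look like as a plain library struct rather than a built-in; the name Mod and the uint storage are assumptions made up for the example, and multiplication can overflow for very large N:)

    // Sketch of a modular-arithmetic wrapper: results are always reduced
    // mod N, so + and * stay closed in the type instead of escaping to int.
    struct Mod(uint N)
    {
        uint value;

        this(uint v) { value = v % N; }

        Mod opBinary(string op)(Mod rhs) const
            if (op == "+" || op == "*")
        {
            return Mod(mixin("value " ~ op ~ " rhs.value"));
        }
    }

    unittest
    {
        alias Byte = Mod!256;
        auto a = Byte(200), b = Byte(100);
        assert((a + b).value == 44);  // 300 % 256 -- wraps like a ubyte would
    }

It would also sidestep the closure-under-arithmetic complaint above, since a + b never silently promotes to int.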
July 09, 2009
Leandro Lucarella wrote:
> Walter Bright, el  5 de julio a las 22:05 me escribiste:
>> Something for everyone here.
>>
>>
>> http://www.digitalmars.com/d/1.0/changelog.html
>> http://ftp.digitalmars.com/dmd.1.046.zip
>>
>>
>> http://www.digitalmars.com/d/2.0/changelog.html
>> http://ftp.digitalmars.com/dmd.2.031.zip
> 
> I incidentally went through all the D2 bug reports that had been fixed in
> this release and I was really surprised by how many of them had patches
> by Don (the vast majority!).
> 
> Thanks Don! I think it's great that more people are becoming major
> D contributors.

Thanks! Yeah, I did a major assault on the segfault/internal compiler error bugs. I figured that right now, the most useful thing I could do was to make the compiler stable. I have a few more to give to Walter, but in general it should be quite difficult to crash the compiler now.

A couple of my other bug patches -- 1994 and 3010 -- appear to be fixed in this release, though they are not in the changelog. Also the ICE from 339 is fixed.



July 13, 2009
On Sun, 05 Jul 2009 22:05:10 -0700, Walter Bright <newshound1@digitalmars.com> wrote:

>Something for everyone here.
>
>
>http://www.digitalmars.com/d/1.0/changelog.html
>http://ftp.digitalmars.com/dmd.1.046.zip
>
>
>http://www.digitalmars.com/d/2.0/changelog.html
>http://ftp.digitalmars.com/dmd.2.031.zip

Nice release. Thanks!

I wonder if expression tuples have been considered for use in the multiple-value case statement? And if so, what was the reason they were discarded? Some examples:

case InclusiveRange!('a', 'z'):
case StaticTuple!(1, 2, 5, 6):
case AnEnum.tupleof[1..3]:
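(For what it's worth, here is a rough sketch of what the two helper templates above could expand to. InclusiveRange and StaticTuple are hypothetical names taken from the examples, not existing Phobos templates, and whether case would actually accept the expanded tuple is precisely the open question:)

    import std.meta : AliasSeq;

    // StaticTuple just forwards its arguments as a compile-time value tuple.
    template StaticTuple(args...)
    {
        alias StaticTuple = args;
    }

    // InclusiveRange expands to every value from a through b, built recursively.
    template InclusiveRange(alias a, alias b)
    {
        static if (a > b)
            alias InclusiveRange = AliasSeq!();
        else
            alias InclusiveRange = AliasSeq!(a,
                InclusiveRange!(cast(typeof(a))(a + 1), b));
    }

    static assert([StaticTuple!(1, 2, 5, 6)] == [1, 2, 5, 6]);
    static assert([InclusiveRange!('a', 'c')] == ['a', 'b', 'c']);

The case statement already takes a comma-separated list of expressions, so the question is really whether a value tuple is allowed to expand in that position.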