May 26, 2007
Bill Baxter wrote:
> Dave wrote:
>> Bill Baxter wrote:
>>> renoX wrote:
>>>>> Bill Baxter wrote:
>>>
>>>>> The thing with this is that it may catch the error, but it doesn't help the compiler generate more optimized code.  The C99 'restrict' keyword does, though.  I suppose the compiler could be smart enough to deduce that 'restrict' semantics are in effect from the assert above.  That would be pretty neat.
>>>>
>>>> Well, if I understood Jarrett Billingsley's answer correctly, the semantics of 'in' are the same as 'restrict': the fastest option, since the compiler is free to optimize, but the most fragile from the programmer's point of view, because if you violate the restriction the result is unpredictable (no bug, or a bug, depending on optimization flags; fun!)
>>>>
>>>> Sure, it'd be nice if the compiler were able to do this kind of optimization without the in|restrict hint, but that is hard.
>>>
>>> The 'in' parameter type does nothing currently.  Leaving it off is the same as putting it in.  So I don't believe it acts like 'restrict'.  I 
>>
>> That's right (verifiable through the asm).
>>
>>> certainly don't remember reading anywhere in the spec that D assumes all 'in' array parameters are unaliased.  That would be a big break from C/C++ if it were so, so I think the spec would go out of its way to point out the difference.
>>>
>>
>> I believe that's part of the overall justification for the new 2.0 'in', which will mean 'scope const final' for any type of reference parameter (correct me if I'm wrong, Walter). Then the optimizer will be free to do its thing with those references, without any complicated pointer analysis or runtime checking.
>>
>> IIUC, the compiler front-end will do its best to enforce it (unlike C's 'restrict'), and I believe it won't be as easy to cast away as C++'s 'const&'.
>>
>> If so that'd be pretty cool, because it would make D competitive with Fortran in this regard.
> 
> I don't think Walter has said that 'in' as 'scope const final' is going to imply no aliasing.  I think it's kind of an orthogonal concern.
> 

I haven't seen any direct mention of it by Walter either. But whether or not it's orthogonal may depend on what he meant by "const - the function will not attempt to change the contents of what is referred to" in his post over in d.D.announce.

As I read Walter's original post, the description of "const" can be taken either way. The strictest interpretation is "It is an error for the function to change the contents of a reference passed via 'in'" (which would also make 'in' act like C's 'restrict' from a function consumer's point of view).

I think usage like your example is pretty rare, but I agree it may occasionally lead to hard-to-find bugs if the function developer doesn't check for overlap or make all of the parameters 'mutable' (one of which they should do if 'in' also implies 'restrict').
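
For example, the overlap check is cheap to do on D slices. Here's a rough sketch; the overlaps() helper and the assert are just my own illustration, not anything from the spec:

    // Illustrative helper: true if two float slices share any elements.
    bool overlaps(float[] x, float[] y)
    {
        if (x.length == 0 || y.length == 0)
            return false;
        return x.ptr < y.ptr + y.length && y.ptr < x.ptr + x.length;
    }

    // Same vector difference as your sub() below, but it refuses aliased
    // arguments up front instead of silently producing surprising results.
    void sub(float[] ret, float[] a, float[] b)
    {
        assert(!overlaps(ret, a) && !overlaps(ret, b),
               "sub: the output slice overlaps an input slice");
        for (size_t i = 0; i < ret.length; i++)
            ret[i] = a[i] - b[i];
    }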

But programmer expectations could mitigate that problem quite a bit: knowing that mixing mutable and const parameters in D can cause this kind of bug (as in your example) makes it less likely to happen and less time-consuming to track down, because people would know where to look and what to avoid, as long as 'in' is documented well.
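
And on the caller side, once the rule is known, working around it is cheap.  A hedged sketch, assuming a sub() along the lines of the one you posted below; the .dup copies are just one way to break the aliasing:

    float[10] c;
    for (size_t i = 0; i < c.length; i++)
        c[i] = i;

    // Pass private copies of the inputs so the output slice no longer
    // overlaps either of them; .dup allocates fresh, non-aliased arrays.
    float[] a = c[1..$].dup;
    float[] b = c[0..9].dup;
    sub(c[0..9], a, b);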

- Dave

> Say you have a vector difference function:
> 
>     void sub(mutable float[] ret, in float[] a, in float[] b) {
>         for (size_t i = 0; i < ret.length; i++) {
>             ret[i] = a[i] - b[i];
>         }
>     }
> 
> Should you expect this to fail?
>     float[10] c;
>     for (size_t i = 0; i < c.length; i++) c[i] = i;
>     sub(c[0..9],c[1..$],c[0..9]);
> 
> Looking at the loop it should be fine.  It should calc
>         c[0] = c[1] - c[0];
>         c[1] = c[2] - c[1];
>         c[2] = c[3] - c[2];
>         ...
> and leave you with c[0..9] containing all 1's.
> But a smarty pants compiler might decide for some reason that it's faster to rewrite the loop as:
>        memcpy(ret.ptr, a.ptr, float.sizeof * a.length);
>        for (size_t i = 0; i < ret.length; i++) { ret[i] -= b[i]; }
> 
> That's a fine transformation if b and ret are distinct.  But it isn't fine when b and ret are aliased.
> 
> That's the reason why C doesn't have restrict as the default. The thinking is that whatever the compiler does under the hood, the result should look exactly as if you had really run the C code on a C virtual machine in the exact order written.  Even if users get tricky with inputs and do things that are guaranteed to give incorrect results, it should still give the "exact" incorrect result.  It really is a separate issue from whether the parameter can be modified or not.
> 
> That's C's philosophy anyway.  And I haven't heard anything from Walter that sounded to me like D was taking a different approach.  I think a 'restrict'-type keyword probably is the best way to handle it in a C-like language.  Most code doesn't need the performance boost as much as it needs predictability and reliability.  Aliasing bugs can also be very tricky to debug, because things might work great on your computer but fail on some other machine when compiled with SSE3 instructions and -O3 optimizations.  So it makes sense for performance-critical code to declare in a visible way that it's placing some restrictions on the kind of inputs that will work.
> 
> So I think D could still use a separate restrict-like keyword.  But I think it could be made a little more user-friendly than C's by allowing it to apply to a whole function rather than having to say 'restrict' 'restrict' 'restrict' for every single parameter.
> 
> --bb