December 10, 2011
2011/12/11 bearophile <bearophileHUGS@lycos.com>:
> kenji hara:
>
>> It breaks the IFTI rule,
>
> What do you mean?

I mean that the comment in the following code would no longer hold true.

void func(T)(T prm){}
void main(){
   X arg;
   func(arg);  // T is deduced to typeof(arg)
}
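
For concreteness, here is a minimal sketch assuming X is an immutable
array; the second comment describes the behavior proposed in this thread,
not something the compiler does today:

void func(T)(T prm) {}

void main() {
   immutable(int[]) arg = [1, 2];
   func(arg);
   // per the current rule:  T == typeof(arg) == immutable(int[])
   // under the proposal:    T == immutable(int)[]   (top-level qualifier shed,
   //                                                 element immutability kept)
}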

>
>
>> and adding a special case will make the language more difficult to learn.
>
> I think the change proposed by Andrei A. (which I like) doesn't add a special case to D2. Currently (2.057 beta) this works:
>
>
> void main() {
>    immutable(int[]) a = [1, 2];
>    immutable(int)[] b = a;
> }
>
> Calling a function that accepts an immutable(int)[] with an immutable(int[]) means creating a local slice, which is similar to what's proposed. So I think Andrei A.'s change removes a special case.
>
> Bye,
> bearophile
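
A minimal sketch of that last point, using a hypothetical takesSlice
function that accepts an immutable(int)[]:

void takesSlice(immutable(int)[] s) {}

void main() {
   immutable(int[]) a = [1, 2];
   takesSlice(a);  // accepted today: a converts to an immutable(int)[],
                   // so the callee works on a head-mutable local slice
}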
December 10, 2011
2011/12/11 Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org>:
> On 12/10/11 4:31 PM, kenji hara wrote:
>>
>> Treating whole constant arrays as ranges by automatically shedding the
>> top-level const is good.
>> But realizing it through a change to the language semantics is definitely
>> bad. It breaks the IFTI rule, and adding a special case will make the
>> language more difficult to learn.
>>
>> Instead of a language change, we can add specializations that accept
>> non-ranges and convert them to ranges by removing the top-level const.
>> I believe this is a Phobos issue, not a language issue.
>
>
> I should add there is precedent. C++ also removes top-level const when passing objects by value to templates. Deducing top-level const with pass-by-value is inherently nonsensical.
>
> Andrei
>

Hmm, that's certainly true.
----
void print_type(int){}

template <typename T>
void f(T p)
{
    int n;
    p = &n;     // OK, head is mutable
    //*p = 10;  // NG, tail is const
    print_type(p);
                // Error: need explicit cast from int const * to int
                // T is deduced as int const * == top const is removed
}
int main()
{
    int n;
    int const * const p = &n;
    f(p);
    return 0;
}
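
For comparison, a sketch of the corresponding deduction in D for a
pointer; the second comment describes the proposed behavior discussed in
this thread (the change covers both arrays and pointers):

void f(T)(T p) {}

void main() {
   int n;
   const(int*) p = &n;
   f(p);
   // per the current IFTI rule: T == typeof(p) == const(int*)
   // under the proposal:        T == const(int)*   (pointer head mutable,
   //                                                pointee stays const)
}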

Kenji
December 10, 2011
On 12/10/2011 11:41 PM, Andrei Alexandrescu wrote:
> On 12/10/11 4:31 PM, kenji hara wrote:
>> Treating whole constant arrays as ranges by automatically shedding the
>> top-level const is good.
>> But realizing it through a change to the language semantics is definitely
>> bad. It breaks the IFTI rule, and adding a special case will make the
>> language more difficult to learn.
>>
>> Instead of a language change, we can add specializations that accept
>> non-ranges and convert them to ranges by removing the top-level const.
>> I believe this is a Phobos issue, not a language issue.
>
> I should add there is precedent. C++ also removes top-level const when
> passing objects by value to templates. Deducing top-level const with
> pass-by-value is inherently nonsensical.
>
> Andrei
>

Yes, but in C++ const is not transitive, so in D this change necessarily introduces some inconsistency (for the better, I think):

struct S { int* x; }

void foo(T)(T t) { /* ... */ }

immutable int x;
foo(immutable(S)(&x)); // deduced type needs to be immutable(S)
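
A minimal sketch of why the shedding stays sound for arrays and pointers
but could not be extended to a struct such as S above (foo is just a
placeholder template):

struct S { int* x; }

void foo(T)(T t) {}

void main() {
   int i;
   foo(S(&i));  // T == S, fine for a mutable argument

   // If the qualifier were shed for user-defined structs as well, passing
   // an immutable(S) would deduce T == S and hand the callee a mutable
   // int* to data the caller sees as immutable. Arrays and pointers avoid
   // this because the deduced type keeps the tail qualifier:
   // immutable(int[]) -> immutable(int)[].
}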
December 10, 2011
On Sunday, December 11, 2011 08:01:57 kenji hara wrote:
> 2011/12/11 bearophile <bearophileHUGS@lycos.com>:
> > kenji hara:
> >> It breaks the IFTI rule,
> > 
> > What do you mean?
> 
> I mean that the comment in the following code would no longer hold true.
> 
> void func(T)(T prm){}
> void main(){
>    X arg;
>    func(arg);  // T is deduced to typeof(arg)
> }

Why does that matter? What does it affect?

As far as functions go, it means that

immutable str = "hello";
func(str);

would instantiate to

func!string

instead of

func!(immutable string)

which will have no effect on the internals of the function except for the fact that it's now possible to alter the function parameter itself. The contents of the array are immutable regardless.
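
A sketch of that one observable difference; the static if guard keeps the
snippet compilable whichever way T is deduced:

void func(T)(T prm) {
   static if (is(typeof(prm = prm)))  // is the parameter head-mutable?
      prm = prm[1 .. $];              // fine for func!string
   // For func!(immutable string) the parameter itself cannot be
   // reassigned; the characters are immutable either way.
}

void main() {
   immutable str = "hello";
   func(str);
}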

In what situation would it matter that it's now func!string instead of func!(immutable string)?

- Jonathan M Davis
December 10, 2011
OK. I agree with the suggestion.

I've been afraid that adding more IFTI rules would make the language
harder to learn. That concern comes from my experience implementing
inout deduction for template functions.

But removing top-level const when passing arguments by value is also useful. The C++ precedent convinced me.

Thanks.

Kenji Hara

2011/12/11 kenji hara <k.hara.pg@gmail.com>:
> 2011/12/11 Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org>:
>> On 12/10/11 4:31 PM, kenji hara wrote:
>>>
>>> Treating whole constant arrays as ranges by automatically shedding the
>>> top-level const is good.
>>> But realizing it through a change to the language semantics is definitely
>>> bad. It breaks the IFTI rule, and adding a special case will make the
>>> language more difficult to learn.
>>>
>>> Instead of a language change, we can add specializations that accept
>>> non-ranges and convert them to ranges by removing the top-level const.
>>> I believe this is a Phobos issue, not a language issue.
>>
>>
>> I should add there is precedent. C++ also removes top-level const when passing objects by value to templates. Deducing top-level const with pass-by-value is inherently nonsensical.
>>
>> Andrei
>>
>
> Hmm, that's certainly true.
> ----
> void print_type(int){}
>
> template <typename T>
> void f(T p)
> {
>    int n;
>    p = &n;     // OK, head is mutable
>    //*p = 10;  // NG, tail is const
>    print_type(p);
>                // Error: need explicit cast from int const * to int
>                // T is deduced as int const * == top const is removed
> }
> int main()
> {
>    int n;
>    int const * const p = &n;
>    f(p);
>    return 0;
> }
>
> Kenji
December 10, 2011
On 2011-12-10 21:47:13 +0000, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> said:

> We decided to fix this issue by automatically shedding the top-level const when passing an array or a pointer by value into a function.

It seems strange that if you implemented the exact same semantics as a dynamic array or a pointer with a struct, it wouldn't work. Basically, you're making a special case by giving two language-defined types characteristics that can't be replicated by user types. I'm not that concerned, but I thought you worried about those things, Andrei.

It also seems strange to me that class references aren't included in that list, but then I thought about how tail-const still doesn't work with objects. You'd need my const(Object)ref patch to make that work, and Walter hasn't taken the time to look at it yet…

-- 
Michel Fortin
michel.fortin@michelf.com
http://michelf.com/

December 11, 2011
I've posted a test patch implementing the suggestion: https://github.com/D-Programming-Language/dmd/pull/554

Kenji Hara

2011/12/11 kenji hara <k.hara.pg@gmail.com>:
> OK. I agree with the suggestion.
>
> I've been afraid that adding more IFTI rules would make the language
> harder to learn. That concern comes from my experience implementing
> inout deduction for template functions.
>
> But removing top-level const when passing arguments by value is also useful. The C++ precedent convinced me.
>
> Thanks.
>
> Kenji Hara
>
> 2011/12/11 kenji hara <k.hara.pg@gmail.com>:
>> 2011/12/11 Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org>:
>>> On 12/10/11 4:31 PM, kenji hara wrote:
>>>>
>>>> Treating whole constant arrays as ranges by automatically shedding the
>>>> top-level const is good.
>>>> But realizing it through a change to the language semantics is definitely
>>>> bad. It breaks the IFTI rule, and adding a special case will make the
>>>> language more difficult to learn.
>>>>
>>>> Instead of a language change, we can add specializations that accept
>>>> non-ranges and convert them to ranges by removing the top-level const.
>>>> I believe this is a Phobos issue, not a language issue.
>>>
>>>
>>> I should add there is precedent. C++ also removes top-level const when passing objects by value to templates. Deducing top-level const with pass-by-value is inherently nonsensical.
>>>
>>> Andrei
>>>
>>
>> Hmm, that's certainly true.
>> ----
>> void print_type(int){}
>>
>> template <typename T>
>> void f(T p)
>> {
>>    int n;
>>    p = &n;     // OK, head is mutable
>>    //*p = 10;  // NG, tail is const
>>    print_type(p);
>>                // Error: need explicit cast from int const * to int
>>                // T is deduced as int const * == top const is removed
>> }
>> int main()
>> {
>>    int n;
>>    int const * const p = &n;
>>    f(p);
>>    return 0;
>> }
>>
>> Kenji
December 11, 2011
On 12/10/11 6:24 PM, kenji hara wrote:
> I've posted a test patch implementing the suggestion:
> https://github.com/D-Programming-Language/dmd/pull/554
>
> Kenji Hara

Many thanks, Kenji, for executing so fast on this!

Andrei
December 11, 2011
On 12/10/11 5:20 PM, Michel Fortin wrote:
> On 2011-12-10 21:47:13 +0000, Andrei Alexandrescu
> <SeeWebsiteForEmail@erdani.org> said:
>
>> We decided to fix this issue by automatically shedding the top-level
>> const when passing an array or a pointer by value into a function.
>
> It seems strange that if you implemented the exact same semantics as a
> dynamic array or a pointer with a struct, it wouldn't work. Basically,
> you're making a special case by giving two language-defined types
> characteristics that can't be replicated by user types. I'm not that
> concerned, but I thought you worried about those things, Andrei.

I do worry about those things, and this decision comes at the end of a long deliberation.

There would be several aspects to discuss here. First, you are right that this confers on built-in arrays a property that's not reproducible by user-defined types. And that's a bad thing.

Second, that's not really as bad because the real issue is elsewhere. Remember the type "new T[]"? That was supposed to be the array type, which was to be supplanted by its range type, "T[]". After experimentation we decided to give up on that. Why? Because its benefits didn't justify the complication.

So the issue we're solving here is that arrays and the ranges that crawl on them are represented by the same type. User-defined types have the option of defining distinct types for that.

To truly confer on user-defined types the same capability, we would need to define opPassByValue(), which would be implicitly invoked whenever an object is passed by value into a function. By default it would be a do-nothing operator; for arrays it would do the cast (or, equivalently, invoke "[]" on the array), and people could define it to do whatever they want. We could do all that. The question is: is the added complexity justified?
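
A sketch of what that might look like for a user-defined range (purely hypothetical: opPassByValue exists only as a proposal in this message, and the compiler would not invoke it implicitly today):

struct MyRange(T) {
   T[] data;

   // Hypothetical hook: would be invoked implicitly whenever a MyRange is
   // passed by value, mirroring what "[]" does for built-in arrays.
   MyRange!(const T) opPassByValue() const {
      return MyRange!(const T)(data);
   }
}

Today such a method would have to be called explicitly at the call site; the proposal is precisely about having the compiler do it automatically.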

> It also seems strange to me that class references aren't included in
> that list, but then I thought about how tail-const still doesn't work
> with objects. You'd need my const(Object)ref patch to make that work,
> and Walter hasn't taken the time to look at it yet…

Have you continued to use your fork in daily work? If so, how does it pan out?


Andrei
December 11, 2011
On Sunday, December 11, 2011 01:16:28 Andrei Alexandrescu wrote:
> On 12/10/11 5:20 PM, Michel Fortin wrote:
> > On 2011-12-10 21:47:13 +0000, Andrei Alexandrescu
> > 
> > <SeeWebsiteForEmail@erdani.org> said:
> >> We decided to fix this issue by automatically shedding the top-level const when passing an array or a pointer by value into a function.
> > 
> > It seems strange that if you implemented the exact same semantics as a dynamic array or a pointer with a struct, it wouldn't work. Basically, you're making a special case by giving two language-defined types characteristics that can't be replicated by user types. I'm not that concerned, but I thought you worried about those things, Andrei.
> 
> I do worry about those things, and this decision comes at the end of a long deliberation.
> 
> There would be several aspects to discuss here. First, you are right that this confers on built-in arrays a property that's not reproducible by user-defined types. And that's a bad thing.
> 
> Second, that's not really as bad because the real issue is elsewhere. Remember the type "new T[]"? That was supposed to be the array type, which was to be supplanted by its range type, "T[]". After experimentation we decided to give up on that. Why? Because its benefits didn't justify the complication.
> 
> So the issue we're solving here is that arrays and the ranges that crawl on them are represented by the same type. User-defined types have the option of defining distinct types for that.
> 
> To truly confer on user-defined types the same capability, we would need to define opPassByValue(), which would be implicitly invoked whenever an object is passed by value into a function. By default it would be a do-nothing operator; for arrays it would do the cast (or, equivalently, invoke "[]" on the array), and people could define it to do whatever they want. We could do all that. The question is: is the added complexity justified?

I think that it's completely justified. We need a way to define tail-constness for ranges. Given const's transitivity, it's very easy to end up in a situation where you have a const range, and having a means to get a tail-const version of that range would be very valuable. I don't know if opPassByValue is the best solution, but if it isn't, we at least need something similar.

I'd say that in the worst case with opPassByValue, some people won't use it for their range types, and those ranges won't work properly in cases that require tail-const. But those who _do_ use it can benefit from it. It doesn't force anything on anyone. Yes, it's one more thing to learn about in D, but it _is_ something that a number of us have been saying we need for a while.

- Jonathan M Davis