September 07, 2004
On Mon, 6 Sep 2004 17:37:13 -0700, antiAlias <fu@bar.com> wrote:
> "Regan Heath" <regan@netwin.co.nz> wrote in message
> The name resolution rules are identical to those in C/C++ and Walter has
> given reasons for them being that way.
>
> ====================
>
> That doesn't mean it's the best way to do it. Or even that it's a /good/
> approach.

Of course. I agree. However, I have yet to be presented with an approach which I would consider better. I have seen the Java way which suffers from a potential hidden bug, which IMO makes it a worse solution than what we currently have.

> I rather suspect the driving force behind all that was backward
> compatibility with existing C implicit conversions (rather than make such
> conversion-usage be explicit, for a more strongly-typed language). Note that D claims to be more strongly-typed ...

The whole implicit/explicit problem seems to me to be a line-in-the-sand problem: some people want to draw that line in one place, and others in another. Walter has drawn the line where he thinks it should be, which is the same as (or similar to) where C/C++ draws its line.

Implicit conversions give you ease-of-use. Requiring explicit conversions gives you robustness. Each programmer seems to want a different amount of each of those things.
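
To make that trade-off concrete, here is a minimal sketch (hypothetical names; nothing D-specific beyond the usual integer conversions):

void log(long value) { /* ... */ }

void main()
{
    int n = 42;
    log(n);             // implicit widening: convenient, and loses nothing
    log(cast(long) n);  // the explicit style a "no implicit conversions" rule would force
}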

> Heck; Dennis Ritchie even noted that while C types were influenced heavily by Algol-68, the designers of the latter would hardly approve of the
> type-implementation. I'd wager that he was talking about implicit
> conversions <g>

I don't know anything about Algol-68. Is it a strongly typed language? If so, you could well be right.

> Of course, that doesn't mean D has to follow that same old route, since it doesn't have to compile raw, poorly written, C code. Nor does it mean we
> have to /like/ the current name-resolution implementation :-)

Each person is entitled to their own opinion; mine is that while I can see the problems you (and many others, myself included) have with the current name resolution scheme, I cannot think of a better one.

> This name-resolution issue is one of the few things about D that feels
> flat-out wrong;

Is that because you are coming to it from Java? or ..

> especially when other modern languages are not bothered by such arcane nonsense.

I know that Java (which is more modern than C++) uses a different scheme. What other 'modern' languages are you referring to? Perhaps it would be beneficial to investigate a few and attempt to make a list of pros/cons for each.

> The one common thing about any design that makes my skin crawl is the "band-aid".

Are you referring to 'alias'?

> I doubt very much that I'm alone in that respect.

No, you're not; quite a few people have expressed dissatisfaction with the current system, myself included. However, until a 'better' system is proposed it will likely remain.

Regan

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
September 07, 2004
In article <opsdxfc5o15a2sq9@digitalmars.com>, Regan Heath says...
>
>On Mon, 6 Sep 2004 07:16:44 +0000 (UTC), Arcane Jill <Arcane_member@pathlink.com> wrote:
>> In article <opsdvuwlje5a2sq9@digitalmars.com>, Regan Heath says...
>>> On Fri, 3 Sep 2004 10:51:58 -0700, antiAlias <fu@bar.com> wrote:
>>
>>>> Unfortunately, all this implicit conversion conflicts badly with method resolution. For instance, if I have two methods:
>>>>
>>>> # myFunc (char[] string) { ... }
>>>> # myFunc (wchar[] string) { ... }
>>>>
>>>> the compiler now can't tell which one should be called (vis-a-vis the
>>>> prior example).
>>>
>>> Why does it matter which method is chosen?
>>
>> i think antiAlias was referring to current behavior, not future possible behavior.
>
>I know. Currently it conflicts, I was suggesting a fix (by asking leading questions), it could simply take the first one it finds, as in, if you have:
>
>myFunc("hello world");
>
>the constant string has no specific type, so could potentially match any of the 3 methods, however, it doesn't matter which method it matches as they all (presumably) do the same thing. (*)

I would restrict this a bit, such that if overloads occur within a module then the first one is chosen, but when overloads are encountered across multiple modules an error is displayed.  One way to prevent such an error would be to use explicit module qualifiers.  Here's an example:

module a;

void print( char[] c ) {}
void print( wchar[] c ) {}

module b;

void print( dchar[] c ) {}

module c;
import a;
import b;

void main()
{
    print( "hello world" );   // 1
    a.print( "hello world" ); // 2
}

Line 1 is ambiguous because there are multiple valid overloads spanning more than one module.  Line 2 is unambiguous because module a is specified, so print( char[] ) will be chosen as it's the first appropriate overload encountered in module a.  My reasoning is this: if overloads occur within a module then they are likely intended to serve the same purpose, but if the overloads span modules then they likely serve different purposes.

I'm still not sure whether this overload resolution change would cause more problems than it solves, but it seems less potentially dangerous than it would be without the refinement I've suggested.  My only concern is that I don't like special cases; handling overload resolution differently for string and numeric literals than for variables seems like kind of a hack.  Would it be appropriate to extend these rules to overload resolution in general?  I'm inclined to say no, but I'd have to think about it more before I could say why.

>(*) Can anyone think of a potential bug this could cause? to me this implicit conversion idea seems identical to the implicit conversion that happens between long,int,short etc during method resolution.

True enough.  Then perhaps it isn't that bad.


Sean


September 07, 2004
"Regan Heath" <regan@netwin.co.nz>
Of course. I agree. However, I have yet to be presented with an approach
which I would consider better. I have seen the Java way which suffers from
a potential hidden bug, which IMO makes it a worse solution than what we
currently have.

===================
On whether that is a bug or something else entirely, I will quote your own words:
"IMO this example could be a red herring, engineered to show the bug, not
at all likely in reality"

I fully agree with your statement, Regan. The fact that such a (at best)
"questionable" example should mandate this direction is beyond the pale :-)

And what of other, similar, bugs that D doesn't bother to handle? The required use of 'override' would catch a bunch of rather nasty, and rather common, ones (unlike that "square" example of a particular engineering travesty). Yet, 'override' is not required. That kind of argument against what you call "the Java way" is somewhat futile on several fronts.
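
To illustrate what I mean, here is a minimal sketch (made-up names) of the sort of slip a mandatory 'override' would catch:

class Base
{
    void onEvent(long code) { /* original behaviour */ }
}

class Derived : Base
{
    // Intended as an override, but the parameter type differs, so this silently
    // declares a brand-new method instead of overriding Base.onEvent. If 'override'
    // were required, the author would have written it here (an override is what was
    // intended) and the compiler would reject it, exposing the mistake.
    void onEvent(int code) { /* "improved" behaviour that a Base reference never reaches */ }
}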


> I doubt very much that I'm alone in that respect.

No, you're not, quite a few people have expressed dissatisfaction with the current system, myself included, however until a 'better' system is proposed it will likely remain.

=====================
Fair enough. Two notions immediately spring to mind:

1) How does Java successfully combine method-name resolution with implicit primitive conversion? And what is wrong with adopting that model instead, if it resolves these problems far more successfully than the existing D approach does?

2) forget about implicit conversion altogether. Even MSVC v6.0 requires me to explicitly downcast an int to a char (unless I'm prepared to live with warnings, which I'm not).

Consider this:

a) isn't it often a sign of inconsistent and potentially buggy design where an implicit cast is actually generated? In other cases, it's sheer laziness on the part of the programmer. That's not a criminal offence, but support for that shouldn't cripple significant aspects of the language either.

b) D will quietly convert a long argument to an int or char parameter without so much as a peep! This is much looser than MSVC, for crying out loud ... and should be considered a potential bug, if not an outright one (and D is supposed to be more typesafe than C?)

c) the implicit conversions of primitive types make a total mess of overloading, especially with respect to inheritance. Just look at some of the bizarre examples given in the past for justifying the current name-resolution and method-alias scheme <g>

d) implicit conversion also makes a mess of certain operator-overloads, such that the compiler quits with an error message. The same is true of certain constructor patterns.

e) method-name alias would not be required, simplifying compiler implementation and removing a considerable learning impediment from the D language.

I could go on and on. All this for the sake of saving the very few explicit conversions where (a) is not actually a potential bug?

I don't think we're disagreeing here, Regan; other than I'm not satisfied with the status quo, whereas you apparently are. The notion of adding additional implicit conversions is guaranteed to bring this up again, for those who care about it.



September 07, 2004
In article <chj8i6$1vt6$1@digitaldaemon.com>, antiAlias says...
>
>1) How does Java successfully combine method-name resolution with implicit primitive conversion? And what is wrong with adopting that model instead, if it resolves these problems far more successfully than the existing D approach does?

Good question.  How does the Java overload resolution scheme work?  And I'm sure Walter is familiar with it as (IIRC) he's implemented a Java compiler before.

>b) D will quietly convert a long argument to an int or char parameter without so much as a peep! This is much looser than MSVC, for crying out loud ... and should be considered a potential bug, if not an outright one (and D is supposed to be more typesafe than C?)

This is incorrect behavior IMO.  Implicit narrowing conversion should not be allowed.  A messy alternative would be to throw an exception at runtime when an implicit narrowing conversion results in a loss of data.  All this leaves is how to handle string and numeric literals, which would either be a cast or a storage specifier.
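
Roughly speaking, that runtime alternative amounts to the compiler inserting a check like this (just a sketch, hypothetical helper name):

int narrowToInt(long value)
{
    // allow the narrowing, but fail loudly if information would actually be lost
    if (value < int.min || value > int.max)
        throw new Exception("implicit narrowing conversion lost data");
    return cast(int) value;
}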


Sean


September 07, 2004
"Sean Kelly" <sean@f4.ca> wrote in message
>b) D will quietly convert a long argument to an int or char parameter without so much as a peep! This is much looser than MSVC, for crying out loud ... and should be considered a potential bug, if not an outright one (and D is supposed to be more typesafe than C?)

This is incorrect behavior IMO.  Implicit narrowing conversion should not be
allowed.  A messy alternative would be to throw an exception at runtime when an
implicit narrowing conversion results in a loss of data.  All this leaves is how
to handle string and numeric literals, which would either be a cast or a storage
specifier.

Sean

======================

Better, I assert, to produce a compile-time error ~ requiring an explicit conversion to remedy the situation (such as the cast or storage-specifier you speak of). Of course, that's only for those cases where the compiler didn't actually just show you a bug. Sure makes compiler implementation easier too :-)
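
In other words, something like this (a minimal sketch, made-up names):

void put(char c) { /* ... */ }

void main()
{
    long code = 0x41;

    // put(code);          // under this proposal: a compile-time error (narrowing)
    put(cast(char) code);  // the explicit conversion the compiler would demand instead
}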




September 07, 2004
On Mon, 6 Sep 2004 20:08:41 -0700, antiAlias <fu@bar.com> wrote:
> "Regan Heath" <regan@netwin.co.nz>
> Of course. I agree. However, I have yet to be presented with an approach
> which I would consider better. I have seen the Java way which suffers from
> a potential hidden bug, which IMO makes it a worse solution than what we
> currently have.
>
> ===================
> On whether that is a bug or something else entirely, I will quote yourself:
> "IMO this example could be a red herring, engineered to show the bug, not
> at all likely in reality"
>
> I fully agree with your statement Regan. The fact that such a (at best)
> "questionable" example should mandate this direction is beyond the pale :-)

The statement above was only saying it 'could' be a red herring; I was not convinced either way at the time of writing it.

The example given _was_ engineered to show the bug, but then, of course it was; how else would Walter get an example?

For those interested here are the examples given at the time...

<Walter>
Stroustrup gives two examples (slightly modified here):

---------------------------------
class X1 { void f(int); }

 // chain of derivations X(n) : X(n-1)

class X9: X8 { void f(double); }

void g(X9 p)
{
    p.f(1);    // X1.f or X9.f ?
}
-----------------------------------
His argument is that one can easily miss an overload of f() somewhere in a
complex class hierarchy, and argues that one should not need to understand
everything about a class hierarchy in order to derive from it. The other
example involves operator=(), but since D doesn't allow overloading
operator=() instead I'll rewrite it as if it were a function that needs to
alter a class state, and a derived class written later that 'caches' a
computation on the derived state:

class B
{
    long x;
    void set(long i) { x = i; }
    void set(int i) { x = i; }
    long squareIt() { return x * x; }
}
class D : B
{
    long square;
    void set(long i) { B.set(i); square = x * x; }
    long squareIt() { return square; }
}

Now, imagine B were a complex class with a lot of stuff in it, and our
optimizing programmer missed the existence of set(int). Then, one has:

    long foo(B b)
    {
        b.set(3);
        return b.squareIt();
    }

and we have an obscure bug.
</Walter>


> And what of other, similar, bugs that D doesn't bother to handle? The
> required use of 'override' would catch a bunch of rather nasty, and rather
> common, ones

I agree; I am a proponent of the required use of the override keyword.


> (unlike that "square" example of a particular engineering
> travesty). Yet, 'override' is not required. That kind of argument against
> what you call "the Java way" is somewhat futile on several fronts.

I am only calling it "the Java way" because Java is the only language I know of to do name resolution in the manner you want.



>> I doubt very much that I'm alone in that respect.
>
> No, you're not, quite a few people have expressed dissatisfaction with the
> current system, myself included, however until a 'better' system is
> proposed it will likely remain.
>
> =====================
> Fair enough. Two notions immediately spring to mind:
>
> 1) How does Java successfully combine method-name resolution with implicit primitive conversion?

It doesn't, "Roberto Mariottini" tried Walters example above in Java, he found..

<Roberto Mariottini>
I translated your example in Java, and it shows the "wrong" (non-C++) behaviour.
The attached Test.java prints:

The square is: 0
</Roberto Mariottini>


> And what is wrong with adopting that model instead, if it resolves these problems far more successfully than the existing D
> approach does?

IMO it doesn't resolve them 'more successfully'; that is what is wrong with adopting it.



> 2) forget about implicit conversion altogether. Even MSVC v6.0 requires me to explicitly downcast an int to a char (unless I'm prepared to live with
> warnings, which I'm not).

What you describe above (int -> char) is a narrowing conversion; I agree, a narrowing conversion should have to be explicit.



> Consider this:
>
> a) isn't it often a sign of inconsistent and potentially buggy design where an implicit cast is actually generated? In other cases, it's sheer laziness on the part of the programmer. That's not a criminal offence, but support
> for that shouldn't cripple significant aspects of the language either.

I wouldn't call it 'sheer laziness'; I would call it 'convenience'. And as I said earlier, each programmer desires a different balance between convenience and robustness.



> b) D will quietly convert a long argument to an int or char parameter
> without so much as a peep! This is much looser than MSVC, for crying out
> loud ... and should be considered a potential bug, if not an outright one
> (and D is supposed to be more typesafe than C?)

I agree; if D does that narrowing conversion (it seems to in my tests) then it is bad. This may be a case where Walter has drawn the line too far on the side of convenience.
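
For reference, the sort of test I mean (a minimal sketch; it reflects the behaviour described above, and a stricter compiler could well reject it):

void takeInt(int x)   { /* ... */ }
void takeChar(char c) { /* ... */ }

void main()
{
    long big = long.max;
    takeInt(big);    // accepted without complaint: silent long -> int narrowing
    takeChar(big);   // likewise narrows long -> char
}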



> c) the implicit conversions of primitive-types makes a total mess of
> overload, especially with respect to inheritance. Just look at some of the bizarre examples given in the past for justifying the current
> name-resolution and method-alias scheme <g>
>
> d) implicit conversion also makes a mess of certain operator-overloads, such that the compiler quits with an error message. The same is true of certain constructor patterns.

I need examples for the two above; my memory isn't good enough to remember them, or what to search for to find them.


> e) method-name alias would not be required, simplifying compiler
> implementation and removing a considerable learning impediment from the D
> language.

The 'alias' keyword is used to explicitly import symbols into the current scope.. I thought you were all for explicit <g>

So, on one hand you don't want implicit type conversion, as it introduces bugs... but on the other you do want implicit inclusion of symbols in the child class scope... and this also introduces bugs... but you're happy with one set of bugs and not the other?
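
For reference, this is the explicit import I mean (a minimal sketch, made-up names):

class Base
{
    void write(int x)    { /* ... */ }
    void write(char[] s) { /* ... */ }
}

class Derived : Base
{
    void write(double d) { /* ... */ }  // hides *all* of Base's write overloads

    alias Base.write write;             // the 'band-aid': explicitly pull them back into scope
}

void main()
{
    Derived d = new Derived();
    d.write(1);      // with the alias this is an exact match for write(int);
                     // without it, the int would be implicitly converted to double
    d.write("hi");   // without the alias this call would not compile at all
}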


> I could go on and on. All this for the sake of the very few explicit
> conversions where (a) is not actually a potential bug?

> I don't think we're disagreeing here, Regan; other than I'm not satisfied
> with the status-quo, whereas you apparently are.

Well... I'm not satisfied with it, but it's the best I've seen so far.

Regan

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
September 07, 2004
On Mon, 6 Sep 2004 20:36:53 -0700, antiAlias <fu@bar.com> wrote:
> "Sean Kelly" <sean@f4.ca> wrote in message
>> b) D will quietly convert a long argument to an int or char parameter
>> without so much as a peep! This is much looser than MSVC, for crying out
>> loud ... and should be considered a potential bug, if not an outright one
>> (and D is supposed to be more typesafe than C?)
>
> This is incorrect behavior IMO.  Implicit narrowing conversion should not be
> allowed.  A messy alternative would be to throw an exception at runtime when
> an
> implicit narrowing conversion results in a loss of data.  All this leaves is
> how
> to handle string and numeric literals, which would either be a cast or a
> storage
> specifier.
>
> Sean
>
> ======================
>
> Better, I assert, to produce a compile-time error ~ requiring an explicit
> conversion to remedy the situation (such as the cast or storage-specifier
> you speak of). Of course, that's only for those cases where the compiler
> didn't actually just show you a bug. Sure makes compiler implementation
> easier too :-)

I agree. Narrowing conversions should have to be explicit IMO.

Regan

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
September 07, 2004
On Tue, 7 Sep 2004 02:52:51 +0000 (UTC), Sean Kelly <sean@f4.ca> wrote:

> In article <opsdxfc5o15a2sq9@digitalmars.com>, Regan Heath says...
>>
>> On Mon, 6 Sep 2004 07:16:44 +0000 (UTC), Arcane Jill
>> <Arcane_member@pathlink.com> wrote:
>>> In article <opsdvuwlje5a2sq9@digitalmars.com>, Regan Heath says...
>>>> On Fri, 3 Sep 2004 10:51:58 -0700, antiAlias <fu@bar.com> wrote:
>>>
>>>>> Unfortunately, all this implicit conversion conflicts badly with method
>>>>> resolution. For instance, if I have two methods:
>>>>>
>>>>> # myFunc (char[] string) { ... }
>>>>> # myFunc (wchar[] string) { ... }
>>>>>
>>>>> the compiler now can't tell which one should be called (vis-a-vis the
>>>>> prior example).
>>>>
>>>> Why does it matter which method is chosen?
>>>
>>> i think antiAlias was referring to current behavior, not future possible
>>> behavior.
>>
>> I know. Currently it conflicts, I was suggesting a fix (by asking leading
>> questions), it could simply take the first one it finds, as in, if you
>> have:
>>
>> myFunc("hello world");
>>
>> the constant string has no specific type, so could potentially match any
>> of the 3 methods, however, it doesn't matter which method it matches as
>> they all (presumably) do the same thing. (*)
>
> I would restrict this a bit, such that if overloads occur within a module then
> the first one is chosen, but when overloads are encountered across multiple
> modules then an error is displayed.  One way to prevent such an error would be
> to use explicit module qualifiers.  Here's an example:
>
> module a;
>
> void print( char[] c ) {}
> void print( wchar[] c ) {}
>
> module b;
>
> void print( dchar[] c ) {}
>
> module c;
> import a;
> import b;
>
> print( "hello world" ); // 1
> a.print( "hello world" ); // 2
>
> Line 1 is ambiguous because there are multiple valid overloads spanning more
> than one module.  Line 2 is unambiguous because module a is specified, so print(
> char[] ) will be chosen as it's the first appropriate overload encountered in
> module a.  My reasoning is this: if overloads occur within a module then they
> are likely intended to serve the same purpose, but if the overloads span modules
> then they likely serve different purposes.
>
> I'm still not sure if this overload resolution change may cause more problems
> than it solves, but it seems less potentially dangerous than it would be without
> the refinement I've suggested.  My only concern is that I don't like special
> cases, handling overload resolution differently for string and numeric literals
> than for variables seems like kind of a hack.  Would it be appropriate to extend
> these rules to overload resolution in general?  I'm inclined to say no, but I'd
> have to think about it more before I could say why.
>
>> (*) Can anyone think of a potential bug this could cause? to me this
>> implicit conversion idea seems identical to the implicit conversion that
>> happens between long,int,short etc during method resolution.
>
> True enough.  Then perhaps it isn't that bad.

I think I agree with you.. I will have to think about it some more.

Regan

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
September 07, 2004
"Regan Heath" <regan@netwin.co.nz> wrote in message
> a) isn't it often a sign of inconsistent and potentially buggy design
> where an implicit cast is actually generated? In other cases, it's sheer
> laziness on the part of the programmer. That's not a criminal offence,
> but support
> for that shouldn't cripple significant aspects of the language either.

I wouldn't call it 'sheer laziness'; I would call it 'convenience'. And as I said earlier, each programmer desires a different balance between convenience and robustness.

========
You'd happily trade off robustness for a few keystrokes? If you're serious,
then Matthew's choice-phrase for that sort of attitude towards design &
coding fits the bill quite nicely. I think you should start another topic on
that one :-)

That aside, you avoided the point.



> e) method-name alias would not be required, simplifying compiler implementation and removing a considerable learning impediment from the D language.

So, on one hand you don't want implicit type conversion, as it introduces bugs... but on the other you do want implicit inclusion of symbols in the child class scope... and this also introduces bugs... but you're happy with one set of bugs and not the other?

===============
I'm truly surprised, Regan. You seemed to understand the issue, and then say
this?


September 07, 2004
In article <opsdxyhltt5a2sq9@digitalmars.com>, Regan Heath says...

>I agree. Narrowing conversions should have to be explicit IMO.

Absolutely I agree 100%.

Of course, string conversions are not narrowing, in /any/ direction. Whether you go from char[] to dchar[], dchar[] to char[], or any other combination, the conversion yields zero loss of information, and so should be considered neither widening nor narrowing. String conversions (between the UTFs) are always lossless.
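
For instance (a minimal sketch, assuming the toUTF8/toUTF32 helpers in std.utf):

import std.utf;

void main()
{
    char[]  utf8  = "héllo".dup;     // UTF-8
    dchar[] utf32 = toUTF32(utf8);   // UTF-8  -> UTF-32
    char[]  back  = toUTF8(utf32);   // UTF-32 -> UTF-8

    assert(back == utf8);            // the round trip loses nothing
}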

Arcane Jill