February 28, 2005
>> Oh, and let's not forget that D will implicitly, and silently, convert a
>> long argument into an int or byte. Not even a peep from the compiler.
>> MSVC 6 certainly will not allow one to be so reckless.
>
> Actually, it's the defined, standard behavior of both C and C++ to do
> this. MSVC 6 will allow it without a peep unless you crank up the warning
> level.

That being the case, why have all those commercially-minded compiler vendors added warnings to detect such conversions? What makes you/D different from them?

> At one point I had such disallowed, but it produced more error messages
> than it was worth, requiring the insertion of many daft cast expressions

You say they're daft, and yet we're denied any mechanism to determine the daftness for ourselves (save spending a very significant amount of time hacking away at the compiler front-end).

When we discuss the issue, you present us with your favoured examples, but this might well be only 1% of the totality of contexts for narrowing conversions.

Unless and until we have a *standard* compiler option that turns a narrowing conversion into an error, or at least a warning, such that we can judge the situation for ourselves, we're just going to continue to think that you're wrong and partial.

> (and I know how you don't like them with string literals!).

Disingenuous. Regan: please let me know (privately, if you wish) what
kind of fallacy this is. :-)

> Unlike C and C++, implicit conversions of floating point expressions to
> integral ones are disallowed in D, and that doesn't seem to cause any
> problems.

That's good. But, erm, so what? You're telling us you're fixing some bad things in C/C++ and not some others. Or that floating=>integral narrowing is wrong in *all* cases, and integral=>integral narrowing is right in *all* cases. Doesn't stand up to common sense. Without being able to test it out for ourselves, we're left with only two options: Walter is smarter than us in all circumstances, or Walter is wrong but not listening/caring.

For all that warnings and options are lambasted by one or two people - and Bob knows I don't want to get to a GCC level of insanity - there seems to be no acknowledgement of the fact that they afford the *user* the ability to make choices for themselves. Given that the inventor of C++ (and the standards members) did not foresee the incredibly arcane yet powerful uses for templates, is it so hard to accept that one man, however bright (pun intended, but not sarcastically), cannot see the full extent of the D world?

> I suppose if you look at it from a Java perspective, it might seem like a
> coverup. But if you look at it from a C++ perspective, it behaves just as
> one would expect and be used to. Implicit casting has been in C and C++
> for a very long time

Yes, and it totally frigging stinks. Right now I'm running the last couple of cycles on STLSoft 1.8.3, and am building unittest projects again and again with 19 C/C++ compilers. *Many* times most of the compilers have let me through with 0E0W (zero errors, zero warnings), only for one to reject (by dint of -wx) an implicit integral conversion. In at least 50% of these cases, I am bloody glad to have had at least one compiler that has sussed it out, so I can (re-)consider the ramifications. In many cases, say at least 5%, this has shown a bug.

So, once again, I claim that your insight and experience, though huge, and probably much larger than mine, is not total. Therefore, the all-or-nothing approach is flawed.

>> Surely there's a more elegant solution all round, both for the compiler
>> and for the user? Less special-cases is better for everyone.
>
> Nobody likes special cases, but they are an inevitable result of
> conflicting requirements.

Exactly. And if there is even 1 special case in a particular area, intransigence on the part of the compiler is the absolute wrong approach.

Maybe we're just going to have to wait until there are several D compilers (for a given platform) such that one can only trust one's code when it's been multiplexed through a bunch of them. Don't see how that's any kind of evolution ...




February 28, 2005
In article <cvubs2$1qdl$1@digitaldaemon.com>, Walter says...
>
>
>"Kris" <Kris_member@pathlink.com> wrote in message news:cvu5go$1k2j$1@digitaldaemon.com...
>> I've already noted some in a prior post in this thread (today). For more,
>you
>> might refer to the very examples you used against the issues brought up
>via the
>> old "alias peek-a-boo game" thread. Those examples are both speculative
>cases
>> where issues arise over implicit-casting combined with method-overloading.
>For
>> want of a better term, I'll refer to the latter as ICMO.
>>
>> At that time, you argued such examples were the reason why it was so
>important
>> for any and all overloaded superclass-methods be /deliberately hidden/
>from any
>> subclass ~ and using an 'alias' to (dubiously, IMO) bring them back into
>scope
>> was the correct solution. I noted at that time my suspicion this usage of 'alias' was a hack. Indeed, it is used as part of an attempt to cover up
>some of
>> the issues surrounding ICMO.
>
>Yes, we discussed that at length. I don't think either of us have changed our minds.


What you didn't note at the time was that those issues are due primarily to implicit-casting. We did not discuss that at all, which is why it's being brought up again. Perhaps you didn't recognise the other culprit.


>> Here's another old & trivial example of this type of bogosity:
>>
>> void print (char[] s);
>> void print (wchar[] s);
>>
>> {print ("bork");}
>>
>> Because the char literal can be implicitly cast to wchar[], the compiler
>> fails.
>> One has to do this instead:
>>
>> print (cast(char[]) "bork");
>>
>> This is daft, Walter. And it hasn't been fixed a year later.
>
>That kind of thing happens when top down type inference meets bottom up type inference. It's on the list of things to fix. (It's technically not really an implicit casting issue. A string literal begins life with no type; the type is inferred from its context. This falls down in the overloading case you mentioned.)


Agreed. It's easier to talk about these things in simplistic terms though.
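
For anyone wanting to reproduce it, the failing shape is simply this (a self-contained sketch):

    void print (char[] s)  {}
    void print (wchar[] s) {}

    void main ()
    {
        // print ("bork");            // error: ambiguous; the literal
        //                            // matches both overloads
        print (cast(char[]) "bork");  // compiles: the cast pins the type
    }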


>> Oh, and let's not forget that D will implicitly, and silently, convert a
>> long argument into an int or byte. Not even a peep from the compiler.
>> MSVC 6 certainly will not allow one to be so reckless.
>
>Actually, it's the defined, standard behavior of both C and C++ to do this. MSVC 6 will allow it without a peep unless you crank up the warning level.


Not true. It will even complain about casting int to uint, if the argument is an expression rather than a simple variable. That's without any warning levels explicitly set (as I recall).


>At one point I had such disallowed, but it produced more error messages than it was worth, requiring the insertion of many daft cast expressions


Some might say that lots of implicit-casts are a sign of sloppy code. Will you allow us to try it out please? How about a test? There's a vast sea of code in Mango, for example. I'd very much like to try a strict compiler on the code there. Frankly, it could only serve to tighten up my coding habits. Don't you think the results would be useful for purposes of discussion?

You do realize, don't you, that all this nonsense with ICMO simply vanishes (along with some special cases) if implicit-casting were narrowed in scope? Have you thought about how a model might be constructed whereby implicit-casting is /not/ the norm?

That's another serious question for you.


>(and I know how you don't like them with string literals!). Unlike C and
>C++, implicit conversions of floating point expressions to integral ones
>are disallowed in D, and that doesn't seem to cause any problems.


I agree that it doesn't cause problems. If you're changing the basic type of some value, then you'd better be explicit about it. This is not the case with char[] (as you appear to imply by proximity). So ~ great! Let's see more of that!
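
That is, roughly (a trivial sketch):

    double d = 3.7;
    // int i = d;          // error in D: implicit float->int narrowing
    int i = cast(int) d;   // fine: losing the fraction is now explicit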


>> There's many, many more examples. Here's another old one that I always
>> found somewhat amusing: news:cgat6b$1424$1@digitaldaemon.com
>
>That's the same issue as you brought up above - which happens first, name lookup or overloading? C++ does it one way, Java the other. D does it the C++ way. C++ can achieve the Java semantics with a 'using' declaration, D provides the same with 'alias'. In fact, D goes further by offering complete control over which functions are overloaded with which by using alias. Java has no such capability.


I'm sorely pushed to call you out on this one. You're falling back on the old defensive posture of Java vs C++ again. Perhaps you think I'm some kind of Java bigot? I'm not. There's no benefit in that: those languages are not D. And who really cares about how it's implemented? The end result is what's important.

In addition, you're claiming the use of alias gives "complete control" over these ICMO issues. You know (or ought to know) very well that alias is not so fine-grained. It will pull in all methods of a particular name, which can just as surely lead to the ICMO issues you claim your 'method-hiding' approach is 'resolving'. The 'horror' of those examples you used in your arguments, so long ago, is back again in a slightly smaller dose.
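
For illustration, the mechanism in question looks like this (a sketch with invented names):

    class Base
    {
        void write (int x)    {}
        void write (char[] s) {}
    }

    class Derived : Base
    {
        alias Base.write write;     // pulls in *every* Base.write
        void write (wchar[] s) {}   // overload, not a chosen subset
    }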

Not only that; any and all aliases are inherited down the subclass chain. So one can easily imagine a scenario where the poor hapless sod using someone else's code runs into the same minefield ~ wearing snow shoes rather than Ballet Pumps. Except this time the minefield has been "blessed" by alias instead. Great.

It's broken, and it's a sham. Implicit-casting combined with method-overloading is borked. Changing my mind on that would be sticking my head into the sand.

This is not about language comparison, Walter. And this is not about whose nuts are bigger. It's about usability, maintainability, and deterministic assertion. D has taken on board all the baggage of C++ regarding ICMO, and tried to cover it up with something else.

There are serious issues here. Pretending it's all OK doesn't make them go away, and it would be far more productive if you were at least open to thinking about how the whole mess could be resolved cleanly and elegantly. I am not trying to cut your limbs off!  :-)

Are you not /open/ to considering *any* other alternative?


>> I'm not saying that implicit-casting is necessarily bad, and I'm not
>> saying that method-overloading is bad. I am saying the combination of
>> the two leads to all sorts of nastiness; some of which you've attempted
>> to cover up by implicitly hiding superclass method-names overloaded by a
>> subclass,
>
>I suppose if you look at it from a Java perspective, it might seem like a coverup. But if you look at it from a C++ perspective, it behaves just as one would expect and be used to. Implicit casting has been in C and C++ for a very long time, and it's necessary to cope with the plethora of basic types (Java has a sharply reduced number of basic types, reducing the need for implicit conversions.)


There's a whole raft of assumptions in there. Again, I really don't give a whit about how Java or C++ does it. D is supposed to be better, so let's at least discuss how that could happen?


>> It shouldn't have to be like this. Surely there's a more elegant solution
>> all round, both for the compiler and for the user? Less special-cases is
>> better for everyone.
>
>Nobody likes special cases, but they are an inevitable result of conflicting requirements.


What are these conflicting requirements? Please spell them out. Are they:

1) implicit conversion?

2) overloaded methods?


And, please, I'm completely serious about testing a strict compiler on Mango. It would be a good test case -- and who knows what it might tell us? At least there would be some concrete evidence to discuss. That's often so much better than personal opinion.

- Kris


February 28, 2005
Implicit narrowing conversions:

"Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:cvue5s$1sh4$1@digitaldaemon.com...
> Or that floating=>integral
> narrowing is wrong in *all* cases, and integral=>integral narrowing is
> right in *all* cases. Doesn't stand up to common sense.

Of course it doesn't stand up to common sense, and it is not what I said. What I did say, perhaps indirectly, is that implicit floating->integral conversions in the wild tend to nearly always be a mistake. For the few cases where it is legitimate, a cast can be used. But the reverse is true for other integral conversions. They happen frequently, legitimately. Having to insert casts for them means you'll wind up with a lot of casts in the code.

Having lots of casts in the code results in *less* type safety, not more. Having language rules that encourage lots of casting is not a good idea. Casts should be fairly rare.

It isn't all or nothing. It's a judgement call on the balance between the risk of bugs from a narrowing conversion and the risk of bugs from a cast masking a totally wrong type. With integral conversions, it falls on the one side, with floating conversions, it falls on the other.
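
For instance, perfectly ordinary code narrows legitimately all the time (a made-up but typical sketch):

    // Arithmetic on small types promotes to int, so the result must be
    // narrowed back to the storage type on assignment.
    void mix (ubyte[] dst, ubyte[] a, ubyte[] b)
    {
        for (int i = 0; i < dst.length; i++)
            dst[i] = (a[i] + b[i]) / 2;   // int -> ubyte, as intended
    }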

Warnings:

I'm pretty familiar with warnings. If a compiler has 15 different warnings, each of which can be independently switched on or off, then it is compiling 2^15 (32,768) different languages. Warnings tend to be unique to each compiler vendor (sometimes even contradictory), making more and more different languages that call themselves C++. Even worse, with each warning often comes a pragma to turn it off here and there, because, well, warnings are inherently wishy-washy and often wrong. Now I download Bob's Cool Project off the net, and compile it. I get a handful of warnings. What do I do about them? I don't know Bob's code, I don't know if they're bugs or not. I just want a clean compile so I can use the program. If the language behaves consistently, I've got a better chance of that happening. Warnings make a language inconsistent.

There are a lot of things Java has done wrong, and a lot they've done right. One of the things they've done right is specify the language so it is consistent from platform to platform. A Java program compiles successfully or it does not, there are no warnings. If it compiles successfully and runs correctly on one platform, chances are far better than they are for C++ that it will compile and run correctly without modification on another platform. You know as well as anyone how much work it is to get a complex piece of code to run on multiple platforms with multiple C++ compilers (though this has gotten better in recent years).

The idea has come up before of creating a "lint" program out of the D front end sources. It's an entirely reasonable thing to do, and I encourage anyone who wants to do it. It would serve as a nice tool for proving out ideas about what should be an error and what shouldn't.


February 28, 2005
In article <cvuceu$1r0p$1@digitaldaemon.com>, Walter says...
>
>
>"Derek Parnell" <derek@psych.ward> wrote in message news:1aryxa64qi6wv.1tueiq1jegdq2.dlg@40tude.net...
>> I find myself asking, "why did the compiler implicitly cast the constant
>> 12 to an int
>
>It didn't. The literal 12 is an int. Hence:
>
>C:\mars>dmd test
>test.d(9): function test.foo called with argument types:
>        (char[],int,int)
>matches both:
>        test.foo(char[],int,uint)
>and:
>        test.foo(char[],uint,uint)
>
>both match with implicit casts, hence the ambiguity error. Also, storage classes (such as inout) are ignored for overload matching, *only* argument types are looked at.

Well, of course they both match with implicit casts Walter -- you went and stripped off the important distinguishing traits before the matching process occurred! Doh!

Pointer types are treated more strictly, yet you state D deliberately ignores that information ... what's left is something that only partially resembles the original signature, and the user intent.
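
For reference, here's a guess at the shape of the test.d involved (the original wasn't posted in this thread):

    void foo (char[] s, int x, uint y)  {}
    void foo (char[] s, uint x, uint y) {}

    void main ()
    {
        foo ("test", 1, 12);   // 1 and 12 are both typed as int, so each
                               // overload needs an implicit int -> uint
                               // cast somewhere: hence the ambiguity
    }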

If I refrain from stating the obvious, may I ask if this is how it will remain?

- Kris


February 28, 2005

Matthew wrote:
> "Georg Wrede" <georg.wrede@nospam.org> wrote in message news:42225D39.7010302@nospam.org...

(lots of good and to-the-point stuff deleted) (also points
taken)  :-)

>     (iv) the potential for D to be better, or rather to better support, template programming without the necessary intellectual pain that goes on in C++

We really do need to somehow get a grip on what's needed for
templates!! In a couple of posts I've tried to find out exactly
what the requirements for a (superior? superlative? -- aw crap,
just well working) template system are.

What I'd wish for is a discussion about what kinds of things
people think a template system should bring. Like instead
of how we can tweak the existing one, we'd start from
scratch -- at least with the thinking and discussion.

I also somehow feel that we'd need to try out some of the
better ideas. At least the syntax could be tried out with
preprocessors (to some extent at least).

Matthew, (as the best qualified in this particular subject)
what would you really wish a "perfect" template system
could do? Like if you were granted Any Three Wishes style.

> I do not *believe* that it will become all that it can, any more than I *believe* my sons will grow up to be Nobel prize-winning physicists, or that Imperfect C++ will eventually be recognised as better for developers than some recent C++ books that like to make the reader feel inferior with all manner of nonsensical intellectual masturbation. I just think these things are not beyond the bounds of reason.

Oh, Bob! What if my sons don't either???    {8-O

> I think strength comes from acknowledging one's shortcomings, and either learning to live with / obviate them, or using them as impetus to improve. Pretending that everything's good is the kind of crap we get from large corporations, or, dare I say it, 21st-century English-speaking governments (Canada and NZ largely excepted).

Oh, God, let me have strength to change all I could.
And let me placidly accept all I can't change!

February 28, 2005
"Kris" <Kris_member@pathlink.com> wrote in message news:cvul8e$255i$1@digitaldaemon.com...
> In article <cvuceu$1r0p$1@digitaldaemon.com>, Walter says...
> >"Derek Parnell" <derek@psych.ward> wrote in message news:1aryxa64qi6wv.1tueiq1jegdq2.dlg@40tude.net...
> >> I find myself asking, "why did the compiler implicitly cast the
constant
> >12
> >> to and int
> >
> >It didn't. The literal 12 is an int. Hence:
> >
> >C:mars>dmd test
> >test.d(9): function test.foo called with argument types:
> >        (char[],int,int)
> >matches both:
> >        test.foo(char[],int,uint)
> >and:
> >        test.foo(char[],uint,uint)
> >
> >both match with implicit casts, hence the ambiguity error. Also, storage classes (such as inout) are ignored for overload matching, *only*
argument
> >types are looked at.
>
> Well, of course they both match with implicit casts Walter -- you went
> and stripped off the important distinguishing traits before the matching
> process occurred! Doh!
>
> Pointer types are treated more strictly, yet you state D deliberately
> ignores that information ... what's left is something that only partially
> resembles the original signature, and the user intent.

inout is a storage class, not a type. Overloading is based on the types. Although the inout is implemented as a pointer, that isn't how the type system views it.

> If I refrain from stating the obvious, may I ask if this is how it will
> remain?

Yes. I don't think overloading based on the storage class is a good idea; it would introduce the possibility of things like:

void foo(int x);
void foo(inout int x);

existing, and what would that mean? Saying that case isn't allowed starts opening up the door to the C++ long list of special overloading cases. (C++ has a lot of funky weird overloading rules about when an int is 'better' than int&, and when it isn't. Matthew and I had the recent joy of attempting to track down yet another oddity with template partial specialization due to this. Both of us are experts at it, and we both expended much time trying to figure out what the exact behaviour should be, and it wound up surprising us both.) Sometimes making the "obvious" cases work leads to a lot of unintended consequences in the corners, that old "conflicting requirements" thing I was talking about before. Sometimes I think living with a simple rule is better.
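
I.e. at the call site (a sketch of the dilemma):

    void foo (int x);
    void foo (inout int x);

    void test ()
    {
        int n = 0;
        foo (n);   // if storage classes took part in matching, both
                   // declarations would be viable here -- which wins?
    }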


February 28, 2005
>> I think that indicates a problem that this reworked error message does
>> not address.
>
>You did have an issue with the "cast(uint)(c) is not an lvalue" message, however, that was the second error message put out by the compiler. The real error was the first error message (both error messages pointed to the same line). This is the well known "cascading error message" thing, where the compiler discovers an error, issues a correct diagnostic, then makes its best guess at how to patch things up so it can move on. If it guesses wrong (there is no way it can reliably guess right, otherwise it might as well write the program for you!), then more errors come out.

In this case the best approach would be to test all the ambiguous functions. Then it would know that one of the functions has a problem with the argument types while the other one does not, and could report that. But that might be too complicated to implement.


-- Matthias Becker


February 28, 2005
"Walter" <newshound@digitalmars.com> wrote in message news:cvuku9$24j5$1@digitaldaemon.com...
> But the reverse is true for other integral conversions. They happen
> frequently, legitimately. Having to insert casts for them means you'll
> wind up with a lot of casts in the code.

An example is in order:

    byte b;
    ...
    b = b + 1;

That gives an error if implicit narrowing conversions are disallowed. You'd have to write:

    b = cast(byte)(b + 1);

or, if writing generic code:

    b = cast(typeof(b))(b + 1);

Yuk.


February 28, 2005
>Implicit narrowing conversions:
>
>> Or that floating=>integral
>> narrowing is wrong in *all* cases, and integral=>integral narrowing is
>> right in *all* cases. Doesn't stand up to common sense.
>
>Of course it doesn't stand up to common sense, and it is not what I said. What I did say, perhaps indirectly, is that implicit floating->integral conversions in the wild tend to nearly always be a mistake. For the few cases where it is legitimate, a cast can be used. But the reverse is true for other integral conversions. They happen frequently, legitimately. Having to insert casts for them means you'll wind up with a lot of casts in the code.
>
>Having lots of casts in the code results in *less* type safety, not more. Having language rules that encourage lots of casting is not a good idea. Casts should be fairly rare.
>
>It isn't all or nothing. It's a judgement call on the balance between the risk of bugs from a narrowing conversion and the risk of bugs from a cast masking a totally wrong type. With integral conversions, it falls on the one side, with floating conversions, it falls on the other.

When I get a warning from my C++ compiler that I'm doing a narrowing conversion, I always insert a cast. So I wouldn't have a problem with inserting them in D code.

But what about down-casting? It's often done when you expect it to work (just as you do with integral narrowing conversions). And it is even safer, as you get null if it doesn't work. So why aren't down-casts implicit?
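
For example (a minimal sketch):

    class Base {}
    class Derived : Base {}

    void main ()
    {
        Base b = new Base;
        Derived d = cast(Derived) b;   // explicit down-cast
        assert (d is null);            // it fails gracefully: null
    }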


-- Matthias Becker


February 28, 2005
On Sun, 27 Feb 2005 22:07:21 -0800, Walter wrote:

> "Derek Parnell" <derek@psych.ward> wrote in message news:1aryxa64qi6wv.1tueiq1jegdq2.dlg@40tude.net...
>> I find myself asking, "why did the compiler implicitly cast the constant
>> 12 to an int
> 
> It didn't. The literal 12 is an int. Hence:


Hmmm... I think we have a problem.  I would have said that 12 is an *integer*, but I couldn't tell you what sort of integer it is until I saw it in context. I believe that 12 could be any of ...

  byte
  ubyte
  short
  ushort
  int
  uint
  long
  ulong

All are integers, but they are all different subsets of the integer set. And you are still absolutely positive that 12 is only an int? Could it not have been a uint? If not, why not?

Also, if 12 is an int, why does this compile without complaint?

void foo(real x) { }
void main(){ foo(12); }

Because, if the compiler casts it to a real, it can find a match. So let's flow with that idea for a moment...

> C:\mars>dmd test
> test.d(9): function test.foo called with argument types:
>         (char[],int,int)

And yet, if the compiler had chosen to assume 12 was a uint, then there would have been an exact match.

         (char[],int,uint)

> matches both:
>         test.foo(char[],int,uint)
> and:
>         test.foo(char[],uint,uint)
> 
> both match with implicit casts, hence the ambiguity error.

So that's why I wondered why the compiler stopped trying to find an exact match. It gave up too soon.

> Also, storage
> classes (such as inout) are ignored for overload matching, *only* argument
> types are looked at.

However, isn't 'inout int' very much similar in nature to the C++ 'int*' pointer concept? And our beloved C++ would notice that an 'int' is very much different to an 'int*', so why should D see no difference between 'inout int' and 'in int'?
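
To illustrate the difference (hypothetical names):

    void bumpCopy (in int x)    { x++; }   // changes a local copy only
    void bumpRef  (inout int x) { x++; }   // writes through, much like
                                           // an int* would in C++

    void demo ()
    {
        int n = 1;
        bumpCopy (n);   // n is still 1
        bumpRef (n);    // n is now 2
    }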

But aside from that, what would be the consequence of having the compiler take into consideration the storage classes of the signature? And this is a serious question from someone who is trying to find some answers to that very question. It is not a rhetorical one, nor is it trolling.

So far, I would think that it would make the compiler find more bugs (mistakes), and it might be possible for the compiler to discover better optimizations.

-- 
Derek
Melbourne, Australia