April 16, 2005
"Ben Hinkle" <ben.hinkle@gmail.com> wrote...
>
> "Sean Kelly" <sean@f4.ca> wrote in message news:d3pqsk$1jsl$1@digitaldaemon.com...
> > In article <d3pq4b$1jhe$1@digitaldaemon.com>, Ben Hinkle says...
> >>
> >>I'd like to see a more complete description of what would change before
> >>making up my mind. If we just remove Object.opCmp obviously many things
> >>won't work (eg, aa's, sorting, dynamic array operations, maybe more), so
> >>what exactly won't work and what will have to change to get it to work?
> >>Redoing large chunks of the TypeInfo system isn't trivial IMO. Maybe I'm
> >>just being thick but I'd like to see more details.
> >
> > Moving the methods to interfaces *almost* works, and the changes really
> > aren't too bad on the library side (I gave this a whirl with Ares --
> > there's a branched version on my website if you're interested). But
> > this approach also requires changes to portions of DMD that aren't
> > shipped in source form, so estimating their scope is somewhat
> > difficult. Maybe someone with GDC experience could chime in?
>
> The interface would presumably be Comparable.opCmp(Comparable), correct?

Aye.

> I thought the primary complaint against opCmp was that people would
> define a class Foo with Foo.opCmp(Foo) and expect it to work.

I thought it was the fact that classes should not be Comparable by default, since it makes no sense to compare them if not explicitly supported. There's also the small issue of apparently being incompatible with a compacting GC. Further, in the interests of robustness and correctness, there should be a compile-time error for attempts to compare an aggregate that does not explicitly implement opCmp().

The interface can potentially move opEquals() out of the root object also, such that it would be a compile-time error to apply '==' upon a class without an explicit opEquals() implementation. This makes for notably more robust code, as was discussed at length recently.
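
For illustration, a minimal sketch of what such opt-in interfaces might look like (the names and bodies here are hypothetical, not a settled proposal):

interface Comparable
{
    int opCmp(Comparable rhs);
}

interface Equatable
{
    int opEquals(Equatable rhs);  // opEquals returns int in D
}

// Only classes that opt in become comparable; applying '<' or '=='
// to anything else would be a compile-time error under this scheme.
class Money : Comparable
{
    long cents;

    int opCmp(Comparable rhs)
    {
        Money m = cast(Money) rhs;
        if (m is null)
            throw new Exception("cannot compare Money with unrelated type");
        return cents < m.cents ? -1 : cents > m.cents ? 1 : 0;
    }
}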

We're talking about the root object here. It should be engineered in such a manner whereby it /prevents/ pedestrian faults. It should certainly not breed them :-)

There are limitations using the interface approach; perhaps the main one is the method declaration, as you pointed out. The best overall suggestion I've heard so far was from Matthew ~ but that requires a change to the compiler. The interface approach would, on the surface, potentially work without any internal compiler changes. There's the tradeoff.

- Kris


April 16, 2005
"Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d3pvun$1n8i$1@digitaldaemon.com...
>>> The interface would presumably be Comparable.opCmp(Comparable), correct? I thought the primary complaint against opCmp was that people would define a class Foo with Foo.opCmp(Foo) and expect it to work. With Comparable, to get things to work one has to know about Comparable and write Foo.opCmp(Comparable) instead of Foo.opCmp(Object). Is this what the interfaces proposal is? I'm asking because I'm not sure exactly what the details of the proposal are. If that is the proposal then what are the benefits?
>>
>> I think there are two (or more) proposals here. My idea doesn't involve a particular interface; rather, heterogeneity is supported as a natural consequence of the requirement for all types in the array (or whatever) to be related polymorphically.
>
> Yeah - the C++-style of using compile-time generics instead of run-time dispatching (Object or Comparable) seems like the most reasonable alternative to me at the moment - though it would also probably disrupt the TypeInfo system the most. The TypeInfos are a way to get generic type hooks without using templates or overloading. My gut is leaning towards either keeping Object.opCmp dynamic binding or ditching that for compile-time binding. The Comparable solution seems like a fairly small modification of Object.opCmp that doesn't really give enough benefits over Object.opCmp. But that's just my gut feeling without knowing much of anything about the actual proposals and their impact...

Excellent point. I'd suggest that we downgrade any enthusiasm for the interface-based approach (and upgrade it for my proposal, of course ... <laughing hysterically; kids looking on bemused "shush daddy!">)
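
To make the contrast concrete, the compile-time flavour might look roughly like this (a sketch only; the names are illustrative, not anything in Phobos):

// Comparison resolves per-type at compile time: no Object.opCmp, no
// runtime dispatch, and a type lacking opCmp simply fails to compile.
template isLess(T)
{
    bool isLess(T a, T b)
    {
        return a.opCmp(b) < 0;
    }
}

// usage: isLess!(Foo)(f1, f2) compiles only if Foo declares an opCmp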



April 16, 2005

Matthew wrote:
> "xs0" <xs0@xs0.com> wrote in message news:d3pg6t$1c8i$1@digitaldaemon.com...
> 
>>
>>Matthew wrote:
>>
>>>"Ben Hinkle" <bhinkle@mathworks.com> wrote in message news:d3p9qh$17hr$1@digitaldaemon.com...
>>>
>>>
>>>>>2. Leave the current DMD behaviour in, and document it properly.
>>>>
>>>>I would add 2a: leave it in, document it and add a warning to dmd and dlint for classes that define an opCmp that shadows Object.opCmp.
>>>>In fact a general warning about shadowing member functions in base classes should say "Foo.opCmp(Foo) shadows Object.opCmp(Object)". For the common case of Object.opCmp it should give a nicer message, something like "Foo.opCmp(Foo) shadows Object.opCmp(Object). Consider defining Foo.opCmp(Object) for associative arrays, sorting and other uses".
>>>
>>>
>>>But what's the attraction of addressing the symptoms of a disease when we can prevent the disease?
>>>
>>>I'm not being sarcastic, I really want to know why people prefer this approach? Does it have _any_ advantages?
>>
>>You can sort an Object[] without caring what the actual types are? (of course, as long as all classes involved implement opCmp(Object))
> 
> 
> But sorting heterogeneous types is only meaningful in a subset of cases, and all those cases are accounted for in the alternative proposal.

Who said anything about heterogeneous types? Object[] means whatever, including Foo[], because you can implicitly cast Foo[] to Object[]. I.e. everything is an Object. How exactly would this work if Object were without opCmp:

void checkRange(Object a, Object b, Object c)
{
    // '<=' on class references currently lowers to Object.opCmp
    if (a <= b && b <= c)
        return;

    logger.logError("Range check failed");  // logger: some logging facility
}

Now this is not about comparing different types, it's about comparing any types.
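
Under the interface proposal the routine would instead have to be declared against the (hypothetical) Comparable, and arbitrary Objects could no longer be passed:

void checkRange(Comparable a, Comparable b, Comparable c)
{
    // explicit opCmp calls; presumably the compiler would rewrite
    // '<=' this way for types that implement Comparable
    if (a.opCmp(b) <= 0 && b.opCmp(c) <= 0)
        return;

    logger.logError("Range check failed");  // logger as above
}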


xs0
April 16, 2005
>> I thought the primary complaint against opCmp was that people would
>> define a class Foo with Foo.opCmp(Foo) and expect it to work.
>
> I thought it was the fact that classes should not be Comparable by
> default, since it makes no sense to compare them if not explicitly
> supported.

Personally I have no problem with a default behavior if it has enough practical benefits. The debatable part is narrowed down to "enough", "practical" and "benefits". :-)

> There's also the small issue of apparently being incompatible with a compacting GC.

This one doesn't worry me too much because the GC and Object.opCmp are compiler-dependent - though I know Sean is working on a pluggable API for the GC. It would be nice to not have to worry about it at all.

> Further, in the interests of robustness and correctness, there should be a compile-time error for attempts to compare an aggregate that does not explicitly implement opCmp().
>
> The interface can potentially move opEquals() out of the root object also, such that it would be a compile-time error to apply '==' upon a class without an explicit opEquals() implementation. This makes for notably more robust code, as was discussed at length recently.
>
> We're talking about the root object here. It should be engineered in such
> a manner whereby it /prevents/ pedestrian faults. It should certainly not
> breed them :-)
>
> There are limitations using the interface approach; perhaps the main one
> is the method declaration, as you pointed out. The best overall suggestion
> I've heard so far was from Matthew ~ but that requires a change to the
> compiler. The interface approach would, on the surface, potentially work
> without any internal compiler changes. There's the tradeoff.

Presumably the TypeInfo for Object (ti_C.d) and arrays of objects (ti_AC.d)
would try casting to Comparable and throw if it failed? That seems the most
reasonable behavior. That way the code
class Foo {}
void main() {
  Foo[] x, y;
  x.length = 2;
  y.length = 2;
  x < y;  // array comparison goes through the TypeInfo compare hook
}
would compile but throw at run-time. The alternative would be to change how
array comparison works more fundamentally. Similarly the AA implementation
would have to change to not use the TypeInfo compare hook at all.
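
Something like this, say, for the class compare hook (a sketch under the assumption of a Comparable interface; the real ti_C.d signature may differ):

// hypothetical TypeInfo compare for class references
int compare(void* p1, void* p2)
{
    Object o1 = *cast(Object*) p1;
    Object o2 = *cast(Object*) p2;
    Comparable c1 = cast(Comparable) o1;
    Comparable c2 = cast(Comparable) o2;
    if (c1 is null || c2 is null)
        throw new Exception("type does not implement Comparable");
    return c1.opCmp(c2);
}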


April 16, 2005
> Who said anything about heterogeneous types? Object[] means whatever, including Foo[], because you can implicitly cast Foo[] to Object[]. I.e. everything is an Object.

hmm. It is pretty scary if we are allowed to do
class Foo{int foo;}
void main() {
  Foo[] x;
  x.length = 2;
  Object[] y = x; // implicitly cast
  y[0] = new Object;
  x[0].foo = 10; // Object is not a Foo - yikes!
}



April 16, 2005
> I think there are two (or more) proposals here. My idea doesn't involve a particular interface; rather, heterogeneity is supported as a natural consequence of the requirement for all types in the array (or whatever) to be related polymorphically.
> 
> I'm pretty sure what I proposed in March - "A proposal for a new, improved opCmp() [WAS: Round-up of the excuses for keeping Object.opCmp, and rebuttals to them]" (d0lsv5$1496$1@digitaldaemon.com) - still accurately represents my thoughts.

I went and checked (btw, it would've been nice to mention that the thread started in September 2004 :)

I see at least this problem:

> As it stands above, a heterogeneous array of instances of these types, say [b1, b2, d1, md1, b3, md2, b4] would be sortable because Base defines opCmp(). Moreover, it would be sorted solely by the implementation of Base.opCmp() for all instances irrespective of their actual type.

This means that either Base must know all its subclasses (which is like total crap), or all subclasses must implement opCmp(Base), which is about the same as implementing opCmp(Object); it's just a different base class.

Furthermore, suppose I create a Number->[Int, Float] hierarchy and use it throughout my code. Later, I need to interface with some library, but that library has its own Num->[Integer, Double] hierarchy. I see no reason why they shouldn't be comparable, but they can't be if there's no opCmp in Object.
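
For example (all names invented for illustration):

// my hierarchy
class Number {}
class Int : Number { int value; }

// the library's hierarchy
class Num {}
class Integer : Num { int value; }

// Int and Integer share no base other than Object, so without an
// opCmp rooted in Object there is no common type through which to
// compare them, even though comparing their values would make sense.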


xs0
April 16, 2005

Ben Hinkle wrote:
>>Who said anything about heterogeneous types? Object[] means whatever, including Foo[], because you can implicitly cast Foo[] to Object[]. I.e. everything is an Object.
> 
> 
> hmm. It is pretty scary if we are allowed to do
> class Foo{int foo;}
> void main() {
>   Foo[] x;
>   x.length = 2;
>   Object[] y = x; // implicitly cast
>   y[0] = new Object;
>   x[0].foo = 10; // Object is not a Foo - yikes!
> }

Well, what does this have to do with opCmp? :)

And you can't do that anyway, Object doesn't have a .foo property, so you'll get a compile error..


xs0
April 16, 2005
"xs0" <xs0@xs0.com> wrote in message news:d3q40g$1qbt$2@digitaldaemon.com...
>
>
> Ben Hinkle wrote:
>>>Who said anything about heterogeneous types? Object[] means whatever, including Foo[], because you can implicitly cast Foo[] to Object[]. I.e. everything is an Object.
>>
>>
>> hmm. It is pretty scary if we are allowed to do
>> class Foo{int foo;}
>> void main() {
>>   Foo[] x;
>>   x.length = 2;
>>   Object[] y = x; // implicitly cast
>>   y[0] = new Object;
>>   x[0].foo = 10; // Object is not a Foo - yikes!
>> }
>
> Well, what does this have to do with opCmp? :)

I saw the statement that Foo[] is implicitly castable to Object[] and I said to myself "no way" and then I went to get the compiler error and was surprised that it didn't error.

> And you can't do that anyway, Object doesn't have a .foo property, so you'll get a compile error..

Notice "x" is declared as Foo[] so x[0].foo is legal. Scarily enough the example compiled and ran. It should have errored at the implicit cast. I'll probably put something on D.bugs soon...


April 16, 2005

Ben Hinkle wrote:
> "xs0" <xs0@xs0.com> wrote in message news:d3q40g$1qbt$2@digitaldaemon.com...
> 
>>
>>Ben Hinkle wrote:
>>
>>>>Who said anything about heterogeneous types? Object[] means whatever, including Foo[], because you can implicitly cast Foo[] to Object[]. I.e. everything is an Object.
>>>
>>>
>>>hmm. It is pretty scary if we are allowed to do
>>>class Foo{int foo;}
>>>void main() {
>>>  Foo[] x;
>>>  x.length = 2;
>>>  Object[] y = x; // implicitly cast
>>>  y[0] = new Object;
>>>  x[0].foo = 10; // Object is not a Foo - yikes!
>>>}
>>
>>Well, what does this have to do with opCmp? :)
> 
> 
> I saw the statement that Foo[] is implicitly castable to Object[] and I said to myself "no way" and then I went to get the compiler error and was surprised that it didn't error.

Yes way:

 A dynamic array T[] can be implicitly converted to one of the following:

    * T*
    * U[] where U is a base class of T.
    * U* where U is a base class of T.
    * void*


>>And you can't do that anyway, Object doesn't have a .foo property, so you'll get a compile error..
> 
> 
> Notice "x" is declared as Foo[] so x[0].foo is legal. Scarily enough the example compiled and ran. It should have errored at the implicit cast. I'll probably put something on D.bugs soon... 

Sorry, I misread x[0] as y[0]. But how is it a bug (in the language)? Foo[] is Object[] in the sense that every Foo is an Object. Later, when you insert an Object, it's not Foo[] anymore, but you then use an "implicit explicit cast" back to Foo[], which is the error in this case.


xs0
April 17, 2005
Matthew wrote:
> Walter, can you characterise your position? Specifically,
>     1. Do you believe that there are any "serious flaws" in the D specification as it stands?
>     2. If the answer is no, do you have a degree of certainty that is unlikely to be overridden by any observations any of us might make?
>     3. If the answer to either 1 or 2 is yes, do you nonetheless have a need to expedite 1.0 despite any/all such "major flaws"?
> 
> Obviously, if we get (X, Yes, Y) or (X, Y, Yes), then there's no point having the debate.

You'd better ask Walter off-line.

The chances of him finding your particular post (soon enough) may be slim.
So get the answer and post it here, thanks.
So, get the answer, and post it here, thanks.