January 05, 2012
Walter:

> What is dangerous is (in C++) the ability to override a non-virtual function, and the use of non-virtual destructors.

There is something left that I'd like to see D care more about, method hiding:


class Foo {
    string name = "c1";
    static void foo() {}
}
class Bar : Foo {
    string name = "c2";
    static void foo() {} // silent method hiding
}
void main() {}


Here I'd like D to *require* the use of a "new" keyword as C# does:

class Foo {
    string name = "c1";
    static void foo() {}
}
class Bar : Foo {
    string name = "c2";
    static new void foo() {} // method hiding is now visible
}
void main() {}

Bye,
bearophile
January 05, 2012
On Thursday, 5 January 2012 at 01:36:44 UTC, bearophile wrote:
> Walter:
>
>> What is dangerous is (in C++) the ability to override a non-virtual function, and the use of non-virtual destructors.
>
> There is something left that I'd like to see D care more about, method hiding:
>
>
> class Foo {
>   string name = "c1";
>   static void foo() {}
> }
> class Bar : Foo {
>   string name = "c2";
>   static void foo() {} // silent method hiding
> }
> void main() {}

Should we just disallow this? If the function wasn't static it would just override foo. Or is that changing once override is required?
January 05, 2012
On 12/29/2011 02:15 AM, Caligo wrote:
>
> This is somewhat of a serious question:  If there is a God (I'm not
> saying there isn't, and I'm not saying there is), what language would he
> choose to create the universe?  It would be hard for us mortals to
> imagine, but would it resemble a functional programming language more or
> something else?  And what type of hardware would the code run on?  I
> mean, there are computations happening all around us, e.g., when an
> apple falls or planets circle the sun, etc, so what's performing all the
> computation?

I have two contradictory answers:

Language: Prolog.
Hardware: something that can solve the halting problem (but just for Turing machines).
January 05, 2012
On 2012-01-04 16:31, Artur Skawina wrote:
> On 01/04/12 10:39, Manu wrote:
>> Walter made an argument "The same goes for all those language extensions you mentioned. Those are not part of Standard C. They are vendor extensions. Does that mean that C is not actually a systems language? No."
>> This is absurd... are you saying that you expect Iain to add these things to GDC so that people can use them, and then create incompatible D code with the 'standard' compiler?
>
> Some of these things are *already* in GDC... Probably not documented and tested enough [1], but they are there. So you /can/ have function declarations such as:
>
> pragma(GNU_attribute, always_inline, flatten, hot) int fxx(int i) { ... }

If you want your code to be portable (between compilers) you would need to wrap that in a version statement.
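
For example, something like this (just a sketch; GDC defines the GNU version identifier, and the function body is only for illustration):

version (GNU) {
    pragma(GNU_attribute, always_inline, flatten, hot)
    int fxx(int i) { return i * 2; }
} else {
    int fxx(int i) { return i * 2; } // plain declaration for other compilers
}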

-- 
/Jacob Carlborg
January 05, 2012
On 2012-01-05 00:21, bearophile wrote:
> Walter:
>
>> The only reason to use classes in D is for polymorphic behavior - and that means
>> virtual functions.
>
> I don't agree, in some cases I use final class instances instead of heap-allocated structs even when I don't need polymorphic behaviour just to avoid pointer syntax (there is also a bit higher probability of destructors being called, compared to heap-allocated structs).
> In some cases I've used a final class just to be able to use a this() with no arguments :-)
>
> Bye,
> bearophile

You can get that with a static opCall for structs too.
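
Something along these lines (a minimal sketch; the field and values are only for illustration):

struct Foo {
    string name;

    // Structs can't declare this() with no arguments,
    // but a static opCall gives the same call syntax:
    static Foo opCall() {
        Foo f;
        f.name = "c1";
        return f;
    }
}

void main() {
    auto f = Foo(); // invokes the static opCall
}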

-- 
/Jacob Carlborg
January 05, 2012
On Wed, Jan 4, 2012 at 12:53 PM, Manu <turkeyman@gmail.com> wrote:
> Oh, and virtual-by-default... completely unacceptable for a systems language. Most functions are NOT virtual, and finding the false-virtuals while optimising will be extremely tedious and time consuming. Worse, if libraries contain false virtuals, there's a good chance I may not be able to use said library on certain architectures (PPC and ARM in particular). Terrible decision... completely contrary to modern hardware design and trends. Why invent a 'new' language for 10-year-old hardware?

The only benchmark of virtual functions on ARM that I can find is
http://mikeash.com/pyblog/performance-comparisons-of-common-operations-iphone-edition.html,
which found that virtual calls, when compared with other operations,
performed similarly to x86.
I'm not really sure what architecture-specific issues you're referring to here.
January 05, 2012
On 5 January 2012 02:42, bearophile <bearophileHUGS@lycos.com> wrote:

> Manu:
>
> > I'm not referring to vector OPERATIONS. I only refer to the creation of a type to identify these registers...
>
> Please, try to step back a bit and look at this problem from a bit more distance. D has vector operations, and so far they have received only a tiny amount of love. Are you able to find some ways to solve some of your problems using a hypothetical much better implementation of D vector operations? Please, think about the possibilities of this syntax.
>
> Think about future CPU evolution with SIMD registers 128, then 256, then 512, then 1024 bits long. In theory a good compiler is able to use them with no changes in the D code that uses vector operations.
>

These are all fundamentally different types, like int and long... float and
double... and I certainly want a keyword to identify each of them. Even if
the compiler is trying to make automatic vector optimisations, you can't deny
programmers explicit control of the hardware when they want/need it.
Look at x86 compilers: they've been TRYING to perform automatic SSE
optimisations for 10 years, with basically no success... do you really think
you can do better than all that work by Microsoft and GCC?
In my experience, I've even run into a lot of VC's auto-SSE-ed code that is
SLOWER than the original float code.
Let's not even mention architectures that receive much less love than x86,
and are arguably more important (ARM: slower, simpler processors with more
demand to perform well and not waste power).

Also, D does NOT have a good compiler; it's a rubbish compiler with respect to code generation. And with a community so small, it has no hope of becoming a 'good' compiler any time soon... Even C/C++ compilers that have been around for decades and are used by millions have been promising optimisations that are still not available, and the ones that are come at the expense of decades of smart engineers on huge paycheques.


> Intrinsics are an additive change, adding them later is possible. But I think fixing the syntax of vector ops is more important. I have some bug reports in Bugzilla about vector ops that are sleeping there since two years or so, and they are not about implementation performance.
>

Vector ops and SIMD ops are different things. float[4] (or more realistically, float[3]) should NOT be a candidate for automatic SIMD implementation; likewise, simd_type should not have its components individually accessible, since these are operations the hardware can not actually perform. So there's no syntax to worry about, just a type.


> I think the good Hara will be able to implement those syntax fixes in a matter of just one day or very few days if a consensus is reached about what actually is to be fixed in D vector ops syntax.
>
> Instead of discussing about *adding* something (register intrinsics) I suggest to discuss about what to fix about the *already present* vector op syntax. This is not a request to just you Manu, but to this whole newsgroup.

And I think this is exactly the wrong approach. A vector is NOT an array of
4 (actually, usually 3) floats, and it should not appear as one. This is an
overly complicated and ultimately wrong way to engage this hardware.
Imagine the complexity in the compiler to try and force float[4] operations
into vector arithmetic, vs adding a 'v128' type which actually does what
people want anyway...

SIMD units are not float units, and they should not appear like an
aggregation of float units. They have:
 * Different error semantics, exception handling rules, sometimes different precision...
 * Special alignment rules.
 * Special literal expression/assignment.
 * You can NOT access individual components at will.
 * May be reinterpreted at any time as float[1], float[4], double[2], short[8], char[16], etc... (up to the architecture intrinsics)
 * Cannot be involved in conventional comparison logic (an array of floats would make you think they could)
 *** Can NOT interact with the regular 'float' unit... Vectors as an array of floats certainly suggest that you can interact with scalar floats...

I will use architecture intrinsics to operate on these regs, and put that nice and neatly behind a hardware vector type with version()'s for each architecture, and an API with a whole lot of sugar to make them nice and friendly to use.

My argument is that even IF the compiler some day attempts to apply vector optimisations to float[4] arrays, the raw hardware should be exposed first, to allow programmers to use it directly. This starts with a language-defined (platform-independent) v128 type.
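
Roughly the shape I have in mind (a sketch only: the names are mine, and the scalar loop bodies stand in for the real per-architecture intrinsics):

struct v128 {
    align(16) float[4] data; // opaque 128-bit payload, aligned as SIMD loads require
}

v128 add(v128 a, v128 b) {
    v128 r;
    version (X86) {
        // real code would wrap the SSE addps intrinsic here
        foreach (i; 0 .. 4) r.data[i] = a.data[i] + b.data[i];
    } else version (ARM) {
        // real code would wrap the NEON vector-add intrinsic here
        foreach (i; 0 .. 4) r.data[i] = a.data[i] + b.data[i];
    } else {
        // scalar fallback, for illustration only
        foreach (i; 0 .. 4) r.data[i] = a.data[i] + b.data[i];
    }
    return r;
}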


January 05, 2012
>> The thing I'm most worried about is people forgetting to declare 'final:' on a
>> class, or junior programmers who DON'T declare final, perhaps because they don't
>> understand it, or perhaps because they have 1-2 true-virtuals, and the rest are
>> just defined in the same place... This is DANGEROUS.
>
> It isn't dangerous, it is just less optimal. What is dangerous is (in C++) the ability to override a non-virtual function, and the use of non-virtual destructors.

In 15 years I have never once overridden a non-virtual function, assuming
it was virtual, and wondering why it didn't work... have you?
I've never even heard a story of that happening to a colleague, or even on
the net (yes, I'm sure if I google specifically for it, I could find it,
but it's never appeared in an article or such)... but I can point you
at almost daily examples of junior programmers making silly mistakes that
go unnoticed by their seniors. Especially common are declaration mistakes
where the attributes don't change whether the program builds and works.

It seems to me the decision sacrifices a real and common problem case, with frequent and tangible evidence, for the feeling that the language is defined to do the 'right' thing.


> It's also true that D's design makes it possible for a compiler to make direct calls if it is doing whole-program analysis and determines that there are no overrides of it.
>

This is only possible with whole program optimisation, and some very crafty
code that may or may not ever be implemented, and certainly isn't
dependable from compiler vendor 'x'... There would simply be no problem in
the first place if the default were the other way around: the compiler
would need none of that extra code, and there would be no problems of
compiler maturity.
Surely this sort of consideration is even more important for an open-source
project with a relatively small team like D than it is even for C++?


January 05, 2012
On 5 January 2012 03:06, Walter Bright <newshound2@digitalmars.com> wrote:

> On 1/4/2012 4:30 PM, Sean Kelly wrote:
>
>> If a library is written without consideration to what is virtual and what
>> is
>> not, its performance will be the least of your problems.
>>
>
> I agree. Such is a massive failure in designing a polymorphic type, and the language can't help with that.
>

I don't follow... how is someone failing (or forgetting) to type 'final' a
"massive design failure"? It's not a design failure; it's not even
'wrong'... it's INEVITABLE.
And the language CAN help with that, by making expensive operations require
explicit declaration.

At least make a compiler flag so I can disable virtual-by-default for my project...?


January 05, 2012
On 01/05/12 08:19, Jacob Carlborg wrote:
> On 2012-01-04 16:31, Artur Skawina wrote:
>> On 01/04/12 10:39, Manu wrote:
>>> Walter made an argument "The same goes for all those language extensions you mentioned. Those are not part of Standard C. They are vendor extensions. Does that mean that C is not actually a systems language? No."
>>> This is absurd... are you saying that you expect Iain to add these things to GDC so that people can use them, and then create incompatible D code with the 'standard' compiler?
>>
>> Some of these things are *already* in GDC... Probably not documented and tested enough [1], but they are there. So you /can/ have function declarations such as:
>>
>> pragma(GNU_attribute, always_inline, flatten, hot) int fxx(int i) { ... }
> 
> If you want your code to be portable (between compilers) you would need to wrap that in a version statement.
> 

Exactly. Which isn't a problem if you have one or two such functions, but becomes one when you have hundreds. And different compilers use different conventions; some do not support every feature and/or need specific tweaks. Copy-and-pasting multiline "declaration attribute blocks" for every function that needs them does not really scale well.
In C/C++ this is CPP territory, where you solve it with a #define, and all of the magic is both hidden and easily accessible in one place. Adding support for another compiler requires only editing that one header, not modifying practically the whole project. Let's not even think about compiler-version-specific tweaks (due to compiler bugs or features appearing in newer versions)...
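
In D one could imagine getting the same one-place magic with a string mixin; a sketch (the helper name is mine, and it assumes the GDC pragma quoted above):

version (GNU)
    enum fastAttrs = "pragma(GNU_attribute, always_inline, flatten, hot)";
else
    enum fastAttrs = ""; // other compilers: plain declaration

mixin(fastAttrs ~ q{
    int fxx(int i) { return i * 2; }
});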

D, being in its infancy, may have been able to ignore these issues so far (having only one D frontend helps too), but without a standard, every vendor will have to invent its own way to expose non-standard features. For common things such as forcing functions to be inlined, keeping them out of line, marking them as hot/cold, putting them in specific text sections etc., relying on vendor extensions is not really necessary.

It's bad enough that every compiler will use a different, incompatible runtime and in some cases different calling conventions - and consequently different shared libraries; reducing source code portability (even if just by making things harder than they should be) will lead to more balkanization...

artur