November 21, 2018
On Wednesday, 21 November 2018 at 17:46:29 UTC, Alex wrote:
> compiled against 4.6.1 Framework.
>
> However, of course, there is a NullReferenceException, if c happens to be null, when calling baz.
>
> So the difference is not the compiler behavior, but just the runtime behavior...
>
> How could the compiler know the state of Random anyway, before the program run.

The compiler would not be able to prove that something was initialized, and hence could give an error. Maybe C# doesn't do it, but Swift certainly does:

class C {
    func baz() {}
}

func f() {
    var x: C
    if Int.random(in: 0 ..< 10) < 5 {
        x = C()
    }
    x.baz()
}

error: variable 'x' used before being initialized

November 21, 2018
On Wed, 21 Nov 2018 20:15:42 +0000, welkam wrote:
> In D, classes are reference types, and unless you mark them as final they will have a vtable.

Even if you mark your class as final, it has a vtable because it inherits from Object, which has virtual functions. The ProtoObject proposal is for a base class that has no member functions. If you had a final class that inherited from ProtoObject instead of Object, it would have an empty vtable.
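
For illustration, a minimal D sketch of that point (the ProtoObject variant is hypothetical, since ProtoObject is only a proposal):

import std.stdio;

final class F
{
    int x;
}

// Hypothetical, if the ProtoObject proposal were accepted:
//     final class G : ProtoObject { int x; }   // would have an empty vtable

void main()
{
    // vtbl[0] is the ClassInfo slot; the remaining entries are the virtual
    // functions F inherits from Object (toString, toHash, opCmp, opEquals),
    // so the length is greater than 1 even though F declares no methods.
    writeln(typeid(F).vtbl.length);
}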

> Let's face it, most people don't mark their classes as
> final. What all this means is that EVERY access to a class member value
> goes through an indirection (additional cost)

D classes support inheritance. They implicitly cast to their base types. They can add fields not present in their base types. If they were value types, this would mean you'd lose those fields when up-casting, and then you'd get memory corruption from calling virtual functions.

That is a cost that doesn't happen with structs, I'll grant, but the only way to avoid that cost is to give up inheritance. And inheritance is a large part of the reason to use classes instead of structs.
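
A minimal D sketch of why reference semantics matter here (my illustration, not from the original post): the up-cast does not copy or truncate the object, so the derived fields and the derived override survive behind a base reference.

import std.stdio;

class Base
{
    int a = 1;
    void describe() { writeln("Base, a = ", a); }
}

class Derived : Base
{
    int b = 2;                          // field not present in Base
    override void describe() { writeln("Derived, a = ", a, ", b = ", b); }
}

void main()
{
    Base x = new Derived();             // implicit up-cast: same object, no slicing
    x.describe();                       // virtual dispatch still reaches Derived.describe
}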

> and EVERY method call goes
> through 2 indirections (one to get the vtable and a second to call the
> function (method) from the vtable).

Virtual functions do, that is. That's the vast majority of class member function calls.

> Now Java also has indirect vtable calls,
> but it also has optimization passes that convert methods to final if
> they are not overridden. If Java didn't do that it would run as slow as
> Ruby.

Yeah, no.

https://git.ikeran.org/dhasenan/snippets/src/branch/master/virtualcalls/ results

Java and DMD both managed to de-virtualize and inline the function. DMD can do this in simple cases; Java can do this in a much wider range of cases but can make mistakes (and therefore has to insert guard code that will go back to the original bytecode when its hunches were wrong).

If it were merely devirtualization that were responsible for Java being faster than Ruby, Ruby might be ten times the duration of Java (just as dmd without optimizations is within a small factor of the duration of dmd with optimizations). You could also argue that `int += int` in Ruby is another virtual call, so Ruby should be within twenty times the speed of Java.

Instead, it's 160 times slower than Java.

> On top of that, some
> people want to check on EVERY dereference whether the pointer is null. How
> slow do you want your programs to run?

Every program on a modern CPU architecture and modern OS checks every pointer dereference to ensure the pointer isn't null. That's what a segfault is. Once you have virtual address space as a concept, this is free.
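
A minimal sketch of what that looks like in practice (assuming a typical Linux/x86 setup): there is no explicit null test anywhere in this code, yet running it is terminated by the OS with a segfault, because the MMU traps the access to an unmapped page.

class C
{
    int x;
    void bump() { x++; }
}

void main()
{
    C c = null;
    c.bump();   // access through a null reference; the hardware page fault,
                // not any generated check, is what stops the program
}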

> Those are the negatives, but what benefits do classes give us?
> First, being reference types, it's easy to move them in memory. That would
> be nice for a compacting GC, but D doesn't have a compacting GC.

You can do that with pointers, too. D doesn't do that because (a) it's difficult and we don't have the people required to make it work well enough, (b) it would make it harder to interface with other languages, (c) unions mean we would be unable to move some objects and people tend to be less thrilled about partial solutions than complete ones.

> Second, they
> are useful when you need to run code that someone else wrote for
> your project. Something like a plugin system. [sarcasm]This is happening
> every day[/sarcasm]
> Third, porting code from Java to D.
> 
> Everything else you can do with structs and other D features.

Similarly, you can write Java-style object oriented code in C. It's hideously ugly and rather error-prone. Every project trying to do it would do it in a different and incompatible way.
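
As a rough illustration (my sketch, written in D with structs rather than in C), this is the kind of hand-rolled dispatch you end up with when the language doesn't provide it: a function-pointer table plus an untyped context pointer, with nothing stopping you from wiring it up wrong.

import std.stdio;

struct Animal
{
    void function(Animal*) speak;   // the hand-rolled "vtable" (one slot here)
    void* data;                     // the "derived" part, untyped
}

struct Dog
{
    string name;
}

void dogSpeak(Animal* self)
{
    auto dog = cast(Dog*) self.data;   // unchecked downcast, easy to get wrong
    writeln(dog.name, " says woof");
}

Animal makeDog(string name)
{
    return Animal(&dogSpeak, cast(void*) new Dog(name));
}

void main()
{
    Animal a = makeDog("Rex");
    a.speak(&a);   // manual "virtual" dispatch through the function pointer
}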

Walter decided a long time ago that language support for Java-style OOP was a useful component for D to have, and having a standardized way of doing it with proper language support was better than leaving it to a library.
November 21, 2018
On Wednesday, 21 November 2018 at 09:31:41 UTC, Patrick Schluter wrote:
> On Tuesday, 20 November 2018 at 23:14:27 UTC, Johan Engelen wrote:
>> On Tuesday, 20 November 2018 at 19:11:46 UTC, Steven Schveighoffer wrote:
>>> On 11/20/18 1:04 PM, Johan Engelen wrote:
>>>>
>>>> D does not make dereferencing on class objects explicit, which makes it harder to see where the dereference is happening.
>>>
>>> Again, the terms are confusing. You just said the dereference happens at a.foo(), right? I would consider the dereference to happen when the object's data is used. i.e. when you read or write what the pointer points at.
>>
>> But `a.foo()` is already using the object's data: it is accessing a function of the object and calling it. Whether it is a virtual function, or a final function, that shouldn't matter.
>
> It matters a lot. A virtual function is a pointer that is in the instance, so there is a dereference of the this pointer to get the address of the function.
> For a final function, the address of the function is known at compile time and no dereferencing is necessary.
>
> That is a thing that a lot of people do not get: a member function and a plain function are basically the same thing. What distinguishes them is their mangled name. You can call a non-virtual member function from an assembly source if you know the symbol name.
> UFCS uses this fact (that member functions and plain functions are indistinguishable from an object code point of view) to fake member functions.

This and the rest of your email is exactly the kind of thinking that I oppose, where language semantics and compiler implementation are being mixed. I don't think it's possible to write an optimizing compiler where that way of reasoning works. So D doesn't do that, and we have to treat language semantics separately from implementation details. (Virtual functions don't have to be implemented using vtables, local variables don't have to be on a stack, "a+b" does not need to result in a CPU add instruction, "foo()" does not need to result in a CPU procedure call instruction, etc. D is not a portable assembly language.)

-Johan

November 21, 2018
On Wednesday, 21 November 2018 at 07:47:14 UTC, Jonathan M Davis wrote:
>
> IMHO, requiring something in the spec like "it must segfault when dereferencing null", as has been suggested before, is probably not a good idea and is getting too specific (especially considering that some folks have argued that not all architectures segfault like x86 does), but ultimately, the question needs to be discussed with Walter. I did briefly discuss it with him at this last dconf, but I don't recall exactly what he had to say about the ldc optimization stuff. I _think_ that he was hoping that there was a way to tell the optimizer to just not do that kind of optimization, but I don't remember for sure.

The issue is not specific to LDC at all. DMD also does optimizations that assume that dereferencing [*] null is UB. The example I gave is dead-code-elimination of a dead read of a member variable inside a class method, which can only be done either if the spec says that `a.foo()` is UB when `a` is null, or if `this.a` is UB when `this` is null.
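
A sketch of the kind of example being referred to (my reconstruction, not necessarily the exact code from the earlier post):

class C
{
    int a;

    final void foo()
    {
        int tmp = a;   // dead read of a member through `this`; tmp is never used
    }
}

void main()
{
    C c = null;
    c.foo();   // if the dead read is eliminated, this may not fault at all
}

The optimizer may delete the read of `a` only if calling `c.foo()` with a null `c` is UB; if the spec defined it to fault, the load would have to stay, because removing it would change observable behavior.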

[*] I notice you also use "dereference" for an execution machine [**] reading from a memory address, instead of the language doing a dereference (which may not necessarily mean a read from memory).
[**] intentional weird name for the CPU? Yes. We also have D code running as webassembly...

-Johan

November 21, 2018
On Wednesday, 21 November 2018 at 03:05:07 UTC, Neia Neutuladh wrote:
>
> Virtual function calls have to do a dereference to figure out which potentially overridden function to call.

"have to do a dereference" in terms of "dereference" as language semantic: yes.
"have to do a dereference" in terms of "dereference" as reading from memory: no. If you have proof of the runtime type of an object, then you can use that information to have the CPU call the overrided function directly without reading from memory.

-Johan

November 21, 2018
On Wednesday, 21 November 2018 at 21:05:37 UTC, aliak wrote:
> On Wednesday, 21 November 2018 at 17:46:29 UTC, Alex wrote:
>> compiled against 4.6.1 Framework.
>>
>> However, of course, there is a NullReferenceException, if c happens to be null, when calling baz.
>>
>> So the difference is not the compiler behavior, but just the runtime behavior...
>>
>> How could the compiler know the state of Random anyway, before the program run.
>
> The compiler would not be able to prove that something was initialized, and hence could give an error. Maybe C# doesn't do it, but Swift certainly does:
>
> class C {
>     func baz() {}
> }
>
> func f() {
>     var x: C
>     if Int.random(in: 0 ..< 10) < 5 {
>         x = C()
>     }
>     x.baz()
> }
>
> error: variable 'x' used before being initialized

Nice! Didn't know that... But the language is a foreign one for me.

Nevertheless, from what I saw:
Shouldn't it be
var x: C?
i.e. an optional type? Because otherwise I can't assign nil to the variable, which I can do with a class reference in D...
And if it is optional, it works in the same manner as C# (tried this out! :) )

Comparing non-optional types from Swift with classes in D is... yeah... hmm... evil ;)

And if you assume a type which cannot be nil, then you are effectively back to structs again...

But I wondered about something different:
Even if the compiler checked for the existence of an assignment, the runtime information cannot be deduced, if I understand this correctly. And if so, whether something is or is not null cannot be checked at compile time. Right?
November 22, 2018
On Wednesday, 21 November 2018 at 17:11:23 UTC, Stefan Koch wrote:
>
> For _TRIVIAL_cases this is not hard.
>
> But we cannot only worry about trivial cases;
> We have to consider _all_ cases.
>
> Therefore we better not emit an error in a trivial case.
> Which could lead users to assume that we are detecting all the cases.
> That in turn will give the impression of an unreliable system, and indeed that impression would not be too far from the truth.

On the face of it, that seems a reasonable argument, i.e. consistency.

On the other hand, I see nothing 'reliable' about handing off the responsibility of detecting run-time errors to the OS ;-)

I would prefer to catch these errors at compile time, or run time.

D can do neither it seems.

November 22, 2018
On Wednesday, 21 November 2018 at 17:00:29 UTC, Alex wrote:
> This was not my point. I wonder whether the case where the compiler can't figure out the initialization state of an object is so hard to construct.
>
> ´´´
> import std.experimental.all;
>
> class C
> {
> 	size_t dummy;
> 	final void baz()
> 	{
> 		if(this is null)
> 		{
> 			writeln(42);
> 		}
> 		else
> 		{
> 			writeln(dummy);
> 		}
> 	}
> }
> void main()
> {
> 	C c;
> 	if(uniform01 < 0.5)
> 	{
> 		c = new C();
> 		c.dummy = unpredictableSeed;
> 	}
>         else
>         {
>                 c = null;
>         }
> 	c.baz;
> 	writeln(c is null);
> }
> ´´´
>
> C# wouldn't reject the case above, would it?

As `c` is initialized in both branches, the compiler knows it's always in an initialized state after the if statement.
November 22, 2018
On Wednesday, 21 November 2018 at 22:24:06 UTC, Johan Engelen wrote:
> The issue is not specific to LDC at all. DMD also does optimizations that assume that dereferencing [*] null is UB.

Do you have an example? I think it treats null dereference as implementation-defined but otherwise safe.
November 22, 2018
On Wednesday, 21 November 2018 at 23:27:25 UTC, Alex wrote:
> Nice! Didn't know that... But the language is a foreign one for me.
>
> Nevertheless, from what I saw:
> Shouldn't it be
> var x: C?
> as an optional kind, because otherwise, I can't assign a nil to the instance, which I can do to a class instance in D...
> and if it is, it works in the same manner as C#, (tried this out! :) )

This is true. But then the difference is that you can't* call a method on an optional variable without first unwrapping it (which is enforced at compile time as well).

* You can force-unwrap it, and then you'd get a segfault if there was nothing inside the optional. But most of the time, if you see someone force-unwrapping an optional, it's a code smell in Swift.

>
> Comparing non-optional types from swift with classes in D is... yeah... hmm... evil ;)

Hehe, maybe in a way. Was just trying to show that compilers can fix the null reference "problem" at compile time. And that flow analysis can detect initialization.

>
> And if you assume a kind which cannot be nil, then you are again with structs here...
>
> But I wondered about something different:
> Even if the compiler would check the existence of an assignment, the runtime information cannot be deduced, if I understand this correctly. And if so, it cannot be checked at compile time, if something is or is not null. Right?

Aye. But depending on how a language is designed, this problem (if you think it is one) can be dealt with. It's why Swift has optionals built into the language.