April 17, 2014
On Thursday, 17 April 2014 at 08:05:42 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 17 April 2014 at 06:56:11 UTC, Paulo Pinto wrote:
>> There is a reason why Dalvik is being replaced by ART.
>
> AoT compilation?

Not only that. Dalvik was left to bit rot and has hardly seen any updates since Android 2.3.

>
> Btw, AFAIK the GC is deprecated for Objective-C from OS X 10.8. The App Store requires apps to be GC-free... Presumably for good reasons.

Because Apple sucks at implementing GCs.

It was not possible to mix binary libraries compiled with the GC enabled and ones compiled with it disabled.

I already mentioned this multiple times here and can hunt down the posts with the respective links if you wish.

The forums were full of crash descriptions.

Their ARC solution is based on Cocoa patterns and only applies to Cocoa and other Objective-C frameworks with the same lifetime semantics.

Basically the compiler inserts the appropriate [... retain] / [... release] calls in the places where an Objective-C programmer is expected to write them by hand. Additionally, a second pass removes redundant invocation pairs.

This way there are no interoperability issues between compiled libraries, as from the point of view of the generated code there is no difference other than the optimized calls.
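
To make this concrete, here is a minimal sketch of the mechanism in D terms (my own illustration, not Apple's implementation): the "inserted" retain/release calls live in the postblit and destructor, and the second pass corresponds to eliding pairs that cancel out.

// Minimal sketch of ARC-style counting in D terms; illustration only.
struct RC(T)
{
    private T* payload;
    private size_t* count;

    static RC make(T value)
    {
        RC r;
        r.payload = new T;
        *r.payload = value;
        r.count = new size_t;
        *r.count = 1;
        return r;
    }

    // What ARC inserts where the programmer would write [obj retain]:
    this(this) { if (count) ++*count; }

    // What ARC inserts where the programmer would write [obj release]:
    ~this()
    {
        if (count && --*count == 0)
        {
            // Last owner drops the payload. A real ARC frees the memory
            // here; the sketch just clears the pointers.
            payload = null;
            count = null;
        }
    }
}

void main()
{
    auto a = RC!int.make(42);
    {
        auto b = a; // inserted "retain" (postblit): count becomes 2
    }               // inserted "release" (dtor): count drops back to 1;
                    // the removal pass could elide this retain/release pair
}                   // final "release": count hits 0, payload is dropped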

Of course it was sold at WWDC as "ARC is better than GC" and not as "ARC is better than the crappy GC implementation we have done".

--
Paulo
April 17, 2014
On Thursday, 17 April 2014 at 08:22:32 UTC, Paulo Pinto wrote:
> Of course it was sold at WWDC as "ARC is better than GC" and not as "ARC is better than the crappy GC implementation we have done".

I have never seen a single instance of a GC-based system doing anything smooth in the realm of audio/visual real-time performance without being backed by a non-GC engine.

You can get decent performance from GC-backed languages for the higher-level constructs on top of a low-level engine. IMHO the same goes for ARC. ARC is a bit more predictable than GC; GC is a bit more convenient and less predictable.

I think D has something to learn from this:

1. Support for manual memory management is important for low-level engines.

2. Support for automatic memory management is important for high-level code on top of that.

The D community is torn because there is some idea that libraries should assume point 2 above and then be retrofitted to point 1. I am not sure if that will work out.

Maybe it is better to just say that structs are bound to manual memory management and classes are bound to automatic memory management.

Use structs for low-level stuff with manual memory management.
Use classes for high-level stuff with automatic memory management.

Then add language support for "union-based inheritance" in structs with a special construct for programmer-specified subtype identification.

That is at least conceptually easy to grasp and the type system can more easily safeguard code than in a mixed model.
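
Sketched in code (my own illustration, not a worked-out proposal):

// "Union-based inheritance" for structs, with a programmer-specified
// tag doing the subtype identification. Names are illustrative only.
enum ShapeKind : ubyte { circle, rect }

struct Circle { float radius; }
struct Rect   { float w, h; }

struct Shape
{
    ShapeKind kind;  // programmer-specified subtype tag
    union            // the "subtypes" share storage
    {
        Circle circle;
        Rect   rect;
    }
}

float area(ref const Shape s)
{
    final switch (s.kind) // the type system can check the tag is handled
    {
        case ShapeKind.circle:
            return 3.14159f * s.circle.radius * s.circle.radius;
        case ShapeKind.rect:
            return s.rect.w * s.rect.h;
    }
}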

Most successful frameworks that allow high-level programming have two layers:
- Python / heavy-duty C libraries
- JavaScript / browser engine
- Objective-C / C and Cocoa/Core Foundation
- ActionScript / C engine

etc

I personally favour the more integrated approach that D appears to be aiming for, but I am starting to feel that, for most programmers, that model is going to be conceptually difficult to grasp in real projects, because they don't really want the low-level stuff, and they don't want to have their high-level code bastardized by low-level requirements.

As far as I am concerned, D could just focus on the structs and the low-level stuff, and then later try to work in the high-level stuff. There is no efficient GC in sight, and the language has not been designed for one either.

ARC with whole-program optimization fits better into the low-level paradigm than GC. So if you start from low-level programming and work your way up to high-level programming then ARC is a better fit.

Ola.
April 17, 2014
On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
> http://wiki.dlang.org/DIP60
>
> Start on implementation:
>
> https://github.com/D-Programming-Language/dmd/pull/3455

This is a good start, but I am sure I am not the only person who thought "maybe we should have this on a module level". This would allow people to nicely group pieces of the application that should not use GC.
April 17, 2014
On Thursday, 17 April 2014 at 08:52:28 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 17 April 2014 at 08:22:32 UTC, Paulo Pinto wrote:
>> Of course it was sold at WWDC as "ARC is better than GC" and not as "ARC is better than the crappy GC implementation we have done".
>
> I have never seen a single instance of a GC-based system doing anything smooth in the realm of audio/visual real-time performance without being backed by a non-GC engine.
>
> You can get decent performance from GC-backed languages for the higher-level constructs on top of a low-level engine. IMHO the same goes for ARC. ARC is a bit more predictable than GC; GC is a bit more convenient and less predictable.
>
> I think D has something to learn from this:
>
> 1. Support for manual memory management is important for low-level engines.
>
> 2. Support for automatic memory management is important for high-level code on top of that.
>
> The D community is torn because there is some idea that libraries should assume point 2 above and then be retrofitted to point 1. I am not sure if that will work out.
>
> Maybe it is better to just say that structs are bound to manual memory management and classes are bound to automatic memory management.
>
> Use structs for low-level stuff with manual memory management.
> Use classes for high-level stuff with automatic memory management.
>
> Then add language support for "union-based inheritance" in structs with a special construct for programmer-specified subtype identification.
>
> That is at least conceptually easy to grasp and the type system can more easily safeguard code than in a mixed model.
>
> Most successful frameworks that allow high-level programming have two layers:
> - Python / heavy-duty C libraries
> - JavaScript / browser engine
> - Objective-C / C and Cocoa/Core Foundation
> - ActionScript / C engine
>
> etc
>
> I personally favour the more integrated approach that D appears to be aiming for, but I am starting to feel that, for most programmers, that model is going to be conceptually difficult to grasp in real projects, because they don't really want the low-level stuff, and they don't want to have their high-level code bastardized by low-level requirements.
>
> As far as I am concerned, D could just focus on the structs and the low-level stuff, and then later try to work in the high-level stuff. There is no efficient GC in sight, and the language has not been designed for one either.
>
> ARC with whole-program optimization fits better into the low-level paradigm than GC. So if you start from low-level programming and work your way up to high-level programming then ARC is a better fit.
>
> Ola.

Looking at the hardware specifications of usable desktop OSs built with automatically memory-managed systems programming languages, we have:

Interlisp and Mesa/Cedar (ARC with a GC for cycle collection), running on the Xerox 1132 (Dorado) and Xerox 1108 (Dandelion),

http://archive.computerhistory.org/resources/access/text/2010/06/102660634-05-05-acc.pdf

Oberon running on Ceres,

ftp://ftp.inf.ethz.ch/pub/publications/tech-reports/1xx/070.pdf

Bluebottle, Oberon's successor, has a primitive video editor,
http://www.ocp.inf.ethz.ch/wiki/Documentation/WindowManager?action=download&upname=AosScreenshot1.jpg

SPIN running on DEC Alpha, http://en.wikipedia.org/wiki/DEC_Alpha

Any iOS device runs circles around those systems, which is why I always like to make clear that it was Apple's failure to make a workable GC in a C-based language, and not the virtues of pure ARC over pure GC, that led to the switch.

Their solution has its merits, and, as I mentioned, the benefit of generating the same code while relieving the developer of the pain of writing those retain/release calls themselves.

A similar approach was taken by Microsoft with their C++/CX and COM integration.

So every pure GC basher now uses Apple's example, with a high probability of not knowing the technical issues that made it turn out that way.

--
Paulo
April 17, 2014
On Thursday, 17 April 2014 at 09:22:55 UTC, Dejan Lekic wrote:
> On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
>> http://wiki.dlang.org/DIP60
>>
>> Start on implementation:
>>
>> https://github.com/D-Programming-Language/dmd/pull/3455
>
> This is a good start, but I am sure I am not the only person who thought "maybe we should have this on a module level". This would allow people to nicely group pieces of the application that should not use GC.

Sure it does.

module mymodule;
@nogc:

void myfunc() {}

class MyClass {
    void mymethod() {}
}


Everything in the above code has @nogc applied to it. Nothing special about it; you can do the same with most attributes, like static, final, and UDAs.
Unless of course you can think of another way it could be done, or I've missed something?
April 17, 2014
Walter Bright:

> http://wiki.dlang.org/DIP60
>
> Start on implementation:
>
> https://github.com/D-Programming-Language/dmd/pull/3455

If I have this program:

__gshared int x = 5;
int main() {
    int[] a = [x, x + 10, x * x];
    return a[0] + a[1] + a[2];
}


If I compile with all optimizations, DMD produces this x86 asm, which contains a call to __d_arrayliteralTX, so main can't be @nogc:

__Dmain:
L0:     push    EAX
        push    EAX
        mov EAX,offset FLAT:_D11TypeInfo_Ai6__initZ
        push    EBX
        push    ESI
        push    EDI
        push    3
        push    EAX
        call    near ptr __d_arrayliteralTX
        mov EBX,EAX
        mov ECX,_D4test1xi
        mov [EBX],ECX
        mov EDX,_D4test1xi
        add EDX,0Ah
        mov 4[EBX],EDX
        mov ESI,_D4test1xi
        imul    ESI,ESI
        mov 8[EBX],ESI
        mov EAX,3
        mov ECX,EBX
        mov 014h[ESP],EAX
        mov 018h[ESP],ECX
        add ESP,8
        mov EDI,010h[ESP]
        mov EAX,[EDI]
        add EAX,4[EDI]
        add EAX,8[EDI]
        pop EDI
        pop ESI
        pop EBX
        add ESP,8
        ret


If I compile that code with ldc2 without optimizations, the result is similar; there is a call to __d_newarrayvT:

__Dmain:
    pushl   %ebp
    movl    %esp, %ebp
    pushl   %esi
    andl    $-8, %esp
    subl    $32, %esp
    leal    __D11TypeInfo_Ai6__initZ, %eax
    movl    $3, %ecx
    movl    %eax, (%esp)
    movl    $3, 4(%esp)
    movl    %ecx, 12(%esp)
    calll   __d_newarrayvT
    movl    %edx, %ecx
    movl    __D4test1xi, %esi
    movl    %esi, (%edx)
    movl    __D4test1xi, %esi
    addl    $10, %esi
    movl    %esi, 4(%edx)
    movl    __D4test1xi, %esi
    imull   __D4test1xi, %esi
    movl    %esi, 8(%edx)
    movl    %eax, 16(%esp)
    movl    %ecx, 20(%esp)
    movl    20(%esp), %eax
    movl    20(%esp), %ecx
    movl    (%eax), %eax
    addl    4(%ecx), %eax
    movl    20(%esp), %ecx
    addl    8(%ecx), %eax
    leal    -4(%ebp), %esp
    popl    %esi
    popl    %ebp
    ret



But if I compile the code with ldc2 with full optimizations, the compiler is able to perform a bit of escape analysis, see that the array doesn't need to be allocated, and produce this asm:

__Dmain:
    movl    __D4test1xi, %eax
    movl    %eax, %ecx
    imull   %ecx, %ecx
    addl    %eax, %ecx
    leal    10(%eax,%ecx), %eax
    ret

Now there are no memory allocations.

So what's the right behaviour of @nogc? Is it possible to compile this main with a future version of ldc2 if I compile the code with full optimizations?

Bye,
bearophile
April 17, 2014
Adam D. Ruppe:

> What I want is a __trait that scans for all call expressions in a particular function and returns all those functions.
>
> Then, we can check them for UDAs using the regular way and start to implement library defined things like @safe, @nogc, etc.

This is the start of a nice idea for extending the D type system a little in user-defined code. But I think it still needs some refinement.

I also think there can be a more automatic way to test them than "the regular way" of putting a static assert outside the function.
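
For reference, "the regular way" would look roughly like this sketch, where __traits(getCalledFunctions, ...) stands in for Adam's proposed trait and does not actually exist (hasUDA is real, from std.traits):

import std.traits : hasUDA;

struct noGC {} // a library-defined attribute

@noGC void leaf() {}
@noGC void caller() { leaf(); }

// "The regular way": a static assert placed outside the function,
// checking every function that `caller` calls for the @noGC attribute.
static foreach (f; __traits(getCalledFunctions, caller)) // hypothetical trait
    static assert(hasUDA!(f, noGC),
                  "caller invokes a function not marked @noGC");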

Bye,
bearophile
April 17, 2014
On Thursday, 17 April 2014 at 09:32:52 UTC, Paulo Pinto wrote:
> Any iOS device runs circles around those systems, hence why I always like to make clear it was Apple's failure to make a workable GC in a C based language and not the virtues of pure ARC over pure GC.

I am not making an argument for pure ARC. Objective-C allows you to mix ARC and manual memory management, and OS X is most certainly not purely ARC-based.

If we go back in time to the era you point to, even C was considered waaaay too slow for real-time graphics.

On the C64 and the Amiga you wrote in assembly and optimized for the hardware, e.g. using the hardware scroll register on the C64 and the copper list (a specialized scanline-triggered processor writing to hardware registers) on the Amiga. There was no way you could do real-time graphics in a GC-backed language back then without a dedicated engine with hardware support. Real-time audio was done with DSPs until the mid-90s.
April 17, 2014
> Is it possible to compile this main with a future version of ldc2 if I compile the code with full optimizations?

Sorry, I meant to ask whether it's possible to compile this main with @nogc applied to it, if I compile it with ldc2 with full optimizations.
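
That is, whether this would be accepted with the DIP 60 syntax (the annotated version below just restates the question):

__gshared int x = 5;

int main() @nogc {
    // After full optimizations no allocation remains, but does the
    // @nogc check run before or after the optimizer?
    int[] a = [x, x + 10, x * x];
    return a[0] + a[1] + a[2];
}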

Bye,
bearophile
April 17, 2014
On 04/17/14 11:33, Rikki Cattermole via Digitalmars-d wrote:
> On Thursday, 17 April 2014 at 09:22:55 UTC, Dejan Lekic wrote:
>> On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
>>> http://wiki.dlang.org/DIP60
>>>
>>> Start on implementation:
>>>
>>> https://github.com/D-Programming-Language/dmd/pull/3455
>>
>> This is a good start, but I am sure I am not the only person who thought "maybe we should have this on a module level". This would allow people to nicely group pieces of the application that should not use GC.
> 
> Sure it does.
> 
> module mymodule;
> @nogc:
>
> void myfunc() {}
>
> class MyClass {
>     void mymethod() {}
> }
>
> 
> Everything in the above code has @nogc applied to it. Nothing special about it; you can do the same with most attributes, like static, final, and UDAs.

It does not work like that. User-defined attributes only apply to the current scope, i.e. your MyClass.mymethod() would *not* have the attribute. With built-in attributes it becomes more "interesting" - for example '@safe' will include child scopes, but 'nothrow' won't.
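
An example of the difference (a sketch of the behaviour I describe above; the comments state the claim, and are not verified against every compiler version):

@safe:
nothrow:

struct MyUDA {}
@MyUDA:

class C
{
    // Per the rules described above:
    //  - '@safe' descends into this child scope, so method() is @safe;
    //  - 'nothrow' does not, so method() is NOT nothrow;
    //  - the user-defined @MyUDA stops at the aggregate boundary,
    //    so method() does not carry it.
    void method() {}
}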

Yes, the current attribute situation in D is a mess. No, attribute inference isn't the answer.

artur