April 17, 2014
On Thursday, 17 April 2014 at 20:46:57 UTC, Ola Fosheim Grøstad wrote:
> But compiled Objective-C code looks "horrible" to begin with… so I am not sure how well that translates to D.

Just to make it clear: ARC can make more assumptions than manual Objective-C calls to retain/release. So ARC being "surprisingly fast" relative to manual RC might be due to getting rid of Objective-C inefficiencies caused by explicit calls to retain/release rather than ARC being an excellent solution. YMMV.

April 17, 2014
On 4/17/2014 1:03 PM, John Colvin wrote:
> E.g. you can implement some complicated function foo that writes to a
> user-provided output range and guarantee that all GC usage is in the control of
> the caller and his output range.

As mentioned elsewhere here, it's easy enough to do a unit test for this.


> The advantage of having this as language instead of documentation is the
> turtles-all-the-way-down principle: if some function deep inside the call chain
> under foo decides to use a GC buffer then it's a compile-time-error.

And that's how @nogc works.
April 17, 2014
On 4/17/2014 1:53 PM, Steven Schveighoffer wrote:
> OK, you beat it out of me. I admit, when I said "Video processing/players with
> network capability" I meant all FILE * I/O, and really nothing to do with video
> processing or networking.


I would expect that with a video processor, you aren't dealing with ARC references inside the routine actually doing the work.

April 17, 2014
On 4/17/2014 1:46 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:
> Apple has put a lot of resources into ARC. How much slower than manual RC
> varies, some claim as little as 10%, others 30%, 50%, 100%.

That pretty much kills it, even at 10%.
April 17, 2014
On Thu, 17 Apr 2014 18:08:43 -0400, Walter Bright <newshound2@digitalmars.com> wrote:

> On 4/17/2014 1:53 PM, Steven Schveighoffer wrote:
>> OK, you beat it out of me. I admit, when I said "Video processing/players with
>> network capability" I meant all FILE * I/O, and really nothing to do with video
>> processing or networking.
>
>
> I would expect that with a video processor, you aren't dealing with ARC references inside the routine actually doing the work.

Obviously, if you are dealing with raw data, you are not using ARC while accessing the data. But you are using ARC to get a reference to that data.

For instance, you might see:

-(void)processVideoData:(NSData *)data
{
   const unsigned char *vdata = data.bytes; // NSData exposes its buffer via -bytes
   // process vdata
   ...
}

During the entire processing, you never increment/decrement a reference count, because the caller will have passed data to you with an incremented count.

Just because ARC protects the data, doesn't mean you need to constantly and needlessly increment/decrement references. If you know the data won't go away while you are using it, you can just ignore the reference counting aspect.

-Steve
April 17, 2014
On 4/17/2014 3:18 PM, Steven Schveighoffer wrote:
> During the entire processing, you never increment/decrement a reference count,
> because the caller will have passed data to you with an incremented count.
>
> Just because ARC protects the data, doesn't mean you need to constantly and
> needlessly increment/decrement references. If you know the data won't go away
> while you are using it, you can just ignore the reference counting aspect.

The salient point there is "if you know". If you are doing it, it is not guaranteed memory safe by the compiler. If the compiler is doing it, how does it know?

You really are doing *manual*, not automatic, ARC here, because you are making decisions about when ARC can be skipped, and you must make those decisions in order to have it run at a reasonable speed.
April 17, 2014
On Thursday, 17 April 2014 at 09:46:23 UTC, bearophile wrote:
> Walter Bright:
>
>> http://wiki.dlang.org/DIP60
>>
>> Start on implementation:
>>
>> https://github.com/D-Programming-Language/dmd/pull/3455
>
> If I have this program:
>
> __gshared int x = 5;
> int main() {
>     int[] a = [x, x + 10, x * x];
>     return a[0] + a[1] + a[2];
> }
>
>
> If I compile with all optimizations DMD produces this X86 asm, that contains the call to __d_arrayliteralTX, so that main can't be @nogc:
>
> But if I compile the code with ldc2 with full optimizations, the compiler is able to perform a bit of escape analysis, sees that the array doesn't need to be allocated, and produces the asm:
>
> Now there are no memory allocations.
>
> So what's the right behaviour of @nogc? Could a future version of ldc2 compile this main as @nogc, given that full optimizations remove the allocation?
>
> Bye,
> bearophile

That code is not @nogc safe, as you're creating a dynamic array within it. The fact that LDC2 at full optimizations doesn't actually allocate is simply an optimization and does not affect the design of the code.

If you wanted it to be @nogc, you could use:
int main() @nogc {
    int[3] a = [x, x + 10, x * x];
    return a[0] + a[1] + a[2];
}
April 18, 2014
On Thu, Apr 17, 2014 at 03:52:10PM -0700, Walter Bright via Digitalmars-d wrote:
> On 4/17/2014 3:18 PM, Steven Schveighoffer wrote:
> >During the entire processing, you never increment/decrement a reference count, because the caller will have passed data to you with an incremented count.
> >
> >Just because ARC protects the data, doesn't mean you need to constantly and needlessly increment/decrement references. If you know the data won't go away while you are using it, you can just ignore the reference counting aspect.
> 
> The salient point there is "if you know". If you are doing it, it is not guaranteed memory safe by the compiler. If the compiler is doing it, how does it know?
> 
> You really are doing *manual*, not automatic, ARC here, because you are making decisions about when ARC can be skipped, and you must make those decisions in order to have it run at a reasonable speed.

I thought the whole point of *A*RC was for the compiler to know when ref count updates can be skipped. Or are you saying this is algorithmically undecidable in the compiler?


T

-- 
"You are a very disagreeable person." "NO."
April 18, 2014
Kapps:

> That code is not @nogc safe, as you're creating a dynamic array within it. The fact that LDC2 at full optimizations doesn't actually allocate is simply an optimization and does not affect the design of the code.

Walter answered another person:

> The @nogc will tell you if it will allocate on the gc or not, on a case by case basis, and you can use easy workarounds as necessary.

That can be read as the opposite of what you say. DIP60 needs to give a clear answer on this point.

Bye,
bearophile
April 18, 2014
On 4/17/2014 5:09 PM, H. S. Teoh via Digitalmars-d wrote:
> I thought that whole point of *A*RC is for the compiler to know when ref
> count updates can be skipped? Or are you saying this is algorithmically
> undecidable in the compiler?

I don't think anyone has produced a "sufficiently smart compiler" in that regard.