September 11, 2012
On Wednesday, 5 September 2012 at 12:28:43 UTC, Piotr Szturmaj wrote:
> Benjamin Thaut wrote:
>> I do object pooling in both versions, as in game development you
>> usually don't allocate during the frame. But still, in the GC version you
>> have the problem that way too many parts of the language allocate and you
>> don't even notice it when using the GC.
>
> There's one proposed solution to this problem: http://forum.dlang.org/thread/k1rlhn$19du$1@digitalmars.com

It's a bad solution, IMHO. Monitoring druntime and hunting down every part that allocates until the codebase is correct, as Benjamin Thaut did, is a much better solution.
September 11, 2012
SomeDude:

> It's a bad solution, IMHO. Monitoring druntime and hunting down every part that allocates until the codebase is correct, as Benjamin Thaut did, is a much better solution.

Why do you think such a hunt is better than letting the compiler tell you what parts of your program have the side effects you want to avoid?
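
As a concrete illustration (mine, not from the thread) of what "letting the compiler tell you" already looks like for an existing guarantee, here is how nothrow is enforced today; a nogc-style check for allocations would be analogous:

void mayThrow()
{
    throw new Exception("boom");
}

void leaf() nothrow
{
    // mayThrow();   // rejected at compile time: mayThrow is not nothrow
}

void caller() nothrow
{
    leaf();          // fine: the guarantee is checked transitively by the compiler
}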

Bye,
bearophile
September 11, 2012
It's not difficult to implement, as the compiler only needs to warn that /certain/ library calls it emits /may/ cause heap allocations.
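
As a rough sketch (illustrative only; the function names are made up), this is the kind of innocuous-looking code such a warning would have to flag, with the well-known implicit GC allocation points marked in comments:

import std.conv : to;

int[] makeBuffer(int n)
{
    auto buf = new int[](n);        // explicit GC allocation
    buf ~= 42;                      // appending may reallocate on the GC heap
    return buf ~ buf;               // array concatenation always allocates
}

string describe(int x)
{
    return "value: " ~ x.to!string; // builds a new GC-allocated string
}

int delegate() counter()
{
    int i;
    return () => ++i;               // the closure captures i on the GC heap
}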

Regards.
----
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';

On 11 Sep 2012 11:31, "bearophile" <bearophileHUGS@lycos.com> wrote:
>
> SomeDude:
>
>> It's a bad solution, IMHO. Monitoring druntime and hunting down every part that allocates until the codebase is correct, as Benjamin Thaut did, is a much better solution.
>
> Why do you think such a hunt is better than letting the compiler tell you what parts of your program have the side effects you want to avoid?
>
> Bye,
> bearophile


September 12, 2012
On Tuesday, 11 September 2012 at 10:28:29 UTC, bearophile wrote:
> SomeDude:
>
>> It's a bad solution, IMHO. Monitoring druntime and hunting down every part that allocates until the codebase is correct, as Benjamin Thaut did, is a much better solution.
>
> Why do you think such a hunt is better than letting the compiler tell you what parts of your program have the side effects you want to avoid?
>
> Bye,
> bearophile

My problem is that you litter your codebase with nogc everywhere. In a similar fashion, the nothrow keyword, for instance, has to be appended just about everywhere, and I find it very ugly on its own. Basically, with this scheme, you have to annotate every single method you write for each and every guarantee (nothrow, nogc, nosideeffect, noshared, whatever you fancy) you want to ensure. This doesn't scale well at all.

I would find it okay to use a @noalloc annotation as a shortcut for a compiler switch or an external tool to detect allocations in some part of the code (as a digression, I tend to think of D @annotations as compiler or tooling switches; one could imagine a general scheme where an @annotation is associated with a compiler/tool switch whose effect is limited to the annotated scope).
I suppose the tool would have to build the full call tree starting from the @nogc method until it reaches the leaves or finds calls to new or malloc; you would have to do that for every single @nogc annotation, which could be very slow, unless you trust the developer that his code indeed doesn't allocate, which means he effectively needs to litter his codebase with nogc keywords.
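
To make the scaling concern concrete (a hypothetical sketch; D did not have such an attribute when this was written, though it later gained a transitively checked @nogc along these lines), every function on a call path has to carry the annotation for the check to hold, which is exactly the "littering" being described:

@nogc int leaf(int x)
{
    return x * 2;          // fine: no allocation
}

@nogc int middle(int x)
{
    return leaf(x) + 1;    // fine: only calls @nogc functions
}

@nogc int top(int x)
{
    // auto a = [x, x];    // would be rejected: the array literal allocates
    return middle(x);      // every callee must itself be marked @nogc
}
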
September 12, 2012
class Foo
{
    @safe nothrow:
    void method_is_nothrow(){}
    void method_is_also_nothrow(){}
}


or

class Foo
{
    @safe nothrow
    {
        void method_is_nothrow(){}
        void method_is_also_nothrow(){}
    }
}

no need to append it to every single method by hand...



On 12.09.2012 at 04:38, SomeDude <lovelydear@mailmetrash.com> wrote:

> On Tuesday, 11 September 2012 at 10:28:29 UTC, bearophile wrote:
>> SomeDude:
>>
>>> It's a bad solution, IMHO. Monitoring druntime and hunting down every part that allocates until the codebase is correct, as Benjamin Thaut did, is a much better solution.
>>
>> Why do you think such a hunt is better than letting the compiler tell you what parts of your program have the side effects you want to avoid?
>>
>> Bye,
>> bearophile
>
> My problem is that you litter your codebase with nogc everywhere. In a similar fashion, the nothrow keyword, for instance, has to be appended just about everywhere, and I find it very ugly on its own. Basically, with this scheme, you have to annotate every single method you write for each and every guarantee (nothrow, nogc, nosideeffect, noshared, whatever you fancy) you want to ensure. This doesn't scale well at all.
>
> I would find it okay to use a @noalloc annotation as a shortcut for a compiler switch or an external tool to detect allocations in some part of the code (as a digression, I tend to think of D @annotations as compiler or tooling switches; one could imagine a general scheme where an @annotation is associated with a compiler/tool switch whose effect is limited to the annotated scope).
> I suppose the tool would have to build the full call tree starting from the @nogc method until it reaches the leaves or finds calls to new or malloc; you would have to do that for every single @nogc annotation, which could be very slow, unless you trust the developer that his code indeed doesn't allocate, which means he effectively needs to litter his codebase with nogc keywords.


-- 
Created with Opera's revolutionary e-mail client: http://www.opera.com/mail/
September 13, 2012
On Wednesday, 12 September 2012 at 02:37:52 UTC, SomeDude wrote:
> On Tuesday, 11 September 2012 at 10:28:29 UTC, bearophile wrote:
>> SomeDude:
>>
>>> It's a bad solution, IMHO. Monitoring druntime and hunting down every part that allocates until the codebase is correct, as Benjamin Thaut did, is a much better solution.
>>
>> Why do you think such a hunt is better than letting the compiler tell you what parts of your program have the side effects you want to avoid?
>>
>> Bye,
>> bearophile
>
> My problem is that you litter your codebase with nogc everywhere. In a similar fashion, the nothrow keyword, for instance, has to be appended just about everywhere, and I find it very ugly on its own. Basically, with this scheme, you have to annotate every single method you write for each and every guarantee (nothrow, nogc, nosideeffect, noshared, whatever you fancy) you want to ensure. This doesn't scale well at all.
>
> I would find it okay to use a @noalloc annotation as a shortcut for a compiler switch or an external tool to detect allocations in some part of the code (as a digression, I tend to think of D @annotations as compiler or tooling switches; one could imagine a general scheme where an @annotation is associated with a compiler/tool switch whose effect is limited to the annotated scope).
> I suppose the tool would have to build the full call tree starting from the @nogc method until it reaches the leaves or finds calls to new or malloc; you would have to do that for every single @nogc annotation, which could be very slow, unless you trust the developer that his code indeed doesn't allocate, which means he effectively needs to litter his codebase with nogc keywords.

This is partially what happens in C++/CLI and C++/CX.

October 23, 2012
Here's a small update:

I found a piece of code that manually slowed down the simulation in case it got too fast. This code never kicked in with the GC version, because it never reached the margin. The manually memory-managed version, however, did reach the margin and was slowed down. With this piece of code removed, the manually memory-managed version runs at 5 ms, which is 200 FPS and thus nearly 3 times as fast as the GC-collected version.

Kind Regards
Benjamin Thaut
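
The limiter code itself isn't shown in the thread; purely as an illustration (the names and the 10 ms cap below are invented), a frame limiter of this sort sleeps whenever a frame finishes faster than its target, so it only ever throttles the build that is already fast enough to reach the margin:

import core.thread : Thread;
import core.time : MonoTime, Duration, msecs;

enum Duration frameCap = 10.msecs;   // hypothetical margin

void updateAndRender() { /* one simulation/render step */ }

void mainLoop()
{
    for (;;)
    {
        immutable start = MonoTime.currTime;
        updateAndRender();
        immutable elapsed = MonoTime.currTime - start;

        // Only kicks in when a frame is faster than the cap, so a slower
        // build (here, the GC version) is never throttled by it.
        if (elapsed < frameCap)
            Thread.sleep(frameCap - elapsed);
    }
}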

October 23, 2012
On Tuesday, 23 October 2012 at 16:30:41 UTC, Benjamin Thaut wrote:
> Here's a small update:
>
> I found a piece of code that manually slowed down the simulation in case it got too fast. This code never kicked in with the GC version, because it never reached the margin. The manually memory-managed version, however, did reach the margin and was slowed down. With this piece of code removed, the manually memory-managed version runs at 5 ms, which is 200 FPS and thus nearly 3 times as fast as the GC-collected version.
>
> Kind Regards
> Benjamin Thaut

That's a very significant difference in performance that should not be taken lightly. I don't really see a general solution to the GC problem other than to design things such that a D programmer has a truly practical ability to not use the GC at all and to ensure that it does not sneak back in. IMHO, it was a mistake to make D depend on the GC to the degree that it does.

The GC is also the reason why D has a few other significant technical problems not related to performance, such as the inability to link D code to C/C++ code if the GC is required on the D side, and the inability to build dynamic libraries and runtime-loadable plugins that link to the runtime system - the GC apparently does not work correctly in these situations. Although the problem is solvable, how this was allowed to happen in the first place is difficult to understand.

I'd be a much happier D programmer if I could guarantee where and when the GC is used; therefore the GC should be 100% optional in practice, not just in theory.

--rt

October 23, 2012
On Tuesday, 11 September 2012 at 10:28:29 UTC, bearophile wrote:
> SomeDude:
>
>> It's a bad solution, IMHO. Monitoring druntime and hunting down every part that allocates until the codebase is correct, as Benjamin Thaut did, is a much better solution.
>
> Why do you think such a hunt is better than letting the compiler tell you what parts of your program have the side effects you want to avoid?
>

A compiler option that warns about undesirable heap allocations would allow all such allocations to be identified much more easily, without missing anything. This is a general solution to a general problem, where a programmer wishes to avoid heap allocations for whatever reason.

--rt

October 24, 2012
On Tuesday, 23 October 2012 at 22:31:03 UTC, Rob T wrote:
> On Tuesday, 23 October 2012 at 16:30:41 UTC, Benjamin Thaut wrote:
>> Here's a small update:
>>
>> I found a piece of code that manually slowed down the simulation in case it got too fast. This code never kicked in with the GC version, because it never reached the margin. The manually memory-managed version, however, did reach the margin and was slowed down. With this piece of code removed, the manually memory-managed version runs at 5 ms, which is 200 FPS and thus nearly 3 times as fast as the GC-collected version.
>>
>> Kind Regards
>> Benjamin Thaut
>
> That's a very significant difference in performance that should not be taken lightly. I don't really see a general solution to the GC problem other than to design things such that a D programmer has a truly practical ability to not use the GC at all and to ensure that it does not sneak back in. IMHO, it was a mistake to make D depend on the GC to the degree that it does.
>
> The GC is also the reason why D has a few other significant technical problems not related to performance, such as the inability to link D code to C/C++ code if the GC is required on the D side, and the inability to build dynamic libraries and runtime-loadable plugins that link to the runtime system - the GC apparently does not work correctly in these situations. Although the problem is solvable, how this was allowed to happen in the first place is difficult to understand.
>
> I'd be a much happier D programmer if I could guarantee where and when the GC is used; therefore the GC should be 100% optional in practice, not just in theory.
>
> --rt


Having dealt with systems programming in languages with GC (Native Oberon, Modula-3), I wonder how much an optional GC would really matter if D's GC had better performance.

--
Paulo