October 08, 2013
On Tuesday, 8 October 2013 at 16:22:25 UTC, Dicebot wrote:
> On Tuesday, 8 October 2013 at 15:43:46 UTC, ponce wrote:
>> Is there a plan to have a standard counter-attack to that kind of overblown problem?
>> It could be just a solid blog post or a @nogc feature.
>
> It is not overblown. It is simply that "@nogc" is lacking but absolutely mandatory. The amount of hidden language allocations makes manually cleaning code of them via runtime asserts completely unreasonable for a real project.

Please no more attributes. What next, @nomalloc?

Making sure your code doesn't allocate isn't that difficult.
October 08, 2013
On 10/8/13 10:00 AM, Dicebot wrote:
>
> proper performance

I apologize for picking out your post, Dicebot, as the illustrative example, but I see this pop up in various discussions and I've been meaning to comment on it for a while.

Please stop using words like 'proper', 'real', and other similar terms to describe a requirement. It's a horrible specifier and adds no useful detail.

It tends to needlessly set up the conversation as confrontational or adversarial and implies that anyone who disagrees is wrong or not working on a real system. There are lots of cases where pushing to the bleeding edge isn't actually required.

Thanks,
Brad
October 08, 2013
On Tuesday, 8 October 2013 at 17:55:33 UTC, Araq wrote:
> O(1) malloc implementations exist; it is a solved problem. (http://www.gii.upv.es/tlsf/)

custom allocator != generic malloc

In such conditions you almost always want to use an incremental region allocator anyway. The problem is hidden automatic allocation.
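
A minimal sketch of what I mean by a region allocator (names, sizes and the 16-byte alignment are illustrative only, not taken from any real project):

import core.stdc.stdlib : free, malloc;

// Incremental region (bump) allocator sketch.
struct Region
{
    ubyte* base;
    size_t used;
    size_t capacity;

    static Region withCapacity(size_t bytes)
    {
        Region r;
        r.base = cast(ubyte*) malloc(bytes);
        r.capacity = bytes;
        return r;
    }

    // Bump-pointer allocation: O(1), no per-object bookkeeping, no hidden GC use.
    void* alloc(size_t bytes)
    {
        enum size_t alignment = 16;
        size_t start = (used + alignment - 1) & ~(alignment - 1);
        if (start + bytes > capacity)
            return null;            // caller falls back or fails the request
        used = start + bytes;
        return base + start;
    }

    // Everything allocated for a request is released in one go afterwards.
    void reset() { used = 0; }

    void release() { free(base); base = null; used = capacity = 0; }
}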

> TLSF executes a maximum of 168 processor instructions on an x86 architecture. Saying that you can't use that during request handling is like saying that you can't afford a cache miss.

Some time ago I was working on a networking project where the request context was specifically designed to fit in a single cache line, and breaking this immediately resulted in a 30-40% performance penalty. There is nothing crazy about saying you can't afford an extra cache miss. It is just not that common. The same goes for avoiding heap allocations (though that is much more common).
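
Purely illustrative (the real layout is obviously not something I can show), but the idea was along these lines, with the size enforced at compile time:

// Hypothetical request context sized to a single 64-byte cache line.
struct RequestContext
{
    void*     connection;  // 8 bytes
    ulong     requestId;   // 8
    uint      flags;       // 4
    uint      length;      // 4
    ubyte[40] inlineData;  // small payloads stay inline, no heap allocation
}
static assert(RequestContext.sizeof == 64,
              "request context must fit in a single cache line");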
October 08, 2013
On Tuesday, 8 October 2013 at 19:52:32 UTC, Brad Roberts wrote:
> On 10/8/13 10:00 AM, Dicebot wrote:
>>
>> proper performance
>
> I apologize for picking out your post, Dicebot, as the illustrative example, but I see this pop up in various discussions and I've been meaning to comment on it for a while.
>
> Please stop using words like 'proper', 'real', and other similar terms to describe a requirement. It's a horrible specifier and adds no useful detail.
>
> It tends to needlessly set up the conversation as confrontational or adversarial and implies that anyone who disagrees is wrong or not working on a real system. There are lots of cases where pushing to the bleeding edge isn't actually required.
>
> Thanks,
> Brad

What wording would you suggest instead? For me, "proper" is pretty much equal to "meeting requirements / expectations as defined by similar projects written in C". It has nothing to do with "real" vs "toy" projects; it just implies that in some domains such expectations are more restrictive.
October 08, 2013
On Tuesday, 8 October 2013 at 19:38:22 UTC, Peter Alexander wrote:
> Making sure your code doesn't allocate isn't that difficult.

What would you use for that? It is not that it is difficult; it is that it is unnecessarily (and considerably) time-consuming.
October 08, 2013
On Tuesday, 8 October 2013 at 16:22:25 UTC, Dicebot wrote:
> On Tuesday, 8 October 2013 at 15:43:46 UTC, ponce wrote:
>> Is there a plan to have a standard counter-attack to that kind of overblown problem?
>> It could be just a solid blog post or a @nogc feature.
>
> It is not overblown.

I'm certain that most people complaining about it absolutely do not have constraints that eliminate the possibility of using a GC.
October 08, 2013
On 08.10.2013 22:39, Dicebot wrote:
> On Tuesday, 8 October 2013 at 17:55:33 UTC, Araq wrote:
> O(1) malloc implementations exist; it is a solved problem.
>> (http://www.gii.upv.es/tlsf/)
>
> custom allocator != generic malloc
>
> In such conditions you almost always want to use an incremental region
> allocator anyway. The problem is hidden automatic allocation.
>
> TLSF executes a maximum of 168 processor instructions on an x86
>> architecture. Saying that you can't use that during request handling
>> is like saying that you can't afford a cache miss.
>
> Some time ago I was working on a networking project where the request
> context was specifically designed to fit in a single cache line, and
> breaking this immediately resulted in a 30-40% performance penalty.
> There is nothing crazy about saying you can't afford an extra cache
> miss. It is just not that common. The same goes for avoiding heap
> allocations (though that is much more common).

How did you manage to keep the request size portable across processors/motherboards?

Was the hardware design fixed?

--
Paulo
October 08, 2013
On Tuesday, 8 October 2013 at 20:55:39 UTC, Paulo Pinto wrote:
> How did you manage to keep the request size portable across processors/motherboards?
>
> Was the hardware design fixed?

Yes, it was a tightly coupled h/w + s/w solution sold as a whole, and portability was out of the question. I am still under NDA for a few more years though, so I can't really tell the most interesting stuff.

(I was speaking about the request context struct though, not the request data itself)
October 08, 2013
> What would you use for that? It is not that it is difficult; it is that it is unnecessarily (and considerably) time-consuming.

It's likely that allocations would show up in a profiler, since GC collections are triggered by them. But I have never tested it.

October 08, 2013
On Tuesday, 8 October 2013 at 20:44:55 UTC, Dicebot wrote:
> On Tuesday, 8 October 2013 at 19:38:22 UTC, Peter Alexander wrote:
>> Making sure your code doesn't allocate isn't that difficult.
>
> What would you use for that? It is not difficult, it is unnecessary (and considerably) time-consuming.

Just learn where allocations occur and avoid them during development. This leaves you only with accidental or otherwise unexpected allocations.
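
For example, a few of the usual suspects and their non-allocating counterparts (names are made up, and exact behaviour depends on the compiler and runtime):

import core.stdc.stdio : snprintf;

void hotPath(int[] data, int x)
{
    // data ~= x;                            // appending can reallocate via the GC
    // string msg = "value: " ~ x.to!string; // concatenation allocates a new array

    // Non-allocating alternative: format into a fixed stack buffer.
    char[64] buf;
    int len = snprintf(buf.ptr, buf.length, "value: %d", x);

    // Closures that escape allocate their frame on the GC heap;
    // a static nested function (or a plain function pointer) does not.
    static int addOne(int v) { return v + 1; }
}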

Accidental allocations will come up during profiling (which is done anyway in a performance-sensitive program). The profiler gives you the call stack, so these are trivial to spot and remove. There are also several other ways to spot allocations (modify druntime to log on allocation, set a breakpoint in the GC using a debugger, etc.), although I don't do these.

You say it is time-consuming. In my experience it isn't. General profiling and performance tuning are more time-consuming.

You may argue that profiling won't always catch accidental allocations due to test coverage. This is true, but then @nogc is only a partial fix to this anyway. It will catch GC allocations, but what about accidental calls to malloc, mmap, or maybe an accidental IO call due to some logging you forgot to remove? GC allocations are just one class of performance problem; there are many more, and I hope we don't have to add attributes for them all.
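
To make that concrete, a hypothetical @nogc would reject the first commented line below but happily accept everything after it (sketch only; the names are made up and I'm assuming the attribute simply forbids GC allocation in the function body):

import core.stdc.stdio : printf;
import core.stdc.stdlib : free, malloc;

@nogc void handler(int[] buf, int x)
{
    // buf ~= x;                 // rejected: appending allocates with the GC

    auto p = malloc(1024);       // allowed: not a GC allocation, but still a heap call
    scope (exit) free(p);

    printf("handling %d\n", x);  // allowed: accidental IO slips straight through
}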