January 26, 2022

On Wednesday, 26 January 2022 at 02:09:24 UTC, Tejas wrote:

> I asked Reddit why ARC isn't used more widely despite Swift being so successful and was swiftly (pun intended 😉) corrected that Swift's user share has become 50% of what it once was at its peak.

Bullshit argument. There is much less demand for iOS-only or Android-only development than for cross-platform development. Swift is not cross-platform. Thus Dart and other solutions are cheaper, and cheaper wins.

What makes Swift annoying is tied to its Objective-C requirements. Swift + C++ is fine for developing Apple-only applications.

January 26, 2022
On Wednesday, 26 January 2022 at 06:20:06 UTC, Elronnd wrote:
> Thread-local gc is a thing.  Good for false sharing too (w/real threads); can move contended objects away from owned ones.  But I see no reason why fibre-local heaps should need to be much different from thread-local heaps.

The difference is that you might have 8 threads but 10000 tasks, so in the latter case you cannot let each heap owner collect its own garbage.


January 26, 2022
On Wednesday, 26 January 2022 at 08:20:51 UTC, Ola Fosheim Grøstad wrote:
> The difference is that you maybe have 8 threads, but maybe 10000 tasks. So in the latter case you cannot let the heap-owner collect its own garbage.

Yes.  Good point.  The more I think about it, the more I see differences and opportunities to profit from doing things differently.
January 26, 2022

On Wednesday, 26 January 2022 at 08:32:44 UTC, Elronnd wrote:

> Yes. Good point. The more I think about it, the more I see differences and opportunities to profit from doing things differently.

Yes, if the load is somewhat even and you have 16 (8+8) cores, then you could let 15 tasks run and pick one of the hundreds of others to collect, with little impact on latency.

But you need heuristics to pick the one that has the most garbage and can be delayed without penalty (e.g. a task that has just started waiting for a network response or is marked as low priority).
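
Roughly the kind of selection I have in mind (just a sketch; TaskHeap and its fields are invented names, not anything that exists in druntime):

    // Hypothetical per-task bookkeeping; all names are made up for illustration.
    struct TaskHeap
    {
        size_t allocatedSinceLastCollect; // rough proxy for how much garbage it holds
        bool   waitingOnIO;               // e.g. just started waiting for a network response
        bool   lowPriority;               // marked as low priority by the scheduler
    }

    // Pick the heap with the most estimated garbage among the tasks that
    // can be delayed without a latency penalty; returns -1 if none qualify.
    ptrdiff_t pickHeapToCollect(const TaskHeap[] heaps)
    {
        ptrdiff_t best = -1;
        size_t bestScore = 0;
        foreach (i, ref h; heaps)
        {
            if (!h.waitingOnIO && !h.lowPriority)
                continue; // collecting now would stall a latency-sensitive task
            if (h.allocatedSinceLastCollect > bestScore)
            {
                bestScore = h.allocatedSinceLastCollect;
                best = cast(ptrdiff_t) i;
            }
        }
        return best;
    }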

So even though the situations seem similar conceptually, I think a good dedicated implementation would be very different! :-D

Sounds like a fun project to me!!

January 26, 2022
On Tuesday, 25 January 2022 at 03:37:57 UTC, Elronnd wrote:
> Apropos recent discussion, here is a serious question: would you pay for either of these?
>

No. No problem => no solution needed.
January 27, 2022
On Tuesday, 25 January 2022 at 13:09:58 UTC, Adam D Ruppe wrote:
> On Tuesday, 25 January 2022 at 03:37:57 UTC, Elronnd wrote:
>> Apropos recent discussion, here is a serious question: would you pay for either of these?
>
> No. D's GC is already plenty good enough right now.

While I might pay for a good GC implementation to be worked on and added to D, unless you need real-time behaviour and have a heavy workload with the GC active a lot, I don't see the need for it. So, as Ruppe says, the current one is probably good enough.

I'd almost prefer to set up the GC with its own thread/core where it works at regular intervals; having recently gotten an 8-core machine, I can't seem to keep all my cores busy, even when trying hard.
January 27, 2022
On Thu, Jan 27, 2022 at 09:11:18PM +0000, Era Scarecrow via Digitalmars-d wrote: [...]
> [...] Having recently gotten an 8-core machine, I can't seem to keep all my cores busy, even when trying hard.

I recently also upgraded to an 8-core AMD CPU with hyperthreading, but I find myself wishing it were 16 cores or 32... maybe even that 80-core Intel experiment from a number of years ago. It just takes forever to churn through the large amounts of computation I throw at it. With the high-volume, compute-intensive tasks I'm doing, one can never have enough CPUs... :-P


T

-- 
"I suspect the best way to deal with procrastination is to put off the procrastination itself until later. I've been meaning to try this, but haven't gotten around to it yet. " -- swr
January 28, 2022
On Thursday, 27 January 2022 at 21:11:18 UTC, Era Scarecrow wrote:
> I'd almost prefer to set up the GC with its own thread/core where it works at regular intervals

Sadly, doesn't work as well as we'd like.  Concurrent GC exists and does peg its own cores, but hurts mainline application performance; I hear 10-50% (depending on workload).  Contention sucks...
January 28, 2022
On 28/01/2022 10:11 AM, Era Scarecrow wrote:
> I'd almost prefer to set up the GC with its own thread/core where it works at regular intervals; having recently gotten an 8-core machine, I can't seem to keep all my cores busy, even when trying hard.

We already do this (more or less).

    uint parallel = 99;      // number of additional threads for marking (limited by cpuid.threadsPerCPU-1)

https://github.com/dlang/druntime/blob/master/src/core/gc/config.d#L26
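
If I remember the documented GC configuration mechanism correctly (https://dlang.org/spec/garbage.html), this can be tuned per application, e.g. embedded in the program itself; a minimal sketch:

    // Sketch: cap parallel marking at 4 extra threads for this program.
    // The same option can be passed on the command line as --DRT-gcopt=parallel:4.
    extern(C) __gshared string[] rt_options = [ "gcopt=parallel:4" ];

    void main()
    {
        auto data = new int[](1_000_000); // collections triggered by allocations
                                          // will mark with up to 4 helper threads
    }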
January 28, 2022
On Wednesday, 26 January 2022 at 06:20:06 UTC, Elronnd wrote:
>
> Thread-local gc is a thing.  Good for false sharing too (w/real threads); can move contended objects away from owned ones.  But I see no reason why fibre-local heaps should need to be much different from thread-local heaps.
>

I would like to challenge the idea that a thread-aware GC would do much for performance. Pegging memory to one thread is unusual and doesn't often correspond to reality.

For example, take a computer game with a large amount of vertex data where you decide to split up the workload across several threads. You don't make a thread-local copy of that data; you keep the original vertex data global, and even the destination buffer would be global.
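
Roughly what I mean, as a made-up sketch (std.parallelism; the function name is invented):

    import std.parallelism : parallel;

    // Several worker threads transform the *same* global vertex buffer,
    // so no single thread "owns" the memory and a thread-local heap buys
    // nothing here.
    void scaleVertices(float[] vertices, float scale)
    {
        foreach (ref v; parallel(vertices))
            v *= scale; // every worker writes into the shared buffer
    }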

What I can think of is a server with one thread per client, with data that no other thread works on. Perhaps there a thread-local GC could be beneficial. My experience is that this threading model isn't good programming; servers should instead be completely async, meaning any thread might handle the next piece of work.
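
By "completely async" I mean something like this made-up sketch (std.parallelism; the handler names are invented):

    import std.parallelism : task, taskPool;

    // Invented handler: parse, process and respond to one request.
    void handleRequest(int clientId) { /* ... */ }

    // Whichever worker thread is free picks up the next request,
    // instead of dedicating one thread to each client.
    void onRequest(int clientId)
    {
        auto t = task(&handleRequest, clientId); // Task allocated on the GC heap
        taskPool.put(t);                         // any idle worker will execute it
    }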

As I see it, a thread-aware GC doesn't do much for performance but complicates things for the programmer.