August 08, 2022

On Monday, 8 August 2022 at 01:49:57 UTC, Nicholas Wilson wrote:

>

On Sunday, 7 August 2022 at 20:48:02 UTC, ryuukk_ wrote:

>

On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:

>

[...]

What we should promote more about D is the fact that

"GC is here when you need it, but you can also go raw when you need it, pragmatism allows D to be used for 99.9% of traditional softwares, but is also suitable for the remaining 0.1%"

And not just "We have a GC too, who needs to manage memory manually LOL"

You seem to be unaware that D does have more than one GC available.
Specifically, there is a fork-based GC available for Linux that is not stop-the-world, and is usable for real-time applications.

Perhaps we should advertise that more. Its only real downside is that it is Linux-only.

Oh! Do we have any benchmarks comparing the performance (throughput, memory consumption, latency, etc.)?

August 08, 2022

On Sunday, 7 August 2022 at 21:25:57 UTC, ryuukk_ wrote:

>

On Sunday, 7 August 2022 at 21:17:50 UTC, max haughton wrote:

>

It's actually 69.420% of all software in the world

Exactly, which is why this quote is bullshit

But nobody wants to understand the problems anymore

https://discord.com/blog/why-discord-is-switching-from-go-to-rust

Let's miss every opportunity to capture market share

https://i.kym-cdn.com/photos/images/original/000/732/494/c35.gif đŸ˜‰

August 08, 2022

On Sunday, 7 August 2022 at 21:25:57 UTC, ryuukk_ wrote:

>

On Sunday, 7 August 2022 at 21:17:50 UTC, max haughton wrote:

>

On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:

>

On Sunday, 7 August 2022 at 17:23:52 UTC, Paulo Pinto wrote:

>

[...]

That's kinda bullshit, it depends on the GC implementation

D's GC is not good for 99.99% "of all software in the world"; it's wrong to say this, and it's misleading

Java's GCs are, because Java offers multiple implementations that you can configure, and they cover a wide range of use cases

D's GC is not a panacea; it's nice to have, but it's not something to brag about, especially when it STILL stops the world during collection, and STILL doesn't scale

Go did it right by focusing on low latency and parallelism; we should copy their GC

It's actually 69.420% of all software in the world

Exactly, which is why this quote is bullshit

But nobody wants to understand the problems anymore

https://discord.com/blog/why-discord-is-switching-from-go-to-rust

Let's miss every opportunity to capture market share

I don't see how that is related. According to the investigation described in the article you linked, Go's GC was set up to run every 2 minutes, no questions asked. That's not true for D's GC.
Instead of jumping on the Rust hype train, they could have forked Go's GC and solved the actual performance problem: the forced 2-minute GC run.

As far as D's default GC is concerned: last time I checked, it only runs a collection cycle on an allocation. Further, once the GC has allocated memory from the OS, it won't release it back until the program terminates.
This means the GC can re-allocate previously allocated, but now collected, memory basically for free, because there's no context switch into the kernel and back, which could carry the additional cost of reloading cache lines. But all of this depends on a lot of factors, so it may or may not be a big deal.

Also, when you run your own memory management, keep in mind that your manual call to *alloc/free is just as expensive as when the GC calls it. Also keep in mind that your super-fast allocator (as in the lib/system call you use to allocate the memory) may not actually allocate the memory at the point of the call; the real allocation may be deferred until the memory is actually accessed, which can cause lag akin to that of a collection cycle, depending on how much memory you allocate.

It's possible to pre-allocate memory with a GC, re-use those buffers, and slice them as you see fit, without ever triggering a collection cycle.
You can also disable D's GC in hot areas.
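For illustration, here's a sketch of that pre-allocation pattern using `core.memory.GC` (the buffer size and slice boundaries are made up for the example):

```d
import core.memory : GC;

void main()
{
    // Pre-allocate one big buffer up front. D's GC only runs a
    // collection cycle on allocation, so code that merely slices
    // this buffer can never trigger one.
    auto buffer = new ubyte[](64 * 1024 * 1024);

    GC.disable();              // belt and braces: no cycles in the hot path
    scope (exit) GC.enable();

    auto header  = buffer[0 .. 256];     // slicing allocates nothing
    auto payload = buffer[256 .. 4096];  // neither does re-slicing

    // ... hot code works within these pre-allocated slices ...
}
```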

IME the GC saves a lot of headaches, much more than it causes and I'd much rather have more convenience in communicating my intentions to the GC than cluttering every API with allocator parameters.

Something like:

```d
@GC(DND) // Do Not Disturb
{
  foreach (...)
    // hot code goes here and no collection cycles will happen
}
```

or,

```d
void load_assets()
{
  // allocate, load stuff, etc..
  @GC(collect); // lag doesn't matter here
}
```
August 08, 2022

On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:

>

[...]

D made a serious mistake by having raw pointers in the default language (even in safe mode) rather than opaque references. This means that D cannot offer different GC algorithms as easily as other languages.

If D had opaque references, we would have seen more GC types that fit more needs.

D3 needs to happen so that we can correct these serious flaws.

August 08, 2022

On Monday, 8 August 2022 at 15:05:49 UTC, wjoe wrote:

>
```d
@GC(DND) // Do Not Disturb
{
  foreach (...)
    // hot code goes here and no collection cycles will happen
}
```

or,

```d
void load_assets()
{
  // allocate, load stuff, etc..
  @GC(collect); // lag doesn't matter here
}
```

This is possible using the `GC` API in `core.memory`:

```d
{
    import core.memory: GC;

    GC.disable();
    scope(exit) GC.enable();

    foreach (...)
        // hot code goes here
}
```

```d
void load_assets()
{
    import core.memory: GC;

    // allocate, load stuff, etc..
    GC.collect();
}
```
August 08, 2022

On Monday, 8 August 2022 at 15:07:47 UTC, IGotD- wrote:

>

[snip]

D made a serious mistake by having raw pointers in the default language (even in safe mode) rather than opaque references. This means that D cannot offer different GC algorithms as easily as other languages.

If D had opaque references, we would have seen more GC types that fit more needs.

D3 needs to happen so that we can correct these serious flaws.

It is a bit of a design trade-off, though. If you have two separate pointer types, then a function that takes one of them needs an overload to work with the other. Some kind of type erasure would be useful to prevent template bloat.

August 08, 2022
On Monday, 8 August 2022 at 15:25:40 UTC, Paul Backus wrote:
> This is possible using the `GC` API in `core.memory`:
>
> ```d
> {
>     import core.memory: GC;
>
>     GC.disable();
>     scope(exit) GC.enable();
>
>     foreach (...)
>         // hot code goes here
> }
> ```
>
> ```d
> void load_assets()
> {
>     import core.memory: GC;
>
>     // allocate, load stuff, etc..
>     GC.collect();
> }
> ```

Yes, but it's more typing and it requires an import.
No intention to complain; just saying, convenience and such. :)
August 08, 2022

On Sunday, 7 August 2022 at 22:39:24 UTC, Paulo Pinto wrote:

>

Discord switched to Rust because they wanted to work with cool new toys; that was the actual reason. Meanwhile, they use Electron for their "desktop" app.

I don't know what their reasoning was, but you need twice as much memory for GC. But yeah, chat is not a low-latency application.

August 08, 2022

On Monday, 8 August 2022 at 15:05:49 UTC, wjoe wrote:

>

[...]

I'm not on the anti-GC train; I use it myself in some of my projects, and I find it very useful to have

The point I am trying to make is that D has the capability to serve both GC users and people whose performance constraints prohibit the use of a GC

But for some reason, people in the community only focus on the GC and disregard everything else, preventing me from properly advertising D as a pragmatic solution

That's it

August 08, 2022

On Monday, 8 August 2022 at 15:39:16 UTC, jmh530 wrote:

>

It is a bit of a design trade-off though. If you have two separate pointer types, then a function that takes a pointer of one has to have an overload to get the second one working. Some kind of type erasure would be useful to prevent template bloat.

Yes, as always there is a trade-off. In almost no cases will functions take raw pointers; just like in C#, which has raw pointers, you almost never use them other than in special cases.

Problems start to arise when programs and shared libraries are compiled with different GCs. One thing I have personally noticed: having a pointer to the free function in the managed pointer type makes it very versatile. Even changing the GC at runtime becomes possible.

With managed pointers only, the world would open up for us to experiment with things like this.
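A minimal sketch of that idea (all names here are hypothetical, not from druntime or any library): a managed pointer that carries a pointer to its own free function, so memory allocated by different allocators, or even different GCs, can be released correctly without the caller knowing which one was used.

```d
import core.stdc.stdlib : free, malloc;

// Hypothetical: a fat pointer that remembers how to free itself.
struct Managed(T)
{
    T* ptr;
    extern (C) void function(void*) freeFn; // supplied by the allocator

    void release()
    {
        if (ptr !is null)
        {
            freeFn(ptr); // call whichever deallocator created us
            ptr = null;
        }
    }
}

// An allocator pairs the pointer with its matching deallocator;
// a different GC would simply supply a different freeFn.
Managed!T makeManaged(T)()
{
    auto p = cast(T*) malloc(T.sizeof);
    return Managed!T(p, &free);
}
```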