August 08, 2022

On Monday, 8 August 2022 at 15:51:11 UTC, wjoe wrote:

> Yes, but more typing and it requires an import.
> No intention to complain; just saying convenience and such. :)

These days, new attributes are added to the core.attribute module rather than being available globally, so if the @GC(...) syntax were added, it would also require an import. :)
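For illustration, a minimal sketch (@mustuse really does live in core.attribute today and needs the import; the @GC(...) lines below are purely hypothetical):

```d
// Sketch only: @mustuse is a real core.attribute member; @GC(...) is not.
import core.attribute : mustuse;

@mustuse struct ErrorCode { int value; }

// If a @GC(...) attribute were ever added the same way, usage would look
// roughly like this, and would need exactly the same kind of import:
//     import core.attribute : GC;
//     @GC("disable") void noCollectionsHere() { /* ... */ }
```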

August 08, 2022

On Sunday, 7 August 2022 at 22:39:24 UTC, Paulo Pinto wrote:

> On Sunday, 7 August 2022 at 21:25:57 UTC, ryuukk_ wrote:
>> On Sunday, 7 August 2022 at 21:17:50 UTC, max haughton wrote:
>>> On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
>>>> [...]
>>>
>>> It's actually 69.420% of all software in the world
>>
>> Exactly, hence why this quote is bullshit.
>>
>> But nobody wants to understand the problems anymore.
>>
>> https://discord.com/blog/why-discord-is-switching-from-go-to-rust
>>
>> Let's miss every opportunity to catch market share.
>
> Discord switched to Rust because they wanted to work on cool new toys; that was the actual reason, all while they use Electron for their "desktop" app.
>
> Meanwhile, companies ship production-quality firmware for IoT secure keys written in Go.

I think this kind of start-with-the-desired-conclusion-and-work-backwards thinking is alarmingly prevalent in the computing world (and on the Supreme Court). It is certainly a requirement for being a Rust fan-boy.

But I can tell you that I saw this kind of thing 50+ years ago (human nature just doesn't change), when performance measurement was my specialty. I constantly ran into people who "just knew" why certain code, even their code, performed as it did. Measurements (evidence) were unnecessary. I could tell you many war stories where these people were dead wrong (almost always), even about the behavior of their own code.

August 08, 2022
On Mon, Aug 08, 2022 at 07:11:46PM +0000, Don Allen via Digitalmars-d wrote:
> On Sunday, 7 August 2022 at 22:39:24 UTC, Paulo Pinto wrote:
[...]
> > Discord switched to Rust because they wanted to work on cool new toys; that was the actual reason, all while they use Electron for their "desktop" app.
> > 
> > Meanwhile, companies ship production-quality firmware for IoT secure keys written in Go.
> 
> I think this kind of start-with-the-desired-conclusion-and-work-backwards thinking is alarmingly prevalent in the computing world (and on the Supreme Court). It is certainly a requirement for being a Rust fan-boy.
> 
> But I can tell you that I saw this kind of thing 50+ years ago (human nature just doesn't change), when performance measurement was my specialty. I constantly ran into people who "just knew" why certain code, even their code, performed as it did. Measurements (evidence) were unnecessary. I could tell you many war stories where these people were dead wrong (almost always), even about the behavior of their own code.

Once upon a time, I was one of those guilty as charged. I cherished my l33t C skillz, hand-tweaked every line of code in fits of premature optimization, and "just knew" my code would be faster if I wrote `x++` instead of `x = x + 1`, ad nauseam.

Then one day, I ran a profiler.

It revealed the performance bottleneck was somewhere *completely* different from where I thought it was. (It was a stray debug printf that I'd forgotten to remove after fixing a bug.)  Deleting that one line of code boosted my performance MAGNITUDES more than countless hours of sweating over every line of code to "squeeze all the juice out of the machine".

That was only the beginning; the first dawning of the gradual realization that I was actually WRONG about the performance of my code. Most of the time.  Although one can make educated guesses about where the bottleneck is, without hard proof from a profiler you're just groping in the dark. And most of the time you're wrong.

Eventually, I learned (the hard way) that most real-world bottlenecks
are (1) not where you expect them to be, and (2) can be largely
alleviated with a small code change. Straining through every line of
code is 99.9% of the time unnecessary (and an unproductive use of time).
Always profile, profile, profile.  Only optimize what the profiler
reveals, don't bother with the rest.
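For the curious, a minimal sketch of what that workflow looks like with dmd's
built-in instrumenting profiler. The functions here are made up; the flags and
output file names are real.

```d
// profile_me.d -- compile with `dmd -profile profile_me.d`; running the
// binary then writes trace.log with per-function call counts and timings.
// `dmd -profile=gc` additionally writes profilegc.log for GC allocations.
import std.stdio;

void suspectedHotSpot()
{
    // the code you *think* is slow
    int x;
    foreach (i; 0 .. 1000) x += i * i;
}

void forgottenDebugOutput()
{
    // the stray debug print you forgot to remove after fixing a bug
    stderr.writeln("debug: still here");
}

void main()
{
    foreach (i; 0 .. 10_000) suspectedHotSpot();
    foreach (i; 0 .. 1_000) forgottenDebugOutput();
    writeln("done; now read trace.log to see where the time really went");
}
```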

That's why these days, I don't pay much attention to people complaining about how this or that is "too inefficient" or "too slow".  Show me actual profiler measurements, and I might pay more attention. Otherwise, I just consign it to the premature optimization bin.


T

-- 
In theory, software is implemented according to the design that has been carefully worked out beforehand. In practice, design documents are written after the fact to describe the sorry mess that has gone on before.
August 08, 2022
On Monday, 8 August 2022 at 00:57:52 UTC, Walter Bright wrote:
> I expected Carmack's view to be practical and make perfect sense. I'm pleased to be right!
>
> Anything he has to say about writing code is worth listening too.

Replying to this to emphasise the point.

You know, one of the advantages of Carmack releasing his code is that you can see for yourself what his views on GCs are.

https://github.com/id-Software/DOOM/blob/master/linuxdoom-1.10/z_zone.c

I've spent a lot of time in the Doom source code, and what I've linked here is the Zone allocator. No allocation in the code is done outside of this; it all goes through zones. Allocating with the PU_STATIC tag is the equivalent of a manual malloc, i.e. you need to free it yourself.

Where it gets interesting, though, is the PU_LEVEL and PU_CACHE tags. These are garbage collected zones that you don't need to free yourself.

Level allocations persist exactly until the level is reloaded (be it through death/new game/load game/level warp/etc.). The Z_FreeTags function is used on such a reload to deallocate anything with the PU_LEVEL tag. PU_CACHE is a bit more fun: if the allocator runs out of memory in the free pool, it'll just plain grab something that's marked PU_CACHE. As such, you have no guarantee that any PU_CACHE memory is valid after the next call to the allocator. This is used for textures, in fact, and is both how the game didn't crash from running out of memory on low-spec 386s back in the day _and_ why that disk-loading icon showed up so frequently on such systems.
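If you want the shape of that without reading z_zone.c, here's a rough sketch of the tag idea transplanted into D. The names (Tag, zalloc, freeTags) are mine, not id's, and it deliberately leaves out PU_CACHE's reclaim-under-pressure trick:

```d
// Rough sketch of tag-based lifetimes; this is not the Doom code.
enum Tag { Static, Level, Cache }

struct Block { void[] memory; Tag tag; }
Block[] zone;                        // everything lives in one zone

void[] zalloc(size_t size, Tag tag)
{
    zone ~= Block(new void[](size), tag);
    return zone[$ - 1].memory;
}

// On level reload: one sweep frees everything tagged Level,
// the moral equivalent of Z_FreeTags(PU_LEVEL, ...).
void freeTags(Tag tag)
{
    import std.algorithm.mutation : remove;
    zone = zone.remove!(b => b.tag == tag);
}
```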

So tl;dr is that there's tactical usage of non-GC _AND_ GC memory in Doom. And since it's a C application, there's no concern about destructors. Code is structured in such a way that an init path will be called before attempting to access GC memory again, and the system keeps itself together.

That was also nearly 30 years ago now. And as such, I always laugh whenever someone tries to tell me GC has no place in video games.

I also shipped a game last year where the GC was a major pain. Unreal Engine's GC is a mark-and-sweep collector, not too dissimilar in theory to Rainer's GC (written for Visual D). And we had to do things to it to make it not freeze the game for 300+ milliseconds. Given that we need to present a new frame every 16.666666... milliseconds, that's unacceptable. The solution (amortizing the collect) worked, but it's not really ideal.
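For a sense of what working around that looks like in D terms, here's a hedged sketch of a related (and simpler) mitigation, not what we did in Unreal: keep collections out of the frame loop entirely via core.memory.GC and only collect at safe points. All the game-side names are placeholders.

```d
// Sketch: no automatic collections mid-frame, one explicit collection
// at a safe point. levelFinished/renderFrame/loadingScreen are placeholders.
import core.memory : GC;

bool levelFinished() { return true; }   // placeholder
void renderFrame()   { }                // placeholder

void playLevel()
{
    GC.disable();                       // allocations still work, but no
    scope (exit) GC.enable();           // automatic collection pauses a frame
    while (!levelFinished())
        renderFrame();                  // has ~16.7 ms to do its job
}

void loadingScreen()
{
    GC.collect();                       // a long pause is acceptable here
    GC.minimize();                      // hand unused pages back to the OS
}

void main()
{
    playLevel();
    loadingScreen();
}
```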

I have been meaning to sit down and work on a concurrent GC, but LOLNO as if I have the time.

Separately though:

> there is a fork based GC available

Can we stop talking about this please? It's a technological dead end, unusable on a large number of computers and mobile devices. If that's the best we've got, it's really not good enough.
August 08, 2022

On 8/8/22 6:32 PM, Ethan wrote:

> I have been meaning to sit down and work on a concurrent GC, but LOLNO as if I have the time.

If you mean concurrent as in it can use multiple threads, that is already happening (a sketch of how it's enabled is below).

If you mean concurrent as in you can allocate and mark/sweep in separate threads independently, that would be a huge improvement.

Even if you have some way to designate "this one thread can't be paused", and figure out a way to section that off, it would be huge.
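For reference on the first point, the existing parallel marking is opt-in through druntime's GC options; a minimal sketch, with the thread count chosen arbitrarily:

```d
// Ask druntime's GC to use two additional threads for marking.  The same
// thing can be requested at run time with --DRT-gcopt=parallel:2.
extern (C) __gshared string[] rt_options = ["gcopt=parallel:2"];

void main()
{
    // Allocate as usual; the mark phase of any collection that happens
    // may now be spread across multiple threads.
    auto data = new int[](1_000_000);
    data[] = 42;
}
```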

-Steve

August 09, 2022
On 09/08/2022 10:32 AM, Ethan wrote:
>> there is a fork based GC available
> 
> Can we stop talking about this please? It's a technological dead end, unusable on a large number of computers and mobile devices. If that's the best we've got, it's really not good enough.

I double-checked this (even though we previously discussed it and I believed you), but processsnapshot.h, which is required to do concurrent GCs on Windows, is not available for Xbox.

So yeah, concurrent GCs are out on Xbox.

Need write barriers if you want a better GC.
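Roughly what that means, as a toy sketch only (D's GC has no such hook today, and rememberedSet/writeBarrier are made-up names): every pointer store goes through a barrier so the collector can see mutations that happen while it's marking.

```d
// Toy sketch of a write barrier. Not D's GC -- just the idea: the mutator
// records every pointer store so a concurrent or generational collector can
// revisit those slots instead of missing them.
__gshared void*[] rememberedSet;          // slots mutated since the last mark

void writeBarrier(T)(ref T* slot, T* newValue)
{
    rememberedSet ~= cast(void*) &slot;   // tell the collector about the store
    slot = newValue;                      // then do the actual store
}

void main()
{
    int* p = new int;
    int* q = new int;
    *q = 2;
    writeBarrier(p, q);                   // instead of a plain `p = q;`
}
```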
August 09, 2022
On Monday, 8 August 2022 at 19:49:16 UTC, H. S. Teoh wrote:
<snip>
> That's why these days, I don't pay much attention to people complaining about how this or that is "too inefficient" or "too slow".  Show me actual profiler measurements, and I might pay more attention. Otherwise, I just consign it to the premature optimization bin.

Exactly. That's why I always call C++ and Rust-for-performance (Rust-for-safety is a bit different) POOP languages: Premature Optimization Oriented Programming languages.

August 09, 2022
On 8/8/2022 3:32 PM, Ethan wrote:
> Replying to this to emphasise the point.

Thank you for writing this. It's clever and sensible. I expect nothing less from Carmack!
August 09, 2022
On Tuesday, 9 August 2022 at 14:03:47 UTC, Patrick Schluter wrote:
> On Monday, 8 August 2022 at 19:49:16 UTC, H. S. Teoh wrote:
> <snip>
>> That's why these days, I don't pay much attention to people complaining about how this or that is "too inefficient" or "too slow".  Show me actual profiler measurements, and I might pay more attention. Otherwise, I just consign it to the premature optimization bin.
>
> Exactly. That's why I always call C++ and Rust-for-performance (Rust-for-safety is a bit different) POOP languages: Premature Optimization Oriented Programming languages.

GC is also a premature optimization

Do you need it when you write a one-shot CLI tool? No, you don't; DMD disables it

The key is to understand your domain and pick the right tool for the job

We should not fall into the trap of using a screwdriver for everything; our strength is the ability to have a GC but also to stray away from it whenever your domain requires it, and vice versa
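As a small illustration of that (all names made up): GC-backed convenience for the boring parts, @nogc enforced on the hot path:

```d
// Illustration only: the GC where it doesn't matter, @nogc where it does.
import std.stdio;

string[] loadConfig()
{
    // one-shot tool territory: let the GC (or the OS at exit) clean this up
    return ["threads=4", "verbose=true"];
}

@nogc int hotLoop(const(int)[] samples)
{
    // the compiler rejects any GC allocation in here
    int sum;
    foreach (s; samples) sum += s;
    return sum;
}

void main()
{
    writeln(loadConfig());
    writeln(hotLoop([1, 2, 3]));
}
```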

That's in the vision document, and Atila explained it perfectly at DConf

And the Doom example from Ethan is the perfect real-world use case for my point

Some interesting thread:

https://twitter.com/TheGingerBill/status/1556961078252343296
August 09, 2022
On Tuesday, 9 August 2022 at 14:36:13 UTC, ryuukk_ wrote:
> On Tuesday, 9 August 2022 at 14:03:47 UTC, Patrick Schluter wrote:
>> On Monday, 8 August 2022 at 19:49:16 UTC, H. S. Teoh wrote:
>> <snip>
>>> [...]
>>
>> Exactly. That's why I always call C++ and Rust-for-performance (Rust-for-safety is a bit different) POOP languages: Premature Optimization Oriented Programming languages.
>
> GC is also a premature optimization
>
> Do you need it when you write a one-shot CLI tool? No, you don't; DMD disables it
>
> The key is to understand your domain and pick the right tool for the job
>
> We should not fall into the trap of using a screwdriver for everything; our strength is the ability to have a GC but also to stray away from it whenever your domain requires it, and vice versa
>
> That's in the vision document, and Atila explained it perfectly at DConf
>
> And the Doom example from Ethan is the perfect real-world use case for my point
>
> Some interesting thread:
>
> https://twitter.com/TheGingerBill/status/1556961078252343296

dmd not freeing by default is, and always was, a bad idea. The memory usage on large projects is catastrophic.

So just enable the GC? In theory, yes, but in practice people hold references to stuff all over the place, so the GC often can't actually free anything.
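A tiny sketch of why, with names that just stand in for a compiler's long-lived tables: as long as something reachable still points at the data, a collection can't reclaim it.

```d
// Sketch: a global (still-reachable) reference pins everything it points to,
// so GC.collect() has nothing it is allowed to free.
import core.memory : GC;

int[][] symbolTables;               // stands in for long-lived compiler state

void compileOneModule()
{
    auto data = new int[](100_000); // stands in for AST/semantic data
    symbolTables ~= data;           // the reference escapes into a global
}

void main()
{
    foreach (i; 0 .. 100) compileOneModule();
    GC.collect();                   // reclaims nothing: it's all still reachable
}
```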