October 12, 2018
On Friday, 12 October 2018 at 19:43:02 UTC, Stanislav Blinov wrote:
> On Friday, 12 October 2018 at 18:50:26 UTC, Neia Neutuladh wrote:
>
>> Over the lifetime of the script, it processed more memory than my computer had. That means I needed a memory management strategy other than "allocate everything". The GC made that quite easy.
>
> Now *that* is a good point. Then again, until you run out of address space you're still fine with just plain old allocate-and-forget. Not that it's a good thing for production code, but for one-off scripts? Sure.
>
>>>> People demonstrably have trouble doing that. We can do it most of the time, but everyone occasionally forgets.
>>> 
>>> The GC isn't a cure for forgetfulness. One can also forget to close a file or a socket, or I dunno, cancel a financial transaction.
>
>> By lines of code, programs allocate memory much more often than they deal with files or sockets or financial transactions. So anything that requires less discipline when dealing with memory will reduce bugs a lot, compared with a similar system dealing with sockets or files.
>
> My point is it's irrelevant whether it's memory allocation or something else. If you allow yourself to slack on important problems, that habit *will* bite you in the butt in the future.

Freeing your mind and the codebase from having to deal with memory leaves them in an easier place to deal with the less common, higher-impact leaks: file descriptors, sockets, database handles, etc. (this is like chopping down the forest so you can see the trees you care about ;) ).
October 12, 2018
On Friday, 12 October 2018 at 19:55:02 UTC, Nicholas Wilson wrote:

> Freeing your mind and the codebase from having to deal with memory leaves them in an easier place to deal with the less common, higher-impact leaks: file descriptors, sockets, database handles, etc. (this is like chopping down the forest so you can see the trees you care about ;) ).

That's done first and foremost by stripping out unnecessary allocations, not by writing "new" every other line and closing your eyes.

I mean come on, it's 2018. We're writing code for multi-core and multi-processor systems with complex memory interaction. Precisely where in memory your data is, how it got there and how it's laid out should be the bread and butter of any D programmer. It's true that it isn't critical for one-off scripts, but neither is deallocation.

Saying stuff like "do more with GC" is just outright harmful. Kids are reading, for crying out loud.
October 12, 2018
On Friday, 12 October 2018 at 20:12:26 UTC, Stanislav Blinov wrote:
> Saying stuff like "do more with GC" is just outright harmful. Kids are reading, for crying out loud.

People in this thread mostly said that for some things GC is just awesome. When you need to get shit done fast and dirty, the GC saves time and mental capacity. Not all code deals with sockets, DB, bank transactions, multithreading, etc.
October 12, 2018
On Friday, 12 October 2018 at 16:26:49 UTC, Stanislav Blinov wrote:
> On Thursday, 11 October 2018 at 21:22:19 UTC, aberba wrote:
>> "It takes care of itself
>> -------------------------------
>> When writing a throwaway script...
>
> ...there's absolutely no need for a GC.

True. There's also absolutely no need for computer languages either, machine code is sufficient.

> In fact, the GC runtime will only detract from performance.

Demonstrably untrue. It puzzles me why this myth persists. There are trade-offs, and one should pick whatever is best for the situation at hand.

>> What this means is that whenever I have disregarded a block of information, say removed an index from an array, then that memory is automatically cleared and freed back up on the next sweep. While the process of collection and actually checking
>
> Which is just as easily achieved with just one additional line of code: free the memory.

*Simply* achieved, not *easily*. Decades of bugs have shown emphatically that it's not easy.
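
A minimal sketch of the failure mode (names and sizes made up; the early return is the classic way that "one additional line" gets skipped):

import core.stdc.stdlib : malloc, free;

void manual(size_t n, bool bail)
{
    auto p = cast(int*) malloc(n * int.sizeof);
    if (p is null) return;
    if (bail) return;   // oops: no free(p) on this path -- the leak
    // ... use p ...
    free(p);            // the "one additional line"
}

void collected(size_t n, bool bail)
{
    auto a = new int[n];   // GC-owned; every exit path is covered
    if (bail) return;
    // ... use a ...
}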

>> Don't be a computer. Do more with GC.
>
> Writing a throwaway script there's nothing stopping you from using mmap or VirtualAlloc.

There is: writing less code to achieve the same result.

> The "power" of GC is in the language support for non-trivial types, such as strings and associative arrays. Plain old arrays don't benefit from it in the slightest.

For me, the power of tracing GC is that I don't need to think about ownership, lifetimes, or manual memory management. I also don't have to please the borrow checker gods.

Yes, there are other resources to manage. RAII nearly always manages those; I don't need to think about them either.
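
For instance, std.stdio.File is a struct whose destructor closes the handle deterministically; a minimal sketch (the path is made up):

import std.stdio : File;

void log(string msg)
{
    auto f = File("log.txt", "a");   // "log.txt" is just an example path
    f.writeln(msg);
}   // f's destructor flushes and closes the file here -- no GC needed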


October 12, 2018
On Friday, 12 October 2018 at 20:12:26 UTC, Stanislav Blinov wrote:
> On Friday, 12 October 2018 at 19:55:02 UTC, Nicholas Wilson wrote:
>
>> Freeing your mind and the codebase from having to deal with memory leaves them in an easier place to deal with the less common, higher-impact leaks: file descriptors, sockets, database handles, etc. (this is like chopping down the forest so you can see the trees you care about ;) ).
>
> That's done first and foremost by stripping out unnecessary allocations, not by writing "new" every other line and closing your eyes.

D isn't Java. If you can, put your data on the stack. If you can't, `new` away and don't think about it. The chances you'll have to optimise the code are not high. If you do, the chances that the GC allocations are the problem are also not high. If the profiler shows they are... then remove those allocations.
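
Concretely, something like this (a minimal sketch; the sizes are made up):

void example()
{
    int[256] small;                  // lives on the stack: no allocation at all
    auto big = new int[1_000_000];   // too big for the stack: let the GC have it
    small[0] = 42;
    big[0] = small[0];
}   // `big` is never freed by hand; if the profiler complains later, revisit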

> I mean come on, it's 2018. We're writing code for multi-core and multi-processor systems with complex memory interaction.

Sometimes we are. Other times it's a 50 line script.

> Precisely where in memory your data is, how it got there and how it's laid out should be the bread and butter of any D programmer.

Of any D programmer writing code that's performance sensitive.

> It's true that it isn't critical for one-off scripts, but neither is deallocation.

We'll agree to disagree.

> Saying stuff like "do more with GC" is just outright harmful.

Disagreement yet again.



October 12, 2018
On Friday, 12 October 2018 at 21:15:04 UTC, welkam wrote:

> People in this thread mostly said that for some things GC is just awesome. When you need to get shit done fast and dirty, the GC saves time and mental capacity. Not all code deals with sockets, DB, bank transactions, multithreading, etc.

Read the OP again then. What message does it send? What broad conclusion does it draw from a niche use case?
October 12, 2018
On 10/11/18 11:20 PM, JN wrote:
> On Thursday, 11 October 2018 at 21:22:19 UTC, aberba wrote:
[snip]
> That is fine, if you want to position yourself as competition to languages like Go, Java or C#. D wants to be a viable competitor to languages like C, C++ and Rust; as a result, there are use cases where the GC might not be enough.

Does it though? The way I see it, people who want to do what C/C++ does are going to use ... C/C++. The same goes for Java/C#: people who want to do what Java/C# do are pretty much just going to use Java/C#. And nothing D does is going to convince them that D is truly better.

For the C/C++ people, D's more involved semantics for non-GC code are ALWAYS going to be a turnoff. And for the Java/C# people, D's less evolved standard library (and library ecosystem) is ALWAYS going to be a turnoff.

Where D shines is in its balance between the two extremes. If I want to attempt what C# can do with C++, I'm going to spend the next ten years writing code to replace what ships OOB in .NET. If I want to use C# as a systems language, I have to reinvent everything that C# relies on from the ground up, which will also cost me about ten years (see MSR's Singularity).

IMHO D should focus on being the best possible D it can be. If we take care of D, the rest will attend to itself.

-- 
Adam Wilson
IRC: LightBender
import quiet.dlang.dev;
October 12, 2018
On Friday, 12 October 2018 at 21:34:35 UTC, Atila Neves wrote:

>>> -------------------------------
>>> When writing a throwaway script...
>>
>> ...there's absolutely no need for a GC.
>
> True. There's also absolutely no need for computer languages either, machine code is sufficient.

Funny. Now for real, in a throwaway script, what is there to gain from a GC? Allocate away and forget about it.

>> In fact, the GC runtime will only detract from performance.

> Demonstrably untrue. It puzzles me why this myth persists.

Myth, is it now? Unless all you do is allocate memory, which isn't any kind of useful application, the GC's metadata is *cold* on pretty much every sweep. What's worse, you don't control how much data there is or where it is. Need I say more? If you disagree, please do the demonstration then.

> There are trade-offs, and one should pick whatever is best for the situation at hand.

Exactly. Which is *not at all* what the OP is encouraging to do.

>>> What this means is that whenever I have disregarded a block of information, say removed an index from an array, then that memory is automatically cleared and freed back up on the next sweep. While the process of collection and actually checking
>>
>> Which is just as easily achieved with just one additional line of code: free the memory.
>
> *Simply* achieved, not *easily*. Decades of bugs have shown emphatically that it's not easy.

Alright, from one non-native English speaker to another, well done, I salute you. I also used the term "dangling pointer" previously, where I should've used "non-null". Strange you didn't catch that.
To the point: *that* is a myth. The bugs you're referring to are not *solved* by the GC, they're swept under a rug. Because the bugs themselves are in people's heads, stemming from that proverbial programmer laziness. It's like everyone is Scarlett O'Hara with a keyboard.

For most applications, you *do* know how much memory you'll need, either exactly or as an estimate. Garbage collection is useful for the cases where you don't, or can't estimate, and even then only for a limited subset of those.
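
If you can estimate, you can preallocate once up front and be done with it; a sketch (numbers made up):

import std.array : appender;

auto build(size_t expected)
{
    auto app = appender!(int[])();
    app.reserve(expected);        // one allocation, sized from the estimate
    foreach (i; 0 .. expected)
        app.put(cast(int) i);     // no reallocations while the estimate holds
    return app.data;
}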

>>> Don't be a computer. Do more with GC.
>>
>> Writing a throwaway script there's nothing stopping you from using mmap or VirtualAlloc.
>
> There is: writing less code to achieve the same result.

Well, I guess either of those do take more arguments than a "new", so yup, you do indeed write "less" code. Only that you have no clue how much more code is hiding behind that "new", how many indirections, DLL calls, syscalls with libc's wonderful poison that is errno... You don't want to think about that. Then two people start using your script. Then ten, a hundred, a thousand. Then it becomes a part of an OS distribution. And no one wants to "think about that".
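
For the record, here's roughly what those extra arguments buy you: a Posix-only sketch with no hidden machinery, error handling mostly elided:

version (Posix)
{
    import core.sys.posix.sys.mman;

    // Allocate pages straight from the OS -- every step is visible.
    void[] pageAlloc(size_t bytes)
    {
        auto p = mmap(null, bytes, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANON, -1, 0);
        return p == MAP_FAILED ? null : p[0 .. bytes];
    }

    void pageFree(void[] mem) { munmap(mem.ptr, mem.length); }
}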

>> The "power" of GC is in the language support for non-trivial types, such as strings and associative arrays. Plain old arrays don't benefit from it in the slightest.

> For me, the power of tracing GC is that I don't need to think about ownership, lifetimes, or manual memory management.

Yes you do, don't delude yourself. Pretty much the only way you don't is if you're writing purely functional code. But we're talking about D here.
Reassigned a reference? You thought about that. If you didn't, you just wrote a nasty bug. How much more hypocrisy can we reach here?

"Fun" fact: it's not @safe to "new" anything in D if your program uses any classes. Thing is, it does unconditionally thanks to DRuntime.

> I also don't have to please the borrow checker gods.

Yeah, that's another extremum. I guess "rustaceans" or whatever the hell they call themselves are pushing that one, aren't they? "Let's not go for a GC, let's straight up cut out whole paradigms for safety's sake..."

> Yes, there are other resources to manage. RAII nearly always manages those; I don't need to think about them either.

Yes you do. You do need to write those destructors or scoped finalizers, don't you? Or so help me use a third-party library that implements those? There's fundamentally *no* difference from memory management here. None, zero, zip.
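
A sketch of just how similar the two look (the file name is made up):

import core.stdc.stdlib : malloc, free;
import std.stdio : File;

void work()
{
    auto f = File("data.txt");   // RAII: the destructor closes it at scope exit
    auto buf = malloc(4096);
    scope (exit) free(buf);      // scoped finalizer: same shape as the file
    // ... use f and buf ...
}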

Sad thing is, you're not alone. Look at all the major OSs today. How long does it take to, I don't know, open a project in Visual Studio on Windows? Or do a search in a huge file opened in 'less' on Unix? On an octacore 4GHz machine with 32GB of 3GHz memory? It should just instantly pop up on the screen, shouldn't it? Why doesn't it then? Because most programmers think the way you do: "oh it doesn't matter here, I don't need to think about that". And then they proceed to advocate those "awesome" laid-back solutions that oh so help them save so much time coding. Of course they do, at everyone else's expense. Decades later, we're now trying to solve problems that shouldn't have existed in the first place. You'd think that joke was just that, a joke...

But let's get back to D. Look at Phobos. Why does stdout.writefln need to allocate? How many times does it copy its arguments? Why can't it take non-copyable arguments? Does it change global state in your process' address space? Does it impose external dependencies? You don't want to think about that? The author likely didn't either. And yet everybody is encouraged to use that: it's out of the box after all...
Why is Socket a class, blown up from a puny 32-bit value to a bloated who-knows-how-many-bytes monstrosity? Will that socket close if you rely on the GC? Yes? No? Maybe? Why?
Can I deploy the compiler on a remote machine with limited RAM and expect it to always successfully build my projects and not run out of memory?

I can go on and on, but I hope I finally made my point somewhat clearer. Just in case, a TLDR: *understand your machine and your tools and use them accordingly*. There are no silver bullets for anything, and that includes the GC. If you go on advocating it because it helped you write a 1kLOC one-time-use script, it's very likely I don't want to use anything you write.
October 12, 2018
On Friday, 12 October 2018 at 20:12:26 UTC, Stanislav Blinov wrote:
> On Friday, 12 October 2018 at 19:55:02 UTC, Nicholas Wilson wrote:
>
>> Freeing your mind and the codebase from having to deal with memory leaves them in an easier place to deal with the less common, higher-impact leaks: file descriptors, sockets, database handles, etc. (this is like chopping down the forest so you can see the trees you care about ;) ).
>
> That's done first and foremost by stripping out unnecessary allocations, not by writing "new" every other line and closing your eyes.

If you need perf in your _scripts_, a) use LDC and b) pass -O3, which, among many other improvements over the baseline, will promote unnecessary GC allocations to the stack.
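
For instance (a sketch; the promotion only happens when the optimizer can prove the allocation doesn't escape, and it isn't guaranteed):

// Built with `ldc2 -O3`, the allocation below is a candidate for promotion
// to the stack, because `tmp` never escapes the function.
int sum3()
{
    auto tmp = new int[3];
    tmp[0] = 1; tmp[1] = 2; tmp[2] = 3;
    return tmp[0] + tmp[1] + tmp[2];
}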

> I mean come on, it's 2018. We're writing code for multi-core and multi-processor systems with complex memory interaction.

We might be, sometimes. I suspect a script is less likely to fall into that category.

> Precisely where in memory your data is, how it got there and how it's laid out should be the bread and butter of any D programmer. It's true that it isn't critical for one-off scripts, but neither is deallocation.
>
> Saying stuff like "do more with GC" is just outright harmful.

That is certainly not an unqualified truth. Yes, one shouldn't `new` stuff just for fun, but speed of the executable is often not what one is trying to optimise for when writing code; e.g. when writing a script one is probably trying to minimise development/debugging time.

> Kids are reading, for crying out loud.

Oi, you think that's bad? Try reading what some of the other Aussies post, *cough* e.g. a frustrated Manu *cough*

October 12, 2018
On Friday, 12 October 2018 at 21:39:13 UTC, Atila Neves wrote:

> D isn't Java. If you can, put your data on the stack. If you can't, `new` away and don't think about it.

Then five years later, try and hunt down that mysterious heap corruption. Caused by some destructor calling into buggy third-party code. Didn't want to think about that one either?

> The chances you'll have to optimise the code are not high. If you do, the chances that the GC allocations are the problem are also not high. If the profiler shows they are... then remove those allocations.

>> I mean come on, it's 2018. We're writing code for multi-core and multi-processor systems with complex memory interaction.
>
> Sometimes we are. Other times it's a 50 line script.

There is no "sometimes" here. You're writing programs for specific machines. All. The. Time.

>> Precisely where in memory your data is, how it got there and how it's laid out should be the bread and butter of any D programmer.
>
> Of any D programmer writing code that's performance sensitive.

All code is performance sensitive. Whoever invented that distinction should be publicly humiliated. If it's not speed, it's power consumption. Or memory. Or I/O. "Not thinking" about any of that means you're treating your champion horse as if it were a one-legged pony.
Advocating the "not thinking" approach makes you an outright evil person.