October 12, 2018
On Friday, 12 October 2018 at 23:32:34 UTC, Nicholas Wilson wrote:
> On Friday, 12 October 2018 at 20:12:26 UTC, Stanislav Blinov

>> That's done first and foremost by stripping out unnecessary allocations, not by writing "new" every other line and closing your eyes.
>
> If you need perf in your _scripts_, a) use LDC and b) pass -O3, which among many other improvements over baseline will promote unnecessary GC allocations to the stack.

If you *need* perf, you write performant code. If you don't need perf, the least you can do is *not write* lazy-ass pessimized crap.
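
For the record, that optimization only applies to allocations that provably don't escape. A minimal sketch (made-up function) of the kind of `new` an optimizer may turn into a stack allocation:

int sumSquares()
{
    // This temporary never escapes the function, so an optimizer that does
    // GC-to-stack promotion is free to put it on the stack instead of
    // calling into the GC.
    auto tmp = new int[](256);
    foreach (i, ref e; tmp)
        e = cast(int)(i * i);

    int total;
    foreach (e; tmp)
        total += e;
    return total;
}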

>> I mean come on, it's 2018. We're writing code for multi-core and multi-processor systems with complex memory interaction.
>
> We might be sometimes. I suspect that is less likely for a script to fall in that category.

Jesus guys. *All* code falls in that category. Because it is being executed by those machines. Yet we all oh so like to pretend that doesn't happen, for some bizarre reason.

>> Precisely where in memory your data is, how it got there and how it's laid out should be bread and butter of any D programmer. It's true that it isn't critical for one-off scripts, but so is deallocation.
>>
>> Saying stuff like "do more with GC" is just outright harmful.
>
> That is certainly not an unqualified truth. Yes one shouldn't `new` stuff just for fun, but speed of executable is often not what one is trying to optimise when writing code, e.g. when writing a script one is probably trying to minimise development/debugging time.

That's fine so long as it doesn't unnecessarily *pessimize* execution. Unfortunately, when you advertise GC for its awesomeness in your experience with "throwaway" scripts, you're sending a very, *very* wrong message.

>> Kids are reading, for crying out loud.
> Oi, you think that's bad? Try reading what some of the other Aussies post, *cough* e.g. a frustrated Manu *cough*

:)
October 13, 2018
On Fri, 12 Oct 2018 23:35:19 +0000, Stanislav Blinov wrote:

>>> Precisely where in memory your data is, how it got there and how it's laid out should be bread and butter of any D programmer.
>>
>> Of any D programmer writing code that's performance sensitive.
> 
> All code is performance sensitive. Whoever invented that distinction should be publicly humiliated. If it's not speed, it's power consumption. Or memory. Or I/O. "Not thinking" about any of that means you're treating your power champion horse as if it was a one-legged pony.

And sometimes it's programmer performance. Last year I had a malformed CSV I needed to manipulate; Excel couldn't handle it, and I couldn't (or didn't know how to) trust a Vim macro to do it, so I wrote a small script in D. My design wasn't even close to high-performance, but it was easy to test (which was probably my biggest requirement); I probably could have spent another 30 minutes writing something that would have run two minutes faster, but that would have been inefficient.

I didn't even keep the script; I'll never need it again. There are times when the easy or simple solution really is the best one for the task at hand.
October 13, 2018
On Saturday, 13 October 2018 at 12:15:07 UTC, rjframe wrote:

> ...I didn't even keep the script; I'll never need it again. There are times when the easy or simple solution really is the best one for the task at hand.

And?.. Would you now go around preaching how awesome the GC is and that everyone should use it?
October 13, 2018
On Friday, 12 October 2018 at 23:24:56 UTC, Stanislav Blinov wrote:
> On Friday, 12 October 2018 at 21:34:35 UTC, Atila Neves wrote:
>
>>>> -------------------------------
>>>> When writing a throwaway script...
>>>
>>> ...there's absolutely no need for a GC.
>>
>> True. There's also absolutely no need for computer languages either, machine code is sufficient.
>
> Funny. Now for real, in a throwaway script, what is there to gain from a GC? Allocate away and forget about it.

In case you run out of memory, the GC scans. That's the gain.

>>> In fact, the GC runtime will only detract from performance.
>
>> Demonstrably untrue. It puzzles me why this myth persists.
>
> Myth, is it now?

Yes.

> Unless all you do is allocate memory, which isn't any kind of useful application, pretty much on each sweep run the GC's metadata is *cold*.

*If* the GC scans.

>> There are trade-offs, and one should pick whatever is best for the situation at hand.
>
> Exactly. Which is *not at all* what the OP is encouraging to do.

I disagree. What I got from the OP was that for most code, the GC helps. I agree with that sentiment.

> Alright, from one non-native English speaker to another, well done, I salute you.

The only way I'd qualify as a non-native English speaker would be to pedantically assert that I can't be due to not having learned it first. In any case, I'd never make fun of somebody's English if they're non-native, and that's most definitely not what I was trying to do here - I assume the words "simple" and "easy" exist in most languages. I was arguing about semantics.

> To the point: *that* is a myth. The bugs you're referring to are not *solved* by the GC, they're swept under a rug.

Not in my experience. They've literally disappeared from the code I write.

> Because the bugs themselves are in the heads, stemming from that proverbial programmer laziness. It's like everyone is Scarlett O'Hara with a keyboard.

IMHO, lazy programmers are good programmers.

> For most applications, you *do* know how much memory you'll need, either exactly or an estimation.

I don't, maybe you do. I don't even care unless I have to. See my comment above about being lazy.

> Well, I guess either of those do take more arguments than a "new", so yup, you do indeed write "less" code. Only that you have no clue how much more code is hiding behind that "new",

I have a clue. I could even look at the druntime code if I really cared. But I don't.

> how many indirections, DLL calls, syscalls with libc's wonderful poison that is errno... You don't want to think about that.

That's right, I don't.

> Then two people start using your script. Then ten, a hundred, a thousand. Then it becomes a part of an OS distribution. And no one wants to "think about that".

Meh. There are so many executables that are part of distributions that are written in Python, Ruby or JavaScript.

>> For me, the power of tracing GC is that I don't need to think about ownership, lifetimes, or manual memory management.
>
> Yes you do, don't delude yourself.

No, I don't. I used to in C++, and now I don't.

> Pretty much the only way you don't is if you're writing purely functional code.

I write pure functional code by default. I only use side-effects when I have to and I isolate the code that does.

> But we're talking about D here.
> Reassigned a reference? You thought about that. If you didn't, you just wrote a nasty bug. How much more hypocrisy can we reach here?

I probably didn't write a nasty bug if the pointer that was reassigned was to GC allocated memory. It lives as long as it has to, I don't think about it.
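
A minimal sketch of what I mean (made-up names): the old allocation stays alive for as long as anything still refers to it, and gets collected once nothing does.

int[] reassignExample() @safe
{
    auto a = new int[](4); // GC-allocated
    auto b = a;            // a second reference to the same array
    a = new int[](8);      // reassigning `a` is fine: no leak, no dangling pointer
    b[0] = 42;             // the first array is still reachable through `b`
    return b;
}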

> "Fun" fact: it's not @safe to "new" anything in D if your program uses any classes. Thing is, it does unconditionally thanks to DRuntime.

I hardly ever use classes in D, but I'd like to know more about why it's not @safe.

>> Yes, there are other resources to manage. RAII nearly always manages that, I don't need to think about that either.
>
> Yes you do. You do need to write those destructors or scoped finalizers, don't you? Or so help me use a third-party library that implements those? There's fundamentally *no* difference from memory management here. None, zero, zip.

I write a destructor once, then I never think about it again. It's a lot different from worrying about closing resources all the time. I don't write `scope(exit)` unless it's only once as well, otherwise I wrap the code in an RAII struct.
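
Something along these lines, say, wrapping a C FILE* (just a sketch; the names are made up, and any resource with an acquire/release pair works the same way):

import core.stdc.stdio : FILE, fopen, fclose;
import std.string : toStringz;

struct CFile
{
    private FILE* fp;

    this(string path, string mode) @trusted
    {
        fp = fopen(path.toStringz, mode.toStringz);
    }

    @disable this(this); // no accidental copies, hence no double-close

    ~this() @trusted
    {
        // Runs deterministically when the struct goes out of scope.
        if (fp !is null) fclose(fp);
        fp = null;
    }
}

Once that exists, closing the resource is never something I have to remember at the call site again.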

> Why is Socket a class, blown up from a puny 32-bit value to a bloated who-knows-how-many-bytes monstrosity? Will that socket close if you rely on the GC? Yes? No? Maybe? Why?

I don't know. I don't think Socket should even have been a class. I assume it was written in the D1 days.

> Can I deploy the compiler on a remote machine with limited RAM and expect it to always successfully build my projects and not run out of memory?

If the compiler had the GC turned on, yes. That's not a point about GC, it's a point about dmd.


October 13, 2018
On Friday, 12 October 2018 at 23:35:19 UTC, Stanislav Blinov wrote:
> On Friday, 12 October 2018 at 21:39:13 UTC, Atila Neves wrote:
>
>> D isn't Java. If you can, put your data on the stack. If you can't, `new` away and don't think about it.
>
> Then five years later, try and hunt down that mysterious heap corruption. Caused by some destructor calling into buggy third-party code. Didn't want to think about that one either?

That hasn't happened to me.

>>> I mean come on, it's 2018. We're writing code for multi-core and multi-processor systems with complex memory interaction.
>>
>> Sometimes we are. Other times it's a 50 line script.
>
> There is no "sometimes" here. You're writing programs for specific machines. All. The. Time.

I am not. The last time I wrote code for a specific machine it was for my 386, probably around 1995.

>>> Precisely where in memory your data is, how it got there and how it's laid out should be bread and butter of any D programmer.
>>
>> Of any D programmer writing code that's performance sensitive.
>
> All code is performance sensitive.

If that were true, nobody would write code in Python. And yet...

> If it's not speed, it's power consumption. Or memory. Or I/O.

Not if it's good enough as it is. Which, in my experience, is frequently the case. YMMV.

> "Not thinking" about any of that means you're treating your power champion horse as if it was a one-legged pony.

Yes. I'd rather the computer spend its time than I mine. I value the latter far more than the former.

> Advocating the "not thinking" approach makes you an outright evil person.

Is there a meetup for evil people now that I qualify? :P

https://www.youtube.com/watch?v=FVAD3LQmxbw&t=42
October 14, 2018
On 14/10/2018 2:08 AM, Atila Neves wrote:
>> "Fun" fact: it's not @safe to "new" anything in D if your program uses any classes. Thing is, it does unconditionally thanks to DRuntime.
> 
> I hardly ever use classes in D, but I'd like to know more about why it's not @safe.

void main() @safe {
    Foo foo = new Foo(8);
    foo.print();
}

class Foo {
    int x;

    this(int x) @safe {
        this.x = x;
    }

    void print() @safe {
        import std.stdio;

        try {
            writeln(x);
        } catch(Exception) {
        }
    }
}
October 13, 2018
On Saturday, 13 October 2018 at 13:08:30 UTC, Atila Neves wrote:
> On Friday, 12 October 2018 at 23:24:56 UTC, Stanislav Blinov

>> Funny. Now for real, in a throwaway script, what is there to gain from a GC? Allocate away and forget about it.
>
> In case you run out of memory, the GC scans. That's the gain.

Correction, in case the GC runs out of memory, it scans.
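
To illustrate (a rough sketch; the exact numbers and thresholds depend on the runtime): allocations are served from the GC's own pools, and a collection is only triggered when a request can't be satisfied from them, or when you ask for one explicitly.

import core.memory : GC;
import std.stdio : writefln;

void main()
{
    foreach (i; 0 .. 1_000)
        cast(void) new ubyte[](64 * 1024); // most of these become garbage immediately

    const s = GC.stats(); // druntime's view of its own pools
    writefln("used: %s bytes, free: %s bytes", s.usedSize, s.freeSize);

    GC.collect(); // an explicit scan, if you really want one
}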

>>>> In fact, the GC runtime will only detract from performance.
>>> Demonstrably untrue. It puzzles me why this myth persists.
>> Myth, is it now?
> Yes.

Please demonstrate.

>> Unless all you do is allocate memory, which isn't any kind of useful application, pretty much on each sweep run the GC's metadata is *cold*.
>
> *If* the GC scans.

"If"? So... ahem... what, exactly, is the point of a GC that doesn't scan? What are you even arguing here? That you can allocate and never free? You can do that without GC just as well.

>>> There are trade-offs, and one should pick whatever is best for the situation at hand.
>>
>> Exactly. Which is *not at all* what the OP is encouraging to do.
>
> I disagree. What I got from the OP was that for most code, the GC helps. I agree with that sentiment.

Helps write code faster? Yes, I'm sure it does. It also helps write slower, unsafe code faster, unless you're paying attention, which, judging by your comments, you aren't doing and aren't inclined to do.

>> Alright, from one non-native English speaker to another, well done, I salute you.
>
> The only way I'd qualify as a non-native English speaker would be to pedantically assert that I can't be due to not having learned it first. In any case, I'd never make fun of somebody's English if they're non-native, and that's most definitely not what I was trying to do here - I assume the words "simple" and "easy" exist in most languages. I was arguing about semantics.

Just FYI, they're the same word in my native language :P

>> To the point: *that* is a myth. The bugs you're referring to are not *solved* by the GC, they're swept under a rug.
>
> Not in my experience. They've literally disappeared from the code I write.

Right. At the expense of introducing unpredictable behavior in your code. Unless you thought about that.

>> Because the bugs themselves are in the heads, stemming from that proverbial programmer laziness. It's like everyone is Scarlett O'Hara with a keyboard.

> IMHO, lazy programmers are good programmers.

Yes, but not at the expense of users and other programmers who'd use their code.

>> For most applications, you *do* know how much memory you'll need, either exactly or an estimation.

> I don't, maybe you do. I don't even care unless I have to. See my comment above about being lazy.

Too bad. You really, really should.

>> Well, I guess either of those do take more arguments than a "new", so yup, you do indeed write "less" code. Only that you have no clue how much more code is hiding behind that "new",
>
> I have a clue. I could even look at the druntime code if I really cared. But I don't.

You should.

>> how many indirections, DLL calls, syscalls with libc's wonderful poison that is errno... You don't want to think about that.
>
> That's right, I don't.

You should. Everybody should.

>> Then two people start using your script. Then ten, a hundred, a thousand. Then it becomes a part of an OS distribution. And no one wants to "think about that".
>
> Meh. There are so many executables that are part of distributions that are written in Python, Ruby or JavaScript.

Exactly my point. That's why we *must not* pile more crap on top of that. That's why we *must* think about the code we write. Just because your neighbour sh*ts in a public square doesn't mean you must do it too.

>>> For me, the power of tracing GC is that I don't need to think about ownership, lifetimes, or manual memory management.
>>
>> Yes you do, don't delude yourself.
>
> No, I don't. I used to in C++, and now I don't.

Yes you do, you say as much below.

>> Pretty much the only way you don't is if you're writing purely functional code.
>
> I write pure functional code by default. I only use side-effects when I have to and I isolate the code that does.
>
>> But we're talking about D here.
>> Reassigned a reference? You thought about that. If you didn't, you just wrote a nasty bug. How much more hypocrisy can we reach here?
>
> I probably didn't write a nasty bug if the pointer that was reassigned was to GC allocated memory. It lives as long as it has to, I don't think about it.

In other words, you knew what you were doing, at which point I'd ask, what's the problem with freeing the no-longer-used memory there and then? There's nothing to "think" about.
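
If you know the lifetime, releasing it is a one-liner (a sketch using the C allocator, but the same goes for any allocator with an explicit release):

import core.stdc.stdlib : malloc, free;

void processChunk()
{
    auto p = cast(int*) malloc(1024 * int.sizeof);
    if (p is null) return;
    scope(exit) free(p); // released right here, deterministically, no GC involved

    auto buf = p[0 .. 1024]; // a temporary workspace that never escapes
    buf[] = 0;
    // ... do the actual work with buf ...
}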

>> "Fun" fact: it's not @safe to "new" anything in D if your program uses any classes. Thing is, it does unconditionally thanks to DRuntime.
>
> I hardly ever use classes in D, but I'd like to know more about why it's not @safe.

rikki's example isn't exactly the one I was talking about, so here goes:

module mycode;

import std.stdio;

import thirdparty;

void mySuperSafeFunction() @safe {
    auto storage = new int[10^^6];
    // do awesome work with that storage...
}

void main() {
    thirdPartyWork();
    writeln("---");
    mySuperSafeFunction(); // this function can't corrupt memory, can it? It's @safe!
    writeln("---");
}

module thirdparty;

@system: // just so we're clear

void thirdPartyWork() {
    auto nasty = new Nasty;
}

private:

void corruptMemory() {
    import std.stdio;
    writeln("I've just corrupted your heap, or maybe your stack, mwahahahah");
}

class Nasty {
    ~this() {
        corruptMemory();
    }
}

Thus, even if you wrote an entirely "@safe" library, someone using it may encounter a nasty bug. Worse yet, they may not. Because whether or not the GC calls finalizers depends on the overall state of the program. And certainly, *your* code may trigger undefined behaviour in third-party code, and vice versa.
Now, of course things wouldn't be called "Nasty" and "corruptMemory". They'll have innocent names, and nothing conspicuous at a glance. Because that's just how bugs are. The point is, GC can't deliver on the @safe promise, at least in the language as it is at the moment.

> I write a destructor once, then I never think about it again. It's a lot different from worrying about closing resources all the time. I don't write `scope(exit)` unless it's only once as well, otherwise I wrap the code in an RAII struct.

Ok, same goes for memory management. What's your point?

>> Why is Socket a class, blown up from a puny 32-bit value to a bloated who-knows-how-many-bytes monstrosity? Will that socket close if you rely on the GC? Yes? No? Maybe? Why?
>
> I don't know. I don't think Socket should even have been a class. I assume it was written in the D1 days.

Exactly. But it's in a "standard" library. So if I "don't want to think about it" I'll use that, right?

>> Can I deploy the compiler on a remote machine with limited RAM and expect it to always successfully build my projects and not run out of memory?
>
> If the compiler had the GC turned on, yes. That's not a point about GC, it's a point about dmd.

The answer is no. At least dmd will happily run out of memory; that's just the way it is.
It's a point about programmers not caring about what they're doing; whether it's GC or not is irrelevant. Only in this case, it's also about programmers advising others to do so.
October 13, 2018
On Saturday, 13 October 2018 at 13:17:41 UTC, Atila Neves wrote:

>> Then five years later, try and hunt down that mysterious heap corruption. Caused by some destructor calling into buggy third-party code. Didn't want to think about that one either?
>
> That hasn't happened to me.

It rarely does indeed. Usually it's someone else that has to sift through your code and fix your bugs years later. Because by that time you're long gone on another job, happily writing more code without thinking about it.

>> There is no "sometimes" here. You're writing programs for specific machines. All. The. Time.
>
> I am not. The last time I wrote code for a specific machine it was for my 386, probably around 1995.

Yes you are. Or what, you're running your executables on a 1990 issue calculator? :P Somehow I doubt that.

>>>> Precisely where in memory your data is, how it got there and how it's laid out should be bread and butter of any D programmer.
>>>
>>> Of any D programmer writing code that's performance sensitive.
>>
>> All code is performance sensitive.
>
> If that were true, nobody would write code in Python. And yet...

Nobody would write code in Python if Python didn't exist. That it exists means there's a demand. Because there are an awful lot of folks who just "don't want to think about it".
Remember the 2000s? Everybody and their momma was a developer. Web developer, Python, Java, take your pick. Not that they knew what they were doing, but it was a good time to peddle crap.
Now, Python in and of itself is not a terrible language. But people write *system* tools and scripts with it. WTF? I mean, if you couldn't care less how the machine works, you have *no* business developing *anything* for an OS.

>> If it's not speed, it's power consumption. Or memory. Or I/O.
>
> Not if it's good enough as it is. Which, in my experience, is frequently the case. YMMV.

That is not a reason to intentionally write *less* efficient code.

>> "Not thinking" about any of that means you're treating your power champion horse as if it was a one-legged pony.
>
> Yes. I'd rather the computer spend its time than I mine. I value the latter far more than the former.

And what if your code wastes someone else's time at some later point? Hell with it, not your problem, right?

>> Advocating the "not thinking" approach makes you an outright evil person.
>
> Is there a meetup for evil people now that I qualify? :P

Any gathering of like-minded programmers will do.
October 13, 2018
On Saturday, 13 October 2018 at 14:43:22 UTC, Stanislav Blinov wrote:
> On Saturday, 13 October 2018 at 13:17:41 UTC, Atila Neves wrote:
>
>> [...]
>
> It rarely does indeed. Usually it's someone else that has to sift through your code and fix your bugs years later. Because by that time you're long gone on another job, happily writing more code without thinking about it.
>
> [...]

Not everyone has the time or the skills to do manual memory management. Even more so when correctness is way more important than speed.

Not everything needs to be fast.
October 14, 2018
On Saturday, 13 October 2018 at 21:44:45 UTC, 12345swordy wrote:

> Not everyone has the time or the skills to do manual memory management. Even more so when correctness is way more important than speed.
>
> Not everything needs to be fast.

That's the lamest excuse I've ever seen. If you can't be bothered to acquire one of the most relevant skills for writing code for modern systems, then:

a) Ideally, you shouldn't be writing code
b) At the very least, you're not qualified to give any advice pertaining to writing code

PS. "Correctness" also includes correct use of the machine and it's resources.