On Fri, Jan 14, 2022 at 06:20:58AM +0000, Araq via Digitalmars-d-announce wrote:
> On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
> > It takes 10x the effort to write a shell-script substitute in C++ because the language works against me at every turn -- I can't avoid dealing with memory management issues: should I use malloc/free and fix leaks / dangling pointers myself? Should I use std::auto_ptr? Should I use std::shared_ptr? Write my own refcounted pointer for the 15th time? Half my APIs would be cluttered with memory management paraphernalia, and half my mental energy would be spent fiddling with pointers instead of MAKING PROGRESS IN MY PROBLEM DOMAIN.
> > With D, I can work at the high level and solve my problem long before I even finish writing the same code in C++.
> Well, C++ ships with unique_ptr and shared_ptr, so you don't have to roll your own. And you can use them and be assured that your program's performance profile doesn't suddenly collapse when the data/heap grows too big, since these tools guarantee behavior independent of heap size.
That's not entirely accurate. Using unique_ptr or shared_ptr does not guarantee you won't get a long pause: when the last reference to a large object graph goes out of scope, for example, a whole bunch of dtors get called all at once. In code that's complex enough to warrant shared_ptr, the point at which this happens is likely not predictable (if it were predictable, you wouldn't have needed shared_ptr in the first place).
> (What does D's GC assure you? That it won't run if you don't use it?
> That's such a low bar...)
When I'm writing a shell-script substitute, I DON'T WANT TO CARE about memory management, that's the point. I want the GC to clean up after me, no questions asked. I don't want to spend any time thinking about memory allocation issues. If I need to manually manage memory, *then* I manually manage memory and don't use the GC. D gives me that choice. C++ forces me to think about memory allocation WHETHER I WANT TO OR NOT.
And unique_ptr/shared_ptr don't help in this department, because their use percolates through all of my APIs. I cannot pass a unique_ptr to an API that expects a shared_ptr, or vice versa, without jumping through hoops. Having a GC lets me completely eliminate memory management concerns from my APIs, resulting in cleaner APIs and less time wasted fiddling with pointers. WHEN performance demands it, THEN I delve into the dirty details of manual memory management. When performance doesn't really matter, I don't care, and I don't *want* to care.
> Plus with D you cannot really work at the "high level" at all, it is full of friction. Is this data const? Or immutable? Is this @safe? @system? Should I use @nogc?
When I'm writing a shell-script substitute, I don't care about const/immutable or @safe/@system. Let all data be mutable for all I care, it doesn't matter. @nogc is a waste of time in shell-script substitutes. Just use templates and let the compiler figure out the attributes for you.
When I'm designing something longer-term, *then* I worry about const/immutable/etc. And honestly, I hardly ever bother with const/immutable, because IME they become a needless encumbrance past the first few levels of abstraction. They preclude useful things like caching and lazy initialization, and are not worth the effort except for leaf-node types. There's nothing wrong with mutable-by-default, in spite of what the academic types tell you.
> Are exceptions still a good idea?
Of course they're a good idea. Especially in a shell-script substitute, where I don't want to waste my time checking error codes and all that nonsense. Just let it throw an exception and die when something fails; that's good enough.
If exceptions ever become a problem, you're doing something wrong. Only in rare cases do you actually need nothrow -- in hotspots identified by a profiler where try-blocks actually make a material difference. 90% of code doesn't need to worry about this.
> Should I use interfaces or inheritance? Should I use class or struct?
For shell script substitutes? Don't even bother with OO. Just use structs and templates with attribute inference, job done.
Honestly, even for most serious programs I wouldn't bother with OO unless the problem domain actually maps well onto the OO paradigm. Most problem domains are better handled with data-only types and external operations on them; OO is actually useful only in limited domains. Even many polymorphic data models are better handled in ways other than OO (like ECS for runtime dynamic composition).
> Pointers or inout?
inout is a misfeature. Avoid it like the plague.
As for pointers vs. non-pointers: thanks to type inference and `.` working for both pointers and non-pointers, most of the time you don't even need to care. I've written lots of code where I started with a non-pointer and later changed it to a pointer (or vice versa). Most of the code that uses it didn't need to change: I updated the type definition, plus maybe 1 or 2 places where the difference actually matters, and type inference took care of the rest. No such nonsense as changing '.' to '->' in 50 different places, or respelling types in 25 different modules scattered across the program. `auto` and templated types are your friends. Let the compiler figure out the concrete types -- that's its job; the human shouldn't have to fiddle with this manually except in a few places.
> There are many languages where it's much easier to focus on the PROBLEM DOMAIN. Esp if the domain is "shell-script substitute".
I'm curious. Do you have any actual examples to show?
Some days you win; most days you lose.