6 days ago
On Fri, Jan 14, 2022 at 06:20:58AM +0000, Araq via Digitalmars-d-announce wrote:
> On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
> > It takes 10x the effort to write a shell-script substitute in C++ because the language works against me at every turn -- I can't avoid dealing with memory management issues: should I use malloc/free and fix leaks / dangling pointers myself? Should I use std::auto_ptr? Should I use std::shared_ptr? Write my own refcounted pointer for the 15th time?  Half my APIs would be cluttered with memory management paraphernalia, and half my mental energy would be spent fiddling with pointers instead of MAKING PROGRESS IN MY PROBLEM DOMAIN.
> > 
> > With D, I can work at the high level and solve my problem long before I even finish writing the same code in C++.
> 
> Well, C++ ships with unique_ptr and shared_ptr, so you don't have to roll your own. And you can use them and be assured that the performance profile of your program doesn't suddenly collapse when the data/heap grows too big, since the cost of these tools is independent of heap size.

That's not entirely accurate. Using unique_ptr or shared_ptr does not guarantee you won't get a long pause when the last reference to a large object graph goes out of scope, for example, and a whole bunch of dtors get called all at once. In code that's complex enough to warrant shared_ptr, the point at which this happens is likely not predictable (if it were, you probably wouldn't have needed shared_ptr in the first place).


> (What does D's GC assure you? That it won't run if you don't use it?
> That's such a low bar...)

When I'm writing a shell-script substitute, I DON'T WANT TO CARE about memory management, that's the point. I want the GC to clean up after me, no questions asked.  I don't want to spend any time thinking about memory allocation issues.  If I need to manually manage memory, *then* I manually manage memory and don't use the GC.  D gives me that choice. C++ forces me to think about memory allocation WHETHER I WANT TO OR NOT.

And unique_ptr/shared_ptr don't help in this department, because their use percolates through all of my APIs. I cannot pass a unique_ptr to an API that takes only shared_ptr, and vice versa, without jumping through hoops.  Having a GC lets me completely eliminate memory management concerns from my APIs, resulting in cleaner APIs and less time wasted fiddling with memory management.  WHEN performance demands it, THEN I can delve into the dirty details of how to manually manage memory.  When performance doesn't really matter, I don't care, and I don't *want* to care.


> Plus with D you cannot really work at the "high level" at all, it is full of friction. Is this data const? Or immutable? Is this @safe? @system? Should I use @nogc?

When I'm writing a shell-script substitute, I don't care about const/immutable or @safe/@system.  Let all data be mutable for all I care, it doesn't matter.  @nogc is a waste of time in shell-script substitutes.  Just use templates and let the compiler figure out the attributes for you.
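To sketch what "let the compiler figure out the attributes" buys you (the function and names here are invented for illustration): because `sum` below is a template, the compiler infers @safe, pure, nothrow and @nogc per instantiation, and I never have to write any of them.

```d
import std.stdio : writeln;

// Template function: attributes are inferred per instantiation,
// so nothing needs to be annotated by hand.
T sum(T)(const T[] values)
{
    T total = 0;
    foreach (v; values)
        total += v;
    return total;
}

void main()
{
    // The int instantiation comes out inferred @safe pure nothrow @nogc.
    writeln(sum([1, 2, 3, 4]));
}
```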

When I'm designing something longer term, *then* I worry about const/immutable/etc.  And honestly, I hardly ever bother with const/immutable, because IME they just become a needless encumbrance past the first few levels of abstraction.  They preclude useful things like caching, lazy initialization, etc., and are not worth the effort except for leaf-node types.  There's nothing wrong with mutable by default in spite of what academic types tell you.


> Are exceptions still a good idea?

Of course they're still a good idea.  Especially in a shell-script substitute, where I don't want to waste my time checking error codes and all of that nonsense. Just let it throw an exception and die when something fails; that's good enough.

If exceptions ever become a problem, you're doing something wrong. Only in rare cases do you actually need nothrow -- in hotspots identified by a profiler where try-blocks actually make a material difference. 90% of code doesn't need to worry about this.
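In a shell-script substitute, "just let it throw" looks something like this (`parsePort` is an illustrative helper, not a library function): no error-code plumbing anywhere in the call chain, and if input is bad, the uncaught exception kills the program with a message, which is exactly what you want from a script.

```d
import std.conv : to;

// std.conv.to throws ConvException on bad input; callers don't
// check anything -- an uncaught throw just terminates the script.
int parsePort(string s)
{
    return s.to!int;
}

void main(string[] args)
{
    import std.stdio : writeln;
    writeln(parsePort(args.length > 1 ? args[1] : "8080"));
}
```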


> Should I use interfaces or inheritance?  Should I use class or struct?

For shell-script substitutes?  Don't even bother with OO. Just use structs and templates with attribute inference; job done.

Honestly, even for most serious programs I wouldn't bother with OO unless the problem domain actually maps well onto the OO paradigm. Most problem domains are better handled with data-only types and external operations on them.  Only in limited domains is OO actually useful. Even many polymorphic data models are better handled in ways other than OO (like ECS for runtime dynamic composition).


> Pointers or inout?

inout is a misfeature. Avoid it like the plague.

As for pointers vs. non-pointers: thanks to type inference and `.` working for both pointers and non-pointers, most of the time you don't even need to care.  I've written lots of code where I started with a non-pointer and later decided to change it to a pointer (or vice versa) -- most of the code that works with it doesn't need to change. I just change the type definition and maybe one or two places where the difference actually matters, and type inference takes care of the rest. No such nonsense as needing to change '.' to '->' in 50 different places, or respell types in 25 different modules scattered across the program. `auto` and templated types are your friends.  Let the compiler figure out what the concrete types are -- that's its job; the human shouldn't need to constantly fiddle with this manually except in a few places.
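A small sketch of the "change the type, not the call sites" point (`Config` and `describe` are made-up names for the example): the same template body compiles whether it gets a `Config` or a `Config*`, because `.` auto-dereferences pointers in D.

```d
import std.format : format;

struct Config
{
    string name;
    int verbosity;
}

// Works identically for Config and Config*: `.` auto-dereferences,
// so there is no '.' vs '->' split, and the template parameter
// absorbs whichever concrete type the caller settled on.
string describe(C)(C cfg)
{
    return format("%s (verbosity %s)", cfg.name, cfg.verbosity);
}

void main()
{
    auto byValue = Config("build", 2);
    auto byPointer = new Config("deploy", 1);  // this one is a Config*
    assert(describe(byValue) == "build (verbosity 2)");
    assert(describe(byPointer) == "deploy (verbosity 1)");
}
```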


> There are many languages where it's much easier to focus on the PROBLEM DOMAIN. Esp if the domain is "shell-script substitute".

I'm curious. Do you have any actual examples to show?


T

-- 
Some days you win; most days you lose.
On Fri, Jan 14, 2022 at 03:51:17AM +0000, forkit via Digitalmars-d-announce wrote:
> On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
> > 
> > How is using D "losing autonomy"?  Unlike Java, D does not force you to use anything. You can write all-out GC code, you can write @nogc code (slap it on main() and your entire program will be guaranteed to be GC-free -- statically verified by the compiler). You can write functional-style code, and, thanks to metaprogramming, you can even use more obscure paradigms like declarative programming.
> > 
> 
> I'm talking about the 'perception of autonomy' - which will differ between people. Actual autonomy does not, and cannot, exist.
> 
> I agree that if a C++ programmer wants the autonomy of choosing between GC or not in their code, then they really don't have that autonomy in C++ (well, of course they do actually - but some hoops need to be jumped through).

IMO, 'autonomy' isn't the notion you're looking for.  The word I prefer to use is *empowerment*.  A programming language should be a toolbox filled with useful tools that you can use to solve your problem.  It should not be a straitjacket that forces you to conform to what its creators decided is good for you (e.g., Java), nor should it be a minefield full of powerful but extremely dangerous explosives that you have to be very careful not to touch in the wrong way (e.g., C++). It should let YOU decide what's the best way to solve a problem -- and give you the tools to help you on your way.

I mean, you *can* write functional-style code in C if you really, really wanted to -- but you will face a lot of friction and it will be a constant uphill battle. The result will be a huge unmaintainable mess. With D, UFCS gets you 90% of the way there, and the syntax is even pleasant to read.  Functional not your style? No problem, you can do OO too. Or just plain ole imperative. Or all-out metaprogramming.  Or a combination of all four -- the language lets you intermingle all of them in the *same* piece of code.  I've yet to find another language that actively *encourages* you to mix multiple paradigms together into a seamless whole.
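To show what "UFCS gets you 90% of the way there" means in practice (`evenSquares` is an invented example function): a chain of range algorithms reads left-to-right like a shell pipeline, even though `filter` and `map` are plain free functions from Phobos.

```d
import std.algorithm : filter, map;
import std.array : array;
import std.range : iota;

// UFCS lets free functions chain like methods: evens from 1..n,
// squared, collected into an array.
int[] evenSquares(int n)
{
    return iota(1, n + 1)
        .filter!(i => i % 2 == 0)
        .map!(i => i * i)
        .array;
}

void main()
{
    assert(evenSquares(10) == [4, 16, 36, 64, 100]);
}
```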

Furthermore, the language should empower you to do what it does -- for example, user-defined types ought to be able to do everything built-in types can.  Built-in stuff shouldn't have "magical properties" that cannot be duplicated in a user-defined type.  The language shouldn't hide magical properties behind a bunch of opaque, canned black-box solutions that you're not allowed to look into.  The fact that D's GC is written in D, for example, is a powerful example of not hiding things behind opaque black-boxes. You can, in theory, write your own GC and use that instead of the default one.
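As a toy illustration of user-defined types getting built-in syntax (`Meters` is invented for the example): operator overloading via `opBinary` lets a struct compose with `+` exactly the way a `double` does.

```d
// A user-defined type with built-in-like surface: opBinary supplies
// `+`, so Meters values add with the same syntax as numbers.
struct Meters
{
    double value;

    Meters opBinary(string op : "+")(Meters rhs) const
    {
        return Meters(value + rhs.value);
    }
}

void main()
{
    auto total = Meters(3.0) + Meters(4.5);
    assert(total.value == 7.5);
}
```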

D doesn't completely meet my definition of empowerment, of course, but it's pretty darned close -- closer than any other language I've used. That's why I'm sticking with it, in spite of various flaws that I'm not going to pretend don't exist.

As for why anyone would choose something over another -- who knows. My own choices and preferences have proven to be very different from the general population, so I'm not even gonna bother to guess how anyone else thinks.


T

-- 
English is useful because it is a mess. Since English is a mess, it maps well onto the problem space, which is also a mess, which we call reality. Similarly, Perl was designed to be a mess, though in the nicest of all possible ways. -- Larry Wall
On Fri, Jan 14, 2022 at 09:18:23AM +0000, Paulo Pinto via Digitalmars-d-announce wrote:
> On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
[...]
> > How is using D "losing autonomy"?  Unlike Java, D does not force you to use anything. You can write all-out GC code, you can write @nogc code (slap it on main() and your entire program will be guaranteed to be GC-free -- statically verified by the compiler). You can write functional-style code, and, thanks to metaprogramming, you can even use more obscure paradigms like declarative programming.
[..]
> When languages are compared in grammar and semantics alone, you are fully correct.
> 
> Except we have this nasty thing called an eco-system, where libraries, IDE tooling, OS, team mates, books, contractors, .... are also part of the comparison.
[...]

That's outside of the domain of the language itself.  I'm not gonna pretend we don't have ecosystem problems, but that's a social issue, not a technical one.

Well OK, maybe IDE tooling is a technical issue too... but I write D just fine in Vim. Unlike Java, using an IDE is not necessary to be productive in D. You don't have to write aneurysm-inducing amounts of factory classes and wrapper types just to express the simplest of abstractions.  I see an IDE for D as something nice to have, not an absolute essential.


> Naturally C# 10 was only an example among several possible ones that have a flourishing ecosystem and keep getting the features only D could brag about when Andrei's book came out 10 years ago.

IMNSHO, D should forget all pretenses of being a stable language, and continue to evolve as it did 5-10 years ago.  D3 should be a long-term goal, not a taboo that nobody wants to talk about.  But hey, I'm not the one making decisions here, and talk is cheap...


T

-- 
Give me some fresh salted fish, please.
On Wednesday, 12 January 2022 at 20:41:56 UTC, Walter Bright wrote:
>
You nailed it. Bravo :)


On Friday, 14 January 2022 at 14:29:54 UTC, H. S. Teoh wrote:
>
Well explained. :)

On 1/14/22 1:20 AM, Araq wrote:

> Plus with D you cannot really work at the "high level" at all, it is full of friction. Is this data const? Or immutable? Is this @safe? @system? Should I use @nogc? Are exceptions still a good idea? Should I use interfaces or inheritance? Should I use class or struct? Pointers or inout? There are many languages where it's much easier to focus on the PROBLEM DOMAIN. Esp if the domain is "shell-script substitute".

I realize you have a different horse in the language race, but this statement is a complete strawman (as countless existing "high level" D projects demonstrate).

You might as well say that C is unusable at a high level vs. javascript because you need to decide what type of number you want, is it int, float, long? OMG SO MANY CHOICES.

-Steve


On Friday, 14 January 2022 at 18:54:26 UTC, Steven Schveighoffer wrote:

> You might as well say that C is unusable at a high level vs. javascript because you need to decide what type of number you want, is it int, float, long? OMG SO MANY CHOICES.

Bad choice of example… C is close to unusable at a high level and C++ is remarkably unproductive if you only want to do high level stuff.

But yes, the problem with D const isn't that there are many choices. The problem is that there is only one over-extended choice.

On Friday, 14 January 2022 at 14:50:50 UTC, H. S. Teoh wrote:
>
> IMO, 'autonomy' isn't the notion you're looking for.  The word I prefer to use is *empowerment*.  A programming language should be a toolbox filled with useful tools that you can use to solve your problem.  It should not be a straitjacket that forces you to conform to what its creators decided is good for you (e.g., Java), nor should it be a minefield full of powerful but extremely dangerous explosives that you have to be very careful not to touch in the wrong way (e.g., C++). It should let YOU decide what's the best way to solve a problem -- and give you the tools to help you on your way.

Yes, trying to reduce a concept into a word can be tricky.

Even so, 'autonomy' is the right word I think:

'the capacity of an agent to act in accordance with an objective'.

I've found the D programming language 'empowers' me to be more 'autonomous' (or at least, to more 'easily' be autonomous). I don't feel like D restricts me before I even begin, the way other languages often do (or the learning curve associated with their syntax does).

So I'm far less concerned about features, and more interested in how a programming language empowers autonomy.
