April 11, 2019
On Thursday, 11 April 2019 at 20:41:00 UTC, H. S. Teoh wrote:
> On Thu, Apr 11, 2019 at 03:00:44PM -0400, Nick Sabalausky
>> English has all sorts of words that are rarely, if ever, used. But does anybody really complain that English has too many words? No.
>
> Second-language learners would object, but OK, I get your point. :-P

My received wisdom from language school:

- There are five categories of language difficulty.

- Spanish is an example Cat II language, so not that hard at all.
  You can be competent in it in less than half a year of study.

- Chinese is an example Cat IV language. For obvious reasons.

- Korean is an example Cat V language. For this reason: Koreans
  speak really fast. Even though the language might intrinsically
  have learning advantages over Chinese, as practiced by Koreans,
  it's harder to be competent in it.

- English is the only other Cat V language. Because of the
  unparalleled vocabulary bloat, it's extremely hard to present
  yourself as a competent speaker.


Rather than hear that more features are axiomatically good, I'd like
to hear that D's actual featureset is good. And from the opposite
side, I'd rather hear "this feature right here is too much, for this
reason".

If it's just a battle of impressions, then yes, I very quickly got
the impression that D has tons of features. That was solidified
when I tried a JS-style lambda and it worked(!), and then I couldn't
even find any documentation showing that it *should* work. So,
might this tend to make randomly-selected D code harder to
understand, because it has the potential to use all these features
that you then have to know about? Sure. It follows that D *should*
be harder to read than a language with fewer features, simply
because of everything other people's code could possibly exhibit.
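For the record, this is the kind of JS-style lambda I mean; a minimal sketch (the code and names are mine, not from any particular project):

    import std.algorithm : map;
    import std.array : array;
    import std.stdio;

    void main()
    {
        // Arrow-style lambda; looks just like the JS/C# syntax:
        auto inc = (int x) => x + 1;
        writeln(inc(41));                           // 42

        // The same short form works as a template alias argument:
        writeln([1, 2, 3].map!(x => x * 2).array);  // [2, 4, 6]
    }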

Do I get the impression that D is actually harder to read, though?
Not yet. That's a feat of language design -- that even though there
are many pieces, they fit together well enough that you focus on
the whole rather than the parts. Bad example: it's easy to think of
an esolang that's enormously simpler than D, but whose code you'd
have a much harder time understanding than typical D code.

April 11, 2019
On Thursday, 11 April 2019 at 14:18:16 UTC, Andre Pany wrote:
> Do you know this science project https://github.com/biod? The developers really like the D Programming Language. (https://github.com/biod/sambamba/issues/389#issuecomment-468475615)

I do now, thanks. You should be aware that there are similar projects, not just in bioinformatics, where Rust is the language of choice. That doesn't make me think Rust is well suited for data science. As I said earlier, I now think of Julia as my preferred language for that set of tasks; that was the focus of its design. My sense is that D was originally going to be a better C++, but that didn't pan out for a number of reasons. Of the current alternatives to C++, Rust appears to have the most traction and momentum, but it is still a very long way from being a serious competitor.

Now that there seems to be a desire from on high to revisit some D issues and change things, maybe D will gain in popularity, but for what I would use D for, fewer features would make a better language IMO.

April 12, 2019
On Monday, 8 April 2019 at 22:49:10 UTC, bachmeier wrote:
> On Monday, 8 April 2019 at 18:59:25 UTC, Abdulhaq wrote:
>> The problem with AST macros, and Walter seems to agree with this POV, is that every medium to large project will have its own private language that ripples throughout the code. Only a few of the developers will really understand this new language and how it can safely be used and where its pitfalls are. It will be poorly documented and a nightmare for new developers.
>
> That's a valid criticism. It's also odd coming from a language like D where "good code" is generic on steroids and extremely hard to work with. I've been using D for six years and still struggle to use Phobos at times.

It can be countered by culture. Python is very flexible in what you can do, yet there is a culture that gives preference to libraries that are somewhat consistent with the standard library.

Anyway, AST macros should not be available on the "application layer", only on the "library layer".

The real problem with AST macros is that you need a minimal base language that is really solid. And you cannot modify that base language after release. (you can do it in a later major version, but you cannot automatically upgrade library code written for the previous version)

It would be a completely different philosophy to what you have now. It would be on the other side of the spectrum in some ways.


April 14, 2019
On 4/11/19 4:41 PM, H. S. Teoh wrote:
> On Thu, Apr 11, 2019 at 03:00:44PM -0400, Nick Sabalausky (Abscissa) via Digitalmars-d wrote:
>> On 4/11/19 2:45 PM, Nick Sabalausky (Abscissa) wrote:
>>> Put simply: The question itself is flawed.
>>>
>>> A language is a workshop.
>>>
>>
>> Another way to look at it:
>>
>> English has all sorts of words that are rarely, if ever, used. But
>> does anybody really complain that English has too many words? No.
> 
> Second-language learners would object, but OK, I get your point. :-P
> 

Ha ha, true, good point :). But I'd say that applies to ANY non-primary human-to-human language.

There are other languages I've tried to learn - turns out, I'm *HORRIBLE* at both memorization in general and at learning human-only languages. With the one-two combo of both great personal effort and great personal interest, I've managed to pick up a TINY bit of "nihongo" (which I take FAR too much pride in, given the minuscule extent of my still sub-fluent ability). Much as I like to pride myself on my ability to learn computer languages, I'm completely convinced I would NEVER have been able to gain any level of fluency in English if I hadn't been born into an English-speaking culture.

Seriously, you don't even know how much respect and admiration I have for the ESL crowd - those folks who actually manage to learn any functional amount of this completely insane, nonsensical, absolutely ridiculous language as a secondary language. To me, the biggest real-world "superheroes"/"superpowers" are the bilingual/multilingual folk, no doubt about it. Seriously, I feel like I outright *CHEATED* by being born into an English-speaking culture!!!

>> Does the presence of all those extra unused words get in the way of
>> millions of people successfully using the language to communicate
>> ideas on a daily basis? No.
> 
> Exactly, there's a core of ideas that need to be communicated, and a
> language needs to have at least that set of basic vocabulary in order to
> be useful.  One could argue that that's all that's needed -- if you're
> going for language minimalism.  OTOH, having a way to express more
> advanced concepts beyond that basic vocabulary comes in handy when you
> need to accomplish more advanced tasks.

Yup! Two basic truths are relevant here:

A: Possessing a tool doesn't mean you have to use it, or even know how to use it. But lacking a tool GUARANTEES that you CAN'T use it.

B: Problems have inherent complexity. This complexity can either be abstracted away by your language (or lib) or manifest in your code - your choice.

> Sure, any old Turing-complete
> language is in theory sufficient to express any conceivable computation,
> but the question is how effectively it can be used to communicate the
> ideas pertaining to that computation.  I wouldn't want to write a GUI
> app in lambda calculus, for example.

Actually, I find even that to give Turing-completeness FAR too much credit...

A true Turing machine is incapable of O(1) random-access. A Turing machine's random-access is inherently O(n). Even worse, a Turing machine is also incapable of ANYTHING less than O(n) for random-access.

But real-world computing machines (arguably) ARE capable of O(1) random-access. Or at the *very least*, accounting for cache effects and such, real-world machines are capable of better-than-O(n) random-access, which is clearly far beyond the capabilities of a pure Turing machine.
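To illustrate the difference, a toy sketch (the `Tape` type and its names are made up; it's not a real Turing machine, just the head-movement constraint):

    import std.stdio;

    struct Tape
    {
        int[] cells;
        size_t head;

        int readAt(size_t target)
        {
            // The head can only move one cell per step, so reaching
            // cell i from cell 0 costs i steps: O(n) random access.
            while (head < target) ++head;
            while (head > target) --head;
            return cells[head];
        }
    }

    void main()
    {
        auto tape = Tape([10, 20, 30, 40]);
        writeln(tape.readAt(3));   // 40, reached by stepping the head 3 times
        writeln(tape.cells[3]);    // 40, a RAM-style O(1) indexed load
    }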

Seriously, I'm convinced *all* CS students should be absolutely REQUIRED to take a mandatory course of "CS nnn: Everything Turing-Completeness Does **NOT** Imply". I've come across FAAAARRRR TOO MANY otherwise well-educated sheeple who erroneously seem to equate "Turing-completeness" with "Real-world computer capabilities", and that is just...patently...NOT...TRUE!!!

The truth is that "Turing complete" (much like the incredibly misleading LR-parsing literature) is *purely* focused on "Can this be computed AT ALL?" and has ZERO relationship to anything else that actually matters *IN REALITY*, such as algorithmic complexity (ie, big-O) and anything relating to the practical usefulness of the resulting data. But...no CS student in the world seems to know *ANY* of this. Shame. For SHAME...

> 
> But how much to include and what to exclude is a complex question that,
> AFAICT, has no simple answer.  Therein lies the rub.
> 
> Personally, I see a programming language as a kind of (highly)
> non-linear, vector-space-like thing.  You have the vast space of
> computations, and you need to find the "basis vectors" that can span
> this space (primitives that can express any conceivable computation).
> There are many possible such basis sets, but some are easier for humans
> to work with than others.  Theoretically, as long as the set is Turing
> complete, that's good enough.  However, amid the vast space of all
> possible computations, some computations are more frequently needed than
> others, and thus, in the ideal case, your basis set should be optimized
> for this frequently-used subspace of computations (without compromising
> the ability to span the entire space).  However, this is a highly
> non-trivial optimization problem, esp. because this is a highly
> non-linear space.  And one that not everyone will agree on, because the
> subset of computations that each person may want to span will likely
> differ from person to person.  Finding a suitable compromise that works
> for most people (ideally all, but I'm not holding my breath for that
> one) is an extremely hard problem.
> 

These are interesting ideas. I'll have to give them more thought...

> 
>> Certainly there ARE things that hinder effective communication through
>> English. Problems such as ambiguity, overly-weak definitions, or
>> LACKING sufficient words for an idea being expressed. But English's
>> toolbox (vocabulary) being too full isn't really such a big problem.
> 
> Actually, natural language is surprisingly facile, expressive, and
> efficient at every day conversations, because it's built upon the human
> brain's exceptional ability to extract patterns (sometimes even where
> there aren't any :-P) and infer meaning implied from context -- the
> latter being something programming languages are still very poor at.
> Even when a language doesn't have adequate words to express something,
> it's often possible to find paraphrases that can. Which, if it becomes
> widespread, can become incorporated into the language along with
> everything else.

Excellent points... Frankly, I'm going to have to re-read all of this after some sleep...

April 14, 2019
On Thursday, 11 April 2019 at 20:41:00 UTC, H. S. Teoh wrote:
> On Thu, Apr 11, 2019 at 03:00:44PM -0400, Nick Sabalausky

>
> Personally, I see a programming language as a kind of (highly) non-linear, vector-space-like thing.  You have the vast space of computations, and you need to find the "basis vectors" that can span this space (primitives that can express any conceivable computation).
>
>

It's a good analogy and actually I think a lot of people think like this even if they are not familiar with the mathematics. For instance, it's common to think of features being 'orthogonal' and finding difficult 'corner cases'. I do strongly feel that the more features / dimensions / basis vectors that there are, the more corner cases there are - and perhaps the number goes up exponentially.


April 14, 2019
On Sunday, 14 April 2019 at 06:01:23 UTC, Nick Sabalausky (Abscissa) wrote:

> Yup! Two basic truths are relevant here:
>
> A: Possessing a tool doesn't mean you have to use it, or even know how to use it. But lacking a tool GUARANTEES that you CAN'T use it.
>

When we're talking about *using* features in a clean, mature language with orthogonal features, then yes, the language user really can take them or leave them.

But on the language compiler side, when a feature needs changing or extending, or a new feature needs to be added, then the compiler engineer has to consider every existing feature, geometrically combined with n other features, in who knows how many possible contexts, and convince herself that they can all be made to continue to work the way the language user expects. As the number of existing features goes up, the difficulty of this process grows at least geometrically, and you end up with a situation where very few people have the skill, intelligence, time and inclination to do the work and continue to maintain it. Ultimately it could even reach a point where making it 'whole' is practically impossible (I'm in no way referring to D here, just recognising the risk of uncontained feature growth).

> B: Problems have inherent complexity. This complexity can either be abstracted away by your language (or lib) or manifest in your code - your choice.
>


April 14, 2019
On Thursday, 11 April 2019 at 22:21:30 UTC, Julian wrote:
> Do I get the impression that D is actually harder to read, though?
> Not yet. That's a feat of language design -- that even though there
> are many pieces, they fit together well enough that you focus on
> the whole rather than the parts. Bad example: it's easy to think of
> an esolang that's enormously simpler than D, but whose code you'd
> have a much harder time understanding than typical D code.

For me the toughest part of D is dealing with heavily templated code. Documentation often uses auto for those. Unfortunately IDEs don't always help with templates, so often I am stuck with an object and I don't even know what its type is or what I can do with it. Without example code, I am usually stuck. Also, many routines return something like Result!(Foo, Bar). In these cases it's recommended to use auto, which is OK for local variables, but if you want to pass the result to some other function - good luck figuring out the actual type of the variable so that you can declare it in the function's parameter list.
April 14, 2019
On Thursday, 11 April 2019 at 18:45:48 UTC, Nick Sabalausky (Abscissa) wrote:
> Put simply: The question itself is flawed.

Allow me to alter the question then:

Are D's features too situational / not orthogonal enough?

Take for example these D features:
- debug() statement
- version() statement
- if statement
- static if statement
- ternary operator (condition ? iftrue : iffalse)
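
For concreteness, here is roughly what those five constructs look like side by side; a minimal sketch, with `useFast` standing in as a hypothetical compile-time flag:

    import std.stdio;

    enum useFast = true;   // hypothetical compile-time flag

    void main()
    {
        int x = 1;

        if (x > 0)                    // plain runtime if
            writeln("runtime branch");

        static if (useFast)           // compile-time if; the other branch isn't even compiled
            writeln("fast path");

        version (Posix)               // compile-time platform/version switch
            writeln("built for a Posix target");

        debug writeln("only with -debug");   // debug statement

        auto sign = x >= 0 ? "non-negative" : "negative";   // ternary expression
        writeln(sign);
    }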

Now take the language Zig, which has just if. The if-statement there is also an expression, and together with lazy evaluation it spans an area covering those 5 D-features and more.
Add first-class types, and you have generics (no need for templates, is-expressions, traits).
Add a bottom type (`noreturn`) and you won't need a no-return pragma and special assert(0) semantics. Things like 'break' can be an expression too now.

Zig may not be as expressive as D, but it gets its expressiveness from just a few rules. Having many features available doesn't mean you actually 'span many dimensions', since features can be restricted. Reading the recent __mutable proposal:

"__mutable can only be applied to private, mutable members." [1]

This is more special logic in the language, indicating __mutable is another situational feature instead of a flexible mechanism that can be combined with the rest of the language, creating another 'dimension'. Of course there's rationale; it's introduced to tackle a specific problem, and I don't have a better suggestion either. I'm afraid however that a stream of new situational features leads D down the same path as C++, just a few kilometers behind because C++ had a head start.

Other examples of non-orthogonal features in D are:
- foreach ranges (0..10) and switch case ranges (for general number ranges you need Phobos)
- int notInitialized = void; (a special case; it doesn't follow at all from the type system)
- the many lexical constructs (3 comment types, I don't know how many kinds of string literals)
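
A quick sketch of those, just for illustration (the variable names are mine):

    import std.range : iota;   // the Phobos counterpart of a general number range
    import std.stdio;

    void main()
    {
        foreach (i; 0 .. 3)            // the 0..n interval syntax, usable only in foreach, slices and case ranges
            write(i, ' ');
        writeln();

        int grade = 4;
        switch (grade)
        {
            case 0: .. case 5:         // the separate case-range syntax
                writeln("fail");
                break;
            default:
                writeln("pass");
        }

        writeln(iota(0, 3));           // [0, 1, 2], the library version of a number range

        int notInitialized = void;     // explicitly uninitialized storage
        notInitialized = 42;

        /+ one of D's three comment styles: //, /* */ and nesting /+ +/ +/
    }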

I'm not saying D is bad per se for having more features than necessary, and I do think version() and debug() have merit over if-statements. But lots of features with their own rules and special cases aren't necessary to have a versatile toolbox.

In conclusion I don't agree with the 'bigger toolbox is better, just pick what you need' notion, even ignoring implementation costs.

[1] https://github.com/RazvanN7/DIPs/blob/Mutable_Dip/DIPs/DIP1xxx-rn.md
April 14, 2019
On Sun, Apr 14, 2019 at 08:14:17PM +0000, JN via Digitalmars-d wrote: [...]
> For me the toughest part of D is dealing with heavily templated code. Documentation often uses auto for those. Unfortunately IDEs don't always help with templates, so often I am stuck with an object and I don't even know what its type is or what I can do with it. Without example code, I am usually stuck. Also, many routines return something like Result!(Foo, Bar). In these cases it's recommended to use auto, which is OK for local variables, but if you want to pass the result to some other function - good luck figuring out the actual type of the variable so that you can declare it in the function's parameter list.

There's no need to declare it yourself. Just use:

	// typeof() only inspects the type; someFunc() is never actually called here.
	alias MyType = typeof(someFunc(a,b,c));
	MyType x;       // the alias works anywhere a type name is needed
	...
	x = someFunc(a,b,c);

Most of the time, when the return type is something like Result!(...), usually the intent is that user code *shouldn't* depend on the concrete type, because it may arbitrarily change between library versions.  Using typeof() ensures that it won't break if the return type does change.
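
To make that concrete, here's a self-contained sketch; `someFunc` and `consume` are made-up stand-ins for a library routine returning a Result!(...)-style wrapper and a caller that needs to name that type:

    import std.stdio;

    // A made-up library-style routine; the struct inside is a "Voldemort" type
    // that user code cannot name directly.
    auto someFunc(int a, int b, int c)
    {
        struct Result { int sum; }
        return Result(a + b + c);
    }

    // Let the compiler name the return type (typeof never calls someFunc).
    alias MyType = typeof(someFunc(int.init, int.init, int.init));

    // The alias can now appear in another function's parameter list.
    void consume(MyType r)
    {
        writeln(r.sum);
    }

    void main()
    {
        consume(someFunc(1, 2, 3));   // prints 6
    }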


T

-- 
Some days you win; most days you lose.
December 29, 2019
On Monday, 8 April 2019 at 09:46:06 UTC, adam77 wrote:
> Hello everyone,
>
> I started using D as an alternative to Java. What attracted me to D was the robust memory management (including the way arrays are handled) and interoperability with C (specifically libraries). So far so good, but almost every language out there (maybe with the exception of C) seems to eschew language stability in favour of adopting whatever the latest fad in programming language features is. I see on forums for a number of languages how features like lambdas or auto properties are essential to their language, or, if they are missing some feature, how it's a major detriment to the language. I sometimes wonder how a Turing machine could ever manage...
>
> I'd be interested to hear other people's opinions: does the language have enough features? Is it already overloaded with
> features?
>
> Any help will be appreciated!


