November 15, 2018
On Thursday, 15 November 2018 at 13:29:47 UTC, Andrei Alexandrescu wrote:
> On 11/14/18 4:07 PM, lagfra wrote:
>> https://www.reddit.com/r/cpp/comments/9vwvbz/2018_san_diego_iso_c_committee_trip_report_ranges/
>> 
>> 
>> By 2020 C++ is planning to introduce:
>> 
>> * Ranges
>> * Contracts
>> * Concepts (`__traits`)
>> * Proper constexpr
>> * Modules
>> * Reflections
>> * Green threads
>> 
>> Right now it already has:
>> 
>> * `auto` variables
>> * Ranged for (`foreach`)
>> * Lambda expressions and closures
>> * `nothrow` attributes
>> * Proper containers
>> * Proper RAII
>> 
>> In no way is this the usual troll post (I am a participant in SAoC). What bugs me is the shrinking distance between what D has to offer and what C++ offers. While D certainly has far better syntax (thinking of template declarations, `immutable`, UDAs) and a GC, what are the advantages of using D over C++ if my goal is to build a complex system / product?
>> 
>> TL;DR: what will D offer over C++ when almost all of D's key features are present in C++20(+)?
>
> Thanks for asking. Saw several good answers, to which I feel compelled to add the following.
>
> I just delivered the opening keynote at Meeting C++ (https://meetingcpp.com/2018/Schedule.html). The video should be up in a few days. There's a bit of a Twitter storm going around.

Heh, slides like that will do it:

https://mobile.twitter.com/JamesMcNellis/status/1063000460280377344

> I think C++ is not properly equipped for the next big thing, which is Design by Introspection.

That was a great talk; it finally clicked for me with the overview slides and the checkedint example in that keynote.
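
For those who haven't watched it yet, here's the gist of the checkedint example as I understood it. This is a toy sketch of my own, not Andrei's actual Phobos code; the `Saturate` hook is made up for illustration:

```d
import core.checkedint : adds;

// The DbI part: the wrapper introspects its Hook at compile time and
// adapts itself to whatever the hook actually defines.
struct Checked(T, Hook)
{
    T payload;

    Checked opBinary(string op : "+")(Checked rhs)
    {
        bool overflowed;
        T result = adds(payload, rhs.payload, overflowed);
        // This branch only exists if the hook defines onOverflow:
        static if (__traits(hasMember, Hook, "onOverflow"))
            if (overflowed)
                result = Hook.onOverflow(payload, rhs.payload);
        return Checked(result);
    }
}

// A made-up hook: saturate instead of wrapping around.
struct Saturate
{
    static T onOverflow(T)(T lhs, T rhs)
    {
        return lhs > 0 ? T.max : T.min;
    }
}

void main()
{
    auto a = Checked!(int, Saturate)(int.max);
    auto b = Checked!(int, Saturate)(1);
    assert((a + b).payload == int.max);  // saturated, no wraparound
}
```

Swap in a hook with no onOverflow and that whole branch simply vanishes from the generated code; you pay nothing for what the hook doesn't define.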

> C++ has a history of copying features from D poorly, missing their core point and making the import much harder to use. The example I give in the talk is that C++ initially rejected anything and everything about static if, only to implement it later under a different name ("whatever it is, make sure it's not static if!") while entirely missing the point: if constexpr inserts a new scope, even though NOT introducing a scope is the entire point of the feature in the first place.
>
> So going through the motions is far from achieving real parity. At the same time, C++ is spending a lot of real estate on language features of minor impact (concepts) or mere distractions (metaclasses), both of which aim squarely not at solving difficult issues in problem space, but at patching over difficulties created by the language itself.
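
For anyone who hasn't compared the two features, the scope point in a nutshell (a minimal sketch of my own, not a slide from the talk): static if declares symbols directly into the enclosing scope, so it works at declaration scope, which a scope-introducing if constexpr can't do.

```d
// static if introduces no new scope, so declarations land directly in
// the enclosing struct and are visible to the members that follow.
struct Integer(int bits)
{
    static if (bits <= 32)
        alias T = int;
    else
        alias T = long;

    T value;  // uses T: nothing was walled off inside a nested scope
}

static assert(is(Integer!16.T == int));
static assert(is(Integer!64.T == long));
```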

Let them keep digging deeper into their hole. If you're right about how D is better, someone will build the next great software with D and prove you right.

Speaking of which, Weka might already be it: I'm editing an interview with Liran for the D blog; it should be up soon.
November 15, 2018
On 11/15/2018 06:04 AM, Erik van Velzen wrote:

> If that's true, even templates will be more concise in C++.

We had similar hopes for C++11's 'auto' keyword and the move semantics of 'auto&&'. Anybody who hasn't read explanations of those concepts, like the ones in Scott Meyers's "Effective Modern C++", is fooling themselves about their simplicity.

Another example: can anyone memorize which special member functions the compiler generates depending on the absence, presence, explicit-defaulting, or explicit-deletion (and more? I can't be sure..) of the other special member functions? No.

One of the prominent members of the C++ community is local here in Silicon Valley. He hinted that the goal is to keep improving C++ to avoid it becoming like COBOL, where very few experts remain, who are paid $1M salaries. "We don't want C++ to become like COBOL." My answer is that C++ is heading to exactly the same place, not through natural death but through those very improvements.

Ali

November 15, 2018
On Thu, Nov 15, 2018 at 11:17:43AM -0800, Ali Çehreli via Digitalmars-d wrote:
> On 11/15/2018 06:04 AM, Erik van Velzen wrote:
> > If that's true, even templates will be more concise in C++.
> 
> We had similar hopes for C++11's 'auto' keyword and the move semantics of 'auto&&'. Anybody who hasn't read explanations of those concepts, like the ones in Scott Meyers's "Effective Modern C++", is fooling themselves about their simplicity.
> 
> Another example: can anyone memorize which special member functions the compiler generates depending on the absence, presence, explicit-defaulting, or explicit-deletion (and more? I can't be sure..) of the other special member functions? No.
> 
> One of the prominent members of the C++ community is local here in Silicon Valley. He hinted that the goal is to keep improving C++ to avoid it becoming like COBOL, where very few experts remain, who are paid $1M salaries. "We don't want C++ to become like COBOL." My answer is that C++ is heading to exactly the same place, not through natural death but through those very improvements.
[...]

And that's the problem with C++: because of the insistence on backward compatibility, the only way forward is to engineer extremely delicate and elaborate solutions to work around fundamental language design flaws.  Consequently, very few people actually understand all the intricate and subtle rules that the new constructs obey, and I daresay pretty much nobody fully understands the implications of these intricate and subtle rules when they are used together with other equally intricate and subtle features.  The result is an extremely convoluted, hard to understand, and fragile minefield where every feature interacts with every other feature in complex ways most people don't fully comprehend, and every couple of steps has a pretty high chance of setting off an unexpected explosion somewhere in the code.

Writing C++ code therefore becomes an exercise in navigating the obstacle course of an overly-complex and fragile language, rather than using the language as a tool for conveying the programmer's intent to the machine.  It may be thrilling when you successfully navigate the obstacle course, but when I'm trying to get work done in the problem domain rather than wrestle with language subtleties, I would much rather throw out the obstacle course altogether and hire a better translator of my intent to the machine. Like D.


T

-- 
What do you call optometrist jokes? Vitreous humor.
November 15, 2018
On Thursday, 15 November 2018 at 19:54:06 UTC, H. S. Teoh wrote:
> On Thu, Nov 15, 2018 at 11:17:43AM -0800, Ali Çehreli via Digitalmars-d wrote:

>> "We don't want C++ become like COBOL." My answer is, C++ is heading exactly the same place not through natural death but through those improvements.
> [...]
>
> And that's the problem with C++: because of the insistence on backward compatibility, the only way forward is to engineer extremely delicate and elaborate solutions to work around fundamental language design flaws...

Funny you should say that, as that same problem already holds D back quite a lot. The number-one argument against pretty much any change in recent years has been "it will break too much code".

> Writing C++ code therefore becomes an exercise in navigating the obstacle course of an overly-complex and fragile language...

Same will happen to D. Or rather, it already has.
November 15, 2018
On Thursday, 15 November 2018 at 22:29:56 UTC, Stanislav Blinov wrote:
>> Writing C++ code therefore becomes an exercise in navigating the obstacle course of an overly-complex and fragile language...
>
> Same will happen to D. Or rather, it already has.

++1

That's my impression of D too, lately. It's seeking stability - but at what cost?

C++, at least, is boldly going where... perhaps it should not go ;-)

.. but at least it's moving in a direction that (speaking for myself) makes programming in C++ a little less brittle (*if* you stick to particular features and learn them well). Trying to learn all of C++ is just complete nonsense (as it is for almost any language). It would take decades to learn it all (and it's a constantly moving target now, making it even more difficult - see Scott Meyers)... and nobody needs to use all of the language anyway.

D is no exception to this - it is also a rather complex language, with far more features than any single programmer needs in totality. Pick a subset, get good at using it. Preferably the subset that can best provide guarantees of software correctness ;-)

As for the next 'paradigm', it won't be 'unbridled freedom', I guarantee that.

The programmers may certainly want that freedom (I certainly do), but the institutions/corporations who will be impacted by that 'unbridled freedom' will want better guarantees around software correctness - not more freedom.

So in my opinion, the language that can best provide such guarantees (with consideration to other constraints that might apply) is the language that people will flock to.

D provides a lot in that area (which is what attracted me to it), but it breaks awfully in other areas (I'm thinking of implicit conversions (so old school), no concept of private state *within* a module (what! really!), no appetite at all for addressing many issues, ...etc..etc).

C++ is the Bear. Poke it at your risk.

November 15, 2018
On Thu, Nov 15, 2018 at 10:29:56PM +0000, Stanislav Blinov via Digitalmars-d wrote:
> On Thursday, 15 November 2018 at 19:54:06 UTC, H. S. Teoh wrote:
> > On Thu, Nov 15, 2018 at 11:17:43AM -0800, Ali Çehreli via Digitalmars-d wrote:
> 
> > > "We don't want C++ become like COBOL." My answer is, C++ is heading exactly the same place not through natural death but through those improvements.
> > [...]
> > 
> > And that's the problem with C++: because of the insistence on backward compatibility, the only way forward is to engineer extremely delicate and elaborate solutions to work around fundamental language design flaws...
> 
> Funny you should say that, as that same problem already holds D back quite a lot. The number-one argument against pretty much any change in recent years has been "it will break too much code".

Yes, this obsession with not breaking existing code is a thorn in D's side.  D is not quite at the point of C++, where *nothing* legacy can change (like getting rid of that evil preprocessor); we still have a process for deprecating old stuff.  A slow process, perhaps slower than many would like, but it has been happening over the years, and has cleaned up some of the uglier parts of the language / standard library.
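
For what it's worth, the mechanism itself is simple enough - a minimal sketch:

```d
// Deprecated symbols keep old code compiling while the compiler nudges
// users toward the replacement.
deprecated("use newName instead")
void oldName() {}

void newName() {}

void main()
{
    oldName();  // still compiles, but emits a deprecation message
    newName();
}
```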

Still, it's sad to see that bad decisions like autodecoding will probably never be fixed, because fixing them would "break too much code".


> > Writing C++ code therefore becomes an exercise in navigating the obstacle course of an overly-complex and fragile language...
> 
> Same will happen to D. Or rather, it already has.

I won't pretend D doesn't have its dark corners... but as of right now, it's still orders of magnitude better than C++.  It lets me express complex computations with a minimum of fuss and red tape, and I can get a lot done in a short time, far more easily than in C/C++/Java.  Especially Java. :-P  So far, at least, I haven't found another language that stays out of my way the way D does.  D is far from perfect, but I haven't seen a better alternative yet.


T

-- 
If you look at a thing nine hundred and ninety-nine times, you are perfectly safe; if you look at it the thousandth time, you are in frightful danger of seeing it for the first time. -- G. K. Chesterton
November 15, 2018
On Thu, Nov 15, 2018 at 11:03:56PM +0000, NoMoreBugs via Digitalmars-d wrote: [...]
> C++, at least, is boldly going where... perhaps it should not go ;-)
> 
> .. but at least it's moving in a direction that (speaking for myself) makes programming in C++ a little less brittle (*if* you stick to particular features and learn them well).

That's ridiculous.  So you're saying that to use C++ effectively, you basically have to use only a subset of it?  Then why do we keep having to haul the rest of that baggage around?  Not to mention, you cannot enforce this sort of thing in a project of any meaningful scale, short of draconian code review policies.  As long as a feature is there, SOMEBODY is bound to use it someday, and it will cause headaches for everyone else.


> Trying to learn all of C++ is just complete nonsense (as it is for almost any language). It would take decades to learn it all (and it's a constantly moving target now, making it even more difficult - see Scott Meyers)... and nobody needs to use all of the language anyway.

Patently ridiculous.  What's the point of having something in a language if nobody's going to use it, or if nobody *should* use it?  A language should be a toolbox for the programmer to draw upon, not a minefield of dangerous explosives that you have to very carefully avoid touching in the wrong way.  You should be able to combine any language feature with any other language feature, and it should work without any surprising results or unexpected interactions. (Not even D fully meets this criterion, but C++ hasn't even made it on the chart yet!)

Which also means that every new feature or change added to a language not only brings the intended benefits, but also comes at the price of technical debt.  The larger the language, the less of it anyone can fully comprehend; and the less you comprehend it, the more likely you are to use something incorrectly, resulting in bugs that are hard to find, reduced productivity, and wasted time.  A language that cannot be fully comprehended by any one person is far too large, and has the stench of bad language design.


> D is no exception to this - it is also a rather complex language, with far more features than any single programmer needs in totality. Pick a subset, get good at using it. Preferably the subset that can best provide guarantees of software correctness ;-)

Actually, I've used most of D's features myself, even just in my own projects.  Won't lie, though: some parts look dark and dirty enough that I wouldn't go beyond reading about them, if I had the choice.


> As for the next 'paradigm', it won't be 'unbridled freedom', I guarantee that.
> 
> The programmers may certainly want that freedom (I certainly do), but the institutions/corporations who will be impacted by that 'unbridled freedom' will want better guarantees around software correctness - not more freedom.

Then screw the institution.  Programming is about enabling the programmer to express what he wants to the computer, not about binding him in a straitjacket and making him jump through hoops just to express the simplest of concepts (*ahem*cough*Java*sneeze*).

It's a big lie and a great fallacy that language restrictions lead to software correctness.  Restrictions only reduce productivity and sweep the problem under the rug -- you may get rid of the superficial mistakes, but fundamental errors committed through inexperience or a faulty paradigm will continue to propagate.

Real support for correctness in a language comes from a careful, well-thought-out design of language features, such that the paradigm itself leads you to think about your programming problem in a way that results in correct code. Most correctness problems stem from thinking about something in the wrong way, which manifests itself as leaky (or outright unsafe) abstractions, programming by convention, needless boilerplate, and so on.  These in turn lead to a proliferation of bugs.


> So in my opinion, the language that can best provide such guarantees (with consideration to other constraints that might apply) is the language that people will flock to.

Common fallacy: popular == good, unpopular == bad.

I couldn't care less which language people flock to.  In fact, in my book, people flocking to a language is a strong indication that I should probably avoid it.  People flocked to Java back in the day -- I was skeptical.  And recently, after getting over my skepticism, I finally gave in and tried it out.  It did not last long.  In the process, although I did find one or two pleasant surprises, my overall experience was that 85% of my time was wasted fighting with a crippled, straitjacketed language rather than making progress in the problem domain.

Same thing with C++ a year or two ago, when I went back to try to fix up one of my old C++ projects.  I found myself fighting with the language more than getting anything done in the problem domain.  I ended up spending about half a year rewriting the whole thing in D (a small fraction of the effort it took to write it in C++ in the first place), with far better results.


> D provides a lot in that area (which is what attracted me to it), but it breaks awfully in other areas (I'm thinking of implicit conversions (so old school),

Yawn. Another common fallacy: old school == bad, bandwagon == good.

(Not saying that implicit conversions in D are any good -- in fact, I'm in favor of killing off outdated C-style implicit conversions altogether, even if this will only ever happen over Walter's dead body -- but seriously, "old school" as the reason? Wow. Never seen a more rational argument.)
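
Case in point, a minimal sketch of the C-style conversions in question (both of these compile today):

```d
void main()
{
    int  i = -1;
    uint u = 1;
    // The usual arithmetic conversions kick in, exactly as in C:
    // i is implicitly converted to uint for the comparison.
    assert(i > u);   // passes: -1 became 4294967295

    uint w = -1;     // also accepted implicitly
    assert(w == uint.max);
}
```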


> no concept of private state *within* a module (what! really!), no appetite at all for addressing many issues, ...etc..etc).

Yawn.  Still not over the hangup over 'private', I see.  Missing the forest for the trees.  There are far more important issues in programming than such trivialities.  But hey, such things require too much thought to comprehend, so why bother?


> C++ is the Bear. Poke it at your risk.

*snort* The Bear, eh? :-D  A sickly and overweight one, no doubt, with a deteriorating heart condition dangerously close to a stroke, still clawing the edge of the cliff in memory of its former strength even as it slowly but surely falls off the stage of relevance into the dusts of history, dragged down by its own weight of backward compatibilities and layers of bandages upon patches upon bandages that paper over fundamental design problems that will never be truly fixed.  Yawn.


T

-- 
Your inconsistency is the only consistent thing about you! -- KD
November 15, 2018
On Thursday, November 15, 2018 4:44:00 PM MST H. S. Teoh via Digitalmars-d wrote:
> On Thu, Nov 15, 2018 at 10:29:56PM +0000, Stanislav Blinov via Digitalmars-d wrote:
> > On Thursday, 15 November 2018 at 19:54:06 UTC, H. S. Teoh wrote:
> > > On Thu, Nov 15, 2018 at 11:17:43AM -0800, Ali Çehreli via Digitalmars-d wrote:
> > > > "We don't want C++ become like COBOL." My answer is, C++ is heading exactly the same place not through natural death but through those improvements.
> > >
> > > [...]
> > >
> > > And that's the problem with C++: because of the insistence on backward compatibility, the only way forward is to engineer extremely delicate and elaborate solutions to work around fundamental language design flaws...
> >
> > Funny you should say that, as that same problem already holds D back quite a lot. The number-one argument against pretty much any change in recent years has been "it will break too much code".
>
> Yes, this obsession with not breaking existing code is a thorn in D's side.  D is not quite at the point of C++, where *nothing* legacy can change (like getting rid of that evil preprocessor); we still have a process for deprecating old stuff.  A slow process, perhaps slower than many would like, but it has been happening over the years, and has cleaned up some of the uglier parts of the language / standard library.
>
> Still, it's sad to see that bad decisions like autodecoding will probably never be fixed, because fixing them would "break too much code".

C++ doesn't really have a process for deprecating old features or code. They sort of do, but not really. We do have that ability and use it, which helps, but we still have the basic tension between code continuing to work and being able to move the language and standard library forward. No one likes coming back to their code and finding that it doesn't work anymore. They want it to work as-is forever. And yet everyone also wants new features, and they want the old cruft gone. Balancing all of that is difficult. I'm not saying that we're necessarily doing a fantastic job of it, but there isn't an easy answer, and no matter what answer you pick, some portion of the user base is going to be annoyed with you. We can't be both completely stable and completely flexible. A balance of some kind must be struck. It remains to be seen whether we're striking a good balance now and whether we'll manage to strike one in the future.

As for features like auto-decoding, the core problem is how deeply they have their tendrils in everything. When deprecating something is a straightforward process, it's a pretty simple question of whether we think the change is worth the breakage that it causes, but when it affects as much as auto-decoding does, it becomes very difficult to even figure out _how_ to do it. Honestly, I think that that's the biggest thing that has prevented removing auto-decoding thus far. It's not that someone proposed a plan to Walter and Andrei and they rejected it because it was going to break too much code. No one has even proposed a plan that was feasible - at least not if you care about having any kind of transition process, as opposed to immediately breaking tons of code (potentially silently in some cases). AFAIK, pretty much the only plan we have right now that would work would amount to creating D3 - or at least to throwing out the version of Phobos that we have, which isn't much different. Either way, it would mean completely forking the existing code base and community, and we really don't want to do that. We want a plan that removes auto-decoding from D2 in place.

The first step in that whole mess, which really has not been done anywhere near the level that it needs to be done, is to ensure that Phobos in general doesn't care whether a range of characters is of char, wchar, dchar, or graphemes. Many of the specializations for strings do still need to be there to avoid auto-decoding (at least as long as auto-decoding is there), but a lot of the code assumes that ranges of characters are ranges of dchar, and it _all_ needs to work with ranges of char, wchar, dchar, and graphemes. Once it's reasonably character-type agnostic (and what that means exactly is going to vary depending on what the function does), then we can sit back and see how much in the way of auto-decoding tendrils are left. At minimum, at that point, code like byCodeUnit is then going to work as well as it can, even if we can't actually remove auto-decoding. Until that's done, byCodeUnit and its ilk are going to keep running into problems in various places when you try to use them with Phobos. As it stands, when using them with Phobos, they work sometimes, and other times, they don't. So, that work is necessary regardless of what happens with auto-decoding. But once that work is done, it may very well be that we can finally find a way to get rid of auto-decoding. I don't know. I still question it given how it's tied into arrays and UFCS, making it so that we can't properly use the module system to deal with conflicts, but at least at that point, we'll have done the work that needs to be done that surrounds the problem, and it's work that needs to be done whether we get rid of auto-decoding or not.
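
To make the byCodeUnit situation concrete, here's a minimal sketch of the current behavior (current Phobos, as I understand it):

```d
import std.range : ElementType, walkLength;
import std.utf : byCodeUnit;

void main()
{
    string s = "héllo";

    // Auto-decoding: the range primitives present a string as dchar.
    static assert(is(ElementType!string == dchar));
    assert(s.walkLength == 5);  // 5 decoded code points

    // byCodeUnit opts out and iterates the raw UTF-8 code units.
    auto r = s.byCodeUnit;
    static assert(is(ElementType!(typeof(r)) == char));
    assert(r.walkLength == 6);  // 'é' is 2 code units in UTF-8
}
```

Any Phobos function that silently assumes the dchar view will do the wrong thing (or fail to compile) when handed the byCodeUnit range, which is exactly the class of problem I mean.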

*sigh* Honestly, auto-decoding is almost a perfect storm of issues standing in the way of us actually getting rid of it. So, while I agree with you that we'd ideally fix the problem, it's _not_ an easy one to fix, and really the only "easy" way to fix it is to pretty much literally say "D3" and hard-break all code. I think that the reality of the matter is that there are issues in every language that you can't fix without either creating a new language or creating a new version of the language that's not backwards compatible with the old one (which then forks the language and community). So, while we'd very much like to fix everything, there are going to be some things we simply can't fix if we're not willing to create D3, and talking about D3 creates a whole other can of worms, which I don't think we're even vaguely ready for yet. Maybe auto-decoding will turn out to be fixable, maybe it won't, but I think that it's going to be inevitable that _some_ things will be unfixable. I love D, but it's never going to be perfect. No programming language will be, much as I would love to use a perfect one. We should do the best that we can to approach perfection, but we're going to miss in some places, and as it stands, we definitely missed when it comes to auto-decoding.

- Jonathan M Davis




November 15, 2018
On Thu, Nov 15, 2018 at 06:03:37PM -0700, Jonathan M Davis via Digitalmars-d wrote: [...]
> *sigh* Honestly, auto-decoding is almost a perfect storm of issues standing in the way of us actually getting rid of it. So, while I agree with you that we'd ideally fix the problem, it's _not_ an easy one to fix, and really the only "easy" way to fix it is to pretty much literally say "D3" and hard-break all code. I think that the reality of the matter is that there are issues in every language that you can't fix without either creating a new language or creating a new version of the language that's not backwards compatible with the old one (which then forks the language and community).  So, while we'd very much like to fix everything, there are going to be some things we simply can't fix if we're not willing to create D3, and talking about D3 creates a whole other can of worms, which I don't think we're even vaguely ready for yet.

Talking about D3 has sorta become taboo around here, for understandable reasons -- splitting the community now might very well be the death of D after that Tango vs. Phobos fiasco.  Python survived such a transition, and Perl too, AIUI.  But D currently does not have nearly the size of the Python or Perl communities to be able to bear the brunt of such a drastic change.

Nevertheless I can't help wondering if it would be beneficial to one day sit down and sketch out D3, even if we never actually implement it. It may give us some insights on the language design we should strive to reach, based on the experience we have accumulated thus far. Autodecoding, even though it's a commonly mentioned example, actually is only a minor point as far as language design is concerned.  More fundamental issues could be how to address the can of worms that 'shared' has become, for example, or what the type system might look like if we were to shed the vestiges of C integer promotion rules.
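
To illustrate that last point, a minimal sketch of the promotion vestiges I mean:

```d
void main()
{
    byte a = 100, b = 100;
    // byte c = a + b;  // Error: a + b is promoted to int, as in C,
    //                  // and int doesn't implicitly narrow back to byte
    int sum = a + b;    // fine in the promoted type
    assert(sum == 200);
}
```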


> Maybe auto-decoding will turn out to be fixable, maybe it won't, but I think that it's going to be inevitable that _some_ things will be unfixable. I love D, but it's never going to be perfect. No programming language will be, much as I would love to use a perfect one. We should do the best that we can to approach perfection, but we're going to miss in some places, and as it stands, we definitely missed when it comes to auto-decoding.
[...]

It's true that at some point, you just have to dig in and work with what you have, rather than wish for an ideal language that never arrives. Still, one can't help wondering what the ideal language might look like.

A crazy wishful thought of mine is versioned support of the language, analogous to versioned library dependencies.  Imagine if we could specify a D version in source files, and the compiler switches to "compatibility" mode for D2 and "native" mode for D3.  The compiler would emit any necessary shunt code to make code compiled in D2 mode binary-compatible with code compiled in D3 mode, for example (modulo incompatible language changes, of course).  This way, existing codebases could slowly transition to D3, module-by-module.  Sorta like the -dipxxxx switches, but embedded in the source file, and on a larger scale.
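
Something along these lines, say (entirely made-up syntax, of course; no such pragma exists):

```d
// Hypothetical, for illustration only: a module opts into D3 semantics,
// while modules without the pragma keep compiling as D2, and the
// compiler emits any shunt code needed at the boundary between them.
module mylib.modern;
pragma(languageVersion, 3);
```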

(I know this is probably not feasible in practice, because of the massive amount of work it will take to maintain parallel language versions in the same compiler, which will require manpower we simply do not have. But one can dream.)


T

-- 
My father told me I wasn't at all afraid of hard work. I could lie down right next to it and go to sleep. -- Walter Bright
November 16, 2018
On Friday, 16 November 2018 at 02:02:26 UTC, H. S. Teoh wrote:
> ... after that Tango vs. Phobos fiasco....

Do you mind explaining this for newcomers?

Al.