August 26, 2018
On Sun, 26 Aug 2018 at 18:09, Jonathan M Davis via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>
> On Sunday, August 26, 2018 5:39:32 PM MDT Manu via Digitalmars-d wrote:
> >   ARC? What ever happened to the opAddRef/opDecRef proposal? Was it
> > rejected? Is it canned, or is it just back on the bench? (GC is
> > absolutely off the table for my project, I have no influence on this)
>
> I don't know what Walter's current plans are for what any built-in ref-counting solution would look like, but it's my understanding that whatever he was working on was put on hold, because he needed something like DIP 1000 in order to make it work with @safe - which is what then triggered his working on DIP 1000 like he has been. So, presumably, at some point after DIP 1000 is complete and ready, he'll work on the ref-counting stuff again. So, while we may very well get it, I expect that it will be a while.

I'm sure I recall experimental patches where those operators were available to try out... was I dreaming? :/
August 26, 2018
On Sunday, August 26, 2018 7:11:30 PM MDT Manu via Digitalmars-d wrote:
> On Sun, 26 Aug 2018 at 18:09, Jonathan M Davis via Digitalmars-d
>
> <digitalmars-d@puremagic.com> wrote:
> > On Sunday, August 26, 2018 5:39:32 PM MDT Manu via Digitalmars-d wrote:
> > >   ARC? What ever happened to the opAddRef/opDecRef proposal? Was it
> > >
> > > rejected? Is it canned, or is it just back on the bench? (GC is absolutely off the table for my project, I have no influence on this)
> >
> > I don't know what Walter's current plans are for what any built-in ref-counting solution would look like, but it's my understanding that whatever he was working on was put on hold, because he needed something like DIP 1000 in order to make it work with @safe - which is what then triggered his working on DIP 1000 like he has been. So, presumably, at some point after DIP 1000 is complete and ready, he'll work on the ref-counting stuff again. So, while we may very well get it, I expect that it will be a while.
> I'm sure I recall experimental patches where those operators were available to try out... was I dreaming? :/

I have no idea. AFAIK, Walter didn't make any changes related to ref-counting public, and I have no idea whether he got very far along with it at all, but I also haven't been watching everything he does in anticipation for such a feature. So, I could easily have missed something.

- Jonathan M Davis



August 27, 2018
On Sunday, 26 August 2018 at 08:40:32 UTC, Andre Pany wrote:
> On Saturday, 25 August 2018 at 20:52:06 UTC, Walter Bright wrote:
>> On 8/25/2018 3:52 AM, Chris wrote:
>>> On Friday, 24 August 2018 at 19:26:40 UTC, Walter Bright wrote:
>>>> Every programmer who says this also demands new (and breaking) features.
>>> "Every programmer who..." Really?
>>
>> You want to remove autodecoding (so do I) and that will break just about every D program in existence. For everyone else, it's something else that's just as important to them.
>>
>> For example, Shachar wants partially constructed objects to be partially destructed, a quite reasonable request. Ok, but consider the breakage:
>>
>>   struct S {
>>     ~this() {}
>>   }
>>
>>   class C {
>>     S s;
>>
>>     this() nothrow {}
>>   }
>>
>> I.e. a nothrow constructor now must call a throwing destructor. This is not some made up example, it breaks existing code:
>>
>>   https://github.com/dlang/dmd/pull/6816
>>
>> If I fix the bug, I break existing code, and apparently a substantial amount of existing code. What's your advice on how to proceed with this?
>
> In the whole discussion I miss 2 really important things.
>
> If your product compiles fine with a dmd version, no one forces you to update to the next dmd version. In the company I work for, we set for each project the DMD version in the build settings. The speed of DMD releases or breaking changes doesn't affect us at all.
>
> Maybe I do not know a lot open source products but the amount of work which goes into code quality is extremely high for the compiler, runtime, phobos and related products. I love to see how much work is invested in unit tests and also code style.
>
> DMD (and LDC and GDC) has greatly improved in the last years in various aspects.
>
> But I also see that there is a lot of work to be done. There are definitely problems to be solved. It is sad that people like Dicebot leaving the D community.
>
> Kind regards
> Andre

Dicebot should speak for himself as he wishes.  But I was entertained by someone else's simultaneous posting of a blog post of his from a while back, in which he asked for comments on the early release of dtoh, a tool intended in time to be integrated into DMD, given its design.

I don't think he was very happy about the process around DIP1000 but I am not myself well placed to judge.

In any case, languages aren't in a death match where there can be only one survivor.  Apart from anything else, does anyone really think less code will be written in the future than in the past or that there will be fewer people who write code as part of what they do but aren't career programmers?

I probably have an intermediate experience between you and Jon Degenhardt on the one hand and those complaining about breakage on the other.  Some of it was self-inflicted: on the biggest project we have 200k SLoC, a good part of which I wrote myself pretty quickly, and the build system has been improvised since and could still be better.  The Linux builder is a docker container created nightly; getting something similar working on the Windows side is taking longer, and funnily enough that's where the bigger problems are.  In the past it was often little things, like dub turning relative paths into absolute ones, creating huge paths that broke the DMD path limit until we got fed up and decided to fix it ourselves.  (Did you know there are six extant ways of handling paths on Windows?)

Dub dependency resolution has been tough.  It might be better now.  I appreciate it's a tough problem, but somehow e.g. Maven is quick (it might well cheat by solving an easier problem).

And quite a lot of breakage in vibe.  But nobody forces you to use vibe and there do exist other options for many things.

Overall though, it's not that bad depending on your use case.  Everything has problems but also everyone has a different kind of sensitivity to different kinds of problems.

For me, DPP makes a huge difference because I now know it's pretty likely I can just #include a C library if that's the best option and in my experience it mostly just works.

The plasticity, coherence and readability of D code dominates the difficulties for quite a few things I am doing.  That might not be the case for others, because everyone is different.  In the present context the cost of my time dominates the cost of several programmers' time, but I don't think that's a necessary part of why D makes sense for some things for us.  I think by the end of this year we might have eleven people, including me, writing D at least sometimes, up from only me about 18 months ago.  That's people working from the office, including practitioners who write code, plus a handful of remote consultants who only write D.

There's no question from my perspective that D is much better than a year ago and unimaginably better than when I first picked it up in 2014.  One can't seriously suggest that D isn't flourishing as far as adoption goes.

The forum really isn't the place to assess what people using the language at work feel. Almost nobody working from our offices is active in the forums and that's the impression I get speaking to other enterprise users.  People have work to do, unfortunately!

I wonder if the budget was there whether it would be possible to find someone even half as productive as Seb Wilzbach to help full-time, because whilst some of the problems raised are very difficult ones, others might just be a matter of (very high quality) manpower.  Michael Parker's involvement has also made a huge difference to public profile of D.

I definitely think a stable version with bugfixes backported would be great if feasible.

I don't really get the hate for betterC, even though I don't directly use it myself in code I write.  It's useful directly for lots of things like WebAssembly and embedded, and a side effect of Seb's work on betterC testing for Phobos will, I guess, be that it's much clearer how much can be used without depending on the GC, because that's what betterC also implies.  Is it really such an expensive effort?

Beyond real factors it also helps with perception and social factors.  I feel like I came across more HFT programmers (or people who claim to be such) on Reddit who could, they say, never even stare too closely at a GC language than in the industry itself!

Would be great if Manu's work on STL and extern (C++) comes to fruition.  I think DPP will work for much more of C++ in time, though it might be quite some time.

I wonder if we are approaching the point where enterprise crowd-funding of missing features or capabilities in the ecosystem could make sense.  If you look at how Liran managed to find David Nadlinger to help him, it could in part just be a matter of a lack of social organisation preventing the market from addressing unfulfilled mutual coincidences of wants.  Lots of capable people would like to work full time programming in D.  Enough firms would like some improvements made.  It takes work to organise these things.  If I were a student I might be trying to see if there was an opportunity there.
August 27, 2018
On Sunday, 26 August 2018 at 23:12:10 UTC, FeepingCreature wrote:
> On Sunday, 26 August 2018 at 22:44:05 UTC, Walter Bright wrote:
>> On 8/26/2018 8:43 AM, Chris wrote:
>>> I wanted to get rid of autodecode and I even offered to test it on my string heavy code to see what breaks (and maybe write guidelines for the transition), but somehow the whole idea of getting rid of autodecode was silently abandoned. What more could I do?
>>
>> It's not silently abandoned. It will break just about every D program out there. I have a hard time with the idea that breakage of old code is inexcusable, so let's break every old program?
>>
> Can I just throw in here that I like autodecoding and I think it's good?
> If you want ranges that iterate over bytes, then just use arrays of bytes. If you want Latin1 text, use Latin1 strings. If you want Unicode, you get Unicode iteration. This seems right and proper to me. Hell I'd love if the language was *more* aggressive about validating casts to strings.

Same here. I do make Unicode errors more often than I'd care to admit (someString[$-1] being the most common; I need to write a lastChar helper function), but autodecoding means I can avoid that class of errors.
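A sketch of why someString[$ - 1] goes wrong, along with a hypothetical lastChar helper built on std.utf (the helper name and design are just an illustration, and note it steps back one code point, not one grapheme):

```d
import std.utf : decode, strideBack;

// Hypothetical helper: decode the final code point instead of
// grabbing the last UTF-8 code unit.
dchar lastChar(string s)
{
    size_t i = s.length - strideBack(s, s.length);
    return decode(s, i);
}

void main()
{
    string s = "café";
    // s[$ - 1] is 0xA9, the trailing byte of the two-byte UTF-8
    // sequence for 'é', not a character at all.
    assert(s[$ - 1] == 0xA9);
    assert(s.lastChar == 'é');
}
```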
August 26, 2018
On Sunday, August 26, 2018 5:12:10 PM MDT FeepingCreature via Digitalmars-d wrote:
> On Sunday, 26 August 2018 at 22:44:05 UTC, Walter Bright wrote:
> > On 8/26/2018 8:43 AM, Chris wrote:
> >> I wanted to get rid of autodecode and I even offered to test it on my string heavy code to see what breaks (and maybe write guidelines for the transition), but somehow the whole idea of getting rid of autodecode was silently abandoned. What more could I do?
> >
> > It's not silently abandoned. It will break just about every D program out there. I have a hard time with the idea that breakage of old code is inexcusable, so let's break every old program?
>
> Can I just throw in here that I like autodecoding and I think
> it's good?
> If you want ranges that iterate over bytes, then just use arrays
> of bytes. If you want Latin1 text, use Latin1 strings. If you
> want Unicode, you get Unicode iteration. This seems right and
> proper to me. Hell I'd love if the language was *more* aggressive
> about validating casts to strings.

The problem is that auto-decoding doesn't even give you correct Unicode handling. At best, it's kind of like using UTF-16 instead of ASCII but assuming that a UTF-16 code unit can always contain an entire character (which is frequently what you get in programs written in languages like Java or C#). A bunch more characters then work properly, but plenty of characters still don't. It's just a lot harder to realize it, because it's far from fail-fast. In general, doing everything at the code point level with Unicode (as auto-decoding does) is very much broken. It's just that it's a lot less obvious, because that much more works - and it comes with the bonus of being far less efficient.

If you wanted everything to "just work" out of the box without having to worry about Unicode, you could probably do it if everything operated at the grapheme cluster level, but that would be horribly inefficient. The sad reality is that if you want your string-processing code to be at all fast while still being correct, you have to have at least a basic understanding of Unicode and use it correctly - and that rarely means doing much of anything at the code point level. It's much more likely that it needs to be at either the code unit or grapheme level. But either way, without a programmer understanding the details and programming accordingly, the code is just plain going to be wrong somewhere. The idea that we can have string-processing "just work" without the programmer having to worry about the details of Unicode is unfortunately largely a fallacy - at least if you care about efficiency.

By operating at the code point level, we're just generating code that looks like it works when it doesn't really, and it's less efficient. It certainly works in more cases than just using ASCII would, but it's still broken for Unicode handling just like if the code were assuming that char was always an entire character. As such, I don't really see how there can be much defense for auto-decoding. It was done on the incorrect assumption that code points actually represented characters (for that you actually need graphemes) and that the loss in speed was worth the correctness, with the idea that anyone wanting the speed could work around the auto-decoding. We could get something like that if we went to the grapheme level, but that would hurt performance that much more. Either way, operating at the code point level everywhere is just plain wrong. This isn't just a case of "it's annoying" or "we don't like it." It objectively results in incorrect code.
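To make the three levels concrete, here's a minimal sketch using Phobos (std.utf and std.uni): 'e' plus a combining acute accent is one character to a reader, but two code points and three UTF-8 code units:

```d
import std.range : walkLength;
import std.uni : byCodePoint, byGrapheme;
import std.utf : byCodeUnit;

void main()
{
    // 'e' + U+0301 COMBINING ACUTE ACCENT: one character to a
    // reader, two code points, three UTF-8 code units.
    string s = "e\u0301";

    assert(s.byCodeUnit.walkLength == 3);  // code unit level (char)
    assert(s.byCodePoint.walkLength == 2); // what auto-decoding iterates
    assert(s.byGrapheme.walkLength == 1);  // what the user perceives
}
```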

- Jonathan M Davis



August 26, 2018
On Sunday, August 26, 2018 7:25:12 PM MDT Neia Neutuladh via Digitalmars-d wrote:
> > Can I just throw in here that I like autodecoding and I think
> > it's good?
> > If you want ranges that iterate over bytes, then just use
> > arrays of bytes. If you want Latin1 text, use Latin1 strings.
> > If you want Unicode, you get Unicode iteration. This seems
> > right and proper to me. Hell I'd love if the language was
> > *more* aggressive about validating casts to strings.
>
> Same here. I do make unicode errors more often than I'd care to admit (someString[$-1] being the most common; I need to write a lastChar helper function), but autodecoding means I can avoid that class of errors.

Except that it doesn't really. It just makes it so that you make those errors at the code point level instead of at the code unit level, where the error is less obvious because it works correctly for more characters. But it's still wrong in general; e.g. code using auto-decoding is generally going to be horribly broken when operating on Hebrew text because of all of the combining characters that get used there.
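A minimal sketch of that breakage, using a Hebrew letter with a vowel point (one character to a reader, two code points to auto-decoding):

```d
import std.range : walkLength;
import std.uni : byCodePoint, byGrapheme;

void main()
{
    // HEBREW LETTER BET (U+05D1) + HEBREW POINT QAMATS (U+05B8):
    // a reader sees one character; auto-decoding sees two.
    string s = "\u05D1\u05B8";

    assert(s.byCodePoint.walkLength == 2); // code-point iteration splits it
    assert(s.byGrapheme.walkLength == 1);  // the perceived character
}
```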

- Jonathan M Davis



August 27, 2018
On Sunday, 26 August 2018 at 19:34:39 UTC, Manu wrote:
> On Sun, 26 Aug 2018 at 12:10, RhyS via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>
>> On Sunday, 26 August 2018 at 18:18:04 UTC, drug wrote:
>> > It's rather funny to see how one man who forced to program in
>> > programming language he doesn't like can triggers comments from
>> > lurkers that they don't like D too. No offense.
>> > D is in great form and is getting much better and better and
>> > I'd like to ask D community to continue their good work and
>> > make D great again.
>>
>> Most people lurking here are people that WANT to use D but are put off by the issues. D is not bad as a language but it has issues. There are issues at every step in the D eco system and each of those creates a barrier.
>>
>> It's those same issues that never seem to get solved and are second-class citizens compared to adding more "future" features or trying to one-up C++...
>>
>> It's not betterC or static if or whatever new feature of the month that brings in new people. You can advertise D as much as you want, but when people download D and very few stay, is that not a hint...
>>
>> Only recently did the D poll point out that most people are using VSC and not VS. I am like "what, you only figured that out now?". Given the mass popularity of VSC... That alone tells you how much the mindset of D is stuck in a specific eco space.
>
> Industry tends to use VS, because they fork-out for the relatively
> expensive licenses.
> I work at a company with a thousand engineers, all VS users, D could
> find home there if some rough edges were polished, but they
> *absolutely must be polished* before it would be taken seriously.
> It is consistently expressed that poor VS integration is an absolute
> non-starter.
>
> While a majority of people (hobbyists?) that take an online poll in an
> open-source community forum might be VSCode users, that doesn't mean
> VS is a poor priority target.
> Is D a hobby project, or an industry solution? I vote the latter. I
> don't GAF about people's hobbies, I just want to use D to _do my job_.
> Quality VS experience is critical to D's adoption in that sector.
> Those 1000 engineers aren't reflected in your poll... would you like them to be?

Do you see a path from here to there that's planned?

I think it's very difficult winning over people that expect to see the same degree of polish as in a project that's older and has much more commercial support. In other words, as a thought experiment: if everyone in the community were to stop and work only on VS and debugging polish, how many years would it be before your colleagues were willing to switch?

I think it might be a while.

I'm not suggesting that polish isn't worth working on, but one might be realistic about what may be achieved.

I think D is a classic example of Clayton Christensen's Innovator's Dilemma.  In the beginning a certain kind of innovation starts at the fringe.  It's inferior along some dimensions compared to the products with high market share and so it gets kind of ignored.  But for some particular reasons it has a very high appeal to some groups of people, and so it keeps growing mostly unnoticed and over time expands the niches where it is used.

This can keep going for a long time.  And then something in the environment changes and it's like it becomes an overnight success.  For American cars it was the oil price shock of the 1970s.  Japanese cars then might have been seen as inferior but they were energy efficient and they worked.

I think it's possible that for D this will arise from the interaction of data set sizes growing - storage prices drop at 40% a year and somehow people find a way to use that cheaper storage - whilst processing power and memory latency and bandwidth are a sadder tale.  But it might be something else.

So people who say that there is no place for D in the kind of work they do might sometimes be right.  Frustrating, because if only the polish were there; but polish is a lot of work and not everyone is interested in it.  They might not be right about broader adoption, because the world is a very big place, most people don't talk about their work, and some of the factors that present huge obstacles in some environments simply don't apply in others.

Thinking about frustrations as an entrepreneurial challenge may be ultimately more generative than just hoping someone will do something.  I do wonder if there isn't an opportunity in organising people from the community to work on projects that enterprise users would find valuable but that won't get done otherwise.  Organising the work might not be difficult, but it takes time and attention, which enterprise users are not long on.


August 27, 2018
On Monday, 27 August 2018 at 01:45:37 UTC, Laeeth Isharc wrote:
> I think D is a classic example of Clayton Christensen's Innovator's Dilemma.  In the beginning a certain kind of innovation starts at the fringe.  It's inferior along some dimensions compared to the products with high market share and so it gets kind of ignored.

Those inferior dimensions for me are productivity in the form of code navigation, proper autocompletion, a package manager with a wizard, stability of the toolchain, automatic C/C++/D header translation and a project manager with a wizard.
All with proper GUI support, since I'm IDE challenged.
I literally can't work without one.
I guess I could, but someone would have to invest serious time in helping me out with the issues I face to get up to a level of skill where I would be productive in D without an IDE.

I can't create or improve this tooling by myself because I don't have the necessary knowledge and skill required for said tooling, but I'm willing to back some kind of crowd-funded Kickstarter if needed.

Don't get me wrong, every few months when I check D's ecosystem out I see massive improvements, and I acknowledge that there is a good amount of tooling for D already; it just isn't up to a standard where I can say that D is a good alternative for the use cases where it should shine brightly.
August 27, 2018
On Friday, 24 August 2018 at 02:33:31 UTC, Jonathan M Davis wrote:
> Walter Bright wrote:
>> My personal opinion is that constructors that throw are an execrable programming practice, and I've wanted to ban them. (Andrei, while sympathetic to the idea, felt that too many people relied on it.) I won't allow throwing constructors in dmd or any software I have authority over.
>
> Wow. I'm surprised by this.
>
> I expect that you'd have a riot on your hands though if you actually tried to push for getting rid of throwing constructors.

A generation of programmers has been misled down a deep rabbit hole, thinking that "Constructors" are things that "Construct" objects.

This has led to a generation of vaguely smelly code that "does too much work in the constructor" (of which throwing exceptions is evidence).

The last few years I have told myself (and anyone who doesn't back away fast enough) that "Constructors" do _not_ construct objects, they are "Name Binders." (Sort of like lisp's "let" macro)

They bind instance variable names to pre-existing sub-objects.

This attitude, coupled with a rule of thumb, "make it immutable unless I prove to myself that I _need_ it to be mutable", has led to a major improvement in my code.
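A minimal sketch of that "name binder" style in D (the types and names here are hypothetical): the constructor only binds already-acquired sub-objects to immutable fields, so there is nothing left in it that could throw:

```d
struct Socket { int fd; }

class Connection
{
    // Bound once in the constructor, never reassigned.
    private immutable Socket sock;
    private immutable string host;

    // No acquisition, no validation, nothing that can throw:
    // the sub-objects already exist before this runs.
    this(immutable Socket sock, string host) nothrow
    {
        this.sock = sock;
        this.host = host;
    }
}

void main()
{
    immutable sock = immutable(Socket)(3);
    auto c = new Connection(sock, "example.com");
    assert(c.host == "example.com");
}
```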
August 27, 2018
Or to put it another way....

RAII should be

"Taking Ownership of a Resource is Initialization, and relinquishing ownership is automatic at the object life time end, but Failure to Acquire a Resource Is Not An Exceptional Circumstance"

Not as catchy, but far less problematic.
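A minimal sketch of that reading in D, using C's fopen (the FileHandle type is hypothetical, not Phobos's std.stdio.File, and the example assumes the named file doesn't exist): acquisition failure is reported through a query rather than a throwing constructor, while relinquishing stays automatic:

```d
import core.stdc.stdio : FILE, fclose, fopen;

// Hypothetical owning handle.
struct FileHandle
{
    private FILE* handle;

    // Failure to acquire is not exceptional: the caller checks
    // isOpen instead of catching a constructor's exception.
    static FileHandle open(const(char)* path, const(char)* mode)
    {
        return FileHandle(fopen(path, mode));
    }

    bool isOpen() const { return handle !is null; }

    // Relinquishing ownership is automatic at end of lifetime.
    ~this()
    {
        if (handle !is null)
            fclose(handle);
    }
}

void main()
{
    auto f = FileHandle.open("no-such-file.txt", "r");
    assert(!f.isOpen); // acquisition failed; no exception thrown
}
```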