May 29, 2018
On Wednesday, May 30, 2018 08:43:33 rikki cattermole via Digitalmars-d wrote:
> On 30/05/2018 8:37 AM, Tony wrote:
> > On Tuesday, 29 May 2018 at 20:19:09 UTC, bachmeier wrote:
> >> I don't think it's difficult to do that yourself. There's no need to have a formal split. One example is that it's really nice to have the GC available for part of the program and avoid it for another part. @nogc gives you a guarantee. Different variants of the language are a special case of this that is equivalent to annotating the entire program to restrict behavior. That's rarely desirable.
> >
> > What would be an example of a type of application (or maybe that should be "which type of domain" or "which type of developer") where you would want part of it to do garbage collection and the rest of it not to?
>
> GUIs, audio systems, language tooling, games; I'm sure somebody can come up with a much longer list.

Basically, stuff that can't afford to have the GC pause the program for more than a millisecond or two has to be careful with the GC, but your average program is going to be perfectly fine with it, and in many cases, it's just part of the program that can't afford the pause - e.g. a thread for an audio or video pipeline. The rest of the program can likely afford it just fine, but that thread or group of threads has to be at least close to realtime, so it can't use the GC. It's kind of like @safe vs @system. It's not uncommon for most of your program to be @safe just fine while certain pieces simply can't be. However, that's not necessarily a good reason to make the entire program @system. So, you make that piece @system and use @trusted appropriately. With the GC, you typically use it and then turn it off or avoid it in select pieces of code that can't afford it. In most cases, it should not be necessary to avoid it entirely even if you can't afford it in part of your program (though as with pretty much everything, there are bound to be exceptions).
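
To make that concrete, here's a minimal sketch (the names are made up) of the usual shape: most of the program allocates via the GC freely, while the hot path is statically forbidden from doing so with @nogc:

import std.stdio;

// Ordinary code: GC allocations are fine here.
string buildReport(size_t count)
{
    import std.format : format;
    return format("processed %s samples", count); // allocates via the GC
}

// Hot path: the compiler rejects any GC allocation inside this function.
@nogc nothrow void processSamples(float[] buffer)
{
    foreach (ref s; buffer)
        s *= 0.5f; // in-place work only, no allocation
}

void main()
{
    auto buffer = new float[](1024); // allocated up front, outside the hot path
    buffer[] = 1.0f;
    processSamples(buffer);
    writeln(buildReport(buffer.length));
}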

- Jonathan M Davis

May 29, 2018
On Tuesday, 29 May 2018 at 21:06:52 UTC, Jonathan M Davis wrote:
> On Wednesday, May 30, 2018 08:43:33 rikki cattermole via Digitalmars-d wrote:
>> On 30/05/2018 8:37 AM, Tony wrote:
>> > On Tuesday, 29 May 2018 at 20:19:09 UTC, bachmeier wrote:
>> >> I don't think it's difficult to do that yourself. There's no need to have a formal split. One example is that it's really nice to have the GC available for part of the program and avoid it for another part. @nogc gives you a guarantee. Different variants of the language are a special case of this that is equivalent to annotating the entire program to restrict behavior. That's rarely desirable.
>> >
>> > What would be an example of a type of application (or maybe that should be "which type of domain" or "which type of developer") where you would want part of it to do garbage collection and the rest of it not to?
>>
>> GUIs, audio systems, language tooling, games; I'm sure somebody can come up with a much longer list.
>
> Basically, stuff that can't afford to have the GC pause the program for more than a millisecond or two has to be careful with the GC, but your average program is going to be perfectly fine with it, and in many cases, it's just part of the program that can't afford the pause - e.g. a thread for an audio or video pipeline. The rest of the program can likely afford it just fine, but that thread or group of threads has to be at least close to realtime, so it can't use the GC.

You can't call any code that might take a lock if you're doing real-time audio, so that means no malloc/free either. That's standard practice. You either allocate everything up front, or you do something like I do, which is lock-free queues ferrying things to and from the audio thread as needed.

I mean, the point is that needing different memory management for different parts of the program is already a thing with real-time audio; the GC doesn't really change that.
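
For the curious, a rough sketch of that pattern in D (illustrative only; a production version would want capacity tuning, padding against false sharing, and so on): a single-producer, single-consumer ring buffer built on core.atomic, so the audio thread never takes a lock and never allocates.

import core.atomic;

struct SpscQueue(T, size_t N)
{
    private T[N] items;               // storage allocated up front
    private shared size_t head, tail; // written only by consumer / producer

    // Called from the non-realtime side; returns false if the queue is full.
    bool push(T item)
    {
        immutable t = atomicLoad!(MemoryOrder.raw)(tail);
        immutable next = (t + 1) % N;
        if (next == atomicLoad!(MemoryOrder.acq)(head))
            return false; // full; the caller retries later
        items[t] = item;
        atomicStore!(MemoryOrder.rel)(tail, next);
        return true;
    }

    // Called from the audio thread: no locks, no allocation.
    bool pop(ref T item)
    {
        immutable h = atomicLoad!(MemoryOrder.raw)(head);
        if (h == atomicLoad!(MemoryOrder.acq)(tail))
            return false; // nothing pending
        item = items[h];
        atomicStore!(MemoryOrder.rel)(head, (h + 1) % N);
        return true;
    }
}

The audio callback calls pop once per buffer; anything heavier happens on the other side of the queue.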

May 29, 2018
On Tuesday, 29 May 2018 at 05:29:00 UTC, Ali wrote:
> On Tuesday, 29 May 2018 at 03:56:05 UTC, Dmitry Olshansky wrote:
>> It seems C++ is following the road of PL/I, which is growing the language way beyond the point anyone can understand or implement all of it.
>
> A key line from this paper
>
>>  We now have about 150 cooks; that’s not a good way to get a tasty and balanced meal.
>
> I don't think Bjarne is against adding features to C++, or even constantly adding features;
> he even admits to supporting some of the features he mentions in his list.
>
> I think he is worried about:
> 1. the huge number of features being targeted at once
> 2. the features coming from different independent teams, making them less likely to be coherent

Which is ironic considering...

Ken Thompson: "Stroustrup campaigned for years and years and years, way beyond any sort of technical contributions he made to the language, to get it adopted and used. And he sort of ran all the standards committees with a whip and a chair. And he said “no” to no one. He put every feature in that language that ever existed. It wasn’t cleanly designed—it was just the union of everything that came along. And I think it suffered drastically from that."

Donald Knuth: "Whenever the C++ language designers had two competing ideas as to how they should solve some problem, they said "OK, we'll do them both". So the language is too baroque for my taste."

May 30, 2018
On Tuesday, 29 May 2018 at 23:55:07 UTC, Dave Jones wrote:
> Which is ironic considering...
>
> Ken Thompson: "Stroustrup campaigned for years and years and years, way beyond any sort of technical contributions he made to the language, to get it adopted and used. And he sort of ran all the standards committees with a whip and a chair. And he said “no” to no one. He put every feature in that language that ever existed. It wasn’t cleanly designed—it was just the union of everything that came along. And I think it suffered drastically from that."
>
> Donald Knuth: "Whenever the C++ language designers had two competing ideas as to how they should solve some problem, they said "OK, we'll do them both". So the language is too baroque for my taste."

Good old Ken and Don are from a generation where you could (typically) understand the whole language.

Those times have passed. No, really... they have... I'm not kidding...

It is now complete nonsense to expect that one person should be able to understand a modern programming language. At best, they will understand some of it.

These days, it must be about collaboration - which is something D suffers from not having, because people believe that they should be able to understand it all, and therefore that progress should stop when that is no longer possible.

That is essentially a human-ego driven perspective, that holds back progress.

Progress in modern times requires collaboration. People who know and understand parts, connecting and collaborating with people who know and understand other parts.

That is the way C++ design-by-committee works. It might not be perfect, but it's much better than having a King you cannot say 'no' to (i.e. the Vasa), or a King who always says 'no' to the people.

D needs more collaborators, and fewer kings.
May 30, 2018
On Tuesday, 29 May 2018 at 20:37:38 UTC, Tony wrote:
> On Tuesday, 29 May 2018 at 20:19:09 UTC, bachmeier wrote:
>
>> I don't think it's difficult to do that yourself. There's no need to have a formal split. One example is that it's really nice to have the GC available for part of the program and avoid it for another part. @nogc gives you a guarantee. Different variants of the language are a special case of this that is equivalent to annotating the entire program to restrict behavior. That's rarely desirable.
>
> What would be an example of a type of application (or maybe that should be "which type of domain" or "which type of developer") where you would want part of it to do garbage collection and the rest of it not to?

I believe that's how the Weka guys say they use D for their distributed, parallel filesystem, so you can add it to the list of applications others gave you:

https://www.youtube.com/watch?v=RVpaNM-f69s
June 01, 2018
On Tuesday, 29 May 2018 at 01:46:47 UTC, Walter Bright wrote:
> A cautionary tale we should all keep in mind.
>
> http://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0977r0.pdf
>
> https://www.reddit.com/r/programming/comments/8mq10v/bjarne_stroustroup_remeber_the_vasa_critique_of/
>
> https://news.ycombinator.com/item?id=17172057

I don't know if you guys have realized, but D is heading in a similar direction too. A lot of the language's features are half-baked, and parts of the language are known well by only a few people - including why certain things work the way they do.

An example would be when to use immutable or const, in or const (const scope?), ...
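
To illustrate the overlap being complained about, a small made-up example (note that the exact meaning of "in" has shifted across compiler releases, which is rather the point):

// Three ways to promise "this function won't mutate its argument":
void f(const int[] a) {}     // const: no mutation through this reference
void g(immutable int[] a) {} // immutable: no mutation by anyone, ever
void h(in int[] a) {}        // in: roughly const (historically const scope)

void main()
{
    int[] m = [1, 2, 3];
    f(m);         // fine: mutable converts implicitly to const
    h(m);         // fine for the same reason
    // g(m);      // error: mutable does not convert to immutable
    g([1, 2, 3]); // fine: an array literal is unique, so it can be immutable
}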

I'm afraid certain things are being introduced without careful consideration of how they affect or relate to previous features. Too many ways of doing the same thing, sometimes with slight differences, don't help the language's future.

Too many unfinished masterpieces.
June 01, 2018
With regard to having, say, a GUI written with garbage collection, and then needing non-garbage-collected code to process audio, could that not be done with GC D calling C? And, if there were a garbage-collected D (D for Applications) and a non-GC D (D for Systems Programming), couldn't one be linked with the other? And before you say "but it should all be together coming out of one compiler" - take a moment to Remember the Vasa!
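
Sketched concretely (process_block is a made-up routine; it is written here in D with C linkage so the example is self-contained, but in the scenario above it would be compiled from real C and linked in):

import std.stdio;

// The "systems" side: C linkage, no GC, no exceptions.
extern (C) @nogc nothrow
void process_block(float* samples, size_t count)
{
    foreach (i; 0 .. count)
        samples[i] *= 0.5f; // no allocation, no locks, no GC
}

void main()
{
    // The "applications" side: GC allocations are fine here.
    auto buffer = new float[](512);
    buffer[] = 1.0f;
    process_block(buffer.ptr, buffer.length);
    writeln("first sample: ", buffer[0]); // 0.5
}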


I don't seriously expect two D-ish compilers, but a split does seem to make more sense in light of automatic reference counting being added to a language that already has garbage collection, as well as the work to remove garbage collection from the standard library. Presumably at the beginning and for much of D's history, garbage collection was a premier selling point, along with OOP.

But with regard to the various compile-time features, function annotations, and other things that didn't exist years ago - have they resulted in noticeably faster programming and/or noticeably higher code quality for those utilizing them?
June 01, 2018
On Tuesday, 29 May 2018 at 23:55:07 UTC, Dave Jones wrote:
> On Tuesday, 29 May 2018 at 05:29:00 UTC, Ali wrote:
>> On Tuesday, 29 May 2018 at 03:56:05 UTC, Dmitry Olshansky wrote:
>>> It seems C++ is following the road of PL/I, which is growing the language way beyond the point anyone can understand or implement all of it.
>>
>> A key line from this paper
>>
>>>  We now have about 150 cooks; that’s not a good way to get a tasty and balanced meal.
>>
>> I don't think Bjarne is against adding features to C++, or even constantly adding features;
>> he even admits to supporting some of the features he mentions in his list.
>>
>> I think he is worried about:
>> 1. the huge number of features being targeted at once
>> 2. the features coming from different independent teams, making them less likely to be coherent
>
> Which is ironic considering...
>
> Ken Thompson: "Stroustrup campaigned for years and years and years, way beyond any sort of technical contributions he made to the language, to get it adopted and used. And he sort of ran all the standards committees with a whip and a chair. And he said “no” to no one. He put every feature in that language that ever existed. It wasn’t cleanly designed—it was just the union of everything that came along. And I think it suffered drastically from that."
>
> Donald Knuth: "Whenever the C++ language designers had two competing ideas as to how they should solve some problem, they said "OK, we'll do them both". So the language is too baroque for my taste."

A dysregulation of caution is more the rule than the exception in modern times. People say yes when they should have said no, and then, after the mess becomes evident, they react by saying no when they should be saying yes (to efforts to clear things up). Viz. the response before and after the credit crisis.

June 01, 2018
On Friday, 1 June 2018 at 18:18:17 UTC, Tony wrote:

> But with regard to the various compile-time features, function annotations, and other things that didn't exist years ago - have they resulted in noticeably faster programming and/or noticeably higher code quality for those utilizing them?

Yes, though you also can't compare a typical programmer from the D world with a typical guy from an enterprisey language world. The D guy I am begging to please consider copying memory just this once; the guy from a certain different community I am trying to encourage to consider using a profiler. Anyway, in one little comparison for a market data service, some D code I pulled together was quite a bit faster than the code written in a certain other enterprise language. D was just storing binary blobs and the other one was talking to MongoDB, so it's a totally unfair comparison. But what I wrote quickly in a few weeks, not caring about performance at all and feeling guilty about it, was 2,000x faster than the solution written in a more traditional language that took months to write. And there was almost no code, so with the exception of three hairy lines it was much more readable and clear.

Of course it's unfair to compare two different back ends, but that's also the point - there's a difference in values between different communities. Somebody told me that XYZ company, which developed his language, are the experts, and what does he know, so he will not even think about how it works underneath. The D programmer has a virtue of a different kind (and one must never forget that any virtue, taken to the extreme and out of balance with other virtues, becomes a vice) - she knows that it's just code, and whilst it's uncomfortable in the beginning, with persistence one can figure it out. "Who the hell do you think you are to write a C compiler?" That's still echoing today from the founding moment.

Values are perhaps much more important than features in determining whether one should use a modern, basically sound language. Is it a problem that if you install the latest DMD on Windows or Ubuntu it might not work the first time, and it definitely won't if you are trying to show someone?

That very much depends.

Are you someone, or do you work with people, put off by the first five minutes (and indeed repeated encounters) with the rough-around-the-edges aspects of D?

I was a bit daunted by the bleeding-edge reputation of Arch Linux, so I tried everything else, but when I found it I knew I had come home. For my own workstation, not something to use on critical infrastructure of course - though on the whole I would ideally be able to adapt to failure rather than try to make sure the impossible-to-prevent never happens.

Nassim Taleb says that something is resilient if it survives stress and antifragile if it benefits from stress and disorder. Well, sometimes it has been a pain in the neck, but I would far rather deal with regular small infelicities than less frequent big ones.

We are in an age of specialisation and resulting fragmentation - not just in programming but in many other fields too.  The edge in life is always shifting.  In 1998 I was nervous about moving to DE Shaw as a trader and an older chap with white hair (we were all younger then so Angus was an anomaly) said don't worry - you have some technical skills that most people won't have for a decade, so if it doesn't work out you will be fine.  And today it's the other way around - because I followed my interests I ended up knowing enough about quite a few different things where the package is not that common.  And yet there's a value in knowing the whole picture in one mind that can't be substituted for by a committee.

And what's true there is true with other skills too. So the infelicities of D actually serve as a moat to filter for the worthy and a training regime to keep you fit. Taleb talks about checking into an expensive hotel where there is a guy in a suit paying the bellboy to carry his bags upstairs. Then an hour later he sees the same guy in the gym lifting weights on a machine. And he has a point: to a certain extent, it's possible to use found challenges to stay fit, more than people are in the habit of doing today.

There's also a breadth found in the community that used to be the norm when I started programming in 1983 but has disappeared over the years. In which other community will I have dinner with a naval architect and come away with good ideas that I can steal and put to good use for what I am doing?

Anyway, beyond performance and the specific virtues of programmers from the D community (which may well be vices in an environment that ought to be based on different values), yes, I do think CTFE, introspection, sane templates, and code generation make a great deal of difference to code readability. There's much less code for a start, and it's easy to understand what it's doing. Versus a real example of a 4k file of copy-paste C# wrapping a native C interface. Microsoft advise using HTML templates for code generation - I thought at first it was surely an April Fool.
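
To give a flavour of what that buys (a toy example, not from any real codebase): a generic CSV writer that introspects a struct's fields at compile time, so none of the per-type boilerplate is written by hand, and the header is computed under CTFE.

import std.array : join;
import std.conv : to;
import std.stdio : writeln;

// Build a header row from T's field names, entirely at compile time.
enum csvHeader(T) = [__traits(allMembers, T)].join(",");

// Walk T's fields at compile time to serialise any plain struct.
string toCsvRow(T)(T value)
{
    string row;
    static foreach (i, name; __traits(allMembers, T))
    {
        if (i > 0) row ~= ",";
        row ~= to!string(__traits(getMember, value, name));
    }
    return row;
}

struct Trade { string symbol; double price; int qty; }

void main()
{
    static assert(csvHeader!Trade == "symbol,price,qty"); // checked at compile time
    writeln(csvHeader!Trade);
    writeln(toCsvRow(Trade("ABC", 42.5, 100))); // ABC,42.5,100
}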

And annotations are great - take a look at how Atila Neves uses them, for example. People weren't thrilled by them in the beginning, but they've proven very useful in practice. The Remedy Games talk by Ethan describing their use was very cool.
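
A small sketch of the pattern (the @Column annotation is made up for illustration, not taken from any particular library): tag fields with a UDA, then collect the metadata at compile time.

// A user-defined attribute is just an ordinary type.
struct Column { string name; }

struct User
{
    @Column("user_id")   int id;
    @Column("full_name") string name;
}

// Collect the tagged column names; everything here resolves at compile time.
string[] columnNames(T)()
{
    string[] result;
    static foreach (field; __traits(allMembers, T))
        static foreach (uda; __traits(getAttributes, __traits(getMember, T, field)))
            static if (is(typeof(uda) == Column))
                result ~= uda.name;
    return result;
}

void main()
{
    import std.stdio : writeln;
    writeln(columnNames!User()); // ["user_id", "full_name"]
}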

I think sometimes stress and unhappiness are caused by wanting something or someone to be what one is familiar with rather than what it intrinsically is. If you have problems using D at work, try to figure out a way to solve them, or work with others to improve things. But it could well be that it's not the right place for it. Maybe it's not technically a fit, but it could well be a question of values - the existing values of the group, or the values appropriate to the domain the company is working within. Possibly it could just be that we are all very impatient these days, whereas processes of social change take the time they take.

The economist Brynjolfsson has written about this from the perspective of organisational architecture. The PC was widely available by 1982, and yet in 1987 Solow, a renowned expert on growth, said that "computers are everywhere but in the productivity statistics". A decade later people were talking about the productivity miracle, and that was because of computers. But why did it take so long? Well, it took computers time to mature, and what Andy giveth, Bill kept on taking away. More than anything, though, it was because organisational architecture needed to change to truly benefit from new technologies. And we know how much most people like change - it takes a while as a result.

D is a very ambitious language. Therefore it's not surprising it develops more slowly, but that does not say much about its eventual destiny. Lots of things are insignificant nothings in the beginning, but some of them become very important indeed. I was writing to someone earlier about the revival in US manufacturing that was obvious enough in 2011 based on the outlook - perception of compound growth is very strange. Hence the phenomenon of the overnight success that took decades. It wasn't an overnight success - it's just that people weren't paying attention and only woke up to it when it passed a threshold of perception.

So things are as they are, and wishing or pretending otherwise won't make them different. But in the meantime, the fact that many places have values that get in the way of trying something a committee hasn't approved is a tragic waste of potential for the individual and company involved. But it's also an opportunity, because we are hiring in London and Hong Kong and I've never seen anything like it. I do my whole spiel and then it turns out I needn't have bothered. Excellent programmer. "So I can work with intelligent and virtuous people and write D at work?" Okay. There's an aspect of hyperbole in my telling of it, but not that much.

The main market test of D should be not popularity but whether people are using it to get real work done, and whether they find commercial benefits from their choices. Given a little patience, the rest will follow. You only need one leader in each sector to talk about it, and over time a few more will try. There's a process of contagion that looks slow to human perception in the beginning, and then suddenly doesn't - the overnight success that was actually decades in the making.

One can control what one does, but one can't control what others think of it. It is far better, then, to focus on making D the best language and ecosystem it can be (an intrinsic quality) rather than fretting about popularity. The broader world is just slow to catch on, but they do catch on eventually, particularly when conditions shift.

Growth in data-set sizes is meeting CPU manufacturers who have different constraints and things to worry about, plus stagnation in relative memory performance, and storage moving onto the motherboard. I think recognition of these shifting conditions is just a matter of time. I don't think it's a coincidence that Weka, the disruptive storage startup, used D successfully, or that performance-wise they utterly dominated their competition. William Gibson said the future is here already, just unevenly distributed. So exotic situations today become quotidian ones tomorrow. Maybe performance mattering a bit more than it used to is part of that. I don't yet work with big data, but even on middling data, a 2,000x performance improvement attained with zero effort or concern about performance - that's okay, and it does make a real difference.
June 02, 2018
On Friday, 1 June 2018 at 18:18:17 UTC, Tony wrote:
> But with regard to the various compile-time features, function annotations, and other things that didn't exist years ago - have they resulted in noticeably faster programming and/or noticeably higher code quality for those utilizing them?

These are exactly the things that enable us to bring a very large code base to D. It's not just faster or better; they make the difference between impossible and possible. And we are engineers needing to solve real-world problems, not CS nerds who find these features merely interesting from a theoretical perspective. Stay tuned for an announcement...