December 04, 2019
On Tuesday, 3 December 2019 at 21:11:49 UTC, Andrei Alexandrescu wrote:
> It would be a mistake to presuppose that hex string literals are a good precedent, however. Heredocs have no library alternative.

An alternative can be any other kind of string literal or an import expression.
December 04, 2019
On Tuesday, 3 December 2019 at 21:20:57 UTC, H. S. Teoh wrote:
> 2) It places the blame of the syntax highlighting issue at the wrong
>    place: syntax highlighters should be fixed, not the other way round.

Supporting them requires efficient memory management. Wait, a highlighter that requires memory management? There is also the usual tradeoff between space, complexity and time; maybe a hashtable and a CSPRNG. In practice the only reasonable option is usually to not implement delimited strings at all, but then people here say that such a highlighter "doesn't support D". So it's not really a problem for the highlighter: delimited strings simply don't exist there, and users can opt in by choosing a different highlighter.
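To make the memory-management point concrete, here is a minimal sketch (in Python, purely for illustration) of what a highlighter must do for a D-style `q"ID ... ID"` delimited string: it has to store the opening identifier it just read and search for it later, which a fixed table-driven tokenizer cannot do.

```python
def scan_heredoc(src, pos):
    """Scan a D-style heredoc q"ID ... ID" starting at pos.

    The scanner must remember the delimiter it just read; storing an
    arbitrary-length identifier is the memory management a purely
    table-driven highlighter lacks.
    """
    assert src.startswith('q"', pos)
    i = pos + 2
    j = src.index("\n", i)
    delim = src[i:j]                         # remember the opening identifier
    end = src.index("\n" + delim + '"', j)   # find the matching closing delimiter
    body = src[j + 1:end]
    return body, end + len(delim) + 2        # body, and position past the closing quote

body, end = scan_heredoc('q"EOS\nhello world\nEOS" rest', 0)
```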
December 04, 2019
On 12/3/19 5:11 PM, Dennis wrote:
> What criteria should a language feature have to be candidate for removal, and why don't context-sensitive string literals fit those criteria? What sources of language complexity can be removed instead?

That got me thinking. Here's what I'd opine.

A good DIP creates a scientific argument. It would have the general attitude of building, through a series of factual statements, a hypothesis that is convincing. A neutral person with the proper background would read the facts and reach the conclusion as much as the author. (In contrast, a DIP that is not scientific would attempt to use qualitative arguments and rhetoric in an attempt to create an opinion trend.)

Consider someone reads a DIP proposing the removal of here docs containing facts such as these:

* "We have analyzed x languages and of these, we found y historical issues related to mistaken or poor performance implementation of heredocs. [... details ...]"

* "Across x editors, we discovered that x1 do not implement here docs for any of their supported languages, x2 do not implement them for D, and x3 implement them with severe performance bottlenecks. [... details ...]"

" "In the D compiler issue, we found x bug reports issued over y years. They took z days on average to fix. x1 issues are still open. [... details ...]"

* "The code dedicated to heredocs in the D reference parser is y lines long, which constitutes z% of the entire lexer. Lexing of heredocs is t% slower than any other equivalent strings, revealing a serious performance bottleneck. [... details ...]"

With such arguments at hand, a proposal would build a powerful argument that anyone can easily verify and take into consideration. No need for argumentation, explanations, etc. Conversely, if one does such an investigation and gets no meaningful results, the conclusion that heredocs are okay as they are would also be immediate.

Now it may be argued that all of this is hard work, and of high risk - even if the DIP is well-argued, it could be rejected. Also, is the result of the work (a small language simplification) worth the effort?

Sadly I know of no solution to this. What I can say is that it's the main dilemma tormenting graduate students doing research. A colleague of mine in the PhD program said he had any number of ideas to research, but the cognitive load of putting work into something that might not pan out was paralyzing him, so he would end up doing nothing for long periods of time. He never finished his degree. For all I know he was smarter and better than many who did graduate.
December 04, 2019
On 12/3/2019 2:11 PM, Dennis wrote:
> Why is it a bad DIP?

I think Andrei covered that fairly well.

> What criteria should a language feature have to be candidate for removal,

This would be a good opening for a separate thread.

> and why don't context-sensitive string literals fit those criteria?

The only real cost identified is poor support for syntax highlighting in some text editors. On the other hand, heredocs are a common language feature, and the other methods of achieving the same thing are so clumsy that people rarely have the stomach to use them.

> What sources of language complexity can be removed instead?

This would be a good opening for a separate thread.


December 04, 2019
On 12/4/2019 5:35 AM, Timon Gehr wrote:
> On 04.12.19 12:10, Dennis wrote:
>>
>>
>> Now my proposed next one is:
>>
>> - Small feature: context-sensitive string literals
>>    Small problem: accidentally bumps the complexity class of D's lexical grammar.
> 
> A small fix for this small problem is to just say in the specification that heredoc identifiers may not exceed 1e100 characters. ;)
> 
> Another fix could be to just go over the language specification and replace all wrongly applied CS terms by a short explanation of what is actually going on. 

Another case of my lack of academic CS training showing. I would appreciate it if qualified people would indeed go through the D spec and correct misuse of the terms.

I know Timon likes to excoriate my conflation of "assert" and "assume", which have precise CS definitions. I'm sure there's plenty more in the spec.

> (In practice, when Walter says D's grammar is context-free, what he means is
> that parsing does not depend on semantic analysis on a prefix of the code, a
> property that C++ has which implies context-sensitivity and is usually
> abbreviated this way, and Walter's aim was to contrast D to this.)

That's right. I often express it in even simpler (but less precise) terms - a symbol table is not required to parse it. Yes, I know the pedant will point out that heredoc has a symbol table with exactly one symbol in it, but please, allow me to concede that in advance and spare us :-)
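Timon's point about the complexity class can be illustrated with a backreference: the closing delimiter of a heredoc must echo the opening one, and matching "the same identifier again" is precisely what a regular (finite-state) lexer cannot express. A small sketch, in Python rather than D, just for illustration:

```python
import re

# A D-style heredoc q"ID ... ID": the closing delimiter must repeat the
# opening one. The backreference \1 is the non-regular part; regex engines
# support it as an extension beyond regular languages.
heredoc = re.compile(r'q"(\w+)\n(.*?)\n\1"', re.DOTALL)

m = heredoc.match('q"EOS\nhello\nEOS"')
```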
December 04, 2019
On Wednesday, 4 December 2019 at 21:57:00 UTC, Andrei Alexandrescu wrote:
> A good DIP creates a scientific argument. It would have the general attitude of building, through a series of factual statements, a hypothesis that is convincing. A neutral person with the proper background would read the facts and reach the conclusion as much as the author. (In contrast, a DIP that is not scientific would attempt to use qualitative arguments and rhetoric in an attempt to create an opinion trend.)

That will prevent qualitative incremental improvements. You cannot make quantitative arguments without very large amounts of data... there is no such dataset, only GitHub.

If the DIP had provided an argument for an alternative here-document syntax that was easier to parse, then it is probable that there would have been few objections to it. The migration could even have been automated.

There is really no use in pretending that language changes are apolitical. They are usually inherently political.

December 04, 2019
On Wednesday, 4 December 2019 at 22:57:21 UTC, Walter Bright wrote:
> Another case of my lack of academic CS training showing. I would appreciate it if qualified people would indeed go through the D spec and correct misuse of the terms.

I don't think a spec has to use a lot of CS terms; it's probably better to describe things in language that most users can understand.

Like, the other day I got confused by the usage of the term "covariant" in
https://dlang.org/spec/function.html

It says stuff like "a pure function … is covariant with an impure function", "Nothrow functions are covariant with throwing ones.", "Safe functions are covariant with trusted or system functions." and "System functions are not covariant with trusted or safe functions."

This doesn't tell me anything even if I happened to remember what the term means. My understanding is that covariant means that if T(A) is related to T'(A') then T<:T' and A<:A', whereas contravariant means that one of the subtyping relations points the other way.

I cannot fix it either, since I don't know what was meant...

December 04, 2019
There are a lot of DIPs in the pipeline, and this looks highly unlikely to get traction, based on the comments. I suggest withdrawing it.

December 04, 2019
On Wednesday, 4 December 2019 at 23:35:09 UTC, Ola Fosheim Grøstad wrote:
> Like, the other day I got confused by the usage of the term "covariant" in

In that context, if you replace "covariant with" with "can act as a substitute for" it would work pretty well.
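The same substitution reading shows up in ordinary return-type covariance. A minimal sketch (in Python with type hints; the Animal/Cat names are invented for illustration):

```python
from typing import Callable

# Hypothetical types, purely for illustration.
class Animal: pass
class Cat(Animal): pass

def make_cat() -> Cat:
    return Cat()

# "Covariant with" read as "can act as a substitute for": a function
# returning the more specific Cat may stand in wherever a function
# returning Animal is expected.
factory: Callable[[], Animal] = make_cat
pet = factory()
```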
December 04, 2019
On Wednesday, 4 December 2019 at 23:52:54 UTC, Adam D. Ruppe wrote:
> On Wednesday, 4 December 2019 at 23:35:09 UTC, Ola Fosheim Grøstad wrote:
>> Like, the other day I got confused by the usage of the term "covariant" in
>
> In that context, if you replace "covariant with" with "can act as a substitute for" it would work pretty well.

That is much easier to understand, for sure. I think the best parts of the documentation are where examples are provided.