August 14, 2019
On 8/13/2019 9:32 AM, H. S. Teoh wrote:
> So you see, the seemingly insignificant choice of <> as template
> argument list delimiters has far-reaching consequences.  In retrospect,
> it was a bad design decision.
There were some who warned about this from the beginning, but they were overruled. There was a prevailing attitude that implementation complexity was not relevant, only the user experience.

Another parsing problem exists in C as well:

  A * B;

Is that a declaration of B or is it a multiply expression? It cannot be determined until the compiler knows what A is, which requires semantic analysis.

You might think D has the same problem, but I added a simple rule to D:

  "If it can be parsed as a declaration, it's a declaration."

Which means A*B; is always a declaration, even if A is later determined to be a variable, in which case the compiler gives an error.

The reason this works is that A*B; is an expression with no effect, and hence people don't write such code. You might then think, "what if * is an overloaded operator with side effects?" There's another rule for that: overloading of arithmetic operators should be used to implement arithmetic, which shouldn't have side effects.

(If you absolutely must have operator overloading, you can write (A*B); but such is strongly discouraged.)
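
A minimal sketch of the rule in action (the names and declarations here are made up for illustration, not taken from the compiler):

  struct A { }

  A * p;        // a declaration: p is a pointer to A

  void f(double A, double B)
  {
      A * B;    // still parsed as a declaration of B; the compiler then
                // errors out, because this A is a variable, not a type
      (A * B);  // unambiguously an expression (though D will still
                // reject a no-effect expression like this one)
  }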

The fact that in 20 years pretty much nobody has noticed that D operates this way is a testament to it being the correct decision: it "just works".
August 14, 2019
On Wed, Aug 14, 2019 at 11:51:55AM -0700, Walter Bright via Digitalmars-d wrote:
> On 8/13/2019 9:32 AM, H. S. Teoh wrote:
> > So you see, the seemingly insignificant choice of <> as template argument list delimiters has far-reaching consequences.  In retrospect, it was a bad design decision.
>
> There were some who warned about this from the beginning, but they were overruled. There was a prevailing attitude that implementation complexity was not relevant, only the user experience.

Having grown up in that era, I can sympathize with that attitude, but I must say that given today's hindsight, citing user experience as the reason for choosing <> as delimiters is very ironic indeed. :-D


> Another parsing problem exists in C as well:
> 
>   A * B;
> 
> Is that a declaration of B or is it a multiply expression? It cannot be determined until the compiler knows what A is, which requires semantic analysis.
> 
> You might think D has the same problem, but I added a simple rule to D:
> 
>   "If it can be parsed as a declaration, it's a declaration."
> 
> Which means A*B; is always a declaration, even if A is later determined to be a variable, in which case the compiler gives an error.
> 
> The reason this works is that A*B; is an expression with no effect, and hence people don't write such code. You might then think, "what if * is an overloaded operator with side effects?" There's another rule for that: overloading of arithmetic operators should be used to implement arithmetic, which shouldn't have side effects.
> 
> (If you absolutely must have operator overloading, you can write
> (A*B); but such is strongly discouraged.)
> 
> The fact that in 20 years pretty much nobody has noticed that D operates this way is a testament to it being the correct decision: it "just works".

Wow.  I never even noticed that, all this time! :-D

It also goes to strengthen the argument that operator overloading should not be abused the way it has been in C++.  Choosing << and >> for I/O seemed like a clever thing to do at the time, but it led to all sorts of silliness like unexpected precedence and ambiguity with actual arithmetic shift operations, necessitating the proliferation of parentheses around I/O chains (which, arguably, defeats the aesthetics of "<<" and ">>" in the first place).  And don't even mention that Boost monstrosity that uses operator overloading for compile-time regexen. Ick. The very thought makes me cringe.


T

-- 
Why can't you just be a nonconformist like everyone else? -- YHL
August 14, 2019
On 8/14/2019 12:08 PM, H. S. Teoh wrote:
> It also goes to strengthen the argument that operator overloading should
> not be abused the way it has been in C++.  Choosing << and >> for I/O
> seemed like a clever thing to do at the time, but it led to all sorts of
> silliness like unexpected precedence and ambiguity with actual
> arithmetic shift operations, necessitating the proliferation of
> parentheses around I/O chains (which, arguably, defeats the aesthetics
> of "<<" and ">>" in the first place).

Worse, you cannot do anything that requires persistent state, because for `A<<B`, if A sets some state, an exception can be thrown before B is executed, and now the iostream state is hosed. It isn't thread-safe, either.


> And don't even mention that Boost
> monstrosity that uses operator overloading for compile-time regexen.
> Ick. The very thought makes me cringe.

That was one of the examples that convinced me to make it hard to do such things with D.
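
For reference, a minimal sketch of D's scheme (the Vec type here is made up): binary operators all funnel through one templated opBinary hook, and the template constraint naturally steers overloads toward arithmetic-only semantics.

  struct Vec
  {
      double x, y;

      // one templated hook covers the whole family of binary
      // operators; the constraint restricts it to arithmetic
      Vec opBinary(string op)(Vec rhs)
          if (op == "+" || op == "-" || op == "*")
      {
          return mixin("Vec(x " ~ op ~ " rhs.x, y " ~ op ~ " rhs.y)");
      }
  }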
August 14, 2019
On 8/13/2019 2:13 AM, Jacob Carlborg wrote:
> I suggest you look into incremental compilation, if you haven't done that already.

Back in the 90's, when dinosaurs ruled the Earth, it was demanded of us why our linker (Optlink) didn't do incremental linking, as Microsoft had just released incremental linking support and done a good job of marketing it.

The answer, of course, was that Optlink would do a full link faster than MS-Link would do an incremental link.

The other (perennial) problem with incremental work was amply illustrated by MS-Link - it would produce erratically broken executables because of mistakes in the dependency management. Most people just kinda gave up and did full links because they could get reliable builds that way.

Not correctly handling every single dependency means you'll get unrepeatable builds, which is a disaster with dev tools.

The solution I've always focused on was doing the full builds faster.

Although it is not implemented, the design of D is such that lexing, parsing, semantic analysis, and code generation can be done concurrently. Running the lex/parse in parallel across all the imports can be a nice win.
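
A hypothetical sketch of that last idea (Module and lexAndParse are stand-ins, not the actual compiler's API) - since files don't depend on each other until semantic analysis, the lex/parse step parallelizes trivially:

  import std.file : readText;
  import std.parallelism : parallel;

  struct Module { /* AST root, etc. */ }

  Module lexAndParse(string source)
  {
      // tokenize and build the AST; no semantic analysis here,
      // so there are no cross-file dependencies yet
      return Module();
  }

  Module[] frontEnd(string[] files)
  {
      auto mods = new Module[files.length];
      foreach (i, f; parallel(files))
          mods[i] = lexAndParse(readText(f));
      return mods;
  }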
August 14, 2019
On 8/12/2019 2:40 AM, Atila Neves wrote:
> Indeed.

Building a syntax check into one's code editor would probably double compile speeds because half the errors would be detected before you even wrote the file out :-)

August 15, 2019
On Friday, 9 August 2019 at 13:17:02 UTC, Atila Neves wrote:
> From experience, it makes me work much slower if I don't get results in less than 100ms. If I'm not mistaken, IBM did a study on how much faster people work with short feedback cycles; I read it once but never managed to find it again.

This is a bit of an exaggeration, right? We're talking about the speed of a human blink.

I can't see a great difference between 1 sec and 100 ms "while working". Of course, someone could say that if you did 10 consecutive compilations, then 10 x 100 ms = 1 sec, while in the other case it would be 10 seconds, but this is extreme; you usually take some time to change code and then compile.

But overall it wouldn't bother me at all.

Now imagine waiting ~40 seconds just to open any solution in Visual Studio (mostly used for projects where I work), on a NOT so old machine, and like 4~10 seconds every time you run an app while debugging.

That is the meaning of pain.

Matheus.
August 15, 2019
On Thursday, 15 August 2019 at 00:45:06 UTC, matheus wrote:
> I can't see a great difference between 1 sec and 100 ms "while working".

It is pretty frustrating and easy to lose your train of thought as things make you wait.

Though I find 1 second still generally acceptable, once we get into like 3 seconds it starts getting absurd.
August 15, 2019
On Wednesday, 14 August 2019 at 22:56:30 UTC, Walter Bright wrote:
> On 8/12/2019 2:40 AM, Atila Neves wrote:
>> Indeed.
>
> Building a syntax check into one's code editor would probably double compile speeds because half the errors would be detected before you even wrote the file out :-)

I already have that with flycheck in emacs. My main concern is running tests: specifically, compiling/interpreting only the tests that are needed, and getting fast feedback on whether I broke anything or not.
August 15, 2019
On Thursday, 15 August 2019 at 00:45:06 UTC, matheus wrote:
> On Friday, 9 August 2019 at 13:17:02 UTC, Atila Neves wrote:
>> From experience, it makes me work much slower if I don't get results in less than 100ms. If I'm not mistaken, IBM did a study on how much faster people work with short feedback cycles; I read it once but never managed to find it again.
>
> This is a bit of an exaggeration, right?

No, no exaggeration.

> We're talking about the speed of a human blink.

Apparently blinks are slower than that (I just googled). It doesn't matter though, since it has an effect. As I mentioned before, latencies over 10ms on an audio interface are noticeable by musicians when they play live through effects, and that's an order of magnitude removed.

> I can't see a great difference between 1 sec and 100 ms "while working".

I can. It makes me want to punch the wall.

> Of course, someone could say that if you did 10 consecutive compilations, then 10 x 100 ms = 1 sec, while in the other case it would be 10 seconds, but this is extreme; you usually take some time to change code and then compile.

No, it's not that. It's that it interrupts my train of thought. I can work faster if I get faster feedback.

> But overall it wouldn't bother me at all.
>
> Now imagine waiting ~40 seconds just to open any solution in Visual Studio (mostly used for projects where I work), on a NOT so old machine, and like 4~10 seconds every time you run an app while debugging.
>
> That is the meaning of pain.

I can take waiting 40s once a day. I can't take waiting >1s every time I build though.



August 15, 2019
On Thursday, 15 August 2019 at 15:00:43 UTC, Atila Neves wrote:
> On Thursday, 15 August 2019 at 00:45:06 UTC, matheus wrote:
>> On Friday, 9 August 2019 at 13:17:02 UTC, Atila Neves wrote:
>>> From experience, it makes me work much slower if I don't get results in less than 100ms. If I'm not mistaken, IBM did a study on how much faster people work with short feedback cycles; I read it once but never managed to find it again.
>>
>> This is a bit of an exaggeration, right?
>
> No, no exaggeration.

I don't know any compiler that's that fast, definitely not D, and not even Jai.

>> We're talking about the speed of a human blink.
>
> Apparently blinks are slower than that (I just googled). It doesn't matter though, since it has an effect. As I mentioned before, latencies over 10ms on an audio interface are noticeable by musicians when they play live through effects, and that's an order of magnitude removed.
>
>> I can't see a great difference between 1 sec and 100 ms "while working".
>
> I can. It makes me want to punch the wall.

The difference is noticeable, but really not to that point. What do you do when you have to wait 30 minutes? I guess some people are just more trigger-happy and less patient than others.

>> Of course, someone could say that if you did 10 consecutive compilations, then 10 x 100 ms = 1 sec, while in the other case it would be 10 seconds, but this is extreme; you usually take some time to change code and then compile.
>
> No, it's not that. It's that it interrupts my train of thought. I can work faster if I get faster feedback.

Wouldn't compiler errors do the same thing, if not worse? Not only do they interrupt your train of thought, they require you to think about something else entirely. What do you do when you get a compiler error?

>> But overall it wouldn't bother me at all.
>>
>> Now imagine waiting ~40 seconds just to open any solution in Visual Studio (mostly used for projects where I work), on a NOT so old machine, and like 4~10 seconds every time you run an app while debugging.
>>
>> That is the meaning of pain.
>
> I can take waiting 40s once a day. I can't take waiting >1s every time I build though.

I feel like you don't have to wait, though: you can continue to do other things while it is compiling. I suppose some people aren't as good at multitasking.