September 25, 2014
On 9/24/14, 3:20 AM, Jacob Carlborg wrote:
> On 24/09/14 07:37, Walter Bright wrote:
>
>> So help out!
>
> You always say we should help out instead of complaining. But where are
> all the users who want C++ support? Let them implement it instead, and
> let us focus on the actual D users we have now.
>

Maybe Facebook needs D to interface with C++?
September 25, 2014
On Thursday, 25 September 2014 at 18:51:13 UTC, H. S. Teoh via
Digitalmars-d wrote:
> You don't know if
> recompiling after checking out a previous release of your code will
> actually give you the same binaries that you shipped 2 months ago.

To be clear, even if nothing changed, re-running the build may
produce different output.  This is actually a really hard problem
- some build tools actually use entropy when producing their
outputs, and as a result running the exact same tool with the
same parameters in the same [apparent] environment will produce a
subtly different output.  This may be intended (address
randomization) or semi-unintentional (generating a unique GUID
inside a PDB so the debugger can validate the symbols match the
binaries.)  Virtually no build system in use can guarantee
identical output in all cases, so you end up making trade-offs -
and if you don't really understand those trade-offs, you won't
trust your build system.

What else may mess up the perfection of repeatability of your
builds?  Environment variables, the registry (on Windows), any
source of entropy (the PRNG, the system clock/counters, any
network access), etc.
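
Here's a toy D example (not from any real project) of how easy it
is to bake entropy into your own output - the compiler will happily
do it for you if you ask:

    // repro.d - the build timestamp gets baked into the binary, so
    // two otherwise identical builds produce different executables
    // (unless both compilations land on the same second).
    import std.stdio;

    // __TIMESTAMP__ expands at compile time, so every compilation
    // embeds a different string into the executable.
    enum buildStamp = __TIMESTAMP__;

    void main()
    {
        writeln("built at: ", buildStamp);
    }

Compile it twice and diff the binaries: they differ even though
nothing about the source changed.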

Build engineers themselves don't trust the build tooling because
for as long as we have had the tooling, no one has invested
enough into knowing what is trustworthy or how to make it that
way.  It's like always coding in a language that isn't typesafe
but which "gets the job done."  Until you've spent some time in a
typesafe environment, maybe you can't realize the benefit.
You'll say "well, now I have to type a bunch more crap, and in
most cases it wouldn't have helped me anyway" right up until you
are sitting there at 3AM the night before shipping the product
trying to track down why your Javascript program - I mean build
process - isn't doing what you thought it did.  Just because you
CAN build a massive software system in Javascript doesn't mean
the language is per se good - it may just mean you are sufficiently
motivated to suffer through the pain.  I'd rather make the whole
experience *enjoyable* (hello, TypeScript?)

Different people will make different tradeoffs, and I am not here
to tell Andrei or Walter that they *need* a new build system for
D to get their work done - they don't right now.  I'm more
interested in figuring out how to provide a platform to realize
the benefits for builds that we have for our modern languages, and
then leveraging that in new ways (like better sharing between the
compiler, debugger, IDEs, test and packaging.)

September 25, 2014
On 9/25/2014 6:49 AM, Andrei Alexandrescu wrote:
> FWIW I'm glad there are no random name changes. I've
> recently used Rust a bit and the curse of D users as of 6-7 years ago reached
> me: most code I download online doesn't compile for obscure reasons, it's nigh
> impossible to figure out what the fix is from the compiler error message,
> searching online finds outdated documentation that tells me the code should
> work, and often it's random name changes (from_iterator to from_iter and such,
> or names are moved from one namespace to another).

The name changes cause much disruption and are ultimately pointless.


> For the stuff we eliminate we should provide good error messages that recommend
> the respective idiomatic solutions.

That's normal practice already.

September 25, 2014
On Thu, Sep 25, 2014 at 07:19:14PM +0000, Cliff via Digitalmars-d wrote:
> On Thursday, 25 September 2014 at 18:51:13 UTC, H. S. Teoh via Digitalmars-d wrote:
> >You don't know if recompiling after checking out a previous release of your code will actually give you the same binaries that you shipped 2 months ago.
> 
> To be clear, even if nothing changed, re-running the build may produce different output.  This is actually a really hard problem - some build tools actually use entropy when producing their outputs, and as a result running the exact same tool with the same parameters in the same [apparent] environment will produce a subtly different output.  This may be intended (address randomization) or semi-unintentional (generating a unique GUID inside a PDB so the debugger can validate the symbols match the binaries.)  Virtually no build system in use can guarantee identical output in all cases, so you end up making trade-offs - and if you don't really understand those trade-offs, you won't trust your build system.

Good point.


> What else may mess up the perfection of repeatability of your builds?  Environment variables, the registry (on Windows), any source of entropy (the PRNG, the system clock/counters, any network access), etc.

Well, obviously if your build process involves input from outside, then it's impossible to have 100% repeatable builds. But at least we can do it (and arguably *should* do it) when there are no outside inputs.

I actually have some experience in this area, because part of the website project I described in another post involves running gnuplot to plot statistics of a certain file by connecting to an SVN repository and parsing its history log. Since the SVN repo is not part of the website repo, obviously the build of a previous revision of the website will never be 100% repeatable -- it will always generate the plot for the latest history rather than what it would've looked like at the time of the previous revision. But for the most part, this doesn't matter.

I *did* find that imagemagick was messing up my SCons scripts because it would always insert timestamped metadata into the generated image files, which caused SCons to always see the files as changed and trigger redundant rebuilds. This is also an example of where sometimes you do need to override the build system's default mechanisms to tell it to chill out and not rebuild that target every time. I believe SCons lets you do this, though I solved the problem another way -- by passing options to imagemagick to suppress said metadata.

Nevertheless, I'd say that overall, builds should be reproducible by default, and the user should tell the build system when it doesn't have to be -- rather than the other way round. Just like D's motto of safety first, unsafe if you ask for it.
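
As a sketch of that principle (names and structure made up for
illustration, not taken from any real build tool): the volatile
part of the output is pinned by default, and reaching for the wall
clock is an explicit opt-in:

    import std.datetime : Clock;
    import std.process  : environment;

    // Deterministic by default: the stamp comes from a value the
    // user controls (a hypothetical BUILD_STAMP variable) or a
    // fixed fallback; the wall clock is used only when the caller
    // explicitly opts in to a non-reproducible build.
    string buildStamp(bool allowWallClock = false)
    {
        auto pinned = environment.get("BUILD_STAMP"); // null if unset
        if (pinned !is null)
            return pinned;
        if (allowWallClock)
            return Clock.currTime().toISOExtString();
        return "1970-01-01T00:00:00Z";
    }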


[...]
> Different people will make different tradeoffs, and I am not here to tell Andrei or Walter that they *need* a new build system for D to get their work done - they don't right now.  I'm more interested in figuring out how to provide a platform to realize the benefits for builds that we have for our modern languages, and then leveraging that in new ways (like better sharing between the compiler, debugger, IDEs, test and packaging.)

Agreed. I'm not saying we *must* replace the makefiles in dmd / druntime / phobos... I'm speaking more categorically: build systems in general have advanced beyond the days of make, and it's high time people started learning about them. While you *could* write an entire application in assembly language (I did), times have moved on, and we now have far more suitable tools for the job.


T

-- 
"Computer Science is no more about computers than astronomy is about telescopes." -- E.W. Dijkstra
September 25, 2014
On 9/25/2014 4:08 AM, Don wrote:
> [...]

I agree with Andrei: it's a good list. I'll move these issues to the next step in the removal process.


> I'd also like to see us getting rid of those warts like assert(float.nan) being
> true.

See discussion:

https://issues.dlang.org/show_bug.cgi?id=13489
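
For anyone who hasn't run into it, the wart in miniature:

    void main()
    {
        // NaN converts to true in a boolean context (it compares
        // unequal to zero), so this assert passes even though NaN
        // usually means "something already went wrong".
        assert(float.nan);
        assert(float.nan != 0);  // same reason the line above passes
    }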


> Ask yourself, if D had no users other than you, so that you could break *anything*,
> what would you remove? Make a wishlist, and then find out what's possible.
> Remember, when you did that before, we successfully got rid of 'bit', and there
> was practically no complaint.

Top of my list would be the auto-decoding behavior of std.array.front() on character arrays. Every time I'm faced with that I want to throw a chair through the window.
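
Concretely, the behavior in question (a minimal illustration, with
std.string.representation shown as one way to sidestep it):

    import std.array  : front;
    import std.string : representation;

    // front() on a char array auto-decodes to a dchar (a whole code
    // point), unlike every other array type, which yields its
    // element type.
    static assert(is(typeof("hello".front) == dchar));
    static assert(is(typeof([1, 2, 3].front) == int));

    // Working on the raw code units avoids the decoding.
    static assert(is(typeof("hello".representation) == immutable(ubyte)[]));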

Probably second would be having const and purity by default.

September 25, 2014
On 9/25/2014 10:45 AM, Andrei Alexandrescu wrote:
> On 9/25/14, 10:26 AM, Atila Neves wrote:
>> On Thursday, 25 September 2014 at 13:50:10 UTC, Jacob Carlborg wrote:
>>> On 25/09/14 09:38, Atila Neves wrote:
>>>
>>>> Here's one: having to manually generate the custom main file for
>>>> unittest builds. There's no current way (or at least there wasn't when I
>>>> brought it up in the dub forum) to tell it to autogenerate a D file from
>>>> a dub package and list it as a dependency of the unittest build.
>>>
>>> Hmm, I haven't used Dub to run unit tests. Although, DMD has a "-main"
>>> flag that adds an empty main function.
>>
>> I don't want an empty main function. I want the main function and the
>> file it's in to be generated by the build system.
>
> Why would the focus be on the mechanism instead of the needed outcome? -- Andrei


I've found:

   -main -unittest -cov

to be terribly convenient when developing modules. Should have added -main much sooner.
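
In case the flags are new to anyone, a minimal example (file and
function names are made up):

    // mymodule.d - a module with no main() of its own
    module mymodule;

    int twice(int x) { return x * 2; }

    unittest
    {
        assert(twice(2) == 4);
        assert(twice(-3) == -6);
    }

With

    dmd -main -unittest -cov mymodule.d && ./mymodule

-main injects an empty main() so the module links on its own,
-unittest compiles in the unittest blocks and runs them at startup,
and -cov writes a mymodule.lst file with per-line execution counts.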
September 25, 2014
On Thu, Sep 25, 2014 at 12:40:28PM -0700, Walter Bright via Digitalmars-d wrote:
> On 9/25/2014 4:08 AM, Don wrote:
[...]
> >Ask yourself, if D had no users other than you, so that you could break *anything*, what would you remove? Make a wishlist, and then find out what's possible.  Remember, when you did that before, we successfully got rid of 'bit', and there was practically no complaint.
> 
> Top of my list would be the auto-decoding behavior of std.array.front() on character arrays. Every time I'm faced with that I want to throw a chair through the window.

LOL... now I'm genuinely curious what Andrei's comment on this is. :-P Last I heard, Andrei was against removing autodecoding.


> Probably second would be having const and purity by default.

Some of this could be mitigated if we expanded the sphere of attribute inference. I know people hated the idea of auto == infer attributes, but I personally support it.

Perhaps an alternate route to that is to introduce an @auto (or whatever you wanna call it, @infer, or whatever) and promote its use in D code, then slowly phase out functions not marked with @infer. After a certain point, @infer will become the default, explicit @infer's will become no-ops, and then they can be dropped altogether. This is very ugly, though. I much prefer extending auto to mean infer.
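
For what it's worth, templates already get this treatment today;
the contrast below is what an @infer/@auto marker (hypothetical, as
above) or inference-by-default would extend to plain functions:

    // Attributes of square!T are inferred from its body, so it can
    // be called from annotated code without writing any of them out.
    T square(T)(T x) { return x * x; }

    int caller(int x) pure nothrow @safe
    {
        return square(x);  // fine: square!int is inferred pure nothrow @safe
    }

    // A non-template function gets no such help; everything must be
    // spelled out by hand.
    int squareByHand(int x) pure nothrow @safe { return x * x; }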


T

-- 
The peace of mind---from knowing that viruses which exploit Microsoft system vulnerabilities cannot touch Linux---is priceless. -- Frustrated system administrator.
September 25, 2014
On 2014-09-25 21:02, Ary Borenszweig wrote:

> Maybe Facebook needs D to interface with C++?

But I only see Andrei working on that. Don't know how much coding he does in practice for C++ compatibility.

-- 
/Jacob Carlborg
September 25, 2014
On Thu, Sep 25, 2014 at 12:51:45PM -0700, Walter Bright via Digitalmars-d wrote: [...]
> I've found:
> 
>    -main -unittest -cov
> 
> to be terribly convenient when developing modules. Should have added -main much sooner.

Yeah, I do that all the time nowadays when locally testing Phobos fixes. In the past I'd have to write yet another empty main() in yet another temporary source file, just to avoid having to wait for the entire Phobos testsuite to run after a 1-line code change.


T

-- 
"How are you doing?" "Doing what?"
September 25, 2014
On 2014-09-25 20:49, H. S. Teoh via Digitalmars-d wrote:

> The compiler and compile flags are inputs to the build rules in SCons.
>
> In my SCons projects, when I change compile flags (possibly for a subset
> of source files), it correctly figures out which subset (or the entire
> set) of files needs to be recompiled with the new flags. Make fails, and
> you end up with an inconsistent executable.
>
> In my SCons projects, when I upgrade the compiler, it recompiles
> everything with the new compiler. Make doesn't detect a difference, and
> if you make a change and recompile, suddenly you got an executable 80%
> compiled with the old compiler and 20% compiled with the new compiler.
> Most of the time it doesn't make a difference... but when it does, have
> fun figuring out where the problem lies. (Or just make clean; make yet
> again... the equivalent of which is basically what SCons would have done
> 5 hours ago.)
>
> In my SCons projects, when I upgrade the system C libraries, it
> recompiles everything that depends on the updated header files *and*
> library files. In make, it often fails to detect that the .so's have
> changed, so it fails to relink your program. Result: your executable
> behaves strangely at runtime due to the wrong .so being linked, but the
> problem vanishes once you do a make clean; make.

I see, thanks for the explanation.
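
If I understand it right, the idea is to key the rebuild decision
on a signature of everything that affects the output, not just file
timestamps. A rough sketch (made-up helper, not how SCons is
actually implemented):

    import std.digest.md : md5Of;
    import std.file : readText;

    // Source contents, compiler identity and flags all feed the
    // signature; change any of them and the signature changes, even
    // though the source file's timestamp is untouched.
    ubyte[16] buildSignature(string sourceFile,
                             string compilerVersion,
                             string[] flags)
    {
        string key = readText(sourceFile) ~ "\0" ~ compilerVersion;
        foreach (flag; flags)
            key ~= "\0" ~ flag;
        return md5Of(key);
    }

Store the signature next to the object file and rebuild when it no
longer matches.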

-- 
/Jacob Carlborg