December 22, 2017
On Fri, Dec 22, 2017 at 06:21:04PM +0000, Russel Winder wrote:
> On Fri, 2017-12-22 at 09:42 -0800, H. S. Teoh via Digitalmars-d wrote: […]
> > that are a pain to manage. Yes I know dub does it "automatically", but the problem with dub is that it tries to do too much -- it wants to be a build system in addition to being a packaging system. The former is OK, I guess, even though I really wish it was more configurable in terms of how it manages local repository caches. But as a build system, I'm sorry to say that dub sucks. Or at least, its docs suck, 'cos I can't figure out how to make it do what I want. After struggling with it for about a week or two, I threw in the towel and went back to SCons.  Nowadays I only use dub for updating vibe.d via a dummy blank project.
> > 
> […]
> 
> Just to reiterate, SCons D support now has a ProgramAllAtOnce builder for those that want to use Unit-Threaded in their D codebases using SCons.

For D projects, I've been finding that Command() has been the best tool for me in terms of configuring exactly how I want things built. I used to use (early versions of) your SCons D build tools (and thanks for that!), but ultimately went back to Command() because I found it very frustrating to have my builds break because of an incompatible change in the D tooling whenever I upgrade SCons.  So until the SCons D tooling API has stabilized, I'll probably hold off for the time being.
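
For concreteness, the kind of setup I mean looks something like this minimal sketch (the file layout and dmd flags here are placeholders, not my actual build):

    # SConstruct -- sketch of a Command()-based D build.  Source layout
    # and compiler flags are illustrative only.
    import os

    env = Environment(ENV=os.environ)

    sources = Glob('source/*.d')

    # One explicit dmd invocation; SCons tracks the listed sources and
    # reruns the command whenever any of them changes.
    env.Command(
        target='myapp',
        source=sources,
        action='dmd -of$TARGET $SOURCES -Isource -g',
    )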

Also, for vibe.d projects, I've been finding the need to write my own scanner in order to pick up Diet template (*.dt) dependencies, so that builds would trigger correctly when Diet templates are changed. There is no standard way to do this, unfortunately; so far I've been scanning for `render!(.*)` lines, but this doesn't always work if `render` is instantiated with parameters generated from CTFE.  Manual hardcoding has been necessary to get this part of my dependency tree to work.
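
Roughly, the scanner amounts to something like the sketch below (the views/ directory, the .dt naming convention, and the regex are only illustrative; as I said, render! arguments that come out of CTFE slip straight through it):

    # Sketch of a Diet-template dependency scanner for SCons.  In a real
    # setup this would sit alongside the built-in D import scanner.
    import os
    import re

    render_re = re.compile(r'render!\s*\(?\s*"?([A-Za-z0-9_.]+)')

    def diet_scan(node, env, path):
        deps = []
        text = node.get_text_contents()
        for m in render_re.finditer(text):
            name = m.group(1)
            if not name.endswith('.dt'):
                name += '.dt'
            dt = os.path.join('views', name)   # vibe.d's default template dir
            if os.path.exists(dt):
                deps.append(env.File(dt))
        return deps

    diet_scanner = Scanner(function=diet_scan, skeys=['.d'])
    env = Environment(ENV=os.environ)
    env.Append(SCANNERS=diet_scanner)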

In the long term, I think an approach similar to tup will have to be adopted. O(n) dependency scanning just doesn't cut it anymore for code the size of today's large software projects. And with dynamic dependencies (e.g. CTFE-dependent imports) that are bound to happen in D code with heavy metaprogramming, there's really no sane way to manage dependencies explicitly; you really need to just instrument the compiler and record all input files it reads the way tup does.  I shouldn't be needing to write custom scanners just to accommodate CTFE-generated imports that may change again after a few more commits. It's SSOT (single source of truth) all over again: the compiler is the ultimate authority that determines which file depends on what, and having to repeat this information in your build script (or independently derive it via scanners) introduces fragility / incompleteness into your build system.
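
(For what it's worth, dmd's -deps switch already exposes part of this: the import graph as the compiler actually resolved it, including imports that only exist after mixins/CTFE have run, though I wouldn't rely on it covering every file CTFE can read. A rough sketch of harvesting it; the parsing is deliberately loose because the exact -deps output format varies between compiler versions:)

    # Sketch: ask the compiler itself which files a module pulls in, via
    # dmd's -deps switch, and collect every path it mentions.
    import os
    import re
    import subprocess

    def d_dependencies(main_source, extra_flags=()):
        # -o- runs full semantic analysis without emitting an object file.
        subprocess.run(
            ['dmd', '-o-', '-deps=deps.txt', *extra_flags, main_source],
            check=True, capture_output=True, text=True)
        paths = set()
        with open('deps.txt') as f:
            for line in f:
                # Loose parsing: keep anything in parentheses that is a file.
                for cand in re.findall(r'\(([^()]+)\)', line):
                    if os.path.isfile(cand):
                        paths.add(os.path.normpath(cand))
        return sorted(paths)

    if __name__ == '__main__':
        for p in d_dependencies('source/app.d', ['-Isource']):
            print(p)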


> Also I have the beginnings of a Dub SCons tool for using Dub as a package manager in a SCons build.
[...]

That's nice, though for now, I'm sticking with manually updating my dependencies when needed.  One thing I found annoying with dub was the sheer amount of time it spent at startup to scan all dependencies and packages and possibly downloading a whole bunch of stuff. The network latency really kills the compile-test-debug cycle time.  I know there's a switch to suppress this behaviour, but the initial dependency scanning is still pretty slow even in spite of that.  When a 1-line change requires waiting 15-20 seconds just to recompile, that really breaks my workflow.

Plus, sometimes I *don't* want anything updated -- when debugging a program, the last thing I want is for dub or the build script or whatever to decide to link in a slightly different version of a library, and suddenly I'm no longer sure if the new crash is caused by the library or my own code, or the bug may now be masked by the slightly different behaviour of an upgraded library.

I know that for people who want things done for them automatically and handed over on a silver platter, dub is great.  Unfortunately, it doesn't work for me. (But I also know that I don't represent typical usage, so take all this with a grain of salt.)


T

-- 
Маленькие детки - маленькие бедки. (Little children, little troubles.)
December 24, 2017
On Fri, 2017-12-22 at 10:39 -0800, H. S. Teoh via Digitalmars-d wrote:
> […]
> 
> For D projects, I've been finding that Command() has been the best
> tool
> for me in terms of configuring exactly how I want things built. I
> used
> to use (early versions of) your SCons D build tools (and thanks for
> that!), but ultimately went back to Command() because I found it very
> frustrating to have my builds break because of an incompatible change
> in
> the D tooling whenever I upgrade SCons.  So until the SCons D tooling
> API has stabilized, I'll probably hold off for the time being.

This is sad because without users the D support in SCons will not improve.

There have been no changes in the D support between SCons 2.6 and 3.0 other than adding ProgramAllAtOnce, so I have no idea what you found that broke. We need a test case so we can fix it for 3.0.2 or 3.1.0.

> Also, for vibe.d projects, I've been finding the need to write my own
> scanner in order to pick up Diet template (*.dt) dependencies, so
> that
> builds would trigger correctly when Diet templates are changed. There
> is
> no standard way to do this, unfortunately; so far I've been scanning
> for
> `render!(.*)` lines, but this doesn't always work if `render` is
> instantiated with parameters generated from CTFE.  Manual hardcoding
> has
> been necessary to get this part of my dependency tree to work.

Let's write a standard one, publish it via SCons_D_Experiments initially, and then put it in SCons Contrib or into the distribution.

People doing their own thing and not sharing is a good way of not getting good things into the core.

> In the long term, I think an approach similar to tup will have to be
> adopted. O(n) dependency scanning just doesn't cut it anymore for
> code
> the size of today's large software projects. And with dynamic
> dependencies (e.g. CTFE-dependent imports) that are bound to happen
> in D
> code with heavy metaprogramming, there's really no sane way to manage
> dependencies explicitly; you really need to just instrument the
> compiler
> and record all input files it reads the way tup does.  I shouldn't be
> needing to write custom scanners just to accommodate CTFE-generated
> imports that may change again after a few more commits. It's SSOT
> (single source of truth) all over again: the compiler is the ultimate
> authority that determines which file depends on what, and having to
> repeat this information in your build script (or independently derive
> it
> via scanners) introduces fragility / incompleteness into your build
> system.

Again unless we do something nothing will change.

I am not sure you can get away from some element of O(n) behaviour if a build system is to detect what is to be rebuilt in a compile-then-link system. Obviously there are ways of minimising it, cf. Tup and Ninja vs. Make and, to some extent, SCons. Tup still has a form of scan; it is just very fast due to the file system tools it uses.

So if SCons is to be abandoned for D builds, let's agree on that and get on with creating the tool that SCons and Dub are not.

[…]
> That's nice, though for now, I'm sticking with manually updating my
> dependencies when needed.  One thing I found annoying with dub was
> the
> sheer amount of time it spent at startup to scan all dependencies and
> packages and possibly downloading a whole bunch of stuff. The network
> latency really kills the compile-test-debug cycle time.  I know
> there's
> a switch to suppress this behaviour, but the initial dependency
> scanning
> is still pretty slow even in spite of that.  When a 1-line change
> requires waiting 15-20 seconds just to recompile, that really breaks
> my
> workflow.

I have been dithering over replacing the use of Dub itself with a SCons tool that works directly with the repository. Dub's build structure really isn't useful for anything other than using Dub as a build system.

Having two modes, update every time vs. update only when the developer requires it, is important. Unless a version glob is used, checking dependencies should never take long.

> Plus, sometimes I *don't* want anything updated -- when debugging a
> program, the last thing I want is for dub or the build script or
> whatever to decide to link in a slightly different version of a
> library,
> and suddenly I'm no longer sure if the new crash is caused by the
> library or my own code, or the bug may now be masked by the slightly
> different behaviour of an upgraded library.

Isn't this consequent on the Dub version specification? If a specific version is required this behaviour should not happen.

> I know that for people who want things done for them automatically
> and
> handed over on a silver platter, dub is great.  Unfortunately, it
> doesn't work for me. (But I also know that I don't represent typical
> usage, so take all this with a grain of salt.)

<panto-mode>
Oh no it isn't.
</panto-mode>

I am not a fan of Dub as a build system, but it appears to be the accepted standard, or in my view sub-standard. (Trying to develop GtkD code with Dub is a pain in the ####.)

Should the community push to ditch Make, CMake, SCons, Dub and use
Reggae (and hence Tup or Ninja)?

Not a simple question. For example CLion requires CMake, and CMake-D appears not to work, so we cannot do D in CLion. Work on D in IntelliJ IDEA is progressing but is relatively slow due to relying on volunteers. Compare Rust, which is now officially supported by JetBrains. This makes a huge difference.

The development environment is almost as important as the programming language.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk


December 24, 2017
On Sunday, 24 December 2017 at 12:09:56 UTC, Russel Winder wrote:
> I am not a fan of Dub as a build system, but it appears to be the accepted standard, or in my view sub-standard. (Trying to develop GtkD code with Dub is a pain in the ####.)
>
> Should the community push to ditch Make, CMake, SCons, Dub and use
> Reggae (and hence Tup or Ninja)?

I like SCons. Do any of the others have advantages over SCons?

The Python dependency IME does complicate things because it's not trivial to get it working on Windows. It's been too long to remember specifics, but it was an adventure, and if I've got a working D installation, why am I messing around with Python?
December 24, 2017
On Sun, 2017-12-24 at 13:27 +0000, bachmeier via Digitalmars-d wrote:
> 
[…]
> I like SCons. Do any of the others have advantages over SCons?

That is a moot point. For me SCons is the tool of choice when I am not using Meson.

> The Python dependency IME does complicate things because it's not trivial to get it working on Windows. It's been too long to remember specifics, but it was an adventure, and if I've got a working D installation, why am I messing around with Python?

Python being hard to install on Windows has been over since Python 3.4. A lot of effort went into making Python installation really easy. Python 3.6 should present no problems. I am not sure Chocolatey does as good a job as using the official installer, but as a Linux user I have no personal evidence.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk


December 24, 2017
On Friday, 15 December 2017 at 08:13:25 UTC, aberba wrote:
> I'm going to do a writeup on the state of D in Web Development, APIs and Services for 2017. I need the perspective of the community too along with my personal experience. Please help out. More details the better.
>
> 0. Since when did you or company start using D in this area?

I have been fooling around with D for web-related stuff for a year or so, but nothing terribly concrete. I've been actively migrating my RSS system to D mostly in the past month and a bit.

> 1. Do you use a framework? Which one?

I use vibe.d.

> 2. Why that approach and what would have done otherwise?

It was there. Without it, I probably would have looked at Hunt briefly and then cobbled together something based on CGI.

> 3. Which task exactly do you use D to accomplish?

https://github.com/dhasenan/pierce/

I use D for all backend code, so:

* reading feeds
* mucking about with the database
* authentication

> 4. Which (dub) packages do you use and for what purpose?

* arsd-official:dom: an excellent HTML/XML parser
* datefmt: date parsing, primarily, with a side of formatting
* pbkdf2: password hashing
* urld: to have consistent URL handling with other applications
* vibe-d-postgresql: postgres

> 5. How do you host your software code (cloud platforms,  vps,  PaaS, docker,  Openshift, kubernetes, etc)?

I have a couple VPS boxes with linode that I deploy stuff to.

PaaS providers do me a concern. Docker is

> 6. What are some constraints and problems in using D for such tasks?

vibe.d is all non-blocking IO and it's not always easy to find a library that does a thing with non-blocking IO. You can dispatch tasks to a worker thread, but that uses D's `shared` system, which imposes barriers I'm not terribly familiar with. That also doesn't give you Futures.

I believe my code still has blocking database access, at least in production. With my user volume (one for the D alpha, three for the C# version), this isn't yet a huge problem; however, I've had to fork a process for handling background tasks, since they can be long.

dub doesn't know how to dynamically link dependencies. This means my binary is 38MB and takes half a minute to copy to the server. Since I worked out how to do dynamic linking manually, I'm going to add that in, and then I'll be able to rsync everything. That should reduce my application's binary size to a megabyte or less, which will be a lot nicer.

vibe doesn't have a logging appender for std.experimental.logger, and vibe's logging system isn't terribly awesome. I wrote a vibe-compatible rolling file appender for std.experimental.logger in the end. It's kind of weird, though, that std.experimental.logger doesn't separate layout from the output type. It makes sense for some potential appenders to ignore the layout system you would give them -- like if you have an appender for some sort of structured logging API that accepts protobuf-encoded events. But most logs are just text, and if I don't like the layout, I need to write a whole logger.

> 7. What solutions do you recommend?

vibe.d isn't a bad option. It's got a lot of stuff in it. However, it might be simpler on the whole to use fastcgi.
December 28, 2017
On Sun, Dec 24, 2017 at 12:09:56PM +0000, Russel Winder wrote:
> On Fri, 2017-12-22 at 10:39 -0800, H. S. Teoh via Digitalmars-d wrote:
[...]
> > In the long term, I think an approach similar to tup will have to be adopted. O(n) dependency scanning just doesn't cut it anymore for code the size of today's large software projects. And with dynamic dependencies (e.g. CTFE-dependent imports) that are bound to happen in D code with heavy metaprogramming, there's really no sane way to manage dependencies explicitly; you really need to just instrument the compiler and record all input files it reads the way tup does. I shouldn't be needing to write custom scanners just to accommodate CTFE-generated imports that may change again after a few more commits. It's SSOT (single source of truth) all over again: the compiler is the ultimate authority that determines which file depends on what, and having to repeat this information in your build script (or independently derive it via scanners) introduces fragility / incompleteness into your build system.
> 
> Again unless we do something nothing will change.
> 
> I am not sure you can get away from some element of O(n) behaviour if a build system is to detect what is to be rebuilt in a compile-then-link system. Obviously there are ways of minimising it, cf. Tup and Ninja vs. Make and, to some extent, SCons. Tup still has a form of scan; it is just very fast due to the file system tools it uses.
> 
> So if SCons is to be abandoned for D builds, let's agree on that and get on with creating the tool that SCons and Dub are not.

OK, I may have worded things poorly here.  What I meant was that with "traditional" build systems like make or SCons, whenever you needed to rebuild the source tree, the tool has to scan the *entire* source tree in order to discover what needs to be rebuilt. I.e., it's O(N) where N is the size of the source tree.  Whereas with tup, it uses the Linux kernel's inotify mechanism to learn about which file(s) being monitored have been changed since the last invocation, so that it can scan the changed files in O(n) time where n is the number of changed files, and in the usual case, n is much smaller than N. It's still linear in terms of the size of the change, but sublinear in terms of the size of the entire source tree.

I think it should be obvious that an approach whose complexity is proportional to the size of the changeset is preferable to an approach whose complexity is proportional to the size of the entire source tree, esp.  given the large sizes of today's typical software projects.  If I modify 1 file in a project of 10,000 source files, rebuilding should not be orders of magnitude slower than if I modify 1 file in a project of 100 files.

In this sense, while SCons is far superior to make in terms of usability and reliability, its core algorithm is still inferior to tools like tup. Now, I've not actually used tup myself other than a cursory glance at how it works, so there may be other areas in which it's inferior to SCons.  But the important thing is that it gets us away from the O(N) of traditional build systems that requires scanning the entire source tree, to the O(n) that's proportional to the size of the changeset. The former approach is clearly not scalable. We ought to be able to update the dependency graph in proportion to how many nodes have changed; it should not require rebuilding the entire graph every time you invoke the build.
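
To make the contrast concrete, here is a toy sketch (it has nothing to do with how any of these tools is actually implemented): the first function is the traditional model that visits every file on every invocation, the second only looks at whatever some watcher, e.g. inotify, flagged as touched.

    import hashlib
    import os

    def file_hash(path):
        with open(path, 'rb') as f:
            return hashlib.sha1(f.read()).hexdigest()

    def full_scan(root, known_hashes):
        # make/SCons-style: walk and hash the whole tree, O(N) in its size.
        changed = []
        for dirpath, _, files in os.walk(root):
            for name in files:
                p = os.path.join(dirpath, name)
                h = file_hash(p)
                if known_hashes.get(p) != h:
                    changed.append(p)
                known_hashes[p] = h
        return changed

    def incremental_update(touched, known_hashes):
        # tup-style: only files the watcher reported, O(n) in the changeset.
        changed = []
        for p in touched:
            h = file_hash(p)
            if known_hashes.get(p) != h:
                changed.append(p)
            known_hashes[p] = h
        return changed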


> […]
> > One thing I found annoying with dub was the sheer amount of time it spent at startup to scan all dependencies and packages and possibly downloading a whole bunch of stuff. The network latency really kills the compile-test-debug cycle time.  I know there's a switch to suppress this behaviour, but the initial dependency scanning is still pretty slow even in spite of that.  When a 1-line change requires waiting 15-20 seconds just to recompile, that really breaks my workflow.
> 
> I have been dithering over replacing the use of Dub itself with a SCons tool that works directly with the repository. Dub's build structure really isn't useful for anything other than using Dub as a build system.
> 
> Having two modes, update every time vs. update only when the developer requires it, is important. Unless a version glob is used, checking dependencies should never take long.

Preferably, checking dependencies ought not to be done at all unless the developer calls for it. Network access is slow, and I find it intolerable when it's not even necessary in the first place.  Why should it need to access the network just because I changed 1 line of code and need to rebuild?


> > Plus, sometimes I *don't* want anything updated -- when debugging a program, the last thing I want is for dub or the build script or whatever to decide to link in a slightly different version of a library, and suddenly I'm no longer sure if the new crash is caused by the library or my own code, or the bug may now be masked by the slightly different behaviour of an upgraded library.
> 
> Isn't this consequent on the Dub version specification? If a specific version is required this behaviour should not happen.

The documentation does not help in this respect. The only thing I could find was a scanty description of how to invoke dub in its most basic forms, with little or no information (or hard-to-find information) on how to configure it more precisely.  Also, why should I need to hardcode a specific version of a dependent library just to suppress network access when rebuilding?! Sometimes I *do* want to have the latest libraries pulled in -- *when* I ask for it -- just not every single time I build.


[...]
> I am not a fan of Dub as a build system, but it appears to be the accepted standard, or in my view sub-standard. (Trying to develop GtkD code with Dub is a pain in the ####.)

AFAIK, the only standard that Dub is, is a packaging system for D.  I find it quite weak as a build tool.  That's the problem, it tries to do too much.  It would have been nice if it stuck to just dealing with packaging, rather than trying to do builds too, and doing it IMO rather poorly.


> Should the community push to ditch Make, CMake, SCons, Dub and use
> Reggae (and hence Tup or Ninja)?
> 
> Not a simple question. For example CLion requires CMake, and CMake-D appears not to work, so we cannot do D in CLion. Work on D in IntelliJ IDEA is progressing but is relatively slow due to relying on volunteers. Compare Rust, which is now officially supported by JetBrains. This makes a huge difference.
> 
> The development environment is almost as important as the programming language.
[...]

Honestly, I don't care to have a "standard" build system for D. A library should be able to produce a .so or .a, and have an import path, and I couldn't care less how that happens; the library could be built by a hardcoded shell script for all I care. All I should need to do in my code is to link to that .so or .a and specify -I with the right import path(s). Why should upstream libraries dictate how my code is built?!

To this end, a standard way of exporting import paths in a D library (it can be as simple as a text file in the code repo, or some script or tool akin to llvm-config or sdl-config that spits out a list of paths / libraries / etc) would go much further than trying to shoehorn everything into a single build system.


T

-- 
No! I'm not in denial!
February 01, 2018
On Thu, 2017-12-28 at 10:21 -0800, H. S. Teoh via Digitalmars-d wrote:
> 
> […]

Apologies for taking so long to get to this.

> OK, I may have worded things poorly here.  What I meant was that with
> "traditional" build systems like make or SCons, whenever you needed
> to
> rebuild the source tree, the tool has to scan the *entire* source
> tree
> in order to discover what needs to be rebuilt. I.e., it's O(N) where
> N
> is the size of the source tree.  Whereas with tup, it uses the Linux
> kernel's inotify mechanism to learn about which file(s) being
> monitored
> have been changed since the last invocation, so that it can scan the
> changed files in O(n) time where n is the number of changed files,
> and
> in the usual case, n is much smaller than N. It's still linear in
> terms
> of the size of the change, but sublinear in terms of the size of the
> entire source tree.

This I can agree with. SCons definitely has to check hashes to determine which files have changed in a "not just space change" way on the leaves of the build ADG. I am not sure what Ninja does, but yes Tup uses inotify to filter the list of touched, but not necessarily changed, files. For my projects build time generally dominates check time so I don't see much difference. Except that Ninja is way faster than Make as a backend to CMake.

> I think it should be obvious that an approach whose complexity is
> proportional to the size of the changeset is preferable to an
> approach
> whose complexity is proportional to the size of the entire source
> tree,
> esp.  given the large sizes of today's typical software projects.  If
> I
> modify 1 file in a project of 10,000 source files, rebuilding should
> not
> be orders of magnitude slower than if I modify 1 file in a project of
> 100 files.

Is it obvious, though? Complexity is not everything; wall clock time is arguably more important. As is actual build time versus preparation time. SCons does indeed have a large up-front ADG check time for large projects. I believe there is the Parts overlay on SCons for dealing with big projects. I believe the plan for later in the year is for the most useful parts of Parts to become part of the main SCons system.

> In this sense, while SCons is far superior to make in terms of
> usability
> and reliability, its core algorithm is still inferior to tools like
> tup.

However, Tup is not getting traction compared to CMake (with either a Make or, preferably, a Ninja backend – I wonder if there is a Tup backend).

> Now, I've not actually used tup myself other than a cursory glance at
> how it works, so there may be other areas in which it's inferior to
> SCons.  But the important thing is that it gets us away from the O(N)
> of
> traditional build systems that requires scanning the entire source
> tree,
> to the O(n) that's proportional to the size of the changeset. The
> former
> approach is clearly not scalable. We ought to be able to update the
> dependency graph in proportion to how many nodes have changed; it
> should
> not require rebuilding the entire graph every time you invoke the
> build.

I am not using Tup much simply because I have not started using it; I just use SCons, Meson, and, when I have to, CMake/Ninja. In the end my projects are just not big enough for me to investigate the faster build times Tup reputedly brings.

> 
> […]

> Preferably, checking dependencies ought not to be done at all unless
> the
> developer calls for it. Network access is slow, and I find it
> intolerable when it's not even necessary in the first place.  Why
> should
> it need to access the network just because I changed 1 line of code
> and
> need to rebuild?

This was the reason for Waf: split the SCons system into a configuration/set-up step and a build step, à la Autotools. CMake also does this. As does Meson. I have a preference for this way. And yet I still use SCons quite a lot!

> 
[…]
> The documentation does not help in this respect. The only thing I
> could
> find was a scanty description of how to invoke dub in its most basic
> forms, with little or no information (or hard-to-find information) on
> how to configure it more precisely.  Also, why should I need to
> hardcode
> a specific version of a dependent library just to suppress network
> access when rebuilding?! Sometimes I *do* want to have the latest
> libraries pulled in -- *when* I ask for it -- just not every single
> time
> I build.

If Dub really is to become the system for D as Cargo is for Rust, it clearly needs more people to work on it and evolve the code and the documentation. Whilst no-one does stuff, the result will be rhetorical ranting on the email lists.

> […]
> 
> AFAIK, the only standard that Dub is, is a packaging system for D.  I
> find it quite weak as a build tool.  That's the problem, it tries to
> do
> too much.  It would have been nice if it stuck to just dealing with
> packaging, rather than trying to do builds too, and doing it IMO
> rather
> poorly.

No argument from me there, except Cargo. Cargo does a surprisingly good job of being a package management and build system. Even the go command is quite good at it for Go. So I am re-assessing my old dislike of this way – I used to be a "separate package management and build, and leave build to build systems" person, I guess I still am really. However Cargo is challenging my view, where Dub currently does not.

Given the thought above, unless I and others actually get on and evolve Dub, nothing will change.

> 
[…]
> Honestly, I don't care to have a "standard" build system for D. A
> library should be able to produce a .so or .a, and have an import
> path,
> and I couldn't care less how that happens; the library could be built
> by
> a hardcoded shell script for all I care. All I should need to do in
> my
> code is to link to that .so or .a and specify -I with the right
> import
> path(s). Why should upstream libraries dictate how my code is built?!

This last point is one of the biggest problems with the current Dub system, and a reason many people have no intention of using Dub for build.

Your earlier points in this paragraph should be turned into issues on the Dub source repository, and indeed the last one as well. And then we should create pull requests.

I actually think a standard way is a good thing, but that there should be other ones as well. SCons, CMake, Meson, etc. all need ways of building D for those who do not want to use the standard way. Seems reasonable to me. However SCons and Meson support for D is not yet as good as it could be, and last time I tried, CMake-D didn't work for me.

> To this end, a standard way of exporting import paths in a D library
> (it
> can be as simple as a text file in the code repo, or some script or
> tool
> akin to llvm-config or sdl-config that spits out a list of paths /
> libraries / etc) would go much further than trying to shoehorn
> everything into a single build system.

So let's do it rather than just talk about it?

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk


February 02, 2018
On Thu, Feb 01, 2018 at 12:56:28PM +0000, Russel Winder wrote: [...]
> Apologies for taking so long to get to this.

Not a problem, you and I are both busy, and it's perfectly understandable that we can't respond to things instantly.


> On Thu, 2017-12-28 at 10:21 -0800, H. S. Teoh via Digitalmars-d wrote:
[...]
> > OK, I may have worded things poorly here.  What I meant was that with "traditional" build systems like make or SCons, whenever you needed to rebuild the source tree, the tool has to scan the *entire* source tree in order to discover what needs to be rebuilt. I.e., it's O(N) where N is the size of the source tree.  Whereas with tup, it uses the Linux kernel's inotify mechanism to learn about which file(s) being monitored have been changed since the last invocation, so that it can scan the changed files in O(n) time where n is the number of changed files, and in the usual case, n is much smaller than N. It's still linear in terms of the size of the change, but sublinear in terms of the size of the entire source tree.
> 
> This I can agree with. SCons definitely has to check hashes to determine which files have changed in a "not just space change" way on the leaves of the build ADG. I am not sure what Ninja does, but yes Tup uses inotify to filter the list of touched, but not necessarily changed, files. For my projects build time generally dominates check time so I don't see much difference. Except that Ninja is way faster than Make as a backend to CMake.

In small projects like my personal ones, SCons still does a fast enough job that I don't really care about the difference between O(N) and O(n). So I still use SCons for them -- SCons does have a really nice interface and is generally pleasant to work with, so I don't feel an immediate need to improve the build system.

But in a large project, like the one I work with at my job, containing 500,000 source files (not counting data files and the like that also need to be processed by the build system), the difference can become very pronounced.  In our case, we use make, which doesn't scan file contents, and recursive make at that, so the initial scanning pause is not noticeable. However, this perceived speed comes at the heavy cost of reliability. On more occasions than I'd wish anyone else to experience, I've had problems with faulty software builds caused not by actual bugs in the code, but merely by make not rebuilding something when it should be, or not cleaning up stray stale files when it should, causing stale object files to be linked instead of the real objects.  It has simply become an accepted fact of life to `make clean; make`. Well actually, it's even worse than that -- our `make clean` does *not* clean everything that might potentially be a problem, so I have for the last ≥5 years resorted to a script that manually deletes everything that isn't under version control.  (What makes it even sadder is that the version control server is overloaded and it's faster to delete files locally than to checkout a fresh copy of the workspace, which is essentially what my script amounts to.)

At one point I was on the verge of proposing SCons as a make replacement, but balked when initial research into the prospect showed that SCons consistently had performance issues with needing to scan the entire source tree before it begins building.  That, coupled with general resistance to change in your average programmer workforce and their general unfamiliarity with make alternatives, made me back off from making the proposal.

Had tup been around at that time, it would likely have turned the tables with its killer combo of sublinear (relative to workspace size) scanning and reliability.  That's why I think that any modern build system that's going to last into the future must have these two features, at a minimum.


> > I think it should be obvious that an approach whose complexity is proportional to the size of the changeset is preferable to an approach whose complexity is proportional to the size of the entire source tree, esp.  given the large sizes of today's typical software projects.  If I modify 1 file in a project of 10,000 source files, rebuilding should not be orders of magnitude slower than if I modify 1 file in a project of 100 files.
> 
> Is it obvious, though? Complexity is not everything; wall clock time is arguably more important.

Using inotify() to update your dependency tree has basically zero wall clock time because it's done in the background. You can't beat that with anything that requires scanning upon invocation of the build tool.


> As is actual build time versus preparation time. SCons does indeed have a large up-front ADG check time for large projects. I believe there is the Parts overlay on SCons for dealing with big projects. I believe the plan for later in the year is for the most useful parts of Parts to become part of the main SCons system.

But still, the fundamental design limitation remains: scanning time is proportional to workspace size, as opposed to being proportional to changeset size.  Judging by current trends in software sizes, this issue is only going to become increasingly important.


> > In this sense, while SCons is far superior to make in terms of usability and reliability, its core algorithm is still inferior to tools like tup.
> 
> However, Tup is not getting traction compared to CMake (with either a Make or, preferably, a Ninja backend – I wonder if there is a Tup backend).

I mentioned Tup as an example of a superior build algorithm to the decades-old make model. I'm not partial to Tup itself, and it doesn't concern me whether or not it's gaining traction.  What I'm more concerned with is whether the underlying algorithm of (insert whatever build system you prefer here) is going to remain relevant going forward.


> > Now, I've not actually used tup myself other than a cursory glance at how it works, so there may be other areas in which it's inferior to SCons.  But the important thing is that it gets us away from the O(N) of traditional build systems that requires scanning the entire source tree, to the O(n) that's proportional to the size of the changeset. The former approach is clearly not scalable. We ought to be able to update the dependency graph in proportion to how many nodes have changed; it should not require rebuilding the entire graph every time you invoke the build.
> 
> I am not using Tup much simply because I have not started using it; I just use SCons, Meson, and, when I have to, CMake/Ninja. In the end my projects are just not big enough for me to investigate the faster build times Tup reputedly brings.

Given its simplicity and lack of historical baggage, I'm expecting Tup will be pretty fast, if not on par, with existing make-based designs, when it comes to small to medium projects.  But for large projects of today's scale, I'm expecting Tup is going to outstrip its competitors by orders of magnitude, maybe more, *while still maintaining build reliability*. (It *may* be possible to beat Tup in speed if you sacrifice reliability, but I'm not considering that option as viable.) Tup is getting pretty close to doing the absolute minimum work you need to do in order for a code change to be reflected in the build products. Any less than that, and you start risking unreliable builds (i.e. outdated build products are not rebuilt).


[...]
> > Preferably, checking dependencies ought not to be done at all unless the developer calls for it. Network access is slow, and I find it intolerable when it's not even necessary in the first place.  Why should it need to access the network just because I changed 1 line of code and need to rebuild?
> 
> This was the reason for Waf: split the SCons system into a configuration/set-up step and a build step, à la Autotools. CMake also does this. As does Meson. I have a preference for this way. And yet I still use SCons quite a lot!

IMO, if a build system relies on network access as part of its dependency graph, then something has gone horribly wrong. (Aside from NFS and the like, of course.)  Updating libraries is IMO not the build system's job; that's what a package manager is supposed to be doing. The build system should be concerned solely with producing build products, given the current state of the source tree.  It has no business going about *updating* the source tree from the network willy-nilly just because it can.  That's simply an unworkable model -- I could be in the middle of debugging something, and then I rebuild and suddenly the bug can no longer be reproduced because the build tool has "helpfully" replaced one of my libraries with a new version and now the location of the bug has shifted, putting my hours' worth of work in narrowing down the locus of the bug to waste.


[...]
> > The documentation does not help in this respect. The only thing I could find was a scanty description of how to invoke dub in its most basic forms, with little or no information (or hard-to-find information) on how to configure it more precisely.  Also, why should I need to hardcode a specific version of a dependent library just to suppress network access when rebuilding?! Sometimes I *do* want to have the latest libraries pulled in -- *when* I ask for it -- just not every single time I build.
> 
> If Dub really is to become the system for D as Cargo is for Rust, it clearly needs more people to work on it and evolve the code and the documentation. Whilst no-one does stuff, the result will be rhetorical ranting on the email lists.

The problem is that I have fundamental disagreements with dub's design, and therefore find it difficult to bring myself to work on its code, since my first inclination would be to rip its guts out and rewrite from scratch, which I don't think Sönke will take kindly to, much less merge into the official repo.  I suppose if I were pressed I could bring myself to contribute to its documentation, but right now, I've switched back to SCons for my builds and basically confined dub to a dummy empty project that fetches and builds my dependent libraries and nothing else. This setup works well for me, so I don't really have much motivation to improve dub's docs or otherwise improve dub -- I won't be using it very much after all.


[...]
> > AFAIK, the only standard that Dub is, is a packaging system for D. I find it quite weak as a build tool.  That's the problem, it tries to do too much.  It would have been nice if it stuck to just dealing with packaging, rather than trying to do builds too, and doing it IMO rather poorly.
> 
> No argument from me there, except Cargo. Cargo does a surprisingly good job of being a package management and build system. Even the go command is quite good at it for Go. So I am re-assessing my old dislike of this way – I used to be a "separate package management and build, and leave build to build systems" person, I guess I still am really. However Cargo is challenging my view, where Dub currently does not.
[...]

Then perhaps you should submit PRs to dub to make it more Cargo-like. ;-)


[...]
> > Honestly, I don't care to have a "standard" build system for D. A library should be able to produce a .so or .a, and have an import path, and I couldn't care less how that happens; the library could be built by a hardcoded shell script for all I care. All I should need to do in my code is to link to that .so or .a and specify -I with the right import path(s). Why should upstream libraries dictate how my code is built?!
> 
> This last point is one of the biggest problems with the current Dub system, and a reason many people have no intention of using Dub for build.
> 
> Your earlier points in this paragraph should be turned into issues on the Dub source repository, and indeed the last one as well. And then we should create pull requests.

Good idea.  Though I can't see this changing without rather intrusive changes to the way dub works, so I'm not sure if Sönke would be open to this sort of change.  But submitting issues to that effect wouldn't hurt.


> I actually think a standard way is a good thing, but that there should be other ones as well. SCons, CMake, Meson, etc. all need ways of building D for those who do not want to use the standard way. Seems reasonable to me. However SCons and Meson support for D is not yet as good as it could be, and last time I tried, CMake-D didn't work for me.

A standard way to build would be fine if we were starting out from scratch, in a brand new ecosystem, like Rust.  The problem is, D has always supported C/C++-style builds since day 1, and D codebases have been around for far longer than dub has been, and have become entrenched in the way they are built.  So for dub (or any other packaging / build system, really) to come along and be gratuitously incompatible with how existing build systems work, is a big showstopper, and gives off the impression of being a walled garden -- either you embrace it fully to the exclusion of all else, or you're left out in the cold.


> > To this end, a standard way of exporting import paths in a D library (it can be as simple as a text file in the code repo, or some script or tool akin to llvm-config or sdl-config that spits out a list of paths / libraries / etc) would go much further than trying to shoehorn everything into a single build system.
> 
> So let's do it rather than just talk about it?
[...]

Sure.  Since this information is ostensibly already present in a dub project (encoded somewhere in dub.json or dub.sdl), it seems to make little sense to introduce yet another new thing that nobody implements. So a first step might be to enhance dub with a command-line command to output import paths / linker paths in a machine-readable format.  Then existing dub projects can be immediately made accessible to external build systems by having said build systems invoke:

	dub config	# or whatever verb is chosen for this purpose

Perhaps, to eliminate the need for existing build scripts to need to parse JSON or something like that, we could provide finer-grained subcommands, like:

	dub config import-paths
	dub config linker-paths
	dub config dynamic-library-paths

and it would output, respectively, something along the lines of:

	/path/to/somelibrary/src
	/path/to/someotherlib/src
	/path/to/yetanotherlib/submodule1/import
	/path/to/yetanotherlib/submodule2/import

	/path/to/somelibrary/generated/os/64/lib
	/path/to/someotherlib/generated/lib
	/path/to/yetanotherlib/generated/sub/module1/out
	/path/to/yetanotherlib/generated/sub/module2/out

	-lsomelibrary
	-lsomeotherlib
	-lyetanotherlib


Not 100% sure what to do with existing non-dub projects. Perhaps a text file in some standard location.
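
For illustration, here is a rough sketch of how an external SCons build might consume this from an SConstruct, assuming the hypothetical `dub config` verb above (to be clear, that verb does not exist today; the flag handling below just mirrors the example output):

    # Sketch only: consumes the *hypothetical* `dub config` verb proposed
    # above; output is assumed to be one path or flag per line.
    import os
    import subprocess

    def dub_config(kind):
        out = subprocess.run(['dub', 'config', kind],
                             capture_output=True, text=True, check=True)
        return [ln.strip() for ln in out.stdout.splitlines() if ln.strip()]

    env = Environment(ENV=os.environ)

    # dmd takes import paths via -I and forwards flags to the linker via -L.
    flags  = ['-I' + p for p in dub_config('import-paths')]
    flags += ['-L-L' + p for p in dub_config('linker-paths')]
    flags += ['-L' + l for l in dub_config('dynamic-library-paths')]

    env.Command('myapp', Glob('source/*.d'),
                'dmd -of$TARGET $SOURCES ' + ' '.join(flags))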


T

-- 
In theory, software is implemented according to the design that has been carefully worked out beforehand. In practice, design documents are written after the fact to describe the sorry mess that has gone on before.