June 12, 2017
p.s.: like, i've seen one Fallout 2 recreation project where they're doing translation of screen coords to hex coords not with two simple formulas, but by looping over 40000 (200x200) hex objects (objects!), and calling a *virtual* method to ask if a hex contains such coords. and you know what? *nobody* noticed any slowdown. 'cause there is none.

it took me ~5 mins to visualise the skewed coordinate axes and derive the formulas. it took 'em ~30 seconds to write a loop (i guess). and their work is actually *better* than mine: they had ~4.5 mins free, while i was doing useless things.
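
the two formulas are just the standard skewed-axes conversion, btw; roughly like this (illustrative python, pointy-top axial layout assumed; the names and layout are my guesses, not that project's actual code):

```python
import math

def pixel_to_hex(x, y, size):
    # skewed axes: pixel coords -> fractional axial hex coords
    # (standard pointy-top axial layout, hex "radius" = size)
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    return hex_round(q, r)

def hex_round(q, r):
    # snap fractional axial coords to the nearest hex via cube rounding:
    # round all three cube coords, then fix the one with the largest
    # rounding error so that q + r + s == 0 still holds
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return rq, rr
```

two formulas plus a rounding step, and no 40000 virtual calls in sight.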
June 12, 2017
On Mon, Jun 12, 2017 at 09:53:42PM +0300, ketmar via Digitalmars-d wrote:
> H. S. Teoh wrote:
> 
> > Don't forget tup, and others inspired by it, which use modern OS
> > features to reduce the cost of determining what to build to an O(1)
> > database lookup rather than an O(n) whole-source tree scan.
> 
> added complexity for nothing. disks are gradually being replaced with ssd, the amount of RAM allows caching a lot, and CPUs are faster and faster (and have more cores).

CPU speed has nothing to do with this, disk roundtrip is always going to be the bottleneck, even with SSD.


> i still have a HDD, 8GB of RAM, and a 32-bit system. and it was even worse some time ago. in the last ~8 years i was using my k8jam for various projects: from several kb to multimegabytes of code, with thousands of files and a lot of subdirs. source tree scanning and dependency resolving NEVER was a significant factor.

The problem is not building the entire source tree -- that's not the use case we're targeting.  We're talking about incremental builds, which aren't really relevant to make because in make they are inherently unreliable.

Several KB is kids' play.  I work with a source tree with at least 800MB of source code (not counting auxiliary scripts and other resources that must be compiled as part of the build process), producing a 130MB archive of *compressed* executables, libraries, and other resources. The only way I can even maintain my sanity is to `cd deeply/nested/subdir; make` to compile the specific target I'm working on. Scanning dependencies takes a *long* time, and can mean the difference between testing a 1-line change in 1 second vs. 5 *minutes* while the thing scans 800MB worth of source code just to recompile 2 source files.

It's either that, or scan the timestamps instead of the file contents, like make does, which is fast, but then you get the lovely side effect of unreliable builds and non-existent bugs that appear in the executable but have no representation in the source code.  *Then* you have to ditch incremental builds altogether and do a wholesale 'make clean; make', so it's time to take a nap for 20 mins while it recompiles the entire source tree just to account for a 1-line bug fix.


> and i really mean it: it was less than a second even for a huge projects, where linking alone took long enough that i could get coffee and cigarette.  ;-)
[...]

Your project is not huge enough. :-D


T

-- 
Do not reason with the unreasonable; you lose by definition.
June 12, 2017
On 2017-06-12 09:00, Jonathan M Davis via Digitalmars-d wrote:

> It's true that we don't have to constantly edit the makefiles, so it's not a
> constant pain point, but it does come up every time we add or remove any
> modules, and the pain in dealing with the makefiles and the time wasted with
> them adds up over time.

For some reason, most of the PRs I've made to DMD required modifying the makefiles, most recently [1]. And not just the makefiles: now there's a Visual Studio project that needs to be kept up to date as well.

[1] https://github.com/dlang/dmd/pull/6837

-- 
/Jacob Carlborg
June 12, 2017
H. S. Teoh wrote:

>> and i really mean it: it was less than a second even for a huge
>> projects, where linking alone took long enough that i could get coffee
>> and cigarette.  ;-)
> [...]
>
> Your project is not huge enough. :-D

~20MB, >2000 source files. for *this* it was something like 0.5-3 seconds (it obviously oscillates). and of course, i'm not talking about full rebuilds. this is *all* the time k8jam spent before invoking compiler/linker.

and for pathological 800MB use cases... don't do that. you obviously don't need to have this thing as one huge project (althru i'm sure that k8jam can do it).

k8jam can use timestamps and md5 sums to detect changes (althru i'm usually using only timestamps, and had zero problems with it ever), and it can optionally cache gathered info.
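
the check itself is trivial, something like this (hypothetical python sketch, not k8jam's actual code; `file_changed` and the cache layout are made up for illustration):

```python
import hashlib
import os

def file_changed(path, cache):
    # cheap check first: if the cached mtime matches, don't even
    # open the file
    mtime = os.stat(path).st_mtime
    cached = cache.get(path)
    if cached is not None and cached[0] == mtime:
        return False
    # timestamp differs (or file is new): fall back to hashing the
    # contents, so a touched-but-unchanged file won't trigger a rebuild
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    changed = cached is None or cached[1] != digest
    cache[path] = (mtime, digest)
    return changed
```

the `cache` dict is what gets optionally persisted between runs.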

note that even for a small "helloworld" C project, k8jam also checks *all* the standard libc include files which are brought into the project even by a simple `#include <stdlib.h>`! and i never bothered to optimize this, 'cause it takes no time anyway.
June 12, 2017
p.s.: btw, k8jam is not a "make wrapper", it is a self-contained build system, in one ~100 kb binary. ;-)
June 12, 2017
On Mon, Jun 12, 2017 at 10:41:13PM +0300, ketmar via Digitalmars-d wrote: [...]
> ~20MB, >2000 source files.
[...]

Toy project. :-D ;-)  The project I work on has more than 78,000 files, of which more than 50,000 are source code.


> and for pathological 800MB use cases... don't do that. you obviously don't need to have this thing as one huge project (althru i'm sure that k8jam can do it).
[...]

Unfortunately, I don't get to make this decision; I'm only an employee.

Besides, putting everything in an 800MB source tree is actually necessary because this is software for an embedded system -- we have to basically build the entire OS along with all its utilities and other application software that will run on it.


T

-- 
ASCII stupid question, getty stupid ANSI.
June 12, 2017
On Monday, 12 June 2017 at 20:04:22 UTC, H. S. Teoh wrote:
> On Mon, Jun 12, 2017 at 10:41:13PM +0300, ketmar via Digitalmars-d wrote: [...]
>
> we have to basically build the entire OS along with all its utilities and other application software that will run on it.


... and tup can do it [1]... ;-P

/P

[1] http://gittup.org/gittup/
June 12, 2017
H. S. Teoh wrote:

i'm pretty sure that i *don't* want to know more. ;-)
June 13, 2017
On Sunday, 11 June 2017 at 19:17:36 UTC, Andrei Alexandrescu wrote:
> Phobos' posix.mak offers the ability to only run unittests for one module:
>
> make std/range/primitives.test BUILD=debug -j8
>
> ... or package:
>
> make std/range.test BUILD=debug -j8
>
> It runs module tests in parallel and everything. This is definitely awesome. But say I misspell things by using a dot instead of the slash:
>
> make std.range.test BUILD=debug -j8
>
> Instead of an error, I get a no-op result that looks like success. How can that situation be converted to an error?
>
>
> Thanks,
>
> Andrei

I've shared this same frustration.  I once took a stab at creating a "DMake" system, where you could use standard MAKE syntax, or D code side by side.  I never finished it but you can find the concept here.  If there is interest I'd be willing to finish it.

https://github.com/marler8997/dmake

June 13, 2017
On Monday, 12 June 2017 at 07:00:46 UTC, Jonathan M Davis wrote:
> On Monday, June 12, 2017 06:34:31 Sebastien Alaiwan via Digitalmars-d wrote:
>> On Monday, 12 June 2017 at 06:30:16 UTC, ketmar wrote:
>> > Jonathan M Davis wrote:
>> >> It's certainly a pain to edit the makefiles though
>> >
>> > and don't forget those Great Copying Lists to copy modules. forgetting to include a module in one of the lists has happened before, not once or twice...
>>
>> I don't get it, could you please show an example?
>
> posix.mak is a lot better than it used to be, but with win{32,64}.mak, you have to list the modules all over the place. So, adding or removing a module becomes a royal pain, and it's very easy to screw up. Ideally, we'd just list the modules once in one file that was then used across all of the platforms rather than having to edit several files every time we add or remove anything. And the fact that we're using make for all of this makes that difficult if not impossible (especially with the very limited make that we're using on Windows).

Are you implying that we are currently keeping compatibility with NMAKE (the 'make' from MS)?

GNU make's inclusion mechanism makes it possible, and easy, to share a list of modules between makefiles.
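
For example, the shared list can live in one file that every makefile includes (hypothetical file and variable names, not the actual dmd/Phobos layout):

```make
# modules.mak -- the single place where the module list is maintained
MODULES := std/range std/algorithm std/conv

# posix.mak (and any other GNU makefile) just pulls the list in:
include modules.mak

SOURCES := $(addsuffix .d,$(MODULES))
```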

Before switching to a fancy BS, we might benefit from learning to fully take advantage of the one we currently have!