Thread overview
I like dlang but I don't like dub
Mar 18, 2022
Alain De Vos
Mar 18, 2022
Cym13
Mar 18, 2022
Tobias Pankrath
Mar 18, 2022
H. S. Teoh
Mar 18, 2022
Ali Çehreli
Mar 18, 2022
H. S. Teoh
Mar 19, 2022
mee6
Mar 21, 2022
Alexandru Ermicioi
Mar 21, 2022
Tobias Pankrath
Mar 20, 2022
Adam D Ruppe
Mar 20, 2022
Ali Çehreli
Mar 21, 2022
H. S. Teoh
Mar 22, 2022
IGotD-
Mar 22, 2022
H. S. Teoh
Mar 19, 2022
Guillaume Piolat
Mar 21, 2022
Dadoum
Mar 21, 2022
Tobias Pankrath
Mar 21, 2022
rikki cattermole
Mar 21, 2022
Marcone
Mar 22, 2022
AnimusPEXUS
Mar 22, 2022
Marcone
Mar 22, 2022
Mike Parker
Mar 22, 2022
Adam D Ruppe
March 18, 2022

Dlang includes some good ideas.
But dub pulls in so much stuff. Too much for me.
I like things which are clean, lean, little, small.
But when I use dub it links with so many libraries.
Are they really needed?
And how does it compare to Python's pip?
Feel free to elaborate.

March 18, 2022

On Friday, 18 March 2022 at 04:13:36 UTC, Alain De Vos wrote:

> Dlang includes some good ideas.
> But dub pulls in so much stuff. Too much for me.
> I like things which are clean, lean, little, small.
> But when I use dub it links with so many libraries.
> Are they really needed?
> And how does it compare to Python's pip?
> Feel free to elaborate.

Long story short, dub isn't needed. If you prefer pulling dependencies and compiling them by hand nothing is stopping you.

As for comparison to pip, I'd say that dub compares favourably actually. Yes, it does do more than pip, and that used to annoy me. But if you look at it from the stance of a user it makes sense: when you pull dependencies or a package using pip you expect to be able to run them immediately. Python isn't a compiled language, but D is, and to get these packages and dependencies running immediately it needs to do more than pip: download dependencies, manage their versions and compile them. This last part is the reason for most of the added complexity in dub, IMHO.

March 18, 2022

On Friday, 18 March 2022 at 04:13:36 UTC, Alain De Vos wrote:

> Dlang includes some good ideas.
> But dub pulls in so much stuff. Too much for me.
> I like things which are clean, lean, little, small.
> But when I use dub it links with so many libraries.
> Are they really needed?
> And how does it compare to Python's pip?
> Feel free to elaborate.

Dub is fantastic in some places, e.g. if you just need to execute something from code.dlang.org via dub run, and single-file packages (https://dub.pm/advanced_usage) are great for writing small command-line utilities with dependencies.
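For illustration, here is a minimal sketch of such a single-file package (the package name, the dependency and its version are placeholders, and the arsd.dom calls are only meant as an example): the embedded dub.sdl comment declares the dependency, and dub fetches, builds and runs everything in one step.

/+ dub.sdl:
    name "titleprint"
    dependency "arsd-official:dom" version="~>10.9"
+/
import std.stdio;
import arsd.dom;   // fetched and built by dub automatically

void main()
{
    // Parse a tiny HTML snippet and print its <title>.
    auto doc = new Document("<html><head><title>hello dub</title></head></html>");
    writeln(doc.title);
}

Running it is just dub run --single titleprint.d; no dub.json and no project directory are needed.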

I don't like it as a build system, and it is notoriously hard to integrate into existing build systems. You can look at meson (which had some D-related bug fixes recently) or reggae for that. Or just use dmd -i as long as compile times are low enough.
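For the dmd -i route, a toy sketch of what that looks like (file and function names are made up): -i tells dmd to also compile whatever modules the source imports, found via the import path, so a small multi-module program builds with a single command and no build system at all.

// app.d
import helper;   // found as helper.d next to app.d

void main()
{
    import std.stdio : writeln;
    writeln(greet("dub-less build"));
}

// helper.d (a separate file)
module helper;

string greet(string who)
{
    return "hello, " ~ who;
}

Compiling is just dmd -i app.d: dmd picks up helper.d automatically (druntime and Phobos modules are excluded from -i by default).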

March 18, 2022
On Fri, Mar 18, 2022 at 04:13:36AM +0000, Alain De Vos via Digitalmars-d-learn wrote:
> Dlang includes some good ideas.
> But dub pulls in so much stuff. Too much for me.
> I like things which are clean, lean, little, small.
> But when I use dub it links with so many libraries.
> Are they really needed?
[...]

As a package manager, dub is OK, it does its job.

As a build system, I find dub simply isn't good enough for my use cases. Its build model is far too simplistic, and it does not integrate well with external build systems (especially in projects involving multiple programming languages). I rather prefer SCons for my general build needs.

As far as dub pulling in too many libraries: IMNSHO this is a malaise of modern software in general. (No) thanks to the code reuse mantra, nobody seems satisfied until they refactor every common function into its own package, and everything depends on everything else, so doing something trivial like displaying a static webpage with vibe.d pulls in 25 packages just so it can be built.

I much rather prefer Adam's arsd libs[1], where you can literally just copy the module into your own workspace (they are almost all standalone single-file modules, except for a small number of exceptions) and just build away. No hairy recursive dependencies to worry about, everything you need is encapsulated in a single file.  That's the kind of dependency philosophy I subscribe to.  The dependency graph of a project should not be more than 2 levels deep (preferably just 1). You shouldn't have to download half the world just to build a hello world program. And you definitely shouldn't need to solve an NP-complete problem[2] just so you can build your code.

[1] https://github.com/adamdruppe/arsd/
[2] https://research.swtch.com/version-sat - Dependency hell is NP-complete.


T

-- 
It is of the new things that men tire --- of fashions and proposals and improvements and change. It is the old things that startle and intoxicate. It is the old things that are young. -- G.K. Chesterton
March 18, 2022
tldr; I am talking on a soapbox with a big question mark hovering over my head: Why can't I accept pulling in dependencies automatically?

On 3/18/22 07:48, H. S. Teoh wrote:

> As a package manager, dub is OK, it does its job.

As a long-time part of the D community, I am ashamed to admit that I don't use dub. I am ashamed because there is no particular reason, or my reasons may not be rational.

> As a build system

I have seen and used a number of build systems that were started in response to make's shortcomings and ended up with their own shortcomings. Some of them were actually program code that teams would write to build their system. As in steps "compile these, then do these". What? My mind must have been so tainted by the beauty of make that writing build steps in a build tool strikes me as unbelievable... But it happened. I don't remember its name but it was in Python. You would modify Python code to build your programs. (?)

I am well aware of make's many shortcomings but love its declarative style where things happen automatically. That's one smart program there. A colleague loves Bazel and is playing with it. Fingers crossed...

> I much rather prefer Adam's arsd libs[1], where you can literally just
> copy the module into your own workspace (they are almost all standalone
> single-file modules

That sounds great but aren't there common needs of those modules to share code from common modules?

It is ironic that making packages as small as possible reduces the chance that any one module has dependencies, while at the same time increasing the total number of dependencies.

> The dependency graph of a project
> should not be more than 2 levels deep (preferably just 1).

I am fortunate that my programs are command line tools and libraries that so far depended only on system libraries. The only outside dependency is cmake-d to plug into our build system. (I don't understand or agree with all of cmake-d but things are in an acceptable balance at the moment.) The only system tool I lately started using is ssh. (It's a topic for another time but my program copies itself to the remote host over ssh to work as a pair of client and server.)

> You shouldn't have to download half the world

The first time I learned about pulling in dependencies terrified me. (This is the part I realize I am very different from most other programmers.) I am still terrified that my dependency system will pull in a tree of code that I have no idea what it is doing. Has it been modified to be malicious overnight? I thought it was possible. The following story is an example of exactly what I was terrified about:


https://medium.com/hackernoon/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5

Despite such risks many projects just pull in code. (?) What am I missing?

I heard about a team at a very high-profile company actually reviewing such dependencies before accepting them to the code base. But reviewing them only at acceptance time! Once the dependency is accepted, the projects would automatically pull in all unreviewed changes and run potentially malicious code on your computer. I am still trying to understand where I went wrong. I simply cannot understand this. (I want to believe they changed their policy and they don't pull in automatically anymore.)

When I (had to) use Go for a year about 4 years ago, it was the same: The project failed to build one morning because there was an API change on one of the dependencies. O... K... They fixed it in a couple of hours but still... Yes, the project should probably have depended on a particular version but then weren't we interested in bug fixes or added functionality? Why should we have decided to hold on to version 1.2.3 instead of 1.3.4? Should teams follow their many dependencies before updating? Maybe that's the part I am missing...

Thanks for listening... Boo hoo... Why am I like this? :)

Ali

March 18, 2022
On Fri, Mar 18, 2022 at 11:16:51AM -0700, Ali Çehreli via Digitalmars-d-learn wrote:
> tldr; I am talking on a soapbox with a big question mark hovering over my head: Why can't I accept pulling in dependencies automatically?

Because it's a bad idea for your code to depend on some external resource owned by some anonymous personality somewhere out there on the 'Net that isn't under your control.


> On 3/18/22 07:48, H. S. Teoh wrote:
> 
> > As a package manager, dub is OK, it does its job.
> 
> As a long-time part of the D community, I am ashamed to admit that I don't use dub. I am ashamed because there is no particular reason, or my reasons may not be rational.

I have only used dub once -- for an experimental vibe.d project -- and even then only with a dummy empty project whose sole purpose was to pull in vibe.d (the real code is compiled by a different build system).

And I'm not even ashamed to admit it. :-P


> > As a build system
> 
> I have seen and used a number of build systems that were started in response to make's shortcomings and ended up with their own shortcomings. Some of them were actually program code that teams would write to build their system. As in steps "compile these, then do these". What? My mind must have been so tainted by the beauty of make that writing build steps in a build tool strikes me as unbelievable... But it happened. I don't remember its name but it was in Python. You would modify Python code to build your programs. (?)

Maybe you're referring to SCons?  I love SCons... not because it's Python, but because it's mostly declarative (the Python calls don't actually build anything immediately -- they register build actions with the build engine and are executed later by an opaque scheduler). The procedural part is really for things like creating lists of files and such (though for the most common tasks there are already basically-declarative functions available for use), or for those occasions where the system simply doesn't have the means to express what you want to do, and you need to invent your own build recipe and plug it in.


> I am well aware of make's many shortcomings but love its declarative style where things happen automatically. That's one smart program there. A colleague loves Bazel and is playing with it. Fingers crossed...

Make in its most basic incarnation was on the right path.  What came after, however, was a gigantic mess. The macro system, for example, which leads to spaghetti code of the C #ifdef-hell kind.  Just look at dmd/druntime/phobos' makefiles sometime, and see if you can figure out what exactly it's trying to do, and how.

There are also implementation issues, the worst of which is non-reproducibility: running `make` after making some changes gives ZERO guarantees about the consistency of what happens afterwards. It *may* just work, or it may silently link in stale binaries from previous builds, replacing some symbols with obsolete versions and leading to heisenbugs that exist in your executable but do not exist in your code.  (I'm not making this up; I have seen this with my own eyes in my day job on multiple occasions.)

The usual bludgeon-solution to this is `make clean; make`, which defeats the whole purpose of having a build system in the first place (just write a shell script to recompile everything from scratch, every time). Not to mention that `clean` isn't a built-in rule, and I've encountered far too many projects where `make clean` doesn't *really* clean everything thoroughly. Lately I've been resorting to `git clean -dfx` as a nuke-an-ant solution to this persistent problem. (Warning: do NOT run the above git command unless you know what you're doing. :-P)


> > I much rather prefer Adam's arsd libs[1], where you can literally just copy the module into your own workspace (they are almost all standalone single-file modules
> 
> That sounds great but aren't there common needs of those modules to share code from common modules?

Yes and no. The dependencies aren't zero, to be sure.  But Adam also doesn't take code reuse to the extreme, in that if some utility function can be written in 2-3 lines, there's really no harm repeating it across modules.  Introducing a new module just to reuse 2-3 lines of code is the kind of emperor's-clothes philosophy that leads to Dependency Hell.

Unfortunately, since the late 70's/early 80's code reuse has become the sacred cow of computer science curriculums, and just about everybody has been so indoctrinated that they would not dare copy-n-paste a 2-3 line function for fear that the Reuse Cops would come knocking on their door at night.


> It is ironic that making packages as small as possible reduces the chance that any one module has dependencies, while at the same time increasing the total number of dependencies.

IMNSHO, when the global dependency graph becomes non-trivial (e.g., NP-complete Dependency Hell), that's a sign that you've partitioned your code wrong.  Dependencies should be simple, i.e., more-or-less like a tree, without diamond dependencies or conflicting dependencies of the kind that makes resolving dependencies NP-complete.

The one-module-per-dependency thing about Adam's arsd is an ideal that isn't always attainable. But the point is that one ought to strive in the direction of fewer recursive dependencies rather than more.  When importing a single Go or Python module triggers the recursive installation of 50+ modules, 45 of which I have no idea why they're needed, that's a sign that something has gone horribly, horribly wrong with the whole thing; we're losing sight of the forest for the trees. That way be NP-complete dragons.


> > The dependency graph of a project should not be more than 2 levels
> > deep (preferably just 1).
> 
> I am fortunate that my programs are command line tools and libraries that so far depended only on system libraries. The only outside dependency is cmake-d to plug into our build system. (I don't understand or agree with all of cmake-d but things are in an acceptable balance at the moment.) The only system tool I lately started using is ssh. (It's a topic for another time but my program copies itself to the remote host over ssh to work as a pair of client and server.)

I live and breathe ssh. :-D  I cannot imagine getting anything done at all without ssh.  Incidentally, this is why I prefer a vim-compatible programming environment over some heavy-weight IDE any day. Running an IDE over ssh is out of the question.


> > You shouldn't have to download half the world
> 
> The first time I learned about pulling in dependencies terrified me.

This is far from the first time I encountered this concept, and it *still* terrifies me. :-D


> (This is the part I realize I am very different from most other
> programmers.)

I love being different! ;-)


> I am still terrified that my dependency system will pull in a tree of code that I have no idea what it is doing. Has it been modified to be malicious overnight? I thought it was possible. The following story is an example of exactly what I was terrified about:
> 
> https://medium.com/hackernoon/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5

EXACTLY!!!  This is the sort of thing that gives nightmares to people working in network security.  Cf. also the Ken Thompson compiler hack.


> Despite such risks many projects just pull in code. (?) What am I
> missing?

IMNSHO, it's because of the indoctrination of code reuse. "Why write code when you can reuse something somebody else has already written?" Sounds good, but there are a lot of unintended consequences:

1) You become dependent on code of unknown provenance written by authors of unknown motivation; how do you know you aren't pulling in malicious code?  (Review the code you say? Ha! If you were that diligent, you'd have written the code yourself in the first place.  Not likely.)  This problem gets compounded with every recursive dependency (it's perhaps imaginable if you carefully reviewed library L before using it -- but L depends on 5 other libraries, each of which in turn depends on 8 others, ad nauseaum. Are you seriously going to review ALL of them?)

2) You become dependent on an external resource, the availability of which may not be under your control. E.g., what happens if you're on the road without an internet connection, your local cache has expired, and you really *really* need to recompile something?  Or what if one day, the server on which this dependency was hosted suddenly upped and vanished itself into the ether?  Don't tell me "but it's hosted on XYZ network run by Reputable Company ABC, they'll make sure their servers never go down!" -- try saying that 10 years later when you suddenly really badly need to recompile your old code.  Oops, it doesn't compile anymore, because a critical dependency doesn't exist anymore and nobody has a copy of the last ancient version the code compiled with.

3) The external resource is liable to change any time, without notice (the authors don't even know you exist, let alone who you are and why changing some API will seriously break your code). Wake up the day of your important release, and suddenly your project doesn't compile anymore 'cos upstream committed an incompatible change. Try explaining that one to your irate customers. :-P


> I heard about a team at a very high-profile company actually reviewing such dependencies before accepting them to the code base. But reviewing them only at acceptance time! Once the dependency is accepted, the projects would automatically pull in all unreviewed changes and run potentially malicious code on your computer.

Worse yet, at review time library L depended on external packages X, Y, Z.  Let's grant that X, Y, Z were reviewed as well (giving the benefit of the doubt here).  But are the reviewers seriously going to continue reviewing X, Y, Z on an ongoing basis?  Perhaps X, Y, Z depended upon P, Q, R as well; is *anyone* who uses L going to even notice when R's maintainer turned rogue and committed some nasty backdoor into his code?


> I am still trying to understand where I went wrong. I simply cannot understand this. (I want to believe they changed their policy and they don't pull in automatically anymore.)

If said company is anything like the bureaucratic nightmare I have to deal with every day, I'd bet that nobody cares about this 'cos it's not their department.  Such menial tasks are owned by the department of ItDoesntGetDone, and nobody ever knows what goes on there -- we're just glad they haven't bothered us about show-stopping security flaws yet. ;-)


> When I (had to) use Go for a year about 4 years ago, it was the same: The project failed to build one morning because there was an API change on one of the dependencies. O... K... They fixed it in a couple of hours but still... Yes, the project should probably have depended on a particular version but then weren't we interested in bug fixes or added functionality? Why should we have decided to hold on to version 1.2.3 instead of 1.3.4? Should teams follow their many dependencies before updating? Maybe that's the part I am missing...

See, this is the fundamental problem I have with today's philosophy of "put it all in `the cloud', that's the hip thing to do". I do *not* trust that code from some external server somewhere out there isn't going to just vanish into the ether suddenly, or keel over and die the day after, or just plain get hacked (very common these days) and had trojan code inserted into the resource I depend on.  Or the server just plain becomes unreachable because I'm on the road, or my ISP is acting up (again), or the network it's on got sanctioned overnight and now I'm breaking the law just by downloading it.

I also do *not* trust that upstream isn't going to commit some incompatible change that will fundamentally break my code in ways that are very costly to fix.  I mean, they have every right to do so, why should they stop just because some anonymous user out here depended on their code?  I want to *manually* verify every upgrade to make sure that it hasn't broken anything, before I commit to the next version of the dependency. AND I want to keep a copy of the last working version on my *local harddrive* until I'm 100% sure I don't need it anymore.  I do NOT trust some automated package manager to do this for me correctly (I mean, software can't possibly ever fail, right?).

And you know what?  I've worked with some projects that have lasted for over a decade or two, and on that time scale, the oft-touted advantages of code reuse have taken on a whole new perspective that people these days don't often think about.  I've seen several times how as time goes on, external dependencies become more of a liability than an asset. In the short term, yeah it lets you get off the ground faster, saves you the effort of reinventing the wheel, blah blah blah.  In the long term, however, these advantages don't seem so advantageous anymore:

- You don't *really* understand the code you depend on, which means if
  upstream moves in an incompatible direction, or just plain abandons
  the project (the older the project the more likely this happens), you
  would not have the know-how to replicate the original functionality
  required by your own code.

- Sometimes the upstream breakage is a subtle one -- it works most of
  the time, but in this one setting with this one particular customer
  the behaviour changed. Now your customer is angry and you don't have
  the know-how to fix it (and upstream isn't going to do it 'cos the old
  behaviour was a bug).

- You may end up with an irreplaceable dependency on abandoned old code,
  but since it isn't your code you don't have the know-how to maintain
  it (e.g., fix bugs, security holes, etc.). This can mean you're stuck
  with a security flaw that will be very expensive to fix.

- Upstream may not have broken anything, but the performance
  characteristics may have changed (for the worse). I'm not making this
  up -- I've seen an actual project where compiling with the newer
  library causes a 2x reduction in runtime performance. Many months
  after, it was somewhat improved, but still inferior to the original
  *unoptimized* library. And complaining upstream didn't help -- they
  insisted their code wasn't intended to be used this way, so the
  performance issues are the user's fault, not theirs.

- Upstream licensing terms may change, leaving you stuck up the creek
  without a paddle.

Writing the code yourself may have required more up-front investment (and provoke the ire of the Code Reuse police), but you have the advantage that you own the code, have a copy of it always available, won't have licensing troubles, and understand the code well enough to maintain it over the long term.  You become independent of the network availability, immune to outages and unwanted breaking changes.

The code reuse emperor has no clothes, but his cronies brand me as heretic scum worthy only to be spat out like a gnat. Such is life. ;-)


> Thanks for listening... Boo hoo... Why am I like this? :)
[...]

'cos you're the smart one. ;-)  Most people don't even think about these issues, and then years later it comes back and bites them in the behind.


T

-- 
It is of the new things that men tire --- of fashions and proposals and improvements and change. It is the old things that startle and intoxicate. It is the old things that are young. -- G.K. Chesterton
March 19, 2022

On Friday, 18 March 2022 at 04:13:36 UTC, Alain De Vos wrote:

> Dlang includes some good ideas.
> But dub pulls in so much stuff. Too much for me.
> I like things which are clean, lean, little, small.
> But when I use dub it links with so many libraries.
> Are they really needed?
> And how does it compare to Python's pip?
> Feel free to elaborate.

DUB changed my programming practice.

To understand why DUB is needed I think it's helpful to see the full picture, at the level of your total work, in particular recurring costs.

An example

My small software shop operation (sorry) is built on DUB and if I analyze my own package usage, there are 4 broad categories:

  • Set A. Proprietary Code => 8 packages, 30.4 kloc

  • Set B. Open source, that I wrote, maintain, and evolve => 33 packages, 88.6 kloc

  • Set C. Open source, that I maintain minimally, write it in part only => 5 packages, 59.1 kloc

  • Set D. Foreign packages (I neither wrote nor maintain them; stuff like arsd) => 14 packages, 45.9 kloc

=> Total = 224 kloc, only counting non-whitespace lines here.

This is only the code that needs to be kept alive and maintained. Obviously code that is more R&D and/or temporary bears no recurring cost.

Visually:

Set A: ooo         30.4 (proprietary)
Set B: ooooooooo   88.6 (open-source)
Set C: oooooo      59.1 (open-source)
Set D: oooo        45.9 (open-source)
--------------------------------------
Total: oooooooooooooooooooooo

What is the cost of maintaining all that?

At a very minimum, all code in A + B + C + D needs to build with the D compiler since the business uses it, and build at all times.

Maintaining the "it builds" invariant takes a fixed cost m(A) + m(B) + m(C) + m(D).

Here m(D) is borne by someone else.
As B and C are open-source and maintained by me, the cost of building B and C for someone else is zero. That's why an ecosystem is so important for a language: it removes a recurring expense. And indeed, an open-source ecosystem is probably the main driver of language adoption, as a pure capital gain.

Now consider the cost of evolving and bug fixing instead of just building.
=> This is about the same reasoning, with perhaps bug costs being less transferable. Reuse delivers handsomely, and is cited by The Economics of Software Quality as one of the best drivers of increased quality [1]. Code you don't control, but trust, is a driver of increased quality (and, as the book demonstrates, of lower cost, fewer defects and less litigation).

Now let's pretend DUB doesn't exist

For maintaining the invariant "it builds with the latest compiler", you'd have to pay:
   m(A) + m(B) + m(C), but then do another important task:

   => Copy each new updated source in dependent projects.

Unfortunately this isn't trivial at all: that code is now duplicated in several places.

Realistically you will do this on an as-needed basis. And then other people can rely on none of your code (it doesn't build, statistically), and much less of an ecosystem becomes possible (because nothing builds and older versions of files are everywhere).

Without DUB, you can't have a large set of code that maintains this or that invariant, and you will have to rely on an attentional model where only the last thing you worked on is up to date.

DUB also makes it easy to stuff your code into the B and C categories, which provides value for everyone. Without DUB you won't have, say, VisualD projects, because the cost of maintaining the invariant "has a working VisualD project" would be too high; but with DUB, because it's declarative, it's almost free.

[1] "The Economics of Software Quality" - Jones, Bonsignour, Subramanyam

March 19, 2022
On Friday, 18 March 2022 at 21:04:03 UTC, H. S. Teoh wrote:
> Review the code you say? Ha! If you were that diligent, you'd have written the code yourself in the first place.  Not likely.

That logic doesn't make sense. Reading code takes way less time than writing good code, especially for larger projects.


March 20, 2022
On Friday, 18 March 2022 at 18:16:51 UTC, Ali Çehreli wrote:
> As a long-time part of the D community, I am ashamed to admit that I don't use dub. I am ashamed because there is no particular reason, or my reasons may not be rational.


dub is legitimately awful. I only use it when forced to, and even making my libs available for others to use through it is quite an unnecessary hassle due to its bad design.

> That sounds great but aren't there common needs of those modules to share code from common modules?

Some. My policy is:

1) Factor out shared things when I *must*, not just because I can.

So if it just coincidentally happens to be the same code, I'd actually rather copy/paste it than import it. Having a private copy can be a hassle - if a bug fix applies to both, I need to copy/paste it again - but it also has two major benefits: it keeps the build simple and it keeps the private implementation actually private. This means I'm not tempted to complicate the interface to support two slightly different use cases if the need arises; I have the freedom to edit it to customize for one use without worrying about breaking it for another.

When I must factor something out, it is usually because it is part of a shared public *interface* rather than an implementation detail. A shared interface happens when interoperability is required. The biggest example in my libs is the Color and MemoryImage objects, which are produced by independent image format modules and can then be passed to independent screen drawing or editing modules. Loading an image then being unable to display it without a type conversion* would be a bit silly, hence the shared type.

* Of course, sometimes you convert anyway. With .tupleof or getAsBytes or something, you can do agnostic conversions, but it is sometimes nice to just have `class SpecialImage { this(GenericImage) { } }` to do the conversions and that's where a shared third module comes in, so they can both `import genericimage;`.
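To make the shape of that concrete, here is a rough sketch of the pattern (module and type names are invented for illustration, this is not the actual arsd code): one small shared module defines the interop type, and the loader and display modules each import only that.

// genericimage.d -- the small shared interop module
module genericimage;

struct Pixel { ubyte r, g, b, a; }

class GenericImage
{
    int width, height;
    Pixel[] pixels;
}

// pngloader.d -- produces the shared type, knows nothing about display:
//     import genericimage;
//     GenericImage loadPng(string path) { ... }

// display.d -- consumes the shared type, knows nothing about file formats:
//     import genericimage;
//     class SpecialImage { this(GenericImage img) { /* convert */ } }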

2) Once I do decide to share something, there's a policy of tiers:

The first tier has zero imports (exceptions made for druntime and SOMETIMES phobos, but I've been strict about phobos lately too). These try to be the majority of them, providing interop components and some encapsulated basic functionality. They can import other things, but only if the user actually uses them. For example, dom.d has zero imports for basic functions. But if you ask it to load a non-UTF-8 file, or a file from the web, it will import arsd.characterencodings and/or arsd.http2 on demand.

Basic functionality must just work; it allows those opt-in extensions, though.
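A rough sketch of one way that opt-in pattern can be expressed in D (hypothetical code, not the real dom.d; the bodies are elided): the extra dependency is imported locally inside a function with an empty template parameter list, so it only gets compiled in for callers that actually use the extension.

module dom;   // heavily simplified, hypothetical

class Document { /* ... */ }

// Tier-one functionality: no imports at all.
Document parseUtf8(string html)
{
    // ... parse the string directly ...
    return new Document;
}

// Opt-in extension: the empty ()-template parameter list means this body
// (and its local import of arsd.http2) is only compiled if someone actually
// calls it, so users of the basic functions never need arsd.http2.
Document parseFromWeb()(string url)
{
    import arsd.http2;
    // ... fetch `url`, then parse the response body ...
    return new Document;
}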

The second tier generally has just one import, and it must be from the first tier or maybe a common C library. I make some exceptions to add an interop interface module too, but I really try to keep it to just one. These build on the interop components to provide some advanced functionality. This is where my `script.d` comes in, for example, extending `jsvar.d`'s basic functionality with a dynamic script capability.

I also consider `minigui.d` to be here, since it extends simpledisplay's basic drawing with a higher-level representation of widgets and controls, though since simpledisplay itself imports color.d now (it didn't when I first wrote them... making that change was something I really didn't want to do, but was forced to by practical considerations), minigui does have two imports... but still, I'm leaving it there.

Then, finally, there's the third tier, which I call the application/framework tier, which is the rarest one in my released code (but most common in my proprietary code, where I just `dmd -i` it and use whatever is convenient). At this point, I'll actually pull in whatever I want (from the arsd package, that is) so there is no limit on the number of imports. I still tend to minimize them, but won't take extraordinary effort. This is quite rare for me to do in a library module since this locks it out of use by any other library module! Obviously, no tier one or two files can import a tier three, so if I want to actually reuse anything in there, it must be factored out back to independence first.

C libraries, btw, are themselves also imports, so I minimize them too, but there's again some grey area: postgres.d uses database.d as the shared interface, but libpq as its implementation. I still consider it tier two though, despite a C library being even harder for the user to set up than 50 arsd modules.

3) I try to minimize and batch breaking changes, including breaks to the build instructions. When I changed simpledisplay to import color, it kinda bugged me since for a few years at that point, I told people they can just download it off my website and go.

I AM considering changing this policy slightly and moving more to tier two, so it is the majority instead of tier one. All my new instructions say "dmd -i" instead of "download the file", but I am still really iffy on whether it is worth it. Merging the Uri structs and the event loops sounds nice, and having the containers and exception helpers factored out would bring some real benefits, but I've had this zero-or-one import policy for so long that making it one-or-two seems too far. But I'll decide next year when the next breaking change release is scheduled.

(btw my last breaking change release was last summer and it actually broke almost nothing, since I was able to provide a migration path, to considerable joy. I'm guessing most of my users never even noticed it happened)

> Despite such risks many projects just pull in code. (?) What am I missing?

It is amazing how many pretty silly things are accepted as gospel.

March 20, 2022
On 3/20/22 05:27, Adam D Ruppe wrote:

> So if it just coincidentally happens to be the same code, I'd actually
> rather copy/paste it than import it.

This is very interesting because it is so much against common guidelines. I first read about such copy/paste in a book (my guess is John Lakos's Large Scale C++ Software Design book because my next example below is from that book.) The author was saying exactly the same thing: Yes, copy/paste is bad but dependencies are bad as well.

I was surprised by John Lakos's decision to use external header guards. In addition to the following common include guard idiom:

// This is foo.h
#ifndef INCLUDED_FOO_H_
#define INCLUDED_FOO_H_
// ...
#endif // INCLUDED_FOO_H_

He would do the same in the including modules as well:

// This is bar.c
#ifndef INCLUDED_FOO_H_
#include "foo.h"
#endif
// ...

Such a crazy idea, and it is completely against the DRY principle! However, according to his measurements on the hardware and file systems of that time, he was saving a lot of build time. (The file system's reading the file many times just to determine that it had already been included was too expensive. Instead, he was making that determination himself, from the very include guard macro, in the including file.)

Those were the first examples when I started to learn that it was possible to go against common guidelines.

I admire people like you and John Lakos who don't follow guidelines blindly. I started to realize the power of engineering very late. Engineering almost by definition should break guidelines.

Ali
