April 19, 2019
On Sat, Apr 20, 2019 at 01:14:12AM -0400, Nick Sabalausky (Abscissa) via Digitalmars-d wrote:
> On 4/19/19 5:58 PM, H. S. Teoh wrote:
[...]
> > Yes. I for one dumped Gradle shortly after starting my Android project, because it just didn't let me do what I need to do, or at least not easily.  Gradle expects your typical Java codebase with standard source tree structure.  I needed D codegen and cross-compilation as integral parts of my build.  The two ... let's just say, don't work very well together.  It's the "my way or the highway" philosophy all over again.
> 
> No big surprise why a "my way or the highway philosophy" is more successful with a JVM audience than a D audience ;)
> 
> Like I alluded to: different language -> different audience -> different needs and requirements -> different "right" answers.

What I have trouble comprehending is, the technology needed to support *both* use cases already exists.  So why not use it??  Why arbitrarily exclude certain use cases in favor of one specific one, when there is no technological reason not to support all use cases?  We already have the tech to fly to the moon and back, yet we arbitrarily impose the restriction that the aircraft must remain in contact with the runway at all times, because some grandma on the plane is scared of heights (aka codebases that don't conform to the One True Way To Organize And Build Source Code).  It doesn't make any sense to me.


T

-- 
It's bad luck to be superstitious. -- YHL
April 26, 2019
On Fri, Apr 26, 2019 at 02:37:31PM +0100, Russel Winder via Digitalmars-d wrote:
> On Fri, 2019-04-19 at 14:58 -0700, H. S. Teoh via Digitalmars-d wrote:
> > 
> […]
> > Yes. I for one dumped Gradle shortly after starting my Android project, because it just didn't let me do what I need to do, or at least not easily.  Gradle expects your typical Java codebase with standard source tree structure.  I needed D codegen and cross-compilation as integral parts of my build.  The two ... let's just say, don't work very well together.  It's the "my way or the highway" philosophy all over again.  Yes it hides away a lot of complexity, and does a lot of nice things automatically -- when what you need to do happens to match the narrow use case Gradle was designed to do.  But when you need to do something *other* than the One Accepted Way, you're in for a long uphill battle -- assuming it's even possible in the first place.  To that, I say, No Thank You, I have other tools that fit better with how I work.
> > 
> 
> Gradle is definitely not rigid as implied in the above: it can work with any source structure. True, there is a default, and "convention over configuration" is the main philosophy, but this can easily be overridden by those who need to.
> 
> There are hooks for doing pre-compilation code generation, though I suspect that, whilst there is support for C++, there is no ready-made support for D.

Here's a question, since you obviously know Gradle far better than my admitted ignorance: does Gradle support the following?

- Compile a subset of source files into an executable, and run said
  executable on some given set of input files in order to produce a set
  of auto-generated source files. Then compile the remaining source
  files along with the auto-generated ones to produce the final product.

- Compile a subset of source files into an executable, then run said
  executable on a set of data files to transform them in some way, then
  invoke a second program on the transformed data files to produce a set
  of images that are then post-processed by image-processing tools to
  produce the final outputs.

- Given a set of build products (executables, data files, etc.), pack
  them into an archive, compress, sign, etc., to produce a deliverable.

- Have all of the above work correctly and minimally whenever one or
  more input files (either data files or source files) changes.
  Minimally meaning if a particular target doesn't depend on the changed
  input files, it doesn't get rebuilt, and if an intermediate build
  product is identical to the previous build, subsequent steps are
  elided.

If Gradle can support all of the above, then it may be worthy of consideration.

Of course, the subsequent question would then be, how easy is it to accomplish the above, i.e., is it just a matter of adding a few lines to the build description, or does it involve major build system hackery.


> That you chose to ditch Gradle and go another way is entirely fine, but to denigrate Gradle as above based on what appears to be a single episode quickly abandoned, is a bit unfair to Gradle.

I did not intend to denigrate Gradle; I was merely relating my own experience with it.  Clearly, it works very well for many people, otherwise it wouldn't even be on the radar in the first place.  But it took far more effort than I was willing to put in to get it to do what I need to do, esp. when I already have an SCons system that I'm familiar with and can get up and running with minimal effort. Given such a choice, it should be obvious why I decided against using Gradle.


[...]
> The elided comments on Gradle requiring JVM and thus lots of memory and relatively slow startup is very fair. Much better grounds for your ditching of Gradle than not finding the Gradle way of doing things you needed to do.

See, this is the problem I have with many supposedly "modern" tools. They are giant CPU and memory hogs, require tons of disk space for who knows what, take forever to start up, and chug along taking their sweet time just to get the simplest of tasks done, where supposedly "older" or "antiquated" tools fire up in an instant, get right to work, and get the same task done in a fraction of the time.  This in itself may have been excusable, but then these "modern" tools also come with a whole bunch of red tape, arbitrary limitations, and the philosophy of "follow our way or walk there on foot yourself".  IOW they do what "antiquated" tools do poorly, and can't even do advanced things said "antiquated" tools can do without any fuss -- and this not because of *technical* limitations, but because it was arbitrarily decided at some point that use cases X and Y will not be supported, just 'cuz.

So somebody please enlighten this here dinosaur of *why* I should choose these "modern" tools over my existing, and IMO better, ones?

This is why I'm a fan of empowering the user.  Instead of being obsessed over how to package your software in a beautiful jewel-encrusted box with a gold ribbon on top, (which IMO is an utter waste of time and only attracts those with an inflated sense of entitlement), give the user the tools to do what the software can do. Instead of delivering a magic black box that does everything but with the hood welded shut, give them the components within the black box with which they can build something far beyond what you may have initially conceived.  I believe *that* is the way to progress, not this "hand me my software on a silver platter" philosophy so pervasive nowadays.

(The jewel-encrusted box is still nice, BTW -- but only as long as it's not at the expense of welding the hood shut.)


> I feel Gradle is probably not a good choice for D builds now, but it could be.  This is because no-one is using it for that purpose, and so the easy-to-use, D-oriented tools are not available; everything has to be done with the base Gradle tools.
[...]

If Gradle is dependent on Java, then using it for D builds would be kind of ... anticlimactic. :-D  But probably the primary issue would be the slow startup times and heavy storage requirements that go against the grain of our present (cringe-worthy) fast-fast-fast slogan. I suppose it's no coincidence that the Gradle logo is ... just as slow and mellow as the tool itself. ;-)  (I love RL elephants, BTW, so this isn't meant in a denigrating way. But I'd have a very hard time justifying the order-of-magnitude increase in my edit-compile-run cycle turnaround times.  Life is far too short to have to wait for such things.)


T

-- 
Talk is cheap. Whining is actually free. -- Lars Wirzenius
April 27, 2019
On Friday, 26 April 2019 at 17:30:50 UTC, H. S. Teoh wrote:
> On Fri, Apr 26, 2019 at 02:37:31PM +0100, Russel Winder via Digitalmars-d wrote:
>> On Fri, 2019-04-19 at 14:58 -0700, H. S. Teoh via Digitalmars-d wrote:
>> > 
>> […]
>> > Yes. I for one dumped Gradle shortly after starting my Android project, because it just didn't let me do what I need to do, or at least not easily.  Gradle expects your typical Java codebase with standard source tree structure.  I needed D codegen and cross-compilation as integral parts of my build.  The two ... let's just say, don't work very well together.  It's the "my way or the highway" philosophy all over again.  Yes it hides away a lot of complexity, and does a lot of nice things automatically -- when what you need to do happens to match the narrow use case Gradle was designed to do.  But when you need to do something *other* than the One Accepted Way, you're in for a long uphill battle -- assuming it's even possible in the first place.  To that, I say, No Thank You, I have other tools that fit better with how I work.
>> > 
>> 
>> Gradle is definitely not rigid as implied in the above: it can work with any source structure. True, there is a default, and "convention over configuration" is the main philosophy, but this can easily be overridden by those who need to.
>> 
>> There are hooks for doing pre-compilation code generation, though I suspect that, whilst there is support for C++, there is no ready-made support for D.
>
> Here's a question, since you obviously know Gradle far better than my admitted ignorance: does Gradle support the following?
>
> - Compile a subset of source files into an executable, and run said
>   executable on some given set of input files in order to produce a set
>   of auto-generated source files. Then compile the remaining source
>   files along with the auto-generated ones to produce the final product.
>
> - Compile a subset of source files into an executable, then run said
>   executable on a set of data files to transform them in some way, then
>   invoke a second program on the transformed data files to produce a set
>   of images that are then post-processed by image-processing tools to
>   produce the final outputs.

Not sure what you find difficult about this. You create two tasks: one generates the executable, and the second runs it, with the first task added as a dependency of the second. It really just boils down to "do this task before that task", where a task can be just a command you run -- which is basically what every build system supports.

https://docs.gradle.org/current/userguide/more_about_tasks.html#sec:adding_dependencies_to_tasks
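
A minimal sketch of that in Gradle's Groovy DSL (the task names, file paths, and the compiler invocation are all illustrative, not anything from this thread):

```groovy
// Build a code generator, then run it over input files to produce
// generated sources. All names and paths here are hypothetical.
task buildGenerator(type: Exec) {
    // Compile the generator sources into an executable.
    commandLine 'dmd', '-of=build/gen', 'tools/gen.d'
}

task generateSources(type: Exec) {
    dependsOn buildGenerator   // ensure the generator exists first
    // Run the generator over the input definitions.
    commandLine 'build/gen', 'data/schema.def', '-o', 'build/generated'
}
```

A subsequent compile task would then simply `dependsOn generateSources` and pick up `build/generated` as part of its source set.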

> - Given a set of build products (executables, data files, etc.), pack
>   them into an archive, compress, sign, etc., to produce a deliverable.

This really is just a task where you run a command. Signing is different for basically every system, and even on one system it can vary based on what program you use to sign. If you expect the build system to know how to sign an executable by just writing "sign: true", then you aren't going to find a build system like that unless it only works on one system.

Zip archives, at least, are easy to produce:

https://docs.gradle.org/current/userguide/working_with_files.html#sec:archives
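
For instance, a sketch of such a packaging task (Groovy DSL; all names and paths are illustrative):

```groovy
// Collect build products into a zip deliverable. Signing would be a
// separate Exec task afterwards, invoking whatever signing tool the
// target platform requires.
task packageDeliverable(type: Zip) {
    from 'build/bin'                          // executables
    from 'build/data'                         // data files
    archiveFileName = 'deliverable.zip'
    destinationDirectory = file('build/dist')
}
```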

> - Have all of the above work correctly and minimally whenever one or
>   more input files (either data files or source files) changes.
>   Minimally meaning if a particular target doesn't depend on the changed
>   input files, it doesn't get rebuilt, and if an intermediate build
>   product is identical to the previous build, subsequent steps are
>   elided.

This has been around for quite a while -- since 2.5, by the looks of it. You have to specify the input and output files so it can detect whether they were changed.

https://docs.gradle.org/2.5/userguide/more_about_tasks.html#sec:up_to_date_checks
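
A sketch of what those declarations look like (Groovy DSL; paths hypothetical). Gradle skips the task entirely when the declared inputs and outputs are unchanged since the last run:

```groovy
task transformData(type: Exec) {
    inputs.dir 'data'                  // re-run only if something under data/ changes
    outputs.dir 'build/transformed'    // ...or if the outputs are missing or stale
    commandLine 'build/gen', 'data', '-o', 'build/transformed'
}
```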

> If Gradle can support all of the above, then it may be worthy of consideration.
>
> Of course, the subsequent question would then be, how easy is it to accomplish the above, i.e., is it just a matter of adding a few lines to the build description, or does it involve major build system hackery.

What build systems do you know that support the above? Like I said, some things are kind of unreasonable, like expecting a build system to be able to sign your executables with just a simple flag. The signing process I have to use doesn't actually allow us to automate it. For some reason it won't work unless you are physically at the server when it signs the executable. Seems like it might be a security feature; you can't even remote into the server to sign it manually.

>> That you chose to ditch Gradle and go another way is entirely fine, but to denigrate Gradle as above based on what appears to be a single episode quickly abandoned, is a bit unfair to Gradle.
>
> I did not intend to denigrate Gradle; I was merely relating my own experience with it.  Clearly, it works very well for many people, otherwise it wouldn't even be on the radar in the first place.  But it took far more effort than I was willing to put in to get it to do what I need to do, esp. when I already have an SCons system that I'm familiar with and can get up and running with minimal effort. Given such a choice, it should be obvious why I decided against using Gradle.

Imagine that, learning something new takes a lot more effort than using something you already know. Who would have thought?

> [...]
>> The elided comments on Gradle requiring JVM and thus lots of memory and relatively slow startup is very fair. Much better grounds for your ditching of Gradle than not finding the Gradle way of doing things you needed to do.
>
> See, this is the problem I have with many supposedly "modern" tools. They are giant CPU and memory hogs, require tons of disk space for who knows what, take forever to start up, and chug along taking their sweet time just to get the simplest of tasks done, where supposedly "older" or "antiquated" tools fire up in an instant, get right to work, and get the same task done in a fraction of the time.  This in itself may have been excusable, but then these "modern" tools also come with a whole bunch of red tape, arbitrary limitations, and the philosophy of "follow our way or walk there on foot yourself".  IOW they do what "antiquated" tools do poorly, and can't even do advanced things said "antiquated" tools can do without any fuss -- and this not because of *technical* limitations, but because it was arbitrarily decided at some point that use cases X and Y will not be supported, just 'cuz.
>
> So somebody please enlighten this here dinosaur of *why* I should choose these "modern" tools over my existing, and IMO better, ones?

What do you consider giant CPU and memory hogs that require a ton of disk space? Because in my experience, the people here who complain about VS taking up so much space end up doing development work on some shitty tablet with 32 GB of HDD space.

I just use a python script; I don't feel like bothering with things like SCons. I execute exactly what I need, and it is easy to see what it is doing and what is happening. It doesn't take long to compile either, since I usually only need to re-compile one component.

Use whatever you want, just don't go condemning something cause you never bothered to learn how to use it.


April 27, 2019
On Fri, 2019-04-26 at 10:30 -0700, H. S. Teoh via Digitalmars-d wrote:
> 
[…]
> Here's a question, since you obviously know Gradle far better than my admitted ignorance: does Gradle support the following?
> 
> - Compile a subset of source files into an executable, and run said
>   executable on some given set of input files in order to produce a set
>   of auto-generated source files. Then compile the remaining source
>   files along with the auto-generated ones to produce the final product.

Yes.

> - Compile a subset of source files into an executable, then run said
>   executable on a set of data files to transform them in some way, then
>   invoke a second program on the transformed data files to produce a set
>   of images that are then post-processed by image-processing tools to
>   produce the final outputs.

Yes.

> - Given a set of build products (executables, data files, etc.), pack
>   them into an archive, compress, sign, etc., to produce a deliverable.

Yes.

> - Have all of the above work correctly and minimally whenever one or
>   more input files (either data files or source files) changes.
>   Minimally meaning if a particular target doesn't depend on the changed
>   input files, it doesn't get rebuilt, and if an intermediate build
>   product is identical to the previous build, subsequent steps are
>   elided.

Yes.

> If Gradle can support all of the above, then it may be worthy of consideration.
> 
> Of course, the subsequent question would then be, how easy is it to accomplish the above, i.e., is it just a matter of adding a few lines to the build description, or does it involve major build system hackery.

Not all of these things are part of Gradle as distributed; you have to write some tasks. However, the infrastructure is there to make writing the tasks straightforward. This is not one or two lines, but it is not major hackery either.

[...]
> 
> See, this is the problem I have with many supposedly "modern" tools. They are giant CPU and memory hogs, require tons of disk space for who knows what, take forever to start up, and chug along taking their sweet time just to get the simplest of tasks done, where supposedly "older" or "antiquated" tools fire up in an instant, get right to work, and get the same task done in a fraction of the time.  This in itself may have been excusable, but then these "modern" tools also come with a whole bunch of red tape, arbitrary limitations, and the philosophy of "follow our way or walk there on foot yourself".  IOW they do what "antiquated" tools do poorly, and can't even do advanced things said "antiquated" tools can do without any fuss -- and this not because of *technical* limitations, but because it was arbitrarily decided at some point that use cases X and Y will not be supported, just 'cuz.

This is a general problem of software, not just build systems such as Gradle. It does seem though that Meson + Ninja are much lighter weight.

> So somebody please enlighten this here dinosaur of *why* I should choose these "modern" tools over my existing, and IMO better, ones?

What are the better ones, one cannot debate without knowledge. ;-)

> This is why I'm a fan of empowering the user.  Instead of being obsessed over how to package your software in a beautiful jewel-encrusted box with a gold ribbon on top, (which IMO is an utter waste of time and only attracts those with an inflated sense of entitlement), give the user the tools to do what the software can do. Instead of delivering a magic black box that does everything but with the hood welded shut, give them the components within the black box with which they can build something far beyond what you may have initially conceived.  I believe *that* is the way to progress, not this "hand me my software on a silver platter" philosophy so pervasive nowadays.
> 
> (The jewel-encrusted box is still nice, BTW -- but only as long as it's not at the expense of welding the hood shut.)

UI and UX are important, definitely. The Groovy DSL and the Kotlin DSL for writing builds are focussed on convention over configuration, so as to make things as simple as possible for the straightforward cases. But the tools are there to configure and programme the build for complicated cases. In this, Gradle and SCons have similar capabilities, though SCons lacks many things built into Gradle.

There are probably 10x as many build requirements as there are programmers. A single unopenable black box build system simply has no chance.

[…]
> 
> If Gradle is dependent on Java, then using it for D builds would be kind of ... anticlimactic. :-D  But probably the primary issue would be the slow startup times and heavy storage requirements that go against the grain of our present (cringe-worthy) fast-fast-fast slogan. I suppose it's no coincidence that the Gradle logo is ... just as slow and mellow as the tool itself. ;-)  (I love RL elephants, BTW, so this isn't meant in a denigrating way. But I'd have a very hard time justifying the order-of-magnitude increase in my edit-compile-run cycle turnaround times.  Life is far too short to have to wait for such things.)

Gradle is JVM-based, but it has build daemons and caching to make builds faster than you might think. Clearly the bulk of the build focus is on JVM-related stuff, but C and C++ got added because someone paid for it to be added.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk



April 27, 2019
Hi,

I think there is a lot to be said for using Gradle and the Maven repository as the package management tooling. Why reinvent something that already works?

Gradle is a task dependency management tool at its core - it doesn't care what these tasks are, and you can write plugins to create any tasks you want. I have seen Gradle used successfully for C++ projects.

Regards
Dibyendu


April 27, 2019
On 4/27/19 10:25 AM, Gilter wrote:
> Use whatever you want, just don't go condemning something cause you never bothered to learn how to use it.
> 

As he already said, he wasn't condemning it, just relaying his experience with it.
April 27, 2019
On 4/26/19 1:30 PM, H. S. Teoh wrote:
> See, this is the problem I have with many supposedly "modern" tools.
> They are giant CPU and memory hogs, require tons of disk space for who
> knows what, take forever to start up, and chug along taking their sweet
> time just to get the simplest of tasks done, where supposedly "older" or
> "antiquated" tools fire up in an instant, get right to work, and get the
> same task done in a fraction of the time.  This in itself may have been
> excusable, but then these "modern" tools also come with a whole bunch of
> red tape, arbitrary limitations, and the philosophy of "follow our way
> or walk there on foot yourself".  IOW they do what "antiquated" tools do
> poorly, and can't even do advanced things said "antiquated" tools can do
> without any fuss -- and this not because of *technical* limitations, but
> because it was arbitrarily decided at some point that use cases X and Y
> will not be supported, just 'cuz.

+1

> So somebody please enlighten this here dinosaur of *why* I should choose
> these "modern" tools over my existing, and IMO better, ones?

Because "you just have to believe" and join the newer-is-always-better cult? Because we're told by everyday joe that we always have to keep up with new technology or get left behind, so by golly that must be true, after all, who are we to question those smarter than us, those from whom everyday joe obtained the "keep up or left behind" Truth?

I think THAT'S why you have to. ;)

> This is why I'm a fan of empowering the user.  Instead of being obsessed
> over how to package your software in a beautiful jewel-encrusted box
> with a gold ribbon on top, (which IMO is an utter waste of time and only
> attracts those with an inflated sense of entitlement), give the user the
> tools to do what the software can do. Instead of delivering a magic
> black box that does everything but with the hood welded shut, give them
> the components within the black box with which they can build something
> far beyond what you may have initially conceived.  I believe *that* is
> the way to progress, not this "hand me my software on a silver platter"
> philosophy so pervasive nowadays.
> 
> (The jewel-encrusted box is still nice, BTW -- but only as long as it's
> not at the expense of welding the hood shut.)

Unix philosophy. Yup. *nods*. "A tool should do one thing, and do it well." The Lego approach to software.

I grew up on MS-DOS and Windows (well, ok, Apple 2...then DOS/Win). And hey, they served their purposes for me. Fast-forward some years later and I was talking to a game programmer I had previously known (gamedev circles tend to be fairly Win/MS-heavy - and there's admittedly been reasons for that). At one point he stopped and said, "Hey, wait a minute, since when did you become a Linux guy?" Well, since I grew up and learned that I like being able to automate any repetitive sequence I need to, to aid my productivity and help manage cognitive load. The unix philosophy is key in enabling that.

It's also why a world of random hackers have been able to build and maintain a system that, in terms of capabilities, even giants like Microsoft and Google can't keep up with (although, certain business realities actually make it much more lucrative for large companies to keep their products artificially limited - so it's not as if they're especially motivated to compete with *nix on capabilities).
April 28, 2019
On Sat, Apr 27, 2019 at 04:41:56PM -0400, Nick Sabalausky (Abscissa) via Digitalmars-d wrote:
> On 4/26/19 1:30 PM, H. S. Teoh wrote:
[...]
> > This is why I'm a fan of empowering the user.  Instead of being obsessed over how to package your software in a beautiful jewel-encrusted box with a gold ribbon on top, (which IMO is an utter waste of time and only attracts those with an inflated sense of entitlement), give the user the tools to do what the software can do. Instead of delivering a magic black box that does everything but with the hood welded shut, give them the components within the black box with which they can build something far beyond what you may have initially conceived.  I believe *that* is the way to progress, not this "hand me my software on a silver platter" philosophy so pervasive nowadays.
[...]
> Unix philosophy. Yup. *nods*. "A tool should do one thing, and do it well." The Lego approach to software.

It's not so much doing one thing and doing it well (even though that is a natural consequence), but (1) giving the user access to the tools that *you* have at your disposal, and (2) making the interface *composable*, so that the user is empowered to put the blocks together in brand new ways that you've never thought of.


[...]
> "Hey, wait a minute, since when did you become a Linux guy?" Well, since I grew up and learned that I like being able to automate any repetitive sequence I need to, to aid my productivity and help manage cognitive load. The unix philosophy is key in enabling that.
[...]

Yes, the ability to automate *any arbitrary task* is a key functionality that modern designs seem to have overlooked. Automation is the raison d'être of the very first machines.  Humans are bad at repetitive tasks, and machines are supposed to take over these tasks so that humans can excel at the other things they are good at.

Yet the UIs of the modern day force you to click through endless nested levels of submenus and provide very little (if any at all) way to automate things.  And when such is provided, it's usually arbitrarily limited or encumbered in some way, and usually cannot come up to the full functionality accessible when you do things the manual way.  It doesn't make any sense.  Machines are supposed to abstract away the repetitive tasks and allow you to program them to do arbitrary things on the click of a button, not to force you to make repetitive gestures and get a wrist aneurysm because the GUI designer didn't think scripting was an important use case.  The whole point of a general-purpose computing machine is to be able to apply this automation to ANY ARBITRARY TASK, not merely some arbitrarily limited subset that the designers deigned worthy to be included.

Making your interface composable is what allows your code to be composable with any other code in a painless way.

This is why more and more in my own designs I'm leaning towards the guiding principle of programs being thin wrappers around reusable library modules that provide the real functionality. And the programs themselves should as much as possible provide the simplest, most straightforward interface that makes it composable with any other arbitrary program.
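
As a toy illustration of that principle (Python, with a made-up `slugify` function standing in for the "real functionality"): the library does the actual work, and the command-line program is nothing but a thin shim over it, so other programs can import the function directly instead of shelling out to the CLI.

```python
import argparse
import re


def slugify(text: str) -> str:
    """The reusable library function: lowercases text and collapses
    runs of non-alphanumeric characters into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")


def main(argv=None):
    """The program: nothing but argument parsing around the library call."""
    parser = argparse.ArgumentParser(description="Turn text into a slug.")
    parser.add_argument("text")
    args = parser.parse_args(argv)
    print(slugify(args.text))


if __name__ == "__main__":
    main()
```

Anyone who wants the functionality in-process just calls `slugify()`; anyone who wants a pipeline-friendly tool runs the script. Neither use case is privileged over the other.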

Composability, because the user is smarter than you and will think of use cases far beyond your wildest imaginations, so they should not be arbitrarily limited to only what you can think of.

Libraries, because no matter how awesome your program interface is, eventually somebody will want to reuse your functionality from their *own* program.  Allowing them to do that without needlessly cumbersome workarounds like spawning a subprocess and contorting their code to your quirky CLI syntax or method of invocation will make them very happy and loyal users, and will increase the scope of applicability of your code far beyond what you may have conceived.

(And no, remote procedure call is not an excuse to avoid providing a library API to your functionality.  RPC has its uses, but the user should be in control of how they wish your code to run. They should not need to spawn a daemon via JSON over HTTP to a remote server just to be able to call a single function. They *can* do this *if they want to*, but this should not be the *only* way of interfacing with your code. The user should be empowered to deploy your code according to *their* needs, not yours. They should be able to choose whether to call your function as a local function call to a DLL / .so on the local filesystem, or to run your library on a remote server accessed via RPC over the network.)

Empower the user to do what *they* want, how they want it, rather than straitjacket them to do what *you* want, the way you dictate it. This principle applies to all levels of code, from not writing a class that requires its methods to be called in a specific order otherwise it crashes or gets into an inconsistent state, to encapsulating your primary functionality in a library API that can be used anywhere in any way.

Life is too short to be wasted inventing shims and workarounds to functionalities that *should* have been made available generally with no strings attached and no straitjackets imposing arbitrary limitations. Empowerment, composability, automatability.  I'm tired of trying to work with software that doesn't provide all three.  It's just not worth my bother anymore.


T

-- 
MSDOS = MicroSoft's Denial Of Service