January 18, 2019
On Fri, Jan 18, 2019 at 08:03:09PM +0000, Neia Neutuladh via Digitalmars-d-announce wrote:
> On Fri, 18 Jan 2019 11:43:58 -0800, H. S. Teoh wrote:
> > (1) it often builds unnecessarily -- `touch source.d` and it rebuilds source.d even though the contents haven't changed; and
> 
> Timestamp-based change detection is simple and cheap. If your filesystem supports a revision id for each file, that might work better, but I haven't heard of such a thing.

Barring OS/filesystem support, there are recent OS features like inotify that let a build daemon listen for changes to files within a subdirectory. Tup, for example, uses this to make build times proportional to the size of the changeset rather than the size of the entire workspace.  I consider this an essential feature of a modern build system.

Timestamp-based change detection also does needless work even when there *is* a change.  For example, edit source.c, change a comment, and make will recompile it all the way down: .o file, .so file or executable, all dependent targets, etc.  A content-based scheme (e.g., one based on MD5 checksums) will stop at the .o step, because the comment did not cause the .o file to change, so further actions like linking the executable are superfluous and can be elided.  For small projects the difference is negligible, but for large-scale projects it can mean the difference between a few seconds (usable for a high-productivity code-compile-test cycle) and half an hour, which completely breaks that cycle.
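A minimal sketch in D of the content-based idea described above, using Phobos's MD5 digest. The function and file names are illustrative, not part of any particular build tool:

```d
// Hypothetical sketch of content-based change detection: rebuild a
// target only when the source's checksum differs from the checksum
// recorded at the previous build.
import std.digest.md : md5Of;
import std.digest : toHexString;
import std.file : exists, read, readText, write;

bool needsRebuild(string source, string stampFile)
{
    // Hash the current contents, not the timestamp.
    auto current = md5Of(cast(const(ubyte)[]) read(source)).toHexString.idup;
    if (exists(stampFile) && readText(stampFile) == current)
        return false; // contents unchanged: skip this target
    write(stampFile, current); // record the new checksum
    return true;
}
```

With this scheme, `touch source.d` changes the timestamp but not the checksum, so nothing is rebuilt; editing only a comment would still rebuild the .o, but the unchanged .o checksum would then stop the rebuild from propagating further.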


> If you're only dealing with a small number of small files, content-based change detection might be a reasonable option.

Content-based change detection is essential, IMO. It's onerous if you use the old scan-the-entire-source-tree model of change detection; it's actually quite practical with a modern inotify-based (or equivalent) system.


> > (2) it often fails to build necessary targets -- if for whatever reason your system clock is out-of-sync or whatever, and a newer version of source.d has an earlier date than a previously-built object.
> 
> I'm curious what you're doing that you often have clock sync errors.

Haha, that's just an old example from back in the bad ole days when NTP syncing was rare and everyone's PC was slightly off, anywhere from seconds to minutes (or, if it was really badly managed, hours, or maybe the wrong timezone).  The problem is most manifest when networked filesystems are involved.

These days, clock sync isn't really a problem anymore, generally speaking, but there's still something else about make that makes it fail to pick up changes.  I still regularly have to run `make clean; make` on makefile-based projects just to get the lousy system to pick up the changes.  I don't have that problem with more modern build systems. It's probably an issue of undetected dependencies.
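A common culprit is header files that never appear in the makefile's dependency lists: edit a header, and make sees no out-of-date .o files. A minimal sketch of the usual fix, assuming a GCC-compatible compiler (file names are illustrative):

```make
# Hypothetical sketch: have the compiler emit header dependencies
# (-MMD writes a .d file per object) so make picks them up next run.
SRCS   := main.c util.c
OBJS   := $(SRCS:.c=.o)
CFLAGS += -MMD -MP

app: $(OBJS)
	$(CC) -o $@ $(OBJS)

# Pull in the generated dependency files, if they exist yet.
-include $(OBJS:.o=.d)
```

Without something like this, changes to headers silently go undetected, which is exactly the kind of situation that forces a `make clean; make`.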


T

-- 
I think Debian's doing something wrong, `apt-get install pesticide', doesn't seem to remove the bugs on my system! -- Mike Dresser
January 18, 2019
On 2019-01-18 15:29, Mike Parker wrote:
> Not long ago, in my retrospective on the D Blog in 2018, I invited folks to write about their first impressions of D. Ron Tarrant, who you may have seen in the Learn forum, answered the call. The result is the latest post on the blog, the first guest post of 2019. Thanks, Ron!
> 
> The blog:
> https://dlang.org/blog/2019/01/18/d-lighted-im-sure/

Regarding Dub: if you only have a project without any dependencies, or perhaps only system dependencies already available on the system, it might not add that much. But as soon as you want to use someone else's D code, it helps tremendously. Dub acts as both a build tool and a package manager. It will automatically download the source code for the dependencies, build them and handle the import paths. As for JSON files, it's possible to use the alternative format SDL instead. One extremely valuable feature SDL has over JSON is that it supports comments.
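A minimal dub.sdl sketch showing what the SDL format looks like; the package name, description and dependency version here are illustrative, not a recommendation:

```sdl
// dub.sdl: unlike dub.json, comments like this one are allowed.
name "myapp"
description "Example application"

// Pull in a package from the dub registry; dub downloads it,
// builds it, and sets up the import paths automatically.
dependency "gtk-d" version="~>3.8.0"
```

Running `dub build` in a directory containing this file fetches the dependency and builds the project in one step.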

To address some of the direct questions in the blog post:

"information about how I would go about packaging a D app (with GtkD) for distribution".

When it comes to distributing D applications there isn't much that is specific to D. Most of the approaches and documentation that apply to any native language apply to D as well. There are two D-specific things (that I can think of right now) that are worth mentioning:

* When you compile a release build for distribution, use the LDC [1] compiler. It produces better code. You can also add things like LTO (Link-Time Optimization) and possibly PGO (Profile-Guided Optimization).

* If you have any static assets for your application, like images, sounds, videos, config files or similar, it's possible to embed them directly in the executable using the "import expression" [2] feature. This reads a file, at compile time, into a string literal in the code.
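The import-expression feature mentioned above can be sketched like this; the asset file name is illustrative, and the compiler needs the -J switch to know which directory import files may come from:

```d
// Embed a file into the binary at compile time via an import
// expression. Compile with e.g.:  dmd -Jassets app.d
// (-J whitelists the directory the imported files live in)
immutable string logo = import("logo.svg"); // file name is illustrative

void main()
{
    import std.stdio : writeln;
    writeln("embedded ", logo.length, " bytes");
}
```

Since the file contents become an ordinary string literal, no separate asset files need to be shipped alongside the executable.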

Some more general things about distribution: I think it's more platform-specific than language-specific. I can only speak for macOS, since that's the main platform I use. There, the application is expected to be distributed as a disk image (DMG). The image contains an application bundle: a regular directory with the ".app" extension and a specific directory and file structure. Some applications in the OS treat these bundles specially; for example, double-clicking the bundle in the file browser will launch the application. The bundle contains the actual executable and resources like libraries and assets such as images and audio. In your case, don't expect a Mac user to have GTK installed; ship it inside the application bundle.

Then there's the issue of which versions of the platforms you want to support. For macOS it's possible to specify a minimum deployment target using the "MACOSX_DEPLOYMENT_TARGET" environment variable. This allows you to build the application on the latest version of the OS but still have it work on older versions.

On *BSD and Linux it's not that easy, and Linux has the additional axis of distros, which adds another level of issues. The best approach is to compile for each distro and version you want to support, but that's a lot of work. I would provide fully statically linked binaries, statically linking even the C standard library. That way you can provide one binary for all versions and distros of Linux and know it will always work.

"how to build on one platform for distribution on another (if that’s even possible)"

I can say that it's possible, but unless you're targeting a platform that doesn't provide a compiler, like mobile or an embedded platform, I think it's rare to need to cross-compile. I'll tell you why:

When building an application that targets multiple platforms, you need to test it at some point. That means running the application on all the supported platforms, which means you need access to those platforms: usually a lot in the beginning, when developing the application, and at the end, when doing the final verification before a release.

Having said that, I'm all for automating as much as possible. That means automatically running all the tests, building the final release build and packaging it for distribution. For that I recommend hooking up your project to one of the publicly available free CI services. Travis CI [3] is one that supports Linux, macOS and Windows (early release [4]). AppVeyor [5] is an alternative that has much more mature support for Windows.

If you really want to cross-compile, it's possible if you use LDC. DMD can compile for the same platform for either 32-bit or 64-bit, but not for a different platform. I think it's simplest to use Docker. I have two Dockerfiles for containers with LDC set up for cross-compiling to macOS [6] and to Windows [7]. Unfortunately these Dockerfiles pull down the SDKs from someone's (not my) Dropbox account.

[1] https://github.com/ldc-developers/ldc
[2] https://dlang.org/spec/expression.html#import_expressions
[3] https://travis-ci.com/
[4] https://blog.travis-ci.com/2018-10-11-windows-early-release
[5] https://www.appveyor.com
[6] https://github.com/jacob-carlborg/docker-ldc-darwin/blob/master/Dockerfile
[7] https://github.com/jacob-carlborg/docker-ldc-windows/blob/master/Dockerfile

-- 
/Jacob Carlborg
January 18, 2019
On 2019-01-18 21:23, H. S. Teoh wrote:

> Haha, that's just an old example from back in the bad ole days where NTP
> syncing is rare, and everyone's PC is slightly off anywhere from seconds
> to minutes (or if it's really badly-managed, hours, or maybe the wrong
> timezone or whatever).

I had one of those issues at work. One day when I came in to work, it was suddenly not possible to SSH into a remote machine. It had worked the day before. It turned out the ntpd daemon was not running on the remote machine (for some reason), and we were using Kerberos with SSH, which means that if the clocks are too far out of sync you cannot log in. That was a ... fun debugging experience.

-- 
/Jacob Carlborg
January 18, 2019
On Fri, Jan 18, 2019 at 09:41:14PM +0100, Jacob Carlborg via Digitalmars-d-announce wrote:
> On 2019-01-18 21:23, H. S. Teoh wrote:
> 
> > Haha, that's just an old example from back in the bad ole days where NTP syncing is rare, and everyone's PC is slightly off anywhere from seconds to minutes (or if it's really badly-managed, hours, or maybe the wrong timezone or whatever).
> 
> I had one of those issues at work. One day when I came in to work it was suddenly not possible to SSH into a remote machine. It worked the day before. Turns out the ntpd daemon was not running on the remote machine (for some reason) and we're using Kerberos with SSH, that means if the clocks are too much out of sync it will not be able to login. That was a ... fun, debugging experience.
[...]

Ouch.  Ouch!  That must not have been a pleasant experience in any sense of the word.  Knowing all too well how these things tend to go, the errors in the SSH log were probably very unhelpful, mostly stemming from C's bad ole practice of returning a generic, unhelpful "failed" error code for all failures indiscriminately.  I had to work on SSH-based code recently, and it's just ... not a nice experience overall, due to the way the C code was written.


T

-- 
GEEK = Gatherer of Extremely Enlightening Knowledge
January 19, 2019
On Friday, 18 January 2019 at 14:29:14 UTC, Mike Parker wrote:
> Not long ago, in my retrospective on the D Blog in 2018, I invited folks to write about their first impressions of D. Ron Tarrant, who you may have seen in the Learn forum, answered the call. The result is the latest post on the blog, the first guest post of 2019. Thanks, Ron!
>
> As a reminder, I'm still looking for new-user impressions and guest posts on any D-related topic. Please contact me if you're interested. And don't forget, there's a bounty for guest posts, so you can make a bit of extra cash in the process.
>
> The blog:
> https://dlang.org/blog/2019/01/18/d-lighted-im-sure/
>
> Reddit:
> https://www.reddit.com/r/programming/comments/ahawhz/dlighted_im_sure_the_first_two_months_with_d/

Nicely done. Very enjoyable, thanks for publishing this!

--Jon
January 19, 2019
On Friday, 18 January 2019 at 17:06:54 UTC, Steven Schveighoffer wrote:

> I had to use my parents' TV in the living room :) And I was made to learn typing before I could play games on it, so cruel...

LOL!

(Ahem) I feel your pain, sir.
January 19, 2019
On Friday, 18 January 2019 at 18:48:00 UTC, H. S. Teoh wrote:

> Very nice indeed!  Welcome aboard, Ron!

Thanks, H.S.

> I used to remember most of the opcodes by heart... though nowadays that memory has mostly faded away.

I used to write 6502 in my head while riding my bike to school, then write it out, do up a poke statement to jam it into RAM, and most of the time it worked on the first try. I was so impressed with myself.

> I won't bore you with my boring editor, vim (with no syntax highlighting -- yes I've been told I'm crazy, and in fact I agree

I read somewhere recently that syntax highlighting is considered a distraction, so you're not the only one. I use it mainly as a spellchecker. If it lights up, I know I spelled it right! :)

> Linux is my IDE, the whole of it :-P).

And I thought Atom had overhead! :) I do hope you know I'm kidding. I have been working up to installing Linux on something around here, too. And FreeBSD. I'm seriously short of hardware and space to set up other machines ATM, so it's going to have to wait.


January 19, 2019
On Friday, 18 January 2019 at 18:59:59 UTC, JN wrote:

> Just add a line in your dub.json file and you have the library. Need to upgrade to a newer version? Just change the version in the dub.json file. Need to download the project from scratch? No problem, dub can use the json file to download all the dependencies in the proper versions.

Any idea where we can find a gentle intro to dub?
January 19, 2019
On Friday, 18 January 2019 at 19:55:34 UTC, Meta wrote:

> Great read Ron. Can I ask which town in Newfoundland it was where you stayed back in 1985?

Sure. I was in St. Lawrence on the Burin Peninsula. Do you know it?
January 19, 2019
On Friday, 18 January 2019 at 20:30:25 UTC, Jacob Carlborg wrote:

> Regarding Dub.
[stuff deleted]
> [1] https://github.com/ldc-developers/ldc
> [2] https://dlang.org/spec/expression.html#import_expressions
> [3] https://travis-ci.com/
> [4] https://blog.travis-ci.com/2018-10-11-windows-early-release
> [5] https://www.appveyor.com
> [6] https://github.com/jacob-carlborg/docker-ldc-darwin/blob/master/Dockerfile
> [7] https://github.com/jacob-carlborg/docker-ldc-windows/blob/master/Dockerfile

Wow. That's a lot to think about. Thanks, Jacob. Looks like I've got my weekend reading all lined up. :)