April 15, 2017
On Saturday, 15 April 2017 at 15:11:08 UTC, Laeeth Isharc wrote:
>
> Not sure how much memory ldc takes to build.  If it would be helpful for ARM I could contribute a couple of servers on scaleway or similar.

That'd be great. Can you take the initiative and send a mail to Kai to ask him about the buildbot setup he made?

Thanks!
  Johan

April 16, 2017
Am Sat, 15 Apr 2017 15:11:08 +0000
schrieb Laeeth Isharc <laeethnospam@nospam.laeeth.com>:

> 
> Not sure how much memory ldc takes to build.  If it would be helpful for ARM I could contribute a couple of servers on scaleway or similar.

At least for GDC, building the compiler on low-end platforms is too resource-demanding (though the times when std.datetime needed > 2GB of RAM to compile are gone for good, IIRC). I think cross-compiler testing is the solution here, but that involves some work on the DMD test runner.

> Gitlab has test runners built in, at least for the enterprise version (which is not particularly expensive), and we have been happy with that.
> 
> Laeeth
> 

The free version has test runners as well. What bothers me about gitlab is the github integration: gitlab-CI only works with a gitlab instance, so you have to mirror the github repository to gitlab. This is usually not too difficult, but you have to be careful to make pull request testing and similar more complex features work correctly. I also think they don't have anything ready to push CI status to github.
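
If it came to that, a final job in the pipeline could push the status itself via github's commit status API. A rough Python sketch; the repository path and the GITHUB_TOKEN / CI_COMMIT_SHA environment variables are assumptions on my part, not an existing integration:

# Sketch: report a gitlab-CI result to github's commit status API.
import os
import requests

def push_status(sha, state, target_url=''):
    # state is one of 'pending', 'success', 'error' or 'failure'
    r = requests.post(
        'https://api.github.com/repos/D-Programming-GDC/GDC/statuses/' + sha,
        json={'state': state, 'target_url': target_url,
              'context': 'gitlab-ci'},
        headers={'Authorization': 'token ' + os.environ['GITHUB_TOKEN']})
    r.raise_for_status()

if __name__ == '__main__':
    push_status(os.environ['CI_COMMIT_SHA'], 'success')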


-- Johannes

April 16, 2017
Am Sat, 15 Apr 2017 09:52:49 +0000
schrieb Johan Engelen <j@j.nl>:

> I'd be happy to use the Pi3 as a permanent tester, if the risk of a hacker intruding into my home network is manageable ;-)
> 

If you want to be sure, use a cheap DMZ setup.

VLAN-based:
Connect your Pi to a switch that supports VLANs and use an untagged
port assigned to one VLAN (i.e. the Raspberry Pi's port only
communicates in that VLAN). Then, if you use an OpenWRT/LEDE or
similar main router, simply set up a custom firewall zone for that
VLAN and disable routing between this zone and your home LAN zone.
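
For the OpenWRT/LEDE case the zone setup is only a few lines of UCI config. A minimal sketch, assuming the Pi's VLAN shows up as a network interface named 'dmz' (all names are illustrative):

# /etc/config/firewall fragment: isolated zone for the Pi's VLAN.
config zone
        option name 'dmz'
        option network 'dmz'     # the interface on the Pi's VLAN
        option input 'REJECT'
        option output 'ACCEPT'
        option forward 'REJECT'

# Let the Pi reach the internet; there is deliberately no
# forwarding section between 'dmz' and 'lan'.
config forwarding
        option src 'dmz'
        option dest 'wan'

In practice you'd also accept DHCP and DNS on the dmz input chain so the Pi can get a lease.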

If you don't have a capable main router there's another solution: buy a
cheap TP-Link TL-WR841N router for 15€
(https://wiki.openwrt.org/toh/tp-link/tl-wr841nd), then:
* install LEDE (lede-project.org)
* connect the router to your home LAN and the Raspberry Pi
  * home network side: DHCP client, wan zone
  * Raspberry Pi side: DHCP server, lan zone
* adjust the firewall to drop packets to/from your local home LAN range
  (manually or using the bcp38 and luci-app-bcp38 packages)


-- Johannes

April 16, 2017
On 16 April 2017 at 09:41, Johannes Pfau via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> Am Sat, 15 Apr 2017 15:11:08 +0000
> schrieb Laeeth Isharc <laeethnospam@nospam.laeeth.com>:
>> Gitlab has test runners built in, at least for the enterprise version (which is not particularly expensive), and we have been happy with that.
>>
>> Laeeth
>>
>
> The free version has test runners as well. What bothers me about gitlab is the github integration: gitlab-CI only works with a gitlab instance, so you have to mirror the github repository to gitlab. This is usually not too difficult, but you have to be careful to make pull request testing and similar more complex features work correctly. I also think they don't have anything ready to push CI status to github.
>
>
> -- Johannes
>

I asked at a recent D meetup what gitlab CI uses as its backing platform, and it seems it's a front for TravisCI.  YMMV, but I found the Travis platform too slow (it struggled to even build GDC in under 40 minutes) and too limiting to be used as CI for large projects.

I don't really have much bad to say about Semaphore, though, as it's able to download, build, *and* run the testsuite in under 15 minutes at the best of times [1].

Johannes, what if I get a couple of new small boxes, one ARM, one nondescript x86?  The project site and binary downloads could then be moved to the nondescript box, while the ARM box and the existing server are turned into build servers.  There's enough disk space and memory on the current server for at least half a dozen build environments; testing i386 and x32 would be beneficial too, along with any number of cross-compilers (the testsuite can be run with runnable tests disabled).

[1]: https://semaphoreci.com/d-programming-gdc/gdc/branches/master/builds/330
April 16, 2017
Am Sun, 16 Apr 2017 10:13:50 +0200
schrieb Iain Buclaw via Digitalmars-d <digitalmars-d@puremagic.com>:

> 
> I asked at a recent D meetup what gitlab CI uses as its backing platform, and it seems it's a front for TravisCI.  YMMV, but I found the Travis platform too slow (it struggled to even build GDC in under 40 minutes) and too limiting to be used as CI for large projects.

That's probably for the hosted gitlab solution, though. For self-hosted gitlab you can set up custom machines as gitlab runners. The biggest drawback here is the missing github integration.

> 
> Johannes, what if I get a couple of new small boxes, one ARM, one nondescript x86?  The project site and binary downloads could then be moved to the nondescript box, while the ARM box and the existing server are turned into build servers.  There's enough disk space and memory on the current server for at least half a dozen build environments; testing i386 and x32 would be beneficial too, along with any number of cross-compilers (the testsuite can be run with runnable tests disabled).

Sounds like a plan. What CI server should we use, though?

I tried concourse-ci, which seemed nice at first, but it's too opinionated to be useful for us (no worker cache, no way for newer commits to auto-cancel builds for older commits, ...).


-- Johannes

April 16, 2017
On 16 April 2017 at 11:20, Johannes Pfau via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> Am Sun, 16 Apr 2017 10:13:50 +0200
> schrieb Iain Buclaw via Digitalmars-d <digitalmars-d@puremagic.com>:
>
>>
>> I asked at a recent D meetup what gitlab CI uses as its backing platform, and it seems it's a front for TravisCI.  YMMV, but I found the Travis platform too slow (it struggled to even build GDC in under 40 minutes) and too limiting to be used as CI for large projects.
>
> That's probably for the hosted gitlab solution, though. For self-hosted gitlab you can set up custom machines as gitlab runners. The biggest drawback here is the missing github integration.
>
>>
>> Johannes, what if I get a couple of new small boxes, one ARM, one nondescript x86?  The project site and binary downloads could then be moved to the nondescript box, while the ARM box and the existing server are turned into build servers.  There's enough disk space and memory on the current server for at least half a dozen build environments; testing i386 and x32 would be beneficial too, along with any number of cross-compilers (the testsuite can be run with runnable tests disabled).
>
> Sounds like a plan. What CI server should we use, though?
>

I was thinking of keeping it simple: buildbot, maybe?

http://buildbot.net/
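
Buildbot masters are configured in plain Python, so a first cut stays small. A minimal sketch of a master.cfg with a single ARM worker; the worker name, password, repo URL, and build command are placeholders, not an existing setup:

# master.cfg sketch: one worker building GDC on every push to master.
from buildbot.plugins import changes, schedulers, steps, util, worker

c = BuildmasterConfig = {}

c['workers'] = [worker.Worker('arm-worker', 'changeme')]
c['protocols'] = {'pb': {'port': 9989}}

# Poll the github repository for new commits on master.
c['change_source'] = [changes.GitPoller(
    'https://github.com/D-Programming-GDC/GDC.git',
    branches=['master'], pollInterval=300)]

build = util.BuildFactory([
    steps.Git(repourl='https://github.com/D-Programming-GDC/GDC.git',
              mode='incremental'),
    steps.ShellCommand(command=['./buildci.sh']),  # placeholder build script
])

c['builders'] = [util.BuilderConfig(
    name='gdc-arm', workernames=['arm-worker'], factory=build)]

c['schedulers'] = [schedulers.SingleBranchScheduler(
    name='master-sched',
    change_filter=util.ChangeFilter(branch='master'),
    treeStableTimer=60,
    builderNames=['gdc-arm'])]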
April 16, 2017
On 16 April 2017 at 11:20, Johannes Pfau via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> Am Sun, 16 Apr 2017 10:13:50 +0200
>
> I tried concourse-ci, which seemed nice at first, but it's too opinionated to be useful for us (no worker cache, no way for newer commits to auto-cancel builds for older commits, ...).
>

Perhaps use docker layers as a cache, then?
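
To sketch the idea: order the Dockerfile so the expensive, rarely-changing layers come first, and only the layers after the source is copied in get rebuilt on each new commit. The package list and build script here are placeholders, not the actual GDC build:

# Dockerfile sketch: docker's layer cache as a CI cache.
FROM debian:stretch

# Cached layer: toolchain and build deps, rebuilt only when this
# RUN line itself changes.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential flex bison libgmp-dev libmpfr-dev libmpc-dev \
    && rm -rf /var/lib/apt/lists/*

# Everything from here on is invalidated by a new commit; the
# layers above come from cache.
COPY . /src
WORKDIR /src
RUN ./buildci.sh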
April 19, 2017
On 04/13/2017 06:16 PM, Joakim wrote:
> From a certain point of view, you could say PC sales are only down 25%
> from their peak, that's not dead yet.  But the chart I linked shows
> their share of personal computing devices, including mobile, has dropped
> from 78% to a little less than 14% over the last decade.  I'd call that
> dying.

In other words: It can only be considered "dying" if you conveniently ignore certain facts, and instead look only at a stat that doesn't show the full picture.

April 20, 2017
On Wednesday, 19 April 2017 at 17:47:50 UTC, Nick Sabalausky (Abscissa) wrote:
> On 04/13/2017 06:16 PM, Joakim wrote:
>> From a certain point of view, you could say PC sales are only down 25%
>> from their peak, that's not dead yet.  But the chart I linked shows
>> their share of personal computing devices, including mobile, has dropped
>> from 78% to a little less than 14% over the last decade.  I'd call that
>> dying.
>
> In other words: It can only be considered "dying" if you conveniently ignore certain facts, and instead look only at a stat that doesn't show the full picture.

On the contrary, my point is that "the full picture" shows PC sales dying as a share of all computing devices, whereas looking at the PC market alone is misleading. The mobile tidal wave has taken away sales that would otherwise have been PCs, likely more than 25% if we extrapolate the former PC sales growth rate rather than just compare to the peak.

Just as the underpowered PC once disrupted the market for minicomputers and UNIX workstations, mobile is doing the same to the PC.  Where people used to check their email, browse the web, and ogle facebook on their PCs, they now use a mobile device for those former PC-only activities.

And now that the mobile market is so much bigger than PCs, they're finally going after what remains of the PC market: those who want to get work done on a bigger screen which supports viewing multiple windows at once.  This isn't going to happen overnight, as it will take years to roll out Android 7.0 Nougat with built-in multi-window, get all the needed productivity software ported over (though the S8 announcement notes that versions of Office and Photoshop are already done), and keep iterating and improving on the mobile multi-window experience.

But just as the PC once disrupted the computing market, mobile has already disrupted the PC market.  Mobile taking most of the rest of the PC market's sales with these multi-window moves is inevitable.  Sure, there will always be a few who run powerful desktops, just as I'm sure there's someone out there still running a UNIX workstation, but you never run into those people anymore because they're such a small niche.

I don't know why you get so worked up about this.  Yes, the new entrant won't have features the old computers had.  I was using virtual desktops on UNIX workstations regularly decades ago, but Microsoft didn't add that to the OS till Windows 10 a couple years ago.  So what?  Most people got by just fine.
April 20, 2017
On 20/04/2017 5:09 AM, Joakim wrote:
> I don't know why you get so worked up about this.  Yes, the new entrant
> won't have features the old computers had.  I was using virtual desktops
> on UNIX workstations regularly decades ago, but Microsoft didn't add
> that to the OS till Windows 10 a couple years ago.  So what?  Most
> people got by just fine.

The API to support multiple desktops has been in Windows for a very long time (since Windows 2000 [0]); it just wasn't exposed in stock Windows.
But if you knew where to look, Microsoft certainly did offer it [1]!

[0] https://msdn.microsoft.com/en-us/library/windows/desktop/ms682124(v=vs.85).aspx
[1] https://technet.microsoft.com/en-us/sysinternals/cc817881