February 09, 2018
On Friday, 9 February 2018 at 17:41:45 UTC, jmh530 wrote:
> On Friday, 9 February 2018 at 16:54:35 UTC, Seb wrote:
>>
>>
>> FYI: and for the lazy ones, there will hopefully be std.experimental.scripting soon:
>>
>> https://github.com/dlang/phobos/pull/5916
>
> Why not make this a package.d file for std?

Yes, that's the intended goal.
However, to convince everyone involved and to be able to experiment with this in the wild for a bit, we went with std.experimental first.

If drawbacks get discovered, it's a lot easier to retreat.
February 09, 2018
On Friday, 9 February 2018 at 19:28:40 UTC, Seb wrote:
>
> Yes, that's the intended goal.
> However, to convince everyone involved and to be able to experiment with this in the wild for a bit, we went with std.experimental first.
>
> If drawbacks get discovered, it's a lot easier to retreat.

Cool.

Do you know if compilation speed improves when using selective imports? E.g.
import std.experimental.scripting : writeln;
vs.
import std.experimental.scripting;

I suppose that's a general question wrt public imports, but in this case there is probably more to parse than in other, smaller projects.
February 09, 2018
On Friday, 9 February 2018 at 02:09:57 UTC, Jonathan M Davis wrote:
> On Thursday, February 08, 2018 23:57:45 Rubn via Digitalmars-d wrote:
>> On Thursday, 8 February 2018 at 18:06:38 UTC, Walter Bright wrote:
>> > I.e. it isn't an issue of us D guys being dumb about the GC.
>>
>> So you could say it's a design flaw of D, attempting to use a GC where it isn't suited?
>
> You could say that, but many of us would not agree. Just because certain classes of GCs cannot be used with D does not mean that the fact that D has a GC built-in is not beneficial and ultimately a good design decision. Plenty of folks have been able to write very efficient code that uses D's GC. Obviously, there are use cases where it's better to avoid the GC, but for your average D program, the GC has been a fantastic asset.
>
> - Jonathan M Davis

I didn't say that a GC isn't beneficial; the problem is that if you are going to use the GC, there are plenty of other languages that implement it better. The language is designed around the GC. Any time I try to use an associative array, my program crashes because of the GC. The workaround I need is to just make every associative array static. Maybe if Phobos could be built into a shared library it wouldn't be as big of a problem, but that's not the case. And before someone goes around crying that Phobos CAN be built into a shared library: remember, platform matters!
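
A rough sketch of the workaround described above, assuming the trouble comes from allocating fresh associative arrays on every call; the module and function names here are made up purely for illustration:

module aa_workaround;

// Keeping the AA in a static variable means it is created once per
// thread and reused, instead of being allocated again on every call.
int bumpCount(string key)
{
    static int[string] counts;
    if (auto p = key in counts)
        return ++(*p);
    counts[key] = 1;
    return 1;
}

void main()
{
    import std.stdio : writeln;
    writeln(bumpCount("x")); // 1
    writeln(bumpCount("x")); // 2
}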

You can write efficient code with Java, and that has an entire VM running between the CPU and the language. Efficiency isn't the issue. Writing code that has to work both with and without the GC is extremely difficult to do correctly. It just isn't worth it at the end of the day; it complicates everything, and that is the design flaw. Having to use a complicated, inefficient GC is just a side effect of the greater issue.
February 09, 2018
On 2/9/2018 1:14 AM, meppl wrote:
> Let's say Python is expected to offer slow execution, so Python doesn't prove that reference counting is fast (even if it is possible in theory). D, on the other hand, produces binaries that are expected to execute fast.

I believe it has been shown (sorry, no reference) that GC is faster in aggregate time, and RC is perceived as faster because it doesn't have pauses.

This makes GC better for batch jobs, and RC better for interactive code.

Of course, the issue can get more complex. GC uses 3x the memory of RC, and so you can get extra slowdowns from swapping and cache misses.
February 09, 2018
On 2/9/2018 6:11 AM, Atila Neves wrote:
> It's easy enough to create std package like this:
> 
> module std;
> public import std.algorithm;
> //...

Yes, but I suspect that'll be a large negative for compile speed for smallish programs.
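
For a concrete picture, here is a minimal sketch along the lines Atila describes; the file and module names are illustrative, not the actual std.experimental.scripting:

// scripting.d -- an aggregator that re-exports a handful of Phobos modules
module scripting;
public import std.algorithm;
public import std.range;
public import std.stdio;

// app.d -- even though only writeln is used, the compiler still has to
// process everything the aggregator publicly imports, which is the
// compile-speed concern for small programs.
import scripting;

void main()
{
    writeln("hello");
}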
February 10, 2018
On Friday, 9 February 2018 at 19:50:50 UTC, jmh530 wrote:
> On Friday, 9 February 2018 at 19:28:40 UTC, Seb wrote:
>>
>> Yes, that's the intended goal.
>> However, to convince everyone involved and to be able to experiment with this in the wild for a bit, we went with std.experimental first.
>>
>> If drawbacks get discovered, it's a lot easier to retreat.
>
> Cool.
>
> Do you know if compilation speed improves when using selective imports? E.g.
> import std.experimental.scripting : writeln;
> vs.
> import std.experimental.scripting;
>
> I suppose that's a general question wrt public imports, but in this case there is probably more to parse than in other, smaller projects.

AFAICT selective imports have no impact on compilation speed at the moment.
It's quite likely that future versions of the compiler will take selective imports into account, but right now DMD reads in everything it imports and only stops at the bodies of templated structs, functions, classes, etc.

See also:

https://issues.dlang.org/show_bug.cgi?id=13255
https://issues.dlang.org/show_bug.cgi?id=18414
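
To illustrate: with current DMD the two forms below compile in essentially the same time, because the whole imported module is read and analyzed either way (stopping only at template bodies); the selective form merely restricts which names become visible in this scope.

import std.stdio : writeln;   // selective: only writeln is brought into scope
// import std.stdio;          // non-selective: all public names are visible

void main()
{
    writeln("same compilation cost either way, as of this discussion");
}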
February 10, 2018
On 08.02.2018 16:55, JN wrote:
> On Thursday, 8 February 2018 at 14:54:19 UTC, Adam D. Ruppe wrote:
>> Garbage collection has proved to be a smashing success in the industry, providing productivity and memory-safety to programmers of all skill levels.
> 
> Citation needed on how garbage collection has been a smashing success based on its merits rather than the merits of the languages that use garbage collection. Python was also a smashing success, but it doesn't use a garbage collector in its default implementation (CPython). Unless you mean garbage collection as in "not manual memory management"? ...
> 

Even if "garbage collection" is taken to mean "collecting garbage", reference counting is garbage collection. Referring to RC as not GC makes no sense at all and was probably only invented because some people want to think that RC is good but GC is bad, being too lazy to say "tracing GC".
February 10, 2018
On Friday, 9 February 2018 at 21:24:14 UTC, Walter Bright wrote:
> Of course, the issue can get more complex. GC uses 3x the memory of RC, and so you can get extra slowdowns from swapping and cache misses.

Is the total memory consumption tripled, or only the extra memory used for tracking allocations?
February 10, 2018
On Friday, 9 February 2018 at 21:24:14 UTC, Walter Bright wrote:
> On 2/9/2018 1:14 AM, meppl wrote:
>> Let's say Python is expected to offer slow execution, so Python doesn't prove that reference counting is fast (even if it is possible in theory). D, on the other hand, produces binaries that are expected to execute fast.
>
> I believe it has been shown (sorry, no reference) that GC is faster in aggregate time, and RC is perceived as faster because it doesn't have pauses.

RC is a form of GC. Also, tracing GCs with pause times under 1 ms are in production for several languages now.

>
> This makes GC better for batch jobs, and RC better for interactive code.

Yes, GCs with lower pause times sacrifice throughput for low latency, RC included.

>
> Of course, the issue can get more complex. GC uses 3x the memory of RC,

I’ve seen figures of about 2x, but that was in an old paper on the Boehm GC.

> and so you can get extra slowdowns from swapping

Oh come on... anything touching swap is usually frozen these days. Plus, the heap size is usually statically bounded for GC languages, chosen not to grow beyond RAM.

> and cache misses.


February 10, 2018
On 2/10/18 10:14 AM, Dmitry Olshansky wrote:
> On Friday, 9 February 2018 at 21:24:14 UTC, Walter Bright wrote:
>> Of course, the issue can get more complex. GC uses 3x the memory of RC,
> 
>   I’ve seen figures of about 2x, but that was in an old paper on the Boehm GC.

This is the classic reference: https://people.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf. Executive review in the abstract: "With only three times as much memory, the collector runs on average 17% slower than explicit memory management. However, with only twice as much memory, garbage collection degrades performance by nearly 70%. When physical memory is scarce, paging causes garbage collection to run an order of magnitude slower than explicit memory management." -- Andrei