July 18, 2014
On Thursday, 17 July 2014 at 18:19:04 UTC, H. S. Teoh via Digitalmars-d wrote:
> On Thu, Jul 17, 2014 at 05:58:14PM +0000, Chris via Digitalmars-d wrote:
>> On Thursday, 17 July 2014 at 17:49:24 UTC, H. S. Teoh via Digitalmars-d
>> wrote:
> [...]
>> >AFAIK some work still needs to be done with std.string; Walter for
>> >one has started some work to implement range-based equivalents for
>> >std.string functions, which would be non-allocating; we just need a
>> >bit of work to push things through.
>> >
>> >DMD 2.066 will have @nogc, which will make it easy to discover which
>> >remaining parts of Phobos are still not GC-free. Then we'll know
>> >where to direct our efforts. :-)
>> >
>> >
>> >T
>> 
>> That's good news! See, we're getting there, just bear with us. This
>> raises the question, of course: how will this affect existing code? My
>> code is string intensive.
>
> I don't think it will affect existing code (esp. given Walter's stance
> on breaking changes!). Probably the old GC-based string functions will
> still be around for backwards-compatibility. Perhaps some of them might
> be replaced with non-GC versions where it can be done transparently, but
> I'd expect you'd need to rewrite your string code to take advantage of
> the new range-based stuff. Hopefully the rewrites will be minimal (e.g.,
> pass in an output range as argument instead of getting a returned
> string, replace allocation-based code with a UFCS chain, etc.). The
> ideal scenario may very well be as simple as tacking on
> `.copy(myBuffer)` at the end of a UFCS chain. :-P
>
>
> T

That sounds good to me! This gives me time to upgrade my old code little by little and use the new approach when writing new code. Phew!
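For the curious, the "tack `.copy(myBuffer)` on the end" style Teoh mentions might look roughly like this (a minimal sketch; the `appender` buffer and the chain itself are just illustrative):

```d
import std.algorithm : copy, filter, map;
import std.array : appender;

void main()
{
    auto buf = appender!(int[])();  // any output range works as the sink
    [1, 2, 3, 4]
        .filter!(x => x % 2 == 0)   // lazy, no allocation
        .map!(x => x * 10)          // still lazy
        .copy(buf);                 // results land in the buffer at the end
    assert(buf.data == [20, 40]);
}
```

The allocation strategy lives entirely in the sink, so swapping `appender` for a stack buffer or a custom allocator shouldn't require touching the chain itself.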

By the way, my code is string intensive and I still have some suboptimal (greedy) ranges here and there. But believe it or not, they're no problem at all. The application (a plugin for a screen reader) is fast and responsive* (according to user feedback), like any other screen reader plugin, and it hasn't crashed for ages (thanks to GC?) - knock on wood! I use a lot of lazy ranges too, plus some pointer magic for work intensive algorithms. D also let me easily model the various relations between text and speech (for other use cases down the road). It may not be a real-time system, but it has to be responsive, and so far the GC hasn't affected it negatively. Once the online version is publicly available, I'll report on how well vibe.d performs. Current results are encouraging.

As regards Java, the big advantage of D is that it compiles to a native DLL, and all users have to do is double-click it to install. No "please download the JVM" nightmare. I've been there. Users can't handle it (why should they?), and providing it as a developer is a waste of time and resources, and it might still go wrong, which leaves both the users and the developers angry and frustrated.

* The only thing that bothers me is that there seems to be a slight audio latency problem on Windows, which is not D's fault. On Linux it speaks as soon as you press <Enter>.

July 18, 2014
On Friday, 18 July 2014 at 00:08:17 UTC, H. S. Teoh via Digitalmars-d wrote:
> On Thu, Jul 17, 2014 at 06:32:58PM +0000, Dicebot via Digitalmars-d wrote:
>> On Thursday, 17 July 2014 at 18:22:11 UTC, H. S. Teoh via Digitalmars-d
>> wrote:
>> >Actually, I've realized that output ranges are really only useful
>> >when you want to store the final result. For data in mid-processing,
>> >you really want to be exporting an input (or higher) range interface
>> >instead, because functions that take output ranges are not
>> >composable.  And for storing final results, you just use
>> >std.algorithm.copy, so there's really no need for many functions to
>> >take an output range at all.
>> 
>> Plain algorithm ranges rarely need to allocate at all so those are
>> somewhat irrelevant to the topic. What I am speaking about are variety
>> of utility functions like this:
>> 
>> S detab(S)(S s, size_t tabSize = 8)
>>     if (isSomeString!S)
>> 
>> this allocates result string. Proper alternative:
>> 
>> S detab(S)(ref S output, size_t tabSize = 8)
>>     if (isSomeString!S);
>> 
>> plus
>> 
>> void detab(S, OR)(OR output, size_t tabSize = 8)
>>     if (   isSomeString!S
>>         && isSomeString!(ElementType!OR)
>>        )
>
> I think you're missing the input parameter. :)
>
> 	void detab(S, OR)(S s, OR output, size_t tabSize = 8) { ... }
>
> I argue that you can just turn it into this:
>
> 	auto withoutTabs(S)(S s, size_t tabSize = 8)
> 	{
> 		static struct Result {
> 			... // implementation here
> 		}
> 		static assert(isInputRange!Result);
> 		return Result(s, tabSize);
> 	}
>
> 	auto myInput = "...";
> 	auto detabbedInput = myInput.withoutTabs.array;
>
> 	// Or:
> 	MyOutputRange sink;	// allocate using whatever scheme you want
> 	myInput.withoutTabs.copy(sink);
>
> The algorithm itself doesn't need to know where the result will end up
> -- sink could be stdout, in which case no allocation is needed at all.

Yes, this looks better.
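As a concrete (if simplified) sketch of the lazy interface being discussed, here's a toy tab expander that returns an input range instead of filling an output range. Note it naively replaces each tab with `tabSize` spaces rather than padding to the next tab stop, so it illustrates the shape of the API, not a real `detab`:

```d
import std.algorithm : equal, joiner, map;
import std.range : repeat;

// Returns a lazy range; no allocation, no output parameter.
// The caller decides where (or whether) to store the result.
auto withoutTabs(S)(S s, size_t tabSize = 8)
{
    return s.map!(c => repeat(c == '\t' ? ' ' : c,
                              c == '\t' ? tabSize : 1))
            .joiner;
}

void main()
{
    assert("a\tb".withoutTabs(4).equal("a    b"));
}
```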
July 19, 2014
On Thursday, 17 July 2014 at 14:05:02 UTC, Brian Rogoff wrote:
> On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:
>> If D came without GC, it would have replaced C++ a long time ago!
>
> That's overly optimistic I think, but I believe that the adoption rate would have been far greater for a D without GC, or perhaps with a more GC-friendly design, as the GC comes up first, or close to it, in every D discussion with prospective adopters.

This claim is made frequently, but you need to consider that D started out as a much simpler language than it is today. Many of D's distinguishing advantages are only possible _in a safe way_ when there is a GC. Everyone seems to agree, for example, that array slicing is one of these features. Without a GC, you'd either have to add a complicated reference counting scheme, destroying performance and simplicity, or you'd have to rely on the user for ownership management, which is unsafe. (A third way would be borrowing, which D doesn't have (yet).) I also believe that the Range concept was introduced at a later stage in D's history, so the GC-avoidance strategies being implemented in Phobos right now weren't available back then.

Therefore I cannot agree that D would have been adopted more eagerly without a GC; in fact, the adoption rate would likely have been lower, because the language would have been crippled.
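To make the slicing point concrete, here's the kind of code that "just works" with a GC, and that would need either reference counting or programmer-managed ownership without one:

```d
void main()
{
    int[] a = [10, 20, 30, 40, 50]; // GC-allocated array
    int[] b = a[1 .. 4];            // a view (pointer + length); no copy
    a = null;                       // drop the original reference...
    assert(b == [20, 30, 40]);      // ...the GC keeps the backing memory alive
}
```

Neither the array nor the slice carries any ownership bookkeeping; the collector keeps the memory alive for as long as any view into it exists.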

>
> However, it's way too late to change that now. IMO, the way forward involves removing all or most hidden allocations from the D libraries, making programming sans GC easier (@nogc everywhere, a compiler switch, documentation for how to work around the lack of GC, etc.) and a much better, precise GC as part of the D release. Any spec changes necessary to support precision should be in a fast path.

Add borrowing!
July 19, 2014
On Thursday, 17 July 2014 at 19:14:06 UTC, Right wrote:
>  I'm rather fond of RAII; I find that I rarely ever need shared semantics.
>  I use a custom object model that allows for weak_ptrs to unique_ptrs which I think removes some cases where people might otherwise be inclined to use shared_ptr.
>
>  Shared semantics are so rare, in fact, that I would say I hardly use them at all. I go for weeks of coding without creating a shared type, not because I'm trying to avoid it, but because it just isn't necessary.
>
>  Which is why GC seems like such a waste. Given my experience in C++, where I hardly need shared memory, I see little use for a GC (or even ARC, etc.); all it will do is decrease program performance, make deterministic destruction impossible, and prevent automatic cleanup of non-memory resources.
>
>  Rust seems to have caught on to what C++ has accomplished here.

Though GC is safer, easier, and cheaper than an ownership model, the latter is possible in D too, if you want it.
July 19, 2014
On 7/17/2014 5:06 PM, H. S. Teoh via Digitalmars-d wrote:
> 	MyOutputRange sink;	// allocate using whatever scheme you want
> 	myInput.withoutTabs.copy(sink);
>
> The algorithm itself doesn't need to know where the result will end up
> -- sink could be stdout, in which case no allocation is needed at all.

Exactly! The algorithm becomes completely divorced from the memory allocation. I believe this is a very powerful technique.

July 19, 2014
On 7/17/2014 11:44 AM, Russel Winder via Digitalmars-d wrote:
> With C++ I am coming to grips with RAII management of the heap. With
> Java, Groovy, Go and Python I rely on the GC doing a good job. I note
> though that there is a lot of evidence that the Unreal folk developed a
> garbage collector for C++ exactly because they didn't want to do the
> RAII thing.

RAII has a lot of costs associated with it that I am often surprised go completely unrecognized by the RAII community:

1. the "dec" operation (e.g. in shared_ptr) is expensive

2. the inability to freely mix pointers allocated with different schemes

3. slices become mostly unworkable, and slices are a fantastic way to speed up a program

July 20, 2014
On Saturday, 19 July 2014 at 21:12:44 UTC, Walter Bright wrote:
>
> 3. slices become mostly unworkable, and slices are a fantastic way to speed up a program

They are even more fantastic for speeding up programming.
I think that programmer time isn't included often enough in discussions.

I have a program which I used D to quickly prototype and form my baseline implementation.
After getting a semi-refined implementation I converted the performance critical part to C++.
The D code that survived the rewrite uses slices + ranges, and it's not worth converting that to C++ (the result would be less elegant, and it isn't worth the time).

The bottom line is that without D's slices, I might not have bothered bringing that small project to the level of completion it has today.
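For what it's worth, the zero-copy style that makes slices so pleasant looks something like this (a toy example, of course):

```d
import std.algorithm : splitter;

void main()
{
    string text = "one two three";
    // Each word is a slice into `text`: no copies, no allocations.
    foreach (word; text.splitter(' '))
    {
        // The slice points into the original buffer.
        assert(word.ptr >= text.ptr
            && word.ptr + word.length <= text.ptr + text.length);
    }
}
```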
July 20, 2014
On 17 Jul 2014 13:40, "w0rp via Digitalmars-d" <digitalmars-d@puremagic.com> wrote:
>
> The key to making D's GC acceptable lies in two factors I believe.
>
> 1. Improve the implementation enough so that you will only be impacted by
GC in extremely low memory or real time environments.
> 2. Defer allocation more and more by using ranges and algorithms more,
and trust that compiler optimisations will make these fast.
>

How about
1. Make it easier to select which GC you want to use at runtime init.
2. Write an alternate GC aimed at different application uses (e.g. real-time)

We already have (at least) three GC implementations for D.

Regards
Iain


July 20, 2014
On Sunday, 20 July 2014 at 08:41:16 UTC, Iain Buclaw via Digitalmars-d wrote:
> On 17 Jul 2014 13:40, "w0rp via Digitalmars-d"
>> The key to making D's GC acceptable lies in two factors I believe.
>>
>> 1. Improve the implementation enough so that you will only be impacted by
> GC in extremely low memory or real time environments.
>> 2. Defer allocation more and more by using ranges and algorithms more,
> and trust that compiler optimisations will make these fast.
>>
>
> How about
> 1. Make it easier to select which GC you want to use at runtime init.
> 2. Write an alternate GC aimed at different application uses (e.g. real-time)
>

Yes, Please!

Being able to specify an alternate memory manager at compile-time, link-time and/or runtime would be most advantageous, and probably put an end to the GC-phobia.

DIP46 [1] also proposes an interesting alternative to the GC by creating regions at runtime.

And given the passion surrounding the GC in this community, if runtime hooks and/or a suitable API for custom memory managers were created and documented, it would invite participation, and an informal, highly competitive contest for the best GC would likely ensue.

Mike

[1] http://wiki.dlang.org/DIP46
July 20, 2014
On Sunday, 20 July 2014 at 11:44:56 UTC, Mike wrote:
> Being able to specify an alternate memory manager at compile-time, link-time and/or runtime would be most advantageous, and probably put an end to the GC-phobia.

AFAIK, the GC is not directly referenced in druntime, so you should already be able to link in a different GC implementation. If you provide all the symbols requested by the code, the linker won't link in the default GC module.
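If I understand the link-time approach correctly, a replacement would define the `extern(C)` entry points druntime expects. The sketch below is only illustrative: the exact symbol names and signatures vary across druntime versions, so check druntime's gc/proxy.d for your release before relying on any of these:

```d
// Hypothetical drop-in allocator: a module defining all the gc_* entry
// points replaces the default GC at link time. A malloc-backed "GC" like
// this never collects, i.e. it leaks; it's for illustration only.
import core.stdc.stdlib : free, malloc;

extern (C):

void gc_init() { }
void gc_term() { }

void* gc_malloc(size_t sz, uint flags = 0)
{
    return malloc(sz); // never scanned, never collected
}

void gc_free(void* p)
{
    free(p);
}
```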