October 07, 2015
On Wednesday, 7 October 2015 at 17:22:49 UTC, Atila Neves wrote:
> On Wednesday, 7 October 2015 at 17:02:51 UTC, Paulo Pinto wrote:
>> On Wednesday, 7 October 2015 at 15:42:57 UTC, Ola Fosheim Grøstad wrote:
>>> On Wednesday, 7 October 2015 at 13:15:11 UTC, Paulo Pinto wrote:
>>>> In general, I advocate any form of automatic memory/resource management. With substructural type systems now being my favorite, but they still have an uphill battle for adoption.
>>>
>>> Are you thinking about Rust, or some other language?
>>
>>
>> All of the ones that explore this area. Rust, ATS, Idris, F*....
>>>
>>>> Also as a note, Microsoft will be discussing their proposed C++ solution with the Rust team.
>>>
>>> Are you thinking about more lintish tools that can give false positives, or something with guarantees that can be a language feature?
>>
>> What Herb Sutter demoed at CppCon as compiler validation to CoreC++.
>>
>> I can imagine that depending on how well the community takes those guidelines, they might become part of C++20.
>>
>> On the other hand, on Herb's talk around 1% of the audience acknowledged the use of static analysers. Pretty much in sync what I see in enterprise developers.
>
> The CppCon demos were impressive, but I'm dying to see how Microsoft's analyser works out in real life. I've seen too many tools with too many false positives to be useful, and I'm sceptical that a library solution is all it takes to make C++ safe. As I asked Bjarne after his keynote, if it were that easy, why does Rust exist?
>
> Atila

I would say the answer to that question was present in both Bjarne and Herb's talks.

Most developers don't care, especially in the enterprise space; they will keep on using what they learned until something new forces them to change their habits.

Hence almost no one in Herb's talk acknowledged using static analysers, and Bjarne had that slide about C++11 and C++14 being ignored.

I always see C compiled with a C++ compiler, or C-with-Classes idioms. Hence why I happily live in Java/.NET land, with the occasional trip to the C++ cousin.

Still, as a language geek, I find their work quite interesting.
October 08, 2015
On Wednesday, 7 October 2015 at 13:15:11 UTC, Paulo Pinto wrote:
> On Wednesday, 7 October 2015 at 12:56:32 UTC, bitwise wrote:
>> On Wednesday, 7 October 2015 at 07:24:03 UTC, Paulo Pinto wrote:
>>> On Tuesday, 6 October 2015 at 20:43:42 UTC, bitwise wrote:
>>>>[...]
>>>
>>> That no, but this yes (at least in C#):
>>>
>>> using (LevelManager mgr = new LevelManager())
>>> {
>>>      //....
>>>      // Somewhere in the call stack
>>>      Texture text = mgr.getTexture();
>>> }
>>> --> All level resources that require manual management are gone
>>> --> Ask the GC to collect the remaining memory right now
>>>
>>> If not level wide, then maybe scene/section wide.
>>>
>>> However, I do get that not all architectures are amenable to being re-written in a GC-friendly way.
>>>
>>> But the approach is similar to RAII in C++: reduce new to a minimum and allocate via factory functions that work together with handle manager classes.
>>>
>>> --
>>> Paulo
>>
>> Still no ;)
>>
>> It's a Texture. It's meant to be seen on the screen for a while, not destroyed in the same scope in which it was created.
>>
>> In games though, we have a scene graph. When things happen, we often chip off a large part of it while the game is running, discard it, and load something new. We need to know that what we just discarded has been destroyed completely before we start loading new stuff when we're heavily constrained by memory. And even in cases where we aren't that constrained by memory, we need to know things have been destroyed, period, for non-memory resources.
>>
>> Also, when using graphics APIs like OpenGL, we need control over which thread an object is destroyed in, because you can't access OpenGL resources from just any thread. Now, you could set up some complicated queue where you send textures and so on to (latently) be destroyed, but this is just complicated. Picture a Hello OpenGL app in D and the hoops some noob would have to jump through. It's bad news.
>>
>> Also, I should add, that a better example of the Texture thing would be a regular Texture and a RenderTexture. You can only draw to the RenderTexture, but you should be able to apply both to a primitive for drawing. You need polymorphism for this. A struct will not do.
>>
>>     Bit
>
> I guess you misunderstood the // Somewhere in the call stack
>
> It is meant as the logical region where that scene graph block you refer to is valid.
>
> Anyway I was just explaining what is possible when one embraces the tools GC languages offer.

I still don't think your example exists in real world applications. Typically, you don't have that kind of control over the application's control flow, and you don't really have the option of unwinding the stack when you want to clean up. Most applications these days are event-based: when things are loaded or unloaded, it's usually as a result of some event callback originating from either an input event or a display-link callback. To clarify, on iOS you don't have a game loop; you can register a display link or timer which will call your 'draw' or 'update' function at a fixed interval. On top of this, you just can't rely on a strict hierarchical ownership of resources like this. Large bundles of resources may be loaded/unloaded in any order, at any time.

> And both Java and .NET do offer support such type of queues as well.

I was actually thinking about this.

If D had a standard run loop of some sort (like NSRunLoop/performSelectorOnThread: for iOS/OSX), then it would make queueing things to other threads a little easier. I suppose D's receive() API could be used to make something a little more specialized. But although this would allow classes to delegate the destruction of resources to the correct thread, it wouldn't resolve the problem that those destruction commands will still only be delegated if/when a class's destructor is actually called.

> In general, I advocate any form of automatic memory/resource management.

+1 :)



October 08, 2015
On Tuesday, 6 October 2015 at 20:31:58 UTC, Jonathan M Davis wrote:
> I don't think the problem is with structs. The problem is that programmers coming from other languages default to using classes. The default in D should always be a struct. You use a class because you actually need inheritance or because you want to ensure that a type is always a reference type and don't want to go to the trouble of writing a struct that way (and even then, you should probably just write the struct that way).

Hmm... If we must emulate reference semantics manually, it feels like C++ with explicit references, pointers and all sorts of smart pointers, and it obviates the need for classes to be reference types: just emulate reference semantics, as we must do anyway.
October 08, 2015
On Wednesday, 7 October 2015 at 00:17:37 UTC, bitwise wrote:
> If it takes long enough that C++ has reflection, modules, ranges, stackless coroutines, concepts, etc, then I gotta be honest, I'm gonna start worrying about investing too much time in D.

You manage resources with reference counting in C++? How does it deal with circular references?
October 08, 2015
On Thursday, 8 October 2015 at 10:05:53 UTC, Kagamin wrote:
> On Wednesday, 7 October 2015 at 00:17:37 UTC, bitwise wrote:
>> If it takes long enough that C++ has reflection, modules, ranges, stackless coroutines, concepts, etc, then I gotta be honest, I'm gonna start worrying about investing too much time in D.
>
> You manage resources with reference counting in C++? How does it deal with circular references?

You have to use weak pointers for back references.

http://en.cppreference.com/w/cpp/memory/weak_ptr

October 08, 2015
On Thursday, 8 October 2015 at 10:15:12 UTC, Ola Fosheim Grøstad wrote:
> You have to use weak pointers for back references.
>
> http://en.cppreference.com/w/cpp/memory/weak_ptr

I suspected as much, and the next question is: how is it better than the C# solution, except that it's what one uses to cope with the C++ way?
October 08, 2015
On Thursday, 8 October 2015 at 10:26:22 UTC, Kagamin wrote:
> On Thursday, 8 October 2015 at 10:15:12 UTC, Ola Fosheim Grøstad wrote:
>> You have to use weak pointers for back references.
>>
>> http://en.cppreference.com/w/cpp/memory/weak_ptr
>
> I suspected as much, and the next question is: how is it better than the C# solution, except that it's what one uses to cope with the C++ way?

I don't know. I don't like extensive reference counting and don't use weak_ptr. One can usually avoid cycles for resources by design.

October 08, 2015
On Wednesday, 7 October 2015 at 17:02:51 UTC, Paulo Pinto wrote:
> On Wednesday, 7 October 2015 at 15:42:57 UTC, Ola Fosheim Grøstad wrote:
>> Are you thinking about Rust, or some other language?
>
> All of the ones that explore this area. Rust, ATS, Idris, F*....

Oh, yeah, sure. I wondered more if you were looking to adopt a language with substructural typing (beyond library types like unique_ptr) for production.

I personally think the future lies with actor-based programming in combination with substructural/behavioural typing, since it lends itself to distributed computing, multi-core, etc. The challenge is making a good language for it that is sufficiently performant and still allows breaking out actors to other computational units (computers/CPUs).

But yeah, I think there is a paradigm shift coming in ~10-15 years maybe?

>> Are you thinking about more lintish tools that can give false positives, or something with guarantees that can be a language feature?
>
> What Herb Sutter demoed at CppCon as compiler validation to CoreC++.

I've only seen the talks on YouTube. I was under the impression that Microsoft had accurate and inaccurate analysers, but that the accurate ones were too slow on current C++ code bases. With more annotations to guide the analyser... yes, maybe.

I assume Microsoft use analysers based on Boogie:

http://research.microsoft.com/en-us/projects/boogie/

> I can imagine that depending on how well the community takes those guidelines, they might become part of C++20.

I think this is needed, but adoption probably won't happen without IDE benefits.

October 08, 2015
On Thursday, 8 October 2015 at 10:34:44 UTC, Ola Fosheim Grøstad wrote:
> I don't know. I don't like extensive reference counting and don't use weak_ptr. One can usually avoid cycles for resources by design.

The required design is that a resource handle must reside in a non-resource object; that object must be reference counted too to ensure correctness of resource management, and you can't reliably guarantee that those non-resource objects don't form a cyclic graph. If you must manually verify the graph and put weak references in appropriately - what kind of design is that?
October 08, 2015
On Thursday, 8 October 2015 at 11:31:49 UTC, Kagamin wrote:
> cyclic graph. If you must manually verify the graph and put weak references appropriately - what kind of design is that?

It's a systems programming language design... If you plan your model before coding, it is rather easy to detect cycles in the model. Make the primary data structure a directed acyclic graph, then add back pointers as weak_ptr for secondary relations.

I believe you will find the same issues in Objective-C and Swift.

Other options:

- use regional allocation (free all resources at once)

- use a local scanner (trace live resources locally to a data structure, then free the ones that are not referenced).