May 03, 2022
On 5/3/22 07:57, Alain De Vos wrote:

> But at the same time be able to be sure memory is given free when a
> variable is going out of scope.

Let's expand on that, please. What exactly is the worry there? Are you concerned that the program will have memory leaks and eventually get killed by the OS?

Do you want memory to be freed all the way to the OS? Would it be possible that a call to some_library_free() puts that memory in a free list to be used for later allocations? Or do you insist that memory really goes back to the OS?

Why are you worried about how memory is managed?

The way I think is this: When I use some feature and that feature allocates memory, it is not up to me to free memory at all. I don't want to get involved in how that memory is managed.

On the other hand, if I were the party that did allocate memory, fine, then I might be involved in freeing.

Note that 'new' is not raw memory allocation. So it should not involve raw memory freeing.

Sorry for all the questions, but I am really curious why. At the same time, I have a suspicion: you come from a language like C++ that treats deterministic memory freeing as the only way to go. It took me many years to learn that C++'s insistence on that topic is wrong.

Memory can be freed altogether at some later time. Further, not every object needs to be destroyed. This is based on one of John Lakos's C++Now presentations, where he compares different destruction and freeing schemes, one of which is (paraphrasing) "no destruction whatsoever; poof, the array disappears." Not surprisingly, that happens to be the fastest destruction plus free.

Ali

May 03, 2022
On Tue, May 03, 2022 at 02:57:46PM +0000, Alain De Vos via Digitalmars-d-learn wrote:
> Note, It's not i'm against GC. But my preference is to use builtin
> types and libraries if possible,
> But at the same time be able to be sure memory is given free when a
> variable is going out of scope.
> It seems not easy to combine the two with a GC which does his best
> effort but as he likes or not.

If your objects have a well-defined lifetime and you want to control when they get freed, just use malloc/free or equivalents (use emplace to initialize the object in custom-allocated memory). Don't use the GC. Using the GC means you relinquish control over when (and in what order) your objects get freed.
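
Something along these lines (a minimal sketch; the Point struct is just a placeholder for illustration):

import core.stdc.stdlib : malloc, free;
import core.lifetime : emplace;

struct Point { int x, y; }

void main()
{
    // allocate raw memory outside the GC heap
    auto p = cast(Point*) malloc(Point.sizeof);
    emplace(p, 1, 2); // construct the object in place

    // deterministic cleanup: run the destructor (if any), then free
    destroy(*p);
    free(p);
}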


T

-- 
There's light at the end of the tunnel. It's the oncoming train.
May 03, 2022

On Tuesday, 3 May 2022 at 12:59:31 UTC, Alain De Vos wrote:

> Error: array literal in @nogc function test.myfun may cause a GC allocation
>
> @nogc void myfun(){
>     scope int[] i=[1,2,3];
> }//myfun

May is a fuzzy word...

For this particular piece of code, you can use a static array to guarantee stack allocation:

import std;

@nogc void myfun(){
    int[3] i = [1, 2, 3]; // no need for scope now; this compiles
}//myfun

void main()
{
    writeln("Hello D");
}
May 04, 2022

On Tuesday, 3 May 2022 at 14:57:46 UTC, Alain De Vos wrote:

> Note, It's not i'm against GC. But my preference is to use builtin types and libraries if possible,
> But at the same time be able to be sure memory is given free when a variable is going out of scope.
> It seems not easy to combine the two with a GC which does his best effort but as he likes or not.

What I described is an optional compiler optimization. The compiler is free to avoid the GC allocation for an array literal initializer if it is possible to do so. If you were to, e.g., return the array from the function, it would definitely be allocated on the GC heap and not the stack. In practice, I don't know if any of the compilers actually do this.
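
To illustrate (a sketch; the function names are made up):

int[] escapes()
{
    int[] a = [1, 2, 3]; // must live past the stack frame, so it is GC-allocated
    return a;
}

void noEscape()
{
    scope int[] a = [1, 2, 3]; // never escapes, so a compiler *may* put it on the stack
}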

Anyway, if you care when memory is deallocated, then the GC isn't the right tool for the job. The point of the GC is that you don't have to care.

May 04, 2022
On Wednesday, 4 May 2022 at 02:42:44 UTC, Mike Parker wrote:
> On Tuesday, 3 May 2022 at 14:57:46 UTC, Alain De Vos wrote:
>> Note, It's not i'm against GC. But my preference is to use builtin types and libraries if possible,
>> But at the same time be able to be sure memory is given free when a variable is going out of scope.
>> It seems not easy to combine the two with a GC which does his best effort but as he likes or not.
>
> What I described is an optional compiler optimization. The compiler is free to avoid the GC allocation for an array literal initializer if it is possible to do so. If you were to, e.g., return the array from the function, it would 100% for sure be allocated on the GC and not the stack. In practice, I don't know if any of the compilers actually do this.
>
> Anyway, if you care when memory is deallocated, then the GC isn't the right tool for the job. The point of the GC is that you don't have to care.

GC is about reducing the complexity, cognitive load, and possible bugs associated with manual memory management.

It is certainly *not* about you not having to care anymore (about memory management).

Why not have an option to mark an object, so that real-time garbage collection occurs on it as it exits scope?
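
Not quite that, but for class objects std.typecons.scoped already gives deterministic destruction at end of scope, with no GC involvement (a sketch; Foo is just a placeholder):

import std.stdio : writeln;
import std.typecons : scoped;

class Foo
{
    ~this() { writeln("destroyed"); }
}

void main()
{
    auto foo = scoped!Foo(); // constructed in place, not on the GC heap
    // use foo like an ordinary Foo reference...
} // "destroyed" prints here, when the scope exits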
May 04, 2022
On Wednesday, 4 May 2022 at 04:52:05 UTC, forkit wrote:

>
> It is certainly *not* about you not having to care anymore (about memory management).
>

That's not at all what I said. You don't have to care about *when* memory is deallocated, meaning you don't have to manage it yourself.
May 04, 2022
On Wednesday, 4 May 2022 at 05:13:04 UTC, Mike Parker wrote:
> On Wednesday, 4 May 2022 at 04:52:05 UTC, forkit wrote:
>
>>
>> It is certainly *not* about you not having to care anymore (about memory management).
>>
>
> That's not at all what I said. You don't have to care about *when* memory is deallocated, meaning you don't have to manage it yourself.

In any case, I disagree that caring about when memory gets deallocated means you shouldn't be using GC. (or did I get that one wrong too??)

You can have the best of both worlds, surely (and easily).

This (example from first post):

void main(){
    int[] i = new int[10000];

    import core.memory: GC;
    // free while i.ptr is still valid; destroy() resets the slice to null
    GC.free(GC.addrOf(cast(void *)(i.ptr)));
    import object: destroy;
    destroy(i);
}

could (in theory) be replaced with this:

void main(){
    inscope int[] i = new int[10000];

    // inscope means 2 things:
    // (1) i cannot be referenced anywhere except within this scope.
    // (2) i *will* be GC'd when this scope ends

}

May 04, 2022
On Wednesday, 4 May 2022 at 05:37:49 UTC, forkit wrote:

>> That's not at all what I said. You don't have to care about *when* memory is deallocated, meaning you don't have to manage it yourself.
>
> In any case, I disagree that caring about when memory gets deallocated means you shouldn't be using GC. (or did I get that one wrong too??)
>
> You can have the best of both worlds, surely (and easily).
>
> This (example from first post):
>
> void main(){
>     int[] i = new int[10000];
>
>     import core.memory: GC;
>     // free while i.ptr is still valid; destroy() resets the slice to null
>     GC.free(GC.addrOf(cast(void *)(i.ptr)));
>     import object: destroy;
>     destroy(i);
> }
>

All you're doing here is putting unnecessary pressure on the GC. Just use `malloc` and then `free` on `scope(exit)`. Or if you want to append to the array without managing the memory yourself, then use `std.container.array` instead. That's made for deterministic memory management with no GC involvement.
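
For instance (a minimal sketch of both options):

import core.stdc.stdlib : malloc, free;
import std.container.array : Array;

void main()
{
    // option 1: manual allocation, freed deterministically on scope exit
    auto p = cast(int*) malloc(10000 * int.sizeof);
    scope(exit) free(p);
    int[] i = p[0 .. 10000];

    // option 2: Array manages malloc/free for you, no GC involvement
    Array!int a;
    a.reserve(10000);
    foreach (n; 0 .. 10000)
        a.insertBack(n);
} // a's payload is also freed deterministically here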
May 04, 2022

On Wednesday, 4 May 2022 at 05:37:49 UTC, forkit wrote:

> inscope int[] i = new int[10000];

You often see the "here's an array of ints that exists only in one scope to do one thing, should we leave it floating in memory or destroy it immediately?" as examples for these GC discussions. Not to steal OP's thread and whatever particular needs he's trying to achieve, but hopefully to provide another use case: I write games, and performance is the number one priority.

I stumbled heavily with the GC when I first began writing games in D. Naively, I began writing the same types of engines I always did, probably thinking with a C/C++ mentality of "just delete anything you create", with a game loop that involved potentially hundreds of entities coming into existence or being destroyed every frame, in >=60 frames-per-second applications. The results were predictably disastrous, with collections running every couple of seconds, causing noticeable stutters in the performance and disruptions of the game timing. It might have been my fault, but it really, really turned me off from the GC completely for a good long while.

I don't know what types of programs the majority of the D community writes. My perception, probably biased, was that D's documentation, tours, and blogs leaned heavily towards "run once, do a thing, and quit" applications that have no problem leaving every single thing up to the GC, and this wasn't necessarily a good fit for programs that run for hours at a time and are constantly changing state. Notably, an early wiki post that people with GC issues were directed to revolved heavily around tweaks and suggestions for working within the GC, with the malloc approach treated as a last-resort afterthought.

Pre-allocating lists wasn't a good option as I didn't want to set an upper limit on the number of potential entities. The emergency fix at the time was inserting GC.free to forcibly deallocate things. Ultimately, the obvious correct answer is just using the malloc/emplace/free combo, but I'm just disappointed with how ugly and hacky this looks, at least until they've been wrapped in some nice NEW()/DELETE() templates.

// with the GC (and the deprecated delete):
auto foo = new Foo;
delete foo; // R.I.P.

// the manual equivalent:
import core.stdc.stdlib : malloc, free;
import core.lifetime : emplace;

auto foo = cast(Foo) malloc(__traits(classInstanceSize, Foo));
emplace!Foo(foo);
destroy(foo);
free(cast(void*) foo);

Can you honestly say the second one looks as clean and proper as the first? Maybe it's a purely cosmetic quibble, but one feels like I'm using the language correctly (I'm not!), and the other feels like I'm breaking it (I'm not!).
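
For reference, here's roughly what I mean by wrapping them (a sketch; NEW/DELETE are just the names I'd pick, and error handling is elided):

import core.stdc.stdlib : malloc, free;
import core.lifetime : emplace;

T NEW(T, Args...)(auto ref Args args) if (is(T == class))
{
    enum size = __traits(classInstanceSize, T);
    void* mem = malloc(size); // assumed non-null; real code should check
    return emplace!T(mem[0 .. size], args);
}

void DELETE(T)(ref T obj) if (is(T == class))
{
    if (obj is null) return;
    destroy(obj);          // run the destructor
    free(cast(void*) obj); // release the malloc'd block
    obj = null;
}

// usage:
auto foo = NEW!Foo();
DELETE(foo);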

I still use the GC for simple niceties like computations and searches that don't occur every frame, though even then I've started leaning more towards std.container.array and similar solutions; additionally, if something IS going to stay in memory forever (once-loaded data files, etc), why put it in the GC at all, if that's just going to increase the area that needs to be scanned when a collection finally does occur?

I'd like to experiment more with reference counting in the future, but since it's just kind of a "cool trick" in D currently involving wrapping references in structs, there are some hangups. Consider for example:

import std.container.array;
struct RC(T : Object) {
	T obj;
	// insert postblit and refcounting magic here
}
class Farm {
	Array!(RC!Animal) animals;
}
class Animal {
	RC!Farm myFarm; // Error: struct `test.RC(T : Object)` recursive template expansion
}

Logically, this can lead to leaked memory, as a Farm and Animal that both reference each other going out of scope simultaneously would never get deallocated. But, something like this ought to at least compile (it doesn't), and leave it up to the programmer to handle logical leak problems, or so my thinking goes at least. I also really hate having to prepend RC! or RefCounted! to everything, unless I wrap it all in prettier aliases.
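
For what it's worth, the usual workaround is to make the back reference non-owning, which happens to sidestep the recursive template expansion too (a sketch, with the refcounting machinery still elided):

import std.container.array : Array;

struct RC(T : Object)
{
    T obj;
    // postblit and refcounting magic elided, as above
}

class Farm
{
    Array!(RC!Animal) animals; // owning references
}

class Animal
{
    Farm myFarm; // plain, non-owning back reference: no cycle, no recursive expansion
}

Of course, then it's on me to guarantee the Farm outlives its Animals, but that's exactly the kind of lifetime judgement call I'd rather own myself.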

May 04, 2022
On Wednesday, 4 May 2022 at 08:23:33 UTC, Mike Parker wrote:
> On Wednesday, 4 May 2022 at 05:37:49 UTC, forkit wrote:
>
>>> That's not at all what I said. You don't have to care about *when* memory is deallocated, meaning you don't have to manage it yourself.
>>
>> In any case, I disagree that caring about when memory gets deallocated means you shouldn't be using GC. (or did I get that one wrong too??)
>>
>> You can have the best of both worlds, surely (and easily).
>>
>> This (example from first post):
>>
>> void main(){
>>     int[] i = new int[10000];
>>
>>     import core.memory: GC;
>>     // free while i.ptr is still valid; destroy() resets the slice to null
>>     GC.free(GC.addrOf(cast(void *)(i.ptr)));
>>     import object: destroy;
>>     destroy(i);
>> }
>>
>
> All you're doing here is putting unnecessary pressure on the GC. Just use `malloc` and then `free` on `scope(exit)`. Or if you want to append to the array without managing the memory yourself, then use `std.container.array` instead. That's made for deterministic memory management with no GC involvement.


Reverting to C style 'malloc and free' is not the solution here, since the intent is not to revert to manually managing dynamically allocated memory.

Rather, the intent was just to have 'a simple form of control' over the lifetime of the dynamically allocated memory, i.e. the object being pointed to in the GC memory pool.

I understand that my idea may put unnecessary pressure on the existing GC, but a GC (in theory) could surely handle this scenario.

If D had such a feature, I'd already be using it.