November 22, 2021

On Monday, 22 November 2021 at 12:33:46 UTC, Imperatorn wrote:

> ORC seems like a pretty nice solution

Yes, since the cycle detection is automatic there is no need to mark references as 'weak' in order to break cycles. So it is a compromise between reference counting and a tracing GC. Nice, really, since automatic memory management should really be automatic.
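To make that concrete, here is a tiny sketch (in D, purely for illustration) of the kind of structure that plain reference counting cannot reclaim on its own:

```d
// The classic parent/child cycle: either the programmer marks the
// back-reference 'weak', or an automatic cycle detector (like ORC's) finds it.
class Node
{
    Node parent;       // back-reference; naive RC would need this marked 'weak'
    Node[] children;
}

void main()
{
    auto root  = new Node;
    auto child = new Node;
    root.children ~= child;
    child.parent = root;   // root -> child -> root: each keeps the other alive under naive RC
    // With naive reference counting this pair would leak once both locals die;
    // a tracing GC, or an automatic cycle detector such as ORC's, reclaims it.
}
```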

November 23, 2021

On Monday, 22 November 2021 at 10:16:28 UTC, Araq wrote:

> On Friday, 19 November 2021 at 19:41:59 UTC, Paulo Pinto wrote:
>
>> On Friday, 19 November 2021 at 19:02:45 UTC, Araq wrote:
>>
>>> [...]
>>
>> Different in what way, given the optimizations referred to in the paper and the plans for future work, which unfortunately were never realised given the team's move to Olivetti, where they eventually created Modula-2+ and Modula-3.
>
> ORC is precise: it doesn't do conservative stack marking; ORC's cycle detector uses "trial deletion", not "mark and sweep"; ORC removes cycle candidates in O(1), which means it can exploit acyclic structures at runtime better than previous algorithms; and ORC has a heuristic for "bulk cycle detection"...

Thanks.
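(For anyone who hasn't met the term: below is a minimal sketch of the classic "trial deletion" idea in the Bacon/Rajan style, written in D purely as an illustration. It is not ORC's actual implementation, which adds its own candidate buffering and heuristics on top.)

```d
// Trial deletion in a nutshell: to decide whether a candidate object is only
// kept alive by a cycle, tentatively subtract the reference counts contributed
// by edges inside its subgraph; whatever still has a positive count afterwards
// is reachable from outside, everything else is garbage.
enum Color { black, gray, white }

class Obj
{
    int rc;          // reference count maintained elsewhere by the runtime
    Obj[] fields;    // outgoing references
    Color color = Color.black;
}

// Phase 1: walk the subgraph and subtract counts due to internal edges.
void markGray(Obj o)
{
    if (o.color == Color.gray) return;
    o.color = Color.gray;
    foreach (child; o.fields)
    {
        child.rc--;
        markGray(child);
    }
}

// Phase 2: anything still externally referenced gets its counts restored;
// everything else is provisionally dead (white).
void scan(Obj o)
{
    if (o.color != Color.gray) return;
    if (o.rc > 0)
        scanBlack(o);
    else
    {
        o.color = Color.white;
        foreach (child; o.fields)
            scan(child);
    }
}

void scanBlack(Obj o)
{
    o.color = Color.black;
    foreach (child; o.fields)
    {
        child.rc++;              // undo the trial decrement
        if (child.color != Color.black)
            scanBlack(child);
    }
}

// Phase 3: white objects form dead cycles and can be reclaimed.
void collectWhite(Obj o, ref Obj[] garbage)
{
    if (o.color != Color.white) return;
    o.color = Color.black;       // avoid visiting an object twice
    foreach (child; o.fields)
        collectWhite(child, garbage);
    garbage ~= o;
}
```

Running markGray, scan and collectWhite on a candidate is the whole trial; the interesting engineering in ORC is in how candidates are buffered and dropped again in O(1), which the sketch above does not cover.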

November 23, 2021
On Mon, Nov 22, 2021 at 02:27:00PM +0000, IGotD- via Digitalmars-d wrote:
> On Monday, 22 November 2021 at 12:33:46 UTC, Imperatorn wrote:
> > 
> > ORC seems like a pretty nice solution
> 
> Yes, since the cycle detection is automatic there is no need to mark references as 'weak' in order to break cycles. So it is a compromise between reference counting and a tracing GC. Nice, really, since automatic memory management should really be automatic.

I skimmed through the paper yesterday.  Very interesting indeed!  The nicest thing about it is that it's completely transparent: application code doesn't have to know that ORC is being used (rather than, e.g., tracing GC).  As long as the refcount is updated correctly (under the hood by the language), the rest just takes care of itself.  As far as the user is concerned, it might as well be a tracing GC instead.  This is good news, because it means that if we hypothetically implement such a scheme in D, you could literally just flip a compiler switch to switch between tracing GC and ORC, and pretty much the code would Just Work(tm).

(Unfortunately, a runtime switch isn't possible because this is still a ref-counting system, so pointer updates will have to be done differently when using ORC.)
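(To illustrate: under tracing, a pointer assignment is just a store, while under RC the compiler has to lower the same assignment into bookkeeping. The incRef/decRef helpers below are hypothetical stand-ins for whatever the runtime would actually provide.)

```d
class Node { Node next; }

void incRef(Node o) { /* bump o's count */ }
void decRef(Node o) { /* drop o's count; free or queue for cycle check at zero */ }

// Tracing GC: the collector discovers references by scanning, so a plain store suffices.
void assignTraced(ref Node dst, Node src)
{
    dst = src;
}

// Reference counting: every pointer write carries extra work emitted by the compiler.
void assignCounted(ref Node dst, Node src)
{
    if (src !is null) incRef(src);
    if (dst !is null) decRef(dst);   // decrement after increment handles dst is src
    dst = src;
}
```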


T

-- 
GEEK = Gatherer of Extremely Enlightening Knowledge
November 23, 2021
On Tuesday, 23 November 2021 at 17:55:30 UTC, H. S. Teoh wrote:
> On Mon, Nov 22, 2021 at 02:27:00PM +0000, IGotD- via Digitalmars-d wrote:
>> On Monday, 22 November 2021 at 12:33:46 UTC, Imperatorn wrote:
>> > 
>> > ORC seems like a pretty nice solution
>> 
>> Yes, since the cycle detection is automatic there is no need to mark references as 'weak' in order to break cycles. So it is a compromise between reference counting and a tracing GC. Nice, really, since automatic memory management should really be automatic.
>
> I skimmed through the paper yesterday.  Very interesting indeed!  The nicest thing about it is that it's completely transparent: application code doesn't have to know that ORC is being used (rather than, e.g., tracing GC).  As long as the refcount is updated correctly (under the hood by the language), the rest just takes care of itself.  As far as the user is concerned, it might as well be a tracing GC instead.  This is good news, because it means that if we hypothetically implement such a scheme in D, you could literally just flip a compiler switch to switch between tracing GC and ORC, and pretty much the code would Just Work(tm).
>
> (Unfortunately, a runtime switch isn't possible because this is still a ref-counting system, so pointer updates will have to be done differently when using ORC.)
>
>
> T

As long as D doesn't distinguish GC'ed pointers from non-GC'ed pointers and allows for unprincipled unions, I fail to see how it's "good news". Multi-threading is also a problem: in Nim we can track global variables and pass "isolated" subgraphs between threads so that the RC operations do not have to be atomic. Copying ORC over to D is quite some work, and in the end you might have a D that is just a Nim with braces. Well ... you would still have plenty of D-specific quirks left, I guess.
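To spell out the union problem, a hedged illustration in D syntax: with nothing recording which member is live, a precise collector cannot tell whether the bits are a reference it must trace (and, under RC, count) or just an integer.

```d
struct Payload { int data; }

union Slot
{
    size_t   handle;   // sometimes a plain integer id...
    Payload* link;     // ...sometimes a pointer; the union itself carries no tag
}
```
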
November 23, 2021

On Tuesday, 23 November 2021 at 19:22:11 UTC, Araq wrote:

> As long as D doesn't distinguish GC'ed pointers from non-GC'ed pointers and allows for unprincipled unions, I fail to see how it's "good news".

The union issue can be fixed by using a selector function (see the sketch below), but it sounds like Nim and its memory management solution are higher level than D. It would probably be a mistake for D to follow there, given the focus on importC etc.
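A rough sketch of what I mean by a selector function; the names are made up for illustration. Keep a tag next to the union and only ever read the pointer through a function, so a precise collector (or anyone else) can always tell whether the slot currently holds a reference.

```d
struct Payload { int data; }

struct TaggedSlot
{
    private bool holdsRef;
    private union
    {
        size_t   handle;
        Payload* link;
    }

    void set(size_t h)   { holdsRef = false; handle = h; }
    void set(Payload* p) { holdsRef = true;  link = p; }

    // Selector: the only way to read the slot as a pointer.
    Payload* asRef() { return holdsRef ? link : null; }
}
```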

November 23, 2021

On Tuesday, 23 November 2021 at 21:14:39 UTC, Ola Fosheim Grøstad wrote:

> It would probably be a mistake for D to follow there, given the focus on importC etc.

Not sure if I'm interpreting your answer correctly, but there is no contradiction between managed pointers and raw pointers when it comes to interoperability. Nim can simply cast managed pointers to raw pointers and pass them through the FFI. A higher level of abstraction when it comes to memory management will not hurt FFI at all. The same cautions with FFIs as we have today must of course be taken.
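In D terms the hand-off could look roughly like the sketch below. `c_consume` is a hypothetical C function; the only point is that the C side sees a raw pointer while the D side keeps the object alive for as long as C holds onto it.

```d
import core.memory : GC;

extern (C) void c_consume(void* p);   // assumed to be provided by some C library

class Widget { int id; }

void handOff(Widget w)
{
    auto raw = cast(void*) w;
    GC.addRoot(raw);          // keep the object reachable while C holds the pointer
    c_consume(raw);
    // ...later, once the C side is guaranteed to be done with it:
    GC.removeRoot(raw);
}
```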

November 23, 2021

On Tuesday, 23 November 2021 at 22:21:51 UTC, IGotD- wrote:

> Not sure if I'm interpreting your answer correctly, but there is no contradiction between managed pointers and raw pointers when it comes to interoperability.

I don't know enough about Nim, but it would be a mistake to replace one memory management solution with another one, if it:

  1. Still does not satisfy people who want something slightly higher level than C++, but low level enough to create competitive game engines.

  2. Requires semantic changes that make D more like Nim.

  3. Increases the complexity of the compiler unnecessarily.

I would strongly favour simple schemes. So I'd rather see actor-local GC + ARC (without cycle detection).

Complex schemes tend to go haywire when people go all-in on low-level hand-optimization. Programmers need to understand what goes on. As can be seen in the forums, many have a hard time understanding how the current GC works (despite it being quite simple). I can only imagine how many will fail to understand ORC…

November 24, 2021

Gentlemen, good afternoon.
Let me make a couple of comments.

I think D is an excellent language for rapid development, and I am trying to popularize it here in that vein.

This year I (suddenly) resumed my education (a second degree) and found myself, an old experienced dude, in a young student environment.

The tasks to be solved include programming, and D fits almost every occasion ideally. It would seem that we are heading "onward and upward". But... there are several stoppers at once.

First of all, the GUI, and the huge difficulty (for a student, that is, not a professional programmer but only a programming user) of drawing GUI elements. A data-plotter window in D is an almost insoluble problem for a student.

There are no full-fledged signals and slots with the ability to exchange data between threads (in the style of Qt), so whole familiar areas immediately drop out.

In general, the absence of Qt bindings and the laboriousness of creating bindings to ordinary libraries is already a huge stopper.

Async? Oops.

Finally, there is the situation that most of the packages in the dub registry simply do not compile with the current compiler versions (at least not without shamanic dances around "deprecated", which is beyond the average student).

IMHO, I would like to draw attention to this first; then the spread of the language could follow the same path as Python did in the recent past: from students to experienced people.

Automatically and successfully.

November 24, 2021

On Wednesday, 24 November 2021 at 10:23:22 UTC, Gleb wrote:

> In general, the absence of Qt bindings and the laboriousness of creating bindings to ordinary libraries is already a huge stopper.

QtE5.

November 24, 2021

On Wednesday, 24 November 2021 at 10:23:22 UTC, Gleb wrote:

> There are no full-fledged signals and slots with the ability to exchange data between threads (in the style of Qt), so whole familiar areas immediately drop out.

D has a native message-passing system between threads. Qt signals are a special case, since they can also be used within the same thread, in which case they are just function calls. There is also syntactic sugar for declaring Qt signals in C++.

I don't know of any language that natively implements signals the way Qt does.
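For reference, a minimal sketch of that native message passing using std.concurrency; it covers the cross-thread half of what Qt signals/slots are typically used for, not the in-thread direct-call case.

```d
import std.concurrency;
import std.stdio;
import core.thread : thread_joinAll;

void worker()
{
    // Blocks until a message of a matching type arrives in this thread's mailbox.
    receive((int x) { writeln("worker received: ", x); });
}

void main()
{
    auto tid = spawn(&worker);
    tid.send(42);        // delivered to the worker's mailbox across threads
    thread_joinAll();    // wait for the spawned thread to finish
}
```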