June 12, 2021

On Saturday, 12 June 2021 at 11:35:58 UTC, Paulo Pinto wrote:

> Rust slowness is mostly due to LLVM; with other backends like Cranelift, and using lld, it is considerably faster.

And what about MIR? Guys, every IR is a cost.

> They are also in the process of moving the lifetime resolution engine to a Datalog-based one, something that D will most likely never do:
> http://smallcultfollowing.com/babysteps/blog/2018/04/27/an-alias-based-formulation-of-the-borrow-checker/

Interesting, but it won't cut theoretical complexity just by using better heuristics.

June 12, 2021

On Saturday, 12 June 2021 at 12:12:51 UTC, sighoya wrote:

> On Saturday, 12 June 2021 at 11:35:58 UTC, Paulo Pinto wrote:
>> Rust slowness is mostly due to LLVM; with other backends like Cranelift, and using lld, it is considerably faster.
>
> And what about MIR? Guys, every IR is a cost.

Not necessarily. If you can make it independent, like verification, then it can run in parallel (even on another computing node).

If you can make the verification unit self-contained then you can also cache results.

It is a combination of well thought out language design and tooling design.
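A minimal Rust sketch of the idea (the unit granularity, the `verify` stand-in and the content-keyed cache are my own assumptions, not an existing tool): if a verification unit is self-contained, its result can be cached by the unit's own text and the units can be checked concurrently.

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::thread;

// Stand-in for an expensive, self-contained check on one unit.
fn verify(unit: &str) -> bool {
    !unit.contains("unsafe")
}

fn main() {
    let units = ["fn a() {}", "fn b() { unsafe {} }", "fn a() {}"];
    // Results keyed by the unit's own text: a self-contained unit only
    // needs re-checking when its text changes.
    let cache: Mutex<HashMap<&str, bool>> = Mutex::new(HashMap::new());

    // Each unit is checked on its own thread; the same idea extends to
    // separate processes or machines, since the check needs no shared state.
    thread::scope(|s| {
        for &unit in &units {
            let cache = &cache;
            s.spawn(move || {
                if let Some(ok) = cache.lock().unwrap().get(unit).copied() {
                    println!("cache hit  {unit:?}: {ok}");
                    return;
                }
                // A duplicate unit may race and be verified twice; the
                // result is the same, so that is harmless here.
                let ok = verify(unit);
                cache.lock().unwrap().insert(unit, ok);
                println!("verified   {unit:?}: {ok}");
            });
        }
    });
}
```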

June 12, 2021

On Saturday, 12 June 2021 at 12:10:13 UTC, sighoya wrote:

> On Saturday, 12 June 2021 at 11:29:59 UTC, Ola Fosheim Grøstad wrote:
>> Start thinking of D-tooling as a set of tools, not just DMD. So you have DMD + verifier + IDE-server + IDE of your own choice.
>
> This is a major undertaking for DMD. Why not develop a separate IDE compiler, just as is the case for Java? It is slower, for sure, but it literally enables developing code faster.

Hahaha, maintaining an ever-growing compiler is A MUCH larger undertaking.

> I think this is true regarding GC as the evil of efficiency :).

Hi, I read one of the original papers on Simula's GC as part of a compiler course. It didn't differ much from D's GC.

We are talking 1960/1970s.

June 12, 2021

On Saturday, 12 June 2021 at 12:12:51 UTC, sighoya wrote:

> And what about MIR? Guys, every IR is a cost.

Besides the concurrency argument making this point moot, generating the IR is O(N), whereas advanced static analysis is O(N²).

June 13, 2021
On Saturday, 12 June 2021 at 08:58:47 UTC, zjh wrote:
> On Saturday, 12 June 2021 at 08:13:42 UTC, Ola Fosheim Grøstad wrote:
>> One of the things I don't like about C++ is that signatures
>
> Lifetime is a range; it should use `belongs to (∈)`, `equals (==)`, `includes (∋)`, and the opposites `(∉, !=)`.

∈ is wrong; one lifetime isn't a _member_ of another, but a _subset_ (if it dies first) or a _superset_ (if it dies after).

That being said, I think the <= notation is appropriate.  In particular, it mirrors type theory, where T≤U means that T is a subtype of U.
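For what it's worth, Rust spells the relation `'long: 'short` ("'long outlives 'short"), and coercion goes in the subtyping direction. A minimal sketch (function and variable names are mine):

```rust
// Hypothetical names; `'long: 'short` reads as "'long outlives 'short".
fn shorter_of<'short, 'long: 'short>(a: &'short str, b: &'long str) -> &'short str {
    // `&'long str` coerces to `&'short str` because 'long outlives 'short:
    // the "bigger" lifetime is usable wherever the "smaller" one is expected,
    // just like a subtype is usable where its supertype is expected.
    if a.len() <= b.len() { a } else { b }
}

fn main() {
    let long_lived = String::from("lives longer");
    {
        let short_lived = String::from("dies first");
        let r = shorter_of(&short_lived, &long_lived);
        println!("{r}");
    } // `short_lived` (and every `&'short` borrow) ends here
}
```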
June 13, 2021

On Sunday, 13 June 2021 at 01:00:11 UTC, Elronnd wrote:

> On Saturday, 12 June 2021 at 08:58:47 UTC, zjh wrote:

It depends on whether we need the starting point of a lifetime.
If we do, it is a range, and the range relations are: includes (is included), equals (is unequal), intersects (or not).
If we don't care about the starting point, only the ending point, then a lifetime is just a comparison relation.

June 14, 2021
On Sunday, 13 June 2021 at 01:00:11 UTC, Elronnd wrote:
> That being said, I think the <= notation is appropriate.  In particular, it mirrors type theory, where T≤U means that T is a subtype of U.

Yes, although when I think of it, you might also want an abstract data type (ADT) to be able to claim that the objects it returns references to have the same lifetime as itself.

Which basically means the object will be destroyed by the destruction of the ADT.
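A minimal Rust sketch of that constraint (the `Registry` type and its method are hypothetical): the reference handed out is tied to a borrow of the ADT itself, so it cannot survive the ADT's destruction.

```rust
struct Registry {
    items: Vec<String>,
}

impl Registry {
    // The elided lifetime is "same as `&self`": the returned reference
    // must be gone by the time the Registry is destroyed.
    fn get(&self, i: usize) -> Option<&String> {
        self.items.get(i)
    }
}

fn main() {
    let reg = Registry { items: vec!["a".into(), "b".into()] };
    let r = reg.get(0);
    println!("{:?}", r); // fine: `reg` is still alive
    drop(reg);
    // println!("{:?}", r); // rejected: `r` would outlive `reg`
}
```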

Maybe one can define the lifetime equivalent of "critical sections", basically stating that "these objects" will outlive the entry of "that critical section". Or that "these objects" will not outlive the entry of "that critical section".

Hm.

