Can signatures be made simpler and error messages be made better?
June 12

One of the things I don't like about C++ is that signatures often end up being complex. Sadly, D2's signatures are even more complex (D1 had an edge on C++ there).

Why not simply move "linting" information to a separate line and keep the "first line" of function definitions clean?

E.g. to require the return value and c.x to have the same lifetime:

int myfunc(A a, ref T c)
lifetime(return,c.x)
{
  body
}

Or introduce a new operator £ that takes the lifetime of an object, which I think is even better:

int myfunc(A a, B b, ref T c)
require(£return==£a.x)
{
  body
}
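For comparison, Rust expresses this kind of constraint with named lifetime parameters in the signature. A minimal sketch with hypothetical stand-in types (note that Rust's annotation is really an "outlives" relation, not strict equality):

```rust
// Hypothetical stand-ins for the A, B and T types used above.
struct A { x: i32 }
struct B;
struct T;

// Rough Rust analogue of require(£return == £a.x): giving the borrow
// of `a` and the return type the same lifetime 'a ties them together,
// so the result cannot outlive a.x.
fn myfunc<'a>(a: &'a A, _b: B, _c: &mut T) -> &'a i32 {
    &a.x
}

fn main() {
    let a = A { x: 7 };
    let mut t = T;
    let r = myfunc(&a, B, &mut t);
    println!("{}", r); // prints 7
}
```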

This would also allow specifying destruction order:

require(£b < £a.x)

Or are there things that need to be expressed about lifetimes that cannot work with such a scheme?

I also think a scheme like this should allow library authors to give names to individual requirements and invariants.

That would allow better error messages, maybe even custom error handlers, e.g.:

…
invariant outofbounds { tests }
…
require lifetimes (…)
…
__interpret_compilation_error outofbounds (context) {
   if (context.xyz such and such)
      return "This failed because..."
}

June 12

On Saturday, 12 June 2021 at 08:13:42 UTC, Ola Fosheim Grøstad wrote:

> One of the things I don't like about C++ is that signatures

A lifetime is a range, so we should be able to say "belongs to" (∈), "equals" (==), "contains" (∋, the flipped ∈), and the negations (∉, !=).

June 12

On Saturday, 12 June 2021 at 08:58:47 UTC, zjh wrote:

> On Saturday, 12 June 2021 at 08:13:42 UTC, Ola Fosheim Grøstad wrote:
>
>> One of the things I don't like about C++ is that signatures
>
> A lifetime is a range, so we should be able to say "belongs to" (∈), "equals" (==), "contains" (∋, the flipped ∈), and the negations (∉, !=).

Yes, you can think of it as "being within" too, but what if you only care about destruction order?

I am thinking that we only need "<" and "<=" for function signatures, as we can assume the object is already constructed. Although maybe there are use cases where you want to be able to delay construction (as an optimization or for some other reason).

Number maximum(Number a, Number b)
require(£return <= £a && £return <= £b)
{
…
}

As a more advanced language extension:

Number maximum(Number a, Number b)
require(!(a >= b) || £return <= £a)
require(!(a >= b) || £return <= £b)
{
…
}

or simply

Number maximum(Number a, Number b)
{
   if (a >= b) {
      assert(£return <= £a)
      return a
   }
   assert(£return <= £b)
   return b
}
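For comparison, in Rust the conjunction `£return <= £a && £return <= £b` is spelled as a single shared lifetime parameter; a minimal sketch with `i32` standing in for the hypothetical `Number`:

```rust
// Both borrows and the result share one lifetime 'a, which is Rust's
// way of saying £return <= £a && £return <= £b: the result is only
// valid while both arguments are.
fn maximum<'a>(a: &'a i32, b: &'a i32) -> &'a i32 {
    if a >= b { a } else { b }
}

fn main() {
    let (x, y) = (3, 5);
    println!("{}", maximum(&x, &y)); // prints 5
}
```

The per-branch refinement in the "more advanced" variant above has no direct Rust equivalent: the single lifetime over-approximates by tying the result to both arguments, whichever branch is taken.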

(I have little experience with lifetimes in practice as I have not given Rust a shot yet.)

June 12

On Saturday, 12 June 2021 at 08:13:42 UTC, Ola Fosheim Grøstad wrote:

> int myfunc(A a, B b, ref T c)
> require(£return==£a.x)
> {
>   body
> }

Why not just reuse what we have?

int myfunc(A a, B b, ref T c) if c.lt == retval.lt

Lifetime tracking is already a form of static dependent typing, and to be usable it requires introducing some type state.
In Rust, lifetimes are inferred all the time, and the inferred lifetimes are propagated to infer new lifetimes.
Rust has had this property from the beginning; I don't know if a language can add it afterwards. Another downside is that type state, in particular lifetime state, puts an additional burden on the compiler.
Solving inequalities may be harder than simple polynomial constraint evaluation (disregarding custom computations), just as is the case for the current if section?

I think this is part of the reason why Rust compiles slow.

To be useful, such constraints have to be preserved in ABI, otherwise you end up with the same problems as in Rust.

Here is a link to Ralf's Thesis: https://people.mpi-sws.org/~jung/phd/thesis-screen.pdf

I've never read it, but it describes how lifetimes work, at least I saw a chapter about it.

June 12

On Saturday, 12 June 2021 at 10:38:52 UTC, sighoya wrote:

> I think this is part of the reason why Rust compiles slow.
>
> To be useful, such constraints have to be preserved in ABI, otherwise you end up with the same problems as in Rust.

You can export the constraints as annotations.

But I think it absolutely should not be part of the type or overloading. You should be able to turn it off and still get the same executable.

That way it has no impact on compile times, meaning, you can do the "linting" in the background after compilation.

That way you have fast compilation, can start testing immediately, and while testing the linter produces a report on borrowing-problems.

June 12

On Saturday, 12 June 2021 at 10:48:12 UTC, Ola Fosheim Grøstad wrote:

> But I think it absolutely should not be part of the type or overloading. You should be able to turn it off and still get the same executable.
>
> That way it has no impact on compile times, meaning you can do the "linting" in the background after compilation.
>
> That way you have fast compilation, can start testing immediately, and while testing the linter produces a report on borrowing-problems.

Just to reiterate the essence: the general problem with static dependent typing is tracking type (lifetime) state to make the lifetime constraints useful; otherwise you have a hard time automatically proving your code correct given the lifetime constraints.

It is true that we would save lifetime inference if we made it opt-in. But if we turn it on, we still need to track the lifetime state of the world before the function call, which could include all code.
Further, code written between these opt-ins can be wrong and needs to be corrected to work properly, which can become a hard undertaking. I think this is the reason why Rust makes it mandatory all the time.

June 12

On Saturday, 12 June 2021 at 11:14:08 UTC, sighoya wrote:

> Just to reiterate the essence: the general problem with static dependent typing is tracking type (lifetime) state to make the lifetime constraints useful; otherwise you have a hard time automatically proving your code correct given the lifetime constraints.

Not really sure what you mean now. You would have to make the constraints more limiting than needed, to get the verification to pass. That is certain.

However, the basic idea would be to emit the high level IR, so there is really no limit to what can be analyzed.

> It is true that we would save lifetime inference if we made it opt-in.

I don't think it should be opt-in. The code gen would optimize based on the guarantees that are stated in the code. So the verification would run, but it could run delayed, then feed back information to the IDE.

I think it should ship with the compiler, but I think it should be a separate pass.

It can be done as a separate program for sure. Which also would allow using solvers developed for other languages.

Start thinking of D-tooling as a set of tools, not just DMD. So you have DMD + verifier + IDE-server + IDE of your own choice.

> I think this is the reason why Rust makes it mandatory all the time.

Maybe the mindsets of the Go and Rust teams are very conservative and stuck in the past?

Try to think of the compiler as just a small piece of the development environment.

June 12

On Saturday, 12 June 2021 at 10:38:52 UTC, sighoya wrote:

> On Saturday, 12 June 2021 at 08:13:42 UTC, Ola Fosheim Grøstad wrote:
>
>> int myfunc(A a, B b, ref T c)
>> require(£return==£a.x)
>> {
>>   body
>> }
>
> Why not just reuse what we have?
>
> int myfunc(A a, B b, ref T c) if c.lt == retval.lt
>
> Lifetime tracking is already a form of static dependent typing, and to be usable it requires introducing some type state.
> In Rust, lifetimes are inferred all the time, and the inferred lifetimes are propagated to infer new lifetimes.
> Rust has had this property from the beginning; I don't know if a language can add it afterwards. Another downside is that type state, in particular lifetime state, puts an additional burden on the compiler.
> Solving inequalities may be harder than simple polynomial constraint evaluation (disregarding custom computations), just as is the case for the current if section?
>
> I think this is part of the reason why Rust compiles slow.
>
> To be useful, such constraints have to be preserved in ABI, otherwise you end up with the same problems as in Rust.
>
> Here is a link to Ralf's Thesis: https://people.mpi-sws.org/~jung/phd/thesis-screen.pdf
>
> I've never read it, but it describes how lifetimes work, at least I saw a chapter about it.

Rust's slowness is mostly due to LLVM; with other backends like Cranelift, and using lld, it is considerably faster.

They are also in the process of moving the lifetime resolution engine to a Datalog-based one, something that D most likely will never do:

http://smallcultfollowing.com/babysteps/blog/2018/04/27/an-alias-based-formulation-of-the-borrow-checker/
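The alias-based ("Polonius") formulation linked above boils down to Datalog rules over region-subset facts, e.g. `subset(O1, O3) :- subset(O1, O2), subset(O2, O3)`. A toy sketch (not the real engine) of that transitive rule as a naive fixpoint, with region variables just numbered for illustration:

```rust
use std::collections::BTreeSet;

// Naive Datalog-style fixpoint for the rule
//   subset(O1, O3) :- subset(O1, O2), subset(O2, O3).
// Each pair (o1, o2) is a fact "region o1 is a subset of region o2".
fn transitive_closure(mut subset: BTreeSet<(u32, u32)>) -> BTreeSet<(u32, u32)> {
    loop {
        let mut derived = Vec::new();
        for &(o1, o2) in &subset {
            for &(o2b, o3) in &subset {
                if o2 == o2b && !subset.contains(&(o1, o3)) {
                    derived.push((o1, o3));
                }
            }
        }
        if derived.is_empty() {
            return subset; // fixpoint reached: no new facts derivable
        }
        subset.extend(derived);
    }
}

fn main() {
    // 'r1 ⊆ 'r2 and 'r2 ⊆ 'r3 must imply 'r1 ⊆ 'r3.
    let base: BTreeSet<(u32, u32)> = [(1, 2), (2, 3)].into_iter().collect();
    let closed = transitive_closure(base);
    assert!(closed.contains(&(1, 3)));
    println!("{:?}", closed); // prints {(1, 2), (1, 3), (2, 3)}
}
```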

June 12

On Saturday, 12 June 2021 at 11:35:58 UTC, Paulo Pinto wrote:

> They are also in the process of moving the lifetime resolution engine to a Datalog-based one, something that D most likely will never do

Interesting. Why not though? There is a decent datalog engine in Z3, which is C++.

Just don't put everything in the compiler-executable.

June 12

On Saturday, 12 June 2021 at 11:29:59 UTC, Ola Fosheim Grøstad wrote:

> On Saturday, 12 June 2021 at 11:14:08 UTC, sighoya wrote:
>
>> Just to reiterate the essence: the general problem with static dependent typing is tracking type (lifetime) state to make the lifetime constraints useful; otherwise you have a hard time automatically proving your code correct given the lifetime constraints.
>
> Not really sure what you mean now. You would have to make the constraints more limiting than needed, to get the verification to pass. That is certain.

What I mean is that constraints are pointless if no static information is inferred.

> However, the basic idea would be to emit the high level IR, so there is really no limit to what can be analyzed.

I'm not a fan of it; each IR has a cost.

> I don't think it should be opt-in. The code gen would optimize based on the guarantees that are stated in the code. So the verification would run, but it could run delayed, then feed back information to the IDE.

So we evaluate it anyway; then we have the same cost as in Rust?

> I think it should ship with the compiler, but I think it should be a separate pass.

It doesn't matter what component is responsible for it; it is part of the compiler pipeline, so you have to wait for it.

> Start thinking of D-tooling as a set of tools, not just DMD. So you have DMD + verifier + IDE-server + IDE of your own choice.

This is a major undertaking for DMD. Why not develop a separate IDE compiler, just as is the case for Java? It is slower, for sure, but it literally enables developing code faster.

> Maybe the mindsets of the Go and Rust teams are very conservative and stuck in the past?

I think this is true regarding treating the GC as the enemy of efficiency :).
