November 08, 2019
On Friday, 8 November 2019 at 18:11:39 UTC, Doc Andrew wrote:
> It works if you explicitly mark overflow as an error. You can try to prove that x <= x.max for a given path, and if you can't, then you just flag a warning.

So you manually assert it then? I guess you could use a templated type with asserts built in.
But then the solver has to figure out when to designate the type as an integer and when to designate it as a bitvector. I guess there is some room for that... perhaps.

Another approach is to first prove the code correct for natural numbers and then try to prove that there is some upper limit and select an integer size that can contain that upper limit (e.g. 64 bits vs 32 bits). I am not sure about Dafny, but I think Whiley backends do that.
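A toy sketch of that sizing idea: infer a provable upper bound for an expression over bounded natural-number inputs, then pick the smallest machine width that can hold it. (Hypothetical Python, not how Whiley's backends are actually implemented.)

```python
def upper_bound(expr, env):
    """Return a proven upper bound for a (+, *) expression over naturals.

    Expressions are tuples: ("const", n), ("var", name),
    ("add", lhs, rhs), ("mul", lhs, rhs). env maps each variable
    to its declared upper bound.
    """
    op = expr[0]
    if op == "const":
        return expr[1]
    if op == "var":
        return env[expr[1]]
    lhs, rhs = upper_bound(expr[1], env), upper_bound(expr[2], env)
    return lhs + rhs if op == "add" else lhs * rhs

def select_width(bound):
    """Smallest common unsigned width that can contain the bound."""
    for width in (8, 16, 32, 64):
        if bound < 2 ** width:
            return width
    raise ValueError("no fixed width fits; keep arbitrary precision")

# x <= 1000 and y <= 1000, so x * y + x <= 1_001_000: fits in 32 bits.
expr = ("add", ("mul", ("var", "x"), ("var", "y")), ("var", "x"))
bound = upper_bound(expr, {"x": 1000, "y": 1000})
print(bound, select_width(bound))   # 1001000 32
```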


November 08, 2019
On Friday, 8 November 2019 at 18:25:02 UTC, Ola Fosheim Grøstad wrote:
> On Friday, 8 November 2019 at 18:11:39 UTC, Doc Andrew wrote:
>> It works if you explicitly mark overflow as an error. You can try to prove that x <= x.max for a given path, and if you can't, then you just flag a warning.
>
> So you manually assert it then? I guess you could use a templated type with asserts built in.
> But then the solver has to figure out when to designate the type as an integer and when to designate it as a bitvector. I guess there is some room for that... perhaps.
>
> Another approach is to first prove the code correct for natural numbers and then try to prove that there is some upper limit and select an integer size that can contain that upper limit (e.g. 64 bits vs 32 bits). I am not sure about Dafny, but I think Whiley backends do that.

With SPARK you don't have to manually assert the checks, it does the overflow checks as part of its proof automatically. The neat thing is that you define new modular types with their range of allowable values, so as you prove the code, you show that those bounds aren't exceeded. It's basically a type invariant contract as part of the type definition itself (as I understand it).
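A rough dynamic analogue of that in plain Python: a type that carries its range of allowable values and re-checks every result. SPARK discharges these checks statically at proof time; this sketch just asserts at runtime, so it's illustrative only, not SPARK's semantics.

```python
class Ranged:
    """An integer confined to [lo, hi]; every operation re-checks the bounds."""

    def __init__(self, value, lo, hi):
        self.lo, self.hi = lo, hi
        assert lo <= value <= hi, f"{value} outside [{lo}, {hi}]"
        self.value = value

    def __add__(self, other):
        # The bounds travel with the type, like an invariant contract
        # attached to the type definition itself.
        return Ranged(self.value + other, self.lo, self.hi)

score = Ranged(90, 0, 100)
score = score + 5    # fine: 95 is within [0, 100]
# score + 10 would raise AssertionError: 105 outside [0, 100]
```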
November 08, 2019
On Friday, 8 November 2019 at 18:39:25 UTC, Doc Andrew wrote:
> With SPARK you don't have to manually assert the checks, it does the overflow checks as part of its proof automatically. The neat thing is that you define new modular types with their range of allowable values, so as you prove the code, you show that those bounds aren't exceeded. It's basically a type invariant contract as part of the type definition itself (as I understand it).

Hm, seems to me that it has signed integers with bounds that are non-modular and unsigned integers that are modular:

https://learn.adacore.com/courses/intro-to-ada/chapters/strongly_typed_language.html#integers

So, in that case a prover could model bounded integers as integers with a constraint, and modular unsigned integers as bitvectors.
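Those two semantics side by side, as a small Python sketch (illustrative, not tied to any particular prover): a bounded signed integer where leaving the range is an error, and a modular type where arithmetic simply wraps like a bitvector.

```python
def add_bounded(a, b, lo, hi):
    """Non-modular: compute exactly over the integers; out-of-range
    results are an error (a proof obligation, for a static prover)."""
    result = a + b
    if not lo <= result <= hi:
        raise OverflowError(f"{result} outside [{lo}, {hi}]")
    return result

def add_modular(a, b, bits):
    """Modular: arithmetic wraps, exactly like a fixed-width bitvector."""
    return (a + b) & ((1 << bits) - 1)

print(add_modular(250, 10, 8))        # 260 mod 256 = 4
print(add_bounded(250, 10, 0, 1000))  # 260, within range
```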

November 08, 2019
On Friday, 8 November 2019 at 18:39:25 UTC, Doc Andrew wrote:
> On Friday, 8 November 2019 at 18:25:02 UTC, Ola Fosheim Grøstad wrote:
>> On Friday, 8 November 2019 at 18:11:39 UTC, Doc Andrew wrote:
>>> It works if you explicitly mark overflow as an error. You can try to prove that x <= x.max for a given path, and if you can't, then you just flag a warning.
>>
>> So you manually assert it then? I guess you could use a templated type with asserts built in.
>> But then the solver has to figure out when to designate the type as an integer and when to designate it as a bitvector. I guess there is some room for that... perhaps.
>>
>> Another approach is to first prove the code correct for natural numbers and then try to prove that there is some upper limit and select an integer size that can contain that upper limit (e.g. 64 bits vs 32 bits). I am not sure about Dafny, but I think Whiley backends do that.
>
> With SPARK you don't have to manually assert the checks, it does the overflow checks as part of its proof automatically. The neat thing is that you define new modular types with their range of allowable values, so as you prove the code, you show that those bounds aren't exceeded. It's basically a type invariant contract as part of the type definition itself (as I understand it).

It's probably worth noting too that SPARK just translates the type definitions it sees into Why3 ML code, and uses that in the Why3 meta-solver for the proof. It separates proof from compilation, so if the proof checks out, then you can compile into bare-metal machine code. This is a little different from a lot of the FV languages, which take an ML/FP-ish language (with GC, etc.) and then have a C backend for it (usually with a runtime).
November 08, 2019
On Friday, 8 November 2019 at 18:53:49 UTC, Doc Andrew wrote:
> It's probably worth noting too that SPARK just translates the type definitions it sees into Why3 ML code, and uses that in the Why3 meta-solver for the proof.

Ah, got it :) I haven't looked at Why3 yet.

> This is a little different from a lot of the FV languages, which take an ML/FP-ish language (with GC, etc.) and then have a C backend for it (usually with a runtime).

Yeah, high-level FV languages can just do like Python and have unbounded integers and such.




November 08, 2019
On 11/07/2019 07:34 AM, Paolo Invernizzi wrote:

>> http://plv.mpi-sws.org/rustbelt/
>
> "... Any realistic languages

On a lighter note, that's an uncharacteristically nonacademic phrase.

>  targeting this domain in the future

Is that an example of the common ignorance towards D, or do they accept D as being already safe? Or perhaps the future is already here with D? :)

On a more serious note, I very much appreciate Rust, their taking D seriously, and any work towards software safety.

Ali

November 08, 2019
On Friday, 8 November 2019 at 18:51:42 UTC, Ola Fosheim Grøstad wrote:
> On Friday, 8 November 2019 at 18:39:25 UTC, Doc Andrew wrote:
>> With SPARK you don't have to manually assert the checks, it does the overflow checks as part of its proof automatically. The neat thing is that you define new modular types with their range of allowable values, so as you prove the code, you show that those bounds aren't exceeded. It's basically a type invariant contract as part of the type definition itself (as I understand it).
>
> Hm, seems to me that it has signed integers with bounds that are non-modular and unsigned integers that are modular:
>
> https://learn.adacore.com/courses/intro-to-ada/chapters/strongly_typed_language.html#integers
>
> So, in that case a prover could model bounded integers as integers with a constraint, and modular unsigned integers as bitvectors.

I'm WAAAYYY out of my pay grade here, but I _think_ the distinction may depend on the underlying logic that the solver is using. Something like Coq, where your naturals are built on successor types, would probably have a hard time with the modular arithmetic, but it might just not be a big deal for the Why3 solvers, which I _think_ are using Hoare logic, since that's what HOL/ML is built on. I'm not really an expert in Hoare logic though, and know just enough Coq to embarrass myself :) The Why3 solver will also let you use Coq to do a manual proof in the event that the automated prover can't figure it out, so I can't say for sure.

November 08, 2019
On Friday, 8 November 2019 at 19:04:38 UTC, Doc Andrew wrote:
> I'm not really an expert in Hoare logic though

Roughly: Hoare logic is a set of rules for going from the precondition to the postcondition over a single statement. Each rule specifies what the postcondition should be given the precondition (and vice versa).
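A tiny illustration of one such rule, the assignment rule {Q[x := e]} x := e {Q}: the precondition is the postcondition with the assigned expression substituted for the variable. Here predicates are modelled as Python functions over a state dict (illustrative sketch only).

```python
def wp_assign(var, expr, post):
    """Weakest precondition of `var := expr` w.r.t. postcondition `post`."""
    def pre(state):
        new_state = dict(state)
        new_state[var] = expr(state)   # substitute e for x in Q
        return post(new_state)
    return pre

# Postcondition Q: x <= 10. Statement: x := x + 3.
# The rule gives precondition x + 3 <= 10.
post = lambda s: s["x"] <= 10
pre = wp_assign("x", lambda s: s["x"] + 3, post)

print(pre({"x": 7}))   # 7 + 3 <= 10 -> True
print(pre({"x": 8}))   # 8 + 3 <= 10 -> False
```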

> myself :) The Why3 solver will also let you use Coq to do a manual proof in the event that the automated prover can't figure it out, so I can't say for sure.

Hm, that's interesting. I've bookmarked Why3, now… 4 l8r… :-)

November 08, 2019
On Friday, 8 November 2019 at 19:11:52 UTC, Ola Fosheim Grøstad wrote:
> On Friday, 8 November 2019 at 19:04:38 UTC, Doc Andrew wrote:
>> I'm not really an expert in Hoare logic though
>
> Roughly: Hoare logic is a set of rules for going from the precondition to the postcondition over a single statement. Each rule specifies what the postcondition should be given the precondition (and vice versa).
>
>> myself :) The Why3 solver will also let you use Coq to do a manual proof in the event that the automated prover can't figure it out, so I can't say for sure.
>
> Hm, that's interesting. I've bookmarked Why3, now… 4 l8r… :-)

I think we've probably given the next batch of GSoC candidates something to think about at least, haha.
November 08, 2019
On Friday, 8 November 2019 at 17:41:06 UTC, Ola Fosheim Grøstad wrote:
> «Structured concurrency» reminds me of rendezvous, i.e. that threads wait for each other at certain points, but less flexible.

What I found insightful is the idea that without structured concurrency, functions can just spawn tasks with no obligation to wait for them or clean them up. What you find in practice is that those responsibilities are often spread over several functions. The result is that even if you have clean code, the starts and ends of those tasks are woven like spaghetti through your code.

Structured concurrency introduces blocks for concurrency, which are similar to those introduced by structured programming. You can always be sure that when a nursery is exited, all its tasks have properly ended.

I for one have had many occasions where I had a rogue task or a leaking websocket.

With structured concurrency, writing that code is easier, but more importantly, reviewing it is easier as well.

> I'm not really convinced that a "nursery" is easier to deal with than futures/promises, in the general case, though.

You still have futures with a nursery. It is just that you can't leave the nursery block without all concurrent tasks completing, timing out, or being canceled.
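A toy thread-based nursery that captures just that guarantee: you cannot leave the with-block until every task spawned inside it has finished. (Minimal sketch; real implementations such as Trio's nurseries also handle cancellation, timeouts, and error propagation.)

```python
import threading

class Nursery:
    """Tasks spawned here cannot outlive the enclosing with-block."""

    def __init__(self):
        self.tasks = []

    def start_soon(self, fn, *args):
        t = threading.Thread(target=fn, args=args)
        t.start()
        self.tasks.append(t)

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        for t in self.tasks:   # block until all children are done:
            t.join()           # no task escapes the nursery

results = []
with Nursery() as nursery:
    nursery.start_soon(results.append, 1)
    nursery.start_soon(results.append, 2)
# Here, both tasks are guaranteed to have completed.
print(sorted(results))   # [1, 2]
```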