March 03, 2017
On Friday, 3 March 2017 at 16:38:52 UTC, Kagamin wrote:
> On Friday, 3 March 2017 at 02:11:38 UTC, Moritz Maxeiner wrote:
>> My major gripe, though, is still that people tend to create "safe" wrappers around "unsafe" (mostly) C libraries, which (in the sense of safety) doesn't really help me as a developer at all
>
> Wrappers are needed because C libraries have unsafe (and underdocumented) APIs that are easy to get wrong. I saw it happen twice in druntime. Safety is like optimization: you can handle it once or twice, but code has to handle it always; that makes a difference.

And the wrappers can get it wrong just the same as if I'd done it myself, i.e. I need to either audit the wrapper's code or trust yet another person (or several) to get things right. Of course you're right about reducing the number of points of failure, but that still doesn't give me more confidence in them.
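To make the auditing burden concrete, here is a minimal Rust sketch of the kind of wrapper being discussed (everything except libc's `strlen` is hypothetical): the wrapper's signature is safe, but the soundness argument lives entirely in its body, which is exactly the part you'd still have to audit or trust:

```rust
use std::ffi::CString;
use std::os::raw::c_char;

extern "C" {
    // libc's strlen: the caller must pass a valid, NUL-terminated pointer.
    fn strlen(s: *const c_char) -> usize;
}

// A "safe" wrapper: callers can't misuse it through the type system,
// but whether it is actually sound depends on the reasoning below.
fn safe_strlen(s: &std::ffi::CStr) -> usize {
    // SAFETY: CStr guarantees a valid, NUL-terminated buffer,
    // so strlen's precondition holds.
    unsafe { strlen(s.as_ptr()) }
}

fn main() {
    let s = CString::new("hello").unwrap();
    println!("{}", safe_strlen(&s));
}
```

The point of contention in the thread is precisely that the `// SAFETY` comment is a human argument, not something the compiler checks.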
March 03, 2017
On Friday, 3 March 2017 at 16:43:05 UTC, Kagamin wrote:
> On Friday, 24 February 2017 at 19:19:57 UTC, Moritz Maxeiner wrote:
>> *Then* you have to provide conclusive (or at the very least hard to refute) proof that the reason that no one could break them were the memory safety features; and then, *finally*, you can point to all the people *still not using memory safe languages* and say "Told you so".
>
> Such proof is impossible because correct programs can be written in unsafe languages.

And you can write memory incorrect programs in what's currently called memory safe languages[1], which is why we need more programs in such languages to reach a reasonable sample size for comparison and analysis against programs in classic languages such as C/C++.
A formal, mathematical proof is impossible, yes, but if you have a large enough sample size of programs in a memory safe(r) language, *and* can verify that they are indeed memory correct (and thus not open to all the usual attack vectors), then that falls under what I'd categorize as "hard to refute". But you're right, I should've been more specific, my bad.

[1] https://www.x41-dsec.de/reports/Kudelski-X41-Wire-Report-phase1-20170208.pdf
March 05, 2017
On 2017-03-03 16:23, Moritz Maxeiner wrote:

> That would be a good next step from an engineering standpoint, I agree,
> to proceed to minimize the amount of trust in people you need to have vs
> verifiable safety.
> I have considered porting something like seL4[1] to Rust, but ultimately
> this would take a significant amount of time and even if done you'd then
> have the biggest problem any new kernel faces: Hardware support. Driver
> development is AFAIK mostly done by people working for the hardware
> manufacturer and you're going to have a hard (probably closer to
> impossible) time convincing them to spend money on driver development
> for you. And if they don't you'll have close to 30 years of hardware
> support to catch up on by yourself.
> But suppose you limit yourself to a single (or at most a handful of
> homogeneous) platform(s) like [2], e.g. a new AArch64 board. Suppose you
> even take one where the hardware is open so you can audit its
> schematics, then you'll *still* either have to use proprietary firmware
> for the (partially onboard) periphery (and have unsafe interfaces to
> them), or - once again - write all the device firmware yourself.
> And once you've done all of that you're still missing userspace, i.e.
> you have a nice new OS without any actual use for it (yet). So you
> either start writing your own incompatible, safe userspace, or you're
> going to decide to integrate the userspace of existing OSs (probably
> POSIX?) to your new OS, so you're going to be writing your own (safe)
> libc, (safe) pthread, etc, exposing (once again) unsafe APIs to the top.
> It will be safer than what we currently have on e.g Linux since you can
> probably make sure that unsafe use of them won't result in kernel
> exploits, though; this will, of course, take even more time.
> Finally, at the arduous end of your journey you're likely going to
> notice what - in my experience - most new OSs I've observed over the years
> experience: Essentially nobody is interested in actually switching to a
> volunteer-based OS.
> Honestly, I think you need serious corporate backing, a dedicated team,
> and like 5-10 years (low estimate) of guaranteed development time to
> have a snowballs chance in hell to pull this off and the only possible
> sponsors for this I'm both aware of and would currently trust not to cut
> you off in the middle are either already working on their own OS[3], or
> have dedicated their R&D to other things[4].

I agree. The only potential hope I see would be to port Linux to a memory safe language.

-- 
/Jacob Carlborg
March 05, 2017
On Sunday, 5 March 2017 at 11:48:23 UTC, Jacob Carlborg wrote:
> [...]
>
> I agree. The only potential hope I see would be to port Linux to a memory safe language.

That would indeed eliminate essentially all of those tasks; unfortunately, porting Linux would in itself require a tremendous amount of work. The only realistic way I can think of to do this would be to follow what was done e.g. in the dmd frontend and what is currently being done with remacs[1]: iteratively translate file by file, function by function. By the time you are done doing that with the Linux kernel, however - and I'm guessing 5 years is again a low estimate for the amount of work - your version will have become horribly out of sync with upstream, and then you'll continuously have to catch up with it. Unless, of course, you eventually decide that from point X forward you don't sync with upstream anymore and lose future driver support (since the Linux kernel's internal API changes with every minor release).
I'm not saying it shouldn't be attempted, btw, but anyone trying needs to be fully aware of what he/she is getting into and to assemble a sizeable, reliable group of people dedicated to the task, imho.
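For illustration, the remacs-style incremental approach relies on Rust's ability to export symbols with the C ABI, so individual functions can be swapped out one at a time while the rest of the codebase keeps linking against them unchanged. A minimal sketch (the function name and its semantics are hypothetical):

```rust
// A Rust replacement for a hypothetical C function, exported with
// the C ABI so existing C callers link against it unchanged.
#[no_mangle]
pub extern "C" fn checked_add(a: i32, b: i32) -> i32 {
    // The Rust side can add checks the original C version lacked:
    // saturate instead of silently wrapping on overflow.
    a.checked_add(b).unwrap_or(i32::MAX)
}

fn main() {
    // Called directly here for demonstration; in a real port the
    // caller would be the remaining C code.
    println!("{}", checked_add(2, 3));
}
```

Each translated function shrinks the unsafe surface a little, which is why the approach scales to a large codebase at all, even if the total effort remains enormous.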

[1] https://github.com/Wilfred/remacs
March 06, 2017
On 03.03.2017 16:52, Kagamin wrote:
> On Friday, 24 February 2017 at 20:16:28 UTC, Timon Gehr wrote:
>> No. Worse. It turns failures into UB.
>
> On the other hand disabled bounds check can result in buffer overflow,
> which is already UB enough, so asserts turned into assumes won't add
> anything new.

Not every program with a wrong assertion in it exceeds array bounds.
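The distinction can be made concrete with a hypothetical Rust sketch: compiling an assertion into an assumption introduces undefined behavior on its own, with no array access anywhere in sight (this is an illustration of the mechanism, not any compiler's actual release-mode behavior):

```rust
// A stand-in for "assert compiled into assume": if the condition is
// false, we declare this point unreachable, which is undefined
// behavior whenever the assumption turns out to be wrong.
fn assume(cond: bool) {
    if !cond {
        // SAFETY: only sound if `cond` is in fact always true.
        unsafe { std::hint::unreachable_unchecked() }
    }
}

// No array involved: a wrong assumption here still licenses the
// optimizer to delete the `x <= 0` handling entirely.
fn classify(x: i32) -> i32 {
    assume(x > 0);
    if x <= 0 { -1 } else { 1 }
}

fn main() {
    // Fine for positive input; classify(-1) would be UB under
    // the (then wrong) assumption.
    println!("{}", classify(5));
}
```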
March 07, 2017
On Monday, 6 March 2017 at 21:05:13 UTC, Timon Gehr wrote:
> Not every program with a wrong assertion in it exceeds array bounds.

Until it does.
March 07, 2017
On Friday, 3 March 2017 at 17:33:14 UTC, Moritz Maxeiner wrote:
> And you can write memory incorrect programs in what's currently called memory safe languages[1]

Those look like mistakes in interfacing between C and Rust, so that code isn't really written in a safe language. And most of them concern cryptographic security rather than memory safety; safe languages give no advantage there. But the report does still demonstrate a lack of memory safety issues.

> A formal, mathematical proof is impossible, yes, but if you have a large enough sample size of programs in a memory safe(r) language, *and* can verify that they are indeed memory correct (and thus not open to all the usual attack vectors), then that falls under what I'd categorize as "hard to refute". But you're right, I should've been more specific, my bad.

Does anybody try to refute it? Safe languages are not rejected for their safety.
March 07, 2017
On Tuesday, 7 March 2017 at 15:48:12 UTC, Kagamin wrote:
> On Monday, 6 March 2017 at 21:05:13 UTC, Timon Gehr wrote:
>> Not every program with a wrong assertion in it exceeds array bounds.
>
> Until it does.

Going outside array bounds isn't necessarily the same as a contradiction.

March 07, 2017
On Tuesday, 7 March 2017 at 16:18:01 UTC, Kagamin wrote:
> On Friday, 3 March 2017 at 17:33:14 UTC, Moritz Maxeiner wrote:
>> And you can write memory incorrect programs in what's currently called memory safe languages[1]
>
> Those look like mistakes in interfacing between C and Rust, so that code isn't really written in a safe language. And most of them concern cryptographic security rather than memory safety; safe languages give no advantage there. But the report does still demonstrate a lack of memory safety issues.

Then we need to define "memory safe language" a lot stricter than it's currently being used, and both D and Rust won't qualify as memory safe (since you can write unsafe code in them).

>
>> A formal, mathematical proof is impossible, yes, but if you have a large enough sample size of programs in a memory safe(r) language, *and* can verify that they are indeed memory correct (and thus not open to all the usual attack vectors), then that falls under what I'd categorize as "hard to refute". But you're right, I should've been more specific, my bad.
>
> Does anybody try to refute it? Safe languages are not rejected for their safety.

Right now, of course not, since the burden of proof is on the side advocating memory safety (i.e. us).
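For what it's worth, Rust does offer an opt-in way to enforce the stricter definition discussed above, at least per crate: a lint level that turns any `unsafe` block into a compile error rather than a convention. A minimal sketch:

```rust
// Reject any `unsafe` code in this crate at compile time, so every
// remaining line is covered by the borrow checker and bounds checks.
#![forbid(unsafe_code)]

fn main() {
    let v = vec![1, 2, 3];
    // Out-of-range indexing like v[10] would panic deterministically
    // rather than corrupt memory: with the forbid above there is no
    // way to opt out of the bounds check in this crate.
    let total: i32 = v.iter().sum();
    println!("{}", total);
}
```

This doesn't cover unsafe code hiding inside dependencies or the standard library, so it narrows rather than settles the definitional question.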
March 07, 2017
On Tuesday, 7 March 2017 at 21:24:43 UTC, Moritz Maxeiner wrote:
> Then we need to define "memory safe language" a lot stricter than it's currently being used, and both D and Rust won't qualify as memory safe (since you can write unsafe code in them).

D does not claim to be memory-safe everywhere. It does, afaik, claim so within @safe environments (barring internal runtime or compiler bugs, of course). Even C# takes the same approach of allowing "unsafe" environments.

>>> A formal, mathematical proof is impossible, yes, but if you have a large enough sample size of programs in a memory safe(r) language, *and* can verify that they are indeed memory correct (and thus not open to all the usual attack vectors), then that falls under what I'd categorize as "hard to refute". But you're right, I should've been more specific, my bad.
>>
>> Does anybody try to refute it? Safe languages are not rejected for their safety.
>
> Right now, of course not, since the burden of proof is on the side advocating memory safety (i.e. us).

I don't agree on the burden of proof. It is a safe assumption that if you increase safety checks, safety will improve; it cannot, and needn't, be proven. If someone proposed installing a railing in a stairway, or a fence along a railway, to reduce accidents, who would demand proof that it works? How would you even provide it? In a sandboxed parallel universe that we control as gods and can rewind in time? Because there is no other way.

Plus, statistics can prove nothing; this logical truth cannot be overstated. Even if you invested, for the sake of an experiment, in setting up a race between huge teams of equally qualified programmers given the exact same tasks, nothing could be truly proven. And here we're only talking about having more experience from completely different projects and developers across the evaluated languages or language families. Actually, we have quite a lot of experience already: by now Java, and later .NET, have been around for most of the time C++ has, just as an example.