May 19, 2017
On Friday, 19 May 2017 at 15:12:20 UTC, Steven Schveighoffer wrote:
> On 5/19/17 9:46 AM, Moritz Maxeiner wrote:
>> On Friday, 19 May 2017 at 11:53:57 UTC, Steven Schveighoffer wrote:
>
>>> This provides a foundation to build completely @safe libraries.
>>
>> Agreed if you mean libraries being marked completely as @safe (which I
>> assume).
>> Disagreed if you mean libraries that are proven to never corrupt memory
>> (not possible with unsafe operating system).
>
> I mean libraries which only contain @safe and @system calls.
>
> i.e.:
>
> $ grep -R '@trusted' libsafe | wc -l
> 0

In that case, agreed. There will be no need (with regards to memory safety) to audit such libraries.

>
>>> 2. @trusted blocks in any project need to be considered red flags. You
>>> should not need to audit @safe code.
>>
>> Yes you do, because it can call into @trusted like this:
>>
>> ---
>> void foo(int[] bar) @safe
>> {
>>   () @trusted {
>>     // Exploitable code here
>>   }();
>> }
>> ---
>>
>> You *must* audit third party @safe code for such hidden @trusted code
>> (e.g. grep recursively through such third party code for @trusted and
>> verify).
>
> This is what I mean by auditing @trusted when it interacts with @safe code.

Ok, I was not sure what exactly you meant (because interaction is a broad concept), so I was explicit, as I did not want to assume.

>
> Using your example, if we confirm that no matter how you call foo, the @trusted block cannot break memory safety, then foo becomes a verified @safe function. Then any @safe function that calls foo can be considered @safe without auditing.

Yes (assuming any such @safe function does not call another @trusted function that wasn't verified).

>
> This is actually a necessity, because templates can infer safety, so you may not even know the call needs to be audited. The most dangerous thing I think is to have @trusted blocks which use templated types.
>
> A real example recently was a PR that added @safe to a function, and made the following call:
>
> () @trusted { return CreateDirectoryW(pathname.tempCStringW(), null); }
>
> Where pathname was a range. The trusted block is to allow the call to CreateDirectoryW, but inadvertently, you are trusting all the range functions inside pathname, whatever that type is!
>
> The correct solution is:
>
> auto cstr = pathname.tempCStringW();
> () @trusted { return CreateDirectoryW(cstr, null); }

Interesting case. I have not encountered anything like it before, since all the code I mark @trusted uses OS-provided structs, primitive types, or arrays (nothing fancy like ranges).
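
To make the hazard concrete, here is a minimal self-contained sketch (hypothetical stand-ins: cFunc for a @system C API like CreateDirectoryW, toTempBuffer for conversion code with inferred safety like tempCStringW):

---
import std.array : array;
import std.utf : byChar;

// Stand-in for an extern(C) API; @system like any such prototype.
@system int cFunc(const(char)* p) { return p !is null; }

// Templated conversion helper; its safety is inferred per instantiation
// from whatever range primitives R supplies.
auto toTempBuffer(R)(R r) { return r.byChar.array ~ '\0'; }

int wrong(R)(R pathname) @safe
{
    // BAD: the @trusted block also covers toTempBuffer(pathname),
    // so R's range code is trusted without any checking.
    return () @trusted { return cFunc(toTempBuffer(pathname).ptr); }();
}

int right(R)(R pathname) @safe
{
    // GOOD: the conversion runs under normal @safe checking;
    // only the single C call sits inside the @trusted block.
    auto buf = toTempBuffer(pathname);
    return () @trusted { return cFunc(buf.ptr); }();
}

unittest
{
    assert(right("tmp") == 1);
}
---

With a plain string both variants compile, but hand wrong() a range whose primitives are @system and it still compiles silently, while right() is rejected until that range code is actually safe.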

>
> So yes, if the third party has @trusted code, you need to audit it. But once you have audited the *block* of trusted code, and how it interacts within its function, you can consider the calling @safe functions actually safe without auditing.

Indeed (again, though, only if that calling @safe function doesn't call some other @trusted you forgot to audit).

>
>>> If we get into "@safe really means @trusted" territory, we have lost.
>>
>> For code that you write yourself, @safe means @safe, of course. For code
>> other people write and you want to call, it being marked @safe does
>> really mean @trusted as long as you yourself have not looked inside it
>> and verified there either is no hidden @trusted, or verified *yourself*
>> that the hidden @trusted is memory safe.
>> I consider any other behaviour to be negligent to the degree of "you
>> don't actually care about memory safety at all".
>
> I think there will be a good market for separating libraries between @trusted-containing libraries, and only @safe-containing libraries.

I totally agree. Unfortunately, most of what I use D for is replacing C in parts where I have to directly interface with C, specifically glibc-wrapped syscalls.
I considered calling the syscalls directly and bypassing C completely, but since Linux syscalls are often actually serviced in userspace without a context switch (vDSO), the code is not trivial, and I decided to just interface with the C wrappers.
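
For illustration, the shape this takes (a trimmed sketch, not my actual code; dup(2) chosen for brevity):

---
// glibc wrapper around the dup(2) syscall; declared @system.
extern (C) @system int dup(int fd);

int duplicateFd(int fd) @trusted
{
    // Small enough to verify by hand: only an int crosses the
    // C boundary, so nothing here can corrupt memory.
    return dup(fd);
}

void main() @safe
{
    assert(duplicateFd(0) != -1); // duplicate stdin
}
---

The point is to keep the @trusted surface exactly one call wide.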

> This will make the auditing more focused, and more shareable.

Sure.

> I don't expect people to use Phobos and audit all the @trusted blocks personally.

As long as they don't actually call them, that's reasonable. But if your application ends up calling @trusted code and you did not audit that @trusted yourself, you have violated the @trusted requirement:
You cannot promise to the compiler that the code is memory safe since you have no knowledge of what it actually does.

> If "D is  memory safe" means "D is memory safe ONLY if you verify all of the standard library personally", we still have lost.

It is more like "D is memory safe" meaning "D is memory safe ONLY if you verify all of the @trusted code your application ends up compiling in / linking against".
There is no way around that I can see without getting rid of @trusted, which is impossible for a systems PL.
May 19, 2017
On 19.05.2017 17:12, Steven Schveighoffer wrote:
>
> I mean libraries which only contain @safe and @system calls.
>
> i.e.:
>
> $ grep -R '@trusted' libsafe | wc -l
> 0

mixin("@"~"trusted void nasty(){ corruptAllTheMemory(); }");
May 19, 2017
On Friday, 19 May 2017 at 16:29:59 UTC, Timon Gehr wrote:
> On 19.05.2017 17:12, Steven Schveighoffer wrote:
>>
>> I mean libraries which only contain @safe and @system calls.
>>
>> i.e.:
>>
>> $ grep -R '@trusted' libsafe | wc -l
>> 0
>
> mixin("@"~"trusted void nasty(){ corruptAllTheMemory(); }");

dmd -vcg-ast *.d

(That dumps each module as fully analyzed source, string mixins already expanded, so generated @trusted cannot hide from grep.)

May 19, 2017
On 5/19/17 12:29 PM, Timon Gehr wrote:
> On 19.05.2017 17:12, Steven Schveighoffer wrote:
>>
>> I mean libraries which only contain @safe and @system calls.
>>
>> i.e.:
>>
>> $ grep -R '@trusted' libsafe | wc -l
>> 0
>
> mixin("@"~"trusted void nasty(){ corruptAllTheMemory(); }");

Yeah. There's that. But I think we're OK even with that loophole :)

-Steve
May 19, 2017
On Friday, 19 May 2017 at 15:52:52 UTC, Moritz Maxeiner wrote:
> On Friday, 19 May 2017 at 15:12:20 UTC, Steven Schveighoffer
>> I don't expect people to use Phobos and audit all the @trusted blocks personally.
>
> As long as they don't actually call them, that's reasonable. But if your application ends up calling @trusted code and you did not audit that @trusted yourself, you have violated the @trusted requirement:
> You cannot promise to the compiler that the code is memory safe since you have no knowledge of what it actually does.
No. @trusted is about trust: you cannot rely on the compiler to verify it, but the code is reviewed by humans. So there is a list of reviewers, and if this list contains some names you happen to trust (sic!), you don't have to audit the code yourself.
Basic libraries especially will over time be tested and audited by very many people or even organizations. So after some time they really can be trusted.

>> If "D is  memory safe" means "D is memory safe ONLY if you verify all of the standard library personally", we still have lost.
>
> It is more like "D is memory safe" meaning "D is memory safe ONLY if you verify all of the @trusted code your application ends up compiling in / linking against".
> There is no way around that I can see without getting rid of @trusted, which is impossible for a systems PL.
For bigger projects you always need to trust in some previous work. But having the @trusted and @safe mechanism makes the resulting code a whole lot more trustworthy than any C library can ever be - just by reducing the number of lines of code that really need to be audited.
I personally would not go beyond probing a few functions within a library which I think are more complicated and fragile, and if I find them ok, my trust in whatever else the authors have marked @trusted increases likewise.



May 19, 2017
On Friday, 19 May 2017 at 17:21:23 UTC, Dominikus Dittes Scherkl wrote:
> On Friday, 19 May 2017 at 15:52:52 UTC, Moritz Maxeiner wrote:
>> On Friday, 19 May 2017 at 15:12:20 UTC, Steven Schveighoffer
>>> I don't expect people to use Phobos and audit all the @trusted blocks personally.
>>
>> As long as they don't actually call them, that's reasonable. But if your application ends up calling @trusted code and you did not audit that @trusted yourself, you have violated the @trusted requirement:
>> You cannot promise to the compiler that the code is memory safe since you have no knowledge of what it actually does.
> No. @trusted is about trust: you cannot rely on the compiler to verify it, but the code is reviewed by humans.

Precisely. It is about trust the compiler extends to you, the programmer, instead of a mechanical proof (@safe):

"Trusted functions are guaranteed by the programmer to not exhibit any undefined behavior if called by a safe function. Generally, trusted functions should be kept small so that they are easier to manually verify." [1]

If you write an application that uses @trusted code - even from a third party library - *you* are the programmer that the compiler extends the trust to.


> So there is a list of reviewers, and if this list contains some names you happen to trust (sic!), you don't have to audit the code yourself.

Trust, but verify: Considering the damage already caused via memory corruption, I would argue that even if you have a list of people you trust to both write @trusted and review @trusted code (both of which are fine imho), reviewing such code yourself (when writing an application) is the prudent (and sane) course of action.


> Basic libraries especially will over time be tested and audited by very many people or even organizations. So after some time they really can be trusted.

Absolutely not. This kind of mentality is what allowed bugs like heartbleed to rot for years[2], or even decades[3]. Unsafe code can never be *inherently* trusted.

>
>>> If "D is  memory safe" means "D is memory safe ONLY if you verify all of the standard library personally", we still have lost.
>>
>> It is more like "D is memory safe" meaning "D is memory safe ONLY if you verify all of the @trusted code your application ends up compiling in / linking against".
>> There is no way around that I can see without getting rid of @trusted, which is impossible for a systems PL.
> For bigger projects you always need to trust in some previous work.

Not really. You can always verify any @trusted code (and if the amount of @trusted code you have to verify is large, then I argue that you are using the wrong previous work with regards to memory safety).

> But having the @trusted and @safe mechanism makes the resulting code a whole lot more trustworthy than any C library can ever be - just by reducing the number of lines of code that really need to be audited.

I agree with that viewpoint (and wrote about the reduced auditing work previously in this conversation), but the quote you responded to here was about using D in general being memory safe (which is binary "yes/no"), not any particular library's degree of trustworthiness with regards to memory safety (which is a continuous scale).

> I personally would not go beyond probing a few functions within a library which I think are more complicated and fragile, and if I find them ok, my trust in whatever else the authors have marked @trusted increases likewise.

That is your choice, but the general track record of trusting others to get it right without verifying it yourself remains atrocious, and I would still consider you negligent for doing so. While in C one has historically had little other choice - since without a @safe concept the amount of code one would have to verify reaches gargantuan size - in D we can (and imho should) have only small amounts of @trusted code.

[1] https://dlang.org/spec/function.html#trusted-functions
[2] https://www.theguardian.com/technology/2014/apr/11/heartbleed-developer-error-regrets-oversight
[3] https://www.theguardian.com/technology/2014/jun/06/heartbleed-openssl-bug-security-vulnerabilities
May 19, 2017
On Friday, 19 May 2017 at 20:19:46 UTC, Moritz Maxeiner wrote:
> On Friday, 19 May 2017 at 17:21:23 UTC, Dominikus Dittes Scherkl wrote:
>>> You cannot promise to the compiler that the code is memory safe since you have no knowledge of what it actually does.
>> No. @trusted is about trust: you cannot rely on the compiler to verify it, but the code is reviewed by humans.
>
> Precisely. It is about trust the compiler extends to you, the programmer, instead of a mechanical proof (@safe):
>
> "Trusted functions are guaranteed by the programmer to not exhibit any undefined behavior if called by a safe function. Generally, trusted functions should be kept small so that they are easier to manually verify." [1]
I take this to mean the programmer who wrote the library, not every user of the library. Ok, it's better the more people have checked it, but it need not always be me.
Hm - we should have some mechanism to add to some list of people who already trust the code because they checked it.
>
> If you write an application that uses @trusted code - even from a third party library - *you* are the programmer that the compiler extends the trust to.
This is not my point of view. Especially if I had paid for some library, even legally it's not my fault if it fails. For public domain, OK, the end user is theoretically responsible for everything that goes wrong, but even there nobody can check everything or even a relevant portion of it.

> Trust, but verify: Considering the damage already caused via memory corruption, I would argue that even if you have a list of people you trust to both write @trusted and review @trusted code (both of which are fine imho), reviewing such code yourself (when writing an application) is the prudent (and sane) course of action.
This is infeasible even if @safe and @trusted reduce the Herculean task greatly.

>> Basic libraries especially will over time be tested and audited by very many people or even organizations. So after some time they really can be trusted.
>
> Absolutely not. This kind of mentality is what allowed bugs like heartbleed to rot for years[2], or even decades[3]. Unsafe code can never be *inherently* trusted.
In addition to @trusted, D has unittests that - in stark contrast to C - are run by most users. And especially @trusted functions have extensive tests - even more so if they ever showed some untrustworthy behaviour.
These growing unittest blocks indeed make older and more heavily used libraries more reliable, even if a function is changed (also in contrast to C, where a changed function starts again at zero trust, while a D function has to pass all the old unittests and therefore starts with a high trust level).
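
To sketch what I mean (a made-up example, not from any real library):

---
// A @trusted function shipped together with its unittest; everyone
// building with -unittest (or running `dub test`) re-runs the check.
@trusted ubyte[] copyExactly(const(ubyte)* p, size_t n)
{
    // Reviewed promise: the caller guarantees p points to at least n bytes.
    return p[0 .. n].dup;
}

unittest
{
    ubyte[4] buf = [1, 2, 3, 4];
    assert(copyExactly(buf.ptr, 4) == buf[]);
}
---

Every release that keeps passing such a growing test suite adds to the trust level.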

>> For bigger projects you always need to trust in some previous work.
>
> Not really. You can always verify any @trusted code (and if the amount of @trusted code you have to verify is large, then I argue that you are using the wrong previous work with regards to memory safety).
Sorry. Reviewing everything you use is impossible. I just can't believe you if you claim to do so.

>> But having the @trusted and @safe mechanism makes the resulting code a whole lot more trustworthy than any C library can ever be - just by reducing the number of lines of code that really need to be audited.
>
> I agree with that viewpoint (and wrote about the reduced auditing work previously in this conversation), but the quote you responded to here was about using D in general being memory safe (which is binary "yes/no"), not any particular library's degree of trustworthiness with regards to memory safety (which is a continuous scale).
No. Declaring a function @safe is still no binary "yes". I don't believe in such absolute values. Clearly the likelihood of memory corruption will be orders of magnitude lower, but never zero. The compiler may have bugs, the system the software is running on will have bugs, even hardware failures are possible. Everything is about trust.

>> I personally would not go beyond probing a few functions within a library which I think are more complicated and fragile, and if I find them ok, my trust in whatever else the authors have marked @trusted increases likewise.
>
> That is your choice, but the general track record of trusting others to get it right without verifying it yourself remains atrocious, and I would still consider you negligent for doing so. While in C one has historically had little other choice - since without a @safe concept the amount of code one would have to verify reaches gargantuan size - in D we can (and imho should) have only small amounts of @trusted code.
Of course. And a decreasing amount. But what we have is already a huge step in the right direction.
We should live in reality. Everybody's time is scarce. So you can only ever spend your time checking the parts of the code which are most important to you and which you suspect the most. Claiming otherwise - believe it or not - makes you less trustworthy to me.

May 19, 2017
On Friday, 19 May 2017 at 20:54:40 UTC, Dominikus Dittes Scherkl wrote:
> On Friday, 19 May 2017 at 20:19:46 UTC, Moritz Maxeiner wrote:
>> On Friday, 19 May 2017 at 17:21:23 UTC, Dominikus Dittes Scherkl wrote:
>>>> You cannot promise to the compiler that the code is memory safe since you have no knowledge of what it actually does.
>>> No. @trusted is about trust: you cannot rely on the compiler to verify it, but the code is reviewed by humans.
>>
>> Precisely. It is about trust the compiler extends to you, the programmer, instead of a mechanical proof (@safe):
>>
>> "Trusted functions are guaranteed by the programmer to not exhibit any undefined behavior if called by a safe function. Generally, trusted functions should be kept small so that they are easier to manually verify." [1]
> I take this to mean the programmer who wrote the library, not every user of the library.

I take this to mean any programmer that ends up compiling it (if you use a precompiled version that you only link against that would be different).

> Hm - we should have some mechanism to add to some list of people who already trust the code because they checked it.

Are you talking about inside the code itself?
If so, I imagine digitally signing the function's source code (ignoring whitespace) and adding the signature to the docstring would do it.
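
A rough sketch of the digest step (the normalization is deliberately naive, and the actual signing of the digest, e.g. with a GPG key, is omitted):

---
import std.algorithm.iteration : filter;
import std.array : array;
import std.ascii : isWhite;
import std.digest.sha : sha256Of, toHexString;
import std.string : representation;

// Hash of the function's source with all whitespace stripped; this is
// the value a reviewer would sign and paste into the doc comment.
string reviewDigest(string source)
{
    auto normalized = source.representation.filter!(b => !isWhite(b)).array;
    return sha256Of(normalized).toHexString.idup;
}

unittest
{
    // Whitespace-only edits do not invalidate an existing review.
    assert(reviewDigest("void f() { }") == reviewDigest("void f()\n{\n}"));
}
---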

>>
>> If you write an application that uses @trusted code - even from a third party library - *you* are the programmer that the compiler extends the trust to.
> This is not my point of view. Especially if I had paid for some library, even legally it's not my fault if it fails.

It is mine, because even if you paid for the library, when you compile it, the compiler cannot know where the library came from. It only knows you (the programmer who invoked it), as the one it extends trust to. I am specifically not talking about what is legally your fault or not, because I consider that an entirely different matter.

> For public domain, OK, the end user is theoretically responsible for everything that goes wrong, but even there nobody can check everything or even a relevant portion of it.

That entirely depends on how much @trusted code you have.

>
>> Trust, but verify: Considering the damage already caused via memory corruption, I would argue that even if you have a list of people you trust to both write @trusted and review @trusted code (both of which are fine imho), reviewing such code yourself (when writing an application) is the prudent (and sane) course of action.
> This is infeasible even if @safe and @trusted reduce the Herculean task greatly.

I disagree.

>
>>> Basic libraries especially will over time be tested and audited by very many people or even organizations. So after some time they really can be trusted.
>>
>> Absolutely not. This kind of mentality is what allowed bugs like heartbleed to rot for years[2], or even decades[3]. Unsafe code can never be *inherently* trusted.
> In addition to @trusted, D has unittests that - in stark contrast to C - are run by most users. And especially @trusted functions have extensive tests - even more so if they ever showed some untrustworthy behaviour.
> These growing unittest blocks indeed make older and more heavily used libraries more reliable, even if a function is changed (also in contrast to C, where a changed function starts again at zero trust, while a D function has to pass all the old unittests and therefore starts with a high trust level).

Those are valuable tools, but they do not make any piece of code *inherently* trustworthy.

>
>>> For bigger projects you always need to trust in some previous work.
>>
>> Not really. You can always verify any @trusted code (and if the amount of @trusted code you have to verify is large, then I argue that you are using the wrong previous work with regards to memory safety).
> Sorry. Reviewing everything you use is impossible. I just can't believe you if you claim to do so.

I specifically stated reviewing any @trusted code, not all code.
And so far I have made no claim about not being negligent myself with regards to memory safety.

>
>>> But having the @trusted and @safe mechanism makes the resulting code a whole lot more trustworthy than any C library can ever be - just by reducing the number of lines of code that really need to be audited.
>>
>> I agree with that viewpoint (and wrote about the reduced auditing work previously in this conversation), but the quote you responded to here was about using D in general being memory safe (which is binary "yes/no"), not any particular library's degree of trustworthiness with regards to memory safety (which is a continuous scale).
> No. Declaring a function @safe is still no binary "yes".

Again, this was not about any particular function (or library), but about using D in general.

> I don't believe in such absolute values. Clearly the likelihood of memory corruption will be orders of magnitude lower, but never zero. The compiler may have bugs, the system the software is running on will have bugs, even hardware failures are possible. Everything is about trust.

I agree in principle, but the statement I responded to was "D is memory safe", which either does or does not hold. I also believe that considering the statement's truthfulness only makes sense under the assumption that nothing *below* it violates that, since the statement is about language theory.

>
>>> I personally would not go beyond probing a few functions within a library which I think are more complicated and fragile, and if I find them ok, my trust in whatever else the authors have marked @trusted increases likewise.
>>
>> That is your choice, but the general track record of trusting others to get it right without verifying it yourself remains atrocious, and I would still consider you negligent for doing so. While in C one has historically had little other choice - since without a @safe concept the amount of code one would have to verify reaches gargantuan size - in D we can (and imho should) have only small amounts of @trusted code.
> Of course. And a decreasing amount. But what we have is already a huge step in the right direction.

Yes, it is.

> We should live in reality. Everybody's time is scarce. So you can only ever spend your time checking the parts of the code which are most important to you and which you suspect the most.

Of course anyone can choose to check whatever they wish. That does not change what *I* consider negligent.

> Claiming otherwise - believe it or not - makes you less trustworthy to me.

Where did I claim otherwise? Errare humanum est, sed in errare perseverare diabolicum - to err is human, but to persist in error is diabolical.
In this context: It is one thing to be negligent (and I explicitly do not claim *not* to be negligent myself), but something completely different to pretend that being negligent is OK.
May 19, 2017
On Friday, 19 May 2017 at 22:06:59 UTC, Moritz Maxeiner wrote:
> On Friday, 19 May 2017 at 20:54:40 UTC, Dominikus Dittes Scherkl wrote:

>> I take this to mean the programmer who wrote the library, not every user of the library.
>
> I take this to mean any programmer that ends up compiling it (if you use a precompiled version that you only link against that would be different).
Why? Because you don't have the source? Go get the source - at least for open source projects this should be possible. I can't see the difference.

>> Hm - we should have some mechanism to add to some list of people who already trust the code because they checked it.
>
> Are you talking about inside the code itself?
> If so, I imagine digitally signing the function's source code (ignoring whitespace) and adding the signature to the docstring would do it.
Yeah. But I mean: we need such a mechanism in the D review process. It would be nice to have something standardized, so that if I have checked something and found it really trustworthy, I can make that public, so that everybody can see who already checked the code - and maybe concentrate on reviewing something that has not yet been reviewed by many people, or not by anybody they trust most.

>> This is not my point of view. Especially if I had paid for some library, even legally it's not my fault if it fails.
>
> It is mine, because even if you paid for the library, when you compile it, the compiler cannot know where the library came from.
Yeah, but you (should) do. For me it doesn't matter who actually compiled the code - if anything, my trust would be higher if I compiled it myself, because I don't know what compiler or what settings have been used for a pre-compiled library.

>[the compiler] only knows you (the programmer who invoked it), as the
> one it extends trust to.
The compiler "trusts" anybody using it. This is of no value. The important thing is who YOU trust. Or who you want the user of your program to trust.
Oftentimes it may be more convincing to the user of your program if you want them to trust company X where you bought some library from than trusting in your own ability to prove the memory safety of the code build upon this - no matter if you compiled the library yourself or have it be done by company X.

> I am specifically not talking about what is legally your fault or not, because I consider that an entirely different matter.
Different matter, but same chain of trust.

>> nobody can check everything or even a relevant portion of it.
> That entirely depends on how much @trusted code you have.
Of course.
But no matter how glad I would be to be able to check e.g. my operating system for memory safety, even if only 1% of its code were merely @trusted instead of @safe, it would still be too much for me.
This is only feasible if you shrink your view far enough.

And conversely: the more code is @safe, the further I can expand my checking activities - but I still don't believe I will ever be able to check everything.

> I specifically stated reviewing any @trusted code, not all code.
Yes. Still too much, I think.

> I agree in principle, but the statement I responded to was "D is memory safe", which either does or does not hold.
And I say: No, D is not memory safe. In practice. Good, but not 100%.

> I also believe that considering the statement's truthfulness only makes sense under the assumption that nothing *below* it violates that, since the statement is about language theory.
Ok, this is what I mean by "shrinking your view until it's possible to check everything" - or being able to prove something, in this case. But by doing so you also neglect things. Many things.

> Of course anyone can choose to check whatever they wish. That does not change what *I* consider negligent.
But neglecting is a necessity. Your view is reduced to the D specification in order to make statements about it in language theory, where you check everything - and you thereby decide to neglect everything else below that, including the buggy implementation of that spec running on a buggy OS on buggy hardware.

> In this context: It is one thing to be negligent (and I explicitly do not claim *not* to be negligent myself), but something completely different to pretend that being negligent is OK.
It's not only ok. It's a necessity. The necessity of a limited being in an infinite universe.
We can only hope not to neglect the important things - and trust in others is one way to increase the number of things we can hope to be ok. Making things @safe instead of only @trusted is another way. Both enlarge our view greatly. But trying to check everything yourself is still in vain.

May 20, 2017
On Friday, 19 May 2017 at 23:56:55 UTC, Dominikus Dittes Scherkl wrote:
> On Friday, 19 May 2017 at 22:06:59 UTC, Moritz Maxeiner wrote:
>> On Friday, 19 May 2017 at 20:54:40 UTC, Dominikus Dittes Scherkl wrote:
>
>>> I take this to mean the programmer who wrote the library, not every user of the library.
>>
>> I take this to mean any programmer that ends up compiling it (if you use a precompiled version that you only link against that would be different).
> Why? Because you don't have the source? Go get the source - at least for open source projects this should be possible. I can't see the difference.

Because imo the specification statement covers compiling @trusted code, not linking to already compiled @trusted code, so I excluded the latter from my statement.
Whether you can get the source is not pertinent to the statement.

>
>>> Hm - we should have some mechanism to add to some list of people who already trust the code because they checked it.
>>
>> Are you talking about inside the code itself?
>> If so, I imagine digitally signing the function's source code (ignoring whitespace) and adding the signature to the docstring would do it.
> Yeah. But I mean: we need such a mechanism in the D review process. It would be nice to have something standardized, so that if I have checked something and found it really trustworthy, I can make that public, so that everybody can see who already checked the code - and maybe concentrate on reviewing something that has not yet been reviewed by many people, or not by anybody they trust most.

Well, are you electing yourself to be the champion of this? Because I don't think it will happen without one.

>
>>[the compiler] only knows you (the programmer who invoked it), as the
>> one it extends trust to.
> The compiler "trusts" anybody using it. This is of no value.

The compiler extends trust to whoever invokes it, that is correct (and what I wrote).
That person then manages that trust further, either explicitly or implicitly.
You can, obviously, manage that trust however you see fit, but *I* will still consider it negligence if you - as the author of some application - have not verified all @trusted code you use.

> The important thing is who YOU trust. Or who you want the user of your program to trust.
> Oftentimes it may be more convincing to the user of your program if you ask them to trust company X, where you bought some library from, rather than trusting in your own ability to prove the memory safety of the code built upon it - no matter if you compiled the library yourself or had it done by company X.

And MY trust is not transitive. If I trust person A, and A trusts person B, I would NOT implicitly trust person B. As such, if A wrote me a @safe application that uses @trusted code written by B, and A told me that he/she/it had not verified B's code, I would consider A to be negligent.

>
>> I am specifically not talking about what is legally your fault or not, because I consider that an entirely different matter.
> Different matter, but same chain of trust.
>
>>> nobody can check everything or even a relevant portion of it.
>> That entirely depends on how much @trusted code you have.
> Of course.
> But no matter how glad I would be to be able to check e.g. my operating system for memory safety, even if only 1% of its code were merely @trusted instead of @safe, it would still be too much for me.
> This is only feasible if you shrink your view far enough.

What you call shrinking your view, information science calls using the appropriate abstraction for the problem domain: it makes absolutely no sense whatsoever to even talk about the memory safety of a programming language if the infrastructure below it is not excluded from the view:

If you do not use the appropriate abstraction, you can always say: high enough radiation can just randomly flip bits in your computer and you're screwed, so memory safety does not exist. That, while true in practice, is of no help when designing applications that will run on computers not exposed to excessive radiation, so you exclude it.

>
> And conversely: the more code is @safe, the further I can expand my checking activities - but I still don't believe I will ever be able to check everything.

Again: I specifically wrote about @trusted code, not all code.

>
>> I specifically stated reviewing any @trusted code, not all code.
> Yes. Still too much, I think.

I do not.

>
>> I agree in principle, but the statement I responded to was "D is memory safe", which either does or does not hold.
> And I say: No, D is not memory safe. In practice. Good, but not 100%.
>
>> I also believe that considering the statement's truthfulness only makes sense under the assumption that nothing *below* it violates that, since the statement is about language theory.
> Ok, this is what I mean by "shrinking your view until it's possible to check everything" - or being able to prove something, in this case. But by doing so you also neglect things. Many things.

As stated above, this is choosing the appropriate abstraction for the problem domain. I know what can go wrong in the lower layers, but all of that is irrelevant to the problem domain of a programming language. It only becomes relevant again when you ask "Will my @safe D program (which is memory safe, I verified all @trusted myself!) be memory safe when running on this specific system?", to which the generic answer is "it depends".

>
>> Of course anyone can choose to check whatever they wish. That does not change what *I* consider negligent.
> But neglecting is a necessity.

It may be a necessity for you (and, I assume, probably even for most programmers), but that does not make it generally true.

>
>> In this context: It is one thing to be negligent (and I explicitly do not claim *not* to be negligent myself), but something completely different to pretend that being negligent is OK.
> It's not only ok. It's a necessity.

First, something being a necessity does not make it OK. Second: Whether it is a necessity is once more use case dependent.

> The necessity of a limited being in an infinite universe.

The amount of @trusted code is limited and thus the time needed to review it is as well.