March 08, 2017
On 07.03.2017 16:48, Kagamin wrote:
> On Monday, 6 March 2017 at 21:05:13 UTC, Timon Gehr wrote:
>> Not every program with a wrong assertion in it exceeds array bounds.
>
> Until it does.

Not necessarily so. With -release, it will be able to both exceed and not exceed array bounds at the same time in some circumstances; that is what undefined behavior buys you.

What I'm not buying is that the existence of UB in some circumstances justifies introducing more cases where UB is unexpectedly introduced. It's a continuum. Generally, if you add more failure modes, you will have more exploits.

I might need to point out that -release does not disable bounds checking in @safe code while it has been stated that -release introduces UB for assertion failures in @safe code.

There is no flag for disabling assertion/contract checking without potentially introducing new UB.

Why is this the best possible situation?
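
To make the contrast concrete, here is a minimal sketch (compiler behavior as described in this thread; the function itself is made up):

---
@safe int get(int[] arr, size_t i)
{
    // Without -release: checked at runtime, throws AssertError on failure.
    // With -release: the check is removed, and a failed condition becomes
    // undefined behavior, i.e. the optimizer may assume it always holds.
    assert(i < arr.length);

    // The bounds check on this access, by contrast, stays in @safe code
    // even under -release.
    return arr[i];
}
---

So the same flag that keeps the bounds check also turns the assertion above it into a potential source of UB.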
March 08, 2017
On 3/8/2017 5:56 AM, Moritz Maxeiner via Digitalmars-d wrote:
> On Wednesday, 8 March 2017 at 13:30:42 UTC, XavierAP wrote:
>> On Wednesday, 8 March 2017 at 12:42:37 UTC, Moritz Maxeiner wrote:
>>> Doing anything else is reckless endangerment since it gives you the feeling of being safe without actually being safe. Like using @safe in D, or Rust, and being unaware of unsafe code hidden from you behind "safe" facades.
>>
>> Safe code should be unable to call unsafe code -- including interop with any non-D or binary code, here I agree. I was supposing this is already the case in D but I'm not really sure.
>
> You can hide unsafe code in D by annotating a function with @trusted the same way you can hide unsafe code in Rust with unsafe blocks.

Clearly marked is an interesting definition of hidden.
March 08, 2017
On Wednesday, 8 March 2017 at 15:48:47 UTC, Timon Gehr wrote:
> On 07.03.2017 16:48, Kagamin wrote:
> I might need to point out that -release does not disable bounds checking in @safe code while it has been stated that -release introduces UB for assertion failures in @safe code.
>
> There is no flag for disabling assertion/contract checking without potentially introducing new UB.
>
> Why is this the best possible situation?

Even with a failed assertion, I believe @safe still guarantees that no memory violations will happen. The program will go awry, but it will merely misbehave: it won't stomp on memory that might belong to another type, or on executable code. I believe that's why it's done the way it is.


March 08, 2017
On Wednesday, 8 March 2017 at 19:21:58 UTC, Dukc wrote:
> On Wednesday, 8 March 2017 at 15:48:47 UTC, Timon Gehr wrote:
>> On 07.03.2017 16:48, Kagamin wrote:
>> [snip]

Sorry, I accidentally attributed that quote to the wrong person.


March 08, 2017
On Wednesday, 8 March 2017 at 17:40:29 UTC, Brad Roberts wrote:
> [...]
>>
>> You can hide unsafe code in D by annotating a function with @trusted the same way you can hide unsafe code in Rust with unsafe blocks.
>
> Clearly marked is an interesting definition of hidden.

---
module mymemorysafexyzlibrary;

import core.stdc.stdlib : malloc;

struct Context { /* ... */ }

@safe
Context* createContextSafely()
{
    return () @trusted {
        // What's gonna happen if you use this?
        // Ask your memory allocation professional
        // (note the deliberate off-by-one in the allocation size)
        void* foo = malloc(Context.sizeof - 1);
        return cast(Context*) foo;
    }();
}
---

The operative word here being "can". The above is semantically equivalent (assuming the delegate gets optimized out) to an unsafe block inside a Rust function. And yes, that is what I consider hidden unsafe code: it means that if you call a function `bar` from a @safe function `foo`, `bar` being marked @safe does not save you from auditing `bar`'s source code.
March 08, 2017
On Wednesday, 8 March 2017 at 21:02:23 UTC, Moritz Maxeiner wrote:
> On Wednesday, 8 March 2017 at 17:40:29 UTC, Brad Roberts wrote:
>> [...]
>>>
>>> You can hide unsafe code in D by annotating a function with @trusted the same way you can hide unsafe code in Rust with unsafe blocks.
>>
>> Clearly marked is an interesting definition of hidden.
>
> The operative word here being "can". The above is semantically equivalent (assuming the delegate gets optimized out) to an unsafe block inside a Rust function. And yes, that is what I consider hidden unsafe code: it means that if you call a function `bar` from a @safe function `foo`, `bar` being marked @safe does not save you from auditing `bar`'s source code.

Indeed, safety isn't transitive as I had thought. @safe may call @trusted, which may wrap any unsafe implementation as long as its external interface is safe. I suppose it was decided back then that the opposite would be too restrictive. Truly safe client code can then rely on trust established from the bottom up, originating in systems-level unsafe code that is, hopefully, long-lived, stable, and better tested (even if manually lol).

If client code, which is often rapidly updated, scarcely tested, and under pressure from feature creep, is written in @safe D, that can still reduce the number of failure modes.

Also at least as of 2010 Andrei's book stated that "At the time of this writing, SafeD is of alpha quality -- meaning that there may be unsafe programs [@safe code blocks] that pass compilation, and safe programs that don't -- but is an area of active development." And 7 years later in this forum I'm hearing many screams for @nogc but little love for @safe...
March 08, 2017
On Wednesday, 8 March 2017 at 22:38:24 UTC, XavierAP wrote:
> On Wednesday, 8 March 2017 at 21:02:23 UTC, Moritz Maxeiner wrote:
>> [...]
>>
>> The operative word here being "can". The above is semantically equivalent (assuming the delegate gets optimized out) to an unsafe block inside a Rust function. And yes, that is what I consider hidden unsafe code: it means that if you call a function `bar` from a @safe function `foo`, `bar` being marked @safe does not save you from auditing `bar`'s source code.
>
> Indeed, safety isn't transitive as I had thought. @safe may call @trusted, which may wrap any unsafe implementation as long as its external interface is safe. I suppose it was decided back then that the opposite would be too restrictive. Truly safe client code can then rely on trust established from the bottom up, originating in systems-level unsafe code that is, hopefully, long-lived, stable, and better tested (even if manually lol).

If the use case has no problem with that kind of trust, indeed. Unfortunately, even long-established and presumably stable C APIs have tended to turn into horrible nightmares on many an occasion (*cough* openssl *cough*), so this will need to be evaluated on a project-by-project, dependency-by-dependency basis imho.

>
> If client code, which is often rapidly updated, scarcely tested, and under pressure from feature creep, is written in @safe D, that can still reduce the number of failure modes.

I don't disagree with that. Writing your own code in @safe has considerable advantages (first and foremost personal peace of mind :) ). It's just that other people writing their code in @safe does not provide you as a potential user of their code with any guarantees. You need to either extend those people the exact kind of trust you would if they had written their code in @system, or audit their code. It does make auditing considerably faster, though, since you can search for all instances of @trusted and evaluate their internals and how they're being interfaced with (i.e. you can omit auditing @safe functions that don't call @trusted functions).

>
> Also at least as of 2010 Andrei's book stated that "At the time of this writing, SafeD is of alpha quality -- meaning that there may be unsafe programs [@safe code blocks] that pass compilation, and safe programs that don't -- but is an area of active development." And 7 years later in this forum I'm hearing many screams for @nogc but little love for @safe...

Well, I can't speak for others, but I generally just use the GC for most things (which is memory safe by definition, implementation bugs aside), and when I do need to step outside it I use scope guards and reference counting, and run valgrind (the only annoying part about valgrind with D is that it always reports some 96 bytes as possibly lost, which you have to suppress). Also, when I look at the list of operations forbidden in @safe code [1], I don't see anything I actually do anyway, so the current implementation status of @safe has so far not been a particular concern of mine.

[1] https://dlang.org/spec/function.html#safe-functions
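
For illustration, the pattern described above might look like this (a hedged sketch; the function, buffer, and size are made up):

---
import core.stdc.stdlib : free, malloc;

void useBuffer()
{
    void* buf = malloc(1024);
    if (buf is null) return;
    scope(exit) free(buf); // runs on every exit path, including exceptions

    // ... work with buf ...
}
---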
March 08, 2017
On Wed, Mar 08, 2017 at 10:38:24PM +0000, XavierAP via Digitalmars-d wrote: [...]
> Also at least as of 2010 Andrei's book stated that "At the time of this writing, SafeD is of alpha quality -- meaning that there may be unsafe programs [@safe code blocks] that pass compilation, and safe programs that don't -- but is an area of active development." And 7 years later in this forum I'm hearing many screams for @nogc but little love for @safe...

To be fair, though, in the past several months Walter has merged quite a number of PRs to dmd that close many of the holes found in @safe.  I don't think we can say @safe is bulletproof yet, but it would be unfair to say that no progress has been made.


T

-- 
Leather is waterproof.  Ever see a cow with an umbrella?
March 09, 2017
On Wednesday, 8 March 2017 at 15:48:47 UTC, Timon Gehr wrote:
> What I'm not buying is that the existence of UB in some circumstances justifies introducing more cases where UB is unexpectedly introduced. It's a continuum. Generally, if you add more failure modes, you will have more exploits.

With buffer overflows you're already sort of screwed, so treating assertions as assumes doesn't really change the picture. If you chose UB yourself, why would you care? Performance obviously took precedence.

> I might need to point out that -release does not disable bounds checking in @safe code while it has been stated that -release introduces UB for assertion failures in @safe code.

UB in safe code doesn't sound good no matter the cause.