February 06, 2015
> I was referring to a hypothetical "untrusted" block that might be used something like this:
>
> ---
> void foo(Range)(Range r) @trusted {
>   // ...
>
>   untrusted {
>     r.front;
>   }
>
>   // Your manually checked code.
>
>   untrusted {
>     r.popFront;
>   }
>
>   // …
> }
> ---

Under the current semantics we must not mark foo @trusted if r.front and
r.popFront aren't @safe. With the proposed @safe blocks (are those untrusted blocks the same thing?) we could guarantee it, by wrapping the uses of r.front and r.popFront in @safe blocks so that the compiler checks them.

This is limiting, because now we cannot provide a foo marked @system for all @system ranges without boilerplate or duplicating the function.
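To make that duplication concrete, something along these lines seems to be needed today (just a sketch; hasSafePrimitives is a hypothetical helper):

---
// Sketch of the duplication under current semantics; hasSafePrimitives
// only checks whether the range primitives are usable from @safe code.
enum hasSafePrimitives(Range) =
    __traits(compiles, (Range r) @safe { auto e = r.front; r.popFront(); });

void foo(Range)(Range r) @trusted
    if (hasSafePrimitives!Range)
{
    // manually verified body using r.front / r.popFront
}

void foo(Range)(Range r) @system
    if (!hasSafePrimitives!Range)
{
    // the same body, duplicated
}
---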

Correct?




February 06, 2015
On Friday, 6 February 2015 at 17:36:27 UTC, Atila Neves wrote:
>> I'm trying to promote the suggestion of '@system' blocks instead of '@trusted' ones: '@trusted' functions, but '@system' blocks, which can only go in @trusted functions (@system blocks in @system functions are redundant). It's the same semantics, but it might win the day, because the intent is to isolate the @system code while still presenting a @trusted interface, as seems so important to the leadership.
>
> That might be better than using @safe inside @trusted:
>
> @trusted void myfunc() {
>     // implicitly safe
>     ...
>     @system { // wouldn't compile otherwise
>         auto ptr = cast(ubyte*)4;
>     }
>
>     // implicitly safe again
> }

Exactly. I think this addresses the concerns. If I read Walter's OP correctly, it's precisely the use of the word '@trusted' that he opposes, unless it's built into an interface like a function signature. Also, a @system block could be one statement long, if I'm not mistaken, in which case the above could look like:

@trusted void myfunc() {
    // implicitly safe
    ...
    @system auto ptr = cast(ubyte*)4;

    // implicitly safe again
}
February 06, 2015
On Friday, 6 February 2015 at 18:30:03 UTC, Zach the Mystic wrote:
>> That might be better than using @safe inside @trusted:
>>
>> @trusted void myfunc() {
>>     // implicitly safe
>>     ...
>>     @system { // wouldn't compile otherwise
>>         auto ptr = cast(ubyte*)4;
>>     }
>>
>>     // implicitly safe again
>> }
>
> Exactly. I think this addresses the concerns. If I read Walter's OP correctly, it's precisely the use of the word '@trusted' that he opposes, unless it's built into an interface like a function signature. Also, a @system block could be one statement long, if I'm not mistaken, in which case the above could look like:
>
> @trusted void myfunc() {
>     // implicitly safe
>     ...
>     @system auto ptr = cast(ubyte*)4;
>
>     // implicitly safe again
> }

If this is a real solution, it's kind of exciting! :-)
February 06, 2015
On 2/6/15 9:28 AM, David Nadlinger wrote:
> On Friday, 6 February 2015 at 17:16:19 UTC, Andrei Alexandrescu wrote:
>> On 2/6/15 8:40 AM, David Nadlinger wrote:
>>> This still does not solve the template inference problem
>>
>> What is that?
>
> See my reply to Tobias [1]. This seems to be the crux of our
> disagreement and/or misunderstanding, and is precisely the reason why I
> recommended that you try your hand at rewriting some of the std.array
> range algorithms yourself in the Bugzilla discussion. Let me know if the
> explanation in said post is not clear enough.

It's clear. I just don't think it's a good point. -- Andrei

February 06, 2015
On 2/6/15 9:41 AM, Vladimir Panteleev wrote:
> On Friday, 6 February 2015 at 16:19:00 UTC, Andrei Alexandrescu wrote:
>> On 2/6/15 5:13 AM, Vladimir Panteleev wrote:
>>> That doesn't answer my question.
>>>
>>> A few years ago, I recall, you were arguing that for functions which are
>>> or may be exported to a DLL, thus all Phobos functions, it is impossible
>>> to predict how the functions will be used. Thus, you argued, the
>>> functions' input has to be validated, even if invalid parameters can
>>> only be passed to the function as a result of a program bug, and never
>>> as a result of user input.
>>>
>>> So, to repeat my question: which one is it? Have you changed your mind,
>>> or are there exceptions to the rules in the post you quoted?
>>
>> Could you all please grant me this wish - let's not get into that
>> vortex again? It renders everyone involved unproductive for days on
>> end. Thanks. -- Andrei
>
> What is the problem? Sorry if my post sounded confrontational, but I
> wasn't going to argue - I just want to understand the language
> designers' current position on this topic.

I was joking - whenever Walter gets into the assert vs. enforce distinction, the world economy shrinks by 1%. -- Andrei

February 06, 2015
On Friday, 6 February 2015 at 18:39:28 UTC, Andrei Alexandrescu wrote:
> It's clear. I just don't think it's a good point. -- Andrei

I'm not making a point; I'm posing a problem. What is your solution?

David
February 06, 2015
On Friday, 6 February 2015 at 17:50:05 UTC, Tobias Pankrath wrote:
>> I was referring to a hypothetical "untrusted" block that might be used something like this:
>>
>> ---
>> void foo(Range)(Range r) @trusted {
>>  // ...
>>
>>  untrusted {
>>    r.front;
>>  }
>>
>>  // Your manually checked code.
>>
>>  untrusted {
>>    r.popFront;
>>  }
>>
>>  // …
>> }
>> ---
>
>> Under the current semantics we must not mark foo @trusted if r.front and
>> r.popFront aren't @safe. With the proposed @safe blocks (are those untrusted blocks the same thing?) we could guarantee it, by wrapping the uses of r.front and r.popFront in @safe blocks so that the compiler checks them.
>
>> This is limiting, because now we cannot provide a foo marked @system for all @system ranges without boilerplate or duplicating the function.
>
> Correct?

Yes.

The "untrusted" blocks were an ad-hoc invention to show how @safe blocks could be modified to actually work in this case (for template functions, don't require @safe, but mark the function @system if the contents are unsafe), but that this modification results in a much less desirable design than just straight up having @trusted blocks.

David
February 06, 2015
On 2/6/15 10:36 AM, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang@gmail.com> wrote:
> On Friday, 6 February 2015 at 15:10:18 UTC, Steven Schveighoffer wrote:
>> …into suspect the whole function. So marking a function @safe, and
>> having it mean "this function has NO TRUSTED OR SYSTEM CODE in it
>> whatsoever", is probably the right move, regardless of any other changes.
>
> But that would break if you want to call a @safe function with a
> @trusted function reference as a parameter? Or did I misunderstand what
> you meant here?

The whole point of marking a function @trusted instead of a block is that you have to follow the rules of function calling to get into that code, and your separate function only has access to the variables you give it.

My point was that if you have @trusted escapes inside a function, whether it's marked @safe or not, you still have to review the whole function. If the compiler disallowed this outright, then you don't have that issue.

Separating the trusted code from the safe code via an API barrier has merits when it comes to code review.

Now, @trusted static nested functions that stand on their own are fine; they are no different from public ones, just not public.

@trusted static nested functions that are ONLY OK when called in certain ways, that is where we run into issues. At that point, you have to make a choice -- add (somewhat unnecessary) machinery to make sure the function is always called in a @safe way, or expand the scope of the @trusted portion, possibly even to the whole @safe function.
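For example (a sketch, names made up):

---
// A nested @trusted helper that is only memory-safe when called in a
// particular way -- reviewing the helper alone is not enough, the
// calling code has to be reviewed as well.
void process(ubyte[] buf) @safe
{
    static ubyte unchecked(ubyte[] b, size_t i) @trusted
    {
        return b.ptr[i];   // skips the bounds check; fine only if i < b.length
    }

    foreach (i; 0 .. buf.length)
    {
        auto x = unchecked(buf, i);   // OK only because of this loop bound
        // ...
    }
}
---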

I see the point now that making sure @safe functions don't have escapes has the advantage of not requiring *as much* review as a @system or @trusted function. I am leaning so much towards H.S. Teoh's solution of making @trusted safe by default, and allowing escapes into @system code. That seems like the right abstraction.

> And... what happens if you bring in a new architecture that requires
> @trusted implementation of a library function that is @safe on other
> architectures?

Then you create a @trusted wrapper around that API, ensuring that, when called from @safe code, it can't corrupt memory.
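Something along these lines (a sketch; platformRead is a made-up architecture-specific primitive):

---
// Made-up @system primitive that only exists on the new architecture.
extern (C) size_t platformRead(void* buf, size_t len) @system;

// The @trusted wrapper ties the pointer and length to one slice, so
// @safe callers cannot pass mismatched bounds.
size_t safeRead(ubyte[] buf) @trusted
{
    if (buf.length == 0)
        return 0;
    return platformRead(buf.ptr, buf.length);
}
---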

>
>> 1. A way to say "this function needs extra scrutiny"
>> 2. Mechanical verification as MUCH AS POSSIBLE, and especially for
>> changes to said function.
>>
>> Yes, we can do 2 manually if necessary. But having a compiler that
>> never misses on pointing out certain bad things is so much better than
>> not having it.
>
> I am not sure if it is worth the trouble. If you are gonna conduct a
> semi-formal proof, then you should not have a mechanical sleeping pillow
> that makes you sloppy. ;-)

I see what you mean, but there are also really dumb things that people miss that a compiler won't. Having a mechanical set of eyes in addition to human eyes is still more eyes ;)

> Also if you do safety reviews they should be separate from the
> functional review and only focus on safety.
>
> Maybe it would be interesting to have an annotation for @notprovenyet,
> so that you could have regular reviews during development and then scan
> the source code for @trusted functions that need a safety review before
> a release is permitted? That way you don't have to do the safety
> review for every single mutation of the @trusted function.

The way reviews are done isn't anything the language can require. Certainly we can provide guidelines, and we can require such review processes for phobos and druntime.

-Steve

February 06, 2015
On 2/6/15 10:42 AM, David Nadlinger wrote:
> On Friday, 6 February 2015 at 18:39:28 UTC, Andrei Alexandrescu wrote:
>> It's clear. I just don't think it's a good point. -- Andrei
>
> I'm not making a point; I'm posing a problem. What is your solution?

I think the problem is overstated. -- Andrei

February 06, 2015
On Friday, 6 February 2015 at 17:12:40 UTC, David Nadlinger wrote:
> It seems obvious that explicitly whitelisting a small number of potentially dangerous but safe operations is a much less error-prone approach than disabling compiler checks for everything and then having to remember to blacklist all unverified external dependencies.
>
> David

That seems obvious to me too. Isn't the whole purpose of having '@trusted' in the first place to direct a programmer who's having memory safety problems to the potential sources of those problems? But why have this and then stop at the function level? Why not force the programmer to tag precisely those portions of his code which cause him to tag his function @trusted to begin with? Why help him get to the function, and then leave him hanging out to dry once inside the function?
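To make the contrast concrete (the @system statement is the proposed syntax, not valid D today; memcpy just stands in for the unsafe part):

---
import core.stdc.string : memcpy;

// Today: the whole body is exempt from the compiler's safety checks.
@trusted void fill(ubyte[] dst, const(ubyte)[] src)
{
    assert(dst.length >= src.length);
    memcpy(dst.ptr, src.ptr, src.length);  // everything here needs manual review
}

// Proposed: only the tagged statement is exempt; the rest of the body
// is still mechanically checked as if it were @safe.
@trusted void fill2(ubyte[] dst, const(ubyte)[] src)
{
    assert(dst.length >= src.length);
    @system memcpy(dst.ptr, src.ptr, src.length);
}
---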