March 27, 2020
On 3/26/20 11:44 PM, Mathias Lang wrote:
> On Friday, 27 March 2020 at 02:35:56 UTC, Vladimir Panteleev wrote:
>> On Friday, 27 March 2020 at 02:31:13 UTC, Vladimir Panteleev wrote:
>>> This applies to extern(D) declarations, too. Effectively, this would mean that functions with a body won't need a @safe annotation, but functions without a body will require it. However, I think this is a more correct solution despite this issue.
>>
>> An afterthought. Because safety is part of D mangling, body-less extern(D) declarations could be assumed to be @safe. A mismatch (assumption of @safe but a @system implementation) will result in a linker error.
>>
>> This applies to all mangling schemes which can represent @safe, which is only extern(D) at the moment. So, if any calling convention would be special in this regard, it would be extern(D).
> 
> Linker errors are among the most unfriendly ways to do diagnostics.
> Having all declarations without definitions needing to be explicitly annotated is IMO much better.
> There aren't that many places out there where such a pattern is used anyway.

How does this fix the problem though? Today, if you use a wrong type, you get a linker error. I don't see why that's less confusing than not marking a function @system. And what if you mark it @system, but it's really @safe? Linker error. Just another thing to mess up.

I would say extern(D) functions are fine as prototypes without extra markings.
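To make the mangling argument concrete, here is a minimal sketch (assuming dmd's documented extern(D) mangling scheme; the module prefix in the mangled names varies with the module name) showing that @safe is encoded in the symbol itself, so a body-less declaration assumed @safe can never silently link against a @system definition:

```d
// Sketch: @safe is one of the attribute codes in extern(D) mangling,
// so an @safe declaration and a @system definition produce different
// symbols and a mismatch fails at link time.
import std.algorithm.searching : endsWith;

@safe   void f() {}
@system void g() {}

// "F" = extern(D), "Nf" = @safe, "Z" = end of parameters, "v" = void.
static assert(f.mangleof.endsWith("FNfZv"));
// @system contributes no attribute code at all:
static assert(g.mangleof.endsWith("FZv"));
static assert(f.mangleof != g.mangleof);

void main() {}
```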

-Steve
March 27, 2020
On Thursday, 26 March 2020 at 10:55:44 UTC, Atila Neves wrote:
> On Wednesday, 25 March 2020 at 21:58:40 UTC, Jonathan Marler wrote:
>> Has the benefit warranted the cost to manage these tags throughout your code?
>
> Yes. Especially since the cost is trivial.
>
>> Do we have any projects that are already using this behavior by putting "@safe:" at the top of every file?  Does anyone have any pointers to projects that have done this?
>
> All my projects that aren't called reggae. The only reason for that exception is that it's ancient and I didn't know any better then.
>
> I don't know how we've managed, but we've utterly failed at marketing @safe to the community. Writing @safe code is easy unless you're doing manual memory management or trying to abstract it with a library. *Especially* with DIP1000.

There's a huge difference between correctly using `@safe` and having things compile.

I have *never* seen a non-trivial library that manages to do the former without imposing strong requirements on the user. Let me repeat that: I haven't seen a *single* non-trivial library out there that does it correctly. It either forces user code to be `@safe`, or bypasses `@safe`ty checks completely.

And since exceptional claims call for exceptional proof, I wanted to check whether or not your libraries would be any different. It took me less than 5 minutes to find this: https://github.com/atilaneves/unit-threaded/issues/176

We didn't fail to market `@safe`. We failed to provide a construct that allows users to write libraries that take attributes (`@safe`, `nothrow`, `@nogc`, same fight) into account without incredible amounts of boilerplate.

Take any library that accepts a delegate: Either one has to restrict what the delegate can do, or make a certain attribute un-enforceable. There's no way to write an interface that concisely expresses that its `@safe`ty, `@nogc`-ness or `nothrow`-ness depends on the user-provided delegate, unless you template absolutely everything, which is not a viable (and sometimes not even possible) solution.
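A sketch of the two options a delegate-taking API has today (all names here are hypothetical, not from any existing library):

```d
// Sketch of the delegate-attribute problem (names hypothetical).
// A non-template API must fix the delegate's attributes in its signature:
void eachLine(scope void delegate(string) @safe dg) @safe
{
    dg("hello"); // only @safe delegates may ever be passed
}

// Today the only way to let the caller's attributes flow through is to
// template everything, so @safe/nothrow/@nogc are inferred per instantiation:
void eachLineT(DG)(scope DG dg)
{
    dg("hello");
}

void main() @safe
{
    int n;
    eachLine((string s) @safe { n += cast(int) s.length; });
    eachLineT((string s) { n += cast(int) s.length; }); // @safe inferred
    assert(n == 10);
}
```

The templated variant works, but every layer above it must be templated too, which is the "template absolutely everything" cost described above.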

We need something similar to `inout` for attributes. Or at the very least, a way to express this dependency. Only then will changing the default be a remotely viable possibility, in my opinion. At the moment, all this is doing is ignoring the problem and pushing the complexity from one demographic to the other.
March 26, 2020
On 3/26/2020 7:24 AM, Adam D. Ruppe wrote:
> I suspect 95+% of C's problems already are extremely rare in D, yet the @safe advocates never seem to consider this at all.

Unfortunately, that other 5% can cost companies millions of dollars. 95% safe isn't good enough anymore. @safe also saves me (and companies) time because certain types of C errors no longer have to be manually checked for.

D will not prevent you from doing anything you want in the code; just add a @system annotation.

March 26, 2020
On 3/26/2020 8:14 AM, Steven Schveighoffer wrote:
> My point is, putting in the effort to migrate to @safe is the best way to determine where the sticking points are (and there are definitely sticking points). Hand-wavy statistics aren't persuasive.

Yah, you never know if the paper airplane design will actually fly until you build it, gas it up, and convince your test pilot to risk his life.

March 27, 2020
On Thursday, 26 March 2020 at 23:10:17 UTC, Jonathan M Davis wrote:
> Except that in @system code, the bounds checking gets turned off with -release. So, with @system as the default, a lot less bounds checking is going on than I think many people realize.

The type system protects against involuntary mistakes; the -release switch is voluntary.
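For context, with dmd the -release switch implies -boundscheck=safeonly: bounds checks are kept in @safe code and removed only from @system/@trusted code. A minimal sketch of the check that survives (run here without -release, where it is active everywhere):

```d
// Sketch: array bounds checking in an @safe function. Under dmd's
// -release (-boundscheck=safeonly) this check remains in @safe code,
// while an equivalent @system function would lose it.
import core.exception : RangeError;

@safe int at(const(int)[] a, size_t i)
{
    return a[i]; // bounds-checked
}

void main() // @system by default, so we may catch the Error for the demo
{
    auto a = [1, 2, 3];
    assert(at(a, 2) == 3);

    bool caught;
    try
        cast(void) at(a, 3); // out of bounds
    catch (RangeError)
        caught = true;
    assert(caught);
}
```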
March 27, 2020
On 27/03/2020 12:16 PM, Jonathan M Davis wrote:
> On Thursday, March 26, 2020 4:54:53 PM MDT rikki cattermole via Digitalmars-d wrote:
>> And then there is .di files which complicates the matters further.
>>
>> But I agree with you.
>>
>> If the compiler _cannot_ or has _not_ confirmed it is @safe, a function
>> should not be marked as @safe and be callable.
> 
> Actually, .di files don't complicate things much. non-extern(D) declarations
> have to be treated the same no matter where they are, and anything with a
> function body in a .di file is the same as if it were in a .d file. The only
> real difference is that you then have extern(D) declarations added into the
> mix, and because whether they're @safe, @trusted, or @system is part of
> their name mangling, they cannot link if they don't have the same @safety
> level as the corresponding function definition (which was verified by the
> compiler if it's @safe). So, the compiler can treat extern(D) function
> declarations as @safe by default just like it does function definitions
> without there being a problem. The only real issue is if the declaration and
> definition don't match, which is a problem regardless of what the default
> @safety level is.
> 
> - Jonathan M Davis

The problem that concerns me is not its linkage or where that is occurring, it's the fact that we treat .di files as trusted. We as a community have long said don't modify these files generated by the compiler.

If it is generated by the compiler, then the checks that are claimed to have occurred should actually have occurred. For example, @safe on a function declaration without a body.

But in a regular D file, these checks have not been performed yet, since those files come from a human being and not a compiler. Hence if a body isn't present, the function cannot be @safe: it has not been verified.

Which means that less is allowed in a .d file than in a .di file.
If we follow this line of thought, it creates a problematic situation.
March 27, 2020
On Friday, March 27, 2020 1:04:49 AM MDT rikki cattermole via Digitalmars-d wrote:
> On 27/03/2020 12:16 PM, Jonathan M Davis wrote:
> > On Thursday, March 26, 2020 4:54:53 PM MDT rikki cattermole via Digitalmars-d wrote:
> >> And then there is .di files which complicates the matters further.
> >>
> >> But I agree with you.
> >>
> >> If the compiler _cannot_ or has _not_ confirmed it is @safe, a function should not be marked as @safe and be callable.
> >
> > Actually, .di files don't complicate things much. non-extern(D) declarations have to be treated the same no matter where they are, and anything with a function body in a .di file is the same as if it were in a .d file. The only real difference is that you then have extern(D) declarations added into the mix, and because whether they're @safe, @trusted, or @system is part of their name mangling, they cannot link if they don't have the same @safety level as the corresponding function definition (which was verified by the compiler if it's @safe). So, the compiler can treat extern(D) function declarations as @safe by default just like it does function definitions without there being a problem. The only real issue is if the declaration and definition don't match, which is a problem regardless of what the default @safety level is.
> >
> > - Jonathan M Davis
>
> The problem that concerns me is not its linkage or where that is occurring, it's the fact that we treat .di files as trusted. We as a community have long said don't modify these files generated by the compiler.
>
> If it is generated by the compiler, then the checks that are claimed to have occurred should actually have occurred. For example, @safe on a function declaration without a body.
>
> But in a regular D file, these checks have not been performed yet, since those files come from a human being and not a compiler. Hence if a body isn't present, the function cannot be @safe: it has not been verified.
>
> Which means that less is allowed in a .d file than in a .di file. If we follow this line of thought, it creates a problematic situation.

I really don't get what you're trying to say.

Regardless of how a .di file was created, if an extern(D) function declaration in it is @safe (be it because the compiler implicitly treats it as @safe or it's explicitly marked with @safe), then it will only link if the corresponding function definition is @safe. So, any attribute mismatches will be caught when you link your program. As such, there is no way for there to be a hole in @safe due to using a .di file with an extern(D) function. Because the function definition will have been verified by the compiler, and it must have the exact same set of attributes as the function declaration for the program to link, the function _was_ verified even if your program imports the .di file rather than the .d file.

You can of course have linker errors due to the .di and .d files not matching, but that happens with _anything_ that affects the function signature. It's not at all unique to @safe.

- Jonathan M Davis



March 27, 2020
On 3/26/2020 9:20 PM, Mathias Lang wrote:
> Take any library that accepts a delegate: Either one has to restrict what the delegate can do, or make a certain attribute un-enforceable. There's no way to write an interface that concisely expresses that its `@safe`ty, `@nogc`-ness or `nothrow`-ness depends on the user-provided delegate, unless you template absolutely everything, which is not a viable (and sometimes, not possible) solution.

Making @safe the default will substantially reduce this problem for the simple reason that the vast bulk of code should be @safe.

BTW, I have an upcoming DIP that changes the default attributes for delegate parameter types to match the function they appear in.
March 27, 2020
On 3/25/2020 3:40 PM, H. S. Teoh wrote:
> I wouldn't say this is a big impact, but it did catch a couple of bugs
> that would've been a pain to track down.

Even if escaping references to the stack are rare, it is very very important to catch them as they are hard to track down and cause silent data corruption. They're some of the worst bugs.

@safe recently found a data corruption bug in the druntime stack unwinder that had been there, latent, for years. (It hadn't caught the bug before because I had overlooked checking the msg argument to assert() for @safe errors.)
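A minimal sketch of the kind of escape @safe catches. Without -preview=dip1000, taking the address of a local is rejected outright in @safe code; with it, the pointer is inferred `scope` and returning it is rejected instead, so the code fails to compile either way:

```d
// Sketch: @safe rejects leaking the address of a stack variable.
static assert(!__traits(compiles, () @safe {
    int local;
    int* p = &local;
    return p;
}));

// The same code is accepted when safety checking is off:
static assert(__traits(compiles, () @system {
    int local;
    int* p = &local;
    return p;
}));

void main() {}
```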
March 27, 2020
On 3/25/2020 4:33 PM, H. S. Teoh wrote:
> I think the intention is that most D code will be @safe, and things like
> DIP1000 will apply to enforce memory safety, and only when you need to
> "go under the hood" you'd have a @system function as the escape hatch to
> do the low-level hacks.  The way I read it, this is all part of a
> general grand plan, which includes this DIP, to move towards having most
> D code be @safe and only occasionally drop down to @system for
> operations that cannot be mechanically verified.  Hence, @safe by
> default.

That's right.