June 07, 2016
On 6/7/2016 3:10 PM, Timon Gehr wrote:
> If you think progress on this matters, why are you arguing against it?

I don't believe it is worth the effort. You do. You need to make a better case for it, and the best way to do that is to actually write a spec. Demanding that someone (i.e. me) who doesn't believe in it, has a poor track record of writing specs, and has no academic background in writing proofs do it instead means you aren't going to get what you want by that route.

You've made some valuable contributions to D in the form of finding problems. Why not contribute something more substantial, like a spec? You have a good idea in mind of what it should be; just write it.
June 07, 2016
On 6/7/2016 1:48 PM, Jonathan M Davis via Digitalmars-d wrote:
> So, while mass applying something like @safe temporarily to check stuff
> makes some sense, I really don't think that it's a good idea to do it in any
> code that you'd ever commit.

The downsides you listed do not apply to @safe.

June 07, 2016
On 6/7/2016 2:28 PM, Steven Schveighoffer wrote:
> I can attest that figuring out why something isn't inferred @safe isn't always
> easy, and the "slap a @safe: tag at the top" isn't always going to help.

Having a -safe compiler switch to make @safe the default won't improve that in the slightest.

June 08, 2016
On 6/8/16 12:53 AM, Timon Gehr wrote:
> On 08.06.2016 00:47, Walter Bright wrote:
>> On 6/7/2016 3:23 PM, Timon Gehr wrote:
>>> Obviously they proved the virtual machine itself memory safe,
>>
>> As I recall, the proof was broken, not the implementation.
>
> Which time?

That is an old result that has essentially expired and should not be generalized. See http://www.seas.upenn.edu/~sweirich/types/archive/1999-2003/msg00849.html. I assume the matter has long since been fixed; do you happen to know?

>> People do
>> make mistakes and overlook cases with proofs. There's nothing magical
>> about them.
>>
>
> Obviously, but there are reliable systems that check proofs automatically.

It is my opinion that writing off formal proofs of safety is a mistake. Clearly we don't have the capability on the core team to work on one ourselves. However, I am very interested if you'd want to lead such an effort.


Andrei

June 07, 2016
On 6/7/16 7:05 PM, Walter Bright wrote:
> On 6/7/2016 2:28 PM, Steven Schveighoffer wrote:
>> I can attest that figuring out why something isn't inferred @safe
>> isn't always
>> easy, and the "slap a @safe: tag at the top" isn't always going to help.
>
> Having a -safe compiler switch to make @safe the default won't improve
> that in the slightest.
>

No, of course not. I don't think anyone has said this.

In my experience, finding the reasons something isn't inferred @safe is an iterative process of recompiling and temporarily marking targeted code @safe. I don't think grep helps here at all, and neither do global @safe attributes.
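For example, with a contrived sketch like this (names are illustrative), you annotate the suspect function and let the compiler point at the exact operation that blocks the inference:

    // Temporarily marking the function @safe turns silent @system
    // inference into a concrete diagnostic:
    void process(int* p) @safe   // annotation added only to ask "why not?"
    {
        auto q = p + 1;   // Error: pointer arithmetic not allowed in @safe code
    }

Once the offending line is found, you fix it (or isolate it in a @trusted helper) and drop the temporary annotation.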

-Steve
June 07, 2016
On 6/7/2016 4:01 PM, Jonathan M Davis via Digitalmars-d wrote:
> Yeah. I recall an article by Joel Spolsky where he talks about deciding
> that proofs of correctness weren't worth much, because they were even harder
> to get right than the software.
>
> I do think that there are situations where proofs are valuable, but they do
> tend to be very difficult to get right, and their application is ultimately
> fairly limited.

My understanding is that academic researchers who need to prove theorems about a language use a subset of Java, because the smaller the language, the more practical it is to write proofs about it. I also remember bearophile bringing up the Spec# language, which was supposed to be able to formally prove things, but it turned out it couldn't prove much. I fed it some one-liners with bit masking, and it threw in the towel on them.
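To give a flavor of what I mean (a reconstruction in D, not the actual input I fed Spec#):

    // Illustrative one-liner: x & -x isolates the lowest set bit of x.
    // Proving "for all x != 0, the result is a power of two that divides x"
    // is exactly the sort of bit-masking fact automated verifiers choke on.
    uint lowestSetBit(uint x) { return x & -x; }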

I suspect D has long since passed the point where it is too complicated for the rather limited ability of mathematicians to prove things about it.
June 07, 2016
On Tuesday, 7 June 2016 at 20:41:08 UTC, Walter Bright wrote:
> The point being that a culture of "best practices" does arise and evolve over time, and professional programmers know it.

Sure, that's one of the big advantages C++ has over D: people have that institutional knowledge.

But, two important questions:

1) You criticize C++ because it isn't good enough: programmers are lazy, and just because you can do it right doesn't mean they will. The right thing also needs to be the easy thing.

D's attribute spam is not the easy thing. And there's no apparent punishment for doing it "wrong" - everything works equally well for the author. It is only when a third party comes in and tries to slap the attribute on that it becomes an issue.

2) What makes you so sure @nogc will actually be part of the best practice? I haven't done a comprehensive study, but my impression so far is that it is very rarely used: the biggest win is that it gets inferred in templates... which seems to imply that people aren't caring enough to actually write it out.
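For concreteness, here is a sketch (illustrative code, not from any real library) of the hand-written tax versus the inference win:

    // Written out by hand, the "right" signature carries the full set:
    int parseDigit(char c) @safe @nogc pure nothrow
    {
        return (c >= '0' && c <= '9') ? c - '0' : -1;
    }

    // A template gets the same attributes inferred per instantiation,
    // with the author writing none of them:
    T square(T)(T x) { return x * x; }

    @nogc nothrow void caller()
    {
        auto d = parseDigit('7');
        auto s = square(3);   // the int instantiation is inferred @nogc nothrow
    }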

> Would you want to use a library where the maintainers refuse to use @nogc even if they aren't using the gc?

I think you underestimate the cost of @nogc (and @safe, and pure, and nothrow) to the library author. It means littering their code with attributes they don't use themselves (and thus can easily forget to put on new functions) and derive no direct benefit from.

Moreover, it limits their flexibility in the future: once a function is published with one of those attributes, the attribute is part of the public interface, so the author cannot change their mind later without breaking callers. That might be a good thing for the user begging for verified @nogc or whatever, but to the library author it is another cost to maintain.
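A sketch of that lock-in (hypothetical library code):

    // Published in v1.0:
    string greeting() @nogc nothrow { return "hello"; }

    // Later the author wants localization, which allocates:
    //     string greeting() { return translate("hello"); }   // GC-allocates
    // The new body is fine on its own, but dropping @nogc breaks every
    // caller that is itself @nogc -- the attribute is frozen into the API.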


Though I think the apathy factor is bigger than the maintenance factor: I don't write these attributes in my code because I just don't care. I have only ever had one user complain, too... and he seems to have changed his mind by now and no longer cares either (probably because a critical mass of library authors still just don't care, so you can't realistically slap @nogc @safe on that much anyway).
June 07, 2016
On Tuesday, June 07, 2016 16:04:05 Walter Bright via Digitalmars-d wrote:
> On 6/7/2016 1:48 PM, Jonathan M Davis via Digitalmars-d wrote:
> > So, while mass applying something like @safe temporarily to check stuff makes some sense, I really don't think that it's a good idea to do it in any code that you'd ever commit.
>
> The downsides you listed do not apply to @safe.

Sure they do. Regardless of the attribute, if it can be inferred and templates are involved, you can't mass-apply it, because you almost always need the attribute to be inferred. And regardless of whether an attribute can be inferred, mass-applying it tends to make it harder to figure out which attributes a function is actually marked with. It's easier when it's just a label at the top of the file, but we've already had PRs in Phobos where an attribute got applied locally as part of the PR, because the person doing the work did not realize that it was already in effect. And personally, it always throws me off when attribute labels or blocks are used, because it looks like the attribute is not being applied to a function when it actually is. I don't think that it matters what the attribute is; all of the same downsides apply. The primary difference with @safe over some of the others is that you can reverse it, whereas you can't with most of them. But even then, you can't tell a template to infer @safe when you've marked the whole file with @safe. So while you can change which level of trust you're applying to a function, you can't remove the trust attributes entirely once one of them has been applied.
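Here is a contrived sketch of that template problem:

    @safe:   // mass-applied at the top of the module

    // This template is now *explicitly* @safe rather than inferred, so
    // instantiating it with a @system callable is a compile error, even
    // for callers that would have been perfectly happy with @system:
    void each(alias fun, R)(R range)
    {
        foreach (e; range)
            fun(e);
    }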

Personally, I think that it's almost always a mistake to mass-apply attributes, especially those that can be inferred in templated code. It does not play well with templates, and it causes maintenance problems.

- Jonathan M Davis

June 07, 2016
On 6/7/2016 4:07 PM, Andrei Alexandrescu wrote:
> It is my opinion that writing off formal proofs of safety is a mistake. Clearly
> we don't have the capability on the core team to work on such. However, I am
> very interested if you'd want to lead such an effort.

On the contrary, I think a formal proof would be very valuable. I am just skeptical of the notion that a proof is automatically correct. I've read about mistakes being found in many published mathematical proofs. I read somewhere that Hilbert made many mistakes in his proofs, even though the end results turned out correct.


June 07, 2016

On 06/05/2016 09:17 PM, Adam D. Ruppe via Digitalmars-d wrote:
> On Monday, 6 June 2016 at 02:30:55 UTC, Pie? wrote:
>> Duh! The claim is made that D can work without the GC... but that's a red herring... If you take away the GC, what do you have?
>
> Like 90% of the language, still generally nicer than most of the competition.
>
> Though, I wish D would just own its decision instead of bowing to Reddit pressure. GC is a proven success in the real world with a long and impressive track record. Yes, there are times when you need to optimize your code, but even then you aren't really worse off with it than without it.
>
Usually correct, but there are times when you want to suspend garbage collection. The problem is that this should always be a scoped decision, because it's easy to accidentally leave collection turned off, and then it's MUCH worse than not having it.
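Something along these lines (a minimal sketch using core.memory) keeps the decision scoped so it can't leak out:

    import core.memory : GC;

    void pauseSensitiveWork()
    {
        GC.disable();               // suppress automatic collections in here
        scope (exit) GC.enable();   // re-enabled on every exit path,
                                    // including exceptions
        // ... allocation-heavy, latency-sensitive code ...
    }

The scope (exit) guarantees the collector comes back on no matter how the function returns.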