January 17, 2020
On Fri, Jan 17, 2020 at 05:28:09PM +0000, IGotD- via Digitalmars-d wrote:
> On Friday, 17 January 2020 at 16:33:14 UTC, jmh530 wrote:
[...]
> > [1] https://words.steveklabnik.com/a-sad-day-for-rust
> 
> I think this is pretty funny and also predictable. There is an overuse of unsafe in Rust because the programmers want/must escape their Gulag in order to do the things they want, it's human behaviour.

The other human behaviour is that people form habits and then resist changing said habits.

See, the thing is that there's a lot to be said for defaults that incentivize people to do things the Right Way(tm).  You still provide the option to do things differently, an escape hatch for when you need it, but you also nudge people in the right direction, so that if they're undecided or not paying attention, they automatically default to doing it the right way.  One thing that D did quite well IMO is that the default way to do things often coincides with the best way.  As opposed to, say, C++, where the most obvious way to write a piece of code is almost certainly the wrong way, due to any number of potential problems (built-in arrays are unsafe, avoid raw pointers, avoid new, avoid writing loops, avoid mutation, the list goes on).
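Rust itself is the cleanest existing illustration of this principle: ordinary code is checked by default, and the escape hatch exists but must be spelled out.  A minimal sketch (the function names here are just for illustration):

```rust
// Safe by default: ordinary code gets bounds checks, no raw pointers.
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

// The escape hatch is explicit and greppable: you must opt in with `unsafe`.
fn first_unchecked(xs: &[i32]) -> i32 {
    // Caller-checked precondition: xs must be non-empty.
    unsafe { *xs.get_unchecked(0) }
}

fn main() {
    let v = [1, 2, 3];
    println!("{}", sum(&v));
    println!("{}", first_unchecked(&v));
}
```

The point is not that the escape hatch is forbidden, but that reaching for it takes a deliberate keystroke, so the lazy path and the safe path coincide.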

But you have to start out with the right defaults, because once people form habits around those defaults, they will resist change.  Inertia is a powerful force.  One area where D didn't set the right default is being @system by default.  DIP 1028 is trying to change that, but you can see the consequences of not starting out that way in the first place: people are resisting it because they have become accustomed to @system by default, and dislike changing their habits.


[...]
> This is one of the reasons I'm a bit skeptical against DIP 1028, https://forum.dlang.org/thread/ejaxvwklkyfnksjkldux@forum.dlang.org. That people will not value it that much. I have nothing against a safe subset, but I'm not sure making it default is the right way to go.
[...]

IMO it would have worked had it been the default from the very beginning.  This is why language decisions are so hard to make: you don't really know what the best design is except in retrospect, but wrong decisions are hard to change after the fact because of inertia.  By the time you accumulate enough experience to know what would have worked better, you may already be stuck with the previous decision.


T

-- 
Heuristics are bug-ridden by definition. If they didn't have bugs, they'd be algorithms.
January 17, 2020
On Friday, 17 January 2020 at 19:21:14 UTC, H. S. Teoh wrote:
> See, the thing is that there's a lot to be said about defaults that incentivize people to do things the Right Way(tm).

This is overblown. Rather than go by secondary sources, go straight to the author:

«Actix always will be “shit full of UB” and “benchmark cheater”. (Btw, with tfb benchmark I just wanted to push rust to the limits, I wanted it to be on the top, I didn’t want to push other rust frameworks down.) »

https://github.com/actix/actix-web

He was clearly exploring terrain where it makes sense to go low-level; that is, he was trying to get ahead in benchmarks.

You get what you pay for...

January 18, 2020
On Friday, 17 January 2020 at 08:10:48 UTC, Johannes Pfau wrote:
> I'm curious, what do you think would be the ideal scheme if we could redesign it from scratch? Only @safe/@system as function attributes and @trusted (or @system) blocks which can be used in @safe functions?

For the record, this is also exactly what I argued for in the thread linked in the original post, from more than 7 years ago.

No redesigning from scratch is needed for this. Just add @trusted blocks and discourage (and perhaps slowly deprecate, over years) the use of @trusted as a function attribute.
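Rust already works this way, so the difference between the two styles is easy to see there.  `unsafe` on a function (the analogue of @trusted as a function attribute) leaks into the signature and burdens every caller; an `unsafe` block inside a safe function (the analogue of the proposed @trusted blocks) keeps the unchecked code confined to a small, auditable region behind a safe API.  A sketch, with hypothetical function names:

```rust
// Attribute style (analogous to a @trusted *function* in D):
// the unsafety is part of the API, and every caller must opt in.
unsafe fn deref_attr(p: *const i32) -> i32 {
    *p
}

// Block style (analogous to the proposed @trusted *blocks*):
// the function presents a safe API and confines the unchecked
// code to one auditable spot.
fn read_first(xs: &[i32]) -> Option<i32> {
    if xs.is_empty() {
        return None;
    }
    // Sound because we just checked that index 0 is in bounds.
    Some(unsafe { *xs.as_ptr() })
}

fn main() {
    let v = [7, 8];
    println!("{:?}", read_first(&v));
    println!("{}", unsafe { deref_attr(v.as_ptr()) });
}
```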

 — David
January 18, 2020
On Thursday, 16 January 2020 at 00:21:21 UTC, Joseph Rushton Wakeling wrote:
> The fact that in a similar situation D forces you to annotate the function with `@trusted`, and alert users to the _possibility_ that memory safety bugs could exist within the code of this function, is useful information even if you can't access the source code.

Detail the scenario where this would be useful, please.

If you want to audit a program to make sure there are no uses of potentially memory-unsafe code, you need access to all the source code: Even @safe functions can contain arbitrary amounts of potentially unsafe code, as they can call into @trusted functions. You make this point yourself in the quoted post; any information conveyed by @trusted is necessarily incomplete, by virtue of it being intransitive.

In other words, this "alert", as you put it, has next to no information content on account of its arbitrarily high false-negative rate (@safe functions calling into unsafe code), and is thus worse than useless.

If you don't have access to the source code, you don't know anything about what is used to implement a @safe function. If you do, you don't care where exactly the keyword you are grepping for in an audit is, as long as it is proximate to the potentially-unsafe code.
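The same intransitivity is visible in Rust, where a function with a perfectly safe signature can wrap arbitrary amounts of unchecked code.  From the outside, nothing distinguishes it from a function containing no unsafe code at all; a hypothetical sketch:

```rust
// The signature is entirely safe; the `unsafe` block is an internal
// detail, invisible to callers and to the type system.
fn as_bytes_view(xs: &[u32]) -> &[u8] {
    let ptr = xs.as_ptr() as *const u8;
    let len = xs.len() * std::mem::size_of::<u32>();
    // Sound because any &[u32] is also a valid, initialized sequence
    // of bytes with the same lifetime.
    unsafe { std::slice::from_raw_parts(ptr, len) }
}
```

An auditor grepping for `unsafe` finds it here regardless of what the signature says, which is exactly why the marker matters at the implementation site, not the API.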

@trusted has no place on the API level.

 — David
January 23, 2020
On Saturday, 18 January 2020 at 21:44:17 UTC, David Nadlinger wrote:
> On Thursday, 16 January 2020 at 00:21:21 UTC, Joseph Rushton Wakeling wrote:
>> The fact that in a similar situation D forces you to annotate the function with `@trusted`, and alert users to the _possibility_ that memory safety bugs could exist within the code of this function, is useful information even if you can't access the source code.
>
> Detail the scenario where this would be useful, please.

I'm honestly not sure what is useful to add to the existing discussion.  But I think the problem is that most folks are looking for certainties, whereas I'm prepared to entertain a certain amount of probability in certain contexts.  (Please read on to understand what I mean by that, rather than assuming that I'm prepared to allow memory safety to be a roll of the dice:-)

Put it like this: for any given @safe function you see, odds are that in practice it uses no @trusted code outside the standard library/runtime.  Is that a guarantee?  No.  But it's a reasonable heuristic to use on a day-to-day "How concerned do I have to be that this function might do something scary?" basis.  (Depending on what the function does, I might be able to make an educated guess of the likelihood there's something @trusted closer to home.)

OTOH if I see a function marked as @trusted I have a cast-iron guarantee that the compiler did not do anything to verify the memory safety of _this particular function_.  Which gives me a nudge that on the balance of probability, I might want to give a bit more up-front scrutiny to exactly what it's doing -- either by reading the source code if I can, or by playing with it a bit to see if I can trip it up with some unexpected input.

Is that an _audit_?  No.  But it doesn't seem vastly different from the kind of trust-versus-verify judgement calls that we all have to make, on a day-to-day basis, about all sorts of matters of code correctness in the APIs that we use.

> If you want to audit a program to make sure there are no uses of potentially memory-unsafe code, you need access to all the source code

Yes, agreed.  But that's the difference between doing a full safety audit versus the typical day-to-day "Does it seem reasonable to use this function for my use-case?" judgement calls that we all make when writing code.