If you needed any more evidence that memory safety is the future...
February 24, 2017
https://bugs.chromium.org/p/project-zero/issues/detail?id=1139

A buffer overflow bug caused Heartbleed 2.0 for hundreds of thousands of sites. Here we are, 57 years after ALGOL 60, which had bounds checking, and we're still dealing with bugs from C's massive mistake.

This is something that Valgrind could have easily picked up, but the devs just didn't use it for some reason. Runtime checking of this stuff is important, so please, don't disable safety checks with DMD if you're dealing with personal info.

If you use a site on this list https://github.com/pirate/sites-using-cloudflare and you're not using two factor auth, please change your password ASAP.
February 24, 2017
Jack Stouffer wrote:

> This is something that Valgrind could have easily picked up, but the devs just didn't use it for some reason. Runtime checking of this stuff is important, so please, don't disable safety checks with DMD if you're dealing with personal info.

or, even better: don't disable bounds checking at all. never.

if you are *absolutely* sure that bounds checking *IS* the bottleneck (you *did* use your profiler to find this out, did you?), you can selectively avoid bounds checking by using `arr.ptr[i]` instead of `arr[i]` (and yes, this is unsafe; but what would you expect by removing safety checks?).
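
a minimal sketch of the difference, assuming an ordinary dmd build with bounds checking left on:

    import std.stdio;

    void main ()
    {
        auto arr = [1, 2, 3];
        writeln(arr[1]);     // checked: arr[3] would throw a RangeError
        writeln(arr.ptr[1]); // unchecked: arr.ptr[3] would silently read past the end
    }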

forget about "-release" dmd arg. forget about "-boundscheck=off". no, really, they won't do you any good. after all, catching a bug in your program when it doesn't run in a controlled environment is even more important than catching a bug in a debugging session! don't hate your users by giving 'em software with all safety measures removed! please.
February 24, 2017
On Friday, 24 February 2017 at 06:59:16 UTC, Jack Stouffer wrote:
> https://bugs.chromium.org/p/project-zero/issues/detail?id=1139
>
> [...]

This isn't evidence that memory safety is "the future", though.
This is evidence that people do not follow basic engineering practices (for whatever seemingly valid reasons - such as a project deadline - at the time).
Writing a program (with manual memory management) that does not have dangerous memory issues is not an intrinsically hard task. It does, however, require you to *design* your program, not *grow* it (which, btw, is what a software *engineer* should do anyway).
Systems such as memory ownership+borrowing, garbage collection, and (automatic) reference counting can mitigate the symptoms (and I happily use any or all of them when they are the best tool for the task at hand), but none of them will solve the real issue: the person in front of the screen (which includes you and me).
February 24, 2017
On Friday, 24 February 2017 at 13:38:57 UTC, Moritz Maxeiner wrote:
> This isn't evidence that memory safety is "the future", though.
> This is evidence that people do not follow basic engineering practices (for whatever seemingly valid reasons - such as a project deadline - at the time).
>
> Writing a program (with manual memory management) that does not have dangerous memory issues is not an intrinsically hard task. It does, however, require you to *design* your program, not *grow* it (which, btw, is what a software *engineer* should do anyway).

If the system in practice does not bear any resemblance to the system in theory, then one cannot defend the theory. If, in practice, programming languages without safety checks produce very common bugs which have caused millions of dollars in damage, then defending the language on the theory that you might be able to make it safe with the right effort is untenable.

Why is it that CI test runs catch bugs when people should be running tests locally? Why is it that adding unittest blocks to the language made unit tests in D way more popular when people should always be writing tests? Because we're human. We make mistakes. We put things off that shouldn't be put off.
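
For anyone who hasn't seen them, a minimal sketch of such a unittest block (the clamp function is just an illustration):

    size_t clamp(size_t value, size_t max)
    {
        return value > max ? max : value;
    }

    unittest
    {
        // compiled and executed only when building with -unittest
        assert(clamp(5, 10) == 5);
        assert(clamp(15, 10) == 10);
    }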

It's like the new safety features on handheld buzzsaws which make it basically impossible to cut yourself. Should people be using these things safely? Yes. But, accidents happen, so the tool's design takes human behavior into account and we're all the better for it.

Using a programming language which doesn't take human error into account is a recipe for disaster.
February 24, 2017
On Friday, 24 February 2017 at 14:35:44 UTC, Jack Stouffer wrote:
> It's like the new safety features on handheld buzzsaws which make it basically impossible to cut yourself. Should people be using these things safely? Yes. But, accidents happen, so the tool's design takes human behavior into account and we're all the better for it.

Chainsaws are effective, but dangerous. So you should both have training and use safety equipment. Training and safety equipment are available for C-like languages (up to the level of provable correctness), in ways that don't change runtime performance.

But at the end of the day it all depends; in some contexts it matters less whether a program occasionally fails than in others. It is easier to get a small module correct than a big application with many interdependencies, etc.

If you don't want to max out performance, you might as well consider Go, Java, C#, Swift, etc. I don't really buy into the idea that a single language has to cover all bases.


February 24, 2017
On Fri, 24 Feb 2017 09:14:24 +0200, ketmar wrote:
> forget about "-release" dmd arg. forget about "-boundscheck=off". no, really, they won't do you any good. after all, catching a bug in your program when it doesn't run in a controlled environment is even more important than catching a bug in a debugging session! don't hate your users by giving 'em software with all safety measures removed! please.

Especially since -release disables assertions and contracts.

If you really want extra validation that's too expensive in the general case, you can use `version(ExpensiveValidation)` or the like.
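
A minimal sketch of that pattern; the `ExpensiveValidation` identifier is from the suggestion above, and the sorted-input check is just a stand-in for whatever costly invariant you care about:

    void process(int[] data)
    {
        version (ExpensiveValidation)
        {
            // compiled in only when building with -version=ExpensiveValidation
            import std.algorithm : isSorted;
            assert(data.isSorted, "input must be sorted");
        }
        // ... normal processing ...
    }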
February 24, 2017
On Friday, 24 February 2017 at 15:15:00 UTC, Ola Fosheim Grøstad wrote:
> I don't really buy into the idea that a single language has to cover all bases.

Neither do I. But the premise behind that idea is that languages have well-understood use cases, and that using them outside of those areas is non-optimal.

I've come to believe that writing any program that handles personal user data in a language without memory safety features is not only non-optimal, but irresponsible.
February 24, 2017
On Fri, Feb 24, 2017 at 06:59:16AM +0000, Jack Stouffer via Digitalmars-d wrote:
> https://bugs.chromium.org/p/project-zero/issues/detail?id=1139
> 
> A buffer overflow bug caused Heartbleed 2.0 for hundreds of thousands of sites. Here we are, 57 years after ALGOL 60, which had bounds checking, and we're still dealing with bugs from C's massive mistake.

Walter was right that the biggest mistake of C was conflating pointers and arrays.  That single decision, which seemed like a clever idea in a day and age where saving a couple of bytes seemed so important (how times have changed!), has cost the industry who knows how much as a consequence.
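
For contrast, a minimal sketch of what keeping the two separate buys you in D: a slice carries its length alongside the pointer, so out-of-range access is catchable, while a bare pointer has nothing left to check against.

    void main()
    {
        int[4] buf = [1, 2, 3, 4];

        int[] slice = buf[]; // "fat" reference: pointer + length travel together
        // slice[10];        // caught: throws a RangeError at runtime

        int* p = buf.ptr;    // C-style bare pointer: the length is gone
        // p[10];            // not caught: silently reads out of bounds
    }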

More scarily yet, this particular pointer bug was obscured because it occurred in *generated* code.  The language it was generated from (Ragel) appears not to have any safety checks in this respect, but "blindly" generated C code that simply followed whatever the source code said.  As if pointer bugs aren't already too easy to inadvertently write, now we have an additional layer of abstraction to make them even less obvious to the programmer, who now has to mentally translate the higher-level constructs into low-level pointer manipulations in order to even realize something may have gone wrong.  Talk about leaky(!) abstractions...


> This is something that Valgrind could have easily picked up, but the devs just didn't use it for some reason. Runtime checking of this stuff is important, so please, don't disable safety checks with DMD if you're dealing with personal info.
[...]

The elephant in the room is that the recent craze surrounding the "cloud" has conveniently collected large numbers of online services under a small number of umbrellas, thereby greatly expanding the impact of any bug that occurs in the umbrella.  Instead of a nasty bug that impacts merely one or two domains, we now have a nasty bug that singlehandedly affects 4 *million* domains.  Way to go, "cloud" technology!


T

-- 
Spaghetti code may be tangly, but lasagna code is just cheesy.
February 24, 2017
On Friday, 24 February 2017 at 15:15:00 UTC, Ola Fosheim Grøstad wrote:
> Chainsaws are effective, but dangerous. So you should have both training and use safety equipment. Training and safety equipment is available for C-like languages (to the level of provable correctness), and such that it doesn't change the runtime performance.

With chainsaws, those are probably provided if you use one professionally. But an average Joe getting his firewood from his small personal wood plantation is somewhat unlikely to have both. I don't know how common chainsaws and their use are among non-professionals elsewhere, but here they are common.

The same thing applies to programming languages. A pro might be able to verify the safety of C with some advanced LLVM tools or whatever, but not all coders are experienced or skillful. For a team with lots of such members, using a language in such a manner is too elitist: too many things to learn and care about, and thus they won't be done. And you can't have code written only, or even primarily, by the best, because the less advanced need experience too.

That's not to say chainsaws or C should be banned. But it is to say that the less extra effort safety requires, the more effective it is.
February 24, 2017
On Friday, 24 February 2017 at 14:35:44 UTC, Jack Stouffer wrote:
> On Friday, 24 February 2017 at 13:38:57 UTC, Moritz Maxeiner wrote:
>> This isn't evidence that memory safety is "the future", though.
>> This is evidence that people do not follow basic engineering practices (for whatever seemingly valid reasons - such as a project deadline - at the time).
>>
>> Writing a program (with manual memory management) that does not have dangerous memory issues is not an intrinsically hard task. It does, however, require you to *design* your program, not *grow* it (which, btw, is what a software *engineer* should do anyway).
>
> If the system in practice does not bear any resemblance to the system in theory, then one cannot defend the theory. If, in practice, programming languages without safety checks produce very common bugs which have caused millions of dollars in damage, then defending the language on the theory that you might be able to make it safe with the right effort is untenable.

Since I have not defended anything, this is missing the point.

>
> Why is it that CI test runs catch bugs when people should be running tests locally? Why is it that adding unittest blocks to the language made unit tests in D way more popular when people should always be writing tests?

These are fallacies of presupposition.

> Because we're human. We make mistakes.

I agree, but still missing the point I made.

> We put things off that shouldn't be put off.

Assumption, but I won't dispute it in my personal case.

>
> It's like the new safety features on handheld buzzsaws which make it basically impossible to cut yourself. Should people be using these things safely? Yes. But, accidents happen, so the tool's design takes human behavior into account and we're all the better for it.

Quite, but that's not exclusive to memory bugs (though they are usually the ones with the most serious implications) and still misses the point of my argument. If you want *evidence of memory safety being the future*, you have to write programs making use of *memory safety*, put them out into the wild and let people try to break them for at least 10-15 years (test of time).
*Then* you have to provide conclusive (or at the very least hard to refute) proof that the reason no one could break them was the memory safety features; and then, *finally*, you can point to all the people *still not using memory safe languages* and say "Told you so". I know it sucks, but that's the price as far as I'm concerned; and it's one *I'm* trying to help pay by using a language like D with a GC, automatic reference counting, and scope guards for memory safety.
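As an aside, a minimal sketch of the scope guard part of that; the manual allocation is purely illustrative:

    import core.stdc.stdlib : malloc, free;

    void withBuffer()
    {
        auto buf = cast(ubyte*) malloc(1024);
        if (buf is null) return;
        scope (exit) free(buf); // runs on every exit path, exceptions included

        // ... use buf[0 .. 1024]; early returns can't leak it ...
    }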
You *cannot* appropriate one (or even a handful of) examples of someone doing something wrong in language A as evidence for language feature C (still missing from A) being *the future*, just because feature C is *supposed* to make doing those things wrong harder.
They are evidence that there's something wrong and it needs fixing.
I personally think memory safety might be one viable option for that (even if it only addresses one symptom), but I've only ever witnessed over-promises such as "X is the future" in anything engineering-related play out to less than what was promised.

>
> Using a programming language which doesn't take human error into account is a recipe for disaster.

Since you're going for extreme generalization, I'll bite: Humans are a recipe for disaster.