October 31, 2013
On 10/30/2013 6:30 AM, Russel Winder wrote:
> On Tue, 2013-10-29 at 14:38 -0700, Walter Bright wrote:
> […]
>> I wrote one for DDJ a few years back, "Safe Systems from Unreliable Parts". It's
>> probably scrolled off their system.
>
> Update it and republish somewhere. Remember the cool hipsters think if
> it is over a year old it doesn't exist. And the rest of us could always
> do with a good reminder of quality principles.
>

The cool hipsters are never going to accept that reliability is more important than their own personal time. Hell, they refuse to even accept that their users' time is more important than their own. They'll never go for safety even if you cram it down their throats. Better to just drum them out of the industry. Or better yet, out of society as a whole.

But a repost of the article is a good idea :)

October 31, 2013
On 10/30/2013 5:18 PM, Timon Gehr wrote:
> On 10/30/2013 11:01 AM, Chris wrote:
>> "Poorly designed firmware caused unintended operation, lack of driver
>>  training made it fatal."
>> So it's the driver's fault, who couldn't possibly know what was going on
>> in that car-gone-mad? To put the blame on the driver is cynicism of
>> the worst kind.
>> Unfortunately, that's a common (and dangerous) attitude I've come across
>> among programmers and engineers.
>
> There are also misguided end users who believe there cannot be any other
> way (and sometimes even believe that the big players in the industry are
> infallible, and hence the user is to blame for any failure).
>

I have a deep hatred for such people. (I've come across far too many.)

>> The user has to adapt to anything they
>> fail to implement or didn't think of. However, machines have to adapt to
>> humans not the other way around (realizing this was part of Apple's
>> success in UI design,
>
> AFAIK Apple designs are not meant to be adapted. It seems to be mostly
> marketing.
>

This is very true (at least for Apple's "Return of Jobs" era). And it's not surprising: Steve Jobs had a notoriously heavy hand in Apple's designs, and yet Jobs himself was never realistically much more than a glorified salesman. The company was literally being run by a salesman. And that easily explains both the popularity and the prevalence of bad design.

>> Ubuntu is very good now too).
>
> The distribution is not really indicative of the UI/window manager
> you'll end up using, so what do you mean?

Ordinarily, yes, but I would think there'd be an uncommonly strong correlation between Ubuntu users and Unity users.

October 31, 2013
On Wednesday, 30 October 2013 at 20:06:19 UTC, Walter Bright wrote:
> On 10/30/2013 12:24 PM, H. S. Teoh wrote:
>> On Tue, Oct 29, 2013 at 07:14:50PM -0700, Walter Bright wrote:
> There's still plenty of reason to improve software quality.
>
> I just want to emphasize that failsafe system design is not about improving quality.

I've been following this thread for a while, and it happens that I work on this kind of software.

I can't really say why it works, but several elements help:

- some quite strict coding style guidelines (they also help a lot when working on legacy code)
- a small "safety" team whose sole job is to question the code we produce (I am one of the developers), always asking: "here, what happens if this fails?"
- analysis and traceability tools (you know: the "process" thing) that help both with the code (MISRA-C, clang, etc.) and with the documentation
- good bug tracking and thorough discussion of the problem at hand, both before and after implementation
- the developers themselves questioning every line of code they write

No code is accepted unless it has a way to fail gracefully. For this reason, rolling back changes on error makes up a large proportion of the code, so having that scope() statement would be pure gold for me.

Basically, I think that critical code is almost always developed as if it were transaction-based. It succeeds, or it leaves no trace.
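D's scope(failure) expresses this directly; in the C/C++ toolchains such projects usually run on, the same "succeed or leave no trace" pattern can be sketched with a small RAII guard. Everything below (the guard, the toy "transaction", the names) is invented for illustration, not taken from any real codebase:

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>
#include <vector>

// Minimal scope guard: runs its rollback action on scope exit unless
// dismissed, mimicking D's scope(failure).
class ScopeFailure {
    std::function<void()> rollback_;
    bool armed_ = true;
public:
    explicit ScopeFailure(std::function<void()> rollback)
        : rollback_(std::move(rollback)) {}
    void dismiss() { armed_ = false; }           // call on success
    ~ScopeFailure() { if (armed_) rollback_(); } // undo on any failure path
};

// Hypothetical "transaction": append two items, or leave the log untouched.
void append_pair(std::vector<int>& log, int a, int b) {
    log.push_back(a);
    ScopeFailure undo([&log] { log.pop_back(); }); // undo step 1 on failure
    if (b < 0)
        throw std::invalid_argument("negative value");
    log.push_back(b);
    undo.dismiss(); // both steps succeeded: keep the changes
}
```

If the exception is thrown, stack unwinding runs the guard and the first push_back is undone, so a failed call leaves no trace.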

OTOH, things that I would really like to see work better:

- more flexibility from management when a developer says: "I think this part of the code could be improved, and some refactoring would help"
- an incremental process; that is, management should assume that the first shipped version is not perfect, instead of assuming it is perfect and being unprepared for change requests

October 31, 2013
On 10/31/2013 9:00 AM, eles wrote:
> Basically, I think that critical code is almost always developed as if being
> transaction-based. It succeeds or it leaves no trace.

That's great for the software.

What if the hardware fails? Such as a bad memory bit that flips a bit in the perfect software, and now it decides to launch nuclear missiles?
October 31, 2013
On 31.10.2013 19:46, Walter Bright wrote:
> On 10/31/2013 9:00 AM, eles wrote:
>> Basically, I think that critical code is almost always developed as if
>> being
>> transaction-based. It succeeds or it leaves no trace.
> 
> That's great for the software.
> 
> What if the hardware fails? Such as a bad memory bit that flips a bit in the perfect software, and now it decides to launch nuclear missiles?

Three different pieces of software (written by different teams) that should do the same thing, with a majority vote on the correct action? Or even more copies, depending on the clusterfuck a flipped bit can cause...

The interaction with hardware can be a bit tricky, and after all, anything can go wrong under the right circumstances, no matter how hard you try. It is up to you to weigh the costs against the benefits.
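The voting idea for three redundant channels can be sketched in a few lines; the scalar result type and the "fail safe on total disagreement" policy here are illustrative assumptions, not a real flight-software interface:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// 2-of-3 majority vote over three independently computed results.
// Returns the agreed value, or nothing if all three channels disagree
// (in which case a real system would enter its safe state).
std::optional<int32_t> vote(int32_t a, int32_t b, int32_t c) {
    if (a == b || a == c) return a;  // a agrees with at least one other
    if (b == c) return b;            // a is the odd one out
    return std::nullopt;             // total disagreement: fail safe
}
```

A single flipped bit corrupts at most one channel, so the other two still outvote it; two simultaneous, identical corruptions defeat the scheme, which is why more copies (or diverse implementations) are used when the stakes justify the cost.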
October 31, 2013
On 10/31/2013 7:57 AM, H. S. Teoh wrote:
> You don't know how thankful I am for having learnt the concept of
> pumping the brakes, ABS or not. I'm afraid too many driving instructors
> nowadays just advise slamming the brakes and relying on the ABS to do
> the job. It doesn't *always* work!

Pumping the brakes is not how to get the max braking effect.

The way to do it is to push the pedal to about 70-80% of braking force. This causes the car to shift its weight onto the front tires and load them up. Then go to 100%. You'll stop a lot faster, because with more weight on the front tires they have more grip. (I think this is called two-step braking.)

You lose about 30% of braking force when the tires break loose. The trick is to press the pedal just short of that happening, which can be found with a bit of practice.

The downside of just slamming the brakes on and letting the ABS take care of it is that you lose the two-step effect.

There are also cases where you *want* to lock the tires. That case is when you're in a skid and the car is at a large angle relative to its velocity vector. This will cause the car to slide in a straight line, meaning that other cars can avoid you. If you don't lock the wheels, the wheels can arbitrarily "grab" and shoot the car off in an unexpected direction - like over the embankment, or into the car that was dodging you. The car will also stop faster than if the wheels suddenly grab when you're at a 30 degree angle.

But yeah, I'd guess less than 1% of drivers know this stuff. And even if you know it, you have to practice it now and then to be proficient at it.

October 31, 2013
On Thursday, 31 October 2013 at 19:45:17 UTC, Walter Bright wrote:
> But yeah, I'd guess less than 1% of drivers know this stuff. And even if you know it, you have to practice it now and then to be proficient at it.

On a lot of cars there's a hidden switch to turn off ABS. It's recommended (AFAIK) to do so if you expect to be driving on a loose surface (ice, snow, gravel...).

October 31, 2013
Am Thu, 31 Oct 2013 17:00:59 +0100
schrieb "eles" <eles@eles.com>:

> -an incremental process, that is the management should assume that the first shipped version is not perfect instead of assuming that it is perfect and not being prepared for change requests

I would discriminate between change requests and bug reports. You should be responsible for any bugs and fix them, but changes resulting from unclear specifications are entirely different. I wouldn't do any more real work on the project than is written down in the contract. (Meaning: Be prepared for the changes you allowed for, not random feature requests.)

-- 
Marco

October 31, 2013
On Thursday, 31 October 2013 at 18:46:07 UTC, Walter Bright wrote:
> On 10/31/2013 9:00 AM, eles wrote:
> What if the hardware fails? Such as a bad memory bit that flips a bit in the perfect software, and now it decides to launch nuclear missiles?

If that happens, any amount of software verification could become useless. On the latest project I'm working on, we simply went with two identical (not independently developed, just identical) hardware units, each running the embedded software.

A comparator compares the two outputs. Any difference triggers an emergency procedure: either a hardware reboot through a watchdog, or a controlled shutdown (to avoid an infinite reboot loop).
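That dual-channel comparator logic might look roughly like this; the output type, the reboot counter, and the two emergency actions are assumptions made for the sketch, not the poster's actual design:

```cpp
#include <cassert>
#include <cstdint>

enum class Action { RebootViaWatchdog, ControlledShutdown };

// Compare the outputs of two identical channels. On mismatch, choose the
// emergency procedure: reboot through the watchdog, unless we have already
// rebooted too often -- then shut down to avoid an infinite reboot loop.
bool compare_outputs(uint32_t channel_a, uint32_t channel_b,
                     unsigned reboot_count, unsigned max_reboots,
                     Action& chosen) {
    if (channel_a == channel_b)
        return true;  // outputs agree: keep running
    chosen = (reboot_count < max_reboots)
                 ? Action::RebootViaWatchdog
                 : Action::ControlledShutdown;
    return false;     // mismatch: caller executes the chosen procedure
}
```

Unlike 2-of-3 voting, two channels can only detect a disagreement, not decide which side is right, so the only safe response is to stop or restart.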
October 31, 2013
On Thursday, 31 October 2013 at 20:32:49 UTC, Marco Leise wrote:
> Am Thu, 31 Oct 2013 17:00:59 +0100
> schrieb "eles" <eles@eles.com>:
> I would discriminate between change requests and bug reports.
> You should be responsible for any bugs and fix them, but
> changes resulting from unclear specifications are entirely
> different. I wouldn't do any more real work on the project
> than is written down in the contract. (Meaning: Be prepared
> for the changes you allowed for, not random feature requests.)

Yeah, maybe it's corporate culture to avoid the term "bug", but we always use "change request". Maybe it has a better image :)

Normally, it is assumed that passing the tests proves the specifications are met, so the software is perfect.

That holds, of course, only if the tests themselves are 100% correct and *really* extensive.

And some things, like race conditions and other heisenbugs, occur only rarely. So you still need to reason about the code, not just test it.

In practice, fixing a bug is not really different from evolving the code, except that the former is more urgent. Anyway, in the end, it's the guy with the budget who decides.

It is an iterative process, though: you start with some ideas, you implement some code, you go back to the architecture description and change it a bit; in the meantime you receive a request to add some new specification or functionality, so it's back to square one, and so on.

But that is development. What really ensures quality is that, at the end, before shipping, all the steps are checked once again, this time in the normal, forward order: requirements, architecture, code review, tests. *Only then* is it compiled and finally passed on... well, not to production, but to the dedicated Validation team.