October 30, 2013 Re: Everyone who writes safety critical software should read this
Posted in reply to Timon Gehr

On Wednesday, 30 October 2013 at 21:18:16 UTC, Timon Gehr wrote:
> On 10/30/2013 11:01 AM, Chris wrote:
>> "Poorly designed firmware caused unintended operation, lack of driver
>> training made it fatal."
>> So it's the driver's fault, who couldn't possibly know what was going on
>> in that car-gone-mad? To put the blame on the driver is cynicism of the worst kind.
>> Unfortunately, that's a common (and dangerous) attitude I've come across
>> among programmers and engineers.
>
> There are also misguided end users who believe there cannot be any other way (and sometimes even believe that the big players in the industry are infallible, and hence the user is to blame for any failure).
>
>> The user has to adapt to anything they
>> fail to implement or didn't think of. However, machines have to adapt to
>> humans not the other way around (realizing this was part of Apple's
>> success in UI design,
>
> AFAIK Apple designs are not meant to be adapted. It seems to be mostly marketing.

Forget about the marketing campaigns for a moment. Xerox (back in the day) started to develop GUIs as we know them, and the developers later went to Apple. Apple was one of the first companies to go for user experience and to try to design things in a more intuitive way, i.e. around how humans work and think and what they expect (I know the command line crowd hates GUIs). I'd say that Windows is at the other end of the scale. Try to find info "about this computer" on a Mac, then try to find it in (each new version of) Windows. And don't forget that you shut Windows down where it says "Start". Ha ha ha! That said, Apple is going down the wrong road now too, IMO. 10.8 is just annoying in many ways. Too intrusive, too patronising, too much like a prison.

>> Ubuntu is very good now too).
>
> The distribution is not really indicative of the UI/window manager you'll end up using, so what do you mean?

Ubuntu is quite good now UI-wise. I recently had a user who found everything almost immediately, although she had never used Ubuntu before, doesn't know much about computers, and doesn't even like computers. That's what I mean by intuitive: the computer is arranged in the way the human mind works. Things are easy to find and use.
October 30, 2013 Re: Everyone who writes safety critical software should read this
Posted in reply to Timon Gehr

On Wednesday, 30 October 2013 at 21:18:16 UTC, Timon Gehr wrote:
> On 10/30/2013 11:01 AM, Chris wrote:
>> "Poorly designed firmware caused unintended operation, lack of driver
>> training made it fatal."
>> So it's the driver's fault, who couldn't possibly know what was going on
>> in that car-gone-mad? To put the blame on the driver is cynicism of the worst kind.
>> Unfortunately, that's a common (and dangerous) attitude I've come across
>> among programmers and engineers.
>
> There are also misguided end users who believe there cannot be any other way (and sometimes even believe that the big players in the industry are infallible, and hence the user is to blame for any failure).
>
I know. A lot of people are like that. But who (mis)guides them? The big PR campaigns by big companies that talk about "safety" and "precision" and give users a false sense of security. Now that I think of it, maybe the lack of a simple mechanical backup isn't down to the engineering culture at all. Maybe it has to do with the fact that a product might seem less attractive if, by including a backup mechanism, the company admits that it can fail.
October 30, 2013 Re: Everyone who writes safety critical software should read this
Posted in reply to H. S. Teoh

On Wednesday, 30 October 2013 at 19:25:45 UTC, H. S. Teoh wrote:
> On Tue, Oct 29, 2013 at 07:14:50PM -0700, Walter Bright wrote:
> [...]
> For automated testing to be practical, of course, requires that the
> system be designed to be tested in that way in the first place -- which
> unfortunately very few programmers have been trained to do. "Whaddya
> mean, make my code modular and independently testable? I've a deadline
> by 12am tonight, and I don't have time for that! Just hardcode the data
> into the global variables and get the product out the door before the
> midnight bell strikes; who cares if this thing is testable, as long as
> the customer thinks it looks like it works!"
>
> Sigh.
>
>
> T

Agree 100%.

I read a book way back in the late 1990s, "Rapid Development" by Steve McConnell I think it was called. I remember it was a great read, filled with case studies in which development best practices get dissolved by poor management. This Toyota story reads very much like the examples in that book.
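To make the quoted complaint concrete, here is a minimal D sketch (the functions and the tax-rate example are invented for illustration, not taken from any real codebase) of the difference between code welded to a global and code whose dependency is an explicit parameter that a unittest can drive:

```d
// Hard to test: behaviour depends on hidden module-level state,
// so every test has to set up (and remember to restore) the global.
double taxRate = 0.25;

double priceWithTaxGlobal(double net)
{
    return net * (1 + taxRate);
}

// Testable: the dependency is passed in explicitly.
double priceWithTax(double net, double rate)
{
    return net * (1 + rate);
}

unittest
{
    assert(priceWithTax(100, 0.25) == 125);
    assert(priceWithTax(100, 0.0) == 100); // boundary case: zero rate
}
```

The second version costs nothing extra to write, which is the point: testability is a design decision made up front, not a feature bolted on at midnight.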
October 30, 2013 Re: Everyone who writes safety critical software should read this
Posted in reply to growler

On Wednesday, 30 October 2013 at 22:31:45 UTC, growler wrote:
> On Wednesday, 30 October 2013 at 19:25:45 UTC, H. S. Teoh wrote:
>> On Tue, Oct 29, 2013 at 07:14:50PM -0700, Walter Bright wrote:
>
>> [...]
>> For automated testing to be practical, of course, requires that the
>> system be designed to be tested in that way in the first place -- which
>> unfortunately very few programmers have been trained to do. "Whaddya
>> mean, make my code modular and independently testable? I've a deadline
>> by 12am tonight, and I don't have time for that! Just hardcode the data
>> into the global variables and get the product out the door before the
>> midnight bell strikes; who cares if this thing is testable, as long as
>> the customer thinks it looks like it works!"
>>
>> Sigh.
>>
>>
>> T
>
> Agree 100%.
>
> I read a book way back in the late 1990s, "Rapid Development" by Steve McConnell I think it was called. I remember it was a great read, filled with case studies in which development best practices get dissolved by poor management. This Toyota story reads very much like the examples in that book.
Mind you, corporate ideology might be just as harmful as bad engineering. I'm sure there is the odd engineer who points out a thing or two to the management, but they won't have any of that. German troops in Russia were not provided with winter gear because the ideology of the leadership dictated (this is the right word) that Moscow be taken before winter. I wouldn't rule out that "switch off the engine" buttons are taboo in certain companies for purely ideological reasons.
October 31, 2013 Re: Everyone who writes safety critical software should read this
Posted in reply to H. S. Teoh

On Wednesday, 30 October 2013 at 19:25:45 UTC, H. S. Teoh wrote:
> "This piece of code is so trivial, and so obviously, blatantly correct,
> that it serves as its own proof of correctness." (Later...) "What do you
> *mean* the unit tests are failing?!"

I have quite a lot of horror stories about this kind of code :D These days I don't try to argue with people who come at me with this; I simply write a test. Usually you don't need to get very far: absurdly high volume, malformed input, constrained memory, run the thing in a thread and kill the thread in the middle, etc. Fortunately, it is much less common now for me to have to do so.

A programming school in France, which is well known for having uncommon practices (but which turns out great people in the end), runs every program submitted by its students in an environment with 8KB of RAM. The program is not expected to do its job, but it is expected to at least fail properly.

> Most software companies have bug trackers,

I used to work at a company with a culture strongly opposed to the use of such tools, for some reason I still do not understand. At some point I simply told people that bugs didn't exist unless they were in the bug tracker.

> For automated testing to be practical, of course, requires that the
> system be designed to be tested in that way in the first place -- which
> unfortunately very few programmers have been trained to do. "Whaddya
> mean, make my code modular and independently testable? I've a deadline
> by 12am tonight, and I don't have time for that! Just hardcode the data
> into the global variables and get the product out the door before the
> midnight bell strikes; who cares if this thing is testable, as long as
> the customer thinks it looks like it works!"

My experience tells me that this pays off in a matter of days. Days as in less than a week. Doing the hacky stuff feels faster, but measurement says otherwise.
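A minimal D sketch of that "just write a test" move (the helper function is hypothetical): trivial boundary inputs are usually all it takes to sink "obviously correct" code.

```d
import std.array : split;

/// Hypothetical "so trivial it's obviously correct" helper:
/// return the first word of a line.
string firstWord(string line)
{
    auto parts = line.split(" ");
    // The naive version returned parts[0] unconditionally
    // and blew up on empty input.
    return parts.length ? parts[0] : "";
}

unittest
{
    assert(firstWord("hello world") == "hello"); // the case the author tried
    assert(firstWord("") == "");                 // empty input: where the naive version died
    assert(firstWord("   ") == "");              // malformed input: separators only
}
```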
October 31, 2013 Re: Everyone who writes safety critical software should read this
Posted in reply to deadalnix

On Thu, Oct 31, 2013 at 02:17:59AM +0100, deadalnix wrote:
> On Wednesday, 30 October 2013 at 19:25:45 UTC, H. S. Teoh wrote:
> > "This piece of code is so trivial, and so obviously, blatantly correct, that it serves as its own proof of correctness." (Later...) "What do you *mean* the unit tests are failing?!"
>
> I have quite a lot of horror stories about this kind of code :D These days I don't try to argue with people who come at me with this; I simply write a test. Usually you don't need to get very far: absurdly high volume, malformed input, constrained memory, run the thing in a thread and kill the thread in the middle, etc.

A frighteningly high percentage of regular code already fails for trivial boundary conditions (like passing in an empty list, or NULL, or an empty string, etc.), before you even get to unusual input or stress tests.

> Fortunately, it is much less common now for me to have to do so.
>
> A programming school in France, which is well known for having uncommon practices (but which turns out great people in the end), runs every program submitted by its students in an environment with 8KB of RAM. The program is not expected to do its job, but it is expected to at least fail properly.

Ha. I should go to that school and write programs that don't need more than 8KB of RAM to work. :) I used to pride myself on programs that require the absolute minimum of resources to work. (Unfortunately, I can't speak well of the quality of the code, though! :P)

> > Most software companies have bug trackers,
>
> I used to work at a company with a culture strongly opposed to the use of such tools, for some reason I still do not understand. At some point I simply told people that bugs didn't exist unless they were in the bug tracker.

Wow. No bug tracker?? That's just insane. How do they keep track of anything?? At my current job, we actually use the bug tracker not just for actual bugs but for tracking project discussions (via bug notes that serve as a good reference later when we need to review why a particular decision was made).

> > For automated testing to be practical, of course, requires that the system be designed to be tested in that way in the first place -- which unfortunately very few programmers have been trained to do. "Whaddya mean, make my code modular and independently testable? I've a deadline by 12am tonight, and I don't have time for that! Just hardcode the data into the global variables and get the product out the door before the midnight bell strikes; who cares if this thing is testable, as long as the customer thinks it looks like it works!"
>
> My experience tells me that this pays off in a matter of days. Days as in less than a week. Doing the hacky stuff feels faster, but measurement says otherwise.

Days? It pays off in *minutes* IME. When I first started using unittest blocks in D, the quality of my code improved *instantly*. Nasty bugs (caused by careless mistakes) were caught immediately, rather than the next day after ad hoc manual testing (which also misses 15 other bugs that automated testing would've caught).

This is the point I was trying to get at: manual testing is tedious and error-prone, because humans are no good at repetitive processes. It's too boring, and that causes us to take shortcuts, thus missing out on retesting critical bits of code that may just happen to have acquired bugs since the last code change. But you *need* repetitive testing to ensure the new code didn't break the old, so some kind of unittesting framework is mandatory. Otherwise tons of bugs get introduced silently and bite you at the most inopportune time (like when a major customer has just deployed it in their production environment).

D's unittests may have their warts, but the fact that they (1) are written in D, and thus encourage copious tests and *up-to-date* tests, and (2) are automated when compiling with -unittest (which I'd recommend as a default flag during development), singlehandedly addresses the major points of automated testing already.

I've seen codebases where unittests were in a pariah class of "run it if you dare, don't pay attention to the failures 'cos we think they're irrelevant, 'cos the test cases are outdated", or "that's QA's job, it's not our department". That totally defeats the purpose. Tests should be (1) automatically run *by default* during development, and (2) kept up-to-date. Point (2) is especially hard when the unittesting framework isn't built into the language, because nobody wants to shift gears to write tests when they could be "more productive" cranking out code (or at least, that's the perception). The result is that the tests are outdated, and the programmers stop paying attention to failing tests just like they ignore compiler warnings. D does it right on both points, even if people complain about issues with selective testing, etc.


T

--
The fact that anyone still uses AOL shows that even the presence of options doesn't stop some people from picking the pessimal one. -- Mike Ellis
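As a minimal illustration of that workflow (the counting function is invented for the example): build with "dmd -unittest -run file.d" and the unittest blocks execute before main(), so a stale or failing test bites during everyday development instead of in a separate QA pass.

```d
size_t countOdd(const(int)[] xs)
{
    size_t n;
    foreach (x; xs)
        if (x % 2 != 0) // also true for negative odd values in D
            ++n;
    return n;
}

unittest
{
    assert(countOdd([1, 2, 3]) == 2);
    assert(countOdd([]) == 0);   // boundary case: empty slice
    assert(countOdd([-1]) == 1); // -1 % 2 evaluates to -1, which is != 0
}

void main() {}
```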
October 31, 2013 Re: Everyone who writes safety critical software should read this
Posted in reply to Walter Bright

On Tuesday, October 29, 2013 19:14:50 Walter Bright wrote:
> 3. Beating out of engineers the hubris that "this part I designed will never fail!" Jeepers, how often I've heard that.
It makes me think of a manager where I work who was happy that one of the projects had no bugs reported against it by the testers, whereas we thought that this was horrible. We _knew_ that there were bugs (there's no way that there weren't), but they weren't being reported. So we took the lack of bug reports as a horrible sign, whereas he took it to mean that the product was in good shape.
Going to the extreme of assuming that something you wrote won't fail is even worse. I don't trust even the stuff that I tested to death to be bug-free, and that's not even taking into account the possibility of its assumptions falling apart for some reason (e.g. the underlying system calls ceasing to function properly) or hardware failures (which will happen eventually). No program will run forever or perfectly (especially one of any real complexity), and no hardware lasts forever. That's a given, and it's sad to see a trained engineer thinking otherwise.
- Jonathan M Davis
October 31, 2013 Re: Everyone who writes safety critical software should read this
Posted in reply to Walter Bright

On Wed, 30 Oct 2013 11:12:48 -0700, Walter Bright <newshound2@digitalmars.com> wrote:
> On 10/30/2013 3:01 AM, Chris wrote:
>> On Wednesday, 30 October 2013 at 03:24:54 UTC, Walter Bright wrote:
>>> Take a look at the reddit thread on this:
>>>
>>> http://www.reddit.com/r/programming/comments/1pgyaa/toyotas_killer_firmware_bad_design_and_its/
>>>
>>> Do a search for "failsafe". Sigh.
>>
>> One of the comments under the original article you posted says
>>
>> "Poorly designed firmware caused unintended operation, lack of driver training
>> made it fatal."
>>
>> So it's the driver's fault, who couldn't possibly know what was going on in that
>> car-gone-mad? To put the blame on the driver is cynicism of the worst kind.
>
> Much effort in cockpit design goes into trying to figure out what the pilot would do "intuitively" and ensuring that that is the right thing to do.
>
> Of course, we try to do that with programming language design, too, with varying degrees of success.
>
>> Unfortunately, that's a common (and dangerous) attitude I've come across among
>> programmers and engineers. The user has to adapt to anything they fail to
>> implement or didn't think of. However, machines have to adapt to humans not the
>> other way around (realizing this was part of Apple's success in UI design,
>> Ubuntu is very good now too).
>>
>> I warmly recommend the book "Architect or Bee":
>>
>> http://www.amazon.com/Architect-Bee-Human-Technology-Relationship/dp/0896081311/ref=sr_1_1?ie=UTF8&qid=1383127030&sr=8-1&keywords=architect+or+bee

Having experience with both a 737 flight deck and a Cessna 172/G1000 flight deck, I can personally say that if even one of the devs on either of those (very different) flight information systems had a clue about HCI, he was physically beaten for bringing it up. Yes, the absolute fundamentals might be intuitive (AI, DG, etc.). But if you need anything advanced... God help you. I did eventually figure it out (and started helping the instructors at my FBO), but intuitive is NOT the word I would use...

There is also a story floating around that the boys (I'll not deign to call them programmers...) at Honeywell FINALLY called in a group of pilots for HCI analysis/critique of the 787 flight management systems months after they had shipped the code to the FAA for certification...

And lastly, although it got buried because France needs to protect EADS, there was a "by design" bug that caused the Angle of Attack indicator NOT to show when AF447 was in deep stall, overridden by the faulty airspeed indication, never mind that this is the ONLY indicator a pilot needs to recover from a stall... If the pilots had seen it when the plane went into its unusual attitude, they could have corrected immediately. Sorry Airbus, but the computer does NOT always know best; it's only as good as the [non-pilot] programmers feeding it code... :-)

--
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
October 31, 2013 Re: Everyone who writes safety critical software should read this
Posted in reply to Adam Wilson

On Thursday, 31 October 2013 at 06:32:41 UTC, Adam Wilson wrote:
> Having experience with both a 737 flight deck and a Cessna 172/G1000 flight deck, I can personally say that if even one of the devs on either of those (very different) flight information systems had a clue about HCI, he was physically beaten for bringing it up. Yes, the absolute fundamentals might be intuitive (AI, DG, etc.). But if you need anything advanced... God help you. I did eventually figure it out (and started helping the instructors at my FBO), but intuitive is NOT the word I would use...
>
> There is also a story floating around that the boys (I'll not deign to call them programmers...) at Honeywell FINALLY called in a group of pilots for HCI analysis/critique of the 787 flight management systems months after they had shipped the code to the FAA for certification...
>
> And lastly, although it got buried because France needs to protect EADS, there was a "by design" bug that caused the Angle of Attack indicator NOT to show when AF447 was in deep stall, overridden by the faulty airspeed indication, never mind that this is the ONLY indicator a pilot needs to recover from a stall... If the pilots had seen it when the plane went into its unusual attitude, they could have corrected immediately. Sorry Airbus, but the computer does NOT always know best; it's only as good as the [non-pilot] programmers feeding it code... :-)

I'm still waiting for the day when people realize this! I always hear users say "Ah, it's been calculated by a computer! It must be correct.", assuming that machines are perfect. I always ask the question "But who builds and programs the machines?" Humans, of course. And we are not perfect, far from it.

Another story I've heard is that the German revenue service had a clever program that was supposed to detect whether a shop or pub owner was cheating. The program assumed that if a certain threshold of round numbers (in the colloquial sense of the word) in the balances was exceeded, the business owner was cheating. Now, there was one pub owner who only had round-number prices (and I know others too), simply because he couldn't be bothered to deal with awkward prices like €4.27 and to always have the right change on hand. This is not uncommon. But the programmers at the revenue service had based their statistics on all retailers in the country, including supermarkets, department stores, etc.
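A hedged sketch in D of the heuristic as described (the function name and the 30% cutoff are invented; the real program's threshold isn't known), showing how a base rate calibrated on all retailers falsely flags the honest round-prices pub:

```d
import std.algorithm : count;
import std.math : isClose, round;

/// Flag a business if the share of round amounts in its balances
/// exceeds a cutoff calibrated on the whole retail population.
bool looksSuspicious(const(double)[] amounts, double cutoff = 0.30)
{
    if (amounts.length == 0)
        return false;
    auto roundOnes = amounts.count!(a => isClose(a, round(a)));
    return cast(double) roundOnes / amounts.length > cutoff;
}

unittest
{
    // Supermarket-style prices: few round numbers, so it passes.
    assert(!looksSuspicious([4.27, 1.99, 13.45, 2.12]));
    // The honest pub that only uses round prices is falsely flagged:
    // the base rate came from the wrong population.
    assert(looksSuspicious([3.00, 4.00, 5.00]));
}
```

The bug isn't in the arithmetic; it's in the assumption that one distribution of prices fits every kind of business.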
October 31, 2013 Re: Everyone who writes safety critical software should read this
Posted in reply to Walter Bright

On Tue, 29 Oct 2013 20:38:08 -0000, Walter Bright <newshound2@digitalmars.com> wrote:
> https://news.ycombinator.com/item?id=6636811
>
> I know that everyone is tired of hearing my airframe design stories, but it's obvious to me that few engineers understand the principles of failsafe design. This article makes that abundantly clear - and the consequences of paying no attention to it.
>
> You can add in Fukushima and Deepwater Horizon as more costly examples of ignorance of basic failsafe design principles.
>
> Yeah, I feel strongly about this.

One safety mechanism was all that saved North Carolina: www.youtube.com/watch?v=SHZAaGidUbg&t=2m58s

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/