September 01, 2018
On Saturday, 1 September 2018 at 20:48:27 UTC, Walter Bright wrote:
> On 9/1/2018 5:25 AM, tide wrote:
>> and that all bugs can be solved with asserts
>
> I never said that, not even close.
>
> But I will maintain that DVD players still hanging on a scratched DVD after 20 years of development means there's some cowboy engineering going on, and an obvious lack of concern about that from the manufacturer.

Firstly, you have to take into account the context around why that bug exists and why it is not fixed; it comes down to a risk-cost trade-off.

Product managers are totally driven by budget, and in consumer goods they dictate the engineering resources. I think you'll find most large DVD manufacturers have discovered that it is not cost-effective to give engineers the budget to fix these annoying bugs. This is because most consumers will be annoyed but then go out and purchase some other product by the same manufacturer. I.e., these bugs do not harm their brand enough.

This leads to the situation where the engineering is shoddy not because the programmers are bad engineers, but because they don't even get the chance to engineer due to time constraints.

Secondly, DVD players and critical flight systems are apples and oranges in terms of the engineering rigor required. One will mildly annoy the odd consumer, who 9 times out of 10 will still purchase <Insert Brand Here> products again, and the other will likely kill 100s of people.

To put it another way: one will give the engineers *zero* resources to work on non-blocking bugs, and the other must have *zero* non-blocking bugs.

Cheers,
Norm
September 01, 2018
On 08/31/2018 03:50 PM, Walter Bright wrote:
> https://news.ycombinator.com/item?id=17880722
> 
> Typical comments:
> 
> "`assertAndContinue` crashes in dev and logs an error and keeps going in prod. Each time we want to verify a runtime assumption, we decide which type of assert to use. We prefer `assertAndContinue` (and I push for it in code review),"
> 

Yea, that one makes me cringe. I could at least understand "unwind the stack 'till you're at least out of this subsystem, and THEN MAYBE abort/retry (but not ignore)", though I know you disagree on that. But to just...continue as if nothing happened...Ugh. Just reminds me of common dynamic scripting language design and why I never liked those languages: If the programmer wrote something nonsensical, best to do something completely random instead of giving them an error message!
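Just to make the pattern concrete, here's a rough D sketch of what an `assertAndContinue` along those lines presumably looks like (the name comes from the quoted comment; the implementation is my guess, not anything from their codebase):

import std.stdio : stderr;

// Hypothetical sketch only -- not from that codebase. Compiled with -debug it
// behaves like a plain assert; otherwise it just logs and keeps going.
void assertAndContinue(bool condition, string msg,
                       string file = __FILE__, size_t line = __LINE__)
{
    debug
    {
        assert(condition, msg);   // dev build: fail loudly
    }
    else
    {
        if (!condition)
            stderr.writefln("assertion failed at %s:%s: %s", file, line, msg);
        // ...and then carry on as if nothing happened.
    }
}

That else branch is exactly the part that bothers me: the program has just proven its own assumptions wrong, and it barrels on anyway.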


> "Stopping all executing may not be the correct 'safe state' for an airplane though!"

Honestly, comments like this suggest to me someone who's operating under the false assumption that "stop all executing" means "permanently stop all of the software running on all components of the plane" rather than "stop (and possibly restart) one of several redundant versions of one particular subsystem". Which suggests they only read comments and not the article.

Interestingly, the same user also said:

"Software development often does seem like a struggle between reliability/robustness and safety/correctness."

WAT?!

That's like saying, "Speaker design often seems like a struggle between loudness and volume." Each one *requires* the other.

Scary.


> "One faction believed you should never intentionally crash the app"

I can understand how people may naively come to that conclusion: "Duh, crashing is bad, so why would you do it intentionally?" But, of course, the reasoning is faulty.

There's also the "It depends on your industry/audience. You're talking airplanes, but my software isn't critical enough to bother with the same principles" crowd. I wonder if it might help to remind such people that that's *exactly* how MS ended up with Windows Me:

This is well-known:

After Win3.11, MS decided that businesses required more reliability from their OS than the home users needed. So they split Windows into two product lines: WinNT for business (more focus on reliability) and Win95 for home (speed and features were more important).

Things started out mostly ok. Win95 wasn't quite as reliable as NT, but not a gigantic difference, and it was expected. Then Win98...some more issues, while NT stayed more or less as-was. Then WinMe hit. BOOM!

By that point, the latest in the WinNT line was "Win2k", which was STILL regarded as pretty well stable, so MS did what's probably the smartest move they've ever made: Killed off the 9x/Me codebase, added DirectX to Win2k and called it "WinXP". And it spent a whole decade widely hailed as the first ever home version of Windows to not be horrible.

So yea, I don't care how non-critical you think your software is. If it's worth using, then it's important enough.

> And on and on. It's unbelievable. The conventional wisdom in software for how to deal with programming bugs simply does not exist.

In my observation, there doesn't seem to be much conventional wisdom in software in general. Everything, no matter how basic or seemingly obvious, is up for big, major debate. (Actually, not even restricted to programming.)


[From your comment in that thread]
> fill up your system disk to near capacity, then try to run various apps and system utilities.

I've had that happen by accident once or twice recently. KDE does NOT handle it well: *Everything* immediately either hangs or dies as soon as it gains focus. Well, I guess it could be worse, but it still really irks me: "Seriously, KDE? You can't even DO NOTHING without trying to write to the disk? And you, you other app specifically designed for dealing with large numbers of large files, why in the world would you attempt to write GB+ files without ever checking available space?"

Seriously, nothing in tech ever improves. Every step forward comes with a badly-rationalized step back. Things just get shuffled around, rubble gets bounced, trends get obsessively chased in circles, and ultimately there's little, if any, overall progress. "What Andy giveth, Bill taketh away." Replace Andy/Bill with any one of thousands of different pairings, it still holds.

And there's no motivation for any of it to change. Capitalism rewards those who make more money by selling more flashy garbage that's bad enough to create more need for more garbage to deal with the flaws from the last round of garbage. It doesn't reward those who make a better product that actually reduces need for more. Sometimes something decent will come along, and briefly succeed by virtue of being good. But it's temporary and inevitably gets killed off by the next positive feedback loop of inferiority. Incompetence DRIVES capitalism; competence threatens it.
September 01, 2018
On 08/31/2018 05:09 PM, H. S. Teoh wrote:
> 
> It's precisely for this reason that the title "software engineer" makes
> me cringe on the one hand, and snicker on the other hand.  I honestly
> cannot keep a straight face when using the word "engineering" to
> describe what a typical programmer does in the industry these days.
> 

Science is the reverse-engineering of reality to understand how it works. Engineering is the practical application of scientific knowledge.

I don't know, maybe those are simplified, naive definitions. But I've long been of the opinion that programming is engineering...*if* and only if...you're doing it right.

Of course, my background is primarily from programming itself, not from an existing engineering field, so I certainly won't claim that what I do necessarily qualifies as "engineering", but it is something I try to aspire to, FWIW.
September 01, 2018
On 09/01/2018 05:06 PM, Ola Fosheim Grøstad wrote:
> 
> If you have a specific context (like banking) then you can develop a software method that specifies how to build banking software, and repeat it, assuming that the banks you develop the method for are similar
> 
> Of course, banking has changed quite a lot over the past 15 years (online + mobile). Software often operates in contexts that are critically different and that change in somewhat unpredictable manners.
> 

Speaking of, that always really gets me:

The average ATM is 24/7. Sure, there may be some downtime, but what, how much? For the most part, these things were more or less reliable decades ago, from a time with *considerably* less of the "best practices" and accumulated experience, know-how, and tooling we have today. And over the years, they still don't seem to have screwed ATMs up too badly.

But contrast that to my bank's phone "app": This thing *is* rooted firmly in modern technology, modern experience, modern collective knowledge, modern hardware and...The servers it relies on *regularly* go down for several hours at a time during the night. That's been going on for the entire 2.5 years I've been using it.

And for about an hour the other day, despite using the latest update, most of the buttons on the main page were *completely* unresponsive. Zero acknowledgement of presses whatsoever. But I could tell the app wasn't frozen: The custom-designed text entry boxes still handled focus events just fine.

Tech from 1970's: Still working fine. Tech from 2010's: Pfffbbttt!!!

Clearly something's gone horribly, horribly wrong with modern software development.
September 01, 2018
On Saturday, September 1, 2018 9:18:17 PM MDT Nick Sabalausky (Abscissa) via Digitalmars-d wrote:
> On 08/31/2018 03:50 PM, Walter Bright wrote:
> [From your comment in that thread]
>
> > fill up your system disk to near capacity, then try to run various apps and system utilities.
>
> I've had that happen by accident once or twice recently. KDE does NOT handle it well: *Everything* immediately either hangs or dies as soon as it gains focus. Well, I guess it could be worse, but it still really irks me: "Seriously, KDE? You can't even DO NOTHING without trying to write to the disk? And you, you other app specifically designed for dealing with large numbers of large files, why in the world would you attempt to write GB+ files without ever checking available space?"

I suspect that if KDE is choking, it's due to issues with files in /tmp, since they like to use temp files for stuff, and I _think_ that some of it is using unix sockets, in which case they're using the socket API to talk between components, and I wouldn't ever expect anyone to check disk space with that - though I _would_ expect them to check for failed commands and handle them appropriately, even if the best that they can do is close the program with a pop-up.

I think that what it ultimately comes down to though is that a lot of applications treat disk space like they treat memory. You don't usually check whether you have enough memory. At best, you check whether a particular memory allocation succeeded and then try to handle it sanely if it failed. With D, we usually outright kill the program if we fail to allocate memory - and really, if you're using std.stdio and std.file for all of your file operations, you'll probably get the same thing, since an exception would be thrown on write failure, and if you didn't catch it, then it will kill your program (though if you do catch it, what happens can obviously vary considerably). The C APIs on the other hand require that you check the return value, and some of the C++ APIs require the same. So, if you're not doing that right, you can quickly get your program into a weird state if functions that you expect to always succeed start failing.
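To make that concrete, here's a minimal sketch of the std.file side of it (saveReport and its signature are just illustrative):

import std.file : write, FileException;
import std.stdio : stderr;

void saveReport(string path, const(ubyte)[] data)
{
    try
    {
        // std.file.write throws a FileException if the write fails
        // (e.g. the disk is full) rather than returning an error code.
        write(path, data);
    }
    catch (FileException e)
    {
        // Uncaught, the exception unwinds out of main and kills the program;
        // this sketch just logs and rethrows.
        stderr.writeln("could not write ", path, ": ", e.msg);
        throw e;
    }
}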

So honestly, I don't find it at all surprising when an application can't handle not being able to write to disk. Ideally, it _would_ handle it (even if it's simply by shutting down, because it can't handle not having enough disk space), but for most applications, it really is thought of like running out of memory. So it isn't tested for, and no attempt is made to handle it sanely.

I would have hoped that something like KDE would have sorted this out by now, since it's been around long enough that more than one person will have run into the problem and complained about it; but given that it's a suite of applications developed in people's free time, it wouldn't surprise me at all if the response was just to get more disk space.

Honestly, for some of this stuff, I think that the only way that it's ever going to work sanely is if extreme failure conditions result in Errors or Exceptions being thrown, and the program being killed. Most code simply isn't ever going to be written to handle such situations, and for a _lot_ of programs, they really can't continue without those resources - which is presumably why D's GC throws an OutOfMemoryError when it can't allocate anything. Anything C-based (and plenty of C++-based programs too) is going to have serious problems though, thanks to the fact that C/C++ programs often use APIs where you have to check a return code, and if it's a function that never fails under normal conditions, most programs aren't going to check it. Even diligent programmers are bound to miss some of them.
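And the C-style contrast, sketched from D via core.stdc.stdio (the function and its name are mine, purely for illustration):

import core.stdc.stdio : FILE, fopen, fwrite, fclose;

bool saveReportC(const(char)* path, const(ubyte)[] data)
{
    FILE* f = fopen(path, "wb");
    if (f is null)
        return false;
    // fwrite's only error signal is its return value; skipping this check is
    // exactly how a full disk turns into silently truncated data.
    immutable ok = fwrite(data.ptr, 1, data.length, f) == data.length;
    return fclose(f) == 0 && ok;
}

Nothing forces the caller to look at that return value either, which is the whole problem.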

- Jonathan M Davis



September 02, 2018
On 08/31/2018 07:20 PM, H. S. Teoh wrote:
> 
> The problem is that there is a disconnect between academia and the
> industry.
> 
> The goal in academia is to produce new research, to find ground-breaking
> new theories that bring a lot of recognition and fame to the institution
> when published. It's the research that will bring in the grants and
> enable the institution to continue existing. As a result, there is heavy
> focus on the theoretical concepts, which are the basis for further
> research, rather than pragmatic tedium like how to debug a program.
> 

I don't know where you've been but it doesn't match anything I've ever seen.

Everything I've ever seen: The goal in academia is to advertise impressive-looking rates for graduation and job placement. This maximizes the size of the application pool which, depending on the school, means one of two things:

1. More students paying ungodly tuition rates. Thus making the schools and their administrators even richer. (Pretty much any public liberal arts school.)

or

2. Higher quality students (defined by the school as "more likely to graduate and more likely to be placed directly in a job"), thus earning the school the right to demand an even MORE ungodly tuition from the fixed-size pool of students they accept. Thus making the schools and their administrators even richer. (Pretty much any private liberal arts school.)

Achieving the coveted prize of "We look attractive to applicants" involves:

First: As much of a revolving-door system as they can get away with without jeopardizing their accreditation.

And secondly: Supplementing the basic Computer Science theory with awkward, stumbling, half-informed attempts at placating the industry's brain-dead, know-nothing HR monkeys[1] with the latest hot trends. For me, at the time, that meant Java 2 and the "Thou must OOP, for OOP is all" religion.

[1] "I don't know anything about programming, but I'm good at recognizing people who are good at it."  <-- A real quote from a real HR monkey I once made the mistake of attempting basic communication with.

But then, let's not forget that schools have HR, too. Which leads to really fun teachers like the professor I had for a Computer Networking class:

He had a PhD in Computer Science. He would openly admit that C was the only language he knew. Ok, fair enough so far. But...upon my explaining to him how he made a mistake grading my program, I found *myself* forced to teach the *Computer Science professor* how strings (remember...C...null-terminated) worked in the *only* language he knew. He had NO freaking clue! A freakin' CS PhD! Forget "theory vs practical" - if you do not know the *fundamental basics* of EVEN ONE language, then you *CANNOT* function in even the theoretical or research realms, or teach it. Computer science doesn't even *exist* without computer programming! Oh, and this, BTW, was a school that pretty much any Clevelander will tell you "Oh! Yea, that's a really good school, it has a fantastic reputation!" Compared to what? Ohio State Football University?
September 02, 2018
On 09/01/2018 01:51 AM, rikki cattermole wrote:
> 
> But in saying that, we had third year students starting out not understanding how cli arguments work so...
> 

How I wish that sort of thing surprised me ;)

As part of the generation that grew up with BASIC on 80's home computers, part of my spare time in high school involved some PalmOS development (man I miss PalmOS). Wasn't exactly anything special - you pony up a little $ for Watcom (or was it Borland?), open the IDE, follow the docs, do everything you normally do. Read a book. Yawn. After that, in college, had a job working on Palm and WAP websites (anyone remember those? Bonus points if you remember the Palm version - without WiFi or telephony it was an interesting semi-mobile experience).

Imagine my shock another year after that when I saw the college I was attending bragging how their computer science *graduate* students...with the help and guidance of a professor...had gotten a hello world "running on an actual Palm Pilot!" Wow! Can your grad students also tie their own shoes and wipe their own noses with nothing more than their own wits and somebody else to help them do it??? Because, gee golly, that would be one swell accomplishment! Wow! Hold your hat, Mr. Dean, because Ivy League, here you come!!
September 01, 2018
On Saturday, September 1, 2018 10:44:57 PM MDT Nick Sabalausky (Abscissa) via Digitalmars-d wrote:
> On 09/01/2018 01:51 AM, rikki cattermole wrote:
> > But in saying that, we had third year students starting out not understanding how cli arguments work so...
>
> How I wish that sort of thing surprised me ;)
>
> As part of the generation that grew up with BASIC on 80's home computers, part of my spare time in high school involved some PalmOS development (man I miss PalmOS). Wasn't exactly anything special - you pony up a little $ for Watcom (or was it Borland?), open the IDE, follow the docs, do everything you normally do. Read a book. Yawn. After that, in college, had a job working on Palm and WAP websites (anyone remember those? Bonus points if you remember the Palm version - without WiFi or telephony it was an interesting semi-mobile experience).
>
> Imagine my shock another year after that when I saw the college I was attending bragging how their computer science *graduate* students...with the help and guidance of a professor...had gotten a hello world "running on an actual Palm Pilot!" Wow! Can your grad students also tie their own shoes and wipe their own noses with nothing more than their own wits and somebody else to help them do it??? Because, gee golly, that would be one swell accomplishment! Wow! Hold your hat, Mr. Dean, because Ivy League, here you come!!

Ouch. Seriously, seriously ouch.

- Jonathan M Davis



September 02, 2018
On 09/01/2018 02:15 AM, Ola Fosheim Grøstad wrote:
> 
> The root cause of bad software is that many programmers don't even have an education in CS or software engineering, or didn't do a good job while getting it!
> 

Meh, no. The root cause trifecta is:

A. People not caring enough about their own craft to actually TRY to learn how to do it right.

B. HR people who know nothing about the domain they're hiring for.

C. Overall societal reliance on schooling systems that:

    - Know little about teaching and learning,

    - Even less about software development,

    - And can't even decide whether their priorities should be "pure theory *without* sufficient practical" or "catering to the above-mentioned HR folk's swing-and-miss, armchair-expert attempts at defining criteria for identifying good programmers" (Hint: The answer is "neither").
September 02, 2018
On 09/02/2018 12:53 AM, Jonathan M Davis wrote:
> 
> Ouch. Seriously, seriously ouch.
> 

Heh, yea, well...that particular one was a state party school, so, what y'gonna do? *shrug*

Smug as I may have been at the time, it wasn't until later I realized the REAL smart ones were the ones out partying, not the grads or the nerds like me. Ah, young hubris ;)  (Oh, the computer art students, BTW, were actually really fun to hang out with! I think they probably managed to hit the best balance of work & play.)