September 03, 2018
On Sunday, September 2, 2018 11:54:57 PM MDT Nick Sabalausky (Abscissa) via Digitalmars-d wrote:
> On 09/03/2018 12:46 AM, H. S. Teoh wrote:
> > Anything less is unsafe, because being
> > in an invalid state means you cannot predict what the program will do
> > when you try to recover it.  Your state graph may look nothing like what
> > you thought it should look like, so an action that you thought would
> > bring the program into a known state may in fact bring it into a
> > different, unknown state, which can exhibit any arbitrary behaviour.
>
> You mean attempting to do things, like, say, generate a stack trace or format/display the name of the Error class and a diagnostic message? ;)
>
> Not to say it's all-or-nothing of course, but suppose it IS memory corruption and trying to continue WILL cause some bigger problem like arbitrary code execution. In that case, won't the standard Error class stuff still just trigger that bigger problem, anyway?

Throwing an Error is a lot less likely to cause problems than actually trying to recover. However, personally, I'm increasingly of the opinion that the best thing to do would be to not have Errors but to kill the program at the point of failure. That way, you could get a coredump at the point of failure, with all of the state that goes with it, making it easier to debug, and it would be that much less likely to cause any more problems before the program actually exits.

You might still have it print an error message and stack trace before triggering an HLT or whatever, but I think that's the most I would have it do. And while doing that would still potentially open up problems, unless someone hijacked that specific piece of code, it would likely be fine, and it would _really_ help on systems that don't have coredumps enabled - not to mention that seeing it in the log could make bringing up the coredump in the debugger unnecessary in some cases. Regardless, getting a coredump at the point of failure would be far better IMHO than what we currently have with Errors.
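The fail-fast behavior described above is language-agnostic; here's a minimal sketch in Python (the `fatal_error` helper is hypothetical, not an existing API): log the message and a stack trace, then abort in place, so no unwinding or cleanup code runs in a possibly-corrupt process and a coredump (if enabled) captures the state at the exact point of failure.

```python
import os
import sys
import traceback

def fatal_error(msg):
    """Report an unrecoverable error, then kill the process in place.

    Nothing unwinds and no cleanup handlers run, so a coredump (if
    enabled) reflects the program state exactly where the invariant
    was violated.
    """
    sys.stderr.write("fatal: %s\n" % msg)
    traceback.print_stack(file=sys.stderr)
    sys.stderr.flush()
    os.abort()  # SIGABRT at the point of failure; never returns
```

By contrast, raising an exception would first unwind through arbitrary handlers and finalizers, executing code in a state the program was never designed to handle.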

- Jonathan M Davis



September 03, 2018
On Sunday, 2 September 2018 at 19:30:58 UTC, Nick Sabalausky (Abscissa) wrote:
> On 09/02/2018 05:43 AM, Joakim wrote:
>> Most will be out of business within a decade or two, as online learning takes their place.
>
> I kinda wish I could agree with that, but schools are too much of a sacred cow to be going anywhere anytime soon. And for that matter, the online ones still have to tackle many of the same challenges anyway, WRT successful and effective teaching.
>
> Really the only difference is "physical classroom vs no physical classroom". Well, that and maybe price, but the community colleges have had the unis well beat on price for a long time (they even manage to do a good job teaching certain things, depending on the instructor), and they still haven't made the unis budge. The best they've been able to do is establish themselves as a supplement to the unis, where people start out with some of their gen-ed classes at the (comparatively) cheap community colleges for the specific purpose of later transferring to a uni.

That's because what the current online efforts do is simply slap the in-class curricula online, whereas what really needs to be done is completely change what's taught, away from the incoherent mix of theory and Java that basically describes every degree (non-CS too), and how it's tested and certified. When that happens, the unis will collapse, because online learning will be so much better at a fraction of the cost.

As for sacred cows, the newspaper business, i.e. journalism, was one of them, but it's on death's door, as I pointed out in this forum years ago:

https://en.m.wikipedia.org/wiki/File:Naa_newspaper_ad_revenue.svg

There are a lot of sacred cows getting butchered by the internet; college will be one of the easier ones to get rid of.

On Sunday, 2 September 2018 at 21:07:20 UTC, Nick Sabalausky (Abscissa) wrote:
> On 09/01/2018 03:47 PM, Everlast wrote:
>> 
>> It's because programming is done completely wrong. All we do is program like it's 1952, all wrapped up in a nice box and bow tie. We should have tools and a compiler design that all work interconnected, with complete graphical interfaces that aren't based in the text gui world (an IDE is just a fancy text editor). I'm talking about 3D code representation using graphics, so projects can be navigated visually in a dynamic way, and many other things.
>
> There are really two main, but largely independent, aspects to what you're describing: Visual representation, and physical interface:
>
> A. Visual representation:
> -------------------------
>
> By visual representation, I mean "some kind of text, or UML-ish diagrams, or 3D environment, etc".
>
> What's important to keep in mind here is: The *fundamental concepts* involved in programming are inherently abstract, and thus equally applicable to whatever visual representation is used.
>
> If you're going to make a diagram-based or VR-based programming tool, it will still be using the same fundamental concepts that are already established in text-based programming: Imperative loops, conditionals and variables. Functional/declarative immutability, purity and higher-order funcs. Encapsulation. Pipelines (like ranges). Etc. And indeed, all GUI based programming tools have worked this way. Because how *else* are they going to work?
>
> If what you're really looking for is something that replaces or transcends all of those existing, fundamental programming concepts, then what you're *really* looking for is a new fundamental programming concept, not a visual representation. And once you DO invent a new fundamental programming concept, being abstract, it will again be applicable to a variety of possible visual representations.
>
> That said, it is true some concepts may be more readily amenable to certain visual representations than others. But, at least for all the currently-known concepts, any combination of concept and representation can certainly be made to work.
>
> B. Physical interface:
> ----------------------
>
> By this I mean both actual input devices (keyboards, controllers, pointing devices) and also the mappings from their affordances (ie, what you can do with them: push button x, tilt stick's axis Y, point, move, rotate...) to specific actions taken on the visual representation (navigate, modify, etc.)
>
> The mappings, of course, tend to be highly dependent on the visual representation (although, theoretically, they don't strictly HAVE to be). The devices themselves, less so: For example, many of us use a pointing device to help us navigate text. Meanwhile, 3D modelers/animators find it's MUCH more efficient to deal with their 3D models and environments by including heavy use of the keyboard in their workflow instead of *just* a mouse and/or Wacom alone.
>
> An important point here, is that using a keyboard has a tendency to be much more efficient for a much wider range of interactions than, say, a pointing device, like a mouse or touchscreen. There are some things a mouse or touchscreen is better at (ie, pointing and learning curve), but even on a touchscreen, pointing takes more time than pushing a button and is somewhat less composable with additional actions than, again, pushing/holding a key on a keyboard.
>
> This means that while pointing, and indeed, direct manipulation in general, can be very beneficial in an interface, placing too much reliance on it will actually make the user LESS productive.
>
> The result:
> -----------
>
> For programming to transcend the current text/language model, *without* harming either productivity or programming power (as all attempts so far have done), we will first need to invent entirely new high-level concepts which are simultaneously both simple/high-level enough AND powerful enough to obsolete most of the nitty-gritty lower-level concepts we programmers still need to deal with on a regular basis.
>
> And once we do that, those new super-programming concepts (being the abstract concepts that they inherently are) will still be independent of visual representation. They might finally be sufficiently powerful AND simple that they *CAN* be used productively with graphical non-text-language representation...but they still will not *require* such a graphical representation.
>
> That's why programming is still "stuck" in last century's text-based model: Because it's not actually stuck: It still has significant deal-winning benefits over newer developments. And that's because, even when "newer" does provide improvements, newer still isn't *inherently* superior on *all* counts. That's a fact of life that is easily, and frequently, forgotten in fast-moving domains.

Ironically, you're taking a way too theoretical approach to this. ;) Simply think of the basic advances a graphical debugger like the one in Visual Studio provides, and extend that out several levels.

For example, one visualization I was talking about on IRC a decade ago, and which I still haven't seen anybody doing (though I haven't really searched for it), is a high-level graphical visualization of the data flowing through a program. Just as dmd/ldc generate timing profile data for D functions by instrumenting the function call timings, you could instrument the function parameter data too (you're not using globals much, right? ;) ) and then record and save the data stream generated by some acceptance testing.

Then, you periodically run those automated acceptance tests and look at the data stream differences as a color-coded flow visualization through the functions, with the data that stays the same shown as green, and the data that changed between different versions of the software shown as red. Think of something like a buildbot console, but where you could zoom in on different colors till you see the actual data stream:

https://ci.chromium.org/p/chromium/g/main/console

You'd then verify that the data differences are what you intend - for example, if you refactored a function to change what parameters it accepts, the data differences for the same external user input may be valid - and either accept the new data stream as the baseline or make changes till it's okay. If you refactor your code a lot, you could waste time with a lot of useless churn, but that's the same problem unit or other tests have with refactoring.
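A toy sketch of the instrumentation half of this idea (every name here is hypothetical; this is not an existing tool): a decorator records each call's arguments as a stream, and a diff colors each event green or red against a baseline run.

```python
import functools
import json

TRACE = []  # recorded stream of (function name, serialized arguments)

def instrument(func):
    """Record each call's arguments, much as a profiler instruments
    call timings; a compiler could inject this automatically."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        TRACE.append((func.__name__,
                      json.dumps([args, kwargs], default=repr)))
        return func(*args, **kwargs)
    return wrapper

def diff_streams(baseline, current):
    """Color each event green (same data as the baseline run) or red
    (data changed), like a buildbot console you can zoom in on."""
    return [("green" if old == new else "red", new[0], new[1])
            for old, new in zip(baseline, current)]
```

Running the same acceptance input against two versions of the code and feeding the two recorded streams to `diff_streams` gives exactly the green/red view described above; the zoomable console UI is the part that would need real tooling.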

This high-level approach would benefit most average software much more than unit testing, as you usually don't care about individual components or hitting all their edge cases. That's why most software doesn't use unit tests in the first place.

Note that I'm not saying unit tests are not worthwhile, particularly for libraries, only that realistically it'd be easier to get programmers to use this high-level view I'm outlining than to write a ton of unit tests, and it's much less effort too. Ideally, you do both and they complement each other, along with integration testing and the rest.

In other words, we don't have to get rid of text representations altogether: there's a lot of scope for much better graphical visualizations of the textual data or code we're using now. You could do everything I just described by modifying the compiler to instrument your functions, dumping log files, and running them through diff, but that's the kind of low-level approach we're stuck with now. Ideally, we'd realize that _certain workflows are so common that we need good graphical interfaces for them_ and standardize on those, but I think Everlast's point is that's happening much slower than it should, which I agree with.

On Sunday, 2 September 2018 at 21:19:38 UTC, Ola Fosheim Grøstad wrote:
> So there are a lot of dysfunctional aspects at the very foundation of software development processes in many real world businesses.
>
> I wouldn't expect anything great to come out of this...

One of the root causes of that dysfunction is there's way too much software written. Open source has actually helped alleviate this, because instead of every embedded or server developer who needs an OS kernel convincing management that they should write their own, they now have a hard time justifying it when a free, OSS kernel like Linux is out there, which is why so many of those places use Linux now. Of course, you'd often like to modify the kernel, and Linux may not be stripped down or modular enough for some, but there are always other OSS kernels like Minix or Contiki for them.

Software is still in its early stages, like the automotive industry in the US when there were hundreds of car manufacturers, most of them producing as low-quality a product as software companies do now:

https://en.m.wikipedia.org/wiki/Automotive_industry_in_the_United_States

Rather than whittling down to three large manufacturers like the US car industry did, open source provides a way for thousands of software outfits, even individual devs, to work on commonly used code in the open, while still being able to specialize when needed, particularly with permissive licenses. So, much of that dysfunction will be solved by consolidation, but a completely different kind of consolidation than was done for cars, because software is completely different from physical goods.
September 03, 2018
On 03/09/2018 7:05 PM, Joakim wrote:
> One of the root causes of that dysfunction is there's way too much software written. Open source has actually helped alleviate this, because instead of every embedded or server developer who needs an OS kernel convincing management that they should write their own, they now have a hard time justifying it when a free, OSS kernel like Linux is out there, which is why so many of those places use Linux now. Of course, you'd often like to modify the kernel, and Linux may not be stripped down or modular enough for some, but there are always other OSS kernels like Minix or Contiki for them.

Yes, but 30 years ago it was actually realistic, and even best practice, to write your own OS as part of deploying e.g. a game. You can't say that now, even in the microcontroller space.
September 03, 2018
On Saturday, 1 September 2018 at 11:32:32 UTC, Jonathan M Davis wrote:
> I think that his point was more that it's sometimes argued that software engineering really isn't engineering in the classical sense. If you're talking about someone like a civil engineer for instance, the engineer applies well-known and established principles to everything they do in a disciplined way.

If they are asked to do so. In an attempt to be fancy, the sewage system in my apartment doesn't have a hydraulic seal, but has a workaround: one pipe is flexible. How physical is that?

> The engineering aspects of civil engineering aren't subjective at all. They're completely based in the physical sciences. Software engineering on the other hand isn't based on the physical sciences at all, and there really isn't general agreement on what good software engineering principles are.

As in science: ones based on previous experience.

> https://en.wikipedia.org/wiki/Software_engineering
> One of the core issues in software engineering is that its approaches are not empirical enough because a real-world validation of approaches is usually absent

That criticism isn't very well informed. Also, is the problem really in what it's called?

> Issues with management cause other problems on top of all of that, but even if you have a group of software engineers doing their absolute best to follow good software engineering principles without any kind of management interference, what they're doing is still very different from most engineering disciplines

Because hardware engineers want to pass certification. Have you never heard of what they do when they are not constrained by that? And even then, there's a lot of funny stuff that passes certification, like that x-ray machine and Intel's processors.

> and it likely wouldn't be hard for another group of competent software engineers to make solid arguments about why the good software engineering practices that they're following actually aren't all that good.

Anything created by humans has flaws and can be criticized.
September 03, 2018
On Saturday, 1 September 2018 at 20:48:27 UTC, Walter Bright wrote:
> On 9/1/2018 5:25 AM, tide wrote:
>> and that all bugs can be solved with asserts
>
> I never said that, not even close.

Aren't you largely implying it?

> But I will maintain that DVD players still hanging on a scratched DVD after 20 years of development means there's some cowboy engineering going on, and an obvious lack of concern about that from the manufacturer.

Yes, why wouldn't a company want to fix a "feature" whereby, if you have a scratch on a DVD, you have to go buy another one in order to play it? It's obviously not that big of a deal breaker, even for you, considering you are still buying them 20 years on.
September 03, 2018
I have to delete some quoted text to make this manageable.

On 9/2/2018 5:07 PM, Nick Sabalausky (Abscissa) wrote:
> [...]
> GUI programming has been attempted a lot. (See Scratch for one of the
> latest, possibly most successful attempts). But there are real,
> practical reasons it's never made significant in-roads (yet).
>
> There are really two main, but largely independent, aspects to what you're describing: Visual representation, and physical interface:
>
> A. Visual representation:
> -------------------------
>
> By visual representation, I mean "some kind of text, or UML-ish diagrams, or 3D environment, etc".
>
> What's important to keep in mind here is: The *fundamental concepts* involved in programming are inherently abstract, and thus equally applicable to whatever visual representation is used.

> If you're going to make a diagram-based or VR-based programming tool, it
> will still be using the same fundamental concepts that are already
> established in text-based programming: Imperative loops, conditionals
> and variables. Functional/declarative immutability, purity and
> higher-order funcs. Encapsulation. Pipelines (like ranges). Etc. And
> indeed, all GUI based programming tools have worked this way. Because
> how *else* are they going to work?
>
> If what you're really looking for is something that replaces or
> transcends all of those existing, fundamental programming concepts, then
> what you're *really* looking for is a new fundamental programming
> concept, not a visual representation. And once you DO invent a new
> fundamental programming concept, being abstract, it will again be
> applicable to a variety of possible visual representations.

Well, there are quite a few programming approaches that bypass the concepts you've listed. For example, production (rule-based) systems and agent-oriented programming. I've become interested in stuff like this recently, because it looks like a legitimate way out of the mess we're in. Among other things, I found this really interesting Ph.D. thesis about a system called LiveWorld:

http://alumni.media.mit.edu/~mt/thesis/mt-thesis.html

Interesting stuff. I believe it would work very well in VR, if visualized properly.

> That said, it is true some concepts may be more readily amenable to certain visual representations than others. But, at least for all the currently-known concepts, any combination of concept and representation can certainly be made to work.
>
> B. Physical interface:
> ----------------------
>
> By this I mean both actual input devices (keyboards, controllers,
> pointing devices) and also the mappings from their affordances (ie, what
> you can do with them: push button x, tilt stick's axis Y, point, move,
> rotate...) to specific actions taken on the visual representation
> (navigate, modify, etc.)
>
> The mappings, of course, tend to be highly dependent on the visual representation (although, theoretically, they don't strictly HAVE to be). The devices themselves, less so: For example, many of us use a pointing device to help us navigate text. Meanwhile, 3D modelers/animators find it's MUCH more efficient to deal with their 3D models and environments by including heavy use of the keyboard in their workflow instead of *just* a mouse and/or Wacom alone.

That depends on the editor design. Wings 3D (http://www.wings3d.com), for example, uses the mouse for most operations. It's done well, and it's much easier to get started with than something like Blender (which I personally hate). Designers use Wings 3D for serious work, and the interface doesn't seem to become a limitation even for advanced use cases.

> An important point here, is that using a keyboard has a tendency to be
> much more efficient for a much wider range of interactions than, say, a
> pointing device, like a mouse or touchscreen. There are some things a
> mouse or touchscreen is better at (ie, pointing and learning curve), but
> even on a touchscreen, pointing takes more time than pushing a button
> and is somewhat less composable with additional actions than, again,
> pushing/holding a key on a keyboard.
>
> This means that while pointing, and indeed, direct manipulation in
> general, can be very beneficial in an interface, placing too much
> reliance on it will actually make the user LESS productive.

I don't believe this is necessarily true. It's just that programmers and designers today are really bad at utilizing the mouse. Most of them aren't even aware of how the device came to be. They have no idea about NLS or Doug Engelbart's research.

http://www.dougengelbart.org/firsts/mouse.html

They've never looked at subsequent research by Xerox and Apple.

https://www.youtube.com/watch?v=Cn4vC80Pv6Q

That last video blew my mind when I saw it. Partly because it was the first time I realized that the five most common UI operations (cut, copy, paste, undo, redo) have no dedicated keys on the keyboard today, while similar operations on the Xerox Star did. Partly because I finally understood the underlying idea behind icon-based UIs and realized it's almost entirely forgotten now. It all ties together: icons represent objects; objects interact through messages; mouse and command keys allow you to direct those interactions.

There were other reasons too. That one video is a treasure-trove of forgotten good ideas.

> The result:
> -----------
>
> For programming to transcend the current text/language model, *without* harming either productivity or programming power (as all attempts so far have done), we will first need to invent entirely new high-level concepts which are simultaneously both simple/high-level enough AND powerful enough to obsolete most of the nitty-gritty lower-level concepts we programmers still need to deal with on a regular basis.

That's one other thing worth thinking about. Are we dealing with the right concepts in the first place? Most of my time as a programmer was spent integrating badly designed systems. Is that actually necessary? I don't think so. It's busy-work created by developers for developers. Maybe better tooling would free up all that time to deal with real-life problems.

There is this VR game called Fantastic Contraption. Its interface is light-years ahead of anything else I've seen in VR. The point of the game is to design animated 3D structures that solve the problem of traversing various obstacles while moving from point A to point B. Is that not "real" programming? You make a structure to interact with an environment to solve a problem.

> And once we do that, those new super-programming concepts (being the abstract concepts that they inherently are) will still be independent of visual representation. They might finally be sufficiently powerful AND simple that they *CAN* be used productively with graphical non-text-language representation...but they still will not *require* such a graphical representation.
>
> That's why programming is still "stuck" in last century's text-based model: Because it's not actually stuck: It still has significant deal-winning benefits over newer developments. And that's because, even when "newer" does provide improvements, newer still isn't *inherently* superior on *all* counts. That's a fact of life that is easily, and frequently, forgotten in fast-moving domains.

Have you seen any of Bret Victor's talks? He addresses a lot of these points.

https://vimeo.com/71278954
https://vimeo.com/64895205
https://vimeo.com/97903574

September 03, 2018
On 9/3/2018 1:55 PM, Gambler wrote:
> There is this VR game called Fantastic Contraption. Its interface is light-years ahead of anything else I've seen in VR. The point of the game is to design animated 3D structures that solve the problem of traversing various obstacles while moving from point A to point B. Is that not "real" programming? You make a structure to interact with an environment to solve a problem.

I posted this without any context, I guess. I brought this game (http://fantasticcontraption.com) up, because:

1. It's pretty close to "programming" something useful. Sort of virtual robotics, minus sensors.
2. The interface is intuitive, fluid, and fun to use.
3. While your designs may fail, they fail in a predictable manner, without breaking the world.
4. It's completely alien to all those horrible wire-diagram environments.

Seems like we can learn a lot from it when designing future programming environments.
September 03, 2018
On 9/3/2018 8:33 AM, tide wrote:
> Yes, why wouldn't a company want to fix a "feature" whereby, if you have a scratch on a DVD, you have to go buy another one in order to play it?

Not playing it with an appropriate message is fine. Hanging the machine is not.


> It's obviously not that big of a deal breaker, even for you, considering you are still buying them 20 years on.

Or more likely, I buy another one now and then that will hopefully behave better.

I've found that different DVD players will play different damaged DVDs, i.e. one that will play one DVD won't play another, and vice versa.

I can get DVD players from the thrift store for $5, it's cheaper than buying a replacement DVD :-)

September 03, 2018
On Saturday, 1 September 2018 at 11:36:52 UTC, Walter Bright wrote:
>
> I'm rather sad that I've never seen these ideas outside of the aerospace industry. Added to that is all the pushback on them I get here, on reddit, and on hackernews.
>

Just chiming in to say you're certainly not ignored; there's an article about it on d-idioms: https://p0nce.github.io/d-idioms/#Unrecoverable-vs-recoverable-errors

I tend to follow your advice and leave some assertions in production, even for a B2C product. I was surprised to see it works well, as it's a lot better than having no bug reports and staying with silent failures in the wild. None of that would happen with "recovered" bugs.
September 04, 2018
On Sunday, 2 September 2018 at 21:07:20 UTC, Nick Sabalausky (Abscissa) wrote:
> GUI programming has been attempted a lot. (See Scratch for one of the latest, possibly most successful attempts). But there are real, practical reasons it's never made significant in-roads (yet).
>
> There are really two main, but largely independent, aspects to what you're describing: Visual representation, and physical interface:
>
> A. Visual representation:
> -------------------------
>
> By visual representation, I mean "some kind of text, or UML-ish diagrams, or 3D environment, etc".
>
> What's important to keep in mind here is: The *fundamental concepts* involved in programming are inherently abstract, and thus equally applicable to whatever visual representation is used.
>
> If you're going to make a diagram-based or VR-based programming tool, it will still be using the same fundamental concepts that are already established in text-based programming: Imperative loops, conditionals and variables. Functional/declarative immutability, purity and high-order funcs. Encapsulation. Pipelines (like ranges). Etc. And indeed, all GUI based programming tools have worked this way. Because how *else* are they going to work?

They say the main difficulty for non-programmers is control flow, not the type system; one system was reported usable where control flow was represented visually, but sequential statements were left as plain C. E.g. we have a system administrator here who has no problem with PowerShell, but has absolutely no idea how to start with C#.

> B. Physical interface:
> ----------------------
>
> By this I mean both actual input devices (keyboards, controllers, pointing devices) and also the mappings from their affordances (ie, what you can do with them: push button x, tilt stick's axis Y, point, move, rotate...) to specific actions taken on the visual representation (navigate, modify, etc.)

Hardware engineers are like the primary target audience for visual programming :)
https://en.wikipedia.org/wiki/Labview