September 02, 2018
On 09/01/2018 03:47 PM, Everlast wrote:
> 
> It's because programming is done completely wrong. All we do is program like it's 1952, all wrapped up in a nice box and bow tie. We should have tools and a compiler design that all work interconnected, with complete graphical interfaces that aren't based in the text GUI world (an IDE is just a fancy text editor). I'm talking about 3D code representation using graphics, so projects can be navigated visually in a dynamic way, and many other things.
> 
> The current programming model is reaching diminishing returns. Programs cannot get much more complicated because the environment in which they are written cannot support them (complexity != size).
> 
> We have amazing tools available to do amazing things, but programming is still treated like punch cards, just on acid. I'd like to get totally away from punch cards.
> 
> A total rewrite of all aspects of programming should be done: from "object" files (no more; they are not needed, at least not in their current form), to the IDE (it should be more like a video game, in the sense of graphical use, and provide extensive information and debugging support a fingertip away), to the tools, to the design of applications, etc.
> 
> One day we will get there...
> 

GUI programming has been attempted a lot (see Scratch for one of the latest, and possibly most successful, attempts). But there are real, practical reasons it's never made significant inroads (yet).

There are really two main, but largely independent, aspects to what you're describing: Visual representation, and physical interface:

A. Visual representation:
-------------------------

By visual representation, I mean "some kind of text, or UML-ish diagrams, or 3D environment, etc".

What's important to keep in mind here is: The *fundamental concepts* involved in programming are inherently abstract, and thus equally applicable to whatever visual representation is used.

If you're going to make a diagram-based or VR-based programming tool, it will still be using the same fundamental concepts that are already established in text-based programming: Imperative loops, conditionals and variables. Functional/declarative immutability, purity and higher-order functions. Encapsulation. Pipelines (like ranges). Etc. And indeed, all GUI-based programming tools have worked this way. Because how *else* are they going to work?
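
For example (a quick D sketch, purely hypothetical and not specific to D): whether it's drawn as blocks in Scratch, boxes-and-arrows in a diagram, or nodes floating in VR, the thing being drawn is still one of these same underlying concepts:

import std.algorithm : filter, map, sum;
import std.range : iota;

void main()
{
    // Imperative: loop, conditional, mutable variable.
    int total1 = 0;
    foreach (i; 0 .. 10)
    {
        if (i % 2 == 0)
            total1 += i * i;
    }

    // Declarative pipeline: ranges, higher-order functions, no mutation
    // of intermediate state.
    auto total2 = iota(0, 10)
        .filter!(i => i % 2 == 0)
        .map!(i => i * i)
        .sum;

    assert(total1 == total2);
}

A graphical tool could render either version as a flow graph, but it hasn't replaced the loop or the pipeline; it has only drawn it differently.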

If what you're really looking for is something that replaces or transcends all of those existing, fundamental programming concepts, then what you're *really* looking for is a new fundamental programming concept, not a visual representation. And once you DO invent a new fundamental programming concept, being abstract, it will again be applicable to a variety of possible visual representations.

That said, it is true some concepts may be more readily amenable to certain visual representations than others. But, at least for all the currently-known concepts, any combination of concept and representation can certainly be made to work.

B. Physical interface:
----------------------

By this I mean both actual input devices (keyboards, controllers, pointing devices) and also the mappings from their affordances (ie, what you can do with them: push button x, tilt stick's axis Y, point, move, rotate...) to specific actions taken on the visual representation (navigate, modify, etc.)

The mappings, of course, tend to be highly dependent on the visual representation (although, theoretically, they don't strictly HAVE to be). The devices themselves, less so: For example, many of us use a pointing device to help us navigate text. Meanwhile, 3D modelers/animators find it MUCH more efficient to deal with their 3D models and environments by including heavy use of the keyboard in their workflow instead of *just* a mouse and/or Wacom alone.

An important point here is that using a keyboard has a tendency to be much more efficient for a much wider range of interactions than, say, a pointing device, like a mouse or touchscreen. There are some things a mouse or touchscreen is better at (ie, pointing and learning curve), but even on a touchscreen, pointing takes more time than pushing a button and is somewhat less composable with additional actions than, again, pushing/holding a key on a keyboard.

This means that while pointing, and indeed, direct manipulation in general, can be very beneficial in an interface, placing too much reliance on it will actually make the user LESS productive.

The result:
-----------

For programming to transcend the current text/language model, *without* harming either productivity or programming power (as all attempts so far have done), we will first need to invent entirely new high-level concepts which are simultaneously both simple/high-level enough AND powerful enough to obsolete most of the nitty-gritty lower-level concepts we programmers still need to deal with on a regular basis.

And once we do that, those new super-programming concepts (being the abstract concepts that they inherently are) will still be independent of visual representation. They might finally be sufficiently powerful AND simple that they *CAN* be used productively with graphical non-text-language representation...but they still will not *require* such a graphical representation.

That's why programming is still "stuck" in last century's text-based model: because it isn't actually stuck. It still has significant, deal-winning benefits over newer developments. And that's because, even when "newer" does provide improvements, newer still isn't *inherently* superior on *all* counts. That's a fact of life that is easily, and frequently, forgotten in fast-moving domains.
September 02, 2018
On Sunday, 2 September 2018 at 04:59:49 UTC, Nick Sabalausky (Abscissa) wrote:
> A. People not caring enough about their own craft to actually TRY to learn how to do it right.

Well, that is an issue: many students enroll in programming courses not because they take pride in writing good programs, but because they think that working with computers would somehow be an attractive career path.

Still, my impression is that students that write good programs also seem to be good at theory.

> B. HR people who know nothing about the domain they're hiring for.

Well, I think that goes beyond HR people. Lead programmers in small businesses who either don't have an education or didn't do too well will also feel that someone who does know what they are doing is a threat to their position. Another issue is that management does not want to hire people who they think will get bored with their "boring" software projects... so they would rather hire someone less apt who will not quit the job after 6 months...

So there are a lot of dysfunctional aspects at the very foundation of software development processes in many real world businesses.

I wouldn't expect anything great to come out of this... I also suspect that many managers don't truly understand that one good programmer can replace several bad ones...

> C. Overall societal reliance on schooling systems that:
>
>     - Know little about teaching and learning,
>
>     - Even less about software development,

Not sure what you mean by this. In many universities you can sign up for the courses you are interested in. It is really up to the student to figure out what their profile should be.

Anyway, since there are many methodologies, you will have to train your own team in your specific setup. With a well rounded education a good student should have the knowledge that will let them participate in discussions about how to structure the work.

So there is really no way for any university to teach you exactly what the process should be like.

This is no different from other fields. Take a sawmill: there are many ways to structure the manufacturing process in a sawmill. Hopefully people with an education are able to grok the process and participate in discussions about how to improve it, but the specifics depend on the concrete sawmill production line.



September 02, 2018
On 9/1/2018 11:42 PM, Nick Sabalausky (Abscissa) wrote:
> On 09/01/2018 05:06 PM, Ola Fosheim Grøstad wrote:
>>
>> If you have a specific context (like banking) then you can develop a
>> software method that specifies how to build banking software, and
>> repeat it, assuming that the banks you develop the method for are similar.
>>
>> Of course, banking has changed quite a lot over the past 15 years (online + mobile). Software often operates in contexts that are critically different and that change in somewhat unpredictable manners.
>>
> 
> Speaking of, that always really gets me:
> 
> The average ATM is 24/7. Sure, there may be some downtime, but what, how much? For the most part, these things were more or less reliable decades ago, from a time with *considerably* less of the "best practices" and accumulated experience, know-how, and tooling we have today. And over the years, they still don't seem to have screwed ATMs up too badly.
> 
> But contrast that to my bank's phone "app": This thing *is* rooted firmly in modern technology, modern experience, modern collective knowledge, modern hardware and...The servers it relies on *regularly* go down for several hours at a time during the night. That's been going on for the entire 2.5 years I've been using it.
> 
> And for about an hour the other day, despite using the latest update, most of the buttons on the main page were *completely* unresponsive. Zero acknowledgement of presses whatsoever. But I could tell the app wasn't frozen: The custom-designed text entry boxes still handled focus events just fine.
> 
> Tech from 1970's: Still working fine. Tech from 2010's: Pfffbbttt!!!
> 
> Clearly something's gone horribly, horribly wrong with modern software development.

I wouldn't vouch for ATM reliability. You would be surprised what kinds of garbage software they run. Think Windows XP as the OS:

http://info.rippleshot.com/blog/windows-xp-still-running-95-percent-atms-world

But in general, I believe the statement about the comparative reliability of tech from the 1970s is true. I'm perpetually impressed by all the mainframe software that often runs mission-critical operations in places you would least expect.

Telecom systems are generally very reliable, although it feels like that has started to change recently.

September 02, 2018
On 9/1/2018 8:18 PM, Nick Sabalausky (Abscissa) wrote:
> [...]

My take on all this is people spend 5 minutes thinking about it and are confident they know it all.

A few years back some hacker claimed they'd gotten into the Boeing flight control computers via the passenger entertainment system. I don't know the disposition of this case, but if true, such coupling of systems is a gigantic no-no. Some engineers would have some serious 'splainin to do.
September 02, 2018
On 09/02/2018 07:17 PM, Gambler wrote:
> 
> But in general, I believe the statement about the comparative reliability
> of tech from the 1970s is true. I'm perpetually impressed by all the
> mainframe software that often runs mission-critical operations in places
> you would least expect.

I suspect it may be because, up until around the 90's, in order to get any code successfully running on the computer at all, you pretty much had to know at least a thing or two about how a computer works and how to use it. And performance/efficiency issues were REALLY obvious. Not to mention the institutional barriers to entry: Everyone didn't just have a computer in their pocket, or even in their den at home.

(Plus the machines themselves tended to be simpler: It's easier to write good code when a single programmer can fully understand every byte of the machine and their code is all that's running.)

In the 50's/60's in particular, I imagine a much larger percentage of programmers probably had either some formal engineering background or something equally strong.

But now, pretty much anyone can (and often will) cobble together something that more-or-less runs. Ie, there used to be a stronger barrier to entry, and the machines/tools tended to be less tolerant of problems.
September 02, 2018
On 09/02/2018 09:20 PM, Walter Bright wrote:
> On 9/1/2018 8:18 PM, Nick Sabalausky (Abscissa) wrote:
>> [...]
> 
> My take on all this is people spend 5 minutes thinking about it and are confident they know it all.

Wouldn't it be nice if we COULD do that? :)

> A few years back some hacker claimed they'd gotten into the Boeing flight control computers via the passenger entertainment system. I don't know the disposition of this case, but if true, such coupling of systems is a gigantic no-no. Some engineers would have some serious 'splainin to do.

Wonder if it could've just been a honeypot. (Or just someone who was full-of-it.) Although, I'm not sure how much point there would be to a honeypot if the systems really were electronically isolated.
September 03, 2018
On Saturday, 1 September 2018 at 13:21:27 UTC, Jonathan M Davis wrote:
> On Saturday, September 1, 2018 6:37:13 AM MDT tide via Digitalmars-d wrote:
>> On Saturday, 1 September 2018 at 08:18:03 UTC, Walter Bright wrote:
>> > On 8/31/2018 7:28 PM, tide wrote:
>> >> I'm just wondering, but how would you code an assert to ensure the variable for a title bar is the correct color? Just how many asserts are you going to have in your real-time game that can be expected to run at 144+ fps?
>> >
>> > Experience will guide you on where to put the asserts.
>> >
>> > But really, just apply common sense. It's not just for software. If you're a physicist, and your calculations come up with a negative mass, you screwed up. If you're a mechanical engineer, and calculate a force of a billion pounds from dropping a piano, you screwed up. If you're an accountant, and calculate that you owe a million dollars in taxes on a thousand dollars of income, you screwed up. If you build a diagnostic X-ray machine, and the control software computes a lethal dose to administer, you screwed up.
>> >
>> > Apply common sense and assert on unreasonable results, because your code is broken.
>>
>> That's what he, and apparently you, don't get. How are you going to use an assert to check that the color of a title bar is valid? Try and implement that assert, and let me know what you come up with.
>
> I don't think that H. S. Teoh's point was so much that you should be asserting anything about the colors in the graphics, but rather that problems in the graphics could be a sign of a deeper, more critical problem, and that as such the fact that there are graphical glitches is not necessarily innocuous. However, presumably, if you're going to put assertions in that code, you'd assert things about the actual logic that seems critical and not anything about the colors or whatnot - though if the graphical problems were a sign of a deeper problem, then the assertions could prevent the graphical problems, since the program would be killed before they happened, due to the assertions about the core logic failing.
>
> - Jonathan M Davis

Any graphics problems are probably going to stem more from shaders and interaction with the GPU than from any sort of logic code. Not that you can really use asserts to ensure you are making calls to something like Vulkan correctly. There are validation layers for that, which are more helpful than assert would ever be. They still have a cost, though: as an example, my engine runs at 60+ FPS on my crappy phone without the validation layers, but with them enabled I get roughly less than half that, 10-15 FPS, depending on where I'm looking. So using them in production code isn't exactly possible.
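
For what it's worth, the most an assert could express at that level is a bare validity check, like the rough sketch below (hypothetical names, nothing from a real engine) -- and it still says nothing about whether the title bar has the *correct* color, which was the point:

struct Color
{
    float r, g, b, a;
}

bool isValidColor(Color c)
{
    return c.r >= 0 && c.r <= 1
        && c.g >= 0 && c.g <= 1
        && c.b >= 0 && c.b <= 1
        && c.a >= 0 && c.a <= 1;
}

Color titleBarColor(float brightness)
{
    assert(brightness >= 0 && brightness <= 1, "brightness out of range");
    return Color(0.2f * brightness, 0.4f * brightness, 0.8f * brightness, 1.0f);
}

void main()
{
    auto c = titleBarColor(0.5f);
    assert(isValidColor(c)); // stripped entirely when compiled with -release
}

And both of those asserts vanish under -release anyway, which is the only reason checks like this can stay out of the frame budget at all -- nothing like the always-on cost of the validation layers.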


What he was talking about was basically that: he was saying how it could be used to identify possible memory corruption, which is completely absurd. That's just stretching its use case so thin.
September 02, 2018
On Mon, Sep 03, 2018 at 03:21:00AM +0000, tide via Digitalmars-d wrote: [...]
> Any graphics problems are probably going to stem more from shaders and interaction with the GPU than from any sort of logic code.
[...]
> What he was talking about was basically that: he was saying how it could be used to identify possible memory corruption, which is completely absurd.  That's just stretching its use case so thin.

You misquote me. I never said asserts could be used to *identify* memory corruption -- that's preposterous.  What I'm saying is that when an assert fails, it *may* be caused by memory corruption (among many other possibilities), and that is one of the reasons why it's a bad idea to keep going in spite of the assertion failure.

The reason I picked memory corruption is because it's a good illustration of how badly things can go wrong when code that is known to have programming bugs continues running unchecked.  When an assertion fails it basically means the program has a logic error, and what the programmer assumed the program would do is wrong.  Therefore, by definition, you cannot predict what the program will actually do -- and remote exploits via memory corruption are a good example of how your program can end up doing something completely different from what it was designed to do when you keep going in spite of logic errors.

Obviously, assertions aren't going to catch *all* memory corruptions, but given that an assertion failure *might* be caused by a memory corruption, why would anyone in their right mind want to allow the program to keep going?  We cannot catch *all* logic errors by assertions, but why would anyone want to deliberately ignore the logic errors that we *can* catch?
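
To make it concrete, here is a contrived sketch (hypothetical code, not from anyone's real program) of exactly the "keep going anyway" pattern I'm arguing against:

import core.exception : AssertError;
import std.stdio : writeln;

int balanceAfterWithdrawal(int balance, int amount)
{
    auto result = balance - amount;
    // The program's own logic says this must never happen.
    assert(result >= 0, "withdrawal drove the balance negative");
    return result;
}

void main()
{
    try
    {
        auto b = balanceAfterWithdrawal(100, 250);
        writeln("new balance: ", b);
    }
    catch (AssertError e)
    {
        // "Recovering" here means continuing with state the program's own
        // logic has already declared impossible.
        writeln("ignoring failed assertion: ", e.msg);
    }
    // ...and everything that runs from here on is running on false
    // assumptions.
}

The catch block "works", in the sense that the process survives, but that is precisely the problem.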


T

-- 
If creativity is stifled by rigid discipline, then it is not true creativity.
September 02, 2018
On Sun, Sep 02, 2018 at 09:33:36PM -0700, H. S. Teoh wrote: [...]
> The reason I picked memory corruption is because it's a good illustration of how badly things can go wrong when code that is known to have programming bugs continues running unchecked.
[...]

P.S. And memory corruption is also a good illustration of how a logic error in one part of the program can cause another completely unrelated part of the program to malfunction.  The corruption could have happened in your network stack, but it overwrites memory used by your GPU code. You cannot simply assume that just because the network module has nothing to do with the GPU module, that a GPU code assertion failure cannot be caused by a memory corruption in the network module. Therefore, you also cannot assume that an assertion in the GPU code can be safely ignored, because by definition, the program's logic is flawed, and so any assumptions you may have made about it may no longer be true, and blindly continuing to run the code means the possibility of actually executing a remote exploit instead of the GPU code you thought you were about to execute.

When the program logic is known to be flawed, by definition the program is in an invalid state with unknown (and unknowable -- because it implies that your assumptions were false) consequences.  The only safe recourse is to terminate the program to get out of that state and restart from a known safe state.  Anything less is unsafe, because being in an invalid state means you cannot predict what the program will do when you try to recover it.  Your state graph may look nothing like what you thought it should look like, so an action that you thought would bring the program into a known state may in fact bring it into a different, unknown state, which can exhibit any arbitrary behaviour. (This is why certain security holes are known as "arbitrary code execution": the attacker exploits a loophole in the program's state graph to do something the programmer never thought the program could do -- because the programmer's assumptions turned out to be wrong.)
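
A contrived sketch of how that cross-module effect can play out (hypothetical code, with the two "modules" deliberately sharing one struct so the effect of the bad write is deterministic for the sake of the example):

struct ProgramState
{
    int[4] packetCounts; // owned by the network module
    int frameIndex;      // owned by the GPU module; invariant: never negative
}

void brokenNetworkCode(ref ProgramState s)
{
    int* p = s.packetCounts.ptr;
    foreach (i; 0 .. 5)   // off-by-one: should be 0 .. 4
        p[i] = -1;        // p[4] lands on frameIndex in this layout
}

void gpuCode(ref ProgramState s)
{
    // The GPU module's own sanity check trips over someone else's bug.
    // It does not *identify* the corruption; it catches an impossible state.
    assert(s.frameIndex >= 0, "frameIndex is negative -- state is invalid");
}

void main()
{
    ProgramState s;
    s.frameIndex = 42;
    brokenNetworkCode(s);
    gpuCode(s);           // assertion fails: terminate, don't "recover"
}

The assertion in gpuCode knows nothing about networking; it simply refuses to keep running once the state it depends on has become impossible.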


T

-- 
This sentence is false.
September 03, 2018
On 09/03/2018 12:46 AM, H. S. Teoh wrote:
> On Sun, Sep 02, 2018 at 09:33:36PM -0700, H. S. Teoh wrote:
> [...]
>> The reason I picked memory corruption is because it's a good
>> illustration of how badly things can go wrong when code that is known to
>> have programming bugs continues running unchecked.
> [...]
> 
> P.S. And memory corruption is also a good illustration of how a logic
> error in one part of the program can cause another completely unrelated
> part of the program to malfunction.  The corruption could have happened
> in your network stack, but it overwrites memory used by your GPU code.
> You cannot simply assume that just because the network module has
> nothing to do with the GPU module, that a GPU code assertion failure
> cannot be caused by a memory corruption in the network module.
> Therefore, you also cannot assume that an assertion in the GPU code can
> be safely ignored, because by definition, the program's logic is flawed,
> and so any assumptions you may have made about it may no longer be true,
> and blindly continuing to run the code means the possibility of actually
> executing a remote exploit instead of the GPU code you thought you were
> about to execute.
> 

Isn't that assuming the parts aren't @safe? ;)
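
E.g., a quick hypothetical sketch: with the "network" part marked @safe, a raw-pointer out-of-bounds write like that won't even compile (indexing through a bare pointer isn't allowed in @safe code), and an off-by-one through the array itself gets stopped by the bounds check at the point of the bad write, instead of silently stomping on someone else's state:

@safe void networkCode(ref int[4] counts)
{
    // The raw-pointer version of the bug is rejected outright: indexing
    // through a bare pointer is not allowed in @safe code.
    foreach (i; 0 .. 5)   // the off-by-one bug survives, but...
        counts[i] = -1;   // ...the bounds check turns it into a thrown
                          // RangeError instead of silent corruption
}

void main()
{
    int[4] counts;
    networkCode(counts);  // terminates with a RangeError at the bad write
}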

> Anything less is unsafe, because being
> in an invalid state means you cannot predict what the program will do
> when you try to recover it.  Your state graph may look nothing like what
> you thought it should look like, so an action that you thought would
> bring the program into a known state may in fact bring it into a
> different, unknown state, which can exhibit any arbitrary behaviour.

You mean attempting to do things like, say, generate a stack trace or format/display the name of the Error class and a diagnostic message? ;)

Not to say it's all-or-nothing of course, but suppose it IS memory corruption and trying to continue WILL cause some bigger problem like arbitrary code execution. In that case, won't the standard Error class stuff still just trigger that bigger problem, anyway?