June 25, 2022

On Saturday, 25 June 2022 at 00:24:08 UTC, forkit wrote:

> (2) Philosophy and Programming/Programming Languages are deeply 'entangled', whether you have the capacity to see to that level of detail or not ;-)
>
> (3) The statement in item (2) above is demonstrated throughout the academic literature on computing. It's not something I just made up ;-)

There is a lot of truth to that. The reason being, of course, that the users of the software are more important than the programmers… so you need to extract what users' needs are, communicate about it and turn it into something that is implementable (by whatever means). And you invariably end up focusing on objects, processes and communication.

When I went to uni we were trained to do relational modelling with NIAM, also known as… Object-Role Modelling! Yes, you read that right, we deal with objects when modelling tables.

To quote that wiki page: «An object-role model can be automatically mapped to relational and deductive databases (such as datalog).»

It can of course also be translated into an OOA model. One key difference is that OOA "clusters attributes", while Object-Role Modelling is "free" of attributes and focuses on relations.
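To make the contrast concrete, here is a minimal made-up D sketch - the class on top clusters attributes the OOA way, the structs below state the same facts as attribute-free relations:

```d
// Hypothetical example, just to illustrate the difference.

// OOA style: attributes clustered on the object.
class Person
{
    string name;
    string employer; // the "works for" fact folded in as an attribute
}

// Object-Role style: each fact is its own relation, no clustering.
struct HasName  { string person; string name; }
struct WorksFor { string person; string company; }
```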

June 25, 2022
On Saturday, 25 June 2022 at 18:05:31 UTC, Paul Backus wrote:
>
> ...
> Whether tables or objects are a better way of organizing data is a decades-old debate that I have no intention of wading into here. Regardless of which you prefer, you must admit that both tables and objects have a long history of successful use in real-world software.

When people see something as challenging their belief, they do tend to dig in, and turn it into a lonnnnggg debate ;-)

But really, OO decomposition is just a tool. It's not an ideology (although many throughout computing history have pushed it as such).

It's just a tool. That is all it is. Nothing more.

It's a tool you should have the option of using, when you think it's needed.

A screwdriver makes for a lousy hammer. Just pick the right tool for the job.

If you're trying to model a virtual city, you'll almost certainly use object decomposition. I mean, it makes complete sense that you would. Of course, you could model it using logic chips - but why would you?

On the other hand, if you're writing a linker, it doesn't seem like OO decomposition would have any value whatsoever.

People need to be more pragmatic about this. Programming paradigms are just tools. They should not be used as the basis for conducting ideological warfare against each other ;-)

June 25, 2022
On Saturday, 25 June 2022 at 20:06:45 UTC, Ola Fosheim Grøstad wrote:
>
> ....
> There is a lot of truth to that. The reason being, of course, that the users of the software are more important than the programmers… so you need to extract what users' needs are, communicate about it and turn it into something that is implementable (by whatever means). And you invariably end up focusing on objects, processes and communication.
>

Actually the concept of programming is really, really simple:

(1) You have objects (big and quantum size, they're all objects!).
(2) You have interactions between objects (no object is an island)
(3) You have processes whereby those interactions come about.
(4) You have emergent behaviour (side effects if you will) - the program itself.

The bigger the object, the more difficult it becomes to model it using quantum physics. The easier it becomes to understand the interactions, because they're more visible (encapsulated, if you will). The easier it becomes to identify the processes by which those interactions come about, because they're more visible. And the easier it becomes to model what the emergent behaviour looks like, because it too is more visible.

On the other hand, the smaller the object, the harder it becomes to model it using the same object decomposition used with larger objects, the harder it becomes to understand the interactions, the harder it becomes to identify the processes by which those interactions come about, and the harder it becomes to model what the emergent behaviour looks like.

The smaller the object gets, the less chance you have of understanding item (1), let alone items (2), (3), and (4).

In the end, you end up with something like the Linux kernel!

It just works. But nobody really knows why.

June 26, 2022

On Saturday, 25 June 2022 at 23:11:18 UTC, forkit wrote:

> The bigger the object, the more difficult it becomes to model it using quantum physics. The easier it becomes to understand the interactions, because they're more visible (encapsulated, if you will). The easier it becomes to identify the processes by which those interactions come about, because they're more visible. And the easier it becomes to model what the emergent behaviour looks like, because it too is more visible.

D is trying to position itself as a language where you start with a prototype and evolve it into a product. So you should ideally be able to start by composing generic (flexible but inefficient) components that give you an approximation of the end product, yet one representative enough for the end user/developer to form a judgment about the final product.

Is D there yet? Probably not. It has ranges in its library and not a lot more.
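To be fair, composing those generic pieces already looks roughly like this - a rough prototype-style sketch using only std.range and std.algorithm, nothing invented:

```d
import std.algorithm : filter, map;
import std.range : iota, take;
import std.stdio : writeln;

void main()
{
    // Lazy, flexible, not tuned for speed - good enough for a prototype.
    auto pipeline = iota(1, 1_000)
        .filter!(n => n % 3 == 0)   // keep multiples of 3
        .map!(n => n * n)           // square them
        .take(5);                   // only need a preview

    writeln(pipeline);              // [9, 36, 81, 144, 225]
}
```

Each stage can later be swapped for something hand-tuned without touching the rest of the pipeline.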

What is the core strategy for problem solving? Divide and conquer. Start with the big object (what you want the user to see) and then divide (break it down) until you end up with pieces for which you have ready-mades to build an approximation.

The D ecosystem lacks those ready-mades. That is OK for now, but there are two questions:

  1. Can the D abstraction mechanisms provide ready-mades that are easily configurable?

  2. Is it easy to replace those ready-mades with more performant structures?

Can we say yes? Can we say no? Or maybe we just don't know.
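As a rough sketch of what a "yes" to both questions might look like (the names are made up, this is not an existing library): the generic code only asks for a small compile-time interface, so the flexible ready-made can later be swapped for a tuned structure without changing the callers.

```d
import std.range : isInputRange;

// Works with anything whose items() returns an input range of ints.
int total(Store)(ref Store store)
    if (isInputRange!(typeof(Store.init.items())))
{
    int sum = 0;
    foreach (x; store.items())
        sum += x;
    return sum;
}

// Prototype ready-made: simple and flexible, not fast.
struct ArrayStore
{
    int[] data;
    void put(int x) { data ~= x; }
    int[] items() { return data; }
}

unittest
{
    ArrayStore s;
    s.put(2); s.put(3);
    assert(total(s) == 5);
}

// A later, more performant replacement only has to keep the same
// compile-time interface; total() above does not change.
```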

What is needed is a methodology with patterns. Only when we collect the experience of developers using the methodology and trying to apply those patterns can we judge with absolute clarity what D is lacking.

What we have now is people with different unspoken ideas about development methodology, based on personal experience and what they read on blogs. There is no «philosophy» for systems development that can drive the language evolution towards something coherent. As such, language evolution is driven by a process of replicating other languages, taking bits and pieces with no unifying thought behind it.

Ideally a language would be designed to support a methodology geared to a specific set of use cases. Then you can be innovative and evaluate the innovations objectively. With no such methodology to back up the language design, you end up randomly accruing features that are modified/distorted replications of features from other languages, and it is difficult to evaluate whether new features support better development processes or whether they create «noise» and «issues». It is also difficult to evaluate when your feature set is complete.

> On the other hand, the smaller the object, the harder it becomes to model it using the same object decomposition used with larger objects, the harder it becomes to understand the interactions, the harder it becomes to identify the processes by which those interactions come about, and the harder it becomes to model what the emergent behaviour looks like.
>
> The smaller the object gets, the less chance you have of understanding item (1), let alone items (2), (3), and (4).
>
> In the end, you end up with something like the Linux kernel!
>
> It just works. But nobody really knows why.

Software development is basically an iterative process where you go back and forth between top-down and bottom-up analysis/development/programming. You have to go top-down to find out what you need and can deliver, then you need to go bottom-up to meet those needs. Then back to top-down, and so on… iteration after iteration.

So you need to both work on the big «objects» and the small «objects» at the same time (or rather in an interleaving pattern).

Linux is kinda different. There was an existing well documented role model (Unix) with lots of educational material, so you could easily anticipate what the big and the small objects would be. That is not typical. There is usually not a need for a replica of something else (software development is too expensive for that). The only reason for there being a market for Linux was that there were no easily available free open source operating systems (Minix was open source, but not free).

Interestingly, Unix is a prime example of reducing complexity by dividing the infrastructure into objects with «coherent» interfaces (not really, but they tried). They didn't model the real world, but they grabbed a conceptualisation that is easily understood by programmers: file objects. So they basically went with: let's make everything a file object (screen, keyboard, mouse, everything).
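A tiny D illustration of that uniformity (POSIX only; the device path is just an example): a character device is read through the exact same File interface as an ordinary file.

```d
import std.stdio : File, writeln;

void main()
{
    // /dev/urandom is a device, yet it reads like any ordinary file.
    auto dev = File("/dev/urandom", "rb");
    ubyte[16] buf;
    auto got = dev.rawRead(buf[]);
    writeln(got.length, " bytes read through the plain file interface");
}
```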

Of course, the ideal for operating system design is the microkernel approach. What is that about? Break up everything into small encapsulated objects with limited responsibilities that can be independently rebooted. Basically OO/actors.
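The flavour of that, scaled down to a toy sketch with std.concurrency (this is just the actor style, not an actual microkernel): small units that own their state and interact only through messages.

```d
import std.concurrency : receive, receiveOnly, send, spawn, thisTid, Tid;
import std.stdio : writeln;

void echoService()
{
    // The "service" exposes nothing but its mailbox.
    receive(
        (Tid caller, string msg) { send(caller, "echo: " ~ msg); }
    );
}

void main()
{
    auto service = spawn(&echoService);
    send(service, thisTid, "hello");
    writeln(receiveOnly!string()); // "echo: hello"
}
```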

(Linux has also moved towards encapsulation in smaller less privileged units as the project grew.)

June 27, 2022
On Sunday, 26 June 2022 at 07:37:01 UTC, Ola Fosheim Grøstad wrote:
>
> D is trying to position itself as a language where you start with a prototype and evolve it into a product.

D, by design, defaults to 'flexibility' (as Andrei Alexandrescu says in his book, The D Programming Language).

I don't think it's unreasonable for me to assert that flexibility does not exactly encourage 'structured design'.

But for a language where you want to 'just write code' (which seems to be what most D users want to do), D's defaults make complete sense.

2 examples:

If @safe were default, it would make it harder for people to 'just write code'.

If private were private to the class, instead of private to the module, it would make it harder for people to 'just write code'.
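A minimal sketch of the second example (the module and names are made up) - in D, private means private to the module, so this compiles:

```d
module bank; // hypothetical module

class Account
{
    private int balance; // private to the module, not to the class
}

void sneakyAdjust(Account a)
{
    a.balance += 100; // fine: same module, so the access is allowed
}
```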

I'm not against the defaults, necessarily. Valid arguments can be made from different perspectives.

I much prefer to focus on advocating for choice, rather than focusing on advocating for defaults.

But in the year 2022, these defaults don't make sense any more - unless, as stated, your aim is 'to just write code'.

I think this is what D is grappling with at the moment.

To do structured design in D, you have to make a conscious 'effort' not to accept the defaults.

btw. Here's a great talk on 'A philosophy of software design', by John Ousterhout, Professor of Computer Science at Stanford University.

The talk is more or less built around the question he asks the audience at the start.

Having not studied computer science (I did psychology), I was surprised when he mentioned 'we just don't teach this' :-(

https://www.youtube.com/watch?v=bmSAYlu0NcY

June 27, 2022
On Monday, 27 June 2022 at 01:35:59 UTC, forkit wrote:
>
> If @safe were default, it would make it harder for people to 'just write code'.
>

I don't think it would necessarily be harder; you'd just have to get used to a different starting point, and it should be fairly smooth.

Just like today: if you start the first module in your project with @safe:, then you essentially won't face many issues initially.

Most issues will stem from things that aren't finished in D, but not from the general gist of safeD.
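Something like this (module and function names made up) - one attribute at the top and everything below it is checked:

```d
module app; // hypothetical first module of a project

@safe: // everything below this point is checked as @safe

import std.stdio : writeln;

int sum(const int[] xs)
{
    int total;
    foreach (x; xs)
        total += x; // ordinary code compiles unchanged
    return total;
}

void main()
{
    writeln(sum([1, 2, 3])); // 6
    // Raw pointer arithmetic or unsafe casts would now be compile errors.
}
```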
June 27, 2022

On Monday, 27 June 2022 at 01:35:59 UTC, forkit wrote:

> I don't think it's unreasonable for me to assert that flexibility does not exactly encourage 'structured design'.

Well, max implementation flexibility is essentially machine language. The 68000 instruction set is surprisingly pleasant. You can invent your own mechanisms that can make some tasks easier… but that are a nightmare for others to read.

The more flexibility, the more chaos programmers will produce. You see this in JavaScript; TypeScript code tends to be much more structured. You see it in D code bases that allow string mixins too.
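For example, a tiny (made-up) string mixin - the reader has to expand the string in their head to see what actually ends up inside the struct:

```d
import std.stdio : writeln;

// Builds D source code as a plain string.
string makeGetter(string field)
{
    return "int get_" ~ field ~ "() { return " ~ field ~ "; }";
}

struct Point
{
    int x, y;
    mixin(makeGetter("x")); // injects: int get_x() { return x; }
    mixin(makeGetter("y")); // injects: int get_y() { return y; }
}

void main()
{
    writeln(Point(3, 4).get_x()); // 3
}
```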

> To do structured design in D, you have to make a conscious 'effort' not to accept the defaults.

I don't think the defaults matter much.

> btw. Here's a great talk on 'A philosophy of software design', by John Ousterhout, Professor of Computer Science at Stanford University.
>
> The talk is more or less built around the question he asks the audience at the start.
>
> Having not studied computer science (I did psychology), I was surprised when he mentioned 'we just don't teach this' :-(

Too long… Are you suggesting that he said that they don't teach OO? OO is more tied to modelling/systems development than strict Computer Science though. Computer Science is a «messy» branch of discrete mathematics that is more theoretical than practical, but still aims to enable useful theory, e.g. algorithms. In Europe the broader umbrella term is «Informatics» which covers more applied fields as well as «Computer Science».

June 27, 2022
On Monday, 27 June 2022 at 12:18:10 UTC, Ola Fosheim Grøstad wrote:
>
> ...
> Too long… Are you suggesting that he said that they don't teach OO? OO is more tied to modelling/systems development than strict Computer Science though. Computer Science is a «messy» branch of discrete mathematics that is more theoretical than practical, but still aims to enable useful theory, e.g. algorithms. In Europe the broader umbrella term is «Informatics» which covers more applied fields as well as «Computer Science».

No. He is specifically against promoting any 'particular' methodology.

His 'philosophy' of programming (like mine) is 'try them all' (or at least try more than one, and ideally one that is radically different from the others), so you can better understand the weaknesses and strengths of each. Only by doing this can you put yourself in a position to make better design choices.

The talk (and his book) is about getting programmers to not just focus on code that works, because that strategy is very hard to stop once it starts ('tactical tornadoes', he calls those programmers, as they leave behind a wake of destruction that others must clean up). This (he argues) is how systems become complicated. And sooner or later, these complexities *will* start causing you problems.

I think what he is saying, is that most programmers are tactical tornadoes.

He wants to change this.

The long-term structure of the system is more important, he argues.

He is saying that CS courses just don't teach this mindset, which I found to be surprising. That's what he's trying to change.

I like this comment from his book:

"Most modules have more users than developers, so it is better for the developers to suffer than the users.".

June 28, 2022

On Monday, 27 June 2022 at 22:48:35 UTC, forkit wrote:

> No. He is specifically against promoting any 'particular' methodology.

Yes, you choose methodology based on the scenario.

A method uses many different techniques. The point of a method is to increase the success rate when faced with a similar situation, not to be generally useful.

> He is saying that CS courses just don't teach this mindset, which I found to be surprising. That's what he's trying to change.

Well, they do say it. You cannot teach beginners everything at once. Most students are beginners. So you need many different angles, spread over many courses. Given the amount of theory, the time for practicing skills is very limited.

You can teach students techniques, but you cannot teach them intuition, which takes decades.

> I like this comment from his book:
>
> "Most modules have more users than developers, so it is better for the developers to suffer than the users."

Yes, the common phrase is that code is read more frequently than written.

Students, however, feel they are done when the code runs. Only a small percentage are mature enough as programmers to refine their skills. And that top 10% doesn't need the teacher... only the book and the assignment. Or rather, there are not enough resources.

It takes time to mature (measured in decades) and in that time people will develop patterns. Only 5% of students are at a high level in programming, IMHO.

Anyway, in the real world projects are delayed and code is written under time pressure. To get a module «perfect» you need to do it more than once. Very few projects can afford that kind of perfection, nor do they want programmers to rewrite modules over and over. Perfection is not a goal for applications. Only libraries and frameworks can try to achieve perfection.
