On Saturday, 25 June 2022 at 23:11:18 UTC, forkit wrote:
> The bigger the object, the more difficult it becomes to model it using quantum physics. The easier it becomes to understand the interactions, cause they're more visible (encapsulated if you will). The easier it becomes to identify the processes by which those interactions come about, cause they're more visible. And the easier it becomes to model what the emergent behaviour looks like, because it too is more visible.
D is trying to position itself as a language where you start with a prototype and evolve it into a product. So you should ideally be able to start by composing generic (flexible and inefficient) components that give you an approximation of the end product, yet one representative enough for the end user/developer to form an idea of the final product and make judgments.
Is D there yet? Probably not. It has ranges in its standard library and not a lot more.
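As a concrete taste of that prototype stage, here is a minimal sketch built purely from Phobos range composition; the function and the numbers are illustrative, not anything from D's roadmap:

```d
import std.algorithm : filter, map, sum;
import std.range : iota;

// Prototype-grade: generic, lazy, flexible, and not necessarily optimal.
auto sumOfSquaredEvens(int limit)
{
    return iota(limit)            // 0, 1, ..., limit-1
        .filter!(n => n % 2 == 0) // keep the evens
        .map!(n => n * n)         // square them
        .sum;                     // reduce
}

void main()
{
    import std.stdio : writeln;
    writeln(sumOfSquaredEvens(10)); // 0 + 4 + 16 + 36 + 64 = 120
}
```

The "evolve into a product" story would be to later replace such a pipeline with a hand-tuned loop behind the same signature, without callers noticing.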
What is the core strategy for problem solving? Divide and conquer. Start with the big object (what you want the user to see) and then divide (break it down) until you end up with pieces for which you have ready-mades to build an approximation.
The D ecosystem lacks those ready-mades. That is OK for now, but there are two questions (a sketch of what they could mean in code follows below):
- Can the D abstraction mechanisms provide ready-mades that are easily configurable?
- Is it easy to replace those ready-mades with more performant structures?
Can we say yes? Can we say no? Or maybe we just don't know.
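To make the questions concrete, here is one hedged sketch of what an "easily configurable ready-made" could look like: the storage strategy is a template parameter, so a prototype-grade default can later be swapped for a more performant structure without touching calling code. All names here are hypothetical, not an existing library API:

```d
import std.stdio : writeln;

// Prototype storage: simple and flexible, GC-backed array appends.
struct ArrayStorage(T)
{
    private T[] items;
    void put(T value) { items ~= value; }
    size_t length() const { return items.length; }
}

// A component written against the storage policy, not a concrete type.
struct Collector(T, Storage = ArrayStorage!T)
{
    Storage storage;
    void add(T value) { storage.put(value); }
    size_t count() const { return storage.length; }
}

void main()
{
    Collector!int c;      // the default, prototype-grade storage
    c.add(1);
    c.add(2);
    writeln(c.count());   // 2
    // Later: Collector!(int, PreallocatedStorage!int) keeps the same
    // interface with a different performance profile
    // (PreallocatedStorage is hypothetical).
}
```

Whether D's abstraction mechanisms make this pattern pleasant at scale is exactly the open question.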
What is needed is a methodology with patterns. Only when we collect the experience of developers who use the methodology and try to apply those patterns can we judge with real clarity what D is lacking.
What we have now is people with different unspoken ideas about development methodology, based on personal experience and what they read on blogs. There is no "philosophy" of systems development that can drive the language evolution towards something coherent. As a result, language evolution is driven by replicating other languages, taking bits and pieces with no unifying thought behind them.
Ideally, a language would be designed to support a methodology geared to a specific set of use cases. Then you could innovate and evaluate the innovations objectively. With no such methodology to back up the language design, you end up randomly accruing features that are modified/distorted replications of features from other languages, and it becomes difficult to evaluate whether new features support better development processes or merely create «noise» and «issues». It is also difficult to evaluate when your feature set is complete.
> On the other hand, the smaller the object, the harder it becomes to model it using the same object decomposition used with larger objects, the harder it becomes to understand the interactions, the harder it becomes to identify the processes by which those interactions come about, and the harder it becomes to model what the emergent behaviour looks like.
> The smaller the object gets, the less chance you have of understanding item (1), let alone items (2), (3), and (4).
> In the end, you end up with something like the Linux kernel!
> It just works. But nobody really knows why.
Software development is basically an iterative process where you go back and forth between top-down and bottom-up analysis/development/programming. You have to go top-down to find out what you need and can deliver, then you go bottom-up to meet those needs, then back to top-down, and so on… iteration after iteration.
So you need to both work on the big «objects» and the small «objects» at the same time (or rather in an interleaving pattern).
Linux is kinda different. There was an existing, well-documented role model (Unix) with lots of educational material, so you could easily anticipate what the big and the small objects would be. That is not typical; there is usually no need for a replica of something else (software development is too expensive for that). The only reason there was a market for Linux was that there were no easily available free open-source operating systems (Minix was open source, but not free).
Interestingly, Unix is a prime example of reducing complexity by dividing the infrastructure into objects with «coherent» interfaces (not really, but they tried). They didn't model the real world, but they grabbed a conceptualisation that is easily understood by programmers: file objects. So they basically went with: let's make everything a file object (screen, keyboard, mouse, everything).
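In D terms, that design decision could be sketched roughly like this; the types are an illustrative toy, not an actual OS or Phobos API:

```d
// One coherent interface, many device implementations.
interface FileLike
{
    ubyte[] read(size_t n);
    void write(const(ubyte)[] data);
}

class Keyboard : FileLike
{
    ubyte[] read(size_t n) { /* poll input */ return null; }
    void write(const(ubyte)[] data) { /* keyboards ignore writes */ }
}

class Screen : FileLike
{
    ubyte[] read(size_t n) { return null; } // nothing to read from a screen
    void write(const(ubyte)[] data) { /* draw the bytes */ }
}

// Client code only ever sees FileLike, which is the point:
// the interactions stay uniform and visible.
void copy(FileLike src, FileLike dst, size_t chunk = 512)
{
    auto data = src.read(chunk);
    if (data.length) dst.write(data);
}
```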
Of course, the ideal for operating system design is the microkernel approach. What is that about? Break up everything into small encapsulated objects with limited responsibilities that can be independently rebooted. Basically OO/actors.
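A loose sketch of that idea with std.concurrency, where an isolated worker owns one narrow responsibility and communicates only via messages; the message protocol here is made up for illustration:

```d
import std.concurrency;
import std.stdio : writeln;

// One small responsibility: respond to pings until told to stop.
void worker()
{
    bool running = true;
    while (running)
    {
        receive((string msg) {
            if (msg == "stop") running = false;
            else writeln("worker got: ", msg);
        });
    }
}

void main()
{
    auto tid = spawn(&worker); // isolated: no shared mutable state
    tid.send("ping");
    tid.send("stop");
    // A real supervisor would use spawnLinked, watch for LinkTerminated,
    // and respawn the worker: the "independently rebooted" part.
}
```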
(Linux has also moved towards encapsulation in smaller less privileged units as the project grew.)