May 16, 2019
On Thursday, 16 May 2019 at 09:35:16 UTC, Alex wrote:
> For example, in D we have
>
> enum x = 4;
> and
> int y = 4;
>
> That is explicitly two different programming contexts. One CT and the other RT. But that is the fallacy. They are EXACTLY identical programmatically.
>
> Only when y is modified at some point later in the code do things potentially change. enum says we will never change x at CT... but why can't the compiler figure that out automatically rather than forcing us to play by its rules and have a separate context?

I agree with your basic goals.

But there are a few issues:

1. to avoid the special-casing you need to add type-variables as proper variables in the language. Then you can have functions as meta-level type-constructors.

2. you need to provide a construct for ensuring that a value is resolved at compile time (in current D, enum initializers play this role; see the sketch after this list).

3. how do you ensure that the library code you write is structured in a way that doesn't seem to arbitrarily break?

4. corollary of 3, how do you ensure that library code doesn't generate slow code paths spent on resolving typing information at runtime?
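
For reference, a minimal sketch of how current D forces compile-time resolution through CTFE (the fib helper is illustrative, not from the thread):

int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

enum ct = fib(10);        // enum initializer: forced through CTFE
static assert(ct == 55);  // usable wherever a constant is required

void main()
{
    int rt = fib(10);     // the very same function, evaluated at run time
}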


Anyway, I don't disagree with your goals, but you would have a completely different language.
May 17, 2019
On Thursday, 16 May 2019 at 13:05:52 UTC, Ola Fosheim Grøstad wrote:
> [...]
>
> Anyway, I don't disagree with your goals, but you would have a completely different language.

Yeah, I didn't mean that D itself would be this. My point was that this is where languages are ultimately headed. Knowing where we are going helps us get there.

D itself might not be able to do this but the goal would be to get closer to the ideal. How that is achieved properly for D is not in my domain of expertise.
May 17, 2019
On Thursday, 16 May 2019 at 09:35:16 UTC, Alex wrote:
> On Thursday, 16 May 2019 at 08:12:49 UTC, NaN wrote:
>
> No, you don't get it.

Unsurprisingly I feel that neither do you; at least, you don't get what I'm saying, or rather you think I'm saying something that I'm not.


> We are talking about a hypothetical compiler that doesn't have to have different contexts. In D, the contexts are arbitrarily separated in the language...

I don't see how it can not have different contexts: the compiler runs at compile time, produces a binary, and that binary runs at runtime. You can hide that fact and make code look and work the same in each context as much as possible, but you can't change what actually happens.


> What we are ultimately talking about here is that CT and RT are not two different concepts but one concept with some very minute distinction for RT.

Yes, exactly my point: there will be differences. Andrei did a talk on why C++'s static if was next to useless because it introduces a new scope. That's all I was pointing out: regular if and static if behave differently in D, and with good reason.


> As a programmer you shouldn't care if something is CT or RT and hence you shouldn't even know there is a difference.

Like most things that "just work", you don't care until you do.


> What you are saying is that you have one if and two contexts. What I'm saying is that you have one if and one context. That CT programming and runtime programming are NOT treated as two different universes with some overlap, but as the same universe with a slight boundary.

100% it'd be great if RT/CT were "same rules apply", but I was just pointing out one of the "boundaries" where that is currently not the case.


> enum x = 4;
> and
> int y = 4;
>
> That is explicitly two different programming contexts. One CT and the other RT. But that is the fallacy. They are EXACTLY identical programmatically.

static if (x) {}
and
if (y) {}

Are currently not identical semantically: one introduces a new scope and one does not.
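
To make that concrete, a minimal sketch (the declarations are illustrative):

enum x = 4;

void main()
{
    static if (x == 4)
    {
        int z = 1;   // static if introduces no new scope...
    }
    int w = z + 1;   // ...so z is still visible here

    int y = 4;
    if (y == 4)
    {
        int v = 1;   // a regular if opens a scope...
    }
    // int u = v;    // ...would be an error: v is not visible here
}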


> Only when y is modified at some point later in the code do things potentially change. enum says we will never change x at CT... but why can't the compiler figure that out automatically rather than forcing us to play by its rules and have a separate context?

Defining a constant is not you informing the compiler that you won't change the value; it's you telling the compiler to make sure that you don't. That can't be inferred by the compiler, because the act of modifying the value would destroy the inference that it should not be modified. It's a catch-22.
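
For example (a sketch):

enum x = 4;
const int y = 4;

void main()
{
    // x = 5;   // compile error: a manifest constant is not an lvalue
    // y = 5;   // compile error: cannot modify a const variable
}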
May 17, 2019
On Friday, 17 May 2019 at 00:25:36 UTC, Alex wrote:
> My point was that this is where languages are ultimately headed. Knowing where we are going helps us get there.

It would be interesting to use a language like that, for sure.
May 17, 2019
On Friday, 17 May 2019 at 09:15:14 UTC, Ola Fosheim Grøstad wrote:
> On Friday, 17 May 2019 at 00:25:36 UTC, Alex wrote:
>> My point was that this is where languages are ultimately headed. Knowing where we are going helps us get there.
>
> It would be interesting to use a language like that, for sure.

You'll have to wait a few millennia ;/ And it will require people to create it. After all, thousands of years ago the Greeks were contemplating such things in their own limited way, wishing there was a better way to do things... Socrates would be amazed at what we have, just as you will be amazed at what the future brings (if humans don't destroy themselves in the process).

But there is a clear evolutionary goal of computation, a direction in which things are moving that is beyond human control. 100 years ago computer scientists had no clue about the complexities of programming, yet everything they did was a stepping stone in the natural logical evolution of computation. Mathematics was the first computer; physical computers were then created, which sped things up tremendously. Who knows what is next.

With quantum computing, if it is in the right direction, a new programming paradigm and new languages will need to be developed to make it practical. Imagine when you first learned programming and how primitively you thought compared to now. That is a microcosm of what is happening on the large scale. Compilers are still in the primitive stage at the higher level, just as your programming knowledge is primitive at a higher level (and very advanced at a lower level).


The thing is, abstraction is the key to being able to deal with complexity. It is the only way humans can do what they do. Languages that make abstraction hard become very difficult to use for complexity. The only languages I see that handle complexity on any meaningful level are functional programming languages. Procedural languages actually seem to create an upper limit where it becomes exponentially harder to do anything past a certain amount of complexity.

The problem with functional programming languages is that they are difficult to use for simple stuff, and since all programs start out simple, it becomes very hard to find a happy medium. It's as if one needs a D+Haskell combo that works seamlessly together, where Haskell builds the abstraction and D handles the nitty-gritty details, with a smooth blend from the procedural to the functional so that one can work at any level at any time without getting stuck (choosing the right amount for the particular task, which might be coding a graphics function or designing an OOP-like hierarchy).

Most languages are hammers: you are stuck using them to solve all your problems, and if they don't solve a particular problem well, you are screwed; you just have to hammer away until you get somewhere.

Unfortunately D+Haskell would be an entirely new language.

One way to see this is the wave/particle duality. Humans are notorious for choosing one view or the other... in reality there is no distinction... there is just one thing. The same goes for programming. Procedural and functional are just two different extremes, and making the distinction is actually limiting in the long run. The goal with such things is choosing the right view for the right problem at the right time, and then one can generate the solution very easily. Difficulty is basically using the wrong tool for the job. [And you'll notice that most D users use D for a specific job that it excels at and then believe it is a great language because it does their job well... they just chose the right tool (relatively speaking) for their job. They generally fail to realize there are many tools and many jobs. The same goes for most Haskell users, most Python users, etc.]
May 17, 2019
On Friday, 17 May 2019 at 08:47:19 UTC, NaN wrote:
> [...]
>
> static if (x) {}
> and
> if (y) {}
>
> Are currently not identical semantically: one introduces a new scope and one does not.
>
> [...]
>
> Defining a constant is not you informing the compiler that you won't change the value; it's you telling the compiler to make sure that you don't. That can't be inferred by the compiler, because the act of modifying the value would destroy the inference that it should not be modified. It's a catch-22.

There is a difference between what the D compiler does, what it should do, and what all compilers should do. Compilers must evolve, and that requires changes. The reason static if does not create a new scope is that one cannot create variables inside a normal if and have them be seen outside. This is a limitation of if, not the other way around.

For example, if D had two types of scoping, say [] and {}, then it would not need static if.

if (x) [ ] <- does not create a scope for variable creation
if (x) { } <- standard

Many times we have to declare a variable outside of an if statement just to get things to work, as in the sketch below. Now, of course this is not great practice, because the variable may be used afterwards without ever having been assigned in the branch. But static if already has that issue.
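
Something like this (the names are hypothetical):

bool someCondition;           // hypothetical stand-ins
int compute() { return 42; }

void main()
{
    int result;               // declared outside the if just so it survives it
    if (someCondition)
        result = compute();
    // result may still be its default (0) here if the branch never ran
}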


The point is that you are trying to explain the issues that make it hard for D to do certain things, while I'm talking about what D should do as if it didn't have those issues.

D was designed with flaws in it... that has to be accepted. But if you take those flaws as set in stone, then there is never any point in making D better. You basically assume that it has no flaws (if you assume they can't be changed, then you are effectively assuming it is perfect).

You need to learn to think outside the box a little more... thinking inside the box is easy. Everyone knows what D does and what its flaws are (more or less, but flaws are generally easy to spot).

But if you can't imagine what D could be without the flaws then really, you don't see the flaws.

To truly see the flaws in something requires that you also see beyond those flaws, as if D existed in an alternate universe and didn't have them. From there one can work on fixing those flaws and making D better.

So, yes, D has issues. We have static if and if, and CT and RT boundaries. Just arbitrarily trying to combine them will be impossible in D as we know it. That should be obvious (that is why the distinction exists and why we are talking about it; if D didn't have these issues, we would be talking about other issues or none at all).

But the question is, how do we fix these issues in the right way? First, are they fixable? Well, everything is fixable, because one can always throw it away and start afresh. Is it worth fixing? That is up to the beholder.

Generally though, people who want something better (to fix a flaw) work towards a solution first by trying to quantify the flaw and by thinking outside the box, about why the flaw is a flaw. This requires moving past what is and towards what could be. You are stuck in the IS part of the equation. I am stuck in the COULD BE. Where you go wrong is that you don't seem to realize this is a COULD BE thread. It's true that in some sense one ultimately has to deal with the practical issues of IS, but one has to get the order right. First one has to know where to go (the COULD BE) before actually starting the journey (the IS).

So, yes, everything you have said, in the context of what IS, is true. But it is not helpful. The whole discussion is about the flaw in what IS and how to get beyond it. In the context of COULD BE you are wrong, because you are staying at home, not going anywhere (you are focusing on the flaw and letting it stay a flaw by not thinking beyond it).

Your thinking pattern probably invades every aspect of your life. You focus on the practical rather than the theoretical aspects of things. Hence it's hard for you to think outside the box, but you can think inside the box well. You get into arguments with people because what they say doesn't make sense in your box. You fail to realize there are many boxes... many different levels to view and understand. I have a similar problem, completely opposite in some sense: I am an outside thinker and tend to forget that some people are inside thinkers. I can think inside the box, but it is not my preferred medium (it's very limiting to me). I think you should ask yourself if you can think outside the box. If not, then it is a flaw you have, and you should work on fixing it, as it will make you a much more balanced and powerful human. If it's just your preference, then try to keep in mind that you will interact with other types of thinkers in the world (and try to parse which box people are thinking in based on context).

[And the fact is everyone thinks in different boxes, and there tends to be a lot of confusion in the world because everyone assumes everyone else thinks in the same box.]
May 17, 2019
On 14.05.19 19:58, H. S. Teoh wrote:
> The underlying idea behind my proposal is to remove artificial
> distinctions and unify CT and RT so that, as far as is practical, they
> are symmetric to each other.

As long as templates are not statically type checked, this does not seem particularly appealing. The main argument Andrei uses against typed templates is precisely that type checking is unnecessary because templates are expanded at compile time.
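
A minimal sketch of what the lack of static checking means in practice (illustrative):

auto twice(T)(T x) { return x + x; }  // the body is not checked until instantiation

void main()
{
    auto a = twice(3);      // fine: int supports +
    // auto b = twice("a"); // error only at this instantiation:
    //                      // strings have no binary + (concatenation is ~)
}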

The vision itself makes sense of course; it's basically system λ* [1].

[1] https://en.wikiversity.org/wiki/Foundations_of_Functional_Programming/Pure_type_systems#%CE%BB*_(na%C3%AFve_type_theory)
May 17, 2019
On Friday, 17 May 2019 at 10:51:48 UTC, Alex wrote:
> You'll have to wait a few millennia ;/

Not necessarily. A dynamic language like Python is quite close in some respects, except you usually don't want to be bogged down with types when doing practical programming in Python.

Anyway, if you want to design such a language, the best way to go would probably be to first build an interpreted language that has the desired semantics, then try to envision how that could be turned into a compiled language, and do several versions of alternative designs.

> Socrates would be amazed at what we have, just as you will be amazed at what the future brings (if humans don't destroy themselves in the process).

Yes, there is that caveat.

> With quantum computing, if it is in the right direction, a new programming paradigm and new languages will need to be developed to make it practical.

I don't think quantum computing is a prerequisite. What I believe will happen is that "machine learning" will fuel the making of new hardware with less "programmer control" and much more distributed architectures for computation. So, when that new architecture becomes a commodity, then we'll see a demand for new languages too.

But the market is too small in the foreseeable future, so at least for the next decades we will just get more of the same good old IBM-PC-like hardware design (evolved, sure, but there are some severe limitations imposed by step-by-step backwards compatibility).

I guess there will be a market shift if/when robots become household items. Lawnmowers are a beginning, I guess.

> That is a microcosm of what is happening on the large scale.

I don't think it is happening yet, though. The biggest change is in the commercial usage of machine learning. I think contemporary applied machine learning is still at a very basic level, but the ability to increase the scale has made it much more useful.

> Compilers are still in the primitive stage at the higher level, just as your programming knowledge is primitive at a higher level (and very advanced at a lower level).

Yes, individual languages are very primitive. However, if you look at the systems built on top of them, then there is some level of sophistication.

> difficult to use for complexity. The only languages I see that handle complexity on any meaningful level are functional programming languages.

Logic programming languages also handle complexity, but they are difficult to utilize outside narrow domains. Although I believe metaprogramming for the type system would be better done with a logic PL, how to make it accessible is a real issue.

Another class would be languages with builtin proof systems.

> Procedural languages actually seem to create an upper limit where it becomes exponentially harder to do anything past a certain amount of complexity.

You can do functional programming in imperative languages too; it is just that you tend not to. Anyway, there are mixed languages.
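
For instance, a sketch of functional style in D (illustrative):

import std.algorithm : filter, map, sum;
import std.range : iota;

void main()
{
    auto total = iota(1, 11)        // 1 .. 10
        .filter!(n => n % 2 == 0)   // keep the evens
        .map!(n => n * n)           // square them
        .sum;                       // 4 + 16 + 36 + 64 + 100
    assert(total == 220);
}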

> The problem with functional programming languages is that they are difficult to use for simple stuff, and since all programs start out simple, it becomes very hard to find a happy medium.

Well, I don't know. I think the main issue is that all programming languages lack the ability to create a visual expression that makes the code easy to reason about. Basically, the code looks visually too uniform, and we have to struggle to read meaning into it.

As a result we need to keep the model for large sections of the program in our heads, which is hard. So basically a language should be designed together with an accompanying editor with some visual modelling capabilities, but we don't know how to do that well… We just know how to do it in a "better than nothing" fashion.

> Unfortunately D+Haskell would be an entirely new language.

I think you would find it hard to bring those two together anyway.

The aims were quite different in the design. IIRC Haskell was designed to be a usable vehicle for research so that FP research teams could have some synergies from working on the same language model.

D was designed in a more immediate fashion, first as a clean up of C++, then as a series of extensions based on perceived user demand.

> [...] there is just one thing. The same goes for programming. Procedural and functional are just two different extremes, and making the distinction is actually limiting in the long run.

Well, I'm not sure they are extremes; you are more constrained with FP, but that also brings you coherency and less state to consider when reasoning about the program.

Interestingly a logic programming language could be viewed as a generalization of a functional programming language.

But I'll have to admit that there are languages that make for a completely different approach to practical programming, like Erlang or Idris.

But then you have research languages that try to be more regular interpretative, while still having it as a goal to provide a prover, like Whiley and some languages built by people at Microsoft that are related to Z3. These are only suitable for toy programs at this point, though.

> The goal with such things is choosing the right view for the right problem at the right time and then one can generate the solution very easily.

Yes. However, it takes time to learn a completely different tool. If you know C# then you can easily pick up Java and vice versa, but there is no easy path when moving from C++ to Haskell.

May 17, 2019
On Friday, 17 May 2019 at 13:52:26 UTC, Ola Fosheim Grøstad wrote:
> But then you have research languages that try to be more regular interpretative, while still having it as a goal to

I meant imperative/procedural, not interpretative…

May 17, 2019
On Friday, 17 May 2019 at 13:52:26 UTC, Ola Fosheim Grøstad wrote:
> [...]


All I will say about this is that all the different programming languages are just different expressions of the same thing. No matter how different they seem, they all attempt to accomplish the same goal. In mathematics, it has been found that the different branches are essentially identical and just look different because their "inventors" approached them from different angles with different intents and experiences.

Everything you describe is simply mathematical logic implemented using different syntactic and semantic constructs that all reduce to the same underlying boolean logic.

We already have general theorem-proving languages, and any compiler is a theorem prover, because all programs are theorems (in the Curry–Howard sense, a program is a proof of the proposition its type expresses).

The problem is not so much the logic side but the ability to deal with complexity. We can write extremely complex programs even in machine code... but, as you mention, the human brain really can't deal with the complexity. Visual methods and design patterns must be used to allow humans to abstract complexity. Functional programs do this well because they are directly based on abstraction (category theory). Procedural languages do not: you basically get functions, and maybe OOP on top of that, and then you have no way to manage all that stuff properly using the language and tools. As the program grows in complexity, so does the code, because there are no higher levels of abstraction to deal with it.

It all boils down to abstraction; that is the only way humans can deal with complexity. A programming language needs to be designed with that fact as its basis to be most effective. All the other details are irrelevant if that isn't covered. This is why almost no one programs in assembly... not necessarily because it's a bad language, but because it doesn't allow for abstraction. [I realize there are a lot of people who still program in assembly, but only because they have to or because their problems are not complex.]

I don't use Haskell enough to know if it has similar limitations, but my feeling is that because it is directly based on category theory it has the abstraction problem solved... it just has a lot of other issues that make it not a great language for practical usage.

Ultimately only the future will tell its secrets. I'm just trying to extrapolate from my experiences and from where I see the direction going. No one here will actually be alive to find out if I'm right or wrong, so ultimately I can say what I want ;) [But there is a clear evolution of programming languages, mathematics, and computation that provides a history and hence a direction for the future.]