Re: Thoughts about "Compile-time types" talk
May 14, 2019
On Mon, May 13, 2019 at 08:35:39AM +0000, Martin Tschierschke via Digitalmars-d wrote:
> On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote: [...]
> > I haven't fully thought through this yet, but the basic concept is that there should be *no need* to distinguish between compile-time and runtime in the general case, barring a small number of situations where a decision has to be made.
> [...]
> I was thinking and working in the same direction,
> when using regEx you can use the runtime (RT) or the compile time (CT)
> version, but why not let the compiler make the decisions?

It would be great if the compiler could make all of the decisions, but I think at some point, some level of control would be nice or even necessary.

What I envision is this:

- Most parameters are not specifically designated compile-time or
  runtime; they are just general parameters. So your function fun(x,y,z)
  has 3 parameters that could be runtime, or compile-time, or a mix of
  either. Which, exactly, is not decided at the declaration of the
  function.

- At some higher level along the call chain, perhaps in main() but
  likely in some subordinate but still high-level function, a decision
  will be made to, for example, call fun() with a literal argument, a
  compile-time known value, and a runtime argument obtained from user
  input. This then causes a percolation of CT/RT designations down the
  call chain: if x is bound to a literal, the compiler can pass it as a
  compile-time argument. Then inside fun(), it may pass x to another
  function gun(p,q,r).  That in turn fixes one or more parameters of
  gun() as compile-time or runtime, and so on. Similarly, if y as a
  parameter of fun() is bound to a runtime value and y is also passed
  to gun() inside fun()'s body, then that "forces" the corresponding
  parameter of gun() to be bound to a runtime value.

You could think of it as all functions being templates by default, and they get instantiated when called with a specific combination of runtime/compile-time arguments. With the added bonus that you don't have to decide which parameters are template parameters and which are runtime parameters; the compiler infers that for you based on what kind of arguments were passed to it.

Of course, sometimes you want to force a certain parameter to be either runtime or compile-time, e.g., to control template bloat. So perhaps some kind of designation like @ct or @rt on a parameter:

	// Tentative syntax
	auto fun(@ct int x, @rt int y) { ... }

This would force x to always be known at compile-time, whereas y can accept either (you can think of it as @ct "implicitly converts to" @rt, but not the other way round).

If a parameter is not designated either way, then the compiler is free to choose how it will be implemented.
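For contrast, here's a minimal sketch of what we're forced to write today: the CT/RT decision is baked into the declaration, so the same trivial logic needs two declarations (areaCT/areaRT are just made-up names for illustration):

	import std.stdio;

	// Compile-time variant: w must be known when the template is instantiated.
	int areaCT(int w)(int h) { return w * h; }

	// Runtime variant: both parameters are ordinary runtime values.
	int areaRT(int w, int h) { return w * h; }

	void main()
	{
	    enum width = 4;   // known at compile time
	    int height = 5;   // imagine this came from user input
	    writeln(areaCT!width(height));  // caller must pick the CT variant...
	    writeln(areaRT(width, height)); // ...or the RT one -- same logic twice
	}

Under the scheme above, a single fun(x, y) would cover both call sites, with the binding inferred at the call.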


T

-- 
If it tastes good, it's probably bad for you.
May 14, 2019
On Mon, May 13, 2019 at 10:26:11AM +0000, Luís Marques via Digitalmars-d wrote:
> On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote:
> > Skimmed over Luís Marques' slides on "compile-time types", and felt compelled to say the following in response:  I think we're still approaching things from the wrong level of abstraction.  This whole divide between compile-time and runtime is IMO an artificial one, and one that we must transcend in order to break new ground.
> 
> Thanks for the feedback. I think your counter-proposal makes some sense.  Whether it is the right way to attack the problem I don't know. My presentation was fairly high-level, but as part of my prototyping efforts I've gone over a lot of the details, the problems they would create, how they could be solved and so on. When I read your counter-proposal my knee jerk reaction was that it would address some deficiencies with my approach but also introduce other difficult practical problems.
[...]

I'd love to hear about the difficult practical problems you have in mind, if you have any specific examples.

The underlying idea behind my proposal is to remove artificial distinctions and unify CT and RT so that, as far as is practical, they are symmetric to each other.

Currently, the asymmetry between CT and RT leads to a lot of incidental complexity: we have std.algorithm.filter and std.meta.Filter, std.algorithm.map and std.meta.staticMap, and so on, which are needless duplications that exist only because of the artificial distinction between CT and RT.  Also, UFCS only applies to RT values, so to chain std.meta.Filter, we'd have to write ugly nested expressions where the RT counterpart is already miles ahead in terms of readability and writability.
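A tiny side-by-side sketch of that asymmetry (using Phobos as it exists today; the variable names are just illustrative):

	import std.algorithm : filter, map;
	import std.array : array;
	import std.meta : AliasSeq, Filter, staticMap;
	import std.traits : isIntegral, Unsigned;
	import std.stdio : writeln;

	void main()
	{
	    // Runtime values: UFCS lets the pipeline read left to right.
	    auto evens = [1, 2, 3, 4, 5].filter!(x => x % 2 == 0)
	                                .map!(x => x * 10)
	                                .array;
	    writeln(evens); // [20, 40]

	    // Types: no UFCS, so the equivalent pipeline nests inside out.
	    alias Types = AliasSeq!(int, double, long, string);
	    alias UInts = staticMap!(Unsigned, Filter!(isIntegral, Types));
	    static assert(is(UInts[0] == uint) && is(UInts[1] == ulong));
	}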

Then the ugly-looking !() vs. () between CT and RT arguments.  I'll admit !() was a very clever invention in the early days of D when templates were first introduced -- it's definitely much better than C++'s nasty ambiguous <> syntax.  But at the end of the day, it's still an artifact that only arose out of the artificial distinction between CT and RT parameters.

Looking forward, one asks, is this CT/RT distinction a *necessary* one? Certainly, at some point, the compiler must know whether something is available at compile-time or should be deferred to runtime.  But is this a decision that must be made *every single time* you declare a bunch of parameters?  Is it a decision so important that it has to be made *right then and there*?  Perhaps not.  Perhaps we can do better by passing this decision to the caller, who probably has a better idea of what context we're being called in, and who can make a more meaningful decision.

Hence, the idea of unifying CT and RT (to the extent possible) by not differentiating between them until necessary.


T

-- 
Why waste time learning, when ignorance is instantaneous? -- Hobbes, from Calvin & Hobbes
May 15, 2019
On Tuesday, 14 May 2019 at 17:44:17 UTC, H. S. Teoh wrote:
> On Mon, May 13, 2019 at 08:35:39AM +0000, Martin Tschierschke via Digitalmars-d wrote:
>> On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote: [...]
>> > I haven't fully thought through this yet, but the basic concept is that there should be *no need* to distinguish between compile-time and runtime in the general case, barring a small number of situations where a decision has to be made.
>> [...]
>> I was thinking and working in the same direction,
>> when using regEx you can use the runtime (RT) or the compile time (CT)
>> version, but why not let the compiler make the decisions?
>
> It would be great if the compiler could make all of the decisions, but I think at some point, some level of control would be nice or even necessary.

If you envisage a regular if statement being able to be both CT/RT depending on whether the value inside its brackets is known at CT or not, what do you do about whether it introduces a new scope? Static if does not, but regular if does; if you want them unified, something has to give.
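To make the issue concrete (a minimal sketch; example is just a placeholder function):

	void example(bool rt)
	{
	    static if (true)
	    {
	        int x = 1;   // static if adds no scope: x stays visible below
	    }
	    x = 2;           // fine

	    if (rt)
	    {
	        int y = 1;   // regular if has its own scope
	    }
	    // y = 2;        // error: undefined identifier y
	}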



May 15, 2019
On Wednesday, 15 May 2019 at 18:31:57 UTC, NaN wrote:
> On Tuesday, 14 May 2019 at 17:44:17 UTC, H. S. Teoh wrote:
>> On Mon, May 13, 2019 at 08:35:39AM +0000, Martin Tschierschke via Digitalmars-d wrote:
>>> On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote: [...]
>>> > I haven't fully thought through this yet, but the basic concept is that there should be *no need* to distinguish between compile-time and runtime in the general case, barring a small number of situations where a decision has to be made.
>>> [...]
>>> I was thinking and working in the same direction,
>>> when using regEx you can use the runtime (RT) or the compile time (CT)
>>> version, but why not let the compiler make the decisions?
>>
>> It would be great if the compiler could make all of the decisions, but I think at some point, some level of control would be nice or even necessary.
>
> If you envisage a regular if statement being able to be both CT/RT depending on whether the value inside its brackets is known at CT or not, what do you do about whether it introduces a new scope? Static if does not, but regular if does; if you want them unified, something has to give.

That is not true.

Any type is either known at "CT" or not. If it is not then it can't be simplified. If it is known then it can. A compiler just simplifies everything it can at CT and then runs the program to do the rest of the simplification.

The reason why it confuses you is that you are thinking in the wrong paradigm.


To unify them requires a language that is built around the unified concept.

D's language was not designed with this unified concept, but it obviously does most of the work because it does do CT compilation. Most compilers do. Any optimization/constant folding is CT compilation because the compiler knows that something can be computed.

So ideally, internally, a compiler would simply determine which statements are computable at compile time and simplify them, possibly simplifying an entire program... what is not known then is left to runtime to figure out.

D already does all this but the language clearly was not designed with the intention to unify them.

For example,

int x = 3;
if (x == 3) fart;

Here the compiler should be able to reason that x = 3 and optimize it all to

fart;


This requires flow analysis but if everything is 100% analyzed correctly the compiler could in theory determine if any statement is reducible and cascade everything if necessary until all that's left are things that are not reducible.

It's pretty simple in theory but probably very difficult to modify a pre-existing compiler that was designed without it to use it.

Any special cases then are handled by special keywords or syntaxes... which ideally would not have to exist at all.

Imagine if all statements in a program were known at compile time, even things like readln (as if it could see into the future)...

Then the compiler would compile everything down to a single return. It could evaluate everything: every mouse click, every file IO, every user choice, etc...

A program is simply a compiler that is compiling as it is being run, the users adding in the missing bits.

May 15, 2019
On Wed, May 15, 2019 at 06:31:57PM +0000, NaN via Digitalmars-d wrote:
> On Tuesday, 14 May 2019 at 17:44:17 UTC, H. S. Teoh wrote:
> > On Mon, May 13, 2019 at 08:35:39AM +0000, Martin Tschierschke via Digitalmars-d wrote:
> > > On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote: [...]
> > > > I haven't fully thought through this yet, but the basic concept is that there should be *no need* to distinguish between compile-time and runtime in the general case, barring a small number of situations where a decision has to be made.
> > > [...]
> > > I was thinking and working in the same direction, when using regEx
> > > you can use the runtime (RT) or the compile time (CT) version, but
> > > why not let the compiler make the decisions?
> > 
> > It would be great if the compiler could make all of the decisions, but I think at some point, some level of control would be nice or even necessary.
> 
> If you envisage a regular if statement being able to be both CT/RT depending on whether the value inside its brackets is known at CT or not, what do you do about whether it introduces a new scope? Static if does not, but regular if does; if you want them unified, something has to give.
[...]

I think the simplest solution would be for regular if to always introduce a new scope, and for static if to behave as before. Given their different effects on the surrounding code, blindly unifying them would probably be a bad idea.

But taking a step back, there are really two (perhaps more) usages of
static if: (1) to statically select a branch of code based on a known CT
value, or (2) to statically inject declarations into the surrounding
code based on some CT condition.

Case (1) is relatively straightforward to unify with regular if.

Case (2) would seem to be solely in the domain of static if, and I don't see much point in trying to unify it with regular if, even if such were possible.  For example, static if can appear outside a function body, whereas regular if can't. Static if can therefore be used to choose a different set of declarations at the module level. If we were to hypothetically unify that with regular if, that would mean that we have to somehow support switching between two or more sets of likely-conflicting declarations at *runtime*.  I don't see any way of making that happen without creating a huge mess, unclear semantics, and hard-to-understand code -- not to mention the result is unlikely to be very useful.

So I'd say static if in the sense of (2) should remain as-is, since it serves a unique purpose that isn't subsumed by regular if.  Static if in the sense of (1), however, can probably be unified with regular if.
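To illustrate the two usages (a small sketch; the names are made up):

	enum useWideReal = false;        // hypothetical compile-time switch

	// Usage (2): inject one of two sets of declarations at module scope.
	static if (useWideReal)
	    alias Real = real;
	else
	    alias Real = double;

	Real halve(Real x)
	{
	    // Usage (1): select a branch on a CT-known value -- the case that
	    // could plausibly collapse into a regular if.
	    static if (Real.sizeof > double.sizeof)
	        return x / 2;
	    else
	        return x * 0.5;
	}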


T

-- 
Genius may have its limitations, but stupidity is not thus handicapped. -- Elbert Hubbard
May 15, 2019
On Wed, May 15, 2019 at 06:57:02PM +0000, Alex via Digitalmars-d wrote: [...]
> D already does all this but the language clearly was not designed with the intention to unify them.
> 
> For example,
> 
> int x = 3;
> if (x == 3) fart;
> 
> Here the compiler should be able to reason that x = 3 and optimize it all to
> 
> fart;
> 
> 
> This requires flow analysis but if everything is 100% analyzed correctly the compiler could in theory determine if any statement is reducible and cascade everything if necessary until all that's left are things that are not reducible.
> 
> It's pretty simple in theory but probably very difficult to modify a pre-existing compiler that was designed without it to use it.

Good news: this is not just theory, and you don't have to modify any compiler to achieve this.  Just compile the above code with LDC (ldc2 -O3) and look at the disassembly.  :-)

In fact, the LDC optimizer is capable of far more than you might think. I've seen it optimize an entire benchmark, complete with functions, user-defined types, etc., into a single return instruction because it determined that the program's output does not depend on any of it.

Writing the output into a file (to cause a visible effect that prevents the optimizer from eliding the entire program) doesn't always help either, because the optimizer would then run the entire program at compile-time and emit the equivalent of:

	enum answer = 123;
	outputfile.writeln(answer);

The only way to get an accurate benchmark is to make it do non-trivial work -- non-trivial meaning every operation the program does contributes to its output, and said operations cannot be (easily) simplified into a precomputed result.  The easiest way to do this is for the program to read some input that can only be known at runtime, then perform the calculation on this input.  (Of course, there's a certain complexity limit to LDC's aggressive optimizer; I don't think it'd optimize away an NP-complete problem with fixed input at compile-time, for example.  But it's much easier to read an integer at runtime than to implement a SAT solver just so LDC won't optimize it all away at compile-time. :-D  Well actually, I'm sure the LDC optimizer will give up long before you give it an NP-complete problem, but in theory it *could* just run the entire program at compile-time if all inputs are already known.)
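For example, a minimal sketch of the "read the input at runtime" trick (the sum-of-squares loop is just a stand-in for the real benchmark body, and a sufficiently clever backend may still simplify something this trivial):

	import std.conv : to;
	import std.datetime.stopwatch : AutoStart, StopWatch;
	import std.stdio : readln, writeln;
	import std.string : strip;

	void main()
	{
	    // n comes from stdin, so it cannot be folded away at compile time.
	    immutable n = readln.strip.to!ulong;

	    auto sw = StopWatch(AutoStart.yes);
	    ulong sum;
	    foreach (i; 0 .. n)
	        sum += i * i;
	    sw.stop();

	    // Using the result keeps the loop observable.
	    writeln(sum, " in ", sw.peek);
	}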

The DMD optimizer, by comparison, is a sore loser in this game. That's why these days I don't even bother considering it where performance is concerned.


But the other point to all this, is that while LDC *can* do all of this at compile-time, meaning the LLVM backend can do all of this, for other languages like C and C++ too, what it *cannot* do without language support is to use the result of such a computation to influence program structure.  That's where D's CTFE + AST manipulation becomes such a powerful tool.  And that's where further unification of RT/CT concepts in the language will give us an even more powerfully-expressive language.


T

-- 
Computers aren't intelligent; they only think they are.
May 16, 2019
On Wednesday, 15 May 2019 at 19:28:21 UTC, H. S. Teoh wrote:
> On Wed, May 15, 2019 at 06:57:02PM +0000, Alex via Digitalmars-d wrote: [...]
>> D already does all this but the language clearly was not designed with the intention to unify them.
>> 
>> For example,
>> 
>> int x = 3;
>> if (x == 3) fart;
>> 
>> Here the compiler should be able to reason that x = 3 and optimize it all to
>> 
>> fart;
>> 
>> 
>> This requires flow analysis but if everything is 100% analyzed correctly the compiler could in theory determine if any statement is reducible and cascade everything if necessary until all that's left are things that are not reducible.
>> 
>> It's pretty simple in theory but probably very difficult to modify a pre-existing compiler that was designed without it to use it.
>
> Good news: this is not just theory, and you don't have to modify any compiler to achieve this.  Just compile the above code with LDC (ldc2 -O3) and look at the disassembly.  :-)
>

Yeah, I was just using it as an example.... most modern compilers can do some mixture and in fact do.

> In fact, the LDC optimizer is capable of far more than you might think. I've seen it optimize an entire benchmark, complete with functions, user-defined types, etc., into a single return instruction because it determined that the program's output does not depend on any of it.

Yes, this is because the evolution of compilers is moving towards what we are talking about. We are not the first to think about things this way; in fact, all things evolve in such ways. The "first compiler" (the abacus?) was metaprogramming at the time.

> Writing the output into a file (cause a visible effect to prevent the optimizer from eliding the entire program) doesn't always help either, because the optimizer would then run the entire program at compile-time then emit the equivalent of:
>
> 	enum answer = 123;
> 	outputfile.writeln(answer);
>
> The only way to get an accurate benchmark is to make it do non-trivial work -- non-trivial meaning every operation the program does contributes to its output, and said operations cannot be (easily) simplified into a precomputed result.  The easiest way to do this is for the program to read some input that can only be known at runtime, then perform the calculation on this input.  (Of course, there's a certain complexity limit to LDC's aggressive optimizer; I don't think it'd optimize away an NP-complete problem with fixed input at compile-time, for example.  But it's much easier to read an integer at runtime than to implement a SAT solver just so LDC won't optimize it all away at compile-time. :-D  Well actually, I'm sure the LDC optimizer will give up long before you give it an NP-complete problem, but in theory it *could* just run the entire program at compile-time if all inputs are already known.)

A program is, in essence, a very complicated mathematical equation. A compiler "simplifies" the equation so that it is easier to work with (faster, less space, etc). What's more mind-blowing is that this is actually true... that is, the universe seems to be one giant mathematical processing machine. Men 200 years ago working on the foundations of computing had no idea about this stuff and that there would be these deep relationships between math, computers, and life itself. I think humanity is just scratching the surface though.

In any case, a program is just an equation and a compiler a simplifier. A compiler attempts to compile everything down to a final result; certain inputs are not known at compile time, so they are determined at "run time".

Imagine this: you have some complex program, say a PC video game. What is the purpose of this program? Is it to run it and experience it? Nay! Ultimately it is a final result! If the compiler could, hypothetically, compile it down to a final value, that would be ideal. What is the final value, though? Well, it is the experience of the game in the human mind. Imagine you could experience it without having to waste hours and hours... that would be the ideal compiler.

Obviously here I'm taking a very general definition of compiler... but again, this is where things are headed. The universe has time and space... the only way to make more time is to reduce the costs and increase the space. Eventually humans will have uC in their brains where they can experience things much quicker, interface with the "compiler" much quicker, etc... [probably thousands of years off if humanity makes it].


> The DMD optimizer, by comparison, is a sore loser in this game. That's why these days I don't even bother considering it where performance is concerned.
>
>
> But the other point to all this, is that while LDC *can* do all of this at compile-time, meaning the LLVM backend can do all of this, for other languages like C and C++ too, what it *cannot* do without language support is to use the result of such a computation to influence program structure.  That's where D's CTFE + AST manipulation becomes such a powerful tool.  And that's where further unification of RT/CT concepts in the language will give us an even more powerfully-expressive language.
>

It may be able to do such things as you describe. I'm not claiming anything is impossible, quite the contrary. I'm mainly talking about what is and what could be. LDC may do quite a bit more work in this area. Ultimately syntax is an abstraction and it is a hurdle. Ideally we would have one symbol for one code, a unique hashing for all programs (Gödel numbering). Then it would be very easy to write a program! ;) Of course looking up the right symbol would take forever. In fact, one could describe programming as precisely looking up this code, and it is quite complex and easy to get the wrong code (e.g., all programs are correct, we just choose the wrong one).


My main point with D's language is that it has separate CT and RT constructs. enum is CT. This complicates things. If D had been designed with the LDC approach in mind and were able to simply optimize all "RT" code that could be optimized, the distinction wouldn't be needed... although the separation does make it easier to reason about: everyone knows an enum is CT.

My way of thinking is this:

All programming should be thought of as "CT". That is, all data is known; it just may only become known in the future. A compiler cannot reduce a future state since it is "unknown" at present. So it delays the compilation until the future (when you click the button or press a key or insert the USB device).

This is of course just thinking of CT and RT slightly differently. It is more functional. But what it does is shift the emphasis in the right direction.

Why?

Because if one writes code as if it were all RT then one tends to prevent optimizations from occurring (as DMD does). If one thinks of CT one usually has the implicit idea that things are going to be reduced (as LDC does). It's more of a mindset but it has repercussions.

E.g., if one always uses static if by default then one is thinking in CT. If one always defaults to if then one is thinking in RT. The difference is that the compiler can always optimize the static if, while it may or may not (LDC vs. DMD) optimize the standard if.

Which, as you have pointed out, usually there is no true difference. Either the if can or cannot be optimized depending on the state.

LDC is simply a more powerful compiler that "reasons" about the program. That is, it understands what it is doing more than DMD does. DMD does things blindly and makes a lot of assumptions.

Until there is a massive paradigm shift in programming (e.g., like the one from punch cards to assembly), the only way to optimize code is going to be to design languages and compilers that are optimal. That is the progression of compilers, as we see with LDC vs DMD.

Programming is getting more complex, not less. But it is also becoming more optimal... that is the progression of all things... even compilers evolve (in the real sense of evolution).

I think my overall point is that D's language design itself has made the artificial distinction between CT and RT and now we have to live with it... The distinction was made up front (in the language) rather than deferred to the last minute (in the compiler). Of course, this problem started way back with the "first" programming language. At some point in the future someone will be saying the same sorts of things about LDC and some other "advanced" concept.

The more I program in D, the more I find metaprogramming a burden. Not because it is not powerful, but because I have to think with two hats on at the same time. It's not difficult until I stop programming in D for a while and then have to find the other hat and get good at balancing both of them on my head again.

D's metaprogramming is powerful but it is not natural. Ideally we would have a language that is both powerful and natural. I think Haskell might be like this but it is unnatural in other ways.

Of course, at the end of the day, it is what it is...





May 16, 2019
On Wednesday, 15 May 2019 at 18:57:02 UTC, Alex wrote:
> On Wednesday, 15 May 2019 at 18:31:57 UTC, NaN wrote:
>> On Tuesday, 14 May 2019 at 17:44:17 UTC, H. S. Teoh wrote:
>>> On Mon, May 13, 2019 at 08:35:39AM +0000, Martin Tschierschke via Digitalmars-d wrote:

>> If you envisage a regular if statement being able to be both CT/RT depending on whether the value inside its brackets is known at CT or not, what do you do about whether it introduces a new scope? Static if does not, but regular if does; if you want them unified, something has to give.
>
> That is not true.
>
> Any type is either known at "CT" or not. If it is not then it can't be simplified. If it is known then it can. A compiler just simplifies everything it can at CT and then runs the program to do the rest of the simplification.

You're conflating how it is implemented with the semantics of the actual language. I understand how static if works; what I'm saying is that if you want to just have "if" and for the compiler to infer whether it's CT or RT, then you have the same construct with different semantics depending on the context.


May 16, 2019
On Wednesday, 15 May 2019 at 19:09:27 UTC, H. S. Teoh wrote:
> On Wed, May 15, 2019 at 06:31:57PM +0000, NaN via Digitalmars-d wrote:
>
> Case (1) is relatively straightforward to unify with regular if.
>
> Case (2) would seem to be solely in the domain of static if, and I don't see much point in trying to unify it with regular if, even if such were possible.  For example, static if can appear outside a function body, whereas regular if can't. Static if can therefore be used to choose a different set of declarations at the module level. If we were to hypothetically unify that with regular if, that would mean that we have to somehow support switching between two or more sets of likely-conflicting declarations at *runtime*.  I don't see any way of making that happen without creating a huge mess, unclear semantics, and hard-to-understand code -- not to mention the result is unlikely to be very useful.
>
> So I'd say static if in the sense of (2) should remain as-is, since it serves a unique purpose that isn't subsumed by regular if.  Static if in the sense of (1), however, can probably be unified with regular if.

So what you really have is (1), which is an optimisation problem: enable parameters that can be either CT or RT, so that the same code can be used in each instance and the compiler can do dead code elimination when a parameter is CT.

Or (2), which is conditional compilation, which stays as it is.

So it is more a question of unifying CT & RT parameters than it is of unifying CT & RT language constructs. In fact, maybe it doesn't do anything for the latter? So it comes down to enabling parameters that can do both CT and RT, so the compiler can do dead code elimination on those parameters if CT. Because at the moment the choice of CT/RT is fixed by the function being called.
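A sketch of that duplication as it stands today (logCT/logRT are made-up names; the flag-as-template-parameter version gets the dead branch eliminated at instantiation, while the flag-as-runtime-parameter version keeps both branches):

	import std.stdio : writeln;

	// Flag decided at compile time: the unused branch is dropped per instantiation.
	void logCT(bool verbose)(string msg)
	{
	    static if (verbose) writeln("[verbose] ", msg);
	    else                writeln(msg);
	}

	// Flag decided at run time: both branches survive into the generated code.
	void logRT(bool verbose, string msg)
	{
	    if (verbose) writeln("[verbose] ", msg);
	    else         writeln(msg);
	}

	void main()
	{
	    logCT!true("flag known at compile time");
	    logRT(false, "flag decided at run time");
	}
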
May 16, 2019
On Thursday, 16 May 2019 at 08:12:49 UTC, NaN wrote:
> On Wednesday, 15 May 2019 at 18:57:02 UTC, Alex wrote:
>> On Wednesday, 15 May 2019 at 18:31:57 UTC, NaN wrote:
>>> On Tuesday, 14 May 2019 at 17:44:17 UTC, H. S. Teoh wrote:
>>>> On Mon, May 13, 2019 at 08:35:39AM +0000, Martin Tschierschke via Digitalmars-d wrote:
>
>>> If you envisage a regular if statement being able to be both CT/RT depending on whether the value inside its brackets is known at CT or not, what do you do about whether it introduces a new scope? Static if does not, but regular if does; if you want them unified, something has to give.
>>
>> That is not true.
>>
>> Any type is either known at "CT" or not. If it is not then it can't be simplified. If it is known then it can. A compiler just simplifies everything it can at CT and then runs the program to do the rest of the simplification.
>
> You're conflating how it is implemented with the semantics of the actual language. I understand how static if works; what I'm saying is that if you want to just have "if" and for the compiler to infer whether it's CT or RT, then you have the same construct with different semantics depending on the context.

No, you don't get it.

We are talking about a hypothetical compiler that doesn't have to have different contexts. In D, the contexts are arbitrarily separated in the language...

What we are ultimately talking about here is that CT and RT are not two different concepts but one concept with a very minute distinction for RT.

As a programmer you shouldn't care if something is CT or RT and hence you shouldn't even know there is a difference.

What you are saying is that you have one if and two contexts. What I'm saying is that you have one if and one context; that CT programming and runtime programming are NOT treated as two different universes with some overlap, but as the same universe with a slight boundary.

For example, in D we have

enum x = 4;
and
int y = 4;

That is explicitly two different programming contexts, one CT and the other RT. But that is the fallacy. They are EXACTLY identical programmatically.

Only when y is modified at some point later in the code do things potentially change. enum says we will never change x at CT... but why can't the compiler figure that out automatically rather than forcing us to play by its rules and have a separate context?


CTFE sort of blends the two and it is a step in the direction of unifying CT and RT. Of course, it will never remove enum from the language...
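A minimal sketch of that blending (triangular is just an illustrative name):

	int triangular(int n)
	{
	    int sum;
	    foreach (i; 1 .. n + 1) sum += i;
	    return sum;
	}

	enum atCompileTime = triangular(100);   // forced through CTFE, like enum x = 4;
	static assert(atCompileTime == 5050);

	void main(string[] args)
	{
	    import std.conv : to;
	    import std.stdio : writeln;
	    // Same function, argument only known at run time.
	    writeln(triangular(args.length.to!int));
	}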

The point is that D, and most programming languages, create a very strong distinction between CT and RT when the truth is that there is virtually no distinction.

This happens because programming languages started out as almost entirely RT, with CT added on top of them. This created the separation, not because there actually is a theoretical one. In the context of category theory, RT is simply the single unknown category; any code that depends on it must be deferred from computation until it is known (which occurs when we run the program and "compilation" can finish).

The problem is that 99.9% of programmers think almost entirely in terms of RT, even when they do CT programming. For them a compiler just takes code and spits out machine code... but they do not see how it is all connected. A CPU is also a compiler and part of the compilation process. It takes certain bit patterns and compiles them down to others. Seeing things in this larger process shows one the bigger picture and how there are many artificial boundaries, some of which are no longer needed.






