On Monday, 23 December 2024 at 12:02:47 UTC, Renato Athaydes wrote:
> On Thursday, 19 December 2024 at 06:04:33 UTC, Jo Blow wrote:
> > But there have been several cases when I have asked others about trying to do certain "compile time" things in Kotlin that say it can't be done. It seems that there is actually no compile time programming in Kotlin(and I guess Java too?)? That at best one has to work in some DSL like thing that uses a "plugin" but things that make D shine because you can just get shit done, usually several ways, doesn't exist. Things like string mixins, CTFE, etc. D would be an amazing language to use if it was integrated into android studio like Kotlin.
> Those features are exactly what makes D such a difficult language to provide good support for in an IDE. Even things that are trivial in languages like Kotlin, like removing unused imports, can't be easily done in D (I was told that in a discussion about why that didn't work at all with D).
I don't think that is the case. Even with Kotlin I'm constantly getting import issues with auto-import, where it will import some random type if I copy and paste something. The removal part is essentially easy because you could just remove them all and then re-add them, if your adder were perfect(slow, but workable).
Basically it's just a big import table graph. I think what makes it harder on D's end is lack of IDE support and the ad hoc nature of organizing libraries. In Kotlin's ecosystem everything seems to work together and evolve in harmony, while in D it's all ad hoc, as people just tacked on stuff that they wanted, and then they eventually leave and others try to maintain it, and so on. It's the difference of having a lot of $$$ behind something versus relying on "handouts".
There is literally nothing to stop D or any other language from having anything else any other does but time/$$$.
The underlying system is just mathematics, and all of that is just logic, and all that stuff is perfectly well worked out.
Maybe one of the best languages might be Haskell, as essentially it is an implementation of "category theory", but it has the least support. Likely the more "complex" a system is, the fewer users it will have and so the less $$$ behind it. This is why Python is so popular but a terrible language(it doesn't need to exist). It was pushed as a simple language for simple programmers(mostly people who were just entering the boom of programming in the 2000's) and a lot of $$$ got behind it, but it offers nothing of value as a programming language. OTOH it is easy to use, works, and has a lot of libraries(because of the user base), so in that sense it does offer something, and something which D doesn't have.
I think the way things have evolved is perceived wrongly. E.g., Python is no more complex or simple than C. They are essentially the same languages with just different rewrite rules and extra features or not. But the transformation of society into a digital one, with C being before that(and things like Fortran, Algol, and others before or around it), was considered "hard" because, well, it was a different era. As people tried to learn computers and programming it was difficult(no Wikipedia, intellisense, nice GUIs, speed, etc), so these languages became "hard languages". As society started to change and more $$$ was pushed in, we got newer languages, and some of these were pushed as "easier" and hooked a lot of people. But, in fact, they are no easier or harder; they just look different. It's like driving two cars. Pretty much, if you can drive a car you can drive any car. You might have some trouble if you have to drive a manual and haven't learned that, but it's not hard, just a little work. Similarly, you could say that manuals were first and then automatics came out, and that turned manuals into being "hard", when before they were just what they were(neither hard nor easy, but there wasn't anything to "divide and conquer" them).
It's basically all about $$$. With enough money anything can be done, because one can use good ol' capitalism to get people to do the work.
I don't think anyone wants to spend a huge amount of money writing a new system that is much better designed from the ground up, because the cost vs reward is too lopsided. Productivity, say, might increase 10%, but is that really worth the cost(especially in our current world environment)? It is likely much cheaper to just hire more "slaves" to overcome that 10%.
> Kotlin and Java have always taken IDE integration seriously and would not add a feature that "breaks" the IDE, even if that makes compile-time programming impossible. Reflection is heavily used by frameworks, but nearly never in application code, exactly because it makes IDE support and static analysis impossible.
Yes, impossible... because one can't know something that is unknowable and reflection typically isn't knowable at compile time. Yet, there are things that are known at compile time and must be known.
E.g., the type of any variable must be known at compile time. Even if it is "dynamic", it must be known that it is of type dynamic. Hence reflecting over the type is well defined. There are many things that are defined at compile time that can be reflected over. Some types may be provably known to be something, or variables may be specific values(constants), and all of this can be used if it is tracked by the compiler.
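To make that concrete, here is a minimal D sketch of compile-time "reflection" over a type: the branch is selected by the compiler, not at runtime (the describe helper is my own invented name, not a library function):

```d
import std.stdio;

// Inspect a variable's type entirely at compile time: typeof/static if
// are resolved by the compiler, so no runtime reflection is involved.
void describe(T)(T value)
{
    static if (is(T == int))
        writeln("int: ", value);
    else static if (is(T : double))
        writeln("floating point: ", value);
    else
        writeln(T.stringof, ": ", value); // generic fallback, still typed
}

void main()
{
    describe(3);       // picks the int branch at compile time
    describe(2.5);     // picks the floating point branch
    describe("hello"); // falls through to the generic branch
}
```

Each call instantiates a different version of describe, with the dead branches discarded before code generation.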
I think the real issue here is not so much that reflection is "bad"(although it might be slower) but that there are generally other ways used to overcome the need to reflect on types in application code.
The issue is that originally the type systems in languages were very limited and fixed in nature. Compilers were very simple and "dumb". Things needed to be done explicitly. As things evolved, people learned a little more and added a little more.
I think the real problem is that we've been building on systems that were built on very primitive ideas, so there is a constant piling-on of ad-hoc changes that may or may not evolve into something more, and then a constant need to maintain them.
Ideally, if we could just create a new system from the ground up(including hardware) with all the "lessons learned", we would get systems that would be far more efficient and "complex", because we can, as a species, handle more complexity now. But what we really have is an essentially primitive system with many layers upon layers of improvements, added to give it the features and expressiveness we really want once we realized there are better ways.
The issue is that the cost to truly start fresh and "do things right" with the lessons learned is too great. So we are stuck with the flaws of the past that have become part of our "DNA". [This is true of all things because it is evolution at work]
The point I'm making is that none of this is "impossible" but simply impractical due to the cost involved.
The mathematics is pretty much completely worked out. Computers are just calculators and how they work are dictated by relatively straightforward mathematical ideas.
When we program we are pretty much just doing high level semantic bit fiddling. There's only one "god" and that is nand or nor. All logic, all math, all science, is just nand or nor and binary input and output. (it's all transistors at the end of the day)
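That "one god" claim can actually be checked mechanically. Here is a small D sketch where the compiler itself, via CTFE, verifies every derived connective against every input(the helper names nand, and_, etc are mine):

```d
// Every boolean connective built from nand alone; the static asserts
// below are evaluated at compile time (CTFE), so the exhaustive check
// over all inputs happens before the program even exists.
bool nand(bool a, bool b) { return !(a && b); }

bool not_(bool a)         { return nand(a, a); }
bool and_(bool a, bool b) { return nand(nand(a, b), nand(a, b)); }
bool or_(bool a, bool b)  { return nand(nand(a, a), nand(b, b)); }
bool xor_(bool a, bool b) { return or_(and_(a, not_(b)), and_(not_(a), b)); }

// Exhaustive truth-table check, entirely at compile time.
static foreach (a; [false, true])
static foreach (b; [false, true])
{
    static assert(and_(a, b) == (a && b));
    static assert(or_(a, b)  == (a || b));
    static assert(xor_(a, b) == (a != b));
}

void main() {}
```

If any identity were wrong, compilation would fail; nothing here survives to runtime.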
So all this "stuff" we have in programming and computers is just abstractions to bridge the gap of how our minds like to represent things(very high level) vs the hardware layer of bits.
I guess we'll have to wait until some new discovery happens that kicks off a new path which will enable starting fresh in many ways. (the life and death are the same thing)
Maybe this will happen if quantum computing ever actually does anything. [Of course it will evolve along the same lines, in the sense that it will start ad-hoc and build a lot of new types of flaws into itself, but it will have fewer of the old flaws]
> > Anyways, the entire point of this post is to ask this question: Is anyone who is pretty familiar with Kotlin and its compile time programming willing to explain what the heck is really going on with it and why it seems so different to D's?
> As someone coming the opposite way, D metaprogramming looks like pure magic. It's mind blowing (and so completely underrated, Zig is getting all the attention for having something similar but still less powerful). But yeah, the cost of those features seem to be bad IDE support, which kind of sucks, for sure.
To me D is great, and it doesn't seem like magic at all. It was what I was looking for. Before D I was doing C#, so I guess it is apropos since D comes after C# in the "musical scale" ;) In other languages I was using, I would always want to do certain things to avoid the repetitiveness and to find elegant and robust solutions.
I have programmed in assembly quite a bit when I was younger, as well as C and C++ and many other languages(Lua, JS, VBS, Pascal, BASIC, PHP, Perl, etc), so I got used to the idea that all languages are effectively the same with just different "vocab". Similar to natural languages, which essentially are all the same even though they might have significant differences, such as how they arrange the grammatical elements; it is really just a "permutation" and "concatenation" issue. The idea under the hood is the same for all of them(after all, they all get translated into binary).
So it was always about the "features" the language had that could reduce the "visual complexity" or increase the expressiveness. E.g., imagine a language without loops, where you had to manually add each iteration as a new block and check. The entire point of a for loop is to abstract over this common "pattern". Just think how slow programming would be if you had to do it "manually"(another reason I don't like driving manuals: not because I can't, but because it's time consuming and offers nothing for me). Also, there were programming languages that did not have for loops. Most handheld calculators do not have for loops(or only for certain operations).
Now, if you understand that then that pretty much is ALL of programming in a nut-shell. It's all about finding better ways to do things(usually common things as to benefit through abstracting out the repetition).
Also, I studied quite a bit of mathematics, so I was used to this idea of abstraction. E.g., think of algebra and using variables. Same idea as programming with variables. If you think of arithmetic, then that is sorta like programming without "meta programming". You're stuck doing everything by hand, and things get very tedious.
When you go to algebra you can start doing things that would be very complicated in arithmetic very quickly. That is the abstraction process.
What is algebra? It is working with types. Arithmetic is working with the elements of the type. Arithmetic is when you are forced to use the "built in types" and can't use your own. You are stuck doing calculations on those types(e.g., integers or rationals).
Algebra starts to let you have some "wiggle room" with types. Usually you are still thinking in terms of fixed types in arithmetic(numbers) but it starts to get you thinking away from such things just because it is more abstract.
But algebra is typically still quite limited to numbers. You have variables(rather than everything being constants), but its power is only in abstracting out numbers.
Well, the point is that you can continue going. E.g., what if types themselves were variables? What if we could do "arithmetic" on types(rather than just numbers).
When you start going down this path you realize that you have a common pattern of basically types of ... of types. It's types all the way down. Everything is a type. Even 3 is its own type.
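A small D sketch of what "arithmetic on types" looks like in practice: a template is a function whose input and output are types rather than numbers, and staticMap applies it across a whole list of types at compile time(PointerTo is my own alias; AliasSeq and staticMap are from Phobos' std.meta):

```d
import std.meta : AliasSeq, staticMap;

// A type-to-type "function": takes a type T, yields the type T*.
alias PointerTo(T) = T*;

alias Ints    = AliasSeq!(int, long, short);
alias IntPtrs = staticMap!(PointerTo, Ints);

// The whole "computation" happened inside the type system; these checks
// run at compile time and produce no runtime code at all.
static assert(is(IntPtrs[0] == int*));
static assert(is(IntPtrs[1] == long*));
static assert(is(IntPtrs[2] == short*));

void main() {}
```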
Category theory embodies this concept by forcing us to think in terms of relationships between types and what is the correct way to think about them(using things like composition, functors(how types relate to each other), natural transformations(relationships between relations), etc).
So when you program enough and do math all this stuff just falls out. It's not something really explainable because it has to be experienced but you also eventually have to learn the formal "mathematics" to realize that people have already developed this stuff to a very high degree.
So at some point "D becomes inevitable". At some point algebra becomes inevitable and then abstract algebra and then category theory. It's just evolution at work.
The more you work in D the more working in the higher type levels(types of types) becomes natural. Just as the more you do algebra the more it becomes natural and arithmetic gets put on the back burner and becomes easy.
Then you want to work in types of types of types but even D becomes hard to use at this level. You start realizing that D is limited and start to look for something better. Then you come across Haskell and realize it can do all of what D could, in terms of the type system, and is even better.
With programming, though, at some point it becomes pointless to keep going up the type system. The "reward" for using higher level abstractions becomes minimal versus just using what you've got. It's also personal. Some people are ok with wasting time unrolling a loop for no reason whatsoever, just because they can.
Category theory is the sort of "ultimate programming language" in that it has the tools to work at any level of abstraction(so types^5324325 is possible and the abstract looks exactly the same as types). So it sort of melds all the complexity of "meta programming" into just programming. That is why it's math of course. Haskell attempts to approximate this but it really is just stuck at the same level as D or slightly higher but with a more "type friendly" syntax.
But the point here is that it's all about abstraction. There is nothing more to it. But abstraction without realization is meaningless, and so for computers there has to be some way to compose all that abstraction down into bits that connect to transistors that do the thing the abstraction is supposed to represent. This means that if there is any point in the process of "concretization" of an abstraction that can't be resolved to something known at compile time or knowingly determined at runtime, one can't resolve it to just bits. There will be spots where bits are needed but can't be known to exist.
Basically if you think of a compiler as "flattening" all the abstractions you have to actually be able to flatten them and that might not be the case if abstractions are not properly formed/well-defined.
Hence programming languages offer various degrees of "meta programming" of the "type system", and the more you learn to think abstractly, the more you'll run into their limitations and desire more from them. Sometimes even very simple metaprogramming tasks are not implemented, and this can be a PITA for someone who is used to using them. It becomes very difficult to go back down the ladder.
> Kotlin has annotation processors, like Java, which are about the only metaprogramming you can do outside of just writing code that explicitly generates source code (which you can do easily enough with https://github.com/palantir/javapoet for Java, or the Kotlin spinoff: https://square.github.io/kotlinpoet/ )
> This is the current Kotlin solution: https://kotlinlang.org/docs/ksp-overview.html
> As far as I know, almost no one uses this though. People are happy to just NOT do anything at compile-time, I guess it's hard to believe that if you come from D, but it's indeed possible.
Yes, this is the strange thing I still struggle with. I hear a lot of talk about "compiler plugins" and "annotations" and "KSP", "KAPT", etc, but none of this stuff is what I would really call meta programming, or it is very basic and extremely complex for doing things that should be extremely simple.
E.g., in D I can do mixin("int x;"); instead of int x;
This might seem moronic to do, but I can also do mixin("int "~name~";"); which is like Kotlin's string interpolation except this literally creates "variable code".
Now I can generate code that depends on other code(whatever name is), and as long as the compiler can resolve what name is at the time of compilation(not runtime), it can evaluate that string to a constant and hence insert it into the code as if I typed it by hand.
It's beautiful and very simple(the syntax might not be pretty but it's simple and so powerful and so it is beautiful in the meta programming sense).
That simple idea can be extremely powerful and, used with other similar metaprogramming ideas, can reduce very complex things. It allows you to do all kinds of things. You can parse strings using regex at compile time. You might convert a text file(read at compile time) into D.
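A minimal sketch of the whole idea: an ordinary function builds the declaration string, CTFE evaluates it, and mixin pastes the result in as if it had been typed by hand(declare is a made-up helper name, not a library function):

```d
import std.stdio;

// Builds "int x;" etc. as a string; when called inside mixin(...) the
// compiler evaluates it via CTFE and compiles the result as real code.
string declare(string type, string name)
{
    return type ~ " " ~ name ~ ";";
}

void main()
{
    // Equivalent to writing "int x;" and "int y;" by hand.
    static foreach (name; ["x", "y"])
        mixin(declare("int", name));

    x = 1;
    y = 2;
    writeln(x + y); // prints 3
}
```

The same pattern scales up: the string could just as well come from a compile-time regex match or a file read with import("...") at build time.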
Basically you turn the compiler itself into an application or utility and it is all integrated into the language.
It solves about 95% of meta programming that one would ever want to do. The only thing I have found that D does not do well is higher level meta programming concepts like functors and natural transformations(and hence adjunctions, kan extensions, etc).
E.g., if I want to transform an entire class hierarchy into another one it could be very complicated, but if D understood these higher level ideas it could be just one line of code. This is where Haskell does a much better job. Of course one can parameterize the class type, and this gives one some power, but it is "linear" in the sense that the transformation doesn't really do anything advanced. So what happens is that every once in a while, when you know about these things, you'll say "I want to do this" and you just can't do it in D, or at least not in any way that is desirable, but you know there is a better way to do it. These cases typically happen when you want to mess with "large abstractions" or "abstractions of abstractions". That is why D really is just a language with a single level of meta programming while Haskell has several levels(it's not easy to define this stuff though, because sometimes it might have more or less depending on the specifics).
> > The way I see D's metal language was that essentially anything that was "defined at compile time" could be treated as a program at compile time and all the compile time information would be filled in as such since it could. If code could be determined to be completely well-defined at compilation then it could simply be "executed"(evaluated) at compile time(in a first pass). Kotlin doesn't seem to have anything like this. It doesn't even have string mixins or eval without including a kotlin compiler inside the language.
> Correct. Kotlin and Java people would be horrified of those features, I think :D
> Again, because of IDE and tooling support.
Well, Visual D handles them just fine. I guess you mean that the IDE won't be able to provide various helper info? I haven't used Android Studio enough to know if it offers advanced things, but it really doesn't do any more than what Visual D does. It does do it better, but I've not seen anything beyond basically syntax checking, intellisense, and various helper things like "import" or lint stuff or whatever. Only intellisense really matters a lot to me. I like that it can show some help in a window, and its refactoring stuff works well(unlike Visual D), but it's not stuff I can't live without or that makes programming faster. I might not be using all its features though(I'm relatively new to it). To me it just feels much better than Visual D because it works well and everything seems to be integrated to work for the programmer and the experience. But this is the difference between probably thousands of people working on it vs 1.
> > I'm wondering if it would be worth the effort to try to get D to work with android studio since it can compile to all those. E.g., write the jetpack compose code in kotlin and hook up the business in with D somehow. If it wasn't too much trouble and worked well then this might give one the best of both worlds. JNI is a bit of a PIA when trying to call java from C because one has to use reflection for every aspect and it is very long winded. This probably could be simplified greatly with some D meta programming.
> You're absolutely correct that with D, you could easily generate stuff like JNI bindings.
> But notice that JNI is going to be replaced with a new solution: https://docs.oracle.com/en/java/javase/21/core/foreign-function-and-memory-api.html
> This is still in preview but you may want to look at it if you find JNI too horrible.
Thanks. I don't find JNI that bad. It's a little bit of a pain to deal with "marshaling", but it has surprisingly worked well for what I have done, even stumbling around. Basically I only ran into a few problems later on. I did use jni_bind though, which helped a little with setting up the structures.
I ran into a few problems when trying to call Java functions, but once I worked out precisely what it really wanted, it all just worked(of course I have no idea if it's leaking memory or whatnot, but it's not crashing so ;)).
I basically was just using ffmpeg to decode some audio and was able to get it all to work quite easily. Normally, in D, you can really struggle with some of these things. It may be because there is more info online(not enough, but still more), and it's enough to avoid the major pitfalls.
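As a hedged sketch of the earlier point, that D metaprogramming could cut down the long-winded JNI plumbing: one could keep the method list in one place and mixin-generate a wrapper per entry. Every name here is hypothetical and the bodies are stubs, not real JNI calls:

```d
// Sketch only: jniWrapper and call_* are invented names. In a real
// binding each generated body would do the FindClass/GetMethodID/Call...
// dance through the JNIEnv instead of being a stub.
string jniWrapper(string javaMethod)
{
    return "void call_" ~ javaMethod ~ "() { /* JNI lookup + call */ }";
}

// One wrapper declaration per entry, generated at compile time.
static foreach (m; ["play", "pause", "stop"])
    mixin(jniWrapper(m));

void main()
{
    call_play(); // compiles because the declaration above was mixed in
}
```

The point is that the repetitive reflection boilerplate lives in one CTFE function instead of being hand-written per method.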
> Regarding whether D could be a good option for Android, I very much doubt it. You can't remove the nice tools people are used to in the Java/Kotlin world, if you try you will get a very large backlash... and I'm afraid D will never have as good tools even if it some day gets huge corporate backing because the language was not designed to allow that to be possible (maybe it is possible, but I doubt it's actually feasible - correct if I am wrong), from the list of features it is very clear that was absolutely never a high priority concern.
Anything is possible...well, almost anything. Every language is the same.
Because the D metaprogramming is done at compile time and is effectively just "generating code" it would have no issues integrating in except that any dependencies on that generated code would have to obviously come afterwards or two passes would have to be used.
It might be a bit of work but it could be done. The cost/reward though is significant, and unfortunately many people are very myopic, especially when their $$$ depends on being so.
My logic, and it should be the logic of everyone in theory, is that if it only adds to the capabilities then it can't hurt anyone(except for, of course, bugs and such that may be introduced).
Through the years I have asked for feature requests in various languages and apps and the single most common response was "I don't need it" as if that is a relevant argument as to why it should or shouldn't be implemented.
Most people are not going to find the need for anything until they need it and with an attitude like that nothing would ever be added. After all, when meta programming was first thought of there were plenty of "people"(in quotes because I don't technically know if they exist) who said "why would you ever need anything like that! Nope, bad idea. It's useless, I don't see any need for it!".
It is the way of the world. Progress is made though. Slowly. I guess that isn't a bad thing. It's a little frustrating when people do claim that something has no use when they can't prove it in any way, shape, or form but just their opinion based on their ignorance of what is actually being discussed. (after all, what is the use of a new born baby?)
The fact of the matter is, though, that abstraction is how we think as humans, and it's been a long struggle to get programming languages(which are just things that translate math/logic into "computer-speak"(bits)) to where they are now. I think more people are open to the expansive nature than ever before, but that resistance always shows up in people in different areas. I guess it is a "fear of change" type of thing.