On Monday, 11 October 2021 at 22:01:34 UTC, max haughton wrote:
The decision to build languages as monolithic lumps of specification, then a compiler, also phased in its design, while simple, I think will be a detriment in the post-Moore's-law age, as it makes it very irritating to use and understand the full muscle of the optimizer in the right places - and it fundamentally limits the potential of the/a language in a now post-heterogeneous world: you should be able to compile for a GPU as part of the compiler, a la a trait (this is an acceptable use of the keyword; reflection is not).
I think a big problem, from an exploratory perspective, is that that is essentially impossible in a one-file-at-a-time compilation world, and that exploring the space of possibilities requires far more people than the D community has.
C, C++, OpenCL, ISPC (one file at a time, lack of anything other than immediate local context)
CUDA, SYCL (one file at a time, but let's do it twice! once for the host, once for the device)
OpenMP (where premature outlining for device offloading has caused massive missed optimisation opportunities)
I think D's worst feature is really a human tendency to avoid language solutions for things. sumtype is a good library, but it should be core language, for example. std.typecons.Tuple is an even less marginal case.
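To make the point concrete: since D 2.097 sumtype ships in the standard library as std.sumtype, but it is still pure library machinery (templates and the match! overload set) rather than built-in pattern-matching syntax. A minimal sketch of what that library-level usage looks like:

```d
import std.sumtype;

// A library-level sum type: the tag and storage are managed by
// template code, not by a dedicated language construct.
alias Value = SumType!(long, string);

string describe(Value v)
{
    // match! dispatches on the stored type by matching each
    // handler's parameter type; it is a template, not syntax.
    return v.match!(
        (long n) => "number",
        (string s) => "text"
    );
}

void main()
{
    assert(describe(Value(42L)) == "number");
    assert(describe(Value("hi")) == "text");
}
```

A core-language sum type could give the same semantics with dedicated match syntax, better error messages, and no template-instantiation cost.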
__traits should've died years ago. Its continued existence shows some level of paralysis.
In a static slice of time, yes. As a feature for easily adding language functionality without taking up more keywords, with minimal "feature space", it's indispensable. It does, however, show that a program needs an API to the compiler during compilation. Is there a better way to do this? core.reflect and core.codegen seem like good steps.
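For readers who haven't run into it: __traits is a single keyword that multiplexes many unrelated compile-time queries behind string-like identifiers, which is exactly the "API to the compiler" being discussed. A small sketch of typical usage:

```d
struct Point { int x; int y; }

void main()
{
    // One keyword serves many distinct compile-time queries,
    // selected by an identifier rather than separate features.
    static assert(__traits(hasMember, Point, "x"));
    static assert(__traits(identifier, Point) == "Point");
    static assert([__traits(allMembers, Point)] == ["x", "y"]);
}
```

The convenience is real, but so is the criticism: each query has its own ad-hoc argument shape and return convention, which is the kind of surface a proper reflection API like core.reflect could regularize.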