Thread overview
A better way of managing backwards compatibility?
Sep 03, 2015
Prudence
Sep 03, 2015
qznc
Sep 03, 2015
Prudence
September 03, 2015
Dealing with changes to standardized interfaces such as API function names, namespaces, file names/module names, etc. is usually considered impossible, because such a change severs the code written before it from the new compiler. Most people think this is simply how it has to be and that there is no better alternative.

Well, *NOW* there is:

One could completely change D (one could even replace it with a Fortran version, if desired) yet still keep backwards compatibility!!!! And it would be relatively easy.

"How can this be possible", you exclaim with relative disdain!!

Well, there are two distinct but compatible ways:


1. Essentially, keep track somewhere of the version of the compiler that is currently being used to compile the project. (One might be able to infer this from dates, as long as one has a date-to-version mapping, but this is not accurate.)

As long as the binaries of each compiler version (and ideally the source) are archived somewhere, the user can be told to download the correct version, and/or the D compiler can do this automatically for them.

Note, though, that the only requirement on D is that it can do the checking and possibly the downloading of another compiler. In fact, such delegating facilities could even be used to compile Fortran by similar means, or a D3 compiler could download D1 for some ancient source code.
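To make the idea concrete, here is a minimal sketch in D of what such a dispatching wrapper might look like. Everything in it is hypothetical: the .compiler-version file, the ~/.compilers install directory, and the download step are placeholders, not an existing tool.

import std.file : exists, readText;
import std.path : expandTilde;
import std.process : spawnProcess, wait;
import std.stdio : writeln;
import std.string : strip;

int main(string[] args)
{
    // Hypothetical: the project records the compiler version it was built with.
    if (!".compiler-version".exists)
    {
        writeln("no recorded compiler version; building with the current compiler");
        return 0;
    }
    string wanted = readText(".compiler-version").strip;   // e.g. "2.068.0"

    // Hypothetical per-version install location.
    string dmdPath = expandTilde("~/.compilers/dmd-" ~ wanted ~ "/bin/dmd");
    if (!dmdPath.exists)
    {
        // This is where the wrapper would fetch and unpack that release.
        writeln("dmd ", wanted, " is not installed; it would be downloaded here");
        return 1;
    }

    // Forward the original arguments to the exact compiler version.
    return wait(spawnProcess(dmdPath ~ args[1 .. $]));
}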


2. Write a translation process that essentially "updates" the source code to work.

Suppose D **wanted** to change a keyword for some reason or another. The compiler first runs the code through the translation process, which is mostly just a token replacer, but it could be more advanced and handle signature differences (such as two parameters being swapped).

The good news is that such a feature is easy to implement, as it is just a mapping of one token stream to another. The hard part is getting everyone disciplined enough to record such changes in the translation process whenever they modify the compiler, a library, or whatever else.

For example, if we wanted to replace "for" with "pour", the parser just gains an additional step. Instead of something like parse(token[i]); we would have parse(translate(token[i]));, where translate is just a string-to-string map. We could also make translate more complex by letting it deal with context: parse(translate(token, i)).
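As a rough illustration (a standalone sketch, not the actual D parser), the translate step can literally be an associative-array lookup applied to each token before it is parsed:

import std.stdio : writeln;

// Old spelling -> current spelling; filled in per version change.
string[string] renames;

// The translate step: a plain string-to-string map.
string translate(string token)
{
    if (auto replacement = token in renames)
        return *replacement;
    return token;   // unknown tokens pass through unchanged
}

// Stand-in for the real parser step.
void parse(string token)
{
    writeln("parsing: ", token);
}

void main()
{
    renames = ["for" : "pour"];   // the keyword rename used as an example above

    string[] tokens = ["for", "(", "i", "=", "0", ";", "i", "<", "10", ";", "i", "++", ")"];
    foreach (t; tokens)
        parse(translate(t));      // parse(translate(token[i]))
}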

Again, one would have to know the version in some way.

We can think of 1 and 2 as different ends of a granularity spectrum. 1 will find the correct compiler version and use it. It has to work; if it doesn't, the build wouldn't have worked anyway (a user/setup issue), since this just automates what a user could do by hand. I believe there are already tools that roughly emulate this (they let you switch between versions easily, but they are dumb: no memory and no automation). 2 generally deals with simple changes.


The good news is that all of this follows from just knowing which version to compile the source with. This gives the compiler designer the freedom not to worry about naming things perfectly the first time. Documentation is not a problem either, as the same translation and versioning can be automated in the same way using the same data.

e.g., go to the docs; if you are using an old version, select that version and the correct documentation shows up.


What these two processes together do is essentially give a discrete "history" of the compiler. It would be like a continuous incremental backup of every change to the compiler, but since most changes do not affect source-level backwards compatibility, one doesn't need every single change.

It also allows one to migrate to new versions seamlessly.

Imagine going from D1 to D2, except it's just D anyway, because the versions "don't matter" anymore.

1. The compiler tries to translate the D1 source into D2 source through a series of micro-translations (one translate step per version). If it fails at some point, it reports the "errors in translation" to the user, which can be due to a syntax change that is known to break versioning. (e.g., changing for(i=0;i<10;i++) to for i=0,9,1 breaks code because of semantics... although a more intelligent translator could easily handle this specific case)

2. If step 1 fails (and the further apart the two versions are, the more micro-translations are needed and the more likely step 1 is to fail), we revert to the "exact" compiler needed by finding and using the compiler version the source code was designed for. A sketch of this two-step flow follows below.
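Here is a minimal sketch of that two-step flow; the Hop type and both hop bodies are made up for illustration:

import std.stdio : writeln;

// A hop rewrites source from one version's syntax to the next, or returns
// null when the change is a known break it cannot handle automatically.
alias Hop = string function(string source);

int main()
{
    // One entry per version hop, oldest first; both bodies are dummies here.
    Hop[] hops = [
        function string(string s) { return s; },      // e.g. nothing changed in this release
        function string(string s) { return null; },   // e.g. a known breaking change
    ];

    string source = "pour (i = 0; i < 10; i++) {}";

    foreach (i, hop; hops)
    {
        string next = hop(source);
        if (next is null)
        {
            writeln("translation failed at hop ", i,
                    "; falling back to the exact compiler for this version");
            return 1;   // step 2: locate (or download) and invoke the old compiler
        }
        source = next;
    }

    writeln("fully translated source: ", source);
    return 0;
}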


Of course, we would want this to be built into the compiler from day one. But step 2 saves us with D!! Since almost every released version has been archived, we can pretty much use step 2 the whole time.

What does this do for everyone when implemented properly? Simply keep the version with the source code somehow (embed it in a comment at the bottom of a file, or in a configuration file somewhere), keep the translation rules up to date whenever the compiler is modified, and include this feature in the compiler, and then ***NO ONE*** will have to worry about versions again. (At worst, they can use the tools manually to translate the software to a working version and then hand-edit it past a break, essentially replacing step 1 with actual fingers on the keyboard... and if they're smart, they could add those changes to the translation database for that version, which the compiler could use (pull from the cloud).)
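For the "embed it in a comment" option, here is a sketch of what reading such a marker could look like; the "// compiled-with:" marker format is invented for illustration:

import std.algorithm.searching : startsWith;
import std.file : write;
import std.stdio : File, writeln;
import std.string : strip;

// Scan a source file for a hypothetical version marker comment.
string recordedVersion(string path)
{
    foreach (line; File(path).byLineCopy)
    {
        auto s = line.strip;
        if (s.startsWith("// compiled-with:"))
            return s["// compiled-with:".length .. $].strip;
    }
    return null;   // no marker: fall back to a config file or ask the user
}

void main()
{
    // Example file whose last line records the compiler version it targets.
    write("example.d", "void main() {}\n// compiled-with: dmd 2.068.2\n");
    writeln(recordedVersion("example.d"));   // prints: dmd 2.068.2
}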








September 03, 2015
On Thursday, 3 September 2015 at 17:48:23 UTC, Prudence wrote:
> 2. Write a translation process that essentially "updates" the source code to work.

Lucky you: https://github.com/Hackerpilot/dfix
September 03, 2015
On Thursday, 3 September 2015 at 21:03:05 UTC, qznc wrote:
> On Thursday, 3 September 2015 at 17:48:23 UTC, Prudence wrote:
>> 2. Write a translation process that essentially "updates" the source code to work.
>
> Lucky you: https://github.com/Hackerpilot/dfix

There you go!! It seems someone just as intelligent as I am has realized the power of such a thing!!

The only real problem is integrating it with the compiler to make it seamless. I know there are detractors who think it is too much work, but these people like to waste time; they think they have it to waste!

But even spending one hour trying to compile source code with versioning issues is a terrible waste of time that could be better used on more productive things. DFix is obviously a step in the right direction... but a few more steps are required to save the many thousands of man-hours that are wasted on versioning issues.