March 20, 2011 [phobos] CTFE Regressions

It has been brought to my attention that Phobos has a tendency to be inconsistent between releases with regards to what is CTFE-able and what isn't. Making something which is not CTFE-able so that it's CTFE-able generally isn't a problem, because it isn't going to break code. However, making a function which is CTFE-able so that it's not CTFE-able _can_ break code. And I suspect that this is an issue that we don't think about much when making changes to Phobos (I certainly haven't been).

Given the current state of CTFE and Phobos' development, I'm not sure that we want to get particularly strict about CTFE at the moment (CTFE _can_ be a bit of a black box with regards to what works and what doesn't, and making necessary changes _can_ change the CTFE-ability fairly easily sometimes). Also, it's the sort of thing that could change as the compiler changes, so it could be completely out of our hands. However, I think that the question needs to be raised about what we want to do about this. Making CTFE-able functions not CTFE-able breaks code.

The only solution that I can think of is that any function that we think should be CTFE-able and should _stay_ CTFE-able should have a unit test guaranteeing that it's CTFE-able - initialize an enum with its return value or do whatever else is necessary to force compile-time evaluation. I don't think that we should do that with all functions which are currently CTFE-able (enough stuff is in flux that I doubt that would be a good idea), but if we pick a subset which really should be CTFE-able and test for it, then over time, as it becomes clear what else should be guaranteed to be CTFE-able, we can add tests for those functions as well. Once Phobos is appropriately mature and stable, it could be that every function which is supposed to be CTFE-able will have a test for it (and if we're really picky, stuff which isn't CTFE-able could have a test guaranteeing that it _isn't_ CTFE-able, just so that we don't accidentally make it CTFE-able and then have people complaining when we then accidentally make it not CTFE-able once again).

This is the only thing that I can think of which would really help cut back on code breaking because a function stops being CTFE-able. Thoughts? Good idea? Bad idea? Good idea, but not right now?

I expect that we'd need to start with a fairly small subset of functions in Phobos, but it would at least start the process of ensuring that we don't break people's code because something we do makes something no longer CTFE-able.

- Jonathan M Davis
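
A minimal sketch of the kind of CTFE-guarantee unittest proposed above, assuming present-day D and using std.conv.to purely as an illustrative example (which Phobos functions should get such a test is exactly the open question):

```d
import std.conv : to;

unittest
{
    // An enum initializer must be computed at compile time, so this line is
    // itself the guarantee: it becomes a compile error if to!int ever stops
    // being CTFE-able.
    enum n = to!int("42");
    static assert(n == 42);

    // The same call should, of course, still work at run time.
    assert(to!int("42") == 42);
}
```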

March 21, 2011 [phobos] CTFE Regressions

Posted in reply to Jonathan M Davis

This is a funny issue. They are not regressions against documented behaviour. At this stage, using a Phobos function at compile time is basically relying on undocumented behaviour -- although it may happen to work in CTFE, that's just luck. The problem is that this isn't stated strongly enough anywhere.

In particular, I think we should document that *** any Phobos function which is not marked as pure should not be relied on as working in CTFE ***.

Bear in mind that because of bug 1330, a large fraction of the Phobos array functions which will compile in CTFE don't actually work in reality.

We may want to choose a small set of functions which we guarantee will work in CTFE, and add tests for them, as you suggest. But the most important issue is that we have to make clear to people that using Phobos functions in CTFE is undefined behaviour.
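
As a hedged illustration of how thin the current guarantee is, one can probe whether a particular call merely happens to be CTFE-able with today's compiler and Phobos. The helper name worksInCTFE and the choice of std.conv.to are assumptions for the example, not part of any documented API:

```d
import std.conv : to;

// Reports whether a given zero-argument callable can currently be evaluated
// in a CTFE context. It proves nothing about future releases -- it only
// probes today's (undocumented) behaviour.
template worksInCTFE(alias call)
{
    enum worksInCTFE = __traits(compiles, { enum result = call(); });
}

// Example: does this particular call happen to work at compile time right now?
static assert(worksInCTFE!(() => to!int("42")));
```

A static assert like the last line is essentially the regression tripwire discussed in the parent post, but note that it encodes current behaviour rather than a documented contract.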
On 20 March 2011 23:53, Jonathan M Davis <jmdavisProg at gmx.com> wrote:
> It has been brought to my attention that Phobos has a tendency to be inconsistent between releases with regards to what is CTFE-able and what isn't. Making something which is not CTFE-able so that it's CTFE-able generally isn't a problem, because it isn't going to break code. However, making a function which is CTFE-able so that it's not CTFE-able _can_ break code. And I suspect that this is an issue that we don't think about much when making changes to Phobos (I certainly haven't been).
>
> Given the current state of CTFE and Phobos' development, I'm not sure that we want to get particularly strict about CTFE at the moment (CTFE _can_ be a bit of a black box with regards to what works and doesn't and making necessary changes _can_ change the CTFE-ability fairly easily sometimes). Also, it's the sort of thing that could change as the compiler changes, so it could be completely out of our hands. However, I think that the question needs to be raised about what we want to do about this. Making CTFE-able functions not CTFE-able breaks code.
>
> The only solution that I can think of is that any function that we think should be CTFE-able and should _stay_ CTFE-able should have a unit test guaranteeing that it's CTFE-able - initialize an enum with its return value or whatever would be necessary to force compile time evaluation. I don't think that we should do that with all functions which are currently CTFE-able (enough stuff is in flux that I doubt that that would be a good idea), but if we pick a subset which really should be CTFE-able and test for it, and then over time as it becomes clear what else should be guaranteed to be CTFE-able, we add tests for those functions as well. Once Phobos is appropriately mature and stable, it could be that every function which is supposed to be CTFE-able will have a test for it (and if we're really picky, stuff which isn't CTFE-able could have a test guaranteeing that it _isn't_ CTFE-able just so that we don't accidentally make it CTFE-able and then having people complaining when we then accidentally make it not CTFE-able once again).
>
> This is the only thing that I can think of which would really help cut back on code breaking because a function stops being CTFE-able. Thoughts? Good idea? Bad idea? Good idea, but not right now?
>
> I expect that we'd need to start with a fairly small subset of functions in Phobos, but it would at least start the process of ensuring that we don't break people's code because something we do makes it so that something is no longer CTFE-able.
>
> - Jonathan M Davis

March 20, 2011 [phobos] CTFE Regressions

Posted in reply to Don Clugston

> This is a funny issue. They are not regressions against documented
> behaviour. At this stage, using a Phobos function at compile time is
> basically relying on undocumented behaviour --
> although it may happen to work in CTFE, that's just luck. The problem
> is that this isn't stated strongly enough anywhere.
> In particular, I think we should document that
> *** any Phobos function which is not marked as pure should not be
> relied on as working in CTFE ***.
> Bear in mind that because of bug 1330, a large fraction of Phobos
> array functions which will compile in CTFE don't actually work in
> reality.
> We may want to choose a small set of functions which we guarantee will
> work in CTFE, and add tests for them in general as you suggest.
> But the most important issue is that we have to make clear to people
> that using Phobos functions in CTFE is undefined behaviour.
And of course, bringing pure into it brings in all of the problems with pure (we _really_ need conditional purity of some kind if we want the templated functions in Phobos to work with pure). :)

I have no problem with stating that whether a Phobos function works in CTFE is undefined behavior, but it's pretty limiting if you can't CTFE much of Phobos. Most functions end up relying on Phobos one way or another, and if you can't rely on Phobos working with CTFE consistently, then CTFE becomes a _lot_ less useful.

This _does_ strike me as the sort of issue, however, that isn't really ready to be completely resolved (properly sorting out how Phobos deals with stuff like const, pure, and nothrow could have a definite effect on how things work and on whether things are CTFE-able). Still, I think that it's something that we need to think about. And there may indeed be functions which we can unequivocally say can be used with CTFE and should always be usable with CTFE. _Those_ are the functions that we should probably consider testing for CTFE in the short term.

It _is_ an interesting problem overall, though, in that by relying on CTFE, you're effectively relying on something which is not part of the function signature, and CTFE was designed from the get-go with the idea that it just works without you having to do anything special like use a particular function attribute. So, relying on it is arguably bad, but if you can't rely on it, you can't really use it... So, yeah. This is a funny issue. I do think that it needs to be considered, though.
- Jonathan M Davis
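
For what the "conditional purity" mentioned above might look like in practice, here is a hedged sketch relying on the attribute inference that later versions of D apply to templated functions; the function names are made up for illustration, and whether a given Phobos instantiation actually ends up pure still depends on what it calls:

```d
// Purity has to be stated explicitly for a normal function...
int twice(int x) pure { return x + x; }

// ...but is inferred per instantiation for a templated one, which is one form
// of "conditional purity": doubleIt!int is pure because adding two ints is
// pure, while an instantiation whose operations are impure simply would not be.
T doubleIt(T)(T x) { return x + x; }

pure unittest
{
    assert(twice(21) == 42);
    assert(doubleIt(21) == 42); // ok: doubleIt!int is inferred pure
}
```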