October 04, 2012
On Tuesday, 2 October 2012 at 01:00:25 UTC, Walter Bright wrote:
>
> Since all you need to do to guarantee compile time evaluation is use it in a context that requires CTFE, which are exactly the cases where you'd care that it was CTFE'd, I just don't see much utility here.

I suppose the most common use case would be efficient struct literals: types that are essentially value types but have non-trivial constructors.

struct Law
{
    ulong _encodedId;

    // @aggressive_ctfe is the attribute being proposed here: it would
    // force compile-time evaluation whenever the arguments are constants.
    this(string state, int year) @aggressive_ctfe
    {
        // non-trivial constructor sets _encodedId
        // ...
    }
}

Policy policy = getPolicy();

if (policy.isLegalAccordingTo(Law("Kentucky", 1898)))
{
    // ...
}
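
For comparison, here is a minimal sketch of how guaranteed CTFE can already be forced today, per Walter's point above: an enum (or a module-level immutable) initializer is a context that requires CTFE, so the constructor either runs at compile time or the code fails to compile. The constructor body below is a stand-in, since the real encoding is elided above:

struct Law
{
    ulong _encodedId;

    this(string state, int year)
    {
        // stand-in encoding; the real scheme is elided above
        _encodedId = (cast(ulong) state.length << 32) | cast(uint) year;
    }
}

// An enum initializer must be evaluated at compile time.
enum kentucky1898 = Law("Kentucky", 1898);

// A module-level immutable initializer forces CTFE as well.
immutable Law kentuckyLaw = Law("Kentucky", 1898);

void main()
{
    // kentucky1898 is a manifest constant; no run-time constructor call.
    auto law = kentucky1898;
}

The downside, and the motivation for an attribute, is that the caller has to remember to write the enum at every use site.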

I think the function attribute would be the most convenient solution.


> Note that it is also impossible in the general case for the compiler to guarantee that a specific function is CTFE'able for all arguments that are also CTFE'able.

I'll have to take your word for it, since I don't know enough (anything, really) about the subject.
October 07, 2012
On Tue, 02 Oct 2012 09:38:56 +0200, Don Clugston <dac@nospam.com> wrote:

> Any code that behaves differently when compiled with -O will do this as well. Constant folding of floating point numbers does the same thing, if the numbers are represented in the compiler in a different precision to how the machine calculates them. I believe that GCC, for example, uses very much higher precision (hundreds of bits) at compile time.

I'm not an expert, but I would have thought compilers strive to be IEEE-compliant, whatever that means in detail. I've seen a compression algorithm that relies on exact floating-point semantics and accuracy. It would simply fail if compilers were creative or lax at certain optimization levels (excluding the "I know what I am doing" -ffast-math).
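
As a minimal sketch of the effect Don describes (assuming DMD-like behaviour, where compile-time folding may use higher precision, e.g. 80-bit real, than run-time float arithmetic; the function and constants are just illustrative):

import std.stdio;

float accumulate(float x)
{
    // Repeated float operations, sensitive to intermediate precision.
    float sum = 0;
    foreach (i; 0 .. 10)
        sum += x / 3.0f;
    return sum;
}

void main()
{
    enum  ct = accumulate(1.0f); // forced CTFE: folded inside the compiler
    float rt = accumulate(1.0f); // evaluated at run time on the target

    // %a prints the exact binary representation; whether the two match
    // depends on the compiler's internal precision, as discussed above.
    writefln("CTFE:    %a", ct);
    writefln("runtime: %a", rt);
    writefln("equal:   %s", ct == rt);
}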

-- 
Marco
