July 11, 2012
On 7/11/12 3:58 AM, Don Clugston wrote:
> Something I wonder about, though, is how many different use cases are we
> dealing with?
>
> Suppose we had a caching solution (you could think of it as @cached, but
> it could be done in a library). The user would need to provide a const,
> pure function which returns the same value that is stored in the cache.
> This is enforceable. The only way to write to the cache, is by calling
> the function.
>
> How far would that take us? I don't think there are many use cases for
> logically pure, apart from caching, but I have very little idea about
> logical const.

I think a caching solution would cover most valid needs and indeed would be checkable.

We can even try out its usability with a library-only solution. The idea is to plant a mixin inside the object that defines a static hashtable mapping addresses of objects to cached values of the desired types. The destructor of the object removes the address of the current object from the hash (if present).

Given that the hashtable is global, it doesn't obey the regular rules for immutability, so essentially each object has access to a private stash of unbounded size. The cost of getting to the stash is proportional to the number of objects within the thread that make use of that stash.

Sample usage:

class Circle {
    private double radius;
    private double circumferenceImpl() const {
        return radius * 2 * PI; // PI from std.math
    }
    mixin Cached!(double, "circumference", circumferenceImpl);
    ...
}

auto c = new const(Circle);
double len1 = c.circumference;
double len2 = c.circumference;

Upon the first use of property c.circumference, Cached computes the value by calling this.circumferenceImpl() and stashes it in the hash. The second call just does a hash lookup.

In this example searching the hash may actually take longer than computing the thing, but I'm just proving the concept.
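
Here's a rough, untested sketch of how such a library-only Cached mixin might look (Cached itself is the hypothetical artifact proposed above, and cachedStash is just an invented name; nothing like this exists in Phobos):

mixin template Cached(T, string name, alias impl) {
    // One stash per instantiation; a static AA in D is thread-local,
    // so each thread gets its own copy and no locking is involved.
    private static T[const(void)*] cachedStash;

    // Generate a const property that computes the value on first use
    // and serves the stashed copy afterwards. Mutating the static AA
    // from inside a const method is fine: it is not part of the object.
    mixin("@property T " ~ name ~ "() const {
        auto key = cast(const(void)*) this;
        if (auto p = key in cachedStash)
            return *p;
        auto value = impl();
        cachedStash[key] = value;
        return value;
    }");

    // A complete version would also remove the entry in the object's
    // destructor, as described above, so stale addresses don't pile up.
}

With something along these lines, the Circle example above should work as written (modulo the untested bits).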

If this turns out to be a useful artifact, Walter had an idea a while ago that we could have the compiler help by using the per-object monitor pointer instead of the static hashtable. Right now the pointer points to a monitor object, but it could point to a little struct containing e.g. a Monitor and a void*, which opens the way to O(1) access to unbounded cached data. The compiler would then "understand" not to treat accesses to that data as regular field accesses, and would not assume it is immutable.
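
Just to make that layout concrete, something like the following (purely hypothetical; MonitorSlot is an invented name and this is not how druntime defines object headers today):

struct MonitorSlot {
    void* monitor;   // what the per-object pointer effectively refers to now
    void* userData;  // extra word: the O(1) hook for per-object cached data
}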

Any takers for Cached? It would be good to assess its level of usefulness first.


Andrei
July 11, 2012
I don't think that is a big issue.

Either the value isn't accessed often, and then it makes no sense to cache it, or it is accessed often and it makes sense to precalculate the cached value when building the object.

Doing the calculation at that point avoids the synchronization cost involved otherwise.
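
For example, a minimal sketch of that alternative, reusing the Circle from earlier in the thread (getCircumference is just an illustrative name):

import std.math : PI;

class Circle {
    private double radius;
    private double circumference; // computed once, eagerly

    this(double r) {
        radius = r;
        circumference = radius * 2 * PI; // pay the cost at construction time
    }

    // Works fine with const/immutable objects: nothing mutates later,
    // so there is nothing to cache and nothing to synchronize.
    double getCircumference() const { return circumference; }
}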

People here talk about the atomicity of a pointer write. This is true, but the pointed-to object isn't guaranteed to be fully visible to other threads yet, so synchronization is required anyway.
July 11, 2012
On 7/11/12 8:55 AM, Andrei Alexandrescu wrote:
> The cost of getting to the stash is
> proportional to the number of objects within the thread that make use of
> that stash.

Oops, that would be with linear search :o).

Andrei
July 11, 2012
On Wednesday, 11 July 2012 at 12:55:39 UTC, Andrei Alexandrescu wrote:
> Any takers for Cached? It would be good to assess its level of usefulness first.

As for me, lazy computation (with caching) would likely be the last feature I really miss in D.
July 11, 2012
On 11/07/2012 03:30, Jakob Ovrum wrote:
> On Wednesday, 11 July 2012 at 01:19:16 UTC, Andrei Alexandrescu wrote:
>> On 7/10/12 6:05 PM, Walter Bright wrote:
>>> A straightforward workaround is to use PIMPL to encapsulate the logical
>>> const stuff, and then cast the reference to const to use inside the
>>> opEquals.
>>
>> s/straightforward/awful/
>>
>> Andrei
>
> Honestly, I think the ideal would be to give people the alternative of
> having a mutable opEquals etc. These methods only need to be logically
> constant (of course, they don't *need* to be even that, to the joy of
> operator overloading abusers worldwide), which means no help from the
> compiler, as it should be - just plain mutable, with the programmer
> providing the guarantee.
>
> There was work in this direction but I understand it was rife with
> issues of its own, however I don't understand how any other path could
> even be considered when it's just moving from one end of the scale to
> the other.
>
> Yes, Object *needs* to work with const, this is imperative. But does it
> need to compromise on implementations relying on mutability? I was
> hoping the answer was "no". Obviously such an override would not work
> with const references, but some classes really don't lend themselves to
> immutability at all, alleviating the need for const. I can also imagine
> that if both were allowed, someone would leverage the ability to use
> caching in the mutable overload, then falling back to eager computation
> in the const overload.

The cached value can in fact reside outside the object.

private string[const MyObject] cache;

class MyObject {
    override string toString() {
        // Compute on first use, then serve the stored value afterwards.
        if (auto cached = this in cache)
            return *cached;
        auto s = complexCalculationOfToStringWeNeedToCache();
        cache[this] = s;
        return s;
    }

    ~this() {
        cache.remove(this);
    }
}

This really isn't a problem. It is even thread safe, since the module-level AA is thread-local. cache can be made a static member of MyObject too. Actually, plenty of solutions exist.
July 11, 2012
On 10/07/2012 22:11, Jonathan M Davis wrote:
> On Tuesday, July 10, 2012 12:00:59 H. S. Teoh wrote:
>> I think hidden somewhere in this is an unconscious conflation of
>> physical const with logical const.
>
> I completely disagree at least as far as classes go. opEquals, opCmp,
> toString, and toHash must be _physically_ const, because they must work with
> physically const objects. There is _no_ way around that, and whether the
> actual internals of those functions could conceivably mutate something if they
> were logically const is irrelevant.

That is pretty clear and I couldn't agree more.
July 11, 2012
On 07/11/2012 02:58 PM, deadalnix wrote:
> I don't think that is a big issue.
>
> Either the value isn't accessed often, and then it makes no sense to
> cache it, or it is accessed often and it makes sense to precalculate the
> cached value when building the object.
>

And sometimes it is not known in advance if it will be accessed at all.
And this is somewhere in a class that seems unrelated to the object
that wants to compute a string representation/hash value/comparison.
It is not the simple cases that are an issue.
July 11, 2012
On 11/07/2012 02:20, Timon Gehr wrote:
> On 07/11/2012 01:57 AM, Walter Bright wrote:
>> On 7/10/2012 4:16 PM, Timon Gehr wrote:
>>> Const is stronger than what is required to bridge the gap between
>>> mutable and immutable. It guarantees that a reference cannot be used to
>>> mutate the receiver regardless of whether or not the receiver is
>>> immutable underneath. That is unnecessary as far as immutable is
>>> concerned. It only needs to guarantee that the receiver does not change
>>> if it is immutable underneath.
>>
>> If you have a const function that accepts both mutable and immutable
>> args, then that function *by definition* cannot tell if it received
>> mutable or immutable args.
>>
>
> I understand. The trick is to disallow creating immutable instances of a
> class which is allowed to mutate the receiver in const methods.
> This is essentially your proposal with the casts, but it is type safe.
>
> This removes the 'const' guarantees, but 'immutable' stays
> unaffected. Furthermore, functions with closures are type checked at
> their creation site and may violate const-transitivity without affecting
> 'immutable'.
>

It is interesting. But it does transfer the constraint on const into a constraint that such classes (and their children) can never be immutable.

This isn't a free win, and I'm not sure it's worth it.
July 11, 2012
On 11/07/2012 16:04, Timon Gehr wrote:
> On 07/11/2012 02:58 PM, deadalnix wrote:
>> I don't think that is a big issue.
>>
>> Either the value isn't accessed often, and then it makes no sense to
>> cache it, or it is accessed often and it makes sense to precalculate the
>> cached value when building the object.
>>
>
> And sometimes it is not known in advance if it will be accessed at all.
> And this is somewhere in a class that seems unrelated to the object
> that wants to compute a string representation/hash value/comparison.
> It is not the simple cases that are an issue.

Arguably, this is a software design problem. I have never encountered a case where I needed to do such a thing, nor have I encountered anything similar in code, except code that demonstrates tricks or subverts language features/libraries in unexpected ways (which is always fun to do, but should never end up in production or be promoted by a language/library).
July 11, 2012
On 11/07/2012 16:16, deadalnix wrote:
> On 11/07/2012 16:04, Timon Gehr wrote:
>> On 07/11/2012 02:58 PM, deadalnix wrote:
>>> I don't think that is a big issue.
>>>
>>> Either the value isn't accessed often, and then it makes no sense to
>>> cache it, or it is accessed often and it makes sense to precalculate the
>>> cached value when building the object.
>>>
>>
>> And sometimes it is not known in advance if it will be accessed at all.
>> And this is somewhere in a class that seems unrelated to the object
>> that wants to compute a string representation/hash value/comparison.
>> It is not the simple cases that are an issue.
>
> Arguably, this is a software design problem. I have never encountered a
> case where I needed to do such a thing, nor have I encountered anything
> similar in code, except code that demonstrates tricks or subverts language
> features/libraries in unexpected ways (which is always fun to do, but
> should never end up in production or be promoted by a language/library).

By the way, I'd be happy to be proven wrong, but I think the issue is overstated here.