November 19, 2009
On Wed, 18 Nov 2009 18:14:08 -0500, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> We're entering the finale of D2 and I want to keep a short list of things that must be done and integrated in the release. It is clearly understood by all of us that there are many things that could and probably should be done.
>
> 1. Currently Walter and Don are diligently fixing the problems marked on the current manuscript.
>
> 2. User-defined operators must be revamped. Fortunately Don already put in an important piece of functionality (opDollar). What we're looking at is a two-pronged attack motivated by Don's proposal:
>
> http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP7
>
> The two prongs are:
>
> * Encode operators by compile-time strings. For example, instead of the plethora of opAdd, opMul, ..., we'd have this:
>
> T opBinary(string op)(T rhs) { ... }
>
> The string is "+", "*", etc. We need to design what happens with read-modify-write operators like "+=" (should they be dispatched to a different function? etc.) and also what happens with index-and-modify operators like "[]=", "[]+=" etc. Should we go with proxies? Absorb them in opBinary? Define another dedicated method? etc.

I don't like this.  The only useful thing I can see is if you wanted to write less code to do an operation on a wrapper aggregate, such as an array, where you could define all binary operations with a single mixin.

Other than that, it munges together all binary operations into a single function, even though all those operations are different. It:

1) prevents code separation from things that are considered separately
2) makes operators non-virtual, which can be solved by a thunk, but that seems like a lot of boilerplate code that will just cause bloat
3) If you derive from a class that implements an operator, and you want to make that operator virtual, it will be impossible
4) auto-generated documentation is going to *really* suck
5) you can't define operators on interfaces, or if you do, it looks ridiculous (a thunk function that dispatches to the virtual methods).
6) implementing a new operator in a derived class is virtually impossible (no pun intended).

I imagine that dcollections for example will be *very* hard to write with this change.

Seems like you are trying to solve a very focused problem without looking at the new problems your solution will cause outside that domain.

Can we do something like how opApply/ranges resolves? I.e. the compiler tries doing opAdd or opMul or whatever, and if that doesn't exist, try opBinary("+").

> 3. It was mentioned in this group that if getopt() does not work in SafeD, then SafeD may as well pack and go home. I agree. We need to make it work. Three ideas discussed with Walter:
>
> * Allow taking addresses of locals, but in that case switch allocation from stack to heap, just like with delegates. If we only do that in SafeD, behavior will be different than with regular D. In any case, it's an inefficient proposition, particularly for getopt() which actually does not need to escape the addresses - just fills them up.

Perhaps, but getopt is probably not the poster child for optimizing performance -- you most likely call it once, and changing that single call site to use heap data isn't going to make a difference.

> * Allow @trusted (and maybe even @safe) functions to receive addresses of locals. Statically check that they never escape an address of a parameter. I think this is very interesting because it enlarges the common ground of D and SafeD.

I think allowing @trusted or @safe functions to receive addresses of locals is no good when the caller is @safe (i.e. a @safe function calling a @trusted function with the address of a local, without heap-allocating).  Remember the "returning a parameter array" problem...
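To make the danger concrete, here is the kind of escape I believe is being alluded to (a hypothetical sketch; `fill`, `caller`, and `leaked` are made-up names):

```d
// A @trusted callee that escapes the pointer it receives. If a @safe
// caller were allowed to pass &local without heap allocation, the
// stored pointer would dangle once the caller's frame is gone.
int* leaked;

@trusted void fill(int* p) { leaked = p; }  // escapes its parameter

void caller() {  // imagine this were @safe under the relaxed rule
    int x = 42;
    fill(&x);    // fine during the call...
}                // ...but x's stack slot dies here; leaked now dangles
```

Note that fill is @trusted, so the compiler takes its no-escape promise on faith; statically verifying that promise is exactly the hard part of Andrei's second idea.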

> * Figure out a way to reconcile "ref" with variadics. This is the actual reason why getopt chose to traffic in addresses, and fixing it is the logical choice and my personal favorite.

This sounds like the best choice.

> 6. There must be many things I forgot to mention, or that cause grief to many of us. Please add to/comment on this list.

I know it's not part of the spec, but I'm not sure if you mention the array "data stomping" problem in the book.  If not, the MRU cache needs to be implemented.

-Steve
November 19, 2009
Steven Schveighoffer wrote:
> On Thu, 19 Nov 2009 03:53:51 -0500, Don <nospam@nospam.com> wrote:
> 
>> Andrei Alexandrescu wrote:
>>> We're entering the finale of D2 and I want to keep a short list of things that must be done and integrated in the release. It is clearly understood by all of us that there are many things that could and probably should be done.
>>>  1. Currently Walter and Don are diligently fixing the problems marked on the current manuscript.
>>>  2. User-defined operators must be revamped.
>>
>> Should opIndex and opSlice be merged?
>> This would be simpler, and would allow multi-dimensional slicing.
>> Probably the simplest way to do this would be to use fixed-length arrays of length 2 for slices.
>> So, for example, if the indices are integers, then
>> opIndex(int x) { } is the 1-D index, and
>> opIndex(int[2] x) {} is a slice from x[0] to x[1],
>> which corresponds exactly to the current opSlice(x[0]..x[1]).
> 
> I hope you still mean to allow arguments other than int.

Read it.
"So, _for example_, if the indices are integers,".

> Also, how does this work with Andrei's "opBinary" proposal?

I don't know -- I'm not proposing a solution to the indexing-and-slicing-expression issues. Still, combining indexing and slicing helps a little. But I don't yet know if the opBinary concept will work.

I think the problem of verbosity in operator overloads (opAdd, opMul, ... all being nearly the same) is very unimportant and shouldn't be the focus of attention. Two things matter:
(1) expressivity; and
(2) performance.
If you don't have these two, your verbosity will be shot to pieces anyway.
This proposal, together with opDollar(), closes the last remaining element of (1).
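For concreteness, a minimal sketch of the merged opIndex/opSlice idea on a 1-D container (my illustration, not from Don's post; `Vec` is a made-up name):

```d
// Sketch: one name, opIndex, covering both indexing and slicing.
// A length-2 static array stands in for today's a..b slice syntax.
struct Vec {
    int[] data;

    // v[i] -- plain 1-D indexing
    int opIndex(size_t i) { return data[i]; }

    // v[r] with r = [a, b] -- plays the role of opSlice(a, b)
    int[] opIndex(size_t[2] r) { return data[r[0] .. r[1]]; }
}
```

An n-dimensional opIndex would then accept any mix of scalars and length-2 arrays, which is how multi-dimensional slicing falls out naturally.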
November 19, 2009
On Thu, 19 Nov 2009 08:09:14 -0500, Don <nospam@nospam.com> wrote:

> Steven Schveighoffer wrote:
>> On Thu, 19 Nov 2009 03:53:51 -0500, Don <nospam@nospam.com> wrote:
>>
>>> Andrei Alexandrescu wrote:
>>>> We're entering the finale of D2 and I want to keep a short list of things that must be done and integrated in the release. It is clearly understood by all of us that there are many things that could and probably should be done.
>>>>  1. Currently Walter and Don are diligently fixing the problems marked on the current manuscript.
>>>>  2. User-defined operators must be revamped.
>>>
>>> Should opIndex and opSlice be merged?
>>> This would be simpler, and would allow multi-dimensional slicing.
>>> Probably the simplest way to do this would be to use fixed-length arrays of length 2 for slices.
>>> So, for example, if the indices are integers, then
>>> opIndex(int x) { } is the 1-D index, and
>>> opIndex(int[2] x) {} is a slice from x[0] to x[1],
>>> which corresponds exactly to the current opSlice(x[0]..x[1]).
>>  I hope you still mean to allow arguments other than int.
>
> Read it.
> "So, _for example_, if the indices are integers,".

D'oh, sorry for the noise :(

>> Also, how does this work with Andrei's "opBinary" proposal?
>
> I don't know -- I'm not proposing a solution to the indexing-and-slicing-expression issues. Still, combining indexing and slicing helps a little. But I don't yet know if the opBinary concept will work.
>
> I think the problem of verbosity in operator overloads (opAdd, opMul, ... all being nearly the same) is very unimportant and shouldn't be the focus of attention. Two things matter:
> (1) expressivity; and
> (2) performance.
> If you don't have these two, your verbosity will be shot to pieces anyway.
> This proposal, together with opDollar(), closes the last remaining element of (1).

I agree.

-Steve
November 19, 2009
== Quote from Steven Schveighoffer (schveiguy@yahoo.com)'s article
> On Wed, 18 Nov 2009 18:14:08 -0500, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:
> > We're entering the finale of D2 and I want to keep a short list of things that must be done and integrated in the release. It is clearly understood by all of us that there are many things that could and probably should be done.
> >
> > 1. Currently Walter and Don are diligently fixing the problems marked on the current manuscript.
> >
> > 2. User-defined operators must be revamped. Fortunately Don already put in an important piece of functionality (opDollar). What we're looking at is a two-pronged attack motivated by Don's proposal:
> >
> > http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP7
> >
> > The two prongs are:
> >
> > * Encode operators by compile-time strings. For example, instead of the plethora of opAdd, opMul, ..., we'd have this:
> >
> > T opBinary(string op)(T rhs) { ... }
> >
> > The string is "+", "*", etc. We need to design what happens with read-modify-write operators like "+=" (should they be dispatched to a different function? etc.) and also what happens with index-and-modify operators like "[]=", "[]+=" etc. Should we go with proxies? Absorb them in opBinary? Define another dedicated method? etc.
> I don't like this.  The only useful thing I can see is if you wanted to
> write less code to do an operation on a wrapper aggregate, such as an
> array, where you could define all binary operations with a single mixin.
> Other than that, it munges together all binary operations into a single
> function, even though all those operations are different. It:
> 1) prevents code separation from things that are considered separately
> 2) makes operators non-virtual, which can be solved by a thunk, but that
> seems like a lot of boilerplate code that will just cause bloat
> 3) If you derive from a class that implements an operator, and you want to
> make that operator virtual, it will be impossible
> 4) auto-generated documentation is going to *really* suck
> 5) you can't define operators on interfaces, or if you do, it looks
> ridiculous (a thunk function that dispatches to the virtual methods).
> 6) implementing a new operator in a derived class is virtually impossible
> (no pun intended).
> I imagine that dcollections for example will be *very* hard to write with
> this change.
> Seems like you are trying to solve a very focused problem without looking
> at the new problems your solution will cause outside that domain.
> Can we do something like how opApply/ranges resolves? I.e. the compiler
> tries doing opAdd or opMul or whatever, and if that doesn't exist, try
> opBinary("+").

This sounds like another candidate for inclusion in a std.mixins module.  We could make a mixin that gives you back the old behavior for those cases where you need it:

enum string oldOperatorOverloading =
q{
    T opBinary(string op)(T rhs) {
        static if(op == "+"
            && __traits(compiles, this.opAdd(T.init))) {
            return opAdd(rhs);
        }
    }

    // etc.
};

Usage:

class Foo {
    mixin(oldOperatorOverloading);

    Foo opAdd(Foo rhs) {  /* do stuff. */ }
}
November 19, 2009
== Quote from Chad J (chadjoan@__spam.is.bad__gmail.com)'s article
> Andrei Alexandrescu wrote:
> > grauzone wrote:
> >>
> >> Also, you should fix the auto-flattening of tuples before it's too late. I think everyone agrees that auto-flattening is a bad idea, and that tuples should be nestable. Flattening can be done manually with a unary operator.
> >>
> >> (Introducing sane tuples (e.g. unify type and value tuples, sane and
> >> short syntax, and all that) can wait for later if it must. Introducing
> >> these can be downwards compatible, I hope.)
> >
> > Non-flattening should be on the list but I am very afraid the solution would take a long time to design, implement, and debug. I must discuss this with Walter.
> >
> > Andrei
> Might I suggest a daring stop-gap:  kill tuples altogether.
> Then we can implement them later correctly and it won't break backwards
> compatibility.
> Ideally it isn't an all-or-nothing proposition either.  Maybe we can
> just kill the parts that are bad.  Like disallow tuples-of-tuples, but
> allow tuples that are already flat.
> - Chad

But that would destroy most of the metaprogramming capabilities of D.
November 19, 2009
Steven Schveighoffer wrote:
> On Wed, 18 Nov 2009 18:14:08 -0500, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:
> 
>> We're entering the finale of D2 and I want to keep a short list of things that must be done and integrated in the release. It is clearly understood by all of us that there are many things that could and probably should be done.
>>
>> 1. Currently Walter and Don are diligently fixing the problems marked on the current manuscript.
>>
>> 2. User-defined operators must be revamped. Fortunately Don already put in an important piece of functionality (opDollar). What we're looking at is a two-pronged attack motivated by Don's proposal:
>>
>> http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP7
>>
>> The two prongs are:
>>
>> * Encode operators by compile-time strings. For example, instead of the plethora of opAdd, opMul, ..., we'd have this:
>>
>> T opBinary(string op)(T rhs) { ... }
>>
>> The string is "+", "*", etc. We need to design what happens with read-modify-write operators like "+=" (should they be dispatched to a different function? etc.) and also what happens with index-and-modify operators like "[]=", "[]+=" etc. Should we go with proxies? Absorb them in opBinary? Define another dedicated method? etc.
> 
> I don't like this.  The only useful thing I can see is if you wanted to write less code to do an operation on a wrapper aggregate, such as an array, where you could define all binary operations with a single mixin.
> 
> Other than that, it munges together all binary operations into a single function, even though all those operations are different. It:
> 
> 1) prevents code separation from things that are considered separately

(I'll retort inline for each point.) That's quite exactly the opposite of what my experience with C++ and D operator overloading suggests: most of the time (a) I need to overload operators in large groups, (b) I need to do virtually the same actions for each operator in a group.

Note that with opBinary you have unprecedented flexibility on how you want to group operators. Consider:

struct A {
    A opBinary(string op)(A rhs)
        if (op == "+" || op == "-" || op == "*" || op == "/" ||
            op == "^^")
    {
        ...
    }
    A opBinary(string op)(A rhs) if (op == "~")
    {
        ...
    }
    ...
}

So anyway I contend that your argument is not correct. The "if" clause allows you to separate code for things that are considered separately. So essentially you can do things with one function per operator if you so wanted. Correct?

> 2) makes operators non-virtual, which can be solved by a thunk, but that seems like a lot of boilerplate code that will just cause bloat

Bloat of source or bloat of binary code? I don't know about the latter, but the former is actually nothing to worry about - it's easier to define an interface or a mixin to convert from the proposed approach to the old approach, than vice versa.

> 3) If you derive from a class that implements an operator, and you want to make that operator virtual, it will be impossible

It means the base class didn't mean for that operator to be overridable. If it wanted to make it configurable, it would have forwarded the operator to a virtual function.

> 4) auto-generated documentation is going to *really* suck

Agreed.

> 5) you can't define operators on interfaces, or if you do, it looks ridiculous (a thunk function that dispatches to the virtual methods).

interface Ridiculous {
    // Final functions in interfaces are allowed per TDPL
    Ridiculous opBinary(string op)(Ridiculous rhs) if (op == "+") {
        return opAdd(rhs);
    }
    // Implement this
    Ridiculous opAdd(Ridiculous);
}

You can group things as you wish and combine virtual calls with string comparisons if that helps:

interface Ridiculous {
    // Final functions in interfaces are allowed per TDPL
    Ridiculous opArith(string op)(Ridiculous rhs)
        if (op == "+" || op == "-" || op == "*" || op == "/" ||
            op == "^^")
    {
        return opArith(op, rhs);
    }
    // Implement this
    Ridiculous opArith(string, Ridiculous);
}

> 6) implementing a new operator in a derived class is virtually impossible (no pun intended).

class Base {
    Base opBinary(string op)(Base rhs) if (op == "+") {
        ...
    }
}

class Derived : Base {
    Derived opBinary(string op)(Derived rhs) if (op == "-") {
        ...
    }
}

When you do so, you retain the advantage of grouping operators together (I think it's most likely that Base defines operators of one kind e.g. arithmetic and Derived defines operators of a different kind e.g. logic or catenation). Add thunking as you need and you're good to go.
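The thunking might look like this (a sketch with made-up names; the template entry point is non-virtual, but it forwards into a virtual hook):

```d
class Base {
    int value;
    this(int v) { value = v; }

    // Non-virtual template entry point...
    Base opBinary(string op)(Base rhs) if (op == "+") {
        return add(rhs);  // ...thunks into the virtual world.
    }

    // ...so the actual behavior stays overridable.
    Base add(Base rhs) { return new Base(value + rhs.value); }
}

class Derived : Base {
    this(int v) { super(v); }
    // Override the virtual hook, not the operator itself.
    override Base add(Base rhs) { return new Base(value + rhs.value + 1); }
}
```

Derived classes override add, not the operator; callers still write a + b and get dynamic dispatch.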

> I imagine that dcollections for example will be *very* hard to write with this change.

I hope my arguments above convinced you to the contrary.

> Seems like you are trying to solve a very focused problem without looking at the new problems your solution will cause outside that domain.

You are correct in that I'm trying to smooth things primarily for structs. But I'll say that the templated approach is no slouch and can accommodate classes with virtual functions very capably, even though it is a bit more work than before.

One question is whether operators are more often overloaded for structs or for classes. I imagine dcollections defines catenation and slicing, but not the bulk of operators. But the vast majority of operator overloading happens with value types as far as I can tell.

> Can we do something like how opApply/ranges resolves? I.e. the compiler tries doing opAdd or opMul or whatever, and if that doesn't exist, try opBinary("+").

I wouldn't want to have too many layers that do essentially the same thing.

>> 3. It was mentioned in this group that if getopt() does not work in SafeD, then SafeD may as well pack and go home. I agree. We need to make it work. Three ideas discussed with Walter:
>>
>> * Allow taking addresses of locals, but in that case switch allocation from stack to heap, just like with delegates. If we only do that in SafeD, behavior will be different than with regular D. In any case, it's an inefficient proposition, particularly for getopt() which actually does not need to escape the addresses - just fills them up.
> 
> Perhaps, but getopt is probably not the poster child for optimizing performance -- you most likely call it once, and changing that single call site to use heap data isn't going to make a difference.

I agree. My fear is that getopt is only an example of a class of functions.

>> * Allow @trusted (and maybe even @safe) functions to receive addresses of locals. Statically check that they never escape an address of a parameter. I think this is very interesting because it enlarges the common ground of D and SafeD.
> 
> I think allowing @trusted or @safe functions to receive addresses of locals is no good when the caller is @safe (i.e. a @safe function calling a @trusted function with the address of a local, without heap-allocating).  Remember the "returning a parameter array" problem...

I've been thinking more of examples where you pass a pointer to a @trusted or @safe function and that function escapes the pointer. I couldn't find an example. So maybe allowing that is a good solution.

How would returning a parameter array break things?

>> * Figure out a way to reconcile "ref" with variadics. This is the actual reason why getopt chose to traffic in addresses, and fixing it is the logical choice and my personal favorite.
> 
> This sounds like the best choice.

Well it's not that simple. As I explained in a different post, getopt takes (string, pointer, string, pointer, string, pointer, ...). Now we need to make it take references instead of pointers, but the strings should stay values. We can't express a checkered constraint like that.

Incidentally, there's a theory for allowing that: "regular types", inspired by regular grammars. With a regular type you can define getopt's signature as one or more pairs of string and ref. (Unfortunately C++ defined regular types differently, which makes things difficult to search for.) Anyhow, I don't think such an approach would help D - it's too complicated.

>> 6. There must be many things I forgot to mention, or that cause grief to many of us. Please add to/comment on this list.
> 
> I know it's not part of the spec, but I'm not sure if you mention the array "data stomping" problem in the book.  If not, the MRU cache needs to be implemented.

Yes, it will be, because otherwise the book has a few failing unittests. In fact, I was hoping I could talk you or David into doing it :o).


Andrei
November 19, 2009
Andrei Alexandrescu wrote:
> Any more thoughts, please let them be known. Again, this is the ideal time to contribute. But "meh, it's a hack" is difficult to discuss.

Well, that's like Bjarne Stroustrup asking you: "What would you have done in my place? This looked like the right thing to do at the time!" And now we're working on a language that's supposed to replace C++.

Partly, I don't really know how opBinary is supposed to solve most operator overloading problems (listed in DIP7). It just looks like a stupid dispatch mechanism. It could be implemented using CTFE and mixins without compiler changes: just let a CTFE function generate a dispatcher from each opSomething to opBinary. Of course, if you think operators are something that's forwarded to something else, it'd be nicer for dmd to do this, because code gets shorter. So opSomething gets ditched in favor of opBinary. But actually, the functionality of opBinary can be provided as a template mixin or a CTFE function in Phobos. At least then the user has a choice of what to use.

(About the issue that you need to remember names for operator symbols: you know C++ has a fabulous idea how to get around this...)

Now what about unary operators? Or very specific stuff like opApply? What's with opSomethingAssign (or "expr1[expr2] @= expr3" in general)? opBinary doesn't seem to solve any of those. Also, all this just goes down to a generic "opOperator(char[] expression, T...)(T args)", where expression is actually an expression involved with the object. It feels like this leads to nothing. And opBinary is just a quite arbitrary stop on that way to nothing. Just a hack to make code shorter for some use cases.

One way of solving this issue about "extended operator overloading" would be to introduce proper AST macros. An AST macro could match on a leaf of an expression and replace it by custom code, and use this mechanism to deal with stuff like "expr1[expr2] @= expr3" (and I don't see how opBinary would solve this... encode the expression as a string? fallback to naive code if opBinary fails to match?). At least that's what I thought AST macros would be capable to do.

Anyway, AST macros got ditched in favor of const/immutable, so that's not an option. You also might regard AST macros as a vague idea that only solves "everything" because it's so vague and unspecified. Feel free to go on about this.

November 19, 2009
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail@erdani.org)'s article
> Yes, it will be because the book has a few failing unittests. In fact, I
> was hoping I could talk you or David into doing it :o).
> Andrei

Unfortunately, I've come to hate the MRU idea because it would fail miserably for large arrays.  I've explained this before, but not particularly thoroughly, so I'll try to explain it more thoroughly here.  Let's say you have an array that takes up more than half of the total memory you are using.  You try to append to it and:

1.  The GC runs.  The MRU cache is therefore cleared.

2.  Your append succeeds, but the array is reallocated.

3.  You try to append again.  Now, because you have a huge piece of garbage that you just created by reallocating on the last append, the GC needs to run again. The MRU cache is cleared again.

4.  Goto 2.

Basically, for really huge arrays, we will have the nasty surprise that arrays are reallocated almost every time.  Unless something can be done about this, my vote is as follows:

1.  a ~= b -> syntactic sugar for a = a ~ b.  It's inefficient, but at least it's predictably inefficient and people won't use it if they care at all about performance.

2.  .length always reallocates when increasing the length of the array.

3.  Get a library type with identical syntax to slices that truly owns its contents and supports efficient appending, resizing, etc.  I'd be willing to write this (and even write it soon) if no one else wants to, especially since I think we might be able to use it to define unique ownership for arrays to allow essentially a safe assumeUnique for arrays and kill two birds w/ one stone.
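A bare-bones sketch of the library type described in point 3 (hypothetical; `OwnedArray` and `append` are made-up names, and a real design would also want reserve, ranges, etc.):

```d
// An array that owns its buffer outright: length and capacity are
// tracked separately, and there is no sharing with outside slices,
// so appending can never stomp someone else's data.
struct OwnedArray(T) {
    private T[] buf;     // capacity
    private size_t len;  // used portion

    void append(T x) {   // stands in for a ~= x
        if (len == buf.length)
            buf.length = buf.length ? buf.length * 2 : 4;  // amortized growth
        buf[len++] = x;
    }
    @property size_t length() const { return len; }
    T[] data() { return buf[0 .. len]; }
}
```

Because the type never hands out its spare capacity, appends are amortized O(1) without any MRU cache or stomping hazard.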
November 19, 2009
Andrei Alexandrescu wrote:
> grauzone wrote:
>> Andrei Alexandrescu wrote:
>>> 3. It was mentioned in this group that if getopt() does not work in SafeD, then SafeD may as well pack and go home. I agree. We need to make it work. Three ideas discussed with Walter:
>>
>> If that's such an issue, why don't you just change it and use a struct defined by the user? structs are natural name-value pairs that work at compile time, and they can always be returned from functions.

No reply to this one? I would have been curious.
November 19, 2009
grauzone wrote:
> Andrei Alexandrescu wrote:
>> Any more thoughts, please let them be known. Again, this is the ideal time to contribute. But "meh, it's a hack" is difficult to discuss.
> 
> Well, that's like Bjarne Stroustrup asking you: "What would you have done in my place? This looked like the right thing to do at the time!" And now we're working on a language that's supposed to replace C++.

I'm not sure what you mean here.

> Partly, I don't really know how opBinary is supposed to solve most operator overloading problems (listed in DIP7). It just looks like a stupid dispatch mechanism. It could be implemented using CTFE and mixins without compiler changes: just let a CTFE function generate a dispatcher from each opSomething to opBinary. Of course, if you think operators are something that's forwarded to something else, it'd be nicer for dmd to do this, because code gets shorter. So opSomething gets ditched in favor of opBinary. But actually, the functionality of opBinary can be provided as a template mixin or a CTFE function in Phobos. At least then the user has a choice of what to use.

Things could indeed be generated with a CTFE mixin, but first we'd need a hecatomb of names to be added: all floating-point comparison operators and all index-assign operators. With the proposed approach there is no more need to add all those names and have the users consult tables to know how they are named.
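For what it's worth, the CTFE route might look like the sketch below (my illustration; the parallel names/ops table is precisely the table of names that would have to be written and maintained somewhere):

```d
// CTFE function generating old-style named operators as forwarders
// to the proposed opBinary. The parallel name/token arrays are the
// lookup table users would otherwise have to consult.
string makeForwarders(string[] names, string[] ops) {
    string code;
    foreach (i, name; names)
        code ~= "auto " ~ name ~ "(R)(R rhs) { return opBinary!\""
              ~ ops[i] ~ "\"(rhs); }\n";
    return code;
}

struct S {
    int x;
    S opBinary(string op)(S rhs) if (op == "+" || op == "*") {
        return S(mixin("x " ~ op ~ " rhs.x"));
    }
    // Generate opAdd and opMul at compile time from the table.
    mixin(makeForwarders(["opAdd", "opMul"], ["+", "*"]));
}
```

The forwarders are one-liners, but note they only cover the names listed in the table; every new category (comparisons, index-assigns, ...) grows it further.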

> (About the issue that you need to remember names for operator symbols: you know C++ has a fabulous idea how to get around this...)

I think it's not as flexible a solution as passing a compile-time string because there is no way to actually use that token.

> Now what about unary operators?

opUnary.

> Or very specific stuff like opApply? 

opApply stays as it is.

> What's with opSomethingAssign (or "expr1[expr2] @= expr3" in general)? opBinary doesn't seem to solve any of those.

opBinary does solve the opIndex* morass, because it only adds one function per category, not one function per operator. For example:

struct T {
    // op can be "=", "+=", "-=" etc.
    E opAssign(string op)(E rhs) { ... }
    // op can be "=", "+=", "-=" etc.
    E opIndexAssign(string op)(size_t i, E rhs) { ... }
}

This was one motivation: instead of defining a lot of small functions that have each a specific name, define one function for each category of operations and encode the operator name as its own token. I don't understand exactly what the problem is with that.

> Also, all this just goes down to a generic "opOperator(char[] expression, T...)(T args)", where expression is actually an expression involved with the object. It feels like this leads to nothing.

We need something to work with. "looks like a stupid mechanism" and "feels" are not things that foster further dialog.

> And opBinary is just a quite arbitrary stop on that way to nothing. Just a hack to make code shorter for some use cases.

opBinary is a binary operator, hardly something someone would pull out of a hat. I'm not sure what you mean to say here.

> One way of solving this issue about "extended operator overloading" would be to introduce proper AST macros. An AST macro could match on a leaf of an expression and replace it by custom code, and use this mechanism to deal with stuff like "expr1[expr2] @= expr3" (and I don't see how opBinary would solve this... encode the expression as a string? fallback to naive code if opBinary fails to match?). At least that's what I thought AST macros would be capable to do.

See above on how the proposed approach addresses RMW operations on indexes.

> Anyway, AST macros got ditched in favor of const/immutable, so that's not an option.

That I agree with.


Andrei