What are (dis)advantages of using pure and immutable by default?
September 07, 2015
It seems to me good practice to mark every function I write as `pure` and to declare every variable as `immutable`, unless there is a reason not to.
I can see some serious advantages to this, the most notable being minimal side effects and more predictable code.  However, I suppose it will also impact performance and memory footprint, though I have no idea how deep the impact will be.

I'm pretty sure I'm not the first one to think about this; so what would you seasoned D'ers say?
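
To make it concrete, here's a minimal sketch of what I have in mind (the names are just made up):

----
// A pure function: its result depends only on its arguments,
// and it touches no global state.
pure int addSquares(int a, int b)
{
    return a * a + b * b;
}

void main()
{
    // Variables declared immutable unless there's a reason not to.
    immutable int x = 3;
    immutable int y = 4;
    immutable int r = addSquares(x, y);
    assert(r == 25);
}
----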

September 07, 2015
On Monday 07 September 2015 12:40, Bahman Movaqar wrote:

> It seems to me good practice to mark every function I write as `pure` and to declare every variable as `immutable`, unless there is a reason not to.

I agree.

> I can see some serious advantages to this, the most notable being minimal side effects and more predictable code.  However, I suppose it will also impact performance and memory footprint, though I have no idea how deep the impact will be.

I don't see how merely marking things immutable/pure would affect performance negatively. They're just marks on the type. If anything, you could get a performance boost from the stricter guarantees. But realistically, there won't be a difference.

If you change your algorithms to avoid mutable/impure, then you may see worse performance than if you made use of them. But I suppose that would be "a reason not to" mark everything immutable/pure.
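
To sketch what such a boost could look like (purely illustrative; I'm not claiming any compiler does this today): a strongly pure function called twice with the same argument must return the same value, so the second call can in principle be elided.

----
pure int square(int n) { return n * n; }

void main()
{
    immutable int a = 5;
    // Identical arguments to a pure function: a compiler is free to
    // evaluate square(a) once and reuse the result.
    immutable int r = square(a) + square(a);
    assert(r == 50);
}
----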
September 07, 2015
On Monday, 7 September 2015 at 10:55:13 UTC, anonymous wrote:
> On Monday 07 September 2015 12:40, Bahman Movaqar wrote:
>> I can see some serious advantages to this, the most notable being minimal side effects and more predictable code.  However, I suppose it will also impact performance and memory footprint, though I have no idea how deep the impact will be.
>
> I don't see how merely marking things immutable/pure would affect performance negatively. They're just marks on the type. If anything, you could get a performance boost from the stricter guarantees. But realistically, there won't be a difference.

Just "marks", eh?
I was under the impression that when a variable declared as `immutable` is passed to a function, a copy of the value is passed.
However, based on "marks", I can imagine that since the data is marked as `immutable`, only a reference is passed, and the compiler guarantees that what it references never changes.  Am I right?

> If you change your algorithms to avoid mutable/impure, then you may see worse performance than if you made use of them. But I suppose that would be "a reason not to" mark everything immutable/pure.

True.
Now that more and more algorithms are designed with parallelism and distribution in mind, though, I believe mutable values and impure functions will, at least in certain domains, cease to exist.

September 07, 2015
On Monday 07 September 2015 13:12, Bahman Movaqar wrote:

> I was under the impression that when a variable declared as `immutable` is passed to a function, a copy of the value is passed.
>
> However, based on "marks", I can imagine that since the data is marked as `immutable`, only a reference is passed, and the compiler guarantees that what it references never changes.  Am I right?

Generally, immutable doesn't affect how things are passed around. An immutable type is passed the same way as its mutable counterpart.

Compilers may optimize based on immutability. This may mean that a reference is passed instead of a copy, or the other way around. I don't know if compilers currently do such things, or how much potential there is in that. Also, I don't think the language has a stance on it; it's purely an optimization.

immutable does affect how you can pass things around, though. An example:

----
void f(int a) {}
void g(int* a) {}

void main()
{
    int xm;
    immutable int xi;
    f(xm); /* ok, obviously */
    f(xi); /* ok */

    int* ym = &xm;
    immutable int* yi = &xi;
    g(ym); /* ok, obviously */
    g(yi); /* doesn't compile */
}
----

f(xi) is ok because when passing an int or an immutable(int), it's copied completely. So f cannot possibly mutate the original immutable variable xi.

g(yi) is not ok, because when passing an int* or an immutable(int*), only the pointer is copied. So g could dereference yi and mutate the referenced immutable variable xi.

The same applies to other forms of references: classes, ref parameters, arrays.
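
For example, the array case plays out the same way (a minimal sketch in the same style; h is just a made-up function taking a mutable slice):

----
void h(int[] a) {}

void main()
{
    immutable int[] zi = [1, 2, 3];
    h(zi);     /* doesn't compile: h could mutate the elements of zi */
    h(zi.dup); /* ok: .dup makes a fresh mutable copy of the data */
}
----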
September 07, 2015
On Monday, 7 September 2015 at 11:49:32 UTC, anonymous wrote:
> ----
> void f(int a) {}
> void g(int* a) {}
>
> void main()
> {
>     int xm;
>     immutable int xi;
>     f(xm); /* ok, obviously */
>     f(xi); /* ok */
>
>     int* ym = &xm;
>     immutable int* yi = &xi;
>     g(ym); /* ok, obviously */
>     g(yi); /* doesn't compile */
> }
> ----
>
> f(xi) is ok because when passing an int or an immutable(int), it's copied completely. So f cannot possibly mutate the original immutable variable xi.
>
> g(yi) is not ok, because when passing an int* or an immutable(int*), only the pointer is copied. So g could dereference yi and mutate the referenced immutable variable xi.
>
> The same applies to other forms of references: classes, ref parameters, arrays.

Thanks for the explanation.
So, from what I can gather, `immutable`, apart from enabling possible compiler-level optimisations, is mainly there to help the programmer catch, at compile time, errors that would otherwise surface at run time.

This is nice.  Indeed very helpful in a language where one deals with pointers and references.
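
E.g. even the trivial case is caught (a quick sketch):

----
void main()
{
    immutable int x = 1;
    x = 2; /* doesn't compile: cannot modify immutable variable x */
}
----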