On 4/22/2020 5:04 AM, Manu wrote:
> [...]
Ok, I've had a chance to think about it. It's a scathingly brilliant idea!
But (there's always a but!) something stuck out at me. Consider arrays:
void test()
{
    auto a = [1, 2, 3];
    int[3] b = a[]*a[]; // b[0] = a[0]*a[0]; b[1] = a[1]*a[1]; b[2] = a[2]*a[2];
    int[3] c = a[]*2;   // c[0] = a[0]*2; c[1] = a[1]*2; c[2] = a[2]*2;
}
These look familiar! D tuples already use array syntax - they can be indexed and
sliced. Instead of the ... syntax, just use array syntax!
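For reference, indexing and slicing already work on alias sequences today (a minimal sketch; the names are just placeholders):

import std.meta : AliasSeq;

alias Tup = AliasSeq!(1, 2, 3);
static assert(Tup[0] == 1);        // indexing works today
static assert(Tup[$ - 1] == 3);    // $ is the sequence length
alias Tail = Tup[1 .. $];          // slicing yields another sequence
static assert(Tail.length == 2);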
The examples from the DIP:
=====================================
--- DIP
(Tup*10)... --> ( Tup[0]*10, Tup[1]*10, ... , Tup[$-1]*10 )
--- Array syntax
Tup*10
====================================
--- DIP
alias Tup = AliasSeq!(1, 2, 3);
int[] myArr;
assert([ myArr[Tup + 1]... ] == [ myArr[Tup[0] + 1], myArr[Tup[1] + 1],
myArr[Tup[2] + 1] ]);
--- Array
alias Tup = AliasSeq!(1, 2, 3);
int[] myArr;
assert([ myArr[Tup + 1] ] == [ myArr[Tup[0] + 1], myArr[Tup[1] + 1],
myArr[Tup[2] + 1] ]);
===================================
---DIP
alias Values = AliasSeq!(1, 2, 3);
alias Types = AliasSeq!(int, short, float);
pragma(msg, cast(Types)Values...);
---Array
alias Values = AliasSeq!(1, 2, 3);
alias Types = AliasSeq!(int, short, float);
pragma(msg, cast(Types)Values);
=================================
---DIP
alias OnlyTwo = AliasSeq!(10, 20);
pragma(msg, (Values + OnlyTwo)...);
---Array
alias OnlyTwo = AliasSeq!(10, 20);
pragma(msg, Values + OnlyTwo);
The idea is simple: if we have:
t op c
where t is a tuple and c is not, the result is:
tuple(t[0] op c, t[1] op c, ..., t[length - 1] op c)
For:
t1 op t2
the result is:
tuple(t1[0] op t2[0], t1[1] op t2[1], ..., t1[length - 1] op t2[length - 1])
The AST doesn't have to be walked to make this work; it is just done as part of the
usual bottom-up semantic processing.
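For comparison, the closest you can get to `Tup*10` today needs a named helper plus std.meta.staticMap (a rough sketch, not part of either proposal):

import std.meta : AliasSeq, staticMap;

alias Tup = AliasSeq!(1, 2, 3);
enum timesTen(int x) = x * 10;             // helper just to spell "* 10"
alias Scaled = staticMap!(timesTen, Tup);  // (10, 20, 30)
static assert([Scaled] == [10, 20, 30]);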
I thought about this approach, but the feature reaches much further than `a op b`.
When I considered your approach, it appeared to add a lot of edge cases and limits on the structure of the expressions, particularly where it interacts with varargs or variadic templates.
The advantages are:
1. no new grammar
Fortunately, the grammar is trivial.
2. no new operator precedence rules
3. turning expressions that are currently errors into doing the obvious thing
This is compelling, but I couldn't see how it would work end to end.
Why does C++ use ... rather than array syntax? Because C++ doesn't have arrays!
Another reason I introduce `...` is for static fold.
The follow-up to this DIP would make this expression work:
`Tup + ...` -> `Tup[0] + Tup[1] + ... + Tup[$-1]`
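Today the same fold needs a recursive template or a CTFE loop; roughly (a sketch, assuming Tup is a value sequence):

import std.meta : AliasSeq;

alias Tup = AliasSeq!(1, 2, 3);
enum sum = () {                 // CTFE stand-in for the missing static fold
    int s = 0;
    static foreach (v; Tup) s += v;
    return s;
}();
static assert(sum == Tup[0] + Tup[1] + Tup[2]);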
For instance, up-thread it was noted that a static-fold algorithm may implement a find-type-in-tuple; it would look like this:
`is(MyType == Types) || ...` <- evaluates to `true` if MyType is present in Types, with no template instantiation junk.
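Today that test needs exactly that kind of instantiation, e.g. via std.meta.staticIndexOf (sketch):

import std.meta : AliasSeq, staticIndexOf;

alias Types = AliasSeq!(int, short, float);
enum found = staticIndexOf!(float, Types) != -1;  // extra instantiation stands in for the fold
static assert(found);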
So, the `...` is deliberately intended to bring additional value.
Can you show how your suggestion applies to some more complex cases (not yet noted in the DIP)?
// controlled expansion:
alias Tup = AliasSeq!(0, 1, 2);
alias Tup2 = AliasSeq!(3, 4, 5);
[ Tup, Tup2... ]... ->
[ 0, 3, 4, 5 ],
[ 1, 3, 4, 5 ],
[ 2, 3, 4, 5 ]
// template instantiations
alias TTup = AliasSeq!(int, float, char);
MyTemplate!(Tup, TTup.sizeof...)... ->
MyTemplate!(0, 4, 4, 1),
MyTemplate!(1, 4, 4, 1),
MyTemplate!(2, 4, 4, 1)
// replace staticMap
alias staticMap(alias F, T...) = F!T...;
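(For contrast, a sketch of the recursive shape this one-liner would replace; not Phobos's exact code:)

import std.meta : AliasSeq;

template staticMapToday(alias F, T...)
{
    static if (T.length == 0)
        alias staticMapToday = AliasSeq!();
    else
        alias staticMapToday = AliasSeq!(F!(T[0]), staticMapToday!(F, T[1 .. $]));
}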
// more controlled expansion, with template arg lists
AliasSeq!(10, Tup, 20)... -> ( 10, 0, 20, 10, 1, 20, 10, 2, 20 )
AliasSeq!(10, Tup..., 20) -> ( 10, 0, 1, 2, 20 )
// static fold (outside the scope of this DIP, but it's next in line)
`Tup + ...` -> `Tup[0] + Tup[1] + ... + Tup[$-1]`
// static find
`is(MyType == Types) || ...`
That said, with respect to these fold expressions, it would be ideal if they applied equally to arrays as I propose they do to tuples.
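For what it's worth, arrays already have a run-time fold in std.algorithm, which is the behaviour a static fold would ideally mirror (sketch):

import std.algorithm.iteration : fold;

unittest
{
    // run-time fold over an array; the proposed static fold would be the compile-time analogue
    assert([1, 2, 3].fold!((a, b) => a + b) == 6);
}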