November 21, 2009
== Quote from Max Samukha (spambox@d-coding.com)'s article
> On Fri, 20 Nov 2009 15:30:48 -0800, Walter Bright <newshound1@digitalmars.com> wrote:
> >Yigal Chripun wrote:
> >> what about foreach_reverse ?
> >
> >No love for foreach_reverse? <tear>
> And no mercy for opApply

opApply **must** be kept!!!!  It's how my parallel foreach loop works.  This would be **impossible** to implement with ranges.  If opApply is removed now, I will fork the language over it.
November 21, 2009
On Sat, 21 Nov 2009 18:51:40 +0000 (UTC), dsimcha <dsimcha@yahoo.com>
wrote:

>== Quote from Max Samukha (spambox@d-coding.com)'s article
>> On Fri, 20 Nov 2009 15:30:48 -0800, Walter Bright <newshound1@digitalmars.com> wrote:
>> >Yigal Chripun wrote:
>> >> what about foreach_reverse ?
>> >
>> >No love for foreach_reverse? <tear>
>> And no mercy for opApply
>
>opApply **must** be kept!!!!  It's how my parallel foreach loop works.  This would be **impossible** to implement with ranges.  If opApply is removed now, I will fork the language over it.

I guess it is possible:

uint[] numbers = new uint[1_000];

pool.parallel_each!((size_t i){
        numbers[i] = i;
    })(iota(0, numbers.length));

Though I agree it's not as cute, it is faster since the delegate is called directly. Or did I miss something?
November 21, 2009
== Quote from Max Samukha (spambox@d-coding.com)'s article
> On Sat, 21 Nov 2009 18:51:40 +0000 (UTC), dsimcha <dsimcha@yahoo.com>
> wrote:
> >== Quote from Max Samukha (spambox@d-coding.com)'s article
> >> On Fri, 20 Nov 2009 15:30:48 -0800, Walter Bright <newshound1@digitalmars.com> wrote:
> >> >Yigal Chripun wrote:
> >> >> what about foreach_reverse ?
> >> >
> >> >No love for foreach_reverse? <tear>
> >> And no mercy for opApply
> >
> >opApply **must** be kept!!!!  It's how my parallel foreach loop works.  This would be **impossible** to implement with ranges.  If opApply is removed now, I will fork the language over it.
> I guess it is possible:
> uint[] numbers = new uint[1_000];
> pool.parallel_each!((size_t i){
>         numbers[i] = i;
>     })(iota(0, numbers.length));
> Though I agree it's not as cute but it is faster since the delegate is
> called directly. Or did I miss something?

I'm sorry, but I put a lot of work into getting parallel foreach working, and I also have a few other pieces of code that depend on opApply and could not (easily) be rewritten in terms of ranges.  I feel very strongly that opApply and ranges accomplish different enough goals that they should both be kept.

opApply is good when you **just** want to define foreach syntax and nothing else, with maximum flexibility as to how the foreach syntax is implemented.  Ranges are good when you want to solve a superset of this problem and define iteration over your object more generally, giving up some flexibility as to how this iteration will be implemented.

Furthermore, ranges don't allow for overloading based on the iteration type.  For example, you can't do this with ranges:

foreach(char[] line; file) {}  // Recycles buffer.
foreach(string line; file) {}  // Doesn't recycle buffer.

They also don't allow iterating over more than one variable, like: foreach(var1, var2, var3; myObject) {}
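
To make both points concrete, here is a minimal sketch (the LineReader type and its readLine helper are made up for illustration, not anything from Phobos) of how opApply overloads can select on the iteration type and yield more than one loop variable:

// Hypothetical helper standing in for real I/O; refills buf and returns
// false at end of input.
bool readLine(ref char[] buf) { return false; }

struct LineReader
{
    char[] buf;   // reused across iterations

    // foreach (char[] line; reader) -- recycles the buffer
    int opApply(int delegate(ref char[]) dg)
    {
        int result = 0;
        while (readLine(buf))
        {
            result = dg(buf);
            if (result) break;
        }
        return result;
    }

    // foreach (string line; reader) -- hands out a fresh copy each time
    int opApply(int delegate(ref string) dg)
    {
        int result = 0;
        while (readLine(buf))
        {
            string line = buf.idup;   // copy, so the caller may keep it
            result = dg(line);
            if (result) break;
        }
        return result;
    }

    // foreach (size_t i, char[] line; reader) -- more than one loop variable
    int opApply(int delegate(ref size_t, ref char[]) dg)
    {
        int result = 0;
        size_t i = 0;
        while (readLine(buf))
        {
            result = dg(i, buf);
            if (result) break;
            ++i;
        }
        return result;
    }
}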

Contrary to popular belief, opApply doesn't even have to be slow.  Ranges can be as slow as or slower than opApply if at least one of the three functions (front, popFront, empty) is not inlined.  This actually happens in practice.  For example, based on reading disassemblies and the code in inline.c, neither front() nor popFront() in std.range.Take is ever inlined.  If the range functions are virtual, none of them will be inlined.
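
For reference, this is roughly what foreach over a range lowers to (a simplified sketch, not the exact compiler output), which is why all three member functions need to be inlined for the loop to be cheap; with opApply the loop body becomes a single delegate that the container calls back:

import std.range : iota;

void demo()
{
    auto r = iota(0, 10);

    // Roughly the lowering of: foreach (x; r) { ... }
    // One call each to empty, front and popFront per iteration.
    for (auto rc = r; !rc.empty; rc.popFront())
    {
        auto x = rc.front;
        // loop body
    }
}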

Just as importantly, I've confirmed by reading the disassembly that LDC is capable of inlining the loop body of opApply at optimization levels >= O3.  If D becomes mainstream, D2 will eventually also be implemented on a compiler that's smart enough to do stuff like this.  To remove opApply for performance reasons would be to let the capabilities of DMD's current optimizer influence long-term design decisions.

If anyone sees any harm in keeping opApply other than a slightly larger language spec, please let me know.  Despite its having been superseded by ranges for a subset of use cases (and this subset, I will acknowledge, is better handled by ranges), I actually think the flexibility it gives in terms of how foreach can be implemented makes it one of D's best features.
November 21, 2009
Sat, 21 Nov 2009 21:49:21 +0200, Max Samukha wrote:

> On Sat, 21 Nov 2009 18:51:40 +0000 (UTC), dsimcha <dsimcha@yahoo.com>
> wrote:
> 
>>== Quote from Max Samukha (spambox@d-coding.com)'s article
>>> On Fri, 20 Nov 2009 15:30:48 -0800, Walter Bright <newshound1@digitalmars.com> wrote:
>>> >Yigal Chripun wrote:
>>> >> what about foreach_reverse ?
>>> >
>>> >No love for foreach_reverse? <tear>
>>> And no mercy for opApply
>>
>>opApply **must** be kept!!!!  It's how my parallel foreach loop works. This would be **impossible** to implement with ranges.  If opApply is removed now, I will fork the language over it.
> 
> I guess it is possible:
> 
> uint[] numbers = new uint[1_000];
> 
> pool.parallel_each!((size_t i){
>         numbers[i] = i;
>     })(iota(0, numbers.length));
> 
> Though I agree it's not as cute but it is faster since the delegate is called directly. Or did I miss something?

Sorry for asking this stupid question, but why was 'iota' chosen for that purpose? I was kind of expecting that D2 would have extension methods, which would allow writing '0.to(numbers.length)' by defining 'auto to(int begin, int end) { ... }'. I looked at Wikipedia (http://en.wikipedia.org/wiki/Iota) and it seems to give credit to the Go programming language, which seems rather weird since iota probably existed in D long before Go was released. I've never heard of APL itself being used anywhere.
November 21, 2009
== Quote from retard (re@tard.com.invalid)'s article
> Sat, 21 Nov 2009 21:49:21 +0200, Max Samukha wrote:
> > On Sat, 21 Nov 2009 18:51:40 +0000 (UTC), dsimcha <dsimcha@yahoo.com>
> > wrote:
> >
> >>== Quote from Max Samukha (spambox@d-coding.com)'s article
> >>> On Fri, 20 Nov 2009 15:30:48 -0800, Walter Bright <newshound1@digitalmars.com> wrote:
> >>> >Yigal Chripun wrote:
> >>> >> what about foreach_reverse ?
> >>> >
> >>> >No love for foreach_reverse? <tear>
> >>> And no mercy for opApply
> >>
> >>opApply **must** be kept!!!!  It's how my parallel foreach loop works. This would be **impossible** to implement with ranges.  If opApply is removed now, I will fork the language over it.
> >
> > I guess it is possible:
> >
> > uint[] numbers = new uint[1_000];
> >
> > pool.parallel_each!((size_t i){
> >         numbers[i] = i;
> >     })(iota(0, numbers.length));
> >
> > Though I agree it's not as cute but it is faster since the delegate is called directly. Or did I miss something?
> Sorry for asking this stupid question but why was 'iota' chosen for that purpose? I was kind of expecting that D2 would have these extension methods which would allow writing '0.to(numbers.length)' by defining 'auto to(int begin, int end) { ... }'. I looked at wikipedia (http:// en.wikipedia.org/wiki/Iota) and they seem to give credit for the Go programming language which seems rather weird since they probably existed in D long before Go was released. I've never heard of this APL being used anywhere.

Because parallel foreach works over arbitrary ranges (though somewhat inefficiently over non-random-access ranges), and iota is what's implemented now, so that's what I used in my examples.  If we get extension methods and 0.to(numbers.length) returns a random-access range of integers [0, numbers.length), then this will work, too.
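
A minimal sketch of what that could look like, assuming extension-method call syntax were available (the 'to' helper below is made up for illustration and has nothing to do with std.conv.to):

import std.range : iota;

// 0.to(n) would be rewritten by the compiler as to(0, n), i.e. just iota
// under a friendlier name, returning the half-open range [begin, end).
auto to(size_t begin, size_t end) { return iota(begin, end); }

void example(uint[] numbers)
{
    auto indices = to(0, numbers.length);   // what 0.to(numbers.length) would mean
    foreach (i; indices)
        numbers[i] = cast(uint) i;
}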
November 21, 2009
On Nov 22, 09 04:32, retard wrote:
> Sat, 21 Nov 2009 21:49:21 +0200, Max Samukha wrote:
>
>> On Sat, 21 Nov 2009 18:51:40 +0000 (UTC), dsimcha<dsimcha@yahoo.com>
>> wrote:
>>
>>> == Quote from Max Samukha (spambox@d-coding.com)'s article
>>>> On Fri, 20 Nov 2009 15:30:48 -0800, Walter Bright
>>>> <newshound1@digitalmars.com>  wrote:
>>>>> Yigal Chripun wrote:
>>>>>> what about foreach_reverse ?
>>>>>
>>>>> No love for foreach_reverse?<tear>
>>>> And no mercy for opApply
>>>
>>> opApply **must** be kept!!!!  It's how my parallel foreach loop works.
>>> This would be **impossible** to implement with ranges.  If opApply is
>>> removed now, I will fork the language over it.
>>
>> I guess it is possible:
>>
>> uint[] numbers = new uint[1_000];
>>
>> pool.parallel_each!((size_t i){
>>          numbers[i] = i;
>>      })(iota(0, numbers.length));
>>
>> Though I agree it's not as cute but it is faster since the delegate is
>> called directly. Or did I miss something?
>
> Sorry for asking this stupid question but why was 'iota' chosen for that
> purpose? I was kind of expecting that D2 would have these extension
> methods which would allow writing '0.to(numbers.length)' by defining
> 'auto to(int begin, int end) { ... }'. I looked at wikipedia (http://
> en.wikipedia.org/wiki/Iota) and they seem to give credit for the Go
> programming language which seems rather weird since they probably existed
> in D long before Go was released. I've never heard of this APL being used
> anywhere.

Because C++ used iota. "range" would be easier to understand, though; I'd say even a C++ programmer is more likely to know what range(1,10) means than iota(1,10).

Also, "to" is already used for conversion (e.g. to!(string)(345)).

November 22, 2009
dsimcha wrote:
> == Quote from Max Samukha (spambox@d-coding.com)'s article
>> On Sat, 21 Nov 2009 18:51:40 +0000 (UTC), dsimcha <dsimcha@yahoo.com>
>> wrote:
>>> == Quote from Max Samukha (spambox@d-coding.com)'s article
>>>> On Fri, 20 Nov 2009 15:30:48 -0800, Walter Bright
>>>> <newshound1@digitalmars.com> wrote:
>>>>> Yigal Chripun wrote:
>>>>>> what about foreach_reverse ?
>>>>> No love for foreach_reverse? <tear>
>>>> And no mercy for opApply
>>> opApply **must** be kept!!!!  It's how my parallel foreach loop works.  This would
>>> be **impossible** to implement with ranges.  If opApply is removed now, I will
>>> fork the language over it.
>> I guess it is possible:
>> uint[] numbers = new uint[1_000];
>> pool.parallel_each!((size_t i){
>>         numbers[i] = i;
>>     })(iota(0, numbers.length));
>> Though I agree it's not as cute but it is faster since the delegate is
>> called directly. Or did I miss something?
> 
> I'm sorry, but I put a lot of work into getting parallel foreach working, and I
> also have a few other pieces of code that depend on opApply and could not (easily)
> be rewritten in terms of ranges.  I feel very strongly that opApply and ranges
> accomplish different enough goals that they should both be kept.
> 
> opApply is good when you **just** want to define foreach syntax and nothing else,
> with maximum flexibility as to how the foreach syntax is implemented.  Ranges are
> good when you want to solve a superset of this problem and define iteration over
> your object more generally, giving up some flexibility as to how this iteration
> will be implemented.
> 
> Furthermore, ranges don't allow for overloading based on the iteration type.  For
> example, you can't do this with ranges:
> 
> foreach(char[] line; file) {}  // Recycles buffer.
> foreach(string line; file) {}  // Doesn't recycle buffer.
> 
> They also don't allow iterating over more than one variable, like:
> foreach(var1, var2, var3; myObject) {}
> 
> Contrary to popular belief, opApply doesn't even have to be slow.  Ranges can be
> as slow as or slower than opApply if at least one of the three functions (front,
> popFront, empty) is not inlined.   This actually happens in practice.  For
> example, based on reading disassemblies and the code to inline.c, neither front()
> nor popFront() in std.range.Take is ever inlined.  If the range functions are
> virtual, none of them will be inlined.
> 
> Just as importantly, I've confirmed by reading the disassembly that LDC is capable
> of inlining the loop body of opApply at optimization levels >= O3.  If D becomes
> mainstream, D2 will eventually also be implemented on a compiler that's smart
> enough to do stuff like this.  To remove opApply for performance reasons would be
> to let the capabilities of DMD's current optimizer influence long-term design
> decisions.
> 
> If anyone sees any harm in keeping opApply other than a slightly larger language
> spec, please let me know.  Despite its having been superseded by ranges for a
> subset of use cases (and this subset, I will acknowledge, is better handled by
> ranges), I actually think the flexibility it gives in terms of how foreach can be
> implemented makes it one of D's best features.

There are three styles of iteration: internal to the container; external, by index, pointer, range, etc.; and a third design using co-routines (fibers), in which the container iterates itself internally and yields a single item on each call.
Ranges accomplish only the external style of iteration; opApply allows for internal iteration. All three strategies have their uses and should be allowed in D.
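
A rough sketch of the first two styles on a toy container (the type and names below are made up for illustration); the fiber-based style would run the same internal loop inside a core.thread Fiber and yield once per element instead of calling a delegate:

struct IntList
{
    int[] data;

    // External iteration: the caller drives the loop through a range
    // (a slice of the data is already a random-access range).
    int[] asRange() { return data; }

    // Internal iteration: the container drives the loop and calls back
    // into the loop body via the delegate.
    int opApply(int delegate(ref int) dg)
    {
        int result = 0;
        foreach (ref x; data)
        {
            result = dg(x);
            if (result) break;
        }
        return result;
    }
}

void example()
{
    auto list = IntList([1, 2, 3]);

    foreach (x; list.asRange()) {}  // external: empty/front/popFront drive the loop
    foreach (x; list) {}            // internal: opApply decides how the data is walked
}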
November 22, 2009
dsimcha wrote:
> If anyone sees any harm in keeping opApply other than a slightly larger language
> spec, please let me know.  Despite its having been superseded by ranges for a
> subset of use cases (and this subset, I will acknowledge, is better handled by
> ranges), I actually think the flexibility it gives in terms of how foreach can be
> implemented makes it one of D's best features.

Thanks for writing this. It's nice to have a compelling argument, and you gave one. Looks like opApply is staying.
November 22, 2009
== Quote from Yigal Chripun (yigal100@gmail.com)'s article
> dsimcha wrote:
> > == Quote from Max Samukha (spambox@d-coding.com)'s article
> >> On Sat, 21 Nov 2009 18:51:40 +0000 (UTC), dsimcha <dsimcha@yahoo.com>
> >> wrote:
> >>> == Quote from Max Samukha (spambox@d-coding.com)'s article
> >>>> On Fri, 20 Nov 2009 15:30:48 -0800, Walter Bright <newshound1@digitalmars.com> wrote:
> >>>>> Yigal Chripun wrote:
> >>>>>> what about foreach_reverse ?
> >>>>> No love for foreach_reverse? <tear>
> >>>> And no mercy for opApply
> >>> opApply **must** be kept!!!!  It's how my parallel foreach loop works.  This would be **impossible** to implement with ranges.  If opApply is removed now, I will fork the language over it.
> >> I guess it is possible:
> >> uint[] numbers = new uint[1_000];
> >> pool.parallel_each!((size_t i){
> >>         numbers[i] = i;
> >>     })(iota(0, numbers.length));
> >> Though I agree it's not as cute but it is faster since the delegate is
> >> called directly. Or did I miss something?
> >
> > I'm sorry, but I put a lot of work into getting parallel foreach working, and I also have a few other pieces of code that depend on opApply and could not (easily) be rewritten in terms of ranges.  I feel very strongly that opApply and ranges accomplish different enough goals that they should both be kept.
> >
> > opApply is good when you **just** want to define foreach syntax and nothing else, with maximum flexibility as to how the foreach syntax is implemented.  Ranges are good when you want to solve a superset of this problem and define iteration over your object more generally, giving up some flexibility as to how this iteration will be implemented.
> >
> > Furthermore, ranges don't allow for overloading based on the iteration type.  For example, you can't do this with ranges:
> >
> > foreach(char[] line; file) {}  // Recycles buffer.
> > foreach(string line; file) {}  // Doesn't recycle buffer.
> >
> > They also don't allow iterating over more than one variable, like: foreach(var1, var2, var3; myObject) {}
> >
> > Contrary to popular belief, opApply doesn't even have to be slow.  Ranges can be as slow as or slower than opApply if at least one of the three functions (front, popFront, empty) is not inlined.   This actually happens in practice.  For example, based on reading disassemblies and the code to inline.c, neither front() nor popFront() in std.range.Take is ever inlined.  If the range functions are virtual, none of them will be inlined.
> >
> > Just as importantly, I've confirmed by reading the disassembly that LDC is capable of inlining the loop body of opApply at optimization levels >= O3.  If D becomes mainstream, D2 will eventually also be implemented on a compiler that's smart enough to do stuff like this.  To remove opApply for performance reasons would be to let the capabilities of DMD's current optimizer influence long-term design decisions.
> >
> > If anyone sees any harm in keeping opApply other than a slightly larger language spec, please let me know.  Despite its having been superseded by ranges for a subset of use cases (and this subset, I will acknowledge, is better handled by ranges), I actually think the flexibility it gives in terms of how foreach can be implemented makes it one of D's best features.
> There are three types of iteration: internal to the container, external
> by index, pointer, range, etc, and a third design with co-routines
> (fibers) in which the container internally iterates itself and yields a
> single item on each call.
> Ranges accomplish only the external type of iteration. opApply allows
> for internal iteration. All three strategies have their uses and should
> be allowed in D.

Exactly.  I've said this before, but I think you said it much better.  Now that Walter has agreed to keep opApply, this should really be explained somewhere in TDPL and in the online docs to clarify for newcomers why someone would choose opApply over ranges or vice versa.  External iteration is more flexible for the user of the object; internal iteration is more flexible for the designer of the object.
November 23, 2009
Walter Bright wrote:
> dsimcha wrote:
>> If anyone sees any harm in keeping opApply other than a slightly larger language
>> spec, please let me know.  Despite its having been superseded by ranges for a
>> subset of use cases (and this subset, I will acknowledge, is better handled by
>> ranges), I actually think the flexibility it gives in terms of how foreach can be
>> implemented makes it one of D's best features.
> 
> Thanks for writing this. It's nice to have a compelling argument, and you gave one. Looks like opApply is staying.

I think the same. Thanks, David.

Andrei