August 27, 2008
superdan wrote:
> yeppers. amend that to o(log n). in d, that rule is a social contract derived from the built-in vector and hash indexing syntax.

I see what you did thar -- you made up a rule you like and called it a "social contract". Whether it _should be_ a rule or not is debatable, but it is neither a written nor unwritten rule in use right now, so what you said there is a lie.

First, a hash access is already time unbounded. hash["hello"] where "hello" is not already in the hash will create a hash entry for hello. This requires heap allocation, which can take arbitrarily long. So having unbounded opIndex is in the language already!

Second, opIndex can be used for things other than data structures. For example, if I had a handle to a folder that had a file "foo.txt" in it, folder["foo.txt"] seems a natural syntax to create a handle to that file (which allocates memory = time unbounded). I can see the opIndex syntax being used for things like properties that may require searching through a parse tree. Maybe this is sort of stretching it, but I wouldn't mind having the opIndex syntax as a shorthand for executing database queries, i.e. `auto result = db["SELECT * FROM posts WHERE from = 'superdan'"];`.
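To make that concrete, here's a rough sketch of what I mean (all the types and names below are made up for illustration, not any real API):

```d
import std.stdio;

// Hypothetical wrapper types, purely for illustration.
struct FileHandle
{
    string fullPath;
}

struct Folder
{
    string path;

    // folder["foo.txt"] -> handle to a file inside this folder.
    // May allocate, may even touch the disk; no complexity promise implied.
    FileHandle opIndex(string name)
    {
        return FileHandle(path ~ "/" ~ name);
    }
}

struct Database
{
    // db["SELECT ..."] -> result of running that query.
    // Obviously unbounded in time; [] is just convenient syntax here.
    string[] opIndex(string query)
    {
        // stand-in for actually talking to a database
        return ["fake row for: " ~ query];
    }
}

void main()
{
    auto folder = Folder("/home/dan");
    auto file = folder["foo.txt"];

    auto db = Database();
    auto result = db["SELECT * FROM posts WHERE from = 'superdan'"];

    writeln(file.fullPath);
    writeln(result);
}
```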

It's a shorthand syntax that makes no guarantees as far as complexity nor should it.
August 27, 2008
"Dee Girl" <deegirl@noreply.com> wrote in message news:g943oi$11f4$1@digitalmars.com...
> Benji Smith Wrote:
>
>> Dee Girl wrote:
>> > Michiel Helvensteijn Wrote:
>> >> That's simple. a[i] looks much nicer than a.nth(i).
>> >
>> > It is not nicer. It is more deceiving (correct spell?). If you look at code it looks like array code.
>> >
>> > foreach (i; 0 .. a.length)
>> > {
>> >     a[i] += 1;
>> > }
>> >
>> > For array works nice. But for list it is terrible! Many operations for incrementing only small list.
>>
>> Well, that's what you get with operator overloading.
>
> I am sorry. I disagree. I think that is what you get with bad design.
>
>> The same thing could be said for "+" or "-". They're inherently deceiving, because they look like builtin operations on primitive data types.
>>
>> For expensive operations (like performing division on an unlimited-precision decimal object), should the author of the code use "opDiv" or should he implement a separate "divide" function?
>
> The cost of + and - is proportional to digits in number. For small number of digits computer does fast in hardware. For many digits the cost grows. The number of digits is log n. I think + and - are fine for big integer. I am not surprise.
>
>> Forget opIndex for a moment, and ask the more general question about all overloaded operators. Should they imply any sort of asymptotic complexity guarantee?
>
> I think depends on good design. For example I think ++ or -- for iterator. If it is O(n) it is bad design. Bad design make people say like you "This is what you get with operator overloading".
>
>> Personally, I don't think so.
>>
>> I don't like "nth".
>>
>> I'd rather use the opIndex. And if I'm using a linked list, I'll be aware of the fact that it'll exhibit linear-time indexing, and I'll be cautious about which algorithms to use.
>
> But inside algorithm you do not know if you use a linked list or a vector. You lost that information in bad abstraction. Also abstraction is bad because if you change data structure you have concept errors that still compile. And run until tomorrow ^_^.
>

A generic algorithm has absolutely no business caring about the complexity of the collection it's operating on. If it does, then you've created a concrete algorithm, not a generic one. If an algorithm uses [] and doesn't know the complexity of the []...good! It shouldn't know, and it shouldn't care. It's the code that sends the collection to the algorithm that knows and cares. Why? Because "what algorithm is best?" depends on far more than just what type of collection is used. It depends on "Will the collection ever be larger than X elements?". It depends on "Is it a standard textbook list, or does it use trick 1 and/or trick 2?". It depends on "Is it usually mostly sorted or mostly random?". It depends on "What do I do with it most often? Sort, append, search, insert or delete?". And it depends on other things, too.

Using "[]" versus "nth()" can't tell the algoritm *any* of those things. But those things *must* be known in order to make an accurate decision of "Is this the right algoritm or not?" Therefore, a generic algoritm *cannot* ever know for certain if it's the right algoritm *even* if you say "[]" means "O(log n) or better". Therefore, the algorithm should not be designed to only work with certain types of collections. The code that sends the collection to the algoritm is the *only* code that knows the answers to all of the questions above, therefore it is the only code that should ever decide "I should use this algorithm, I shouldn't use that algorithm."

> I also like or do not like things. But good reason can convince me? Thank you, Dee Girl.



August 27, 2008
Nick Sabalausky Wrote:

> "Dee Girl" <deegirl@noreply.com> wrote in message news:g943oi$11f4$1@digitalmars.com...
> > Benji Smith Wrote:
> >
> >> Dee Girl wrote:
> >> > Michiel Helvensteijn Wrote:
> >> >> That's simple. a[i] looks much nicer than a.nth(i).
> >> >
> >> > It is not nicer. It is more deceiving (correct spell?). If you look at code it looks like array code.
> >> >
> >> > foreach (i; 0 .. a.length)
> >> > {
> >> >     a[i] += 1;
> >> > }
> >> >
> >> > For array works nice. But for list it is terrible! Many operations for incrementing only small list.
> >>
> >> Well, that's what you get with operator overloading.
> >
> > I am sorry. I disagree. I think that is what you get with bad design.
> >
> >> The same thing could be said for "+" or "-". They're inherently deceiving, because they look like builtin operations on primitive data types.
> >>
> >> For expensive operations (like performing division on an unlimited-precision decimal object), should the author of the code use "opDiv" or should he implement a separate "divide" function?
> >
> > The cost of + and - is proportional to digits in number. For small number of digits computer does fast in hardware. For many digits the cost grows. The number of digits is log n. I think + and - are fine for big integer. I am not surprise.
> >
> >> Forget opIndex for a moment, and ask the more general question about all overloaded operators. Should they imply any sort of asymptotic complexity guarantee?
> >
> > I think depends on good design. For example I think ++ or -- for iterator. If it is O(n) it is bad design. Bad design make people say like you "This is what you get with operator overloading".
> >
> >> Personally, I don't think so.
> >>
> >> I don't like "nth".
> >>
> >> I'd rather use the opIndex. And if I'm using a linked list, I'll be aware of the fact that it'll exhibit linear-time indexing, and I'll be cautious about which algorithms to use.
> >
> > But inside algorithm you do not know if you use a linked list or a vector. You lost that information in bad abstraction. Also abstraction is bad because if you change data structure you have concept errors that still compile. And run until tomorrow ^_^.
> >
> 
> A generic algoritm has absolutely no business caring about the complexity of the collection it's operating on.

absolutely. desperate. need. of. chanel.

> If it does, then you've created a concrete algoritm, not a generic one.

sure you don't know what you're talking about. it is generic insofar as it abstracts away the primitives it needs from its iterator. run don't walk and get a dose of stl.

> If an algoritm uses [] and doesn't know the complexity of the []...good! It shouldn't know, and it shouldn't care.

nonsense. so wrong i won't even address it.

> It's the code that sends the collection to the algoritm that knows and cares. Why? Because "what algoritm is best?" depends on far more than just what type of collection is used. It depends on "Will the collection ever be larger than X elements?". It depends on "Is it a standard textbook list, or does it use trick 1 and/or trick 2?". It depends on "Is it usually mostly sorted or mostly random?". It depends on "What do I do with it most often? Sort, append, search, insert or delete?". And it depends on other things, too.

sure it does. problem is you have it backwards. types and algos tell you the theoretical properties. then you in knowledge of what's goin' on use the algorithm that does it for you in knowledge that the complexity would work for your situation. or you encode your own specialized algorithm.

thing is stl encoded the most general linear search. you can use it for linear searching everything. moreover it exactly said what's needed for a linear search: a one-pass forward iterator aka input iterator.

now to tie it with what u said: you know in your situation whether linear find cuts the mustard. that don't change the nature of that fundamental algorithm. so you use it or use another. your choice. but find remains universal so long as it has access to the basics of one-pass iteration.
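here's the gist in d, range-style (just a sketch o' the idea with names i made up, not stl's actual code and not what std.algorithm spells it as):

```d
// find asks for the bare minimum it needs: one-pass forward iteration.
// complexity is honest: exactly one pass, O(n) front/popFront calls.
R find(R, T)(R range, T value)
{
    for (; !range.empty; range.popFront())
    {
        if (range.front == value)
            break;
    }
    return range;
}

// any type providing empty/front/popFront qualifies, list or not.
struct Numbers
{
    int[] data;
    bool empty() const { return data.length == 0; }
    int front() const { return data[0]; }
    void popFront() { data = data[1 .. $]; }
}

void main()
{
    auto r = find(Numbers([3, 1, 4, 1, 5]), 4);
    assert(!r.empty && r.front == 4);
}
```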

> Using "[]" versus "nth()" can't tell the algoritm *any* of those things.

doesn't have to.

> But those things *must* be known in order to make an accurate decision of "Is this the right algoritm or not?"

sure. you make them decision on the call site.

> Therefore, a generic algoritm *cannot* ever know for certain if it's the right algoritm *even* if you say "[]" means "O(log n) or better".

utterly wrong. poppycock. gobbledygook. nonsense. this is so far off i don't have time to even address. if you want to learn stuff go learn stl. then we talk. if you want to teach me i guess we're done.

> Therefore, the algorithm should not be designed to only work with certain types of collections.

it should be designed to work with certain iterator categories.

> The code that sends the collection to the algoritm is the *only* code that knows the answers to all of the questions above, therefore it is the only code that should ever decide "I should use this algorithm, I shouldn't use that algorithm."

correct. you just have all of your other hypotheses jumbled.

sorry dood don't be hatin' but there's so much you don't know i ain't gonna continue this. last word is yours. call me a pompous prick if you want. go ahead.
August 27, 2008
Robert Fraser Wrote:

> superdan wrote:
> > yeppers. amend that to o(log n). in d, that rule is a social contract derived from the built-in vector and hash indexing syntax.
> 
> I see what you did thar -- you made up a rule you like and called it a "social contract". Whether it _should be_ a rule or not is debatable, but it is neither a written nor unwritten rule in use right now, so what you said there is a lie.

well i'm exposed. good goin' johnny drama.

in c++ it's written. in d it's not yet. lookin' at std.algorithm i have no doubt it will. so my lie is really a prophecy :D

> First, a hash access is already time unbounded. hash["hello"] where "hello" is not already in the hash will create a hash entry for hello. This requires heap allocation, which can take arbitrarily long. So having unbounded opIndex is in the language already!

hash was always an oddball. it is acceptable because it offers constant time [] on average.

> Second, opIndex can be used for things other than data structures. For example, if I had a handle to a folder that had a file "foo.txt" in it, folder["foo.txt"] seems a natural syntax to create a handle to that file (which allocates memory = time unbounded).

guess i wouldn't be crazy about it. but yeah it works no problem.

s'pose there's a misunderstanding s'mewhere. i'm not against opIndex usage in various data structs. no problem! i am only against opIndex masquerading as random access in a collection. that would allow algos thinkin' they do some effin' good iteration. when in fact they do linear search each time they make a pass. completely throws the shit towards the fan.
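quick sketch of the disaster so we're lookin' at the same thing (made-up list type): the loop reads like o(n), but with an o(n) opIndex it's o(n^2).

```d
struct Node { int value; Node* next; }

// a list whose opIndex masquerades as random access
struct List
{
    Node* head;
    size_t length;

    void prepend(int v)
    {
        head = new Node(v, head);
        ++length;
    }

    // looks like random access, is actually a walk from the head every call
    int opIndex(size_t i)
    {
        auto n = head;
        while (i--)
            n = n.next;
        return n.value;
    }
}

void main()
{
    List list;
    foreach (v; 0 .. 1_000)
        list.prepend(v);

    long sum = 0;
    // reads like an array loop; really ~1_000 * 1_000 / 2 pointer hops
    foreach (i; 0 .. list.length)
        sum += list[i];
}
```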

> I can see the opIndex syntax being used for things like properties that may require searching through a parse tree. Maybe this is sort of stretching it, but I wouldn't mind having the opIndex syntax as a shorthand for executing database queries, i.e. `auto result = db["SELECT * FROM posts WHERE from = 'superdan'"];`.
> 
> It's a shorthand syntax that makes no guarantees as far as complexity nor should it.

kinda cute, but 100% agree.
August 27, 2008
"superdan" <super@dan.org> wrote in message news:g94g3e$20e9$1@digitalmars.com...
> Nick Sabalausky Wrote:
>
>> "Dee Girl" <deegirl@noreply.com> wrote in message news:g943oi$11f4$1@digitalmars.com...
>> > Benji Smith Wrote:
>> >
>> >> Dee Girl wrote:
>> >> > Michiel Helvensteijn Wrote:
>> >> >> That's simple. a[i] looks much nicer than a.nth(i).
>> >> >
>> >> > It is not nicer. It is more deceiving (correct spell?). If you look
>> >> > at
>> >> > code it looks like array code.
>> >> >
>> >> > foreach (i; 0 .. a.length)
>> >> > {
>> >> >     a[i] += 1;
>> >> > }
>> >> >
>> >> > For array works nice. But for list it is terrible! Many operations
>> >> > for
>> >> > incrementing only small list.
>> >>
>> >> Well, that's what you get with operator overloading.
>> >
>> > I am sorry. I disagree. I think that is what you get with bad design.
>> >
>> >> The same thing could be said for "+" or "-". They're inherently deceiving, because they look like builtin operations on primitive data types.
>> >>
>> >> For expensive operations (like performing division on an unlimited-precision decimal object), should the author of the code use "opDiv" or should he implement a separate "divide" function?
>> >
>> > The cost of + and - is proportional to digits in number. For small
>> > number
>> > of digits computer does fast in hardware. For many digits the cost
>> > grows.
>> > The number of digits is log n. I think + and - are fine for big
>> > integer. I
>> > am not surprise.
>> >
>> >> Forget opIndex for a moment, and ask the more general question about
>> >> all
>> >> overloaded operators. Should they imply any sort of asymptotic
>> >> complexity guarantee?
>> >
>> > I think depends on good design. For example I think ++ or -- for
>> > iterator.
>> > If it is O(n) it is bad design. Bad design make people say like you
>> > "This
>> > is what you get with operator overloading".
>> >
>> >> Personally, I don't think so.
>> >>
>> >> I don't like "nth".
>> >>
>> >> I'd rather use the opIndex. And if I'm using a linked list, I'll be aware of the fact that it'll exhibit linear-time indexing, and I'll be cautious about which algorithms to use.
>> >
>> > But inside algorithm you do not know if you use a linked list or a
>> > vector.
>> > You lost that information in bad abstraction. Also abstraction is bad
>> > because if you change data structure you have concept errors that still
>> > compile. And run until tomorrow ^_^.
>> >
>>
>> A generic algoritm has absolutely no business caring about the complexity
>> of
>> the collection it's operating on.
>
> absolutely. desperate. need. of. chanel.
>
>> If it does, then you've created a concrete
>> algoritm, not a generic one.
>
> sure you don't know what you're talking about. it is generic insofar as it abstracts away the primitives it needs from its iterator. run don't walk and get a dose of stl.
>
>> If an algoritm uses [] and doesn't know the
>> complexity of the []...good! It shouldn't know, and it shouldn't care.
>
> nonsense. so wrong i won't even address it.
>
>> It's
>> the code that sends the collection to the algoritm that knows and cares.
>> Why? Because "what algoritm is best?" depends on far more than just what
>> type of collection is used. It depends on "Will the collection ever be
>> larger than X elements?". It depends on "Is it a standard textbook list,
>> or
>> does it use trick 1 and/or trick 2?". It depends on "Is it usually mostly
>> sorted or mostly random?". It depends on "What do I do with it most
>> often?
>> Sort, append, search, insert or delete?". And it depends on other things,
>> too.
>
> sure it does. problem is you have it backwards. types and algos tell you the theoretical properties. then you in knowledge of what's goin' on use the algorithm that does it for you in knowledge that the complexity would work for your situation. or you encode your own specialized algorithm.
>
> thing is stl encoded the most general linear search. you can use it for linear searching everything. moreover it exactly said what's needed for a linear search: a one-pass forward iterator aka input iterator.
>
> now to tie it with what u said: you know in your situation whether linear find cuts the mustard. that don't change the nature of that fundamental algorithm. so you use it or use another. your choice. but find remains universal so long as it has access to the basics of one-pass iteration.
>
>> Using "[]" versus "nth()" can't tell the algoritm *any* of those things.
>
> doesn't have to.
>
>> But
>> those things *must* be known in order to make an accurate decision of "Is
>> this the right algoritm or not?"
>
> sure. you make them decision on the call site.
>
>> Therefore, a generic algoritm *cannot* ever
>> know for certain if it's the right algoritm *even* if you say "[]" means
>> "O(log n) or better".
>
> utterly wrong. poppycock. gobbledygook. nonsense. this is so far off i don't have time to even address. if you want to learn stuff go learn stl. then we talk. if you want to teach me i guess we're done.
>
>> Therefore, the algorithm should not be designed to
>> only work with certain types of collections.
>
> it should be designed to work with certain iterator categories.
>
>> The code that sends the
>> collection to the algoritm is the *only* code that knows the answers to
>> all
>> of the questions above, therefore it is the only code that should ever
>> decide "I should use this algorithm, I shouldn't use that algorithm."
>
> correct. you just have all of your other hypotheses jumbled.
>
> sorry dood don't be hatin' but there's so much you don't know i ain't gonna continue this. last word is yours. call me a pompous prick if you want. go ahead.

I'll agree to drop this issue. There's little point in debating with someone whose arguments frequently consist of things like "You are wrong", "I'm not going to explain my point", and "dood don't be hatin'".


August 27, 2008
On Wed, 27 Aug 2008 16:33:24 -0400, Nick Sabalausky wrote:

> A generic algoritm has absolutely no business caring about the complexity of the collection it's operating on.

I also believe this to be true.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
August 27, 2008
Nick Sabalausky Wrote:

> "Dee Girl" <deegirl@noreply.com> wrote in message news:g943oi$11f4$1@digitalmars.com...
> > Benji Smith Wrote:
> >
> >> Dee Girl wrote:
> >> > Michiel Helvensteijn Wrote:
> >> >> That's simple. a[i] looks much nicer than a.nth(i).
> >> >
> >> > It is not nicer. It is more deceiving (correct spell?). If you look at code it looks like array code.
> >> >
> >> > foreach (i; 0 .. a.length)
> >> > {
> >> >     a[i] += 1;
> >> > }
> >> >
> >> > For array works nice. But for list it is terrible! Many operations for incrementing only small list.
> >>
> >> Well, that's what you get with operator overloading.
> >
> > I am sorry. I disagree. I think that is what you get with bad design.
> >
> >> The same thing could be said for "+" or "-". They're inherently deceiving, because they look like builtin operations on primitive data types.
> >>
> >> For expensive operations (like performing division on an unlimited-precision decimal object), should the author of the code use "opDiv" or should he implement a separate "divide" function?
> >
> > The cost of + and - is proportional to digits in number. For small number of digits computer does fast in hardware. For many digits the cost grows. The number of digits is log n. I think + and - are fine for big integer. I am not surprise.
> >
> >> Forget opIndex for a moment, and ask the more general question about all overloaded operators. Should they imply any sort of asymptotic complexity guarantee?
> >
> > I think depends on good design. For example I think ++ or -- for iterator. If it is O(n) it is bad design. Bad design make people say like you "This is what you get with operator overloading".
> >
> >> Personally, I don't think so.
> >>
> >> I don't like "nth".
> >>
> >> I'd rather use the opIndex. And if I'm using a linked list, I'll be aware of the fact that it'll exhibit linear-time indexing, and I'll be cautious about which algorithms to use.
> >
> > But inside algorithm you do not know if you use a linked list or a vector. You lost that information in bad abstraction. Also abstraction is bad because if you change data structure you have concept errors that still compile. And run until tomorrow ^_^.
> >
> 
> A generic algoritm has absolutely no business caring about the complexity of the collection it's operating on. If it does, then you've created a concrete algoritm, not a generic one.

I appreciate your view point. Please allow me explain. The view point is in opposition with STL. In STL each algorithm defines what kind of iterator it operates with. And it requires what iterator complexity.

I agree that other design can be made. But STL has that design. In my opinion is much part of what make STL so successful.

I disagree that algorithm that knows complexity of iterator is concrete. I think exactly contrary. Maybe it is good that you read book about STL by Josuttis. STL algorithms are the most generic I ever find in any language. I hope std.algorithm in D will be better. But right now std.algorithm works only with array.

> If an algoritm uses [] and doesn't know the complexity of the []...good! It shouldn't know, and it shouldn't care. It's the code that sends the collection to the algoritm that knows and cares.

I think this is mistake. Algorithm should know. Otherwise "linear find" is not "linear find"! It is "quadratic find". If you want to define something called linear find then you must know iterator complexity.

> Why? Because "what algoritm is best?" depends on far more than just what type of collection is used. It depends on "Will the collection ever be larger than X elements?". It depends on "Is it a standard textbook list, or does it use trick 1 and/or trick 2?". It depends on "Is it usually mostly sorted or mostly random?". It depends on "What do I do with it most often? Sort, append, search, insert or delete?". And it depends on other things, too.

I agree it depends on many things. But such practical matters do not change the nature of generic algorithm. Linear find is same on 5, 50, or 5 million objects. I have to say I also think you have inversed some ideas. Algorithm is same. You use it the way you want.

> Using "[]" versus "nth()" can't tell the algoritm *any* of those things.

This is interface convention. Like any other interface convention! Nobody says that IStack.Push() puts something on stack. It is described in documentation. If a concrete stack is wrong it can do anything.

Only special about [] is that built in array has []. So I do not think a list should want to look like array.

> But those things *must* be known in order to make an accurate decision of "Is this the right algoritm or not?" Therefore, a generic algoritm *cannot* ever know for certain if it's the right algoritm *even* if you say "[]" means "O(log n) or better". Therefore, the algorithm should not be designed to only work with certain types of collections. The code that sends the collection to the algoritm is the *only* code that knows the answers to all of the questions above, therefore it is the only code that should ever decide "I should use this algorithm, I shouldn't use that algorithm."

I respectfully disagree. For example binary_search in STL should never compile on a list. Because it would be simply wrong to use it with a list. It has no sense. So I am happy that STL does not allow that.
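Here is small sketch of the idea in D (I invent the check myself, it is not how STL or std.algorithm writes it): the algorithm states the capability it needs, and a container that does not advertise it does not compile.

```d
// Crude stand-in for "random access": the container must provide
// opIndex(size_t) and length. (STL makes a stronger, explicit promise
// through iterator categories; this only checks that the syntax exists.)
enum bool isRandomAccess(C) =
    is(typeof((C c) { auto x = c[0]; auto n = c.length; }));

// Will not even compile for a container that lacks [] and length.
bool binarySearch(C, T)(C haystack, T needle)
    if (isRandomAccess!C)
{
    size_t lo = 0, hi = haystack.length;
    while (lo < hi)
    {
        auto mid = lo + (hi - lo) / 2;
        if (haystack[mid] == needle)
            return true;
        if (haystack[mid] < needle)
            lo = mid + 1;
        else
            hi = mid;
    }
    return false;
}

void main()
{
    int[] a = [1, 3, 5, 7, 9];
    assert(binarySearch(a, 7));
    assert(!binarySearch(a, 4));
    // binarySearch(someForwardOnlyList, 7); // would be rejected at compile time
}
```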

I think you can easy build structure and algorithm library that allows wrong combinations. In programming you can do anything ^_^. But I think then I say: Your library is inferior and STL is superior. I am sorry, Dee Girl
August 27, 2008
Derek Parnell Wrote:

> On Wed, 27 Aug 2008 16:33:24 -0400, Nick Sabalausky wrote:
> 
> > A generic algoritm has absolutely no business caring about the complexity of the collection it's operating on.
> 
> I also believe this to be true.

It is true. But only if you take out of context. An algorithm does not need to know the complexity of collection. But it must have a minimum guarantee of what iterator the collection has. Is it forward only, bidirectional, or random access? This is small interface. And very easy to implement. Inside the iterator can do the thing it needs to access collection. Algorithm must not know it! Only ++, *, [] and comparison. That is why in STL algorithms are so general. Because they work with very small (narrow) interface. Thank you, Dee Girl
August 27, 2008
On 2008-08-27 23:27:52 +0200, "Nick Sabalausky" <a@a.a> said:

> "superdan" <super@dan.org> wrote in message
> news:g94g3e$20e9$1@digitalmars.com...
>> Nick Sabalausky Wrote:
>> 
>>> "Dee Girl" <deegirl@noreply.com> wrote in message
>>> news:g943oi$11f4$1@digitalmars.com...
>>>> Benji Smith Wrote:
>>>> 
>>>>> Dee Girl wrote:
>>>>>> Michiel Helvensteijn Wrote:
>>>>>>> That's simple. a[i] looks much nicer than a.nth(i).
>>>>>> 
>>>>>> It is not nicer. It is more deceiving (correct spell?). If you look
>>>>>> at
>>>>>> code it looks like array code.
>>>>>> 
>>>>>> foreach (i; 0 .. a.length)
>>>>>> {
>>>>>> a[i] += 1;
>>>>>> }
>>>>>> 
>>>>>> For array works nice. But for list it is terrible! Many operations
>>>>>> for
>>>>>> incrementing only small list.
>>>>> 
>>>>> Well, that's what you get with operator overloading.
>>>> 
>>>> I am sorry. I disagree. I think that is what you get with bad design.
>>>> 
>>>>> The same thing could be said for "+" or "-". They're inherently
>>>>> deceiving, because they look like builtin operations on primitive data
>>>>> types.
>>>>> 
>>>>> For expensive operations (like performing division on an
>>>>> unlimited-precision decimal object), should the author of the code use
>>>>> "opDiv" or should he implement a separate "divide" function?
>>>> 
>>>> The cost of + and - is proportional to digits in number. For small
>>>> number
>>>> of digits computer does fast in hardware. For many digits the cost
>>>> grows.
>>>> The number of digits is log n. I think + and - are fine for big
>>>> integer. I
>>>> am not surprise.
>>>> 
>>>>> Forget opIndex for a moment, and ask the more general question about
>>>>> all
>>>>> overloaded operators. Should they imply any sort of asymptotic
>>>>> complexity guarantee?
>>>> 
>>>> I think depends on good design. For example I think ++ or -- for
>>>> iterator.
>>>> If it is O(n) it is bad design. Bad design make people say like you
>>>> "This
>>>> is what you get with operator overloading".
>>>> 
>>>>> Personally, I don't think so.
>>>>> 
>>>>> I don't like "nth".
>>>>> 
>>>>> I'd rather use the opIndex. And if I'm using a linked list, I'll be
>>>>> aware of the fact that it'll exhibit linear-time indexing, and I'll be
>>>>> cautious about which algorithms to use.
>>>> 
>>>> But inside algorithm you do not know if you use a linked list or a
>>>> vector.
>>>> You lost that information in bad abstraction. Also abstraction is bad
>>>> because if you change data structure you have concept errors that still
>>>> compile. And run until tomorrow ^_^.
>>>> 
>>> 
>>> A generic algoritm has absolutely no business caring about the complexity
>>> of
>>> the collection it's operating on.
>> 
>> absolutely. desperate. need. of. chanel.
>> 
>>> If it does, then you've created a concrete
>>> algoritm, not a generic one.
>> 
>> sure you don't know what you're talking about. it is generic insofar as it
>> abstracts away the primitives it needs from its iterator. run don't walk
>> and get a dose of stl.
>> 
>>> If an algoritm uses [] and doesn't know the
>>> complexity of the []...good! It shouldn't know, and it shouldn't care.
>> 
>> nonsense. so wrong i won't even address it.
>> 
>>> It's
>>> the code that sends the collection to the algoritm that knows and cares.
>>> Why? Because "what algoritm is best?" depends on far more than just what
>>> type of collection is used. It depends on "Will the collection ever be
>>> larger than X elements?". It depends on "Is it a standard textbook list,
>>> or
>>> does it use trick 1 and/or trick 2?". It depends on "Is it usually mostly
>>> sorted or mostly random?". It depends on "What do I do with it most
>>> often?
>>> Sort, append, search, insert or delete?". And it depends on other things,
>>> too.
>> 
>> sure it does. problem is you have it backwards. types and algos tell you
>> the theoretical properties. then you in knowledge of what's goin' on use
>> the algorithm that does it for you in knowledge that the complexity would
>> work for your situation. or you encode your own specialized algorithm.
>> 
>> thing is stl encoded the most general linear search. you can use it for
>> linear searching everything. moreover it exactly said what's needed for a
>> linear search: a one-pass forward iterator aka input iterator.
>> 
>> now to tie it with what u said: you know in your situation whether linear
>> find cuts the mustard. that don't change the nature of that fundamental
>> algorithm. so you use it or use another. your choice. but find remains
>> universal so long as it has access to the basics of one-pass iteration.
>> 
>>> Using "[]" versus "nth()" can't tell the algoritm *any* of those things.
>> 
>> doesn't have to.
>> 
>>> But
>>> those things *must* be known in order to make an accurate decision of "Is
>>> this the right algoritm or not?"
>> 
>> sure. you make them decision on the call site.
>> 
>>> Therefore, a generic algoritm *cannot* ever
>>> know for certain if it's the right algoritm *even* if you say "[]" means
>>> "O(log n) or better".
>> 
>> utterly wrong. poppycock. gobbledygook. nonsense. this is so far off i
>> don't have time to even address. if you want to learn stuff go learn stl.
>> then we talk. if you want to teach me i guess we're done.
>> 
>>> Therefore, the algorithm should not be designed to
>>> only work with certain types of collections.
>> 
>> it should be designed to work with certain iterator categories.
>> 
>>> The code that sends the
>>> collection to the algoritm is the *only* code that knows the answers to
>>> all
>>> of the questions above, therefore it is the only code that should ever
>>> decide "I should use this algorithm, I shouldn't use that algorithm."
>> 
>> correct. you just have all of your other hypotheses jumbled.
>> 
>> sorry dood don't be hatin' but there's so much you don't know i ain't
>> gonna continue this. last word is yours. call me a pompous prick if you
>> want. go ahead.
> 
> I'll agree to drop this issue. There's little point in debating with someone
> whose arguments frequently consist of things like "You are wrong", "I'm not
> going to explain my point", and "dood don't be hatin'".

I am with dan, dee_girl & co on this issue: a generic algorithm "knows" the types it is working on and can easily check the operations they have, and based on this decide the strategy to use. This choice works well if the presence of a given operation is also connected with some performance guarantee.
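As a sketch of what I mean (a toy helper with invented names, in the spirit of std::advance in the STL): the strategy is picked at compile time from the operations the type provides.

```d
// Toy version of the "check the operations, pick the strategy" idea.
// Slicing available -> O(1) jump; otherwise -> n popFront steps.
void advanceBy(R)(ref R r, size_t n)
{
    static if (is(typeof(r = r[n .. r.length])))
    {
        r = r[n .. r.length];      // random access / sliceable: constant time
    }
    else
    {
        foreach (_; 0 .. n)        // forward-only: linear time
            r.popFront();
    }
}

// A forward-only range, to exercise the slow branch.
struct Counter
{
    int n, limit;
    bool empty() const { return n >= limit; }
    int front() const { return n; }
    void popFront() { ++n; }
}

void main()
{
    int[] a = [0, 1, 2, 3, 4];
    advanceBy(a, 2);
    assert(a == [2, 3, 4]);        // took the O(1) path

    auto c = Counter(0, 5);
    advanceBy(c, 2);
    assert(c.front == 2);          // took the O(n) path
}
```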

Concepts (or better, categories in the Aldor sense, not C++ concepts), which are interfaces for types, but interfaces that have to be explicitly assigned to a type, might relax this situation a little, but the need for some guarantees will remain.

Fawzi

August 28, 2008
superdan wrote:
> Chris R. Miller Wrote:
>> Furthermore, you're faulting linked lists as having a bad opIndex.  Why not implement a cursor (Java LinkedList-like iterator) in the opIndex function?  Thus you could retain the reference to the last indexed location, and simply use that instead of the root node when calling opIndex.  Granted that whenever the contents of the list are modified that reference would have to be considered invalid (start from the root node again), but it'd work with an O(1) efficiency for sequential accesses from 0 to length.  True, it'll add another pointer to a node in memory, as well as a integer representing the position of that node reference.
> 
> you: "this scent will make skunk farts stink less."
> me: "let's kick the gorram skunk outta here!"

I would imagine that you'll have a hard time convincing others that linked-lists are evil when you apparently have two broken shift keys.
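For the record, here's roughly what I meant by that cursor idea (a rough sketch; invalidation is handled in the crudest possible way):

```d
// Sketch of a cursor-caching opIndex (hypothetical type): it remembers the
// last node it returned, so walking 0 .. length sequentially costs O(1) per
// access instead of O(n).
struct Node { int value; Node* next; }

struct CursorList
{
    Node* head;
    size_t length;

    // cached position from the previous opIndex call
    Node* lastNode;
    size_t lastIndex;

    int opIndex(size_t i)
    {
        Node* n;
        size_t start;
        if (lastNode !is null && i >= lastIndex)
        {
            n = lastNode;          // resume from the cached cursor
            start = lastIndex;
        }
        else
        {
            n = head;              // fall back to walking from the head
            start = 0;
        }
        foreach (_; start .. i)
            n = n.next;
        lastNode = n;
        lastIndex = i;
        return n.value;
    }

    // any mutation must invalidate the cursor
    void prepend(int v)
    {
        head = new Node(v, head);
        ++length;
        lastNode = null;
        lastIndex = 0;
    }
}

void main()
{
    CursorList list;
    foreach (v; 0 .. 5)
        list.prepend(v);

    long sum = 0;
    foreach (i; 0 .. list.length)
        sum += list[i];            // O(1) per access on this sequential walk
    assert(sum == 0 + 1 + 2 + 3 + 4);
}
```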

>> Wrong.  It works.  That it's not precisely what the spec for sort dictates (which is probably in error, since no spec can guarantee a precise efficiency if it doesn't know the precise container type).
> 
> sure it can. in big oh.

Which is simply identifying the algorithm used by its efficiency.  If you're not familiar with the types of algorithms, it tells you the approximate efficiency of the algorithm used.  If you are familiar with algorithms, then you can identify the type of algorithm used so you can better leverage it to do what you want.

>>  You
>> are also misinterpreting the spec.  It is saying that it uses a specific
>> efficiency of algorithm, not that you can arbitrarily expect a certain
>> efficiency out of it regardless of how dumb you might be with the choice
>> of container you use.
> 
> in stl the spec says as i say. in d the spec is not precise. it should.

Yes, it probably should explicitly say that "sort uses the xxxxx algorithm, which gives an approximate efficiency of O(n log n) when used with optimal data structures."

You honestly cannot write a spec for generic programming and expect uniform performance.

Trying to move back on topic, yes, I believe it is important that such a degree of ambiguity be avoided with something so simple as string handling.  So no strings-as-objects.  But writing a String class and using that wherever possible is advantageous, especially because it does not remove the ability of the language to support the simpler string implementation.

>> A Ph.D from superdan... gee, I'd value that just above my MSDN membership.  Remember: I value nothing less than my MSDN membership.
> 
> humor is a sign of intelligence. but let me explain it. i was referring to the spam emails advertising phds from non-accredited universities.

You get different spam than I do then.  I just get junk about cheap Canadian pharmaceuticals and dead South African oil moguls who have left large amounts of money in my name.