December 20, 2021

On Monday, 20 December 2021 at 11:04:00 UTC, Dennis wrote:

> On Sunday, 19 December 2021 at 18:31:46 UTC, Steven Schveighoffer wrote:
>
> > There's your answer -- make a custom type that allows in with an array-like structure.
>
> For the record, I'm not arguing in favor of in for slices. I'm just baffled that whenever this comes up, the first thing people bring up is the 'time complexity' argument.

Yes, I agree with this. When I use "in" in Python I am very much aware that it does a linear scan, but it is a very useful shorthand that replaces a long and confusing sequence of or-expressions.

E.g.

if keyword in ['if', 'while', 'for']

is a very useful and clear syntactical construct.
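For comparison, the or-expression chain such a test replaces would look like this in D (my own spelling of the same hypothetical keyword check, not code from the thread):

// The long-hand alternative: one comparison per candidate value.
if (keyword == "if" || keyword == "while" || keyword == "for")
{
    // ...
}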

December 20, 2021

On Monday, 20 December 2021 at 13:54:56 UTC, Ola Fosheim Grøstad wrote:

> On Monday, 20 December 2021 at 11:04:00 UTC, Dennis wrote:
>
> > On Sunday, 19 December 2021 at 18:31:46 UTC, Steven Schveighoffer wrote:
> >
> > > There's your answer -- make a custom type that allows in with an array-like structure.
> >
> > For the record, I'm not arguing in favor of in for slices. I'm just baffled that whenever this comes up, the first thing people bring up is the 'time complexity' argument.
>
> Yes, I agree with this. When I use "in" in Python I am very much aware that it does a linear scan, but it is a very useful shorthand that replaces a long and confusing sequence of or-expressions.
>
> E.g.
>
> if keyword in ['if', 'while', 'for']
>
> is a very useful and clear syntactical construct.

import std.algorithm.searching;

if (["if", "while", "for"].canFind(keyword)) {
    // ...
}
December 20, 2021

On Monday, 20 December 2021 at 14:21:08 UTC, Paul Backus wrote:

> On Monday, 20 December 2021 at 13:54:56 UTC, Ola Fosheim Grøstad wrote:
>
> > E.g.
> >
> > if keyword in ['if', 'while', 'for']
> >
> > is a very useful and clear syntactical construct.
>
> import std.algorithm.searching;
>
> if (["if", "while", "for"].canFind(keyword)) {
>     // ...
> }

TBH, for that particular case, this looks nicer, and IMO documents the intent better than the original Pythonism:

import std.algorithm.comparison : among;

if (keyword.among("if", "while", "for")) { /* ... */ }

It also doesn't let work go to waste, as it returns a 1-based index.
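To illustrate the point about the returned index (the switch is my own sketch, not code from the thread): among returns 0 for no match and the 1-based position of the first match otherwise, so the same call can answer both "is it there?" and "which one?":

import std.algorithm.comparison : among;

void main()
{
    string keyword = "while";
    // among returns the 1-based index of the first match, or 0 if none.
    switch (keyword.among("if", "while", "for"))
    {
        case 1: /* handle "if" */ break;
        case 2: /* handle "while" */ break;
        case 3: /* handle "for" */ break;
        default: /* not one of the three */ break;
    }
}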

December 20, 2021

On Monday, 20 December 2021 at 13:07:40 UTC, Biotronic wrote:

> If 'in' is O(1), that's a fairly sensible implementation.

Thanks, that's an illuminating example.

December 20, 2021

On Monday, 20 December 2021 at 14:21:08 UTC, Paul Backus wrote:

> import std.algorithm.searching;
>
> if (["if", "while", "for"].canFind(keyword)) {
>     // ...
> }

You can always create a library construct, but then you should also argue in favour of removing AAs and "in". You either do it, or you don't. Why the half-measures?

December 20, 2021

On Monday, 20 December 2021 at 21:11:28 UTC, Ola Fosheim Grøstad wrote:

> On Monday, 20 December 2021 at 14:21:08 UTC, Paul Backus wrote:
>
> > import std.algorithm.searching;
> >
> > if (["if", "while", "for"].canFind(keyword)) {
> >     // ...
> > }
>
> You can always create a library construct, but then you should also argue in favour of removing AAs and "in". You either do it, or you don't. Why the half-measures?

There are a lot of features one could potentially add to D to improve it, and limited resources to devote to doing so. I'm not necessarily against adding in for slices, but since we already have a perfectly good library solution, I don't think it deserves a high priority.

Feel free to write the DIP yourself if you disagree. :)

December 21, 2021

On 12/20/21 6:04 AM, Dennis wrote:

> On Sunday, 19 December 2021 at 18:31:46 UTC, Steven Schveighoffer wrote:
>
> > There's your answer -- make a custom type that allows in with an array-like structure.
>
> For the record, I'm not arguing in favor of in for slices. I'm just baffled that whenever this comes up, the first thing people bring up is the 'time complexity' argument.

One thing not really being discussed is that there is a difference between "some library" defining slow in operators or slow opIndex, and dlang/Phobos doing it.

D picked the path of trying to ensure complexity consistency for in, but it's more of a stylistic rule, not necessarily a requirement.

-Steve
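To make that concrete, here is a minimal sketch (my own illustration, not code from the thread) of a user-defined array-like type whose in is O(1) on average, because it maintains a hash set alongside the array:

// Illustration: an array-like wrapper whose `in` costs a hash lookup.
struct IndexedArray(T)
{
    T[] items;
    bool[T] present;  // set of values currently stored

    void put(T value)
    {
        items ~= value;
        present[value] = true;
    }

    // Enables `value in arr` at hash-table cost rather than a linear scan.
    bool opBinaryRight(string op : "in")(T value) const
    {
        return (value in present) !is null;
    }
}

unittest
{
    IndexedArray!string keywords;
    keywords.put("if");
    keywords.put("while");
    assert("if" in keywords);
    assert("for" !in keywords);
}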

December 21, 2021

On 12/19/21 2:34 PM, Ola Fosheim Grøstad wrote:

> On Sunday, 19 December 2021 at 18:36:12 UTC, Steven Schveighoffer wrote:
>
> > That is what we claim (and what hash algorithms in general claim): amortized O(1) lookups and inserts.
>
> Yes, if you don't shrink.
>
> You need to prove that the claim holds for all possible orderings of inserts, deletions and lookups.

So you need to prove that you always do O(N) operations between rehashes; then you get N operations of complexity O(1) plus one rehash of complexity O(N), which is O(N) total, i.e. O(1) amortized.

Please see the existing research; this is not a new concept.

-Steve
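For reference, the standard textbook argument (not spelled out in the thread) covers shrinking too, provided shrinkage is delayed: grow by doubling when the table is full, but halve only when the load factor drops below 1/4. After any rehash the load factor lies in (1/4, 1/2], so at least capacity/4 operations must occur before the next rehash. Over m operations the total cost is then

\[
T(m) \;\le\; \underbrace{c_1 m}_{\text{O(1) steps}} \;+\; \underbrace{\textstyle\sum_i c_2 N_i}_{\text{rehashes}} \;\le\; c_1 m + 4 c_2 m \;=\; O(m),
\]

since each rehash at table size \(N_i\) is paid for by the \(\ge N_i/4\) operations that preceded it, giving O(1) amortized per operation.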

December 21, 2021

On Monday, 20 December 2021 at 21:11:28 UTC, Ola Fosheim Grøstad wrote:

> You can always create a library construct, but then you should also argue in favour of removing AAs and "in". You either do it, or you don't. Why the half-measures?

No, canFind and among search for a given value in a range, while "in" checks whether a given key exists and returns a pointer to the associated data (not a pointer to the matching key). This is why Phobos SortedRange does not implement opIn even though it would be sub-linear. Dennis is right that, conceptually, the key for an array is an index, and an array element value is the data associated with that index.
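A small example of that distinction (my own sketch): on an associative array, in hands back a pointer to the stored value, something a membership test like canFind cannot give you:

void main()
{
    int[string] counts = ["if": 1];

    // `in` returns a pointer to the associated value, or null if absent.
    if (auto p = "if" in counts)
        *p += 1;  // update the stored value in place

    assert(counts["if"] == 2);
}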

December 22, 2021

On Tuesday, 21 December 2021 at 17:02:22 UTC, Steven Schveighoffer wrote:

> On 12/19/21 2:34 PM, Ola Fosheim Grøstad wrote:
>
> > On Sunday, 19 December 2021 at 18:36:12 UTC, Steven Schveighoffer wrote:
> >
> > > That is what we claim (and what hash algorithms in general claim): amortized O(1) lookups and inserts.
> >
> > Yes, if you don't shrink.
> >
> > You need to prove that the claim holds for all possible orderings of inserts, deletions and lookups.
>
> So you need to prove that you always do O(N) operations between rehashes; then you get N operations of complexity O(1) plus one rehash of complexity O(N), which is O(N) total, i.e. O(1) amortized.
>
> Please see the existing research; this is not a new concept.

What do you mean by research? This is a trivial topic. In order to get O(1) amortised you have to delay shrinkage: you must prove that no possible sequence of operations triggers a reallocation more often than once every O(N) operations.
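As a concrete sketch of that delayed-shrinkage policy (my own illustration, with a simplified capacity scheme): double when full, but halve only once the load drops below one quarter, so at least capacity/4 operations separate consecutive rehashes:

// Illustration only: capacity management with delayed shrinkage.
struct Table
{
    size_t length;        // number of live entries
    size_t capacity = 8;  // current table size

    void insert()
    {
        ++length;
        if (length == capacity)
        {
            capacity *= 2;  // rehash: O(capacity) work
            // ... reallocate and rehash entries here ...
        }
    }

    void remove()
    {
        --length;
        // Halve only below 1/4 load. After any rehash the load factor is
        // in (1/4, 1/2], so at least capacity/4 inserts or removes must
        // happen before the next rehash; hence O(1) amortized.
        if (capacity > 8 && length < capacity / 4)
        {
            capacity /= 2;  // rehash: O(capacity) work
            // ... reallocate and rehash entries here ...
        }
    }
}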