If !in is inconsistent because of bool/pointer, then so is !
February 06, 2009
This has been brought up before as an argument against the !in operator (forcing us to resort to such workarounds as /notin/): that !in would be inconsistent with in, because in returns a pointer while !in would return a bool.

This is NOT a reason against !in. In fact, this so-called "inconsistency" is already present in the language. If we remember, ! applied to a pointer already gives a boolean, so it would actually be more consistent if !in changed the return type to bool.

Furthermore, many newcomers expect !in to work because it is intuitive. Violating such user expectations should be avoided wherever possible.
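
To make that concrete, here is a minimal sketch (plain D; only the
commented-out !in line is hypothetical):

    void main()
    {
        int[string] aa;
        aa["x"] = 1;

        int* p = "x" in aa;  // 'in' yields a pointer to the value, or null
        bool missing = !p;   // '!' already turns that pointer into a bool

        if (!("y" in aa)) {} // today's workaround
        // if ("y" !in aa) {}    // the proposed sugar: same test, bool result
    }
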
February 06, 2009
downs:
> Furthermore, many newcomers expect !in to work because it is intuitive. Violating such user expectations should be avoided wherever possible.

If the pressure coming from such silly limits/warts gets large enough, LDC may start fixing them independently of DMD (I don't know what the LDC developers think about this idea).

Bye,
bearophile
February 06, 2009
vote++ for !in

It's one of those things that are very annoying to programmers but very easy to fix. Just do it, Walter.
February 06, 2009
On Fri, 06 Feb 2009 12:42:30 +0100, downs wrote:

> This has been brought up before as an argument against the !in operator (forcing us to resort to such workarounds as /notin/): that !in would be inconsistent with in, because in returns a pointer while !in would return a bool.
> 
> This is NOT a reason against !in. In fact, this so-called "inconsistency" is already present in the language. If we remember, ! applied to a pointer already gives a boolean, so it would actually be more consistent if !in changed the return type to bool.
> 
> Furthermore, many newcomers expect !in to work because it is intuitive. Violating such user expectations should be avoided wherever possible.

Having such an operator/syntactic sugar would be very nice. None of the arguments against it have convinced me so far.
February 06, 2009
downs wrote:
> This is NOT a reason against !in. In fact, this so-called "inconsistency" is already present in the language. If we remember, ! applied to a pointer already gives a boolean, so it would actually be more consistent if !in changed the return type to bool.

I agree.  'a != b' is short for '!(a == b)'.  'a !is b' is short for
'!(a is b)'.  For consistency, 'a !in b' should be short for
'!(a in b)'.  I'd even go so far as to say that 'a !+ b' should be short
for '!(a + b)', although I can't think of a use for the '!+' operator.

a !<op> b == !(a <op> b): simple, consistent pattern.
a !<op> b == !(a <op> b), but only for <op> in some limited set that
doesn't include all operators with which you might want to use the
pattern: less consistent; requires memorization.
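
Spelled out in code, everything below compiles today except the
commented line, which is the hypothetical !in:

    void main()
    {
        int a = 1, b = 2;
        int[int] aa;
        aa[1] = 10;

        assert((a != b) == !(a == b));     // the pattern, for ==
        int* p = &a, q = &b;
        assert((p !is q) == !(p is q));    // the pattern, for is
        // assert((a !in aa) == !(a in aa)); // the pattern, for in
    }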


-- 
Rainer Deyke - rainerd@eldwood.com
February 06, 2009
Rainer Deyke wrote:
> downs wrote:
>> This is NOT a reason against !in. In fact, this so-called "inconsistency" is already present in the language. If we remember, ! applied to a pointer already gives a boolean, so it would actually be more consistent if !in changed the return type to bool.
> 
> I agree.  'a != b' is short for '!(a == b)'.  'a !is b' is short for
> '!(a is b)'.  For consistency, 'a !in b' should be short for
> '!(a in b)'.  I'd even go so far as to say that 'a !+ b' should be short
> for '!(a + b)', although I can't think of a use for the '!+' operator.
> 
> a !<op> b == !(a <op> b): simple, consistent pattern.
> a !<op> b == !(a <op> b), but only for <op> in some limited set that
> doesn't include all operators with which you might want to use the
> pattern: less consistent; requires memorization.
> 
> 

Hmm ...

A large part of the case for !in is that you can pronounce it "a *not in* b". !+, on the other hand, would be... what? "a not plus b"? Does that mean a - b? :)
February 06, 2009
downs wrote:
> A large part of the case for !in is that you can pronounce it "a *not in* b". !+, on the other hand, would be .. what? "a not plus b? does that mean a - b? " :)

It's a question of consistent patterns versus special cases.  If
'a !<op> b == !(a <op> b)', then the parser can rewrite all 'a !<op> b'
expressions as '!(a <op> b)' in a single place, without looking at what
<op> is.
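
As a toy illustration of "a single place" (invented names, nothing to
do with the actual DMD parser):

    // One rule: "a !<op> b" becomes "!(a <op> b)", whatever <op> is.
    string rewriteNegated(string lhs, string op, string rhs)
    {
        return "!(" ~ lhs ~ " " ~ op ~ " " ~ rhs ~ ")";
    }

    unittest
    {
        assert(rewriteNegated("a", "==", "b") == "!(a == b)");
        assert(rewriteNegated("a", "in", "b") == "!(a in b)");
        assert(rewriteNegated("a", "+", "b") == "!(a + b)"); // silly, but uniform
    }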

(Of course '!=' (the negation of '==', not of '=') is already a
special case, so perhaps defining the '!<op>' operators individually is
unavoidable.  'a !== b' as '!(a == b)' would work, but 'a != b' as
'!(a = b)' would be very weird and inconsistent with other languages.)

I'm not suggesting that anybody should actually /use/ the '!+' operator, even if it were defined.  That would be horrible.


-- 
Rainer Deyke - rainerd@eldwood.com
February 06, 2009
Rainer Deyke:
> It's a question of consistent patterns versus special cases.

You may think that for humans it's better to have a very orthogonal language, like for example Scheme.
There's also a famous quote about this: "Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary." But in practice most programmers work with languages like Java, C, C++, C#, Python, modern BASIC variants, etc., even though those have far more restrictions than Scheme.
This is a long thing to explain and I don't have the space in this post, but the short version is that removing "special cases" to the point of allowing !+ makes the language worse: less easy to use, more bug-prone, and generally less good.

Bye,
bearophile
February 07, 2009
On Sat, Feb 7, 2009 at 8:54 AM, bearophile <bearophileHUGS@lycos.com> wrote:
> Rainer Deyke:
>> It's a question of consistent patterns versus special cases.
>
> You may think that for humans it's better to have a very orthogonal language, like for example Scheme.
> There's also a famous quote about this: "Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary." But in practice most programmers work with languages like Java, C, C++, C#, Python, modern BASIC variants, etc., even though those have far more restrictions than Scheme.
> This is a long thing to explain and I don't have the space in this post, but the short version is that removing "special cases" to the point of allowing !+ makes the language worse: less easy to use, more bug-prone, and generally less good.

Note that D already has things like !>.   But quoth the spec:
"For floating point comparison operators, (a !op b)  is *NOT* the same
as !(a op b)."
[emphasis added]
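
For what it's worth, if I read the spec's tables right, the boolean
results of the two forms actually coincide even for NaN operands; the
difference is that the plain operators raise the IEEE "invalid" flag
when an operand is a NaN, while the !op forms are quiet:

    void main()
    {
        double a = double.nan, b = 1.0;

        bool x = !(a < b); // true: '<' is false on NaN, but '<' is a
                           // signaling comparison (raises INVALID)
        bool y = a !< b;   // true: "unordered, greater, or equal" --
                           // the quiet form, no INVALID raised
        assert(x == y);    // same answer, different IEEE flags
    }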

But anyway I wholeheartedly agree that (a !in b) should exist and it
should be the same as !(a in b).

I think the principle of least surprise is generally a good one to
follow.  And I think what surprises most people is that (a !is b)
means !(a is b) while the same is not true of (a !in b).

--bb
February 07, 2009
Bill Baxter:
> (a !in b) should exist and it should be the same as !(a in b).

Of course.

Bye,
bearophile