February 17, 2006
On Sat, 18 Feb 2006 07:44:13 +1100, Jarrett Billingsley <kb3ctd2@yahoo.com> wrote:

> "Derek Parnell" <derek@psych.ward> wrote in message
> news:op.s44h81h56b8z09@ginger.vic.bigpond.net.au...
>> Because they do not belong in the domain of numbers.
>> Because 'what is the square root of truth' is meaningless.
>> Because we don't allow '"cat" + "harry" / ( "bob" - "stupid")'.
>> Because 'false' is not zero, and 'true' is not one.
>> Because zero is not 'false' and one is not 'true'.
>
> I agree with everything but the last two lines.  If there's anything that
> irritates me, it's when a language makes it very difficult to convert
> between ints and bools.  While I will accept the fact that bools abstract
> the ideas of "true" and "false," I think it's still useful to allow them to
> be represented by their true, numerical representation.

'Truth' is not a number so why do you insist that 'true' is 1?
'Falsehood' is not a number so why do you insist that 'false' is 0?

The numbers 1 and 0 are used as one of the myriad of possible representations of truth and falsehood. There is nothing intrinsic about these numbers, but they are practical. For example, some programming languages use -1 to *represent* truth.

I too would be irritated if I couldn't code the shorthand ...

    while( <numeric_expression> ) ...

but it is just that, *shorthand*, for ...

    while( <numeric_expression> != 0 ) ...

which evaluates to a boolean result. The <numeric_expression> itself is not truth or falsehood, but the equality test for that expression is.


-- 
Derek Parnell
Melbourne, Australia
February 17, 2006
"Derek Parnell" <derek@psych.ward> wrote in message news:op.s440fct36b8z09@ginger.vic.bigpond.net.au...
> 'Truth' is not a number so why do you insist that 'true' is 1? 'Falsehood' is not a number so why do you insist that 'false' is 0?
>
> The numbers 1 and 0 are used as one of the myriad of possible representations of truth and falsehood. There is nothing intrinsic about these numbers, but they are practical. For example, some programming languages use -1 to *represent* truth.

The only argument I have against that is that to a computer, 1 means true and 0 means false.  And we're dealing with a computer here.  It has become such a standard that 1 represents true and 0 represents false that those languages which do use other representations (such as -1 in VB, IIRC) end up being something of a pain to interface with from the myriad of other languages which use 1 for true and 0 for false.  What does this extra, minor layer of abstraction really gain you, anyway?


February 18, 2006
Jarrett Billingsley wrote:
> The only argument I have against that is that to a computer, 1 means true and 0 means false.  And we're dealing with a computer here.  It has become such a standard that 1 represents true and 0 represents false that those languages which do use other representations (such as -1 in VB, IIRC) end up being something of a pain to interface with from the myriad of other languages which use 1 for true and 0 for false.  What does this extra, minor layer of abstraction really gain you, anyway? 
> 

Computers don't have intelligence or any understanding of our concepts, so true and false mean nothing to them. Also: we are talking about a programming language that is meant to be written by humans, so we are actually talking about humans, who do have a concept of true and false and of Boolean algebra. The actual representation of true and false is, from this point of view, meaningless.

I will also add here that I agree with most of what Derek said in his bool-related posts, except:

if(17) <- this means nothing to me. Is 17 true or false?

Although I know it is very unlikely to happen, I would be happiest if 'if' and 'while' accepted *only* boolean arguments.

Maybe I know what 'if(number)' means, but 'if(number != 0)' is much more understandable and maintainable.

February 18, 2006
"Jarrett Billingsley" <kb3ctd2@yahoo.com> wrote ...
> "Derek Parnell" <derek@psych.ward> wrote in message news:op.s440fct36b8z09@ginger.vic.bigpond.net.au...
>> 'Truth' is not a number so why do you insist that 'true' is 1? 'Falsehood' is not a number so why do you insist that 'false' is 0?
>>
>> The numbers 1 and 0 are used as one of the myriad of possible representations of truth and falsehood. There is nothing intrinsic about these numbers, but they are practical. For example, some programming languages use -1 to *represent* truth.
>
> The only argument I have against that is that to a computer, 1 means true and 0 means false.  And we're dealing with a computer here.  It has become such a standard that 1 represents true and 0 represents false that those languages which do use other representations (such as -1 in VB, IIRC) end up being something of a pain to interface with from the myriad of other languages which use 1 for true and 0 for false.  What does this extra, minor layer of abstraction really gain you, anyway?


If you look at CPU designs, you'll very often see a CPU-flag for zero/non-zero. Controlling that flag is typically supported by a few short 'test' instructions, whereas testing against 1 or some other value may involve an additional instruction (there are some notable exceptions, such as the H8 family). Thus it's not unusual to see false defined as 0, and true defined as !false, or ~false.

However, it's rather likely that D will assume the same values for true and false as DMC; whatever those are.


February 18, 2006
On Sat, 18 Feb 2006 10:45:34 +1100, Jarrett Billingsley <kb3ctd2@yahoo.com> wrote:

> "Derek Parnell" <derek@psych.ward> wrote in message
> news:op.s440fct36b8z09@ginger.vic.bigpond.net.au...
>> 'Truth' is not a number so why do you insist that 'true' is 1?
>> 'Falsehood' is not a number so why do you insist that 'false' is 0?
>>
>> The numbers 1 and 0 are used as one of the myriad of possible
>> representations of truth and falsehood. There is nothing intrinsic about
>> these numbers but they are practical. For example, some programming
>> languages use -1 to *represent* truth.
>
> The only argument I have against that is that to a computer, 1 means true
> and 0 means false.  And we're dealing with a computer here.  It has become
> such a standard that 1 represents true and 0 represents false that those
> languages which do use other representations (such as -1 in VB, IIRC) end
> up being something of a pain to interface with from the myriad of other
> languages which use 1 for true and 0 for false.

I believe that we are actually dealing with a computer *programming language* and not a computer. It is the compiler that is dealing with the computer, not us. The compiler is free to assign 1 to truth and 0 to falsehood if that's worthwhile. But a programming language is for people: it lets us express algorithms in a manner that is easier than other methods, and it helps us communicate those algorithms to other *people*. It is the compiler that does the hard work of converting that human-friendly expression into hardware-friendly code.

> What does this extra, minor layer of abstraction really gain you, anyway?

It encourages humans to think about the problem rather than the mechanics of the problem's implementation in machine code.


-- 
Derek Parnell
Melbourne, Australia
February 18, 2006
On Sat, 18 Feb 2006 11:07:03 +1100, Ivan Senji <ivan.senji_REMOVE_@_THIS__gmail.com> wrote:

> if(17) <- this means nothing to me. Is 17 true or false?
>
> Although I know it is very unlikely to happen, I would be happiest if 'if' and 'while' accepted *only* boolean arguments.
>
> Maybe I know what 'if(number)' means, but 'if(number != 0)' is much more understandable and maintainable.

I tend to agree with the readability aspect here, and I also tend to write code like this out in full. But there are other people who do not, and it isn't that hard to mentally translate the intention, so I'll not complain too much about its continued existence.



-- 
Derek Parnell
Melbourne, Australia
February 18, 2006
Derek Parnell wrote:
> On Sat, 18 Feb 2006 11:07:03 +1100, Ivan Senji  <ivan.senji_REMOVE_@_THIS__gmail.com> wrote:
> 
>> if(17) <- this means nothing to me. Is 17 true or false?
>>
>> Although I know it is very unlikely to happen, I would be happiest if 'if' and 'while' accepted *only* boolean arguments.
>>
>> Maybe I know what 'if(number)' means, but 'if(number != 0)' is much more understandable and maintainable.
> 
> 
> I tend to agree with the readability aspect here, and I also tend to write code like this out in full. But there are other people who do not, and it isn't that hard to mentally translate the intention, so I'll not complain too much about its continued existence.
> 

Nor will I (too much time spent in this NG :)
February 18, 2006
> Because they do not belong in the domain of numbers.
> Because 'what is the square root of truth' is meaningless.
> Because we don't allow '"cat" + "harry" / ( "bob" - "stupid")'.
> Because 'false' is not zero, and 'true' is not one.
> Because zero is not 'false' and one is not 'true'.
> 

This is one thing I really like about Ada. The compiler will just not let you do ANYTHING with a type that requires any kind of implied cast. Boolean is Boolean and nothing else. If you've got a Boolean variable, you can't add an integer to it, you can't assign anything other than another Boolean to it, and you can't use anything other than a Boolean in its place. You can't do something like "while(x)" unless x was declared a Boolean type. This extreme type safety in Ada takes a little getting used to, especially if you do a lot of "while(1)"-style shortcuts, but it really becomes your friend and saves you from a lot of unintentional bugs later (not that there's ever an *intentional* bug, but you know what I mean).
February 18, 2006
John Stoneham wrote:
>> Because they do not belong in the domain of numbers.
>> Because 'what is the square root of truth' is meaningless.
>> Because we don't allow '"cat" + "harry" / ( "bob" - "stupid")'.
>> Because 'false' is not zero, and 'true' is not one.
>> Because zero is not 'false' and one is not 'true'.
>>
> 
> This is one thing I really like about Ada. The compiler will just not let you do ANYTHING with a type that requires any kind of implied cast. Boolean is Boolean and nothing else. If you've got a Boolean variable, you can't add an integer to it, you can't assign anything other than another Boolean to it, and you can't use anything other than a Boolean in its place. You can't do something like "while(x)" unless x was declared a Boolean type. This extreme type safety in Ada takes a little getting used to, especially if you do a lot of "while(1)"-style shortcuts, but it really becomes your friend and saves you from a lot of unintentional bugs later (not that there's ever an *intentional* bug, but you know what I mean).

I think every language that claims to be type-safe should behave that way. Boolean is boolean and nothing else, and the way that it is implemented (byte, int, float, 1==true, 0==false) is of no importance.
February 18, 2006
Ivan Senji wrote:
> I think every language that claims to be type-safe should behave that way. Boolean is boolean and nothing else, and the way that it is implemented (byte, int, float, 1==true, 0==false) is of no importance.

Agreed. I would not like this to be allowed without a compiler error:

int x = 1;
bool B = x;

This should be required:
bool B = cast(bool)x;

Along the same lines, this currently bugs the hell out me as not raising a compile-time error:

long long_x = 4294967296; // int overflow
int some_int = long_x;

The value of some_int is now 0, definitely not a desired result. What if bool is implemented as an int, but "cast" is not required in an assignment? The value of long_x above is positive, so it should evaluate to true after B = long_x, right? Well, not if bool is implemented as an int. Then it would be *false*, because converting the long value of 4294967296 to an int results in a value of 0.

This can (and should, IMO) raise an error at compile time.