December 23, 2011
Re: BigInt bool assign
Jonathan M Davis:

> I'd actually argue that that's a mistake.
> ...
> I see no reason to expand that 
> problem into BigInt. _int_ shouldn't have it, let alone BigInt.

I find implicit bool->int conversion handy, but... I prefer the Pascal/Ada way of keeping ints and bools more distinct. I don't like this aspect of the C language that D has inherited.

So do you want to write an enhancement request to change the way D ints and bools behave? OK. When D ints and bools are changed the way you say, I'll be happy to see BigInts changed back to refuse assignments from bools.

But right now 99.9+% of the integral values you find in D programs are not BigInts, so you are "expanding" something tiny. You are not improving code, you are just making BigInts a bit weird compared to most other D code.

Bye,
bearophile
December 24, 2011
Re: BigInt bool assign
On 23/12/11 10:34 PM, Jonathan M Davis wrote:
> On Friday, December 23, 2011 17:19:26 bearophile wrote:
>> Derek Parnell:
>>> I'm with Don on this one because a boolean and an integer are not the
>>> same concept, and even though many programming languages implement
>>> booleans using integers, it still doesn't make them the same thing.
>>
>> D doesn't implement booleans with integers, D has a boolean type. But D
>> allows bools to implicitly cast to ints/longs.
>
> I'd actually argue that that's a mistake. Implicitly converting an int to a
> bool is one thing - that's useful in conditional expressions - but converting
> from bool to int is something else entirely. I see no reason to expand that
> problem into BigInt. _int_ shouldn't have it, let alone BigInt.

I agree that bool -> int is wrong, but I also think that inconsistency 
between int and BigInt is wrong.
December 24, 2011
Re: BigInt bool assign
On 12/24/2011 12:31 AM, Jonathan M Davis wrote:
> On Friday, December 23, 2011 23:52:00 Timon Gehr wrote:
>> There is really no problem with that. I have never seen anyone complain
>> about implicit bool ->  int conversion. Why do you think it is bad? Does
>> anyone have an example to back up the claim that it is bad?
>
> They're completely different types and mean completely different things. It's
> one thing to convert from a narrower integer to a wider one, but bool is _not_
> an integer. Would you implicitly convert a string to an int? No. It's not a
> number. I don't see any reason to treat bool any differently on that count.
> bool isn't a number either. It's true or it's false. The problem is that C
> conflated bool with int, and on some level that behavior still exists in D. But
> bool and int are two entirely different types and entirely different concepts.
>
> - Jonathan M Davis

Entirely different concepts? oO

bool and int are in no way 'entirely different concepts'. Both are 
rings: bool is (Z_2, ^, &), int is (Z_(2^32), +, *). string is 
conceptually a monoid.

Boolean algebra is the algebra of two values. At least in computer 
science or digital design, those two values are 0 and 1. If there are 
implicit conversions in a language at all, implicit bool -> int is a 
natural thing to do. There is no such argument for string -> int.
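
For instance, counting how many of several conditions hold reads quite 
naturally with the implicit conversion (a hypothetical example, purely 
for illustration):

import std.stdio;

// Each comparison yields a bool that contributes 0 or 1 to the sum.
int score(int x)
{
   return (x > 0) + (x % 2 == 0) + (x > 100);
}

void main()
{
   writeln(score(42));   // prints 2: positive and even, but not > 100
   writeln(score(-3));   // prints 0
}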
December 24, 2011
Re: BigInt bool assign
On 12/23/2011 11:34 PM, Jonathan M Davis wrote:
> On Friday, December 23, 2011 17:19:26 bearophile wrote:
>> Derek Parnell:
>>> I'm with Don on this one because a boolean and an integer are not the
>>> same concept, and even though many programming languages implement
>>> booleans using integers, it still doesn't make them the same thing.
>>
>> D doesn't implement booleans with integers, D has a boolean type. But D
>> allows bools to implicitly cast to ints/longs.
>
> I'd actually argue that that's a mistake. Implicitly converting an int to a
> bool is one thing - that's useful in conditional expressions - but converting
> from bool to int is something else entirely. I see no reason to expand that
> problem into BigInt. _int_ shouldn't have it, let alone BigInt.
>
> - Jonathan M Davis

A: "Um, so why does bool implicitly convert to int but not to BigInt?"
B: "Because the language's design contains an error. It is a huge 
_problem_. Therefore we decided to keep it inconsistent. If you 
re-parenthesise your expression however, your code will compile."
A: "Awesome!!"
December 24, 2011
Re: BigInt bool assign
On Saturday, December 24, 2011 01:08:11 Timon Gehr wrote:
> bool and int are in no way 'entirely different concepts'. Both are
> rings: bool is (Z_2, ^, &), int is (Z_(2^32), +, *). string is
> conceptually a monoid.
> 
> Boolean algebra is the algebra of two values. At least in computer
> science or digital design, those two values are 0 and 1. If there are
> implicit conversions in a language at all, implicit bool -> int is a
> natural thing to do. There is no such argument for string -> int.

Boolean has the values true and false. The fact that it's implemented as 1 
and 0 is an implementation detail. Conceptually, a bool is _not_ a number any 
more than a string is. As such, it shouldn't implicitly convert to a number 
any more than a string does.

- Jonathan M Davis
December 24, 2011
Re: BigInt bool assign
On Sat, 24 Dec 2011 09:19:26 +1100, bearophile <bearophileHUGS@lycos.com>  
wrote:

> Derek Parnell:
>
>> I'm with Don on this one because a boolean and an integer are not the  
>> same
>> concept, and even though many programming languages implement booleans
>> using integers, it still doesn't make them the same thing.
>
> D doesn't implement booleans with integers, D has a boolean type. But D  
> allows bools to implicitly cast to ints/longs.
> Not allowing a BigInt to be initialized with a bool value introduces an
> inconsistency that makes BigInts more complex (there is one more rule to
> remember) and less inter-operable with ints, and I don't think it
> introduces advantages.

I agree that 'consistency' is a powerful argument. So it comes down to whether 
D is meant to be the best language or merely an adequate one.

I maintain that D would be a better language if it didn't allow implicit 
bool <-> int conversions. The most common thing that humans do to source 
code is read it, in order to understand its purpose and/or intentions. We 
would do ourselves a service if we strove to make programming languages aid 
this activity. Some implicit conversions can mask a coder's intentions, 
and I believe that bool/int is one of those.
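
As a hedged illustration of the kind of masked intent I mean (hypothetical 
code, not taken from any real project):

int countPositive(const int[] xs)
{
   int n = 0;
   foreach (x; xs)
      n += x > 0;   // implicit bool -> int: adds 1 for each positive element,
                    // yet it is easy to misread as a typo for "n += x"
   return n;
}

void main()
{
   assert(countPositive([3, -1, 7, 0]) == 2);
}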


>> Using booleans as implicit integers can be seen as laziness (i.e. poor
>> documentation of coder's intent) or a legitimate mistake (i.e.
>> unintentional usage by coder).
>
> In my code such mistakes are uncommon.

But not impossible.

>> By insisting that an explicit cast must be
>> used when one wants a boolean to behave as an integer allows the coder's
>> intent to become more apparent when reading their source code. This has
>> nothing to do with machine code generation, just source code legibility.
>
> Casts are powerful tools, they shut up the compiler and they assume the  
> programmer is perfectly right and has perfect knowledge of what's going  
> on.

Do you really believe that the purpose of casts is to "shut up the 
compiler"? Seriously?

> In practice my experience shows that the programmer (me too) sometimes  
> doesn't have perfect knowledge (usually because the code later was  
> modified, turning the cast into a bug because casts are often silent).

You realize that the exact argument can be made about implicit casts.

> This is why it's better to avoid casts, not requiring them in the first  
> place, unless they are useful. In this case I think a cast introduces  
> more danger than the risks caused by implicit bool->int conversions.

If we assume that explicit casts are required for bool->int conversion,  
can you show some code in which this could cause a problem?

-- 
Derek Parnell
Melbourne, Australia
December 24, 2011
Re: BigInt bool assign
On 24.12.2011 01:32, Timon Gehr wrote:
> On 12/23/2011 11:34 PM, Jonathan M Davis wrote:
>> On Friday, December 23, 2011 17:19:26 bearophile wrote:
>>> Derek Parnell:
>>>> I'm with Don on this one because a boolean and an integer are not the
>>>> same concept, and even though many programming languages implement
>>>> booleans using integers, it still doesn't make them the same thing.
>>>
>>> D doesn't implement booleans with integers, D has a boolean type. But D
>>> allows bools to implicitly cast to ints/longs.
>>
>> I'd actually argue that that's a mistake. Implicitly converting an int
>> to a
>> bool is one thing - that's useful in conditional expressions - but
>> converting
>> from bool to int is something else entirely. I see no reason to expand
>> that
>> problem into BigInt. _int_ shouldn't have it, let alone BigInt.
>>
>> - Jonathan M Davis
>
> A: "Um, so why does bool implicitly convert to int but not to BigInt?"
> B: "Because the language's design contains an error. It is a huge
> _problem_. Therefore we decided to keep it inconsistent. If you
> re-parenthesise your expression however, your code will compile."
> A: "Awesome!!"

As I said when I closed that post, it is _impossible_ for BigInt to 
always behave the same as int. One example:

byte c = x & 0x7F;

This compiles if x is an int. It doesn't compile if x is a BigInt.

BigInt's job is to behave like a Euclidean integer, not to be a drop-in 
replacement for built-in integer types.
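
To make that concrete, here is a minimal sketch. Whether the BigInt line is 
rejected because BigInt lacks the operator or because its result cannot 
implicitly narrow to byte depends on the std.bigint version, but it is 
rejected either way:

import std.bigint;

void main()
{
   int xi = 200;
   byte c1 = xi & 0x7F;     // compiles: value range propagation proves that
                            // (int & 0x7F) always fits in a byte

   BigInt xb = BigInt(200);
   // byte c2 = xb & 0x7F;  // rejected when x is a BigInt, as noted above
}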
December 24, 2011
Re: BigInt bool assign
On 24.12.2011 02:30, Derek wrote:
> On Sat, 24 Dec 2011 09:19:26 +1100, bearophile
> <bearophileHUGS@lycos.com> wrote:
>
>> This is why it's better to avoid casts, not requiring them in the
>> first place, unless they are useful. In this case I think a cast
>> introduces more danger than the risks caused by implicit bool->int
>> conversions.
>
> If we assume that explicit casts are required for bool->int conversion,
> can you show some code in which this could cause a problem?

I think stuff like
int z; z += x > y;
should ideally require a cast. That's a crazy operation.

The problem is compatibility with ancient C code (pre-C99), where you 
may find:

alias int BOOL;

BOOL b = x > y;

Although BOOL is typed as 'int', it really has the semantics of 'bool'.
We have an example of this in D1's opEquals().
I think this is the reason why the implicit conversion bool -> int exists.
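
A sketch of that pre-C99 idiom, translated to D (the names are illustrative, 
not from any particular codebase):

alias int BOOL;   // C-style "boolean" that is really typed as int

BOOL isGreater(int x, int y)
{
   // The comparison yields a bool; returning it through an int-typed BOOL
   // only works because of the implicit bool -> int conversion.
   return x > y;
}

void main()
{
   assert(isGreater(3, 2) == 1);
   assert(isGreater(2, 3) == 0);
}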

BTW, great to see you again, Derek!
December 24, 2011
Re: BigInt bool assign
On 24.12.2011 12:33, Don wrote:
> On 24.12.2011 02:30, Derek wrote:
>> On Sat, 24 Dec 2011 09:19:26 +1100, bearophile
>> <bearophileHUGS@lycos.com> wrote:
>>
>>> This is why it's better to avoid casts, not requiring them in the
>>> first place, unless they are useful. In this case I think a cast
>>> introduces more danger than the risks caused by implicit bool->int
>>> conversions.
>>
>> If we assume that explicit casts are required for bool->int conversion,
>> can you show some code in which this could cause a problem?
>
> I think stuff like
> int z; z += x > y;
> should ideally require a cast. That's a crazy operation.
>
> The problem is compatibility with ancient C code (pre-C99), where you
> may find:
>
> alias int BOOL;
>
> BOOL b = x > y;
>
> Although BOOL is typed as 'int', it really has the semantics of 'bool'.
> We have an example of this in D1's opEquals().
> I think this is the reason why the implicit conversion bool -> int exists.
>
> BTW, great to see you again, Derek!

The D Programming Language, page 172:
for (; n >= iter * iter; iter += 2 - (iter == 2)) { ...
:)
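
For readers puzzled by that line: (iter == 2) is a bool used as 0 or 1, so 
the step is 1 exactly once (taking iter from 2 to 3) and 2 from then on, 
visiting 2, 3, 5, 7, ... A tiny demo of just that stepping trick (not the 
book's code):

import std.stdio;

void main()
{
   int iter = 2;
   foreach (_; 0 .. 5)
   {
      write(iter, " ");
      iter += 2 - (iter == 2);   // bool -> int: adds 1 the first time, 2 after
   }
   writeln();                    // prints: 2 3 5 7 9
}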
December 24, 2011
Re: BigInt bool assign
Don:

> As I said when I closed that post, it is _impossible_ for BigInt to
> always behave the same as int. One example:
> 
> byte c = x & 0x7F;
> 
> This compiles if x is an int. It doesn't compile if x is a BigInt.
> 
> BigInt's job is to behave like a Euclidean integer, not to be a drop-in
> replacement for built-in integer types.


As I have said in the first post of this thread I am not asking for impossible things:

> While multi-precision numbers are not fixed-size integers, it is wise to give
> multi-precision numbers the same rules and usages as normal fixed-size integers
> _everywhere this is possible and handy_. This has some advantages:
> - It reduces the cognitive burden of remembering where they differ;
> - It allows less work when adapting routines that work with integers to work
> with BigInts. This is handy for generic code and for manual translation of code.
> 
> I have said everywhere this is possible and handy, because this is not always
> possible. You can't use a BigInt to index an array, and there are some
> situations where BigInts require a different algorithm
> (example: http://d.puremagic.com/issues/show_bug.cgi?id=7102 ).
> So I am not asking
> BigInt to be a drop-in replacement for int in all cases.

Granted, this code is currently not accepted:

BigInt x;
byte c = x & 0x7F;


But refusing this too introduces another useless difference between ints and BigInts:

import std.bigint;

void main() {
   BigInt b = true;
}


Introducing differences between the two types is acceptable if it's required by their semantic differences, or if it brings some other improvement. Neither is the case here, so this argument of yours is invalid.

-------------------------

Derek Parnell:

>> In my code such mistakes are uncommon.

> But not impossible.

Designing an engineered system like a programming language is often a matter of trade-offs. If in my code I find one problem (like integer overflows) to be much more common than another (like bugs caused by implicit bool->int conversions), it is reasonable for me to want the more common one addressed first. Priorities are really important in engineering.


>> Casts are powerful tools, they shut up the compiler and they assume the  
>> programmer is perfectly right and has perfect knowledge of what's going  
>> on.

> Do you really believe that the purpose of casts are to "shut up the  
> compiler"? Seriously?

I believe that casts often "shut up the compiler", but I don't believe that's their purpose. One of their main purposes is to offer a standard way to break the static type system at specific points of the program. Every type system restricts the set of acceptable programs, yet programmers sometimes want to write some of the programs it rejects, and to do so they use casts. D casts have other secondary purposes, like bit reinterpretation.
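
A small sketch of that secondary purpose, i.e. deliberately stepping outside 
the type system to reinterpret bits (the constant is just the IEEE 754 
encoding of 1.0f):

void main()
{
   // The cast here doesn't convert a value, it reinterprets the bits of a
   // float as a uint.
   float f = 1.0f;
   uint bits = *cast(uint*)&f;
   assert(bits == 0x3F80_0000);
}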


>> In practice my experience shows that the programmer (me too) sometimes  
>> doesn't have perfect knowledge (usually because the code later was  
>> modified, turning the cast into a bug because casts are often silent).

> You realize that the exact argument can be made about implicit casts.

You are missing something important. Currently this code compiles; it performs a silent implicit cast:

bool foo() { return true; }
void main() {
   int x = foo();
}


Now you change the code so that foo returns a double; the implicit cast stops being accepted and the compiler gives an error:

double foo() { return 1.5; }
void main() {
   int x = foo();
}


The same doesn't happen if you use an explicit cast. Here is the original code as it would look if a cast were required to assign a bool to an int:

bool foo() { return true; }
void main() {
   int x = cast(int)foo();
}


Now if you modify the code so that foo returns a double, the cast keeps silencing the compiler and a possible bug goes unnoticed (you lose information doing double->int, while bool->int doesn't lose information):


double foo() { return 1.5; }
void main() {
   int x = cast(int)foo();
}

---------------------

Don:

> I think stuff like
> int z; z += x > y;
> should ideally require a cast. That's a crazy operation.

If D ints/bools change their semantics in that way, then I agree that BigInt should do the same. But until that moment...

Bye,
bearophile