December 11, 2012
Walter Bright:

> Why stop at 64 bits? Why not make there only be one integral type, and it is of whatever precision is necessary to hold the value? This is quite doable, and has been done.

I think no one has asked for *bignums by default* in this thread.


> But at a terrible performance cost.

Nope, this is a significant fallacy of yours.
Common Lisp (and OCaml) use tagged integers by default, and they are very far from being "terrible". Tagged integers cause no heap allocations as long as the values stay small. The Common Lisp compiler is also able, in various situations, to infer that an integer can't grow too large, replacing it with a fixnum. And it's easy to add annotations in critical spots to ask the Common Lisp compiler to use a fixnum, to squeeze out all the performance.
The result is code that's quick in most situations, but more often correct. In D you drive with your eyes shut; sometimes it's hard for me to know whether an integer overflow has occurred in a long computation.


> And, yes, in D you can create your own "BigInt" datatype which exhibits this behavior.

Currently D's BigInt lacks a small-integer optimization. And even once that library problem is fixed, I think the compiler won't perform on BigInts the optimizations it performs on ints, because it knows nothing about BigInt's properties.

Bye,
bearophile
December 11, 2012
> Besides, at the end of the day, a half-approach would be to have a widest-signed-integral and a widest-unsigned-integral type and only play with those two.

Clarification: to have those two types as the fundamental (i.e., promotion-favored) types, not the sole types in the language.
December 11, 2012
On 12/11/2012 10:36 AM, eles wrote:
> You really miss the point here. Nobody will ask you to promote those numbers to
> 64-bit or whatever *unless necessary*.

No, I don't miss the point. There are very few cases where the compiler could statically prove that something will fit in less than 32 bits.

Consider this:

  Integer foo(Integer i)
  {
    return i * 2;
  }

Tell me how many bits that should be.
December 11, 2012
On 12/11/12 1:36 PM, eles wrote:
> So far the question has received much pushback, but no answer.
>
> A bit shameful.

I thought my answer wasn't all that shoddy and not defensive at all.

Andrei
December 11, 2012
On 12/11/2012 10:45 AM, bearophile wrote:
> Walter Bright:
>
>> Why stop at 64 bits? Why not make there only be one integral type, and it
>> is of whatever precision is necessary to hold the value? This is quite
>> doable, and has been done.
>
> I think no one has asked for *bignums by default* in this thread.

I know they didn't ask. But they did ask for 64 bits, and the exact same
argument will apply to bignums, as I pointed out.

>> But at a terrible performance cost.
> Nope, this is a significant fallacy of yours. Common Lisp (and OCaml) use
> tagged integers by default, and they are very far from being "terrible".
> Tagged integers cause no heap allocations as long as the values stay small.
> The Common Lisp compiler is also able, in various situations, to infer that
> an integer can't grow too large, replacing it with a fixnum. And it's easy
> to add annotations in critical spots to ask the Common Lisp compiler to use
> a fixnum, to squeeze out all the performance.

I don't notice anyone reaching for Lisp or OCaml for high-performance applications.


> The result is code that's quick in most situations, but more often correct.
> In D you drive with your eyes shut; sometimes it's hard for me to know
> whether an integer overflow has occurred in a long computation.
>
>
>> And, yes, in D you can create your own "BigInt" datatype which exhibits
>> this behavior.
>
> Currently D's BigInt lacks a small-integer optimization.

That's irrelevant to this discussion. It is not a problem with the language.
Anyone can improve the library implementation if they desire, or write their own.


> I think the compiler won't perform on BigInts the optimizations it performs
> on ints, because it knows nothing about BigInt's properties.

I think the general lack of interest in bigints indicates that the built-in types work well enough for most work.
December 11, 2012
> I thought my answer wasn't all that shoddy and not defensive at all.

I take it back. I agree. Thank you.
December 11, 2012
On 12/11/2012 10:44 AM, foobar wrote:
> All of the above relies on the assumption that the safety problem is due to the
> memory layout. There are many other programming languages that solve this by
> using a different point of view - the problem lies in the implicit casts and not
> the memory layout. In other words, the culprit is code such as:
> uint a = -1;
> which compiles under C's implicit coercion rules but _really shouldn't_.
> The semantically correct way would be something like:
> uint a = 0xFFFF_FFFF;
> but C/C++ programmers tend to think the "-1" trick is less verbose and "better".

Trick? Not at all.

1. -1 adapts to the size of an int, which varies in C.

2. -i means "complement and then increment".

3. Would you allow 2-1? How about 1-1? (1-1)-1?

Arithmetic in computers is different from the math you learned in school. It's 2's complement, and it's best to always keep that in mind when writing programs.
December 11, 2012
On 12/11/12 5:07 PM, eles wrote:
>> I thought my answer wasn't all that shoddy and not defensive at all.
>
> I take it back. I agree. Thank you.

Somebody convinced somebody else of something on the Net. This has good day written all over it. Time to open that champagne. Cheers!

Andrei
December 11, 2012
On Tuesday, 11 December 2012 at 21:57:38 UTC, Walter Bright wrote:
> On 12/11/2012 10:45 AM, bearophile wrote:
>> Walter Bright:
>>
>>> Why stop at 64 bits? Why not make there only be one integral type, and it
>>> is of whatever precision is necessary to hold the value? This is quite
>>> doable, and has been done.
>>
>> I think no one has asked for *bignums by default* in this thread.
>
> I know they didn't ask. But they did ask for 64 bits, and the exact same
> argument will apply to bignums, as I pointed out.
>

Agreed.

>>> But at a terrible performance cost.
>> Nope, this is a significant fallacy of yours. Common Lisp (and OCaml) use
>> tagged integers by default, and they are very far from being "terrible".
>> Tagged integers cause no heap allocations as long as the values stay small.
>> The Common Lisp compiler is also able, in various situations, to infer that
>> an integer can't grow too large, replacing it with a fixnum. And it's easy
>> to add annotations in critical spots to ask the Common Lisp compiler to use
>> a fixnum, to squeeze out all the performance.
>
> I don't notice anyone reaching for Lisp or OCaml for high-performance applications.
>

I don't know about Common Lisp's performance; I've never tried it on something where that really matters. But OCaml is very performant. I don't know how it handles integers internally.

> That's irrelevant to this discussion. It is not a problem with the language.
> Anyone can improve the library implementation if they desire, or write their own.
>

The library is part of the language. What is a language with no vocabulary?

>> I think the compiler won't perform on BigInts the optimizations it performs
>> on ints, because it knows nothing about BigInt's properties.
>
> I think the general lack of interest in bigints indicates that the built-in types work well enough for most work.

That argument is fallacious. Being more widely used doesn't mean being better; otherwise PHP and C++ would be some of the best languages ever made.
December 11, 2012
> Somebody convinced somebody else of something on the Net.

About the non-defensiveness, that is. As for the ints, I consider the matter controversial, but the balance between the drawbacks and advantages of either choice is more even than it seems.