June 09, 2013
Am Sun, 9 Jun 2013 01:53:23 +0200
schrieb Andrej Mitrovic <andrej.mitrovich@gmail.com>:

> On 6/9/13, bearophile <bearophileHUGS@lycos.com> wrote:
> > The size of "byte" is easy, it's 1 byte, but if you ask me a byte is unsigned. I have learnt to be careful with byte/ubyte in D.
> 
> You, me, and Don, and probably others as well. It's easy to forget, though, that bytes are signed.

I found the integer names and sizes intuitive from the beginning! And there was no confusion for me ever.

If it starts with a 'u', it is unsigned; otherwise it is signed.
And the sizes are common in other programming languages, too.
long is the only potentially confusing one, but since int is already
the 32-bit version, it should be clear that long must be 64-bit.
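
A minimal sketch of both rules (my own illustration, not from the
thread) - the fixed sizes, the 'u' prefix deciding signedness, and the
byte pitfall quoted above:

    static assert(byte.sizeof == 1 && short.sizeof == 2);
    static assert(int.sizeof  == 4 && long.sizeof  == 8);
    static assert(byte.min  == -128 && byte.max  == 127);  // signed
    static assert(ubyte.min == 0    && ubyte.max == 255);  // unsigned

    void main()
    {
        byte b = cast(byte) 0xFF;  // byte is signed in D...
        assert(b == -1);           // ...so the bit pattern 0xFF reads as -1
    }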

Though I can understand people who think of a byte as an unsigned type, I was much more confused by C's signed chars. Oh really? A character can be negative? Is this programming or sociology?

-- 
Marco

June 09, 2013
On 09/06/13 14:03, Jonathan M Davis wrote:
> If I had been designing the language, I might have gone for int8, uint8,
> int16, uint16, etc. (in which case, _all_ of them would have had sizes in
> their names, with no aliases without them - it seems overkill to me to have
> both), but I also don't think that it's a big deal for them to not have the
> numbers either, and I don't understand why anyone would think that it's all
> that hard to learn and remember what the various sizes are.

It's the ghost of problems past, when the sizes of many of the integer/natural types in C were "implementation dependent".  Maybe it only afflicts programmers over a certain age :-)

Platform-dependent macros such as int32, mapping to the appropriate type for each implementation, were a mechanism for making code portable, and old habits die hard.

Peter
PS the numbered int/uint versions would allow "short" and "long" to be removed from the set of keywords (eventually).
PPS I think the numbering paradigm would be good for floating point types as well.  The mathematician in me is unsettled by a digital type called "real" as real numbers can't be represented in digital form - only approximated. So, if it wasn't already too late, I'd go for float32, float64 and float80.
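
For what it's worth, nothing stops anyone from defining the numbered
names today; a hypothetical sketch (these aliases are my own, not part
of the language or Phobos):

    alias int8  = byte;    alias uint8  = ubyte;
    alias int16 = short;   alias uint16 = ushort;
    alias int32 = int;     alias uint32 = uint;
    alias int64 = long;    alias uint64 = ulong;

    alias float32 = float;
    alias float64 = double;
    alias float80 = real;  // only truly 80-bit on x86 (see the reply below)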

June 09, 2013
On Sunday, June 09, 2013 15:40:40 Peter Williams wrote:
> On 09/06/13 14:03, Jonathan M Davis wrote:
> > If I had been designing the language, I might have gone for int8, uint8, int16, uint16, etc. (in which case, _all_ of them would have had sizes in their names, with no aliases without them - it seems overkill to me to have both), but I also don't think that it's a big deal for them to not have the numbers either, and I don't understand why anyone would think that it's all that hard to learn and remember what the various sizes are.
> 
> It's the ghost of problems past, when the sizes of many of the integer/natural types in C were "implementation dependent".  Maybe it only afflicts programmers over a certain age :-)
> 
> Platform-dependent macros such as int32, mapping to the appropriate type for each implementation, were a mechanism for making code portable, and old habits die hard.

I'm well aware of that. I work in C++ for a living and have to deal with variable-sized integral types all the time. But that doesn't mean that it's not easy to learn and remember that D made its integral types fixed size and what that size is for each of them.

> PPS I think the numbering paradigm would be good for floating point types as well.  The mathematician in me is unsettled by a digital type called "real" as real numbers can't be represented in digital form - only approximated. So, if it wasn't already too late, I'd go for float32, float64 and float80.

The size of real is implementation defined. You have no guarantee whatsoever that you even _have_ float80. real is defined to be the largest floating point type provided by the architecture or double - whichever is larger. On x86, that happens to be 80 bits, but it won't necessarily be 80 bits on other architectures.
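
The portable guarantees amount to this (a minimal sketch of mine,
assuming nothing about the target):

    static assert(real.sizeof   >= double.sizeof);   // at least double's size
    static assert(real.mant_dig >= double.mant_dig); // at least its precision

    void main()
    {
        import std.stdio;
        // x86 reports 64 mantissa bits (the 80-bit extended format);
        // other architectures may simply report double's 53.
        writeln("mantissa bits of real on this target: ", real.mant_dig);
    }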

- Jonathan M Davis
June 13, 2013
Slide 61:

> - Be open minded: faster CTFE opens doors
>   - Generating procedural content at compile time
>   - "If you build it, they will come"

There are many applications for compile-time computation; this small thread discusses applications of a compile-time constraint solver:

http://lambda-the-ultimate.org/node/4762

The question:

> C's type system can be seen as a constraint checker; Haskell's as a constraint solver. If you had a general-purpose constraint solver at compile time, what could you use it for other than type checking?
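
Even without a full solver, D's CTFE already covers the "procedural
content at compile time" bullet; a minimal sketch (my own example, not
from the slides):

    // An ordinary function...
    int[] squares(int n)
    {
        int[] result;
        foreach (i; 0 .. n)
            result ~= i * i;
        return result;
    }

    // ...evaluated during compilation: the table is baked into the binary.
    enum table = squares(10);
    static assert(table[4] == 16);  // checked before the program even runs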

Bye,
bearophile
June 14, 2013
On Saturday, 8 June 2013 at 22:55:20 UTC, Walter Bright wrote:
> On 6/8/2013 2:23 PM, bearophile wrote:
>> - D integer types have guaranteed sizes, but
>>   they're not obvious from the name
>> - Why not have int8, uint8, int32, uint32, etc. in
>>   default namespace, encourage their use?
>>
>> I agree. It's hard to guess the size and signedness of types such as byte,
>> ubyte, wchar, dchar. I prefer names that are clearer.
>
> It would only be a trivial problem for 5 minutes, for ex-C/C++ programmers who are used to suffering under variable-sized ints, and it will never be a problem again.
>
> Is it really a problem to guess the size of 'byte'? Or that 'ubyte' is unsigned? Come on, bearophile!
>
> And frankly, int32, long64, etc., have a large 'yech' factor for me.
>
> Besides, if you really do want them,
>
>     import core.stdc.stdint;

It was a relief to go from C++ to D and be guaranteed the sizes. No 'u' means signed, 'u' means unsigned - as simple as it gets. It took me a little while to learn and adjust, but not much.

I didn't know about core.stdc.stdint; I may use it for the few cases where the size is so important that there's no room for choosing the wrong one.
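
For reference, a minimal usage sketch (assuming the standard C99 names,
which that module mirrors as plain aliases of the built-in types):

    import core.stdc.stdint : int8_t, uint8_t, int32_t, uint64_t;

    // The aliases cost nothing; they just make the width explicit:
    static assert(is(int8_t  == byte) && is(uint8_t  == ubyte));
    static assert(is(int32_t == int)  && is(uint64_t == ulong));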

Slam dunk!

--rt