February 20, 2012
On 2/20/2012 3:02 AM, Manu wrote:
> ? I must have misunderstood something... I've never seen a 64bit C compiler
> where 'int' is 64bits.

What are you using in C code for a most efficient integer type?
February 20, 2012
On 20 February 2012 13:16, Walter Bright <newshound2@digitalmars.com> wrote:

> On 2/20/2012 3:02 AM, Manu wrote:
>
>> ? I must have misunderstood something... I've never seen a 64bit C
>> compiler
>> where 'int' is 64bits.
>>
>
> What are you using in C code for a most efficient integer type?
>

#ifdef. No 2 C compilers ever seem to agree.
It's a major problem in C, hence bringing it up here. Even size_t is often
broken in C. I have worked on 64bit systems with 32bit pointers where
size_t was still 64bit, but ptrdiff_t was 32bit (I think PS3 is like this,
but maybe my memory fails me)

I want to be confident when I declare a numeric type that can interact with pointers, and also when I want the native type.
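
As a rough illustration (the alias name below is made up, not anything druntime defines), this is the kind of hand-rolled selection the missing type forces on the D side as well; note that keying off pointer size picks the wrong width on exactly the 64-bit-machine / 32-bit-pointer targets described above:

static if ((void*).sizeof == 8)
    alias long nativeWord;   // assume 64-bit registers
else
    alias int  nativeWord;   // assume 32-bit registers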


February 20, 2012
On 20-02-2012 09:31, Iain Buclaw wrote:
> On 19 February 2012 18:27, Manu<turkeyman@gmail.com>  wrote:
>> On 19 February 2012 20:07, Timon Gehr<timon.gehr@gmx.ch>  wrote:
>>>
>>> On 02/19/2012 03:59 PM, Manu wrote:
>>>>
>>>> Okay, so it came up a couple of times, but the questions is, what are we
>>>> going to do about it?
>>>>
>>>> size_t and ptrdiff_t are incomplete, and represent non-complimentary
>>>> signed/unsigned halves of the requirement.
>>>> There are TWO types needed, register size, and pointer size. Currently,
>>>> these are assumed to be the same, which is a false assumption.
>>>>
>>>> I propose size_t + ssize_t should both exist, and represent the native
>>>> integer size. Also something like ptr_t, and ptrdiff_t should also
>>>> exist, and represent the size of the pointer.
>>>>
>>>> Personally, I don't like the _t notation at all. It doesn't fit the rest
>>>> of the D types, but it's established, so I don't expect it can change.
>>>> But we do need the 2 missing types.
>>>>
>>>> There is also the problem that there is lots of code written using the
>>>> incorrect types. Some time needs to be taken to correct phobos too I
>>>> guess.
>>>
>>>
>>> Currently, size_t is defined to be what you call ptr_t, ptrdiff_t is
>>> present, and what you call size_t/ssize_t does not exist. Under which
>>> circumstances is it important to have a distinct type that denotes the
>>> register size? What kind of code requires such a type? It is unportable.
>>
>>
>> It is just as unportable as size_t its self. The reason you need it is to
>> improve portability, otherwise people need to create arbitrary version mess,
>> which will inevitably be incorrect.
>> Anything from calling convention code, structure layout/packing, copying
>> memory, basically optimising for 64bits at all... I can imagine static
>> branches on the width of that type to select different paths.
>> Even just basic efficiency, using 32bit ints on many 64bit machines require
>> extra sign-extend opcodes after every single load... total waste of cpu
>> time.
>>
>> Currently, if you're running a 64bit system with 32bit pointers, there is
>> absolutely nothing that exists at compile time to tell you you're running a
>> 64bit system, or to declare a variable of the machines native type, which
>> you're crazy if you say is not important information. What's the point of a
>> 64bit machine, if you treat it exactly like a 32bit machine in every aspect?
>
> gdc offers __builtin_machine_(u)int for word size, and
> __builtin_pointer_(u)int for pointer size via gcc.builtins module.
> Nevermind though, it's not quite a "standard" :~)
>

IMHO, it should be.

-- 
- Alex
February 20, 2012
On 20/02/2012 03:44, Artur Skawina wrote:
<snip>
>> Why would you want to do that, as opposed to use one of the pointer types (which is
>> indeed required for GC to work correctly)?
>
> That's how it can be used in *C*.
>
> And the reason it needs to be exposed to D code is for interoperability with C.

IINM, C calling conventions care only about the size of a type, not the name or intrinsic nature of it.  Therefore this is a non-issue.
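
As a rough illustration of that point (c_native_sum is a hypothetical C function, not a real API): on the D side, any integer type of the right size binds to it correctly, whatever that type happens to be called.

// Hypothetical C function taking a pointer-sized unsigned integer and a count;
// D's size_t matches it purely because the sizes and calling convention agree.
extern (C) size_t c_native_sum(const(size_t)* values, size_t count);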

Stewart.
February 20, 2012
On Mon, 20 Feb 2012 11:28:44 -0000, Manu <turkeyman@gmail.com> wrote:

> On 20 February 2012 13:16, Walter Bright <newshound2@digitalmars.com> wrote:
>
>> On 2/20/2012 3:02 AM, Manu wrote:
>>
>>> ? I must have misunderstood something... I've never seen a 64bit C
>>> compiler
>>> where 'int' is 64bits.
>>>
>>
>> What are you using in C code for a most efficient integer type?
>>
>
> #ifdef. No 2 C compilers ever seem to agree.
> It's a major problem in C, hence bringing it up here. Even size_t is often
> broken in C. I have worked on 64bit systems with 32bit pointers where
> size_t was still 64bit, but ptrdiff_t was 32bit (I think PS3 is like this,
> but maybe my memory fails me)
>
> I want to be confident when I declare a numeric type that can interact with
> pointers, and also when I want the native type.

I can imagine situations where you want to explicitly have a numeric type that can hold/interact with pointers, or you need /more/ width than the native/efficient int type.

But, in /all/ other cases surely we want the **compiler** to pick/use the native/most efficient int type/size.  Further, why should we state this explicitly, why shouldn't "int" just /be/ the native/most efficient type (as determined by the compiler during compilation of each/every block of code)... I know, I know, this goes in the face of one of D's initial design decisions - being sure of the width of your types without having to guess or dig in headers for defines etc.. but, remind me why this is a bad idea?

Because, it just seems to me that we want "int" to be the native/most efficient type and we want fixed sized types for special/specific cases (like in struct definitions where alignment/size matters, etc), i.e.

int a;   // native/efficient type
int16 b; // 16 bit int
int32 c; // 32 bit int
int64 d; // 64 bit int
..and so on..

But.. assuming that's not going to change any time soon, we might be able to go the other way.  What if we had a built-in "nint" type, which we could use everywhere we didn't care about integer type width, which resulted in the compiler picking the most efficient/native int width on a case by case basis (code inspection, etc.. not sure of the limits of this).
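
For what it's worth, D's existing fixed-width types already cover the sized half of that scheme under different names; a rough sketch (the aliases are purely illustrative, and nint itself has no equivalent today):

alias short int16;   // D's short is always 16 bits
alias int   int32;   // D's int is always 32 bits
alias long  int64;   // D's long is always 64 bits
// "nint" is the missing piece: it would need compiler support (or at least a
// platform-dependent alias) to track the target's efficient register width.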

Regan

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
February 20, 2012
On 20 February 2012 11:14, Manu <turkeyman@gmail.com> wrote:
> On 20 February 2012 10:31, Iain Buclaw <ibuclaw@ubuntu.com> wrote:
>>
>> On 19 February 2012 18:27, Manu <turkeyman@gmail.com> wrote:
>> > On 19 February 2012 20:07, Timon Gehr <timon.gehr@gmx.ch> wrote:
>> >>
>> >> On 02/19/2012 03:59 PM, Manu wrote:
>> >>>
>> >>> Okay, so it came up a couple of times, but the questions is, what are
>> >>> we
>> >>> going to do about it?
>> >>>
>> >>> size_t and ptrdiff_t are incomplete, and represent non-complimentary
>> >>> signed/unsigned halves of the requirement.
>> >>> There are TWO types needed, register size, and pointer size.
>> >>> Currently,
>> >>> these are assumed to be the same, which is a false assumption.
>> >>>
>> >>> I propose size_t + ssize_t should both exist, and represent the native integer size. Also something like ptr_t, and ptrdiff_t should also exist, and represent the size of the pointer.
>> >>>
>> >>> Personally, I don't like the _t notation at all. It doesn't fit the
>> >>> rest
>> >>> of the D types, but it's established, so I don't expect it can change.
>> >>> But we do need the 2 missing types.
>> >>>
>> >>> There is also the problem that there is lots of code written using the incorrect types. Some time needs to be taken to correct phobos too I guess.
>> >>
>> >>
>> >> Currently, size_t is defined to be what you call ptr_t, ptrdiff_t is present, and what you call size_t/ssize_t does not exist. Under which circumstances is it important to have a distinct type that denotes the register size? What kind of code requires such a type? It is unportable.
>> >
>> >
>> > It is just as unportable as size_t its self. The reason you need it is
>> > to
>> > improve portability, otherwise people need to create arbitrary version
>> > mess,
>> > which will inevitably be incorrect.
>> > Anything from calling convention code, structure layout/packing, copying
>> > memory, basically optimising for 64bits at all... I can imagine static
>> > branches on the width of that type to select different paths.
>> > Even just basic efficiency, using 32bit ints on many 64bit machines
>> > require
>> > extra sign-extend opcodes after every single load... total waste of cpu
>> > time.
>> >
>> > Currently, if you're running a 64bit system with 32bit pointers, there
>> > is
>> > absolutely nothing that exists at compile time to tell you you're
>> > running a
>> > 64bit system, or to declare a variable of the machines native type,
>> > which
>> > you're crazy if you say is not important information. What's the point
>> > of a
>> > 64bit machine, if you treat it exactly like a 32bit machine in every
>> > aspect?
>>
>> gdc offers __builtin_machine_(u)int for word size, and
>> __builtin_pointer_(u)int for pointer size via gcc.builtins module.
>> Nevermind though, it's not quite a "standard" :~)
>
>
> That's beautiful though! Can we alias them, and produce a true D type that represents them? :)
>
> My basic issue with these size_t/c_int/core.stdc... stuff, is that it seems
> the intent is to go out of the way to maintain compatibility with C, at the
> expense of sucking C's messy and poorly defined types into D, which is a
> shame. It just results in D having the same crappy archaic typing problems
> as C.
> I appreciate that the C types should exist for interoperability with C (ie,
> their quirks should be preserved for any given compiler/architecture), but
> I'd also like to see strictly defined types in D with no respect to any C
> counterpart, guaranteed by the language to be exactly what they claim to be,
> and not confused depending which compiler you try to use.
>
> These 2 GCC intrinsics would appear to be precisely what I was looking for at the start of this thread...


Well, as Walter said, these could be aliased in core.stdc.config.
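
A minimal sketch of what such an alias might look like (nativeInt/nativeUInt are illustrative names, not anything druntime defines; the fallback branch assumes register size equals pointer size, which is exactly the assumption this thread is complaining about):

version (GNU)
{
    import gcc.builtins;
    alias __builtin_machine_int  nativeInt;    // register-sized, signed
    alias __builtin_machine_uint nativeUInt;   // register-sized, unsigned
}
else
{
    alias ptrdiff_t nativeInt;    // fallback: assume register == pointer size
    alias size_t    nativeUInt;
}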

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
February 20, 2012
What if the compiler was allowed to optimize to larger types?  The only issue is if people rely on overflow.  That could be fixed by adding a type with a minimum size specified.  This is kind of like C's fast int types. On Feb 20, 2012 8:20 AM, "Regan Heath" <regan@netmail.co.nz> wrote:

> On Mon, 20 Feb 2012 11:28:44 -0000, Manu <turkeyman@gmail.com> wrote:
>
>  On 20 February 2012 13:16, Walter Bright <newshound2@digitalmars.com>
>> wrote:
>>
>>  On 2/20/2012 3:02 AM, Manu wrote:
>>>
>>>  ? I must have misunderstood something... I've never seen a 64bit C
>>>> compiler
>>>> where 'int' is 64bits.
>>>>
>>>>
>>> What are you using in C code for a most efficient integer type?
>>>
>>>
>> #ifdef. No 2 C compilers ever seem to agree.
>> It's a major problem in C, hence bringing it up here. Even size_t is often
>> broken in C. I have worked on 64bit systems with 32bit pointers where
>> size_t was still 64bit, but ptrdiff_t was 32bit (I think PS3 is like this,
>> but maybe my memory fails me)
>>
>> I want to be confident when I declare a numeric type that can interact
>> with
>> pointers, and also when I want the native type.
>>
>
> I can imagine situations where you want to explicitly have a numeric type that can hold/interact with pointers, or you need /more/ width than the native/efficient int type.
>
> But, in /all/ other cases surely we want the **compiler** to pick/use the native/most efficient int type/size.  Further, why should we state this explicitly, why shouldn't "int" just /be/ the native/most efficient type (as determined by the compiler during compilation of each/every block of code)... I know, I know, this goes in the face of one of D's initial design decisions - being sure of the width of your types without having to guess or dig in headers for defines etc.. but, remind me why this is a bad idea?
>
> Because, it just seems to me that we want "int" to be the native/most efficient type and we want fixed sized types for special/specific cases (like in struct definitions where alignment/size matters, etc), i.e.
>
> int a;   // native/efficient type
> int16 b; // 16 bit int
> int32 c; // 32 bit int
> int64 d; // 64 bit int
> ..and so on..
>
> But.. assuming that's not going to change any time soon, we might be able to go the other way.  What if we had a built-in "nint" type, which we could use everywhere we didn't care about integer type width, which resulted in the compiler picking the most efficient/native int width on a case by case basis (code inspection, etc.. not sure of the limits of this).
>
> Regan
>
> --
> Using Opera's revolutionary email client: http://www.opera.com/mail/
>
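
For reference, the C99 "fast" types mentioned above express exactly that contract: at least N bits wide, but whatever width the target finds efficient, and druntime already mirrors them. A minimal usage sketch:

import core.stdc.stdint : int_fast32_t;

int_fast32_t total;   // at least 32 bits; may be wider where that is faster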


February 20, 2012
On 2012-02-20 12:02, Manu wrote:
> On 20 February 2012 02:48, Walter Bright <newshound2@digitalmars.com
> <mailto:newshound2@digitalmars.com>> wrote:
>
>     On 2/19/2012 3:15 PM, Manu wrote:
>
>         Ultimately I don't care, I suspect the prior commitment to
>         size_t and ptrdiff_t
>         can not be changed (although redefining their meaning would not
>         be a breaking
>         change, it just might show some cases of inappropriate usages)
>         I agree that nativeInt should probably be in the standard
>         library, but I'm
>         really not into that name. It's really long and ugly. That said,
>         I basically
>         hate size_t too, it doesn't seem very D-ish, reeks of C
>         mischief... and C stuffs
>         up those types so much. It's not dependable what they actually
>         mean in C (ie.
>         ptr size/native word size) on all compilers I've come in contact
>         with :/
>
>
>     I really think that simply adding c_int and c_uint to
>     core.stdc.config will solve the issue. After all, is there any case
>     where the corresponding C int type would be different from a nativeInt?
>
>
> ? I must have misunderstood something... I've never seen a 64bit C
> compiler where 'int' is 64bits.

According to Wikipedia, two of the four 64-bit data models use 64-bit integers: ILP64 and SILP64:

http://en.wikipedia.org/wiki/64-bit#64-bit_data_models
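
For reference, that table gives roughly these widths (in bits):

  model    int   long   pointer   example
  LLP64    32    32     64        64-bit Windows
  LP64     32    64     64        most 64-bit Unix-like systems
  ILP64    64    64     64
  SILP64   64    64     64        (short is 64-bit as well)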

-- 
/Jacob Carlborg
February 20, 2012
On 20 February 2012 16:03, Iain Buclaw <ibuclaw@ubuntu.com> wrote:

> On 20 February 2012 11:14, Manu <turkeyman@gmail.com> wrote:
> > On 20 February 2012 10:31, Iain Buclaw <ibuclaw@ubuntu.com> wrote:
> >>
> >> On 19 February 2012 18:27, Manu <turkeyman@gmail.com> wrote:
> >> > On 19 February 2012 20:07, Timon Gehr <timon.gehr@gmx.ch> wrote:
> >> >>
> >> >> On 02/19/2012 03:59 PM, Manu wrote:
> >> >>>
> >> >>> Okay, so it came up a couple of times, but the questions is, what
> are
> >> >>> we
> >> >>> going to do about it?
> >> >>>
> >> >>> size_t and ptrdiff_t are incomplete, and represent non-complimentary
> >> >>> signed/unsigned halves of the requirement.
> >> >>> There are TWO types needed, register size, and pointer size.
> >> >>> Currently,
> >> >>> these are assumed to be the same, which is a false assumption.
> >> >>>
> >> >>> I propose size_t + ssize_t should both exist, and represent the
> native
> >> >>> integer size. Also something like ptr_t, and ptrdiff_t should also exist, and represent the size of the pointer.
> >> >>>
> >> >>> Personally, I don't like the _t notation at all. It doesn't fit the
> >> >>> rest
> >> >>> of the D types, but it's established, so I don't expect it can
> change.
> >> >>> But we do need the 2 missing types.
> >> >>>
> >> >>> There is also the problem that there is lots of code written using
> the
> >> >>> incorrect types. Some time needs to be taken to correct phobos too I guess.
> >> >>
> >> >>
> >> >> Currently, size_t is defined to be what you call ptr_t, ptrdiff_t is present, and what you call size_t/ssize_t does not exist. Under which circumstances is it important to have a distinct type that denotes
> the
> >> >> register size? What kind of code requires such a type? It is unportable.
> >> >
> >> >
> >> > It is just as unportable as size_t its self. The reason you need it is
> >> > to
> >> > improve portability, otherwise people need to create arbitrary version
> >> > mess,
> >> > which will inevitably be incorrect.
> >> > Anything from calling convention code, structure layout/packing,
> copying
> >> > memory, basically optimising for 64bits at all... I can imagine static
> >> > branches on the width of that type to select different paths.
> >> > Even just basic efficiency, using 32bit ints on many 64bit machines
> >> > require
> >> > extra sign-extend opcodes after every single load... total waste of
> cpu
> >> > time.
> >> >
> >> > Currently, if you're running a 64bit system with 32bit pointers, there
> >> > is
> >> > absolutely nothing that exists at compile time to tell you you're
> >> > running a
> >> > 64bit system, or to declare a variable of the machines native type,
> >> > which
> >> > you're crazy if you say is not important information. What's the point
> >> > of a
> >> > 64bit machine, if you treat it exactly like a 32bit machine in every
> >> > aspect?
> >>
> >> gdc offers __builtin_machine_(u)int for word size, and
> >> __builtin_pointer_(u)int for pointer size via gcc.builtins module.
> >> Nevermind though, it's not quite a "standard" :~)
> >
> >
> > That's beautiful though! Can we alias them, and produce a true D type
> that
> > represents them? :)
> >
> > My basic issue with these size_t/c_int/core.stdc... stuff, is that it
> seems
> > the intent is to go out of the way to maintain compatibility with C, at
> the
> > expense of sucking C's messy and poorly defined types into D, which is a shame. It just results in D having the same crappy archaic typing
> problems
> > as C.
> > I appreciate that the C types should exist for interoperability with C
> (ie,
> > their quirks should be preserved for any given compiler/architecture),
> but
> > I'd also like to see strictly defined types in D with no respect to any C counterpart, guaranteed by the language to be exactly what they claim to
> be,
> > and not confused depending which compiler you try to use.
> >
> > These 2 GCC intrinsics would appear to be precisely what I was looking
> for
> > at the start of this thread...
>
>
> Well, as Walter said, these could be aliased in core.stdc.config.
>

I don't think they are 'standard c' though ;)


February 20, 2012
On 20 February 2012 16:20, Manu <turkeyman@gmail.com> wrote:
> On 20 February 2012 16:03, Iain Buclaw <ibuclaw@ubuntu.com> wrote:
>>
>> On 20 February 2012 11:14, Manu <turkeyman@gmail.com> wrote:
>> > On 20 February 2012 10:31, Iain Buclaw <ibuclaw@ubuntu.com> wrote:
>> >>
>> >> On 19 February 2012 18:27, Manu <turkeyman@gmail.com> wrote:
>> >> > On 19 February 2012 20:07, Timon Gehr <timon.gehr@gmx.ch> wrote:
>> >> >>
>> >> >> On 02/19/2012 03:59 PM, Manu wrote:
>> >> >>>
>> >> >>> Okay, so it came up a couple of times, but the questions is, what
>> >> >>> are
>> >> >>> we
>> >> >>> going to do about it?
>> >> >>>
>> >> >>> size_t and ptrdiff_t are incomplete, and represent
>> >> >>> non-complimentary
>> >> >>> signed/unsigned halves of the requirement.
>> >> >>> There are TWO types needed, register size, and pointer size.
>> >> >>> Currently,
>> >> >>> these are assumed to be the same, which is a false assumption.
>> >> >>>
>> >> >>> I propose size_t + ssize_t should both exist, and represent the
>> >> >>> native
>> >> >>> integer size. Also something like ptr_t, and ptrdiff_t should also
>> >> >>> exist, and represent the size of the pointer.
>> >> >>>
>> >> >>> Personally, I don't like the _t notation at all. It doesn't fit the
>> >> >>> rest
>> >> >>> of the D types, but it's established, so I don't expect it can
>> >> >>> change.
>> >> >>> But we do need the 2 missing types.
>> >> >>>
>> >> >>> There is also the problem that there is lots of code written using
>> >> >>> the
>> >> >>> incorrect types. Some time needs to be taken to correct phobos too
>> >> >>> I
>> >> >>> guess.
>> >> >>
>> >> >>
>> >> >> Currently, size_t is defined to be what you call ptr_t, ptrdiff_t is
>> >> >> present, and what you call size_t/ssize_t does not exist. Under
>> >> >> which
>> >> >> circumstances is it important to have a distinct type that denotes
>> >> >> the
>> >> >> register size? What kind of code requires such a type? It is
>> >> >> unportable.
>> >> >
>> >> >
>> >> > It is just as unportable as size_t its self. The reason you need it
>> >> > is
>> >> > to
>> >> > improve portability, otherwise people need to create arbitrary
>> >> > version
>> >> > mess,
>> >> > which will inevitably be incorrect.
>> >> > Anything from calling convention code, structure layout/packing,
>> >> > copying
>> >> > memory, basically optimising for 64bits at all... I can imagine
>> >> > static
>> >> > branches on the width of that type to select different paths.
>> >> > Even just basic efficiency, using 32bit ints on many 64bit machines
>> >> > require
>> >> > extra sign-extend opcodes after every single load... total waste of
>> >> > cpu
>> >> > time.
>> >> >
>> >> > Currently, if you're running a 64bit system with 32bit pointers,
>> >> > there
>> >> > is
>> >> > absolutely nothing that exists at compile time to tell you you're
>> >> > running a
>> >> > 64bit system, or to declare a variable of the machines native type,
>> >> > which
>> >> > you're crazy if you say is not important information. What's the
>> >> > point
>> >> > of a
>> >> > 64bit machine, if you treat it exactly like a 32bit machine in every
>> >> > aspect?
>> >>
>> >> gdc offers __builtin_machine_(u)int for word size, and
>> >> __builtin_pointer_(u)int for pointer size via gcc.builtins module.
>> >> Nevermind though, it's not quite a "standard" :~)
>> >
>> >
>> > That's beautiful though! Can we alias them, and produce a true D type
>> > that
>> > represents them? :)
>> >
>> > My basic issue with these size_t/c_int/core.stdc... stuff, is that it
>> > seems
>> > the intent is to go out of the way to maintain compatibility with C, at
>> > the
>> > expense of sucking C's messy and poorly defined types into D, which is a
>> > shame. It just results in D having the same crappy archaic typing
>> > problems
>> > as C.
>> > I appreciate that the C types should exist for interoperability with C
>> > (ie,
>> > their quirks should be preserved for any given compiler/architecture),
>> > but
>> > I'd also like to see strictly defined types in D with no respect to any
>> > C
>> > counterpart, guaranteed by the language to be exactly what they claim to
>> > be,
>> > and not confused depending which compiler you try to use.
>> >
>> > These 2 GCC intrinsics would appear to be precisely what I was looking
>> > for
>> > at the start of this thread...
>>
>>
>> Well, as Walter said, these could be aliased in core.stdc.config.
>
>
> I don't think they are 'standard c' though ;)

OK, I'm just having a trudge through druntime:

intptr_t and uintptr_t are guaranteed to match pointer size. https://bitbucket.org/goshawk/gdc/src/87241c8e754b/d/druntime/core/stdc/stdint.d#cl-70

c_long and c_ulong are guaranteed to match target long size (here
would also go c_int and c_uint ;-).
https://bitbucket.org/goshawk/gdc/src/87241c8e754b/d/druntime/core/stdc/config.d#cl-22

This needs fixing, as wchar_t may not be the same size across all targets
(some targets change the size of wchar_t based on compile-time switches).
https://bitbucket.org/goshawk/gdc/src/87241c8e754b/d/druntime/core/stdc/stddef.d#cl-28

This needs fixing, as wint_t may not be the same size across all targets. https://bitbucket.org/goshawk/gdc/src/87241c8e754b/d/druntime/core/stdc/wchar_.d#cl-29
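
Putting the first two of those together, a minimal sketch of what druntime already guarantees today:

import core.stdc.stdint : intptr_t, uintptr_t;
import core.stdc.config : c_long, c_ulong;

static assert(intptr_t.sizeof == (void*).sizeof);   // matches pointer size by definition
// c_long/c_ulong track the target's C 'long', which need not match either the
// pointer size or the register size.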

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';