January 30, 2012
On 1/29/2012 4:30 PM, Jonathan M Davis wrote:
> But there are
> definitely arguments for having an integral type which is the most efficient for
> whatever machine that it's compiled on, and D doesn't really have that. You'd
> probably have to use something like c_long if you really wanted that.

I believe the notion of "most efficient integer type" was obsolete 10 years ago.

In any case, D is hardly deficient even if such is valid. Just use an alias.

C has varying size for builtin types and fixed size for aliases. D is just the reverse - fixed builtin sizes and varying alias sizes. My experience with both languages is that D's approach is far superior.
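
To make that concrete on the C side, a minimal sketch; it just prints the widths of two builtins next to the fixed-width C99 <stdint.h> aliases, so nothing in it is specific to any one compiler:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Builtin types: the standard only guarantees minimum sizes,
           so the actual width varies across platforms and compilers. */
        printf("int:     %zu bytes\n", sizeof(int));
        printf("long:    %zu bytes\n", sizeof(long));

        /* C99 <stdint.h> aliases: fixed widths everywhere. */
        printf("int32_t: %zu bytes\n", sizeof(int32_t));
        printf("int64_t: %zu bytes\n", sizeof(int64_t));
        return 0;
    }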

C's varying sizes make it clumsy to write portable numeric code, and the varying size of wchar_t is such a disaster that it is completely useless - C++11 had to come up with completely new basic types to support UTF.
January 30, 2012
On Sun, Jan 29, 2012 at 05:57:39PM -0800, Walter Bright wrote:
> On 1/29/2012 3:31 PM, H. S. Teoh wrote:
> >Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu? I think either gcc or C99 actually has a dedicated printf format for size_t, except that C++ doesn't include parts of C99, so you end up with a format-string #ifdef nightmare no matter what you do. I'm so glad that %s takes care of it all in D. Yet another thing D has done right.
> 
> size_t does have a C99 Standard official format %z. The trouble is,
> 
> 1. many compilers *still* don't implement it.

And C++ doesn't officially support C99 (prior to C++11, anyway), but I don't foresee myself doing any major projects in C++11 now that I have something better, i.e., D. I just can't see myself doing any more personal projects in C++, and at my day job we actually migrated from C++ to C a few years ago, and we're still happy we did so. (Don't ask, you don't want to know. When a single function call requires 6 layers of needless abstraction, including a layer involving fwrite, fork, and exec, and when dtors do useful work other than cleanup, it's time to call it quits.)


> 2. that doesn't do you any good for any other typedef's that change size.
> 
> printf is the single biggest nuisance in porting code between 32 and 64 bits.
[...]

It could've been worse, though. We're lucky (most) compiler vendors decided not to make int 64 bits. That alone would've broken 90% of existing C code out there, some in obvious ways and others in subtle ways that you only find out after it's deployed on your client's production system.
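
To put the size_t half of the nuisance in concrete terms, a small C sketch; %zu is the C99 spelling, and the cast is the usual pre-C99 fallback (which can itself truncate on LLP64 targets such as 64-bit Windows):

    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        size_t n = 42;

        /* C99: the 'z' length modifier matches size_t exactly,
           on any platform. */
        printf("%zu\n", n);

        /* Pre-C99 (or C++98) fallback: cast and pick a specifier by
           hand.  Works until size_t outgrows unsigned long, as it
           does on 64-bit Windows. */
        printf("%lu\n", (unsigned long)n);
        return 0;
    }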


T

-- 
Two wrongs don't make a right; but three rights do make a left...
January 30, 2012
On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
> On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
> >long double is 128-bit.
> 
> Sort of. It's 80 bits of useful data with 48 bits of unused padding.

Really?! Ugh. Hopefully D handles it better?


T

-- 
One disk to rule them all, One disk to find them. One disk to bring them all and in the darkness grind them. In the Land of Redmond where the shadows lie. -- The Silicon Valley Tarot
January 30, 2012
On 29 January 2012 22:26, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:
>> long long is 64-bit on 64-bit linux.
>
> Are you sure? I'm _certain_ that we looked at this at work when we were sorting out issues with moving some of our products to 64-bit and found that long long was 128 bits. Checking...
>
> Well, you're right. Now I'm seriously confused. Hmmm...
>
> long double is 128-bit. Maybe that's what threw me off. Well, thanks for correcting me in either case. I thought that I'd had all of that figured out. This is one of the many reasons why I think that any language which defined integers by their relative size instead of their _absolute_ size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake. C's type scheme is nothing but trouble as far as integral sizes go IMHO. printf in particular is one of the more annoying things to worry about with cross-platform development thanks to varying integer sizes. Bleh. Enough of my whining.
>
> In any case, gcc _does_ define __int128 ( http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the question goes, gcc _does_ have 128 bit integers, even if long long isn't 128 bits on 64-bit systems.
>
> - Jonathan M Davis

Can be turned on via compiler switch:

-m128bit-long-double

or set at the configure stage:

--with-long-double-128


Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
January 30, 2012
On 30 January 2012 03:17, Iain Buclaw <ibuclaw@ubuntu.com> wrote:
> On 29 January 2012 22:26, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
>> On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:
>>> long long is 64-bit on 64-bit linux.
>>
>> Are you sure? I'm _certain_ that we looked at this at work when we were sorting out issues with moving some of our products to 64-bit and found that long long was 128 bits. Checking...
>>
>> Well, you're right. Now I'm seriously confused. Hmmm...
>>
>> long double is 128-bit. Maybe that's what threw me off. Well, thanks for correcting me in either case. I thought that I'd had all of that figured out. This is one of the many reasons why I think that any language which defined integers by their relative size instead of their _absolute_ size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake. C's type scheme is nothing but trouble as far as integral sizes go IMHO. printf in particular is one of the more annoying things to worry about with cross-platform development thanks to varying integer sizes. Bleh. Enough of my whining.
>>
>> In any case, gcc _does_ define __int128 ( http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the question goes, gcc _does_ have 128 bit integers, even if long long isn't 128 bits on 64-bit systems.
>>
>> - Jonathan M Davis
>
> Can be turned on via compiler switch:
>
> -m128bit-long-double
>
> or set at the configure stage:
>
> --with-long-double-128
>


Oh wait... I've just re-read that and realised it's to do with reals (it must be 3am here).
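
For anyone else skimming: as far as I can tell, that switch only changes how the 80-bit x87 type is stored and aligned, not its precision. A rough way to check, assuming gcc on x86:

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* Build this once normally and once with -m128bit-long-double.
           The storage size changes (e.g. 12 vs 16 bytes on 32-bit x86),
           but the mantissa stays at 64 bits either way, because the
           value is still the 80-bit x87 extended type underneath. */
        printf("sizeof(long double) = %zu\n", sizeof(long double));
        printf("LDBL_MANT_DIG       = %d\n", LDBL_MANT_DIG);
        return 0;
    }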



-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
January 30, 2012
"H. S. Teoh" <hsteoh@quickfur.ath.cx> wrote in message news:mailman.172.1327892267.25230.digitalmars-d@puremagic.com...
>> Sort of. It's 80 bits of useful data with 48 bits of unused padding.
>
> Really?! Ugh. Hopefully D handles it better?
>

No. D has to be ABI compatible.


January 30, 2012
On 01/30/2012 03:59 AM, H. S. Teoh wrote:
> On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
>> On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
>>> long double is 128-bit.
>>
>> Sort of. It's 80 bits of useful data with 48 bits of unused padding.
>
> Really?! Ugh. Hopefully D handles it better?
>
>
> T
>

It is what the x86 hardware supports.
January 30, 2012
Am 30.01.2012, 03:59 Uhr, schrieb H. S. Teoh <hsteoh@quickfur.ath.cx>:

> On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
>> On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
>> >long double is 128-bit.
>>
>> Sort of. It's 80 bits of useful data with 48 bits of unused padding.
>
> Really?! Ugh. Hopefully D handles it better?
>
>
> T

From Wikipedia:

"On the x86 architecture, most compilers implement long double as the 80-bit extended precision type supported by that hardware (sometimes stored as 12 or 16 bytes to maintain data structure alignment)."

That's all there is to know I think.
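
If you want to see the padding for yourself, here's a quick C sketch (x86/x86-64 assumed; the bytes past the first 10 are simply whatever was in memory, since the hardware only ever writes 80 bits):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        long double x = 1.0L;
        unsigned char bytes[sizeof x];
        memcpy(bytes, &x, sizeof x);

        /* On x86-64 this typically prints 16 bytes; only the first 10
           (the low bytes, little-endian) hold the 80-bit value, the
           rest is alignment padding. */
        printf("sizeof(long double) = %zu\n", sizeof x);
        for (size_t i = 0; i < sizeof x; ++i)
            printf("%02x ", bytes[i]);
        printf("\n");
        return 0;
    }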
January 30, 2012
On Mon, Jan 30, 2012 at 05:00:22PM +0100, Timon Gehr wrote:
> On 01/30/2012 03:59 AM, H. S. Teoh wrote:
> >On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
> >>On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
> >>>long double is 128-bit.
> >>
> >>Sort of. It's 80 bits of useful data with 48 bits of unused padding.
> >
> >Really?! Ugh. Hopefully D handles it better?
> >
> >
> >T
> >
> 
> It is what the x86 hardware supports.

I know, I was referring to the 48 bits of padding. Seems like such a waste.


T

-- 
What do you mean the Internet isn't filled with subliminal messages? What about all those buttons marked "submit"??
January 30, 2012
On 30/01/12 18:06, Marco Leise wrote:
> Am 30.01.2012, 03:59 Uhr, schrieb H. S. Teoh <hsteoh@quickfur.ath.cx>:
>
>> On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
>>> On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
>>> >long double is 128-bit.
>>>
>>> Sort of. It's 80 bits of useful data with 48 bits of unused padding.
>>
>> Really?! Ugh. Hopefully D handles it better?
>>
>>
>> T
>
>  From Wikipedia:
>
> "On the x86 architecture, most compilers implement long double as the
> 80-bit extended precision type supported by that hardware (sometimes
> stored as 12 or 16 bytes to maintain data structure alignment)."
>
> That's all there is to know I think.

Not quite all. An 80-bit double, padded with zeros to 128 bits, is binary compatible with a quadruple real.
(Not much use in practice, as far as I know).