April 04, 2005
Bob W wrote:
> "Ben Hinkle" <ben.hinkle@gmail.com> wrote in message 
>>The Itanium was before its time, I guess.

> The Itanium never existed. Just ask any
> mechanic, housewife, lawyer or his secretary.
> It's either "Pentium inside" or some "..on"
> from the other company. The other company
> made a 64-bit chip to let Chipzilla suffer, so
> Chipzilla will have "64 bit inside" for the
> rest of us.

Actually, Chipzilla wanted to kill off x86 entirely. And AMD et al. They were becoming too cheap and ubiquitous, and there was too much competition. And carrying compatibility was an "unnecessary" expense. Skipping all that would give them massively reduced cost per chip, and ease development considerably. And they could charge unreasonable prices for their new chips.

In their delusions of grandeur, they thought that by having Bill recompile Windows for it, and with a massive campaign targeted at software and computer manufacturers, they'd create a universal belief that x86 is going to disappear Real Soon Now.

And in secret, Chipzilla had a bag of patents that other chip makers would have to lease from them, after Conveniently Prolonged negotiations. Which they can now stuff up their chimney.

What they forgot was that everyone else saw through this. Unix vendors, PC vendors, customers, even Bill had nothing to gain here. All it would result in would be massive grief thanks to the discontinuity, porting, driver writing, confusion, and obsolescence of good programs.

AMD did what a man had to do: get down to the drawing board quickly, and do it right. Right for everybody. And I guess Bill, customers, vendors, and everybody else is happy. Except Chipzilla.
April 04, 2005
"Georg Wrede" <georg.wrede@nospam.org> wrote in message news:42512B29.7010700@nospam.org...
> Bob W wrote:

> Actually, Chipzilla wanted to kill off x86 entirely. And AMD et al. They
> were becoming too cheap ...........................................
> AMD did what a man had to do: get down to the drawing board quickly, and do
> it right. Right for everybody. And I guess Bill, customers, vendors, and
> everybody else is happy. Except Chipzilla.


Good post!

(My Outlook Express font shows me that you are
running some sort of an AMD 64 engine, right?  :)


April 04, 2005
Anders F Björklund wrote:
> Georg Wrede wrote:
> 
>> And I admit, mostly the L or not makes no difference. So one ends up not using L. And then, the one day it does make a difference, one will look everywhere else first - one's own bugs, D bugs, hardware bugs - before noticing it was the missing L _this_ time. And then one gets a hammer and bangs one's head real hard.
> 
> 
> D lives in a world of two schools. The string literals, for instance,
> they are untyped and only spring into existence when you actually do
> assign them to anything. But the two numeric types are "different"...
> 
> To be compatible with C they default to "int" and "double", and
> then you have to either cast them or use the 'L' suffixes to make
> them use "long" or "extended" instead. Annoying, but same as before ?
> 
> 
> BTW; In Java, you get an error when you do "float f = 1.0;"
>      I'm not sure that is all better, but more "helpful"...
>      Would you prefer it if you had to cast your constants ?

Any time I write

numerictype variable = 2.3;

I want the literal implicitly to be taken as being of "numerictype".

I don't want to decorate the literal ANYWHERE ELSE than where I, for some reason, want it to be of an "unexpected" type.


What if I wrote

int v = -7;

and found out that "-7" is first converted to int16, then to int.

Would you blame me for murdering the compiler writer?

I just refuse to see what's so different here.

-------------

Shit! On proofreading I noticed that my "-7" example doesn't even work the way I meant. Maybe I should murder myself instead, and let D have L's all over the source code.

Let's just assume v suddenly holds the value 65529.
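
(A sketch in D of the effect I have in mind - the 16-bit detour is spelled out with a cast here, whereas the hypothetical compiler would take it silently:)

int v = cast(ushort)(-7);  // -7 squeezed through 16 bits: 0xFFF9
assert(v == 65529);        // 65536 - 7 = 65529, not -7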
April 04, 2005
Bob W wrote:
> "Georg Wrede" <georg.wrede@nospam.org> wrote in message news:42512B29.7010700@nospam.org...
> 
>>Bob W wrote:
> 
> 
>>Actually, Chipzilla wanted to kill off x86 entirely. And AMD et al. They were becoming too cheap ...........................................
>>AMD did what a man had to do: get down to the drawing board quickly, and do it right. Right for everybody. And I guess Bill, customers, vendors, and everybody else is happy. Except Chipzilla.
> 
> Good post!

Thanks!

> (My Outlook Express font shows me that you are
> running some sort of an AMD 64 engine, right?  :)

One day I will, for sure.

But currently I have several computers, none of which is more than an 800MHz Pentium, and there's enough horsepower for everything I do.

So I'd have to conjure up an excuse first. Maybe "get a playing machine for the kids" would do.  :-)

Hmm. Maybe buying one just to support and thank them would be the Right Thing to do.

Oh, and while I'm at it, I write all my posts to these newsgroups using W2k. So, no, I'm not a Microsoft hater. I was, however, an Apple hater. Up until OS X.
April 04, 2005
Georg Wrede wrote:

> Any time I write
> 
> numerictype variable = 2.3;
> 
> I want the literal implicitly to be taken as being of "numerictype".

That's how D strings work...

 char[] s = "hello"; // s.ptr now holds: "hello\0"
wchar[] t = "hello"; // t.ptr now holds: "\0h\0e\0l\0l\0o\0\0"

And "hello" is simply untyped.

But it's *not* how numbers work...

> What if I wrote
> 
> int v = -7;
> 
> and found out that "-7" is first converted to int16, then to int.

-7, like all other such small integer literals, is of type "int".

Therefore, you can't assign it to - for instance - an "uint" ?
// cannot implicitly convert expression -7 of type int to uint

And for similar historical reasons, floating literals are "double".


Unfortunately, it's harder to tell double/extended apart than e.g.
byte/short/int/long. (or char/wchar/dchar, for character literals)

Another thing is that a Unicode string can be converted from char[] to
wchar[] without any loss, and the same is true for a (small) integer ?
But it's not true for extended.

For instance, the compiler "knows" e.g. that 0x0FFFFFFFF is an uint and
that 0x100000000 is a long. But it doesn't know what type that 1.0 has.

And since there's no good way to tell, it simply picks the default one.
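
Concretely - assuming I remember the current DMD behaviour right:

 uint u = 0xFFFFFFFF;  // fits in 32 bits, so the literal is typed uint
 long l = 0x100000000; // too big for 32 bits, so it is typed long
 real r = 1.0;         // the literal is a double, widened to real
float f = 1.0;         // the literal is a double, narrowed to float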


But if there was no C legacy, then D literals could probably always
default to "real" and "long", or even unnamed floating and unnamed
integer types - instead of the current choices of "double" and "int".
Then again, there is. (legacy)

--anders
April 04, 2005
Georg Wrede wrote:

> I was, however, an Apple hater. Up until OS X.

Ah, you mean you hate Apple (Mac OS 9)
But that you like NeXT... (Mac OS X)

Confusing, these days. :-)

See this link for a great history timeline:
http://www.kernelthread.com/mac/oshistory/

--anders

PS. Me, I'm an Apple guy. And Linux hacker.
    http://www.algonet.se/~afb/think.txt
April 04, 2005
Anders F Björklund wrote:
> Georg Wrede wrote:
> 
>> Any time I write
>>
>> numerictype variable = 2.3;
>>
>> I want the literal implicitly to be taken as being of "numerictype".
> 
> That's how D strings work...

Right!

> But it's *not* how numbers work...

They should.

>> What if I wrote
>>
>> int v = -7;
>>
>> and found out that "-7" is first converted to int16, then to int.
> 
> -7, like all other such small integer literals, is of type "int".

Yes. But what if they weren't? To rephrase: what if I wrote

uint w = 100000;

and found out that it gets the value 34464 (that is, 100000 truncated to 16 bits: 100000 - 65536).

And the docs would say "decimal literals without decimal point and without minus, are read in as int16, and then cast to the needed type".

And the docs would say "this is for historical reasons bigger than us".

---

Heck, Walter's done bolder things in the past.

And this can't possibly be hard to implement either. I mean, either have the compiler parse it as what is "wanted" (maybe this can't be done with a context-independent lexer, or whatever), or have it parse them as the largest supported type. (This would slow down compiling, but not too much, IMHO.)
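
(A sketch of what I mean by "parse it as the largest supported type" - the explicit L suffix here stands in for what the compiler would do behind the scenes:)

real   r = 2.3L;  // what I'd want a plain 2.3 to mean on this line
double d = 2.3L;  // rounded once to double on assignment
float  f = 2.3L;  // rounded once to float on assignment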
April 04, 2005
"Anders F Björklund" <afb@algonet.se> wrote in message news:d2roel$2g1f$1@digitaldaemon.com...
> Georg Wrede wrote:
>
>> Any time I write
>>
>> numerictype variable = 2.3;
>>
>> I want the literal implicitly to be taken as being of "numerictype".
>
> That's how D strings work...
>
>  char[] s = "hello"; // s.ptr now holds: "hello\0"
> wchar[] t = "hello"; // t.ptr now holds: "\0h\0e\0l\0l\0o\0\0"
>
> And "hello" is simply untyped.
>
> But it's *not* how numbers work...


I cannot see why not:

float  f = 2.3;  // f now holds the float value of 2.3
double d = 2.3;  // d now holds D's default precision value
real   r = 2.3;  // r should now get what it deserves
otherprecision o = 2.3;  // even this should work (if implemented)

That's how numbers (could) work.




> -7, like all other such small integer literals, is of type "int".
>
> Therefore, you can't assign it to - for instance - an "uint" ? // cannot implicitly convert expression -7 of type int to uint
>
> And for similar historical reasons, floating literals are "double".


In C you could assign it to unsigned int.
You probably don't want that feature back in D
just for historical reasons.

But you can assign a value with impaired accuracy
(a double literal widened to extended) and the
compiler stays mute. For historical reasons?
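
(To make that concrete - a small check, assuming the literal really is parsed as a double first:)

real r1 = 2.3;     // parsed as double, then silently widened to real
real r2 = 2.3L;    // parsed as extended from the start
assert(r1 != r2);  // the widened double is not the best 80-bit 2.3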




> Unfortunately, it's harder to tell double/extended apart than e.g. byte/short/int/long. (or char/wchar/dchar, for character literals)


Now tell me how you can possibly tell the difference
between 1, 1 and 1  (0x01  0x0001  0x00000001) ?
You cannot, but the 1 byte, 2 bytes or 4 bytes
somehow tend to find their correct destination.

Or just tell me why you would possibly need to see the
difference between 2.3  2.3  and  2.3  (float double
extended) at compile time ?  Simple solution: parse
the values by assuming the highest implemented precision
and move them into the precision required. It's as simple
as that. Of course you would first have to do away
with that C-compliant legacy parser.
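
(The integer literals already seem to work that way, after all:)

byte  b = 1;  // the same literal "1", stored in 1 byte
short s = 1;  // ... or in 2 bytes
int   i = 1;  // ... or in 4 bytes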




> Another thing is that a Unicode string can be converted from char[] to wchar[] without any loss, and the same is true for a (small) integer ? But it's not true for extended.


But you can always convert an extended to float
or double without harming the resulting value.
That is what the compiler should do if it is
allowed to do so. Why restrict it to just being
able to assign a double to a float?




> For instance, the compiler "knows" e.g. that 0x0FFFFFFFF is an uint and that 0x100000000 is a long. But it doesn't know what type that 1.0 has.

Strange example. How does the compiler know that
0xFFFFFFFF is NOT going to be a long?

I can take your own example: The compiler knows
that 1e300 is a double and that 1e400 is extended.
But it does not know what type 1 has.




> And since there no good way to tell, it simply picks the default one.

And since there is no good way to tell, it picks
(or rather, should pick) the maximum implemented
precision in the case of 2.3. Don't worry, it will
produce the proper double value if that is the
required type.




> But if there was no C legacy, then D literals could probably always default to "real" and "long", or even unnamed floating and unnamed integer types - instead of the current choices of "double" and "int". Then again, there is. (legacy)


Legacy? Why would we need or want this?

I have a huge choice of C compilers if I want legacy.
D, however, I would want to see as user-friendly as
possible, or as modern as possible. This means that
it shouldn't be designed just to accommodate C veterans
(even including myself).



April 04, 2005
Bob W wrote:

>>But it's *not* how numbers work...
[...]
> That's how numbers (could) work.

True, just not how they work yet.

> You probably don't want that feature back in D
> just for historical reasons.

No, not really :-) (Adding a 'U' is simple enough)

> Or just tell me why you possibly need to see the
> difference between 2.3  2.3  and  2.3  (float double
> extended) at compile time ?  Simple solution: parse
> the values by assuming the highest implemented precision
> and move them in the precision required. It's as simple
> as that. Of course you would have to do away first
> with that C compliant legacy perser.

You would have to ask Walter, or someone who knows the parser ?

Me personally, I don't have any problem whatsoever with
2.3 being parsed as the biggest available floating point.

I'm not sure if it's a problem if you have several such constants,
but then again I have just been using "double" for quite a while.

(I mean: if the compiler folds constant expressions, things like that)

> Strange example. How does the compiler know that
> 0xFFFFFFFF is NOT going to be a long?

Okay, it was somewhat farfetched (as my examples tend to be)

But the short answer is the same as with the floating point:
since it would be "0xFFFFFFFFL", if it was a long... :-P

And I wasn't defending it here, just saying that it crops up
with the other types as well - the default type / suffix thing.

> And since there is no good way to tell it picks
> (should pick) the maximum implemented precision
> in case of 2.3 . Don't worry, it will produce
> the proper double value if this is the required
> type.

I'm not worried, it has worked for float for quite some time.
(except in Java, but it tends to whine about a lot of things)

> Legacy? Why would we need or want this?

Beyond link compatibility, it beats me... But so it is...
All I know is that a lot of D features are there because "C does it" ?

--anders
April 04, 2005
Bob W wrote:

> I cannot see why not:
> 
> float  f = 2.3;  // f now holds the float value of 2.3
> double d = 2.3;  // d now holds D's default precison value
> real   r = 2.3;  // r should now get what it deserves
> otherprecision o = 2.3;  // even this should work (if implemented)
> 
> That's how numbers (could) work.

I think Walter mentioned in this thread that he had considered
adding such floating point literals, but hadn't had the time yet ?

  "Suppose it was kept internally with full 80 bit precision,
  participated in constant folding as a full 80 bit type, and was only
  converted to 64 bits when a double literal needed to be actually
  inserted into the .obj file?"

This would make the floating point literals work the same way as the
various string literals do now, without adding more suffixes or
changing defaults.
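
(If I read that right, a literal would then behave roughly like this - a sketch of the proposal, not of how DMD works today:)

real   r = 2.3;  // would get the full 80-bit value, not a widened double
double d = 2.3;  // converted to 64 bits only when the double is emitted
float  f = 2.3;  // likewise rounded just once, from the 80-bit value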


And that I have no problem with; I was just preoccupied with all
that other talk about the non-portable 80-bit float stuff... :-)

Although, one might want to consider keeping L"str" and 1.0f around,
simply because it is less typing than cast(wchar[]) or cast(float) ?

It would also be a nice future addition, for when we have 128-bit
floating point? Then those literals would still be kept at the max...


Sorry if my lame examples somehow suggested I thought it was *good*
that the compiler truncates all floating point literals to doubles.

--anders