July 12, 2004
In article <ccqror$8c3$1@digitaldaemon.com>, Sigbjørn Lund Olsen wrote:
> Matthew wrote:
>> apint doesn't really grab one, does it?
> 
> apint (to me) sounds like arbitrary precision integer, but that's because I did a numerical analysis program using a C++ lib called apfloat :-o

Sounds like "a pint" to me...

Or something resembling apes.

-Antti

-- 
I will not be using Plan 9 in the creation of weapons of mass destruction to be used by nations other than the US.
July 12, 2004
Antti Sykäri wrote:

> In article <ccqror$8c3$1@digitaldaemon.com>, Sigbjørn Lund Olsen wrote:
> 
>>Matthew wrote:
>>
>>>apint doesn't really grab one, does it?
>>
>>apint (to me) sounds like arbitrary precision integer, but that's because I did a numerical analysis program using a C++ lib called apfloat :-o
> 
> 
> Sounds like "a pint" to me...

Hidden benefits!

Cheers,
Sigbjørn Lund Olsen
July 12, 2004
Stephen Waits wrote:
> 
> Hi all,
> 
> Like lots of you, for our portable C++ stuff, we use our own set of typedefs for ints and floats..  uint32, int32, uint16, and so on.
> 
> However, we only use these types when we actually require a specific size.  If, for example, we just need a loop counter or an array index, we always use "int" or "unsigned int" because we can be (fairly) certain that this will be the machine's "native" type and it won't have to go through some extra hoops to access it on, say, a 64 bit machine.  These sorts of unnatural accesses can add up to quite a few cycles.
> 
> [The sad part of this, in C++, is that we can only be "fairly certain" as I stated above.]
> 
> So, in D, we have these types:
> 
> http://www.digitalmars.com/d/type.html
> 
> Which are absolutely great, because in C/C++ we never REALLY knew what size anything was going to be - so that's wonderfully predictable now, and good.  [though the "it may be bigger on some platforms" thing is a bit uncomfortable]
> 
> But what would you use when you don't need something size specific, but instead, just want the most natural integer or floating point type for the target machine?

I think that we need to be careful here; the truth of the matter is that you DO care about the size of your ints, at least within a certain range.  For instance, if you have a loop that has 500 iterations, you might say, "I don't care about the size of my int" ... until you have to run on a machine where 8 bit ints are natural.

So, I would like to reconstruct your argument:

"There are many times where we care that an integer size cover at least a certain range, and don't care if the integer is larger than that."

A number of people have suggested syntaxes where you define the ranges and the compiler figures out the type.  I would suggest that we can arrive at what you're talking about with typedefs, something like (excuse my very-wordy names):

	typedef <whatever> fastest_uint_min16;
	typedef <whatever> fastest_int_min64;

etc.

Now you can write portable code that also has a hair of optimization; you know that you will always have AT LEAST a certain range (so you're safe) but you've also specified that it's ok to use a larger type if that is faster.

CAVEAT: I know that "fastest" may be hard to define...but this is a start, at least...
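
For illustration, a rough sketch of how those aliases might look in D; the version split and the concrete type picks per platform are only guesses, not a recommendation:

// Sketch only: hypothetical "fastest, at least N bits" aliases.
// D_LP64 should be the predefined version for 64-bit pointer targets;
// the choices below are illustrative, not measured.
version (D_LP64)
{
    alias ulong fastest_uint_min16;  // 64-bit target: use the native word
    alias long  fastest_int_min64;
}
else
{
    alias uint fastest_uint_min16;   // 32-bit target: uint is the native word
    alias long fastest_int_min64;    // still needs at least 64 bits of range
}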

July 13, 2004
Russ Lewis wrote:

> Stephen Waits wrote:
>
>>
>> Hi all,
>>
>> Like lots of you, for our portable C++ stuff, we use our own set of typedefs for ints and floats..  uint32, int32, uint16, and so on.
>>
>> However, we only use these types when we actually require a specific size.  If, for example, we just need a loop counter or an array index, we always use "int" or "unsigned int" because we can be (fairly) certain that this will be the machine's "native" type and it won't have to go through some extra hoops to access it on, say, a 64 bit machine.  These sorts of unnatural accesses can add up to quite a few cycles.
>>
>> [The sad part of this, in C++, is that we can only be "fairly certain" as I stated above.]
>>
>> So, in D, we have these types:
>>
>> http://www.digitalmars.com/d/type.html
>>
>> Which are absolutely great, because in C/C++ we never REALLY knew what size anything was going to be - so that's wonderfully predictable now, and good.  [though the "it may be bigger on some platforms" thing is a bit uncomfortable]
>>
>> But what would you use when you don't need something size specific, but instead, just want the most natural integer or floating point type for the target machine?
>
>
> I think that we need to be careful here; the truth of the matter is that you DO care about the size of your ints, at least within a certain range.  For instance, if you have a loop that has 500 iterations, you might say, "I don't care about the size of my int" ... until you have to run on a machine where 8 bit ints are natural.
>
> So, I would like to reconstruct your argument:
>
> "There are many times where we care that an integer size cover at least a certain range, and don't care if the integer is larger than that."
>
> A number of people have suggested syntaxes where you define the ranges and the compiler figures out the type.  I would suggest that we can arrive at what you're talking about with typedefs, something like (excuse my very-wordy names):
>
>     typedef <whatever> fastest_uint_min16;
>     typedef <whatever> fastest_int_min64;
>
> etc.
>
> Now you can write portable code that also has a hair of optimization; you know that you will always have AT LEAST a certain range (so you're safe) but you've also specified that it's ok to use a larger type if that is faster.
>
> CAVEAT: I know that "fastest" may be hard to define...but this is a start, at least...
>
Or we could go the Ada route, which allows you to specify the range of the data type.   We don't care what the internals are.  That has so many other advantages; I don't know why it never made it into D.

However, I think that getting the largest efficient type for the current system is mainly useful for things like copying large chunks of data, not for iterating through loops (unless it's used in the copy, of course).   In these cases you still need to know the size of the data type you're dealing with, so that you can trim the algorithm off at the edges of the block.
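
Roughly what I have in mind, as a sketch in D (copyBlock is a made-up name; alignment, overlap and error handling are ignored), with size_t standing in for the "largest efficient" type:

// Copy with the native word, then trim the tail off per byte.
void copyBlock(void* dst, void* src, size_t nbytes)
{
    size_t nwords = nbytes / size_t.sizeof;
    size_t tail   = nbytes % size_t.sizeof;

    size_t* d = cast(size_t*) dst;
    size_t* s = cast(size_t*) src;
    for (size_t i = 0; i < nwords; i++)
        d[i] = s[i];                 // bulk of the block, one word at a time

    ubyte* db = cast(ubyte*) dst + nwords * size_t.sizeof;
    ubyte* sb = cast(ubyte*) src + nwords * size_t.sizeof;
    for (size_t i = 0; i < tail; i++)
        db[i] = sb[i];               // the edges of the block, per byte
}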

-- 
-Anderson: http://badmama.com.au/~anderson/
July 17, 2004
> Or we could go the Ada route, which allows you to specify the range of the data type.   We don't care what the internals are.  That has so many other advantages; I don't know why it never made it into D.

There's something to be said for this system. Pascal uses it too, for example

type myint = 1..10; //int with 10 elements numbered 1 - 10
type myint2 = -1..255; //byte-like, with a single negative element

The compiler uses the smallest native type that the range fits into. Pascal also has:

byte - 8bit unsigned int
word - 16bit unsigned int
longword - 32bit unsigned int

also the less well named:

shortint - 8bit signed int
smallint - 16bit signed int
longint - 32bit signed int
int64 - 64bit signed integer (Delphi 4 onwards)

But also:

integer - native signed int (16bit on Win3.1, 32bit for Win32)
cardinal - native unsigned int (16bit on Win3.1, 32bit for Win32)

The last two change with each platform, in line with the processor's natural word size.

Matt

July 17, 2004
Isn't this something that can be done by a library?

"me" <memsom@interalpha.co.uk> wrote in message news:cd9rlk$p0c$1@digitaldaemon.com...
> > Or we could go the Ada route, which allows you to specify the range of the data type.   We don't care what the internals are.  That has so many other advantages; I don't know why it never made it into D.
>
> There's something to be said for this system. Pascal uses it too, for example
>
> type myint = 1..10; //int with 10 elements numbered 1 - 10
> type myint2 = -1..255; //byte-like, with a single negative element
>
> The compiler uses the smallest native type that the range fits into.
> Pascal also has:
>
> byte - 8bit unsigned int
> word - 16bit unsigned int
> longword - 32bit unsigned int
>
> also the less well named:
>
> shortint - 8bit signed int
> smallint - 16bit signed int
> longint - 32bit signed int
> int64 - 64bit signed integer (Delphi 4 onwards)
>
> But also:
>
> integer - native signed int (16bit on Win3.1, 32bit for Win32)
> cardinal - native unsigned int (16bit on Win3.1, 32bit for Win32)
>
> The last two change with each platform inline with the processor.
>
> Matt
>


July 17, 2004
Matthew Wilson wrote:

>Isn't this something that can be done by a library?
>  
>

I can't see it being done neatly and efficiently by a library, particularly with static type checking, being able to use ranges as parameters to standard arrays, etc.



-- 
-Anderson: http://badmama.com.au/~anderson/
July 17, 2004
"J Anderson" <REMOVEanderson@badmama.com.au> wrote in message news:cdatag$1788$1@digitaldaemon.com...
> Matthew Wilson wrote:
>
> >Isn't this something that can be done by a library?
> >
> >
>
> I can't see it being done neatly and efficiently by a library, particularly with static type checking, being able to use ranges as parameters to standard arrays, etc.

Please give more info. Can you show a couple of examples that support your case?




July 18, 2004
> > I can't see it being done neatly and efficiently by a library.  Particularly with static type checking, being able to use ranges as parameters to standard arrays, etc.
>
> Please give more info. Can you show a couple of examples that support your case?

Excuse the Pascal...

//define a subscript
type myrange = -1..255; //new type defined - the compiler will use a smallint to store it
//use this for your array
type myarray = array[myrange] of string; //257 strings
//create a var using array
var message_text: myarray = ('error', ...etc...., 'another string');

Pascal also allows:

type myenum = (meError, meUp, meDown, meLeft, meRight, meUndefined);
//meError's ordinal value is 0..
type myenumarray = array[myenum] of sometype;

another example... using the 'in' operator to test if a value is 'in' a range/set.

type myrange = 0..16;
var t: integer = 15;

if t in [Low(myrange)..High(myrange)] then
  ; //do something

I'm not clear how this could be cleanly done in a library...

Matt

July 19, 2004
Matthew Wilson wrote:

> "J Anderson" <REMOVEanderson@badmama.com.au> wrote in message
> news:cdatag$1788$1@digitaldaemon.com...
>
>> Matthew Wilson wrote:
>>
>>> Isn't this something that can be done by a library?
>>
>> I can't see it being done neatly and efficiently by a library,
>> particularly with static type checking, being able to use ranges as
>> parameters to standard arrays, etc.
>
> Please give more info. Can you show a couple of examples that support
> your case?

Something like:

range tank as int = 1..12;
range halftank as tank = 1..6;

tank tank1;
halftank tank2 = 4;

tank1 = tank2; //ok
tank2 = tank1; //compile time error

halftank[halftank] myarray;

myarray[tank2] = 8; //compile time error because of the assigned value
myarray[tank1] = 4; //compile time error because of the index

halftank[halftank] myarray2;

myarray[halftank] = myarray2[halftank]; //copy the whole range

...

void GetRange (range r)
{
   return array[r];
}


I can't see how all this can be done neatly by a library with compile-time checks.
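
The closest I can picture a library getting is something like this rough template sketch (all the names are made up); it can assert the bounds, but only at run time, and it gives none of the compile-time errors above:

// Library-only ranged integer: bounds are checked with assert at run
// time, which is exactly the limitation being argued here.
struct Ranged(int min, int max)
{
    int value = min;

    void set(int v)
    {
        assert(v >= min && v <= max);  // run-time check, not compile-time
        value = v;
    }
}

alias Ranged!(1, 12) Tank;
alias Ranged!(1, 6)  HalfTank;

void main()
{
    Tank t;
    HalfTank h;
    h.set(4);
    t.set(h.value);  // fine: 1..6 always fits in 1..12
    h.set(t.value);  // compiles anyway; can only fail at run time
}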



-- 
-Anderson: http://badmama.com.au/~anderson/