May 11, 2002
On Fri, 10 May 2002 14:10:18 -0700, Russell Borogove <kaleja@estarcion.com> wrote:
> 
> I use unsigned vs. signed to control the behavior of shifts, not the behavior of overflows.
> 
A shift is an operation performed on a bit pattern - not a number. It knows nothing of signed-ness.

Therefore:
    int val;
    val = -1;
    val >>= 2;   //shifts like C's unsigned
    val += 2;     // Adds like C's 'signed'

Shifting a signed value is actually multiplication (or division), and if that's what you want, that's what you should say.

Karl Bochert



May 11, 2002
On Fri, 10 May 2002 20:39:29 +0200, "OddesE" <OddesE_XYZ@hotmail.com> wrote:

> > Get rid of unsigned entirely. It adds nothing but confusion. Its very use implies that overflow behavior is important!
> >
> > Karl Bochert
> >

> 
> Mmmm....
> It doubles the range of a type without any loss!

That was important when memory was small.

> If a value is *always* positive (the index of an array)
> why not express this using an unsigned type?
> 

I used to feel the same way -- 'unsigned' was a sort of contract
with myself. It's probably better (but not good) to use a comment:
   int i;  //an index -- must be positive.

It's true that:

    int i;
    i -= 1;
    arr[i];

will produce odd results, but then so (probably) would:

    unsigned int i;
    i -= 1;
    arr[i];

The more meaningful distinction between 'index' and 'sum' is that the
former is (should be) an ordinal. If D used ordinal indexes, then it might be
useful to have an 'unsigned' to declare them, but even that would be
marginal.

Karl Bochert



May 11, 2002
Karl Bochert wrote:

> I used to feel the same way -- 'unsigned' was a sort of contract
> with myself. It's probably better (but not good) to use a comment:
>    int i;  //an index -- must be positive.

D allows you to make explicit contracts between yourself and the compiler, which I think is a Good Thing.  Isn't 'unsigned' just a contract with the compiler that the variable should never go negative?

> It's true that:
>
>     int i;
>     i -= 1;
>     arr[i];

Frankly, I think that if the compiler can detect that you're going to subtract from 0 on an unsigned number, it should register that as a contract violation.

> will produce odd results, but then so (probably) would:
>
>     unsigned int i;
>     i -= 1;
>     arr[i];
>
> The more meaningful distinction between 'index' and 'sum' is that the
> former is (should be) an ordinal. If D used ordinal indexes, then it might be
> useful to have an 'unsigned' to declare them, but even that would be
> marginal.

--
The Villagers are Online! http://villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]


May 11, 2002
"Karl Bochert" <kbochert@ix.netcom.com> wrote in message news:1103_1021079639@bose...
> On Fri, 10 May 2002 14:10:18 -0700, Russell Borogove <kaleja@estarcion.com> wrote:
> >
> > I use unsigned vs. signed to control the behavior of shifts, not the behavior of overflows.
> >
> A shift is an operation performed on a bit pattern - not a number. It knows nothing of signed-ness.
>
> Therefore:
>     int val;
>     val = -1;
>     val >>= 2;   //shifts like C's unsigned
>     val += 2;     // Adds like C's 'signed'
>
> Shifting a signed value is actually multiplication (or division), and if that's what you want, that's what you should say.
>
> Karl Bochert
>


I disagree. The effects of a shift might be the same as
a multiplication or a division, but the operation sure
is different. It is an old optimisation trick to use
shifts instead of multiplications to gain speed:

int i = 10;
// ... Draw some graphics
// Skip to the next scanline (on a 640x480 display)
i = (i << 9) + (i << 7);  // Same as i = i * 640 but faster

A lot of game graphics actually require that the width of textures or sprites be a multiple of 2, or even a power of two, to ease the use of these kinds of tricks.

The same goes for memory and unsigned numbers.
Sure we have got lots of memory these days, but our games
also require more and more. Why waste memory or range of
numbers when it is not necessary?


--
Stijn
OddesE_XYZ@hotmail.com
http://OddesE.cjb.net
_________________________________________________
Remove _XYZ from my address when replying by mail




May 11, 2002
OddesE wrote:
> "Karl Bochert" <kbochert@ix.netcom.com> wrote in message
> news:1103_1021079639@bose...
>>Shifting a signed value is actually multiplication (or division), and if
>>that's what you want, that's what you should say.
>>
> 
> I disagree. The effects of a shift might be the same as
> a multiplication or a division, but the operation sure
> is different. It is an old optimisation trick to use
> shifts instead of multiplications to gain speed:

I believe Karl's position is that such things are
for the compiler to optimize[1], not the programmer;
although it's possible for the programmer to know
that the divisor is a power of two in situations
where the compiler doesn't.

-Russell B

[1] And whose responsibility is it to optimize shift
versus multiply on a Pentium 4, where multiplies may
well be faster than shifts?






May 12, 2002
"Russ Lewis" <spamhole-2001-07-16@deming-os.org> wrote in message news:3CDD170F.80589F76@deming-os.org...
> Karl Bochert wrote:
>
> > I used to feel the same way -- 'unsigned' was a sort of contract
> > with myself. It's probably better (but not good) to use a comment:
> >    int i;  //an index -- must be positive.
>
> D allows you to make explicit contracts between yourself and the compiler, which I think is a Good Thing.  Isn't 'unsigned' just a contract with the compiler that the variable should never go negative?
>
> > It's true that:
> >
> >     int i;
> >     i -= 1;
> >     arr[i];
>
> Frankly, I think that if the compiler can detect that you're going to subtract from 0 on an unsigned number, it should register that as a contract violation.

Yes.  Walter has already acknowledged the advantages of range variables and IIRC agreed to put them in version 2. Unsigned is essentially a range restriction (>= 0) on an integer.  (In fact, the ability to eliminate the whole "unsigned" issue and its corresponding syntax is an argument for supporting ranges in version 1.)  Without any further syntax, range restrictions are a kind of shortcut for equivalent design-by-contract constructs, so any detected violation of such a range restriction should be treated like an assertion violation.

Note that with additional syntax, range variables become much more useful, but that is a different topic.

One other note.  If it wasn't so bloody inconvenient, a language could require ranges on all its integer variables and thus eliminate the whole short/long/int/longlong/double/verylong/doublelong/ultralong/ etc. mess. Just specify the range and let the compiler figure out how much storage it needs.

--
 - Stephen Fuld
   e-mail address disguised to prevent spam


May 12, 2002
> > Frankly, I think that if the compiler can detect that you're going to subtract from 0 on an unsigned number, it should register that as a contract violation.
>
> Yes.  Walter has already acknowledged the advantages of range variables and IIRC agreed to put them in version 2. Unsigned is essentially a range restriction (>= 0) on an integer.  (In fact, the ability to eliminate the whole "unsigned" issue and its corresponding syntax is an argument for supporting ranges in version 1.)  Without any further syntax, range restrictions are a kind of shortcut for equivalent design-by-contract constructs, so any detected violation of such a range restriction should be treated like an assertion violation.

Maybe you could have it clamp the value to the limits of the target range instead of just lopping off the top bits.  Note that this is more like the behavior of casting float to int or int to float (never mind that mostly the hardware does the remapping): something is remapping a value of one type into a possible value of the other type.  Some information may have to be lost; I think the part of the value that goes outside the range should be lost, but the result should become the most extreme value possible (the closest one can get to the original value).

For ints, which have no NaN, you could use 0 or perhaps 0x80000000.  For floats, I'd have it check the range when converting and, if the value is beyond the capabilities of the target int, turn it into MININT or MAXINT (-0x80000000 through 0x7FFFFFFF) instead of the low 32 bits of the integer representation of the float.  Or one could use the old cast behavior to get bits converted in the fastest way possible (usually by lopping off extra high bits), which is useful for carving up data but loses a different kind of information -- actually the most important part of the information in most cases.

You could do for instance

int a = 65536;
short b = saturate(short) a; // value is 32767
short c = cast(short) a;    // value is 0

or even

enum tristate { false, maybe, true };
tristate res1 = saturate(tristate) -45;  // value is false
tristate res2 = saturate(tristate) 57;  // value is true
tristate res3 = saturate(tristate) 1.1f;  // value is maybe

But actually it'd be nice if you could establish an attribute which lets the compiler know a particular variable always needs to be clamped or saturated to the maximum range; that way you wouldn't need the cast, it'd be implicit.  Kinda like const or volatile in C++.

Does D use the same syntax for dynamic casts as for static casts, i.e. if (cast(ObjDerived)myobj)?  I seem to recall it uses special properties like "a.IsDerived(B)" or something.  Maybe saturated could be one of those special properties.

Perhaps we can have some compile time mojo that works like so

byte a;
a.variable_saturated() = true;
int x;
a = x;   // saturates
a.variable_saturated() = false;
a = x;  // doesn't saturate

variable_saturated would have to be assigned a compile-time constant as its value; that would be our restriction.  Tweaking this bit, which actually exists in the compiler, would change the subsequent semantic processing.  I don't know if you want your semantic processor to have state, or have that state modified by the program being compiled like this.

Another alternative is to use the same method public/private/etc use.

struct pixelRGBA
{
  saturated:
    ubyte R,G,B,A;
};

Anyone like this?

Sean

> Note that with additional syntax, range variables become much more useful,
> but that is a different topic.

Cool

> One other note.  If it wasn't so bloody inconvenient, a language could require ranges on all its integer variables and thus eliminate the whole short/long/int/longlong/double/verylong/doublelong/ultralong/ etc. mess. Just specify the range and let the compiler figure out how much storage it needs.

Good idea.


May 12, 2002
"Sean L. Palmer" <spalmer@iname.com> wrote in message news:abl6qj$1fd5$1@digitaldaemon.com...
> > > Frankly, I think that if the compiler can detect that you're going to subtract from 0 on an unsigned number, it should register that as a contract violation.
> >
> > Yes.  Walter has already acknowledged the advantages of range variables and IIRC agreed to put them in version 2. Unsigned is essentially a range restriction (>= 0) on an integer.  (In fact, the ability to eliminate the whole "unsigned" issue and its corresponding syntax is an argument for supporting ranges in version 1.)  Without any further syntax, range restrictions are a kind of shortcut for equivalent design-by-contract constructs, so any detected violation of such a range restriction should be treated like an assertion violation.
>
> Maybe you could have it clamp the value to the limits of the target range instead of just lopping off the top bits.  Note that this is more like the behavior of casting float to int or int to float (never mind that mostly the hardware does the remapping): something is remapping a value of one type into a possible value of the other type.  Some information may have to be lost; I think the part of the value that goes outside the range should be lost, but the result should become the most extreme value possible (the closest one can get to the original value).
>
> For ints, which have no NaN, you could use 0 or perhaps 0x80000000.  For floats, I'd have it check the range when converting and, if the value is beyond the capabilities of the target int, turn it into MININT or MAXINT (-0x80000000 through 0x7FFFFFFF) instead of the low 32 bits of the integer representation of the float.  Or one could use the old cast behavior to get bits converted in the fastest way possible (usually by lopping off extra high bits), which is useful for carving up data but loses a different kind of information -- actually the most important part of the information in most cases.
>
> You could do for instance
>
> int a = 65536;
> short b = saturate(short) a; // value is 32767
> short c = cast(short) a;    // value is 0
>
> or even
>
> enum tristate { false, maybe, true };
> tristate res1 = saturate(tristate) -45;  // value is false
> tristate res2 = saturate(tristate) 57;  // value is true
> tristate res3 = saturate(tristate) 1.1f;  // value is maybe
>
> But actually it'd be nice if you could establish an attribute which lets the compiler know a particular variable always needs to be clamped or saturated to the maximum range; that way you wouldn't need the cast, it'd be implicit.  Kinda like const or volatile in C++.
>
> Does D use the same syntax for dynamic casts as for static casts, i.e. if (cast(ObjDerived)myobj)?  I seem to recall it uses special properties like "a.IsDerived(B)" or something.  Maybe saturated could be one of those special properties.
>
> Perhaps we can have some compile time mojo that works like so
>
> byte a;
> a.variable_saturated() = true;
> int x;
> a = x;   // saturates
> a.variable_saturated() = false;
> a = x;  // doesn't saturate
>
> variable_saturated would have to be assigned a compile-time constant as its value; that would be our restriction.  Tweaking this bit, which actually exists in the compiler, would change the subsequent semantic processing.  I don't know if you want your semantic processor to have state, or have that state modified by the program being compiled like this.
>
> Another alternative is to use the same method public/private/etc use.
>
> struct pixelRGBA
> {
>   saturated:
>     ubyte R,G,B,A;
> };
>
> Anyone like this?

Assuming that range is going to be a possible attribute in the declaration, it seems that saturated should be a "modifier" of range: it changes what happens when the variable goes out of range (clamp it instead of throwing an exception).  That would allow saturating at any value, both for the min and the max.

I don't know if saturating is enough of an advantage to be worth putting in the language, but it wouldn't be hard to implement if you are doing ranges already and shouldn't be too costly for the tests in the resulting code.

--
 - Stephen Fuld
   e-mail address disguised to prevent spam

