Thread overview
New to group, but have some suggestions
Stephen Fuld (Mar 22, 2002)
Russell Borogove (Mar 22, 2002)
Walter (Mar 22, 2002)
Pavel Minayev (Mar 23, 2002)
Walter (Mar 22, 2002)
Serge K (Mar 22, 2002)
Walter (Mar 22, 2002)
Russell Borogove (Mar 23, 2002)
Walter (Mar 23, 2002)
Pavel Minayev (Mar 23, 2002)
Stephen Fuld (Mar 23, 2002)
Walter (Mar 23, 2002)
Russ Lewis (Mar 25, 2002)
Russell Borogove (Mar 25, 2002)
OddesE (Mar 26, 2002)
Pavel Minayev (Mar 23, 2002)
Stephen Fuld (Mar 23, 2002)
Pavel Minayev (Mar 23, 2002)
Walter (Mar 23, 2002)
Pavel Minayev (Mar 23, 2002)
Walter (Mar 25, 2002)
Walter (Mar 23, 2002)
Christophe Bouchon (Mar 23, 2002)
Stephen Fuld (Mar 23, 2002)
Sean L. Palmer (Mar 23, 2002)
Pavel Minayev (Mar 23, 2002)
Stephen Fuld (Mar 23, 2002)
Sean L. Palmer (Mar 25, 2002)
Walter (Mar 29, 2002)
Walter (Mar 29, 2002)
Pavel Minayev (Mar 23, 2002)
Christophe Bouchon (Mar 23, 2002)
March 22, 2002
I am relatively new to this group.  I read the article in DDJ and have been lurking here for a few weeks to get a feel for the way things are done here. I think D has great potential.  I am making the following suggestions not from the perspective of converting programs from C or C++ (sometimes aptly called C double cross), but from the perspective of sometime in the future when D is the prevalent first language and C and C++ are relegated to legacy status. :-)  If any of these have been hashed out before, my apologies.

1.    Variable type names.  I know that short, long, etc. are a C heritage, but they have and will lead to confusion.  There was confusion over how long int was when we went from 16 bit to 32 bit computers.  There was a lot of discussion in the C standards group over what to call 64 bit integers when they became prevalent.  I am quite sure that we will go through the same thing when 128 bit variables become common.  What will D call them? "LongLong", "DoubleLong", "Quad", "ExtraLong", "DoubleTallSkinnyDecaf"? :-) Given that you have already bitten the bullet and defined integers to be 8, 16, 32, etc. bits exactly (that is, you are ruling out support for systems with 36 bit words, etc.), why not just use names that reflect what they truly are?  That is, "Int16", "Int32", etc.  If you did that, is there any doubt what the type name of a future 128 bit integer is?  Of course, you would add the prefix U for unsigned.  Similarly, you would have "Float32", etc.  I understand that this might be "jarring" to C programmers converting, but you could support the old forms as "deprecated" conversion aids.  Note that another advantage is that this also makes the potential future implementation of at least some "big num" stuff almost syntactically transparent.
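As a rough illustration of the naming scheme (a sketch only, emulating the proposed names with aliases on D's existing types rather than changing the language):

    alias byte    Int8;      // sketch: the proposed width-named types,
    alias short   Int16;     // emulated as aliases of D's current keywords
    alias int     Int32;
    alias long    Int64;
    alias ubyte   UInt8;
    alias ushort  UInt16;
    alias uint    UInt32;
    alias ulong   UInt64;
    alias float   Float32;
    alias double  Float64;

    Int32   count;       // the width is unambiguous on any platform
    UInt64  fileSize;
    Float32 ratio;       // ...and a future 128-bit integer is obviously Int128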

2.    I agree that the printf holdover from C has lots of problems.  But even if it isn't replaced, and especially if it is, one of my pet gripes about most programming languages is that they make it hard for humans to read the values of large integers and non-exponentially notated floats.  For example, which of the following is easier to "get" a quick feel for the magnitude of

    The answer is 875639241357                        or

    The answer is 875,639,241,357

Most languages make it very hard to put in the comma separators that make reading so much easier.  In order to allow this without breaking anything, I propose that the format string be enhanced to allow a comma where the period is now.  Using a comma there would format the number with comma separation every three digits.  Of course, extra space would have to be allowed when calculating print spacing, etc., but that is pretty easy.
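The grouping itself is also easy to supply as an ordinary library helper while waiting for any printf change; a minimal sketch in D (the helper name is made up for illustration):

    import std.conv : to;

    // Sketch only: group an integer's digits with commas, so that
    // withCommas(875639241357) yields "875,639,241,357".
    string withCommas(long n)
    {
        string digits = to!string(n < 0 ? -n : n);   // ignoring the long.min edge case
        string grouped;
        foreach (i, c; digits)
        {
            if (i != 0 && (digits.length - i) % 3 == 0)
                grouped ~= ',';
            grouped ~= c;
        }
        return (n < 0 ? "-" : "") ~ grouped;
    }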

Along similar lines, and while we are discussing ease of reading of computer output by humans, consider the following.

    Amount Due $         36.20                            or

    Amount Due          $36.20

The second is easier to read.  This could be trivially implemented by again enhancing the format string.  Currently, if you put a zero before the type specifier, it zero fills on the left instead of blank filling to the left of the value.  If you allowed a dollar sign in addition to the zero, it could indicate that the currency sign should float to the last non-blank position.

Some considerations would probably have to be made in these for internationalization.  These are the two most important and easiest to implement additional formatting functions.  I am not suggesting that D implement all the flexibility of COBOL's picture clause, just the most significant parts.  However, if you want to do more, there is more that could be done pretty easily.
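The floating currency sign can likewise be sketched as a plain helper, ignoring internationalization for the moment (the function name here is hypothetical):

    import std.format : format;

    // Sketch only: right-justify an amount in a fixed-width field with the
    // currency symbol kept against the first digit, so that
    // floatCurrency(36.20, 15) yields "         $36.20".
    string floatCurrency(double amount, size_t width, string symbol = "$")
    {
        string s = symbol ~ format("%.2f", amount);
        while (s.length < width)
            s = " " ~ s;        // pad on the left; the sign stays attached
        return s;
    }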

3.    I took the liberty of posting a link on the comp.arch newsgroup.  One poster noted that the run time model for D currently prohibits its running on 64 bit systems (more properly on systems that allow greater than 32 bit pointers).  Given the desire to keep the run time model constant across architectures, and given the relatively imminent implementation of 64 bit X86 architectures, with which you would want to be compatible, I think you should fix that.  Allowing a few more bytes of memory is a pretty trivial price to pay for not having to worry about this for a long time.

Related to that, I think it would be useful to D if you, Walter, posted on comp.arch a request for comments and input.  There are a lot of people there who are quite knowledgeable about what language features allow or prevent compilers from generating code that runs well on the latest and next generations of processors, and who have lots of experience in high performance computing, floating point handling with all of its special cases, exception handling, etc.  If you requested input on that stuff especially, as well as any other input, I think you would be vastly rewarded.


--
 - Stephen Fuld
   e-mail address disguised to prevent spam


March 22, 2002
Stephen Fuld wrote:
> 1.    Variable type names.  I know that short, long, etc. are a C heritage,
> but they have and will lead to confusion.  There was confusion over how long
> int was when we went from 16 bit to 32 bit computers. 
> [snip details of int8, int16, int32, int64, uint8...,
> float32... proposal]

Apart from your heretical use of capital letters in
language-defined types, I wholeheartedly support this
notion.


> 2.    I agree that the printf holdover from C has lots of problems.  But
> even if it isn't replaced, and especially if it is, one of my pet gripes
> about most programming languages is that they make it hard for humans to
> read the values of large integers and non-exponentially notated floats.  For
> example, which of the following is easier to "get" a quick feel for the
> magnitude of
> 
>     The answer is 875639241357                        or
> 
>     The answer is 875,639,241,357

If printf were to be retained, I'd suggest:
printf( "The answer is %,d\n", my_large_integer );

-Russell B


March 22, 2002
"Russell Borogove" <kaleja@estarcion.com> wrote in message news:3C9B7C54.5050707@estarcion.com...
> Stephen Fuld wrote:
> > 2.    I agree that the printf holdover from C has lots of problems.  But even if it isn't replaced, and especially if it is, one of my pet gripes about most programming languages is that they make it hard for humans to read the values of large integers and non-exponentially notated floats.  For example, which of the following is easier to "get" a quick feel for the magnitude of
> >
> >     The answer is 875639241357                        or
> >
> >     The answer is 875,639,241,357
>
> If printf were to be retained, I'd suggest:
> printf( "The answer is %,d\n", my_large_integer );

That is quite a good idea.


March 22, 2002
"Stephen Fuld" <s.fuld.pleaseremove@att.net> wrote in message news:a7ftf8$1gpi$1@digitaldaemon.com...
> I am relatively new to this group.  I read the article in DDJ and have been lurking here for a few weeks to get a feel for the way things are done here.  I think D has great potential.
> I am making the following suggestions not from the perspective of converting programs from C or C++ (sometimes aptly called C double cross), but from the perspective of sometime in the future when D is the prevalent first language and C and C++ are relegated to legacy status. :-)  If any of these have been hashed out before, my apologies.

Glad to see you posting here.

> 1.    Variable type names.  I know that short, long, etc. are a C heritage, but they have and will lead to confusion.  There was confusion over how long int was when we went from 16 bit to 32 bit computers.  There was a lot of discussion in the C standards group over what to call 64 bit integers when they became prevalent.  I am quite sure that we will go through the same thing when 128 bit variables become common.  What will D call them? "LongLong", "DoubleLong", "Quad", "ExtraLong", "DoubleTallSkinnyDecaf"? :-)
> Given that you have already bitten the bullet and defined integers to be 8, 16, 32, etc. bits exactly (that is, you are ruling out support for systems with 36 bit words, etc.), why not just use names that reflect what they truly are?  That is, "Int16", "Int32", etc.  If you did that, is there any doubt what the type name of a future 128 bit integer is?  Of course, you would add the prefix U for unsigned.  Similarly, you would have "Float32", etc.
> I understand that this might be "jarring" to C programmers converting, but you could support the old forms as "deprecated" conversion aids.  Note that another advantage is that this also makes the potential future implementation of at least some "big num" stuff almost syntactically transparent.

I can argue that you can create a list of aliases,
    alias int int32;
etc.


> Along similar lines, and while we are discussing ease of reading of computer output by humans, consider the following.
>     Amount Due $         36.20                            or
>     Amount Due          $36.20
> The second is easier to read.  This could be trivially implemented by again enhancing the format string.  Currently, if you put a zero before the type specifier, it zero fills on the left instead of blank filling to the left of the value.  If you allowed a dollar sign in addition to the zero, it could indicate that the currency sign should float to the last non-blank position.

Internationalization of currency formatting is a real problem, but one I suggest is suited to a library class. Would you like to write one?

> 3.    I took the liberty of posting a link on the comp.arch newsgroup.  One poster noted that the run time model for D currently prohibits its running on 64 bit systems (more properly on systems that allow greater than 32 bit pointers).  Given the desire to keep the run time model constant across architectures, and given the relatively imminent implementation of 64 bit X86 architectures, with which you would want to be compatible, I think you should fix that.  Allowing a few more bytes of memory is a pretty trivial price to pay for not having to worry about this for a long time.

I don't know why it would be so limited. I'll check the newsgroup and see.

> Related to that, I think it would be useful to D if you, Walter, posted on comp.arch a request for comments and input.  There are a lot of people there who are quite knowledgeable about what language features allow or prevent compilers from generating code that runs well on the latest and next generations of processors, and who have lots of experience in high performance computing, floating point handling with all of its special cases, exception handling, etc.
> If you requested input on that stuff especially, as well as any other input, I think you would be vastly rewarded.

That's a great idea!


March 22, 2002
> I can argue that you can create a list of aliases,
>     alias int int32;

or even better:

alias int32 int;


March 22, 2002
"Serge K" <skarebo@programmer.net> wrote in message news:a7gd14$m0$1@digitaldaemon.com...
> > I can argue that you can create a list of aliases,
> >     alias int int32;
> or even better:
> alias int32 int;

I was afraid someone would point that out <g>.

I suppose I need to point out just why I didn't pick int32. It's purely an aesthetic one, I just don't like the look of declarations with int32, int16, etc. It's a little awkward for me to type, too, as I can touch type the letters but not the numbers.

Try a global search/replace on some source code with int->int32, char->int8, etc. Is the result pleasing to the eye?


March 23, 2002
Walter wrote:
> I suppose I need to point out just why I didn't pick int32. It's purely an
> aesthetic one, I just don't like the look of declarations with int32, int16,
> etc. It's a little awkward for me to type, too, as I can touch type the
> letters but not the numbers.
> 
> Try a global search/replace on some source code with int->int32, char->int8,
> etc. Is the result pleasing to the eye?

Perhaps not, but IMO far more expressive. So what are you
going to call the 128-bit type?

March 23, 2002
"Stephen Fuld" Wrote:
> "LongLong", "DoubleLong", "Quad", "ExtraLong", "DoubleTallSkinnyDecaf"?
:-)
> Given that you have already bitten the bullet and defined integers to be
8,
> 16, 32, etc. bits exactly (that is, you are ruling out support for systems with 36 bit words, etc.), why not just use names that reflect what they truly are.  That is, "Int16", Int32", etc.
I always use int1...4 and uint1...4 (with typedefs in a types.h include; I use the byte size instead of the bit size for shorter and easier to type names) in my C/C++ projects, so I completely agree.
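For illustration, that byte-count naming maps onto D roughly like this (a sketch only, using the alias style discussed earlier in the thread; a 64-bit int8/uint8 would follow the same pattern):

    alias byte   int1;     // sketch: byte-count names rather than bit-count names
    alias short  int2;
    alias int    int4;
    alias ubyte  uint1;
    alias ushort uint2;
    alias uint   uint4;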

>     The answer is 875639241357                        or
>     The answer is 875,639,241,357
Agreed, but in French ;-) (and also in other Romance languages) the use of ',' and '.' is reversed:
        La réponse est 875.639.241.357
so you have to think twice about internationalisation and possible confusion.
I also like the possibility of inserting '_' between digits in integer and floating point constants (but only AFTER the first digit, or after the digit following the '.', else it's ambiguous with identifiers).  This way, you can use 875_639_241_357 in your source code, and also 0x1234_5678_9ABC.

> One poster noted that the run time model for D currently prohibits its running on 64 bit systems (more properly on systems that allow greater than 32 bit pointers).
Another useful type: an intptr type, an integer guaranteed to be large enough to contain a pointer on the target platform.
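That guarantee could be written down in D roughly like this (a sketch; intptr is the name suggested above, the selection logic is only illustrative). D's size_t and ptrdiff_t already play this role for unsigned and signed offsets.

    // Sketch only: an integer type guaranteed to be wide enough to hold a
    // pointer on the target platform.
    static if ((void*).sizeof == 8)
        alias long intptr;
    else
        alias int  intptr;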



March 23, 2002
"Russell Borogove" <kaleja@estarcion.com> wrote in message news:3C9BD148.6030203@estarcion.com...
> Walter wrote:
> > I suppose I need to point out just why I didn't pick int32. It's purely an aesthetic one, I just don't like the look of declarations with int32, int16, etc. It's a little awkward for me to type, too, as I can touch type the letters but not the numbers.
> > Try a global search/replace on some source code with int->int32, char->int8, etc. Is the result pleasing to the eye?
> Perhaps not, but IMO far more expressive. So what are you going to call the 128-bit type?

cent?  (as kilo means 1024, cent should mean 128)
centint?
centurion? <g>


March 23, 2002
"Walter" <walter@digitalmars.com> wrote in message news:a7gb1q$30ej$2@digitaldaemon.com...
>
> "Stephen Fuld" <s.fuld.pleaseremove@att.net> wrote in message news:a7ftf8$1gpi$1@digitaldaemon.com...
> > I am relatively new to this group.  I read the article in DDJ and have been lurking here for a few weeks to get a feel for the way things are done here.  I think D has great potential.
> > I am making the following suggestions not from the perspective of converting programs from C or C++ (sometimes aptly called C double cross), but from the perspective of sometime in the future when D is the prevalent first language and C and C++ are relegated to legacy status. :-)  If any of these have been hashed out before, my apologies.
>
> Glad to see you posting here.

Thank you.  It seems like a friendly place.


>
> > 1.    Variable type names.

[snip]

> I can argue that you can create a list of aliases,
>     alias int int32;
> etc.

Sure, but then it would be my private practice.  I was arguing for better "hygiene" for all users.  :-)

>
>
> > Along similar lines, and while we are discussing ease of reading of computer output by humans, consider the following.
> >     Amount Due $         36.20                            or
> >     Amount Due          $36.20
> > The second is easier to read.  This could be trivially implemented by again enhancing the format string.  Currently, if you put a zero before the type specifier, it zero fills on the left instead of blank filling to the left of the value.  If you allowed a dollar sign in addition to the zero, it could indicate that the currency sign should float to the last non-blank position.
>
> Internationalization of currency formatting is a real problem, but one I suggest is suited to a library class. Would you like to write one?

I agree about the problem.  As for me helping to provide a solution, I may have a problem with that.  In what language are such libraries written?  My first language was Fortran, learned in 1969.  And, while I know others, by the time C came along and got popular, I was into system architecture and product strategy, so while I have managed projects written in C, I have never written in it and certainly wouldn't call myself a C programmer.  As for C++, I saw enough of it to say that I don't want to learn it at all.  There is some ratio of alphanumeric characters to special characters in a typical program that I consider a minimum for readability, and C++ is way below that minimum.  :-)


> > 3.    I took the liberty of posting a link on the comp.arch newsgroup.  One poster noted that the run time model for D currently prohibits its running on 64 bit systems (more properly on systems that allow greater than 32 bit pointers).  Given the desire to keep the run time model constant across architectures, and given the relatively imminent implementation of 64 bit X86 architectures, with which you would want to be compatible, I think you should fix that.  Allowing a few more bytes of memory is a pretty trivial price to pay for not having to worry about this for a long time.
>
> I don't know why it would be so limited.

The run time model has the pointer at offset 0 and the monitor at offset 4. This limits pointers to four bytes.  Other addresses similarly limit you.
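For illustration, one reading of that constraint as a layout (the field names below are guesses for the sketch, not the actual runtime's):

    // Illustration only: if the monitor field is pinned at byte offset 4,
    // the pointer at offset 0 cannot be wider than 32 bits.  Letting the
    // second field's offset track the pointer size removes the limit.
    struct ObjectHeader
    {
        void* vptr;      // offset 0
        void* monitor;   // offset 4 on a 32-bit target, 8 on a 64-bit target
    }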

> I'll check the newsgroup and see.

It is an interesting place too.

--
 - Stephen Fuld
   e-mail address disguised to prevent spam

