September 04, 2001 - numerical code heaviness
Recently I read a flame war on a forum, following some news about Numerical Recipes. Most people say that C is lousy for numerical computing, so Fortran 90 is the happy choice. Even Java is declared too heavy because of its lack of operator overloading.

So what can D do? Does D need an infinite precision integer type (using 'inf' for infinity in the range) and a matrix type? Or should we leave numerical calculation to Fortran?

nicO
September 05, 2001 - Re: numerical code heaviness
Posted in reply to nicO

"nicO" <nicolas.boulay@ifrance.com> wrote in message news:3B957C4F.52C4D66A@ifrance.com...
> Recently I read a flame war on a forum, following some news about Numerical Recipes. Most people say that C is lousy for numerical computing, so Fortran 90 is the happy choice. Even Java is declared too heavy because of its lack of operator overloading.
>
> So what can D do? Does D need an infinite precision integer type (using 'inf' for infinity in the range) and a matrix type? Or should we leave numerical calculation to Fortran?

D's support for floating point will be better than C99's. For example, real numbers are by default initialized to NaNs, instead of to 0 or some random bit pattern. D doesn't have an infinite precision built-in type or a built-in matrix type.
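A minimal sketch, not from the thread, of the default initialization Walter describes, written in present-day D; the variable name is made up:

```d
import std.stdio;

void main()
{
    double balance;              // never explicitly initialized
    // Floating point variables in D default to NaN rather than 0, and
    // NaN is the only value that compares unequal to itself.
    assert(balance != balance);  // passes: balance is NaN
    writeln(balance);            // prints "nan"
}
```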
October 29, 2001 - Re: numerical code heaviness
Posted in reply to Walter

"Walter" <walter@digitalmars.com> wrote in message news:9n3s54$oqj$1@digitaldaemon.com...
> D's support for floating point will be better than C99's. For example, real numbers are by default initialized to NaNs, instead of to 0 or some random bit pattern.

I'm curious what the rationale for this decision was. It seems to make more sense to default-initialize floats to zero, not NaN, if for no other reason than that the ints are all default-initialized to zero. How would you go about specifying a NaN float literal, anyway?

> D doesn't have an infinite precision built-in type or a built-in matrix type.

I think the lack of either a matrix type or a way to make one ourselves (operator overloading and member functions on structs) will make this language unsuitable for the computer graphics field. I don't see a big need for infinite precision math; I'd rather expose the machine's hardware capabilities.

Sean
October 29, 2001 - Re: numerical code heaviness
Posted in reply to Sean L. Palmer

"Sean L. Palmer" wrote:
> How would you go about specifying a NaN float literal, anyway?

In C, you have to platform-specifically construct it, bitwise, in integer buffers, then cast. In D, I don't see a nan keyword, so I imagine you'd have to do it much the same way.

-RB
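A short sketch of the bitwise construction Russell describes, written here in D rather than C and assuming IEEE 754 32-bit floats; the bit pattern and names are illustrative, not from the thread:

```d
import std.stdio;

void main()
{
    // Assemble a quiet NaN by hand: sign 0, exponent all ones,
    // non-zero mantissa.  This mirrors the C approach Russell mentions.
    uint bits = 0x7FC0_0000;
    float handMade = *cast(float*) &bits;

    writeln(handMade);               // prints "nan"
    writeln(handMade != handMade);   // true: NaN compares unequal to itself
}
```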
October 31, 2001 - Re: numerical code heaviness
Posted in reply to Russell Borogove

Russell Borogove wrote:
> "Sean L. Palmer" wrote:
> > How would you go about specifying a NaN float literal, anyway?
>
> In C, you have to platform-specifically construct it, bitwise, in integer buffers, then cast. In D, I don't see a nan keyword, so I imagine you'd have to do it much the same way.

My error. The D spec shows the "float.nan" construct. http://www.digitalmars.com/d/property.html

-RB
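Given the float.nan property that the linked spec page documents, a NaN "literal" needs no bit-twiddling in D; a minimal sketch, not from the thread:

```d
import std.stdio;

void main()
{
    // float.nan (and double.nan, real.nan) are compile-time properties,
    // so a NaN value can be written directly.
    float f = float.nan;
    double d = double.nan;

    writeln(f != f);   // true: NaN never compares equal to itself
    writeln(d != d);   // true
}
```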
January 01, 2002 - Re: numerical code heaviness
Posted in reply to Sean L. Palmer

"Sean L. Palmer" <spalmer@iname.com> wrote in message news:9ri9ss$1t33$1@digitaldaemon.com...
> "Walter" <walter@digitalmars.com> wrote in message news:9n3s54$oqj$1@digitaldaemon.com...
> > D's support for floating point will be better than C99's. For example, real numbers are by default initialized to NaNs, instead of to 0 or some random bit pattern.
>
> I'm curious what the rationale for this decision was. It seems to make more sense to default-initialize floats to zero, not NaN, if for no other reason than that the ints are all default-initialized to zero.

By defaulting them to NaN, it forces the programmer to initialize them to something intended (as NaNs will propagate through to any final result).
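To make the propagation argument concrete, here is a small made-up D example (the struct and field names are mine, not from the thread): a forgotten initialization poisons the final result instead of silently contributing a zero.

```d
import std.stdio;

struct Sample
{
    double weight;         // oops: never assigned anywhere
    double value = 2.0;
}

void main()
{
    Sample s;
    // Had weight silently defaulted to 0, the bug would hide inside a
    // plausible-looking total.  Because it defaults to NaN, the NaN
    // propagates through the arithmetic and the result is obviously wrong.
    double total = s.weight * s.value + 10.0;
    writeln(total);        // prints "nan", flagging the missing initialization
}
```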
January 01, 2002 - Re: numerical code heaviness
Posted in reply to Walter

Walter wrote:
> "Sean L. Palmer" <spalmer@iname.com> wrote in message news:9ri9ss$1t33$1@digitaldaemon.com...
> > "Walter" <walter@digitalmars.com> wrote in message news:9n3s54$oqj$1@digitaldaemon.com...
> > > D's support for floating point will be better than C99's. For example, real numbers are by default initialized to NaNs, instead of to 0 or some random bit pattern.
> >
> > I'm curious what the rationale for this decision was. It seems to make more sense to default-initialize floats to zero, not NaN, if for no other reason than that the ints are all default-initialized to zero.
>
> By defaulting them to NaN, it forces the programmer to initialize them to something intended (as NaNs will propagate through to any final result).
And if integers had the equivalent of NaN, we'd be using that as well. Same goes for infinity. And the same applies to booleans, where some logic systems have "true", "false" and "undefined".

More and more we need our numeric representation systems (and all type systems, for that matter) to contain additional state information, so we may then gain greater confidence in the results of calculations of all kinds. The notion of setting and using error values, or throwing exceptions, adds far too much "baggage" to the system, and is thus often ignored, or at least very much underused.

So, our floats can tell us "I am not a valid float value" with several shades of meaning. But conventional integers and booleans lack this ability.

IMHO, this is the main drive toward "pure" OO type systems (including "typeless" type systems), where all kinds of "other" information may be bundled as part of the "fundamental type". Every type should have an associated state field with values like "valid", "invalid", and any other extreme states that may need to be represented. (From this perspective, NaN is a kluge! But it is a huge step in the right direction.)

Such state information needs to be part of the type itself, and not a part of some completely separate error or exception system. We see this happening with all higher types in all "modern" languages, and the level of this support has been steadily percolating down the type hierarchy. Consider character strings as an ideal example. D has decided to bite the bullet and make "smarter strings" part of the language.

In the math domain this support can be pushed all the way to the hardware level. While ALUs have always had various condition codes to reflect the status of the result of operations, newer CPU/FPU/ALU architectures have independent condition code bits for each and every register. I'd rather not wait for the hardware to force the software to support "smarter" fundamental types. All "math" operations in "modern" high-level languages should use smarter fundamental types.

Objections to such systems arise from two primary camps: "bit-twiddling" and "pointer math". Bit-twiddling should be implemented via bit vectors or some other form of collected bits, and not overlaid on the numeric type system. And "pointer math" needs to be part of the "pointer type", and NOT overlaid on the rest of the integer numeric system. Explicit casting can be used to convert between the different domains (though it should only be needed to interface with external environments and systems that lack a robust type system). And yes, they are different domains!

Can this be done efficiently? Efficiently enough to support programming to the "bare metal"? In the era of multi-gigahertz CPUs, I'm certain the answer is "yes". There will always be resource-starved domains where "smart types" will not be applicable, such as the 64KB address space of an 8-bit processor. For such uses we will always have "legacy" languages such as C, and even assembler.

For everything else, including D, I'd very much like to see a "sane" type system top to bottom, especially where "fundamental" numeric types are concerned. We should not be forced to use heavyweight error and exception systems, or tortuous explicit program logic, to support "problems" encountered when using "fundamental" types! The type system itself should provide more help in this area.

And that's my $0.02. What's yours?

-BobC
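A purely hypothetical sketch of the "integer with a NaN equivalent" Bob describes, written in present-day D syntax; nothing like this exists in the language, and the type and its members are invented here for illustration:

```d
// A made-up "checked int": an integer carrying its own validity state,
// roughly analogous to NaN for floats.
struct CheckedInt
{
    int  value;
    bool valid = false;    // default state is "invalid", like float's NaN default

    this(int v) { value = v; valid = true; }

    CheckedInt opBinary(string op : "+")(CheckedInt rhs) const
    {
        CheckedInt result;
        // Invalidity propagates through arithmetic, the way NaN does.
        if (valid && rhs.valid)
        {
            result.value = value + rhs.value;
            result.valid = true;
        }
        return result;
    }
}

unittest
{
    auto a = CheckedInt(3);
    CheckedInt b;                  // never given a value
    assert(!(a + b).valid);        // the "not a valid int" state propagates
}
```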
January 01, 2002 - Re: numerical code heaviness
Posted in reply to Robert W. Cunningham

Robert W. Cunningham wrote:
> (Lots of stuff on additional state on fundamental types snipped)
> Can this be done efficiently? Efficiently enough to support programming to the "bare metal"? In the era of multi-gigahertz CPUs, I'm certain the answer is "yes". There will always be resource-starved domains where "smart types" will not be applicable, such as the 64KB address space of an 8-bit processor. For such uses we will always have "legacy" languages such as C, and even assembler.

As a console game programmer, I find that no matter how big the system gets, the demands of some applications will leave you resource-starved - be it a 64K address space on an 8-bit processor like the original Nintendo system, or 32MB on a 32/64-bit processor such as the PlayStation 2. I want the convenience of some of D's constructs (okay, actually, I want the dynamic arrays and the associative arrays, and anything else can go hang) without giving up too much efficiency. In fact, I'm not going to be able to use D for realtime games for the foreseeable future, because of the GC time hit.

> For everything else, including D, I'd very much like to see a "sane" type system top to bottom, especially where "fundamental" numeric types are concerned. We should not be forced to use heavyweight error and exception systems, or tortuous explicit program logic, to support "problems" encountered when using "fundamental" types! The type system itself should provide more help in this area.

It has to be optional. Period. int32 for a raw integer, Int32Object for a smart type that supports introspection and whatnot. Otherwise, the overhead of allocating a bunch of 'em in an array is unacceptable.

> And that's my $0.02. What's yours?

There it is. :)

-RB
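The cost difference Russell is worried about can be sketched in present-day D (the type names are invented for illustration): an array of struct wrappers is laid out inline with no per-element allocation, while an array of class objects needs one heap allocation per element.

```d
struct Int32         // value semantics: stored inline, no per-element allocation
{
    int value;
}

class Int32Object    // reference semantics: each element is a separate heap object
{
    int value;
    bool valid;
    this(int v) { value = v; valid = true; }
}

void main()
{
    // One contiguous allocation holding 1000 ints.
    auto raw = new Int32[1000];

    // 1000 references, each of which must be allocated and initialized separately.
    auto smart = new Int32Object[1000];
    foreach (ref e; smart)
        e = new Int32Object(0);
}
```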
January 01, 2002 - "Type" versus "Class"
Posted in reply to Robert W. Cunningham

> IMHO, this is the main drive toward "pure" OO type systems (including "typeless" type systems), where all kinds of "other" information may be bundled as part of the "fundamental type". Every type should have an associated state field with values like "valid", "invalid", and any other extreme states that may need to be represented. (From this perspective, NaN is a kluge! But it is a huge step in the right direction.)

I think it's important to make a comment here.

It will help the discussion if people start to distinguish between "type" and "class" (or, as it is sometimes called, "run time type"). "Type" is a compile-time notion: syntactic elements in the language have a "type". "Class" is a run-time notion: values at run time have a "class" (or, if you prefer, a "run time type"). The two notions are fundamentally different.

In early programming languages there was a one-to-one correspondence between compile-time types and run-time types, so it made sense to equate the two. In newer languages, and in particular any language that includes some form of inheritance, there is no longer a simple correspondence between the two notions. Thus it has become important to distinguish between them.
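A small D sketch of the distinction (the class names are invented for illustration): the static type of a reference is fixed at compile time, while the class of the value it refers to is a run-time fact.

```d
import std.stdio;

class Animal {}
class Dog : Animal {}

void main()
{
    Animal a = new Dog();          // static type: Animal; run-time class: Dog

    // The compile-time type is what the compiler checks against.
    static assert(is(typeof(a) == Animal));

    // The run-time class is a property of the value, discovered dynamically.
    writeln(typeid(a));            // prints the Dog class, not Animal
    assert(cast(Dog) a !is null);  // the downcast succeeds: the value really is a Dog
}
```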
January 02, 2002 - Re: numerical code heaviness
Posted in reply to Robert W. Cunningham

IMHO, numerical operations that result in a NaN ought to throw an exception. NaNs are errors, plain and simple. By default, we want numbers to mean something, even if it's 'zilch'. Zero. Zip. Nada. More similar to the way ints work. If you're going to specify a default, have it be something useful, not something that forces you to override the default. What good is the bleeping default then? Just make it a compile-time error not to explicitly initialize a float, if that's what you're after.

Sean

"Robert W. Cunningham" <rwc_2001@yahoo.com> wrote in message news:3C323036.C7E6802F@yahoo.com...
> [full text of Robert's January 01 post, quoted above, snipped]
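Something close to Sean's wish did eventually become possible, though long after this thread: std.math's FloatingPointControl can unmask the hardware "invalid operation" trap, so operations that would quietly produce a NaN fault instead. This is a hardware trap rather than a D exception, it is platform-dependent, and the example below is a sketch under those assumptions:

```d
import std.math : FloatingPointControl;
import std.stdio;

void main(string[] args)
{
    // Ask the FPU to trap on "invalid operation" -- the operations that
    // would otherwise quietly produce a NaN.
    FloatingPointControl fpctrl;
    fpctrl.enableExceptions(FloatingPointControl.invalidException);

    double zero = args.length - 1;   // 0.0 for a plain invocation; kept runtime-dependent
    double x = zero / zero;          // faults here instead of yielding a quiet NaN
    writeln(x);                      // not reached when the trap fires
}
```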