```On 4/14/12, F i L <witte2008@gmail.com> wrote:
>      auto f = 1.0f; // is float not Float

UFCS in 2.059 to the rescue:

struct Float
{
    float val;
}

@property Float f(float val) { return Float(val); }

void main()
{
    auto f = 1.0.f;
}
```
```On Saturday, 14 April 2012 at 18:02:57 UTC, Andrej Mitrovic wrote:
> On 4/14/12, F i L <witte2008@gmail.com> wrote:
>>      auto f = 1.0f; // is float not Float
>
> UFCS in 2.059 to the rescue:
>
> struct Float
> {
>     float val;
> }
>
> @property Float f(float val) { return Float(val); }
>
> void main()
> {
>     auto f = 1.0.f;
> }

You're a scholar and a gentleman!
```
```
On 14/04/12 18:38, F i L wrote:
> On Saturday, 14 April 2012 at 15:44:46 UTC, Jerome BENOIT wrote:
>>
>>
>> On 14/04/12 16:47, F i L wrote:
>>> Jerome BENOIT wrote:
>>>> Why would a compiler set `real' to 0.0 rather than 1.0, Pi, ...?
>>>
>>> Because 0.0 is the "lowest" (smallest, starting point, etc..)
>>
>> quid -infinity ?
>
> The concept of zero is less meaningful than -infinity. Zero is the logical starting place because zero represents nothing (mathematically)

Zero is not nothing in mathematics, on the contrary!

0 + x = x // neutral for addition
0 * x = 0 // absorbing for multiplication
0 / x = 0 if (x != 0) // idem
| x / 0 | = infinity if (x != 0)
0 / 0 = NaN // undefined

, which is in line with how pointers behave (only applicable to memory, not scale).

pointer values are also bounded.

>
>
>>> numerical value. Pi is the corner case and obviously has to be explicitly set.
>>>
>>> If you want to take this further, chars could even be initialized to spaces or newlines or something similar. Pointers/references need to be defaulted to null because they absolutely must equal an explicit value before use. Value types don't share this limitation.
>>>
>>
>> CHAR set are bounded, `real' are not.
>
> Good point, I'm not so convinced char should default to " ". I think there are arguments either way, I haven't given it much thought.
>
>
>>>> The more convenient default set certainly depends on the underlying mathematics,
>>>> and a compiler cannot (yet) understand the encoded mathematics.
>>>> NaN is certainly the very choice, as whatever the involved mathematics,
>>>> they will blow up sooner or later. And, from a practical point of view, blowing up is easy to trace.
>>>
>>> Zero is just as easy for the runtime/compiler to default to;
>>
>> The Fortran age is over.
>> The D compiler contains a lot of features that are not easy for the compiler to set up BUT are meant to ease coding.
>>
>>
>> And bugs can be introduced anywhere in the code, not just at definition.
>>
>> So the NaN approach discards one source of error.
>
> Sure, the question then becomes "does catching bugs introduced by inaccurately defining a variable outweigh the price of inconsistency and learning curve."  My opinion is No, expected behavior is more important.

From a numerical point of view, zero is not a good default (see above), and as such setting 0.0 as the default for real is not the expected behaviour.
Considering the NaN blow up behaviour, for numerical folk the expected behaviour is certainly setting NaN as the default for real.
Real numbers here are not meant for coders, but for numerical folks: D here applies a rule gained from the experience of numerical people.
So your opinion is fine, but you have misplaced the inconsistency: 0 is inaccurate here and NaN is accurate, not the contrary.

> Especially when I'm not sure I've ever heard of someone in C# having bugs that would have been helped by defaulting to NaN. I mean really, how does:
>
> float x; // NaN
> ...
> x = incorrectValue;
> ...
> foo(x); // first time x is used
>
> differ from:
>
> float x = incorrectValue;
> ...
> foo(x);
>
> in any meaningful way? Except that in this one case:
>
> float x; // NaN
> ...
> foo(x); // uses x, resulting in NaNs
> ...
> x = foo(x); // sets after first time x is used
>
> you'll get a "more meaningful" error message, which, assuming you didn't just write a ton of FP code, you'd be able to trace to its source faster.
>
> It just isn't enough to justify defaulting to NaN, IMO. I even think the process of hunting down bugs is more straightforward when defaulting to zero, because every numerical bug is pursued the same way, regardless of type. You don't have to remember that FP specifically causes this issue in only some cases.

For numerical work, because 0 behaves nicely most of the time, improperly initialized variables may go undetected because the output data can sound reasonable;
on the other hand, because NaN blows up, such detection is straightforward: the output will be a NaN output which will jump to your face very quickly.
This is a numerical issue, not a coding language issue. Personally, in my C code I have taken up the habit of initialising real numbers (doubles) with NaN:
in the GSL library there is a ready-to-use macro: GSL_NAN. (Concerning integers, I use extreme values such as INT_MIN, INT_MAX, SIZE_MAX, ...).
I would even say that D may go further by setting a kind of NaN for integers (and for chars).
```
```On Saturday, 14 April 2012 at 18:07:41 UTC, Jerome BENOIT wrote:
> On 14/04/12 18:38, F i L wrote:
>> On Saturday, 14 April 2012 at 15:44:46 UTC, Jerome BENOIT wrote:
>>> On 14/04/12 16:47, F i L wrote:
>>>> Jerome BENOIT wrote:
>>>>> Why would a compiler set `real' to 0.0 rather than 1.0, Pi, ...?
>>>>
>>>> Because 0.0 is the "lowest" (smallest, starting point, etc..)
>>>
>>> quid -infinity ?
>>
>> The concept of zero is less meaningful than -infinity. Zero is the logical starting place because zero represents nothing (mathematically)
>
> zero is not nothing in mathematics, on the contrary !
>
> 0 + x = x // neutral for addition
> 0 * x = 0 // absorbing for multiplication
> 0 / x = 0 if (x != 0) // idem
> | x / 0 | = infinity if (x != 0)

Just because mathematical equations behave differently with zero doesn't change the fact that zero _conceptually_ represents "nothing".

It's the default for practical reasons. Not for mathematics' sake, but for the sake of convenience. We don't all study higher mathematics, but we're all taught to count since we were toddlers. Zero makes sense as the default, and this is compounded by the fact that int *must* be zero.

> 0 / 0 = NaN // undefined

Great! Yet another reason to default to zero. That way, "0 / 0" bugs have a very distinct fingerprint.

> , which is in line with how pointers behave (only applicable to memory, not scale).
>
> pointer values are also bounded.

I don't see how that's relevant.

> Considering the NaN blow up behaviour, for numerical folk the expected behaviour is certainly setting NaN as the default for real.
> Real numbers here are not meant for coders, but for numerical folks:

Of course FP numbers are meant for coders... they're in a programming language. They are used by coders, and not every coder that uses FP math *has* to be well trained in the finer points of mathematics simply to use a number that can represent fractions in a conceptually practical way.

> D here applies a rule gained from the experience of numerical people.

I'm sorry, I can't hear you over the sound of how popular Java and C# are. Convenience is about productivity, and that's largely influenced by how much prior knowledge someone needs before being able to understand a feature's behavior.

(ps. if you're going to use Argumentum ad Verecundiam, I get to use Argumentum ad Populum).

> For numerical work, because 0 behaves nicely most of the time, improperly initialized variables may go undetected because the output data can sound reasonable;
> on the other hand, because NaN blows up, such detection is straightforward: the output will be a NaN output which will jump to your face very quickly.

I gave examples which address this. This behavior is only [debatably] beneficial in corner cases on FP numbers specifically. I don't think that's sufficient justification in light of the reasons I gave above.

> This is a numerical issue, not a coding language issue.

No, it's both. We're not Theoretical physicists, we're Software Engineers writing a very broad scope of different programs.

> Personally, in my C code I have taken up the habit of initialising real numbers (doubles) with NaN:
> in the GSL library there is a ready-to-use macro: GSL_NAN. (Concerning integers, I use extreme values such as INT_MIN, INT_MAX, SIZE_MAX, ...).

Only useful because C defaults to garbage.

> I would even say that D may go further by setting a kind of NaN for integers (and for chars).

You may get your wish if ARM64 takes over.
```
```F i L wrote:

> You can't force new D programmers to follow a 'guideline'

By exposing a syntax error for every missed explicit initialization, the current guideline would be changed into an insurmountable barrier, forcing every new D programmer to follow the 'guideline'.

-manfred
```
```On 4/14/12, Jerome BENOIT <g6299304p@rezozer.net> wrote:
> I would even say that D may go further by setting a kind of NaN for integers.

That's never going to happen.
```
```On 14/04/12 16:52, F i L wrote:
>> The initialization values chosen are also determined by the underlying
>> hardware implementation of the type. Signalling NaNs
>> (http://en.wikipedia.org/wiki/NaN#Signaling_NaN) can be used with floats
>> because they are implemented by the CPU, but in the case of integers or
>> strings there aren't really equivalent values.
>
> I'm sure the hardware can just as easily signal zeros.

The point is not that the hardware can't deal with floats initialized to zero. The point is that the hardware CAN'T support an integer equivalent of NaN.  If it did, D would surely use it.

> Like I said before, this is backwards thinking. At the end of the day, you
> _can_ use default values in D. Given that ints are defaulted to usable values,
> FP Values should be as well for the sake of consistency and convenience.

Speaking as a new user (well, -ish), my understanding of D is that its design philosophy is that _the easy thing to do should be the safe thing to do_, and this concept is pervasive throughout the design of the whole language.

So, ideally (as bearophile says) you'd compel the programmer to explicitly initialize variables before using them, or explicitly specify that they are not being initialized deliberately.  Enforcing that may be tricky (most likely not impossible, but tricky, and there are bigger problems to solve for now), so the next best thing is to default-initialize variables to something that will scream at you "THIS IS WRONG!!" when the program runs, and so force you to correct the error.

For floats, that means NaN.  For ints, the best thing you can do is zero.  It's a consistent decision -- not consistent as you frame it, but consistent with the language design philosophy.

> You can't force new D programmers to follow a 'guideline' no matter how loudly the documentation shouts it

No, but you can drop very strong hints as to good practice.  Relying on default values for variables is bad programming.  The fact that it is possible with integers is a design fault forced on the language by hardware constraints.  As a language designer, do you compound the fault by making floats also init to 0, or do you enforce good practice in a way which will probably make the user reconsider any assumptions they may have made for ints?

Novice programmers need support, but support should not extend to pandering to bad habits which they would be better off unlearning (or never learning in the first place).
```
```
On 14/04/12 20:51, F i L wrote:
> On Saturday, 14 April 2012 at 18:07:41 UTC, Jerome BENOIT wrote:
>> On 14/04/12 18:38, F i L wrote:
>>> On Saturday, 14 April 2012 at 15:44:46 UTC, Jerome BENOIT wrote:
>>>> On 14/04/12 16:47, F i L wrote:
>>>>> Jerome BENOIT wrote:
>>>>>> Why would a compiler set `real' to 0.0 rather than 1.0, Pi, ...?
>>>>>
>>>>> Because 0.0 is the "lowest" (smallest, starting point, etc..)
>>>>
>>>> quid -infinity ?
>>>
>>> The concept of zero is less meaningful than -infinity. Zero is the logical starting place because zero represents nothing (mathematically)

>>
>> zero is not nothing in mathematics, on the contrary !
>>
>> 0 + x = x // neutral for addition
>> 0 * x = 0 // absorbing for multiplication
>> 0 / x = 0 if (x != 0) // idem
>> | x / 0 | = infinity if (x != 0)
>
> Just because mathematical equations behave differently with zero doesn't change the fact that zero _conceptually_ represents "nothing".

You are totally wrong: here we are dealing with a key concept of group theory.

>
> It's the default for practical reasons. Not for mathematics' sake, but for the sake of convenience. We don't all study higher mathematics, but we're all taught to count since we were toddlers. Zero makes sense as the default, and this is compounded by the fact that int *must* be zero.
>

The convenience at stake here is numerical practice, not coding practice: that is the point:
for numerical folks, zero is a very bad choice; NaN is a very good one.

>> 0 / 0 = NaN // undefined
>
> Great! Yet another reason to default to zero. That way, "0 / 0" bugs have a very distinct fingerprint.

While the others (which are by far more likely) are bypassed: here you are making a point against yourself:

NaN + x = NaN
NaN * x = NaN
x / NaN = NaN
NaN / x = NaN

>
>
>> , which is in line with how pointers behave (only applicable to memory, not scale).
>>
>> pointer values are also bounded.
>
> I don't see how that's relevant.

Because then zero is a meaningful default for pointers.

>
>
>> Considering the NaN blow up behaviour, for numerical folk the expected behaviour is certainly setting NaN as the default for real.
>> Real numbers here are not meant for coders, but for numerical folks:
>
> Of course FP numbers are meant for coders... they're in a programming language. They are used by coders, and not every coder that uses FP math *has* to be well trained in the finer points of mathematics simply to use a number that can represent fractions in a conceptually practical way.
>
The above are not finer points, but basic ones.
Otherwise, float and double would be integers rather than fractions.

>
>> D here applies a rule gained from the experience of numerical people.
>
> I'm sorry I can't hear you over the sound of how popular Java and C# are.

Sorry, I can't hear you over the sound of mathematics.

> Convenience is about productivity, and that's largely influenced by how much prior knowledge someone needs before being able to understand a feature's behavior.

Floating-point calculation basics are easy to understand.

>
> (ps. if you're going to use Argumentum ad Verecundiam, I get to use Argumentum ad Populum).

So forget coding!

>
>
>> For numerical work, because 0 behaves nicely most of the time, improperly initialized variables may go undetected because the output data can sound reasonable;
>> on the other hand, because NaN blows up, such detection is straightforward: the output will be a NaN output which will jump to your face very quickly.
>
> I gave examples which address this. This behavior is only [debatably] beneficial in corner cases on FP numbers specifically. I don't think that's sufficient justification in light of the reasons I gave above.

This is more than sufficient because the authority for floating point (aka numerical) stuff is held by numerical folks.

>
>
>> This is a numerical issue, not a coding language issue.
>
> No, it's both.

So a choice has to be made: the mature choice is the NaN approach.

> We're not Theoretical physicists

I am

> we're Software Engineers writing a very broad scope of different programs.

Does floating-point calculation belong to the broad scope?
Do engineers rely on numerical mathematicians' skills when they code numerical stuff, or on pre-calculus books for grocers?

>
>
>> Personally, in my C code I have taken up the habit of initialising real numbers (doubles) with NaN:
>> in the GSL library there is a ready-to-use macro: GSL_NAN. (Concerning integers, I use extreme values such as INT_MIN, INT_MAX, SIZE_MAX, ...).
>
> Only useful because C defaults to garbage.

It can be initialized to 0.0 as well.
>
>
>> I would even say that D may go further by setting a kind of NaN for integers (and for chars).
>
> You may get your wish if ARM64 takes over.
```
```Forums are messing up, so I'll try and respond in sections.

```
```Jerome BENOIT wrote:
>> Just because mathematical equations behave differently with zero doesn't change the fact that zero _conceptually_ represents "nothing".
>
> You are totally wrong: here we are dealing with a key concept of group theory.

Zero is the starting place for any (daily used) scale. Call it
what you want, but it doesn't change the fact that we *all*
understand "zero" in a _basic_ way. And that IS my point here. It
is natural to us to start with zero because that's what the
majority of us have done throughout our entire lives. I respect
the fact that you're a Theoretical Physicist and that your
experience with math must be much different than mine. I'm also
convinced, because of that fact, that you're more part of the
corner case of programmers rather than the majority.

>> It's the default for practical reasons. Not for mathematics' sake, but for the sake of convenience. We don't all study higher mathematics, but we're all taught to count since we were toddlers. Zero makes sense as the default, and this is compounded by the fact that int *must* be zero.
>>
>
> The convenience at stake here is numerical practice, not coding practice: that is the point:
> for numerical folks, zero is a very bad choice; NaN is a very good one.

I disagree. Coding is much broader than using code to write
mathematical equations, and the default should reflect that. And
even when writing equations, explicitly initializing variables to
NaN as a debugging practice makes more sense than removing the
convenience of having a usable default in the rest of your code.

>>> 0 / 0 = NaN // undefined
>>
>> Great! Yet another reason to default to zero. That way, "0 / 0" bugs have a very distinct fingerprint.
>
> While the others (which are by far more likely) are bypassed: here you are making a point against yourself:
>
> NaN + x = NaN
> NaN * x = NaN
> x / NaN = NaN
> NaN / x = NaN

This was intended more as tongue-in-cheek than as an actual
argument. That is: debugging incorrectly-set values is actually
made more complicated by having NaN always propagate in only some
areas. I gave code examples before showing how variables can just
as easily be incorrectly set directly after initialization, and
will therefore leave a completely different fingerprint for a
virtually identical issue.

>>> , which is in line with how pointers behave (only applicable to memory, not scale).
>>>
>>> pointer values are also bounded.
>>
>> I don't see how that's relevant.
>
> Because then zero is a meaningful default for pointers.

I'm not trying to be difficult here, but I still don't see what
you're getting at. Pointers are _used_ differently than values,
so a more meaningful default is expected.

>>> Considering the NaN blow up behaviour, for numerical folk the expected behaviour is certainly setting NaN as the default for real.
>>> Real numbers here are not meant for coders, but for numerical folks:
>>
>> Of course FP numbers are meant for coders... they're in a programming language. They are used by coders, and not every coder that uses FP math *has* to be well trained in the finer points of mathematics simply to use a number that can represent fractions in a conceptually practical way.
>>
> The above are not finer points, but basic ones.
> Otherwise, float and double would be integers rather than fractions.

I don't understand what you wrote. Typo?

NaN as default is purely a debugging feature. It's designed so
that you don't miss setting a [floating point] variable (but can
still incorrectly set it). My entire argument so far has been
about the expected behavior of default values being usable vs.
debugging features.

>>> D here applies a rule gained from the experience of numerical people.
>>
>> I'm sorry I can't hear you over the sound of how popular Java and C# are.
>
> Sorry, I can't hear you over the sound of mathematics.

That doesn't make any sense... My bringing up Java and C# is
because they're both immensely popular, modern languages with
zero-defaulting FP variables. If D's goal is to become more
mainstream, it could learn from their successful features, which
are largely based around their convenience.

We already have great unittest debugging features in D (much
better than C#); we don't need D to force us to debug in
impractical areas.

> Convenience is about productivity, and that's largely influenced by how much prior knowledge someone needs before being able to understand a feature's behavior.
>
> Floating-point calculation basics are easy to understand.

Sure. Not having to remember them at all (for FP variables only)
is even easier.

>>> For numerical work, because 0 behaves nicely most of the time, improperly initialized variables may go undetected because the output data can sound reasonable;
>>> on the other hand, because NaN blows up, such detection is straightforward: the output will be a NaN output which will jump to your face very quickly.
>>
>> I gave examples which address this. This behavior is only [debatably] beneficial in corner cases on FP numbers specifically. I don't think that's sufficient justification in light of the reasons I gave above.
>
> This is more than sufficient because the authority for floating point (aka numerical) stuff is held by numerical folks.

Again, it's about debugging vs convenience. The "authority"
should be what the majority _expect_ a variable's default to be.
Given the fact that variables are created to be used, and that
Int defaults to zero, and that zero is used in *everyone's* daily
lives (conceptually), I think usable values (and zero) makes more
sense.

Default to NaN explicitly, as a debugging technique, when you're
writing mathematically sensitive algorithms.

>>> This is a numerical issue, not a coding language issue.
>>
>> No, it's both.
>
> So a choice has to be made: the mature choice is the NaN approach.

The convenient choice is defaulting to usable values. The logical
choice for the default is zero. NaN is for debugging, which
should be explicitly defined.

>  We're not Theoretical physicists
>
> I am

That commands an amount of respect from me, but it also increases
my belief that your perspective on this issue is skewed. D
should be as easy to use and understand as possible without
sacrificing efficiency.

>  we're Software Engineers writing a very broad scope of different programs.
>
> Does floating-point calculation belong to the broad scope?

Yes. You only need an elementary understanding of math to use a
fraction.

> Do engineers rely on numerical mathematicians' skills when they code numerical stuff, or on pre-calculus books for grocers?

Depends on what they're writing. Again, it's not a mathematical
issue, but a debugging vs convenience one.

>>> Personally, in my C code I have taken up the habit of initialising real numbers (doubles) with NaN:
>>> in the GSL library there is a ready-to-use macro: GSL_NAN. (Concerning integers, I use extreme values such as INT_MIN, INT_MAX, SIZE_MAX, ...).
>>
>> Only useful because C defaults to garbage.
>
> It can be initialized by 0.0 as well.

My point was that in C you're virtually forced to explicitly
initialize your values, because they're unreliable otherwise. D
doesn't suffer this, and could benefit from a more usable default.
```