On 8/19/22 9:01 PM, Walter Bright wrote:
> On 8/19/2022 12:09 PM, Steven Schveighoffer wrote:
> > But it defaults to a completely useless value (NaN).
> It is not useless. NaN's have many uses, one of which is to not have silent bugs where you forgot to initialize a double.
Knowing that somewhere, in some code, someone didn't initialize a double, is not useful. And that's if you notice it.
In fact, NaN creates silent bugs.
> > This is unexpected, and commonly leads to hours of head-scratching (see Adam Ruppe's 2020 live coding session, where he couldn't figure out for much of the stream why his game wasn't working).
> It's fewer hours than tracking down why it is 0 instead of 6, because 0 doesn't leave a trail.
That's not what happens. You see, when you do DrawSquare(x, y), nothing happens. No exception thrown, no "Bad Square" drawn to the screen, it's like your function didn't get called. You start questioning just about every other aspect of your code (Am I calling the function? Is the library calling my function? Is there a bug in the library?). You don't get any indication that x is NaN. Whereas, if it's zero, and that's wrong, you see a square in the wrong spot, and fix it.
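Something like this sketch shows the failure mode (drawSquare and its skip-on-NaN behavior are made up for illustration, but that's effectively what drawing code does with a non-finite coordinate):

```d
import std.math : isNaN;
import std.stdio : writefln;

// Hypothetical stand-in for the library call in the story above.
void drawSquare(double x, double y)
{
    // Drawing code typically clips or skips non-finite coordinates,
    // so a NaN position simply draws nothing: no exception, no output.
    if (isNaN(x) || isNaN(y))
        return;
    writefln("square at (%s, %s)", x, y);
}

void main()
{
    double x;          // never assigned -- defaults to NaN, not 0
    double y = 10;

    x += 5;            // NaN + 5 is still NaN; the mistake propagates silently
    drawSquare(x, y);  // prints nothing, throws nothing
}
```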
In other words, NaN is silent. You can't even assert(x != double.init). You have to use an esoteric function isNaN for that.
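A tiny example of why the obvious equality check can't work (NaN compares unequal to everything, including itself):

```d
import std.math : isNaN;

void main()
{
    double x;   // x is double.init, i.e. NaN

    // This does NOT catch the uninitialized value: NaN != NaN is true,
    // so the assertion happily passes even though x was never assigned.
    assert(x != double.init);

    // The dedicated predicate is the only reliable check.
    assert(isNaN(x));
}
```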
But all the code that does expect 0 to be the default would simply work. So there are upsides on both fronts -- the silent nature of NaN bugs goes away, and your code no longer has to assign a value to every float buried in a struct.
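For instance (Particle here is just a made-up struct for illustration), today the integer member is usable out of the box while the floating-point member silently stays NaN until you remember to assign it:

```d
import std.math : isNaN;

struct Particle
{
    int count;      // defaults to 0 -- usable as-is
    double weight;  // defaults to NaN -- must be assigned before any arithmetic
}

void main()
{
    Particle p;
    p.count += 1;    // works: 0 + 1 == 1
    p.weight += 1;   // compiles, but NaN + 1 is still NaN

    assert(p.count == 1);
    assert(isNaN(p.weight));  // the "forgot to assign it" state is still there
}
```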
> > D used default values to prevent errors from not initializing values, but default initialization to a very commonly-expected value turns out to be incredibly useful.
> I've lost days debugging issues caused by forgetting to initialize a variable.
I mostly do not initialize variables when I know the default value is correct.
But even having NaN as a default is classes above C, where the value not only might be incorrect, it can be outright garbage. That can cause days of searching.
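For contrast, in D you have to opt into the C behavior explicitly with a void initializer -- a quick sketch:

```d
void main()
{
    double a;         // D default: NaN -- wrong, but predictable and detectable
    double b = void;  // explicit opt-out: uninitialized, like a C local

    // `a` is always NaN. `b` is whatever bytes happened to be on the stack,
    // so reading it gives different garbage from run to run -- the C-style
    // failure mode described above.
}
```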
> > The design choices for float/double were made based on the fact that a value that means "this isn't initialized" existed. It didn't for int, so meh, 0 was chosen. Walter could easily have just chosen int.min or something, and then we possibly never would have gotten used to it. But now we are used to it, so it has become irksome that doubles/floats are the outlier here.
> If there was a NaN value for int, I would have used it as the default. int.min is not really a NaN.
Why not just require initialization? I'm mostly curious -- I don't think it's possible to do now, but why didn't you do that originally?
> The NaN value for char is 0xFF, and for pointers is null. Both work well. (0xFF is specified as an illegal code point in Unicode.) It's integers that are the outliers :-)
The default for char is odd, but mostly esoteric. It crops up with things like static char arrays, which are not common. Honestly, having a default of 0 would be better there too, because of C's requirement to null-terminate. But that value isn't terrible (and it doesn't have the problems of NaN, e.g. ++ on it won't just leave it at 0xFF).
But the default for pointers being null is perfect. Using a null pointer is immediately obvious since it crashes the program where it is used.
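A quick demonstration of both of those defaults:

```d
void main()
{
    char c;             // char.init is 0xFF, an invalid UTF-8 code unit
    assert(c == 0xFF);
    ++c;                // unlike NaN, the bad value doesn't stick: it wraps to 0
    assert(c == 0);

    int* p;             // pointers default to null
    assert(p is null);
    // *p = 1;          // dereferencing would crash right here, at the point of use
}
```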
-Steve