April 29

On Monday, 29 April 2024 at 17:52:12 UTC, monkyyy wrote:

> On Monday, 29 April 2024 at 14:58:00 UTC, bachmeier wrote:
>
>> Setting x to 0 is no better for correctness than setting it to a random number. Either way, you're pretending the compiler knows what it should be, and that's impossible.
>
>> no better
>
> trade offs and downsides do not imply relativism
>
> "different countries use different drinking ages, lol laws have no right answers, better legalize cannibalism"

This misses the point. You don't use drinking ages to determine the "correct" age at which drinking should be allowed to start. A computer program should produce the correct answer every time it runs. Having variables initialized to a particular value is better than nothing, but better than that is if double x; fails to compile and you explicitly set it to what you need.

> 0 is correct for sums, 1 is correct for products, random numbers are even ok; yes, there are tradeoffs between any of these; but nan is a black hole specifically designed to break everything

It's no more of a black hole than 0 or 1. It can be used to represent missing data, which is just as valid as 0 or 1. Does that mean it's a good choice for initialization of a double? I don't think so, but that does not justify using 0.

April 29

On Monday, 29 April 2024 at 17:52:55 UTC, jmh530 wrote:

> If this is so annoying...
>
> double x = 0;
>
> why not
>
> ```
> struct MyDouble
> {
>     double x = 0;
>     alias this = x;
> }
> ```

This made me realize you can force the compiler to throw an error rather than initializing variables:

```
import std;

void main() {
    MyDouble z = 1.0;
    writeln(z);
}

struct MyDouble {
    double x;
    alias this = x;

    @disable this();

    this(double _x) {
        x = _x;
    }
}
```

The same can be done to prevent ugly bugs like we discussed in an earlier thread:

```
struct Foo {
    struct Bar {
        double x;
    }
    Bar bar;

    @disable this();

    alias this = bar;
}

// Neither line will compile now
void main() {
    Foo foo;
    auto foo = new Foo;
}
```
April 29

On Monday, 29 April 2024 at 18:53:24 UTC, bachmeier wrote:

> It's no more of a black hole than 0 or 1. It can be used to represent missing data, which is just as valid as 0 or 1. Does that mean it's a good choice for initialization of a double? I don't think so, but that does not justify using 0.

There's only one correct value to initialise the float to, and only the programmer knows what that is. The chances are that 0 is a lot more likely to be correct by accident than NaN is.

So the only argument for NaN is that it's easier to trace forgotten initializations. (my experience is that it's no better than 0)

But if you buy that argument then it logically follows that a compiler error on missing initialization would be far more effective at locating the bug.

Walter's argument against that is that people will just slap zero in there to silence the error. I would argue that they will do exactly the same if it's NaN initialized; it'll just involve a lot more effort to actually trace back and find out where to slap the zero in.

April 29

On Monday, 29 April 2024 at 12:04:00 UTC, Bruce Carneal wrote:

> On Monday, 29 April 2024 at 10:15:46 UTC, evilrat wrote:
>
>> On Monday, 29 April 2024 at 08:12:22 UTC, Monkyyy wrote:
>>
>>> On Monday, 29 April 2024 at 03:51:09 UTC, Bruce Carneal wrote:
>>>
>>>> If they're silent how do you know they failed?
>>>
>>> Because you get to debug a silent runtime error all the time, and at this point I just assume it's a float being nan
>>>
>>> I'm pretty sure this is just the GPU's reaction to nan, draw calls being ignored
>>
>> Yep, rule #1 for me: when I see an extern C/C++ type/function, always add a zero initializer for floats/float arrays.
>>
>> Very few libraries handle nans correctly, from my limited experience.
>
> If you live in a world where no one checks for NaN at any point at any time and/or where a default zero will never hide an error then yep, you're gonna dislike NaN init.

If you're converting float to int, which is essentially all the time for software rendering, then NaNs usually end up as 0xFFFFFFFF, which is no better than zero.

And you literally don't want to litter the fast path with NaN checks. You probably could do it with asserts, but it'd be far more useful to just get a compiler error on uninitialised floats.

I mean, if the whole argument is that default init to NaN helps catch uninitialized variables, there's a far more effective solution. That would actually be useful.

Default init to NaN is not useful: it's not worse than zero, but it's not better. And you might actually want to init to zero sometimes.

April 29

On Monday, 29 April 2024 at 18:53:24 UTC, bachmeier wrote:

> A computer program should produce the correct answer every time it runs.

not in videogames and rendering; the goal is often to make pretty colors; bugs are often features

>> 0 is correct for sums, 1 is correct for products, random numbers are even ok; yes, there are tradeoffs between any of these; but nan is a black hole specifically designed to break everything
>
> It's no more of a black hole than 0 or 1.

In dynamic systems, as in the usual example of foxes and rabbits where a fox needs to eat 3 rabbits every year and every year rabbits double, therefore blah blah blah write out some math

given a vector field there are sources, sinks and black holes; there's some equilibrium where foxes and rabbits are in balance and there will be a rough orbit around a "sink"; there are also sources, such as when rabbits are out of balance with foxes

then there are black holes of 0 where either one went extinct/0

nan is a black hole for basically every primitive function (by design) and therefore as I pull dynamic systems out of my ass and hope they make nice animations or systems or pretty colors, nan is basically always a black hole unless I specifically design each and every pathway to avoid it, in a way 0 or 1 never are; 0 is only a black hole for multiplication and it's an identity for adds, so it's not even comparable

April 29

On Monday, 29 April 2024 at 19:22:54 UTC, claptrap wrote:

> On Monday, 29 April 2024 at 12:04:00 UTC, Bruce Carneal wrote:
>
>> If you live in a world where no one checks for NaN at any point at any time and/or where a default zero will never hide an error then yep, you're gonna dislike NaN init.
>
> If you're converting float to int, which is essentially all the time for software rendering, then NaNs usually end up as 0xFFFFFFFF, which is no better than zero.

Yes, if you transform away from FP you either check beforehand or lose any benefit.

> And you literally don't want to litter the fast path with NaN checks. You probably could do it with asserts, but it'd be far more useful to just get a compiler error on uninitialised floats.

Yeah. I don't think anyone does NaN checks in the fast path (unless they're trying to locate the index of some NaN after being alerted by a less localized NaN). NaN being designed to be sticky/cumulative helps.

> I mean, if the whole argument is that default init to NaN helps catch uninitialized variables, there's a far more effective solution. That would actually be useful.
>
> Default init to NaN is not useful: it's not worse than zero, but it's not better. And you might actually want to init to zero sometimes.

It may not be useful to you, sure, but it is useful to others.

I would join the zero-init contingent if I never planned on using NaN, not even as a weakened backstop in unit testing; if I exited the FP domain before getting any use of it, say, or if all my programs worked correctly with zero init (or at least I thought they did) or ...?

Also, as noted earlier, people weigh the pros and cons differently. As for me, I'm happy that we have NaN init.

April 29
On Monday, April 29, 2024 1:10:35 PM MDT bachmeier via Digitalmars-d wrote:
> This made me realize you *can* force the compiler to throw an error rather than initializing variables:
>
> ```
> import std;
>
> void main() {
>      MyDouble z = 1.0;
>      writeln(z);
> }
>
> struct MyDouble {
>      double x;
>      alias this = x;
>
>      @disable this();
>
>      this(double _x) {
>          x = _x;
>      }
> }
> ```
>
> The same can be done to prevent ugly bugs like we discussed in an earlier thread:
>
> ```
> struct Foo {
>      struct Bar {
>          double x;
>      }
>      Bar bar;
>
>      @disable this();
>
>      alias this = bar;
> }
>
> // Neither line will compile now
> void main() {
>    Foo foo;
>    auto foo = new Foo;
> }
> ```


If you really want to control what's going on, you could make a floating point equivalent to std.checkedint's Checked.

- Jonathan M Davis



April 29

On Monday, 29 April 2024 at 20:01:18 UTC, monkyyy wrote:

> On Monday, 29 April 2024 at 18:53:24 UTC, bachmeier wrote:
>
>> A computer program should produce the correct answer every time it runs.
>
> not in videogames and rendering; the goal is often to make pretty colors; bugs are often features

If you're fine with bugs I imagine you'd be in the zero-init camp. Not much point to NaN as a debug aid if FP correctness is defined loosely enough.

>>> 0 is correct for sums, 1 is correct for products, random numbers are even ok; yes, there are tradeoffs between any of these; but nan is a black hole specifically designed to break everything
>>
>> It's no more of a black hole than 0 or 1.
>
> [snip]
>
> then there are black holes of 0 where either one went extinct/0
>
> nan is a black hole for basically every primitive function (by design) and therefore as I pull dynamic systems out of my ass and hope they make nice animations or systems or pretty colors, nan is basically always a black hole unless I specifically design each and every pathway to avoid it, in a way 0 or 1 never are; 0 is only a black hole for multiplication and it's an identity for adds, so it's not even comparable

I agree, they are not comparable. NaN was designed to be a universal "something went wrong" return value, a sticky one. If you don't need/want that then you're probably a zero-init devotee. If you do want it, to represent general failure as well as failure to init, then you're gonna want NaN init. A single, reliable, automatic, unambiguous, "black hole".

April 30

On Monday, 29 April 2024 at 19:22:54 UTC, claptrap wrote:

> On Monday, 29 April 2024 at 12:04:00 UTC, Bruce Carneal wrote:
>
>> [...]
>
> If you're converting float to int, which is essentially all the time for software rendering, then NaNs usually end up as 0xFFFFFFFF, which is no better than zero.
>
> And you literally don't want to litter the fast path with NaN checks. You probably could do it with asserts, but it'd be far more useful to just get a compiler error on uninitialised floats.
>
> I mean, if the whole argument is that default init to NaN helps catch uninitialized variables, there's a far more effective solution. That would actually be useful.
>
> Default init to NaN is not useful: it's not worse than zero, but it's not better. And you might actually want to init to zero sometimes.

My main problem with NaN init is that it just puts a NaN in the registers instead of checking and throwing.

April 30

On Friday, 26 April 2024 at 18:41:11 UTC, ryuukk_ wrote:

> https://loglog.games/blog/leaving-rust-gamedev/
>
> Discussion here: https://news.ycombinator.com/item?id=40172033
>
> We should encourage these people to check out D

This reminds me of my experience of converting a simple imperative-style Monte-Carlo simulation program from D to Haskell, to learn and test the limits of the latter.

It took far more time than writing the D program.

Contrary to the stereotype, Haskell can do 100% imperative programming with raw mutable arrays. With the right library abstractions, it's even good at that. But to do that efficiently you have to be experienced and understand a lot of abstractions inside out.

This is because the language primitives are immutable, pure mathematics-style entities; I/O and mutability are treated as a special library abstraction. At least in principle, I feel it's the correct way to go about things. Pure code is the recommended default in pretty much any language, after all, when there is no specific reason to do otherwise.

But man does that steepen the learning curve for some tasks. Ironically, you don't need to know monads nearly as well to excel at pipes-and-filters-style programming as you do to excel with mutable arrays.

That's what Haskell is like: you have to jump through many hoops to understand how you can get things done, but when you finally do, the solution is almost without fail extremely powerful and by-the-book, and usually there is a solution no matter what you're trying to achieve. By the way, that's also what the Nix package manager is like, compared to the traditional alternatives.

To me the article reads like Rust is pretty similar. Whether that's a good or a bad thing, comes mostly down to preference.