December 02
On Saturday, 1 December 2018 at 19:02:54 UTC, H. S. Teoh wrote:
>
> In the above contrived example, Artin's conjecture is implied by the Riemann hypothesis, so the second if statement would only run if p is initialized. But there is no way the compiler is going to be able to deduce this, especially not during compile time. So it is not possible to correctly flag p as being initialized or not when it is dereferenced.
>
> Therefore, leaving it up to the compiler to detect uninitialized variables is unreliable, and therefore any code that depends on this cannot be trusted. Code like the above could be exploited by a sufficiently sophisticated hack to make the uninitialized value of p coincide with something that will open a security hole, and the compiler would not be able to reliably warn the programmer of this problem.
>
> Uninitialized variables are *not* a good thing, contrary to what the author of the article might wish to believe.
>
>
> T

If a compiler were to issue warnings/errors for uninitialized variables, then that example would be a compile error. The logic would simply be that not all code paths lead to an initialized variable, therefore *p++ is not guaranteed to be initialized - i.e. an error. Swift takes this approach.

Cheers,
- Ali
December 03
On Saturday, 1 December 2018 at 19:02:54 UTC, H. S. Teoh wrote:

>
> But that's precisely the problem. It's not always possible to tell whether a variable has been initialized. E.g.:

To me, the possibility of a "false positive" doesn't preclude the use of a warning unless that possibility is large. Besides using a compiler option or pragma to suppress it, the warning also goes away if you assign NULL or (X *) 0. Surprisingly, clang (gcc 6.3 does not give the warning at all) is then not smart enough to issue a "possibly dereferencing null pointer" warning.

>
> Therefore, leaving it up to the compiler to detect uninitialized variables is unreliable, and therefore any code that depends on this cannot be trusted. Code like the above could be exploited by a sufficiently sophisticated hack to make the uninitialized value of p coincide with something that will open a security hole, and the compiler would not be able to reliably warn the programmer of this problem.

I don't know that "leaving it up to the compiler" is an accurate characterization. I don't see the programmer doing anything different with the warning capability in the compiler than without it. In either case, the programmer will attempt to supply values to all the variables they have declared and intend to use, in the correct order.

December 03
On 22.11.18 16:19, Steven Schveighoffer wrote:
> 
> In terms of language semantics, I don't know what the right answer is. If we want to say that if an optimizer changes program behavior, the code must be UB, then this would have to be UB.
> 
> But I would prefer saying something like -- if a segfault occurs and the program continues, the system is in UB-land, but otherwise, it's fine. If this means an optimized program runs and a non-optimized one crashes, then that's what it means. I'd be OK with that result. It's like Schrodinger's segfault!
> 
> I don't know what it means in terms of compiler assumptions, so that's where my ignorance will likely get me in trouble :)

This is called nondeterministic semantics, and it is a good idea if you want both efficiency and memory safety guarantees, but I don't know how well our backends would support it.

(However, I think it is necessary anyway, e.g. to give semantics to pure functions.)
December 04
> Nulls/Nones are always a big gap in a language's type system. A common alternative is to have some Option/Maybe type like Rust or Haskell or D's Variant. How about making that required to plug the null gap?

There are others who feel that way too:

https://news.ycombinator.com/item?id=18588239
December 04
On Monday, 19 November 2018 at 21:23:31 UTC, Jordi Gutiérrez Hermoso wrote:
> When I was first playing with D, I managed to create a segfault by doing `SomeClass c;` and then trying to do something with the object I thought I had default-created, by analogy with C++ syntax. Seasoned D programmers will recognise that I did nothing of the sort: I instead made c null, and my program ended up dereferencing a null pointer.
>
> I'm not the only one who has done this. I can't find it right now, but I've seen at least one person open a bug report because they misunderstood this as a bug in dmd.
>
> I have been told a couple of times that this isn't something that needs to be patched in the language, but I don't understand. It seems like a very easy way to generate a segfault (and not a NullPointerException or whatever).
>
> What's the reasoning for allowing this?

This is because you're transferring what you know from C++ to D directly. You shouldn't do that; check out how the specific feature works in the particular language. I used C# mostly, and Foo f; wouldn't make sense to me: it's not allocated, it's null. So right away I used Foo f = new Foo();