November 22, 2017
On Wednesday, 22 November 2017 at 04:55:39 UTC, codephantom wrote:
> Consider the Goldbach Conjecture, that every even positive integer greater than 2 is the sum of two (not necessarily distinct) primes. According to the principle of bivalence, this should be either true or false.

«The Goldbach conjecture verification project reports that it has verified the conjecture for all even numbers up to 4×10^18»

Which is more than you'll ever need in any regular programming context.

Next problem?

November 23, 2017
On Wednesday, 22 November 2017 at 22:02:11 UTC, Ola Fosheim Grøstad wrote:
> On Wednesday, 22 November 2017 at 04:55:39 UTC, codephantom wrote:
>> Consider the Goldbach Conjecture, that every even positive integer greater than 2 is the sum of two (not necessarily distinct) primes. According to the principle of bivalence, this should be either true or false.
>
> «The Goldbach conjecture verification project reports that it has verified the conjecture for all even numbers up to 4×10^18»
>
> Which is more than you'll ever need in any regular programming context.
>
> Next problem?

Come on. Really?

"It's true as far as we know" != "true"

"True up to some number n" ... does not settle the conjecture.

Where is the 'proof' that the conjecture is 'true'?

Hint: it's not a problem that mathematics can solve.
November 23, 2017
On Thursday, 23 November 2017 at 00:06:49 UTC, codephantom wrote:
> "True up to some number n" ... does not settle the conjecture.

So what? We only need a proof up to N for regular programming, if at all.
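
To make "up to N" concrete: a brute-force check along these lines (a rough C# sketch of my own; GoldbachHoldsUpTo is just an illustrative name) is all a bounded claim requires:

// -------------------------------------

using System;

public class Program
{
    public static int Main()
    {
        // True for this bound; it says nothing about numbers beyond it.
        Console.WriteLine(GoldbachHoldsUpTo(10000));
        return 0;
    }

    // Checks Goldbach for every even number in [4, limit].
    static bool GoldbachHoldsUpTo(int limit)
    {
        // Sieve of Eratosthenes.
        var isPrime = new bool[limit + 1];
        for (int i = 2; i <= limit; i++) isPrime[i] = true;
        for (int p = 2; (long)p * p <= limit; p++)
            if (isPrime[p])
                for (int m = p * p; m <= limit; m += p)
                    isPrime[m] = false;

        for (int n = 4; n <= limit; n += 2)
        {
            bool found = false;
            for (int p = 2; p <= n / 2; p++)
                if (isPrime[p] && isPrime[n - p]) { found = true; break; }
            if (!found) return false; // counterexample below the bound
        }
        return true; // verified up to 'limit', nothing more
    }
}

// -------------------------------------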

> Hint: it's not a problem that mathematics can solve.

By what proof? And what do you mean by mathematics?


November 23, 2017
On Wednesday, 22 November 2017 at 18:16:16 UTC, Wyatt wrote:
> "Need"?  Perhaps not.  But so far, I haven't seen any arguments that refute the utility of mitigating patterns of human error.
>

OK, that's a good point. But there is more than one way to address human error without having to further regulate human behaviour.

How about we change the way we think, for example?

I 'expect' bad people to try to do 'bad stuff' using my code. It's the first thing I think about when I start typing.

This perspective alone really changes the way I write code. It's not perfect, but it's a lot better than if I didn't have that perspective. And all it required was thinking differently. No language change, no further regulation.
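
As a trivial illustration of that mindset (a rough sketch of my own, nothing more): treat every input as hostile until it has been validated.

// -------------------------------------

using System;

public class Program
{
    public static int Main()
    {
        Console.Write("Quantity: ");
        string input = Console.ReadLine() ?? "";

        // Assume the input is hostile until proven otherwise:
        // parse and range-check it instead of trusting it.
        if (!int.TryParse(input, out int quantity) || quantity < 1 || quantity > 100)
        {
            Console.Error.WriteLine("Rejected: expected an integer between 1 and 100.");
            return 1;
        }

        Console.WriteLine($"Ordering {quantity} item(s).");
        return 0;
    }
}

// -------------------------------------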

So yeah, you can change the language... or you can change the way people think about their code. When they think differently, their code will change accordingly.

My point about sophisticated IDEs and AI-like compilers is that they don't seem to have addressed the real issue: changing the way people think about their code. If anything, they've introduced so many distractions and so much automation that people are just not thinking about their code anymore. So now language designers are being forced to step in and start regulating programmer behaviour. I don't like that approach.

You rarely hear anything about defensive programming these days, but it's more important now than it ever was. I'd make it the number one priority for new developers. But you won't even find the concept being taught at our universities. They're too busy teaching students to program in Python... hahha... the future is looking pretty bleak ;-(

Where are the 'Secure Coding Guidelines for Programming in D'? (I'm not saying they don't exist; I'm just not aware of them.)

What if I did a security audit on DMD or Phobos? What would I discover?

What if I did a security audit on all the D code on GitHub? What would I discover?

Sophisticated IDEs and AI-like compilers have not rescued us from this inherent flaw in programming. The flaw is a human flaw: a flaw in the way we think.

November 23, 2017
On Thursday, 23 November 2017 at 00:15:56 UTC, Ola Fosheim Grostad wrote:
> On Thursday, 23 November 2017 at 00:06:49 UTC, codephantom wrote:
>> "True up to some number n" ... does not settle the conjecture.
>
> So what? We only need a proof up to N for regular programming, if at all.
>

That's really the point I was making.

It's the reason you'll never be able to put your complete trust in a compiler.

The compiler can only ever know something, about something, up to a point.

That's why we have the concept of 'undefined behaviour'.
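
A trivial C# sketch of what I mean (along the lines of my earlier examples): the divisor below only comes into existence at run time, so no compiler can rule the failure out at compile time.

// -------------------------------------

using System;

public class Program
{
    public static int Main()
    {
        // The divisor comes from the user; its value simply does not
        // exist at compile time, so only a run-time check can guard it.
        int b = int.Parse(Console.ReadLine() ?? "0"); // may throw FormatException
        Console.WriteLine(100 / b);                   // throws if b == 0
        return 0;
    }
}

// -------------------------------------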

November 23, 2017
On Thursday, 23 November 2017 at 00:15:56 UTC, Ola Fosheim Grostad wrote:
> By what proof? And what do you mean by mathematics?

A mathematical claim that cannot be proven or disproven is neither true nor false.

What you are left with, is just a possibility.

Thus, it will always remain an open question whether the conjecture is true or not.

November 23, 2017
On Wednesday, 22 November 2017 at 00:39:21 UTC, codephantom wrote:
> On Wednesday, 22 November 2017 at 00:19:51 UTC, codephantom wrote:
>> It seems to me that you prefer to rely on the type system, during compilation, for safety. This is very unwise.
>>
>
> To demonstrate my point, using code from a 'safe' language (C#):
> (i.e. should this compile?)
>
> // --------------------------------------------------
>
> using System;
>
> public class Program
> {
>
>
>     public static int Main()
>     {
>         Foo();
>         return 0;
>     }
>
>     static void Foo()
>     {
>         const object x = null;
>
>         //if (x != null)
>         //{
>             Console.WriteLine(x.GetHashCode());
>         //}
>     }
>
> }
>
> // --------------------------------------------------


Here is another demonstration of why you can trust your compiler:

using code from a 'safe' language (C#):
(i.e. should this compile?)


// -------------------------------------

using System;

public class Program
{
    public static int Main()
    {
        Console.WriteLine( divInt(Int32.MinValue,-1) );
        return 0;
    }

    static int divInt (int a, int b)
    {
        int ret = 0;

        //if ( (b != 0) && (!((a == Int32.MinValue) && (b == -1))) )
        //{
            ret = a / b;
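            // NOTE: with a == Int32.MinValue and b == -1 the true result,
            // +2147483648, does not fit in a 32-bit int, so the runtime
            // throws System.OverflowException; the compiler can't flag it
            // here because a and b are not compile-time constants.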
        //}
        //else
        //{
        //   throw new InvalidOperationException("Sorry.. no can do!");
        //}

        return ret;
    }

}


// -------------------------------------------------------

November 23, 2017
On Thursday, 23 November 2017 at 06:32:30 UTC, codephantom wrote:
>
> Here is another demonstration of why you can trust your compiler:
>

Why you "can't" ... is what i meant to say.

I love not being able to edit posts. It's so convenient.
November 23, 2017
On Thursday, 23 November 2017 at 01:33:39 UTC, codephantom wrote:
> On Thursday, 23 November 2017 at 00:15:56 UTC, Ola Fosheim Grostad wrote:
>> By what proof? And what do you mean by mathematics?
>
> A mathematical claim, that cannot be proven or disproven, is neither true or false.
>
> What you are left with, is just a possibility.

And how is this a problem? If your program relies upon the unbounded version, you will have to introduce it explicitly as an axiom. But you don't have to; you can use bounded quantifiers.

What you seem to be saying is that one should accept all unproven statements as axioms implicitly. Why have a type system at all, then?
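
Roughly, as propositions (a sketch in Lean 4 with Mathlib; the definitions and names are mine, just to illustrate the distinction):

-- -------------------------------------

import Mathlib

-- Unbounded Goldbach: an open conjecture, so a program depending on it
-- would have to assume it as an axiom.
def Goldbach : Prop :=
  ∀ n : ℕ, 2 < n → Even n → ∃ p q : ℕ, p.Prime ∧ q.Prime ∧ p + q = n

-- Bounded version: quantifies only up to N, so it can in principle be
-- settled by computation rather than assumed.
def GoldbachUpTo (N : ℕ) : Prop :=
  ∀ n : ℕ, n ≤ N → 2 < n → Even n → ∃ p q : ℕ, p.Prime ∧ q.Prime ∧ p + q = n

-- -------------------------------------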

> Thus, it will always remain an open question whether the conjecture is true or not.

Heh, has the Goldbach conjecture been proven undecidable?


November 23, 2017
On Thursday, 23 November 2017 at 01:16:59 UTC, codephantom wrote:
> That's why we have the concept of 'undefined behaviour'.

Errr, no. High-level programming languages don't have undefined behaviour. That is a C concept, related to the performance of the executable; C tries to get as close to machine language as possible.
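
For instance, integer overflow in C# is fully specified either way, in contrast to C, where signed overflow is undefined (a minimal sketch):

// -------------------------------------

using System;

public class Program
{
    public static int Main()
    {
        // Wraps around by definition: the language spec pins this down.
        unchecked { Console.WriteLine(int.MaxValue + 1); } // -2147483648

        // Traps by definition.
        try
        {
            int x = int.MaxValue;
            checked { Console.WriteLine(x + 1); }
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow trapped");
        }
        return 0;
    }
}

// -------------------------------------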