June 18, 2014
On 6/18/2014 2:16 AM, Rikki Cattermole wrote:
>  From my experience, one of the reasons I haven't had much to do with the
> development of dmd is simply that compiling dmd and druntime is not as
> straightforward as it could be.

You don't need to actually hack on dmd or druntime to make valuable contributions. Getting problem reports and reproducible test cases into bugzilla is much needed work, for example.
June 19, 2014
On Wednesday, 18 June 2014 at 21:57:40 UTC, deadalnix wrote:
> Granted infinite resources. Good, now that we ruled that thing
> out, can we talk about the subject, or do we need to continue
> talking about imaginary things ?

No, finite resources, and that is only a worst-case upper bound. It does not preclude the possibility of doing better. It also does not say how much work it takes in the typical case, e.g. the kind of programs you write when you try to make code @safe.

What it does prove is that the problem is computable, which is a major difference. Please note that the halting problem does not address difficulty, only analytical computability.

In the real world you work with typical programs that run on finite resources, guided by heuristics. There is no proof that you cannot have @safe. So drop that line of argument. It is fundamentally flawed.
June 19, 2014
On 6/18/2014 10:18 AM, H. S. Teoh via Digitalmars-d wrote:
> Everyone talks about it, but nobody does anything about it, so here goes
> nothing:
>
> 	https://issues.dlang.org/show_bug.cgi?id=12941
>
> Obviously, I have no idea what should go in that list, so everyone who
> cares about this issue, please chime in. Thanks!
>
> P.S. I've also tagged a whole bunch of issues with 'safe'. It's just
> based on a quick skim over all issues that turned up when I searched for
> 'safe', so I may have mistagged some, and missed others, but this is
> just to get the ball rolling. Please fix wrong/missing tags if you find
> them. :)

Thanks!

June 19, 2014
On 06/18/14 18:42, Dicebot via Digitalmars-d wrote:
> On Wednesday, 18 June 2014 at 11:09:14 UTC, Artur Skawina via Digitalmars-d wrote:
>> The number of potential contributors is
>> already low enough, and the fuzzy licensing situation drives away the
>> most valuable ones (since those will often be the ones which treat the
>> legal aspects seriously and the risks involved when dealing with this
>> kind of mixed free and proprietary code within one repo are too high).
> 
> Wait, what? Do you know a single person who decided not to work on the DMD FE because of the kind-of-formally (but not practically) non-free backend?

Well, do you think I would have said what I did if this issue didn't affect /me/? [1]

The problem is a very practical one. Walter's complaint about the
lack of constructive contributions is justified; most of the
language-related discussions happening here are nothing more than
wish lists. There are many different causes; I'm pointing out one.
What's unique about this one is that there's only one person in the
world who can do something about it -- the same task done by someone
other than Walter Bright would not be equally valuable.

The shared frontend+backend setup might be not just a convenience but a deliberate attempt to discourage certain developments -- even so, I think it does much more harm than good. It practically inhibits "casual" contributions, while not really preventing the kinds of use that bring little benefit back to the original project.

And, yes, some people really do always check licenses, even before fully determining what a software project actually is or does, because if the license is problematic then everything else is irrelevant -- the project is simply unusable, and any time spent looking at it would be wasted.

That is fortunately not a problem for dmdfe, as Boost/GPL should be
ok for (almost) everyone. But the cost of having to deal with another
license, for a bundled part that you're never going to use and are not
even interested in, is there. The cost of scratching an itch also
becomes higher. Depending on the person and context, these costs can
be prohibitive.

artur

[1] If I had known I would be replying like this, I would have worded that
    quoted statement a bit differently; I'm not quite that arrogant. :)
June 19, 2014
On 6/19/2014 4:12 AM, Artur Skawina via Digitalmars-d wrote:
> On 06/18/14 18:42, Dicebot via Digitalmars-d wrote:
>> On Wednesday, 18 June 2014 at 11:09:14 UTC, Artur Skawina via Digitalmars-d wrote:
>>> The number of potential contributors is
>>> already low enough, and the fuzzy licensing situation drives away the
>>> most valuable ones (since those will often be the ones which treat the
>>> legal aspects seriously and the risks involved when dealing with this
>>> kind of mixed free and proprietary code within one repo are too high).
>>
>> Wait, what? Do you know a single person who decided not to work on the DMD FE because of the kind-of-formally (but not practically) non-free backend?
>
> Well, do you think I would have said what I did if this issue didn't
> affect /me/? [1]

The front end is now Boost licensed.

June 19, 2014
On Thursday, 19 June 2014 at 04:06:51 UTC, Ola Fosheim Grøstad wrote:
> On Wednesday, 18 June 2014 at 21:57:40 UTC, deadalnix wrote:
>> Granted infinite resources. Good, now that we ruled that thing
>> out, can we talk about the subject, or do we need to continue
>> talking about imaginary things ?
>
> No, finite resources, and that is only a worst-case upper bound. It does not preclude the possibility of doing better. It also does not say how much work it takes in the typical case, e.g. the kind of programs you write when you try to make code @safe.
>
> What it does prove is that the problem is computable, which is a major difference. Please note that the halting problem does not address difficulty, only analytical computability.
>
> In the real world you work with typical programs that run on finite resources, guided by heuristics. There is no proof that you cannot have @safe. So drop that line of argument. It is fundamentally flawed.

It's not, since the resources to verify a property are limited, too.

If I need to enumerate all possible program states, there will always exist a program that can run on my box but will not be verifiable on it (since I lack the resources).

It's also not enough for @safe to be verifiable; it must not slow down compilation too much either.

June 19, 2014
On Thu, Jun 19, 2014 at 06:24:59PM +0000, Tobias Pankrath via Digitalmars-d wrote:
> On Thursday, 19 June 2014 at 04:06:51 UTC, Ola Fosheim Grøstad wrote:
> >On Wednesday, 18 June 2014 at 21:57:40 UTC, deadalnix wrote:
> >>Granted infinite resources. Good, now that we ruled that thing out, can we talk about the subject, or do we need to continue talking about imaginary things ?
> >
> >No, finite resources, and that is only a worst-case upper bound. It does not preclude the possibility of doing better. It also does not say how much work it takes in the typical case, e.g. the kind of programs you write when you try to make code @safe.
> >
> >What it does prove is that the problem is computable, which is a major difference. Please note that the halting problem does not address difficulty, only analytical computability.
> >
> >In the real world you work with typical programs that run on finite resources, guided by heuristics. There is no proof that you cannot have @safe. So drop that line of argument. It is fundamentally flawed.
> 
> It's not, since the resources to verify a property are limited, too.
>
> If I need to enumerate all possible program states, there will always exist a program that can run on my box but will not be verifiable on it (since I lack the resources).
>
> It's also not enough for @safe to be verifiable; it must not slow down compilation too much either.

Exactly.  Just because something is *finite* doesn't necessarily mean it's practical.  Anything that requires enumeration of all possible states is impractical, because it has an exponential worst-case bound, which is unacceptable for compilation time (or for anything, really, except in the most trivial cases).

To get an idea of how intractable it can get, suppose your entire program state is fully described by, say, 1 MB worth of variables (that's pretty conservative, considering the size of programs these days and the size of the data they deal with). That's 2^23 bits, so the upper bound on the number of program states to enumerate is 2^(2^23) = ... (a number with 2,525,223 digits). Note that that's not 2,525,223 *states*, that's the number of *digits* in the number of states! It's a finite number, sure, but good luck enumerating all program states within the lifetime of the universe (don't even talk about within your lifetime!).
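
To make that arithmetic easy to check, here is a small D sketch (illustrative only; it assumes the state is exactly 2^23 bits and ignores the constant factor per state):

	// Back-of-the-envelope check of the digit counts above (illustrative only).
	import std.math : floor, log10;
	import std.stdio : writefln;

	void main()
	{
	    double bits = 2.0 ^^ 23;                     // 1 MB = 2^20 bytes = 2^23 bits of state
	    // 2^bits has floor(bits * log10(2)) + 1 decimal digits.
	    double digits = floor(bits * log10(2.0)) + 1;
	    writefln("2^(2^23) has %.0f digits", digits);        // ~2,525,223
	    // Spreading the enumeration over 100 billion (1e11) nodes removes only 11 digits.
	    writefln("per node: %.0f digits", digits - 11);      // ~2,525,212
	}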

Even if you run it on a hypothetical distributed supercomputer with 100 billion nodes, that barely even begins to make a dent in the number of states you need to evaluate (the number of states per node is a number with "only" 2,525,212 digits, haha).

And mind you, 1 MB of state is pathetically small. Real programs these days require GBs of memory, and that will cause the number of states to explode far, far beyond any imaginable computing capabilities of the human race, past, present, or future -- unless we manage to invent a device that exploits time dilation near a black hole's horizon to compute the halting problem in finite time (i.e., it's pure fiction).


T

-- 
Question authority. Don't ask why, just do it.
June 19, 2014
On Thursday, 19 June 2014 at 18:25:00 UTC, Tobias Pankrath wrote:
> It's not, since the resources to verify a property are limited, too.
>
> If I need to enumerate all possible program states, there will always exist a program that can run on my box but will not be verifiable on it (since I lack the resources).

Nobody has said you should enumerate all possible states.

There's a significant difference between a proof and an algorithm.
June 19, 2014
On Thursday, 19 June 2014 at 18:50:27 UTC, H. S. Teoh via Digitalmars-d wrote:
> Exactly.  Just because something is *finite* doesn't necessarily mean
> it's practical.  Anything that requires enumeration of all possible
> states is impractical, because it has an exponential worst-case bound,
> which is unacceptable for compilation time (or for anything, really,
> except in the most trivial cases).

You are being silly. You either discuss computability or you discuss complexity. Please don't mix the two topics.

You either discuss a useful verification of safe features or you don't.

It's not at all impossible to verify memory safety for real time applications. Real time applications have an upper bound on how long they should run. This even holds for TMs.

The problem "does the TM terminate within N steps" is decidable.
June 19, 2014
On Thu, Jun 19, 2014 at 07:12:52PM +0000, via Digitalmars-d wrote:
> On Thursday, 19 June 2014 at 18:50:27 UTC, H. S. Teoh via Digitalmars-d wrote:
> >Exactly.  Just because something is *finite* doesn't necessarily mean it's practical. Anything that requires enumeration of all possible states is impractical, because it has an exponential worst-case bound, which is unacceptable for compilation time (or for anything, really, except in the most trivial cases).
> 
> You are being silly. You either discuss computability or you discuss complexity. Please don't mix the two topics.
> 
> You either discuss a useful verification of safe features or you don't.
> 
> It's not at all impossible to verify memory safety for real time applications. Real time applications have an upper bound on how long they should run. This even holds for TMs.
> 
> The problem "does the TM terminate within N steps" is decidable.

Decidable doesn't mean feasible. You have to run a simulator of the program for N steps in order to determine whether it terminates. If N is large, this means compilation will be impractically slow.

The halting problem is not just about needing infinite time; it's also about the non-existence of algorithms that can analyze every possible program. Even if a given TM never terminates, as long as it fits into an analysable pattern, we can prove its non-termination in finite time: the analysing algorithm is able to "compress" the proof into finite length. But the unsolvability of the halting problem means that there is no algorithm that can compress every possible program.

This means there is no algorithm that can determine termination within N steps, for arbitrary N, other than the one that enumerates all possible states up to N steps. If I have code like this:

	while (!finished) {
		if (arbitrarilyComplexFalseCondition)
			unsafeOperation();
		else
			safeOperation();

		finished = arbitrarilyComplexStoppingCondition;
	}

you cannot, in general, prove safety without enumerating all possible states (since you can't predict which branch the if statement will take). Since enumerating all possible states is impractical, this means your N-step termination problem is practically unsolvable, even if it's decidable in theory.
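
As a toy illustration (my example, with made-up names, not anything from dmd): with only 16 bits of state you can brute-force the question "is the unsafe branch ever taken?", but every extra bit of state doubles the work, which is exactly the exponential blow-up described above.

	// Brute-force "verification" of a toy program: enumerate every possible
	// 16-bit state and check whether the unsafe branch can ever be taken.
	import std.stdio : writefln;

	bool unsafeBranchTaken(ushort s)
	{
	    // Stand-in for an arbitrarily complex condition over the program state;
	    // it happens to be unsatisfiable, but the checker doesn't need to know why.
	    return (s & 0xF0F0) == 0x1234;
	}

	void main()
	{
	    bool reachable = false;
	    foreach (s; 0 .. 65_536)            // 2^16 states: instant
	        if (unsafeBranchTaken(cast(ushort) s))
	            reachable = true;
	    writefln("unsafe branch reachable: %s", reachable);
	    // The same loop over 1 MB of state would need 2^(2^23) iterations.
	}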


T

-- 
The early bird gets the worm. Moral: ewww...