January 20, 2014
On Monday, 20 January 2014 at 12:20:58 UTC, Ola Fosheim Grøstad wrote:
> But when you have an explicit nullptr test you will have to mask it before testing the zero-flag in the control register.

And just to make it explicit: you will have to add the masking logic to all comparisons of nullable pointers too if you want to allow comparison of null to be valid.

aptr == bptr  =>  ((aptr & MASK) == 0 && (bptr & MASK) == 0) || aptr == bptr

or something like that.
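To make the masking cost concrete, here is a small C++ sketch (C++ rather than D, purely for illustration; the `TAG_BIT`/`MASK` layout is made up):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical layout: pointers carry a tag in the top bit, and MASK selects
// the address payload, so "null" means a zero payload regardless of tag bits.
constexpr std::uintptr_t TAG_BIT = std::uintptr_t(1) << 63;
constexpr std::uintptr_t MASK = ~TAG_BIT;

// Equality on nullable tagged pointers: two nulls compare equal whatever
// their tag bits; otherwise compare the full words. Note the parentheses:
// in C-family languages == binds tighter than &, so aptr & MASK == 0 would
// parse as aptr & (MASK == 0).
bool ptr_eq(std::uintptr_t aptr, std::uintptr_t bptr) {
    return (((aptr & MASK) == 0) && ((bptr & MASK) == 0)) || aptr == bptr;
}
```

The extra clause is exactly the masking logic the comparison now has to carry.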
January 20, 2014
On Sunday, 19 January 2014 at 08:41:23 UTC, Timon Gehr wrote:
> This is not a plausible assumption. What you tend to know is that the program is unlikely to fail because otherwise it would not have been shipped, being safety critical.

I feel that this "safety critical" argument is ONE BIG RED HERRING.

It is completely irrelevant to the topic, which is whether it is useful to have recovery mechanisms for null pointers or not. It is, but not if you are forced to use them. Nobody suggests that you should be forced to recover from a null dereference.

Nevertheless: If your application is safety critical and has not been proven correct or undergone exhaustive testing (all combinations of input), then it is most likely a complex system which is likely to contain bugs. You can deal with this by partitioning the system into independent subsystems (think functional programming) which you control in a domain-specific manner (e.g. you can have multiple algorithms and select the median value or the most conservative estimate, spin down subsystems to revert to a less optimal, more resource-demanding state, run a verifier on the result, etc.).
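The "several algorithms, pick the median" idea can be sketched in a few lines (C++ for illustration; names are made up):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Each independently implemented estimator produces a value; the subsystem
// forwards the median, so a single buggy estimator cannot drag the output
// to an extreme.
double median_vote(std::vector<double> estimates) {
    auto mid = estimates.begin() + estimates.size() / 2;
    std::nth_element(estimates.begin(), mid, estimates.end());
    return *mid;  // for an odd count this is the true median
}
```

With three estimators, one can return garbage and the output is still one of the two sane values.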

> I.e. when it fails, you don't know that it is unlikely to be caused by something. It could be hardware failure, and even a formal correctness proof does not help with that.

But hardware failure is not a valid issue when discussing programming language constructs?

Of course the system design should account for hardware failure, extreme weather that makes sensors go out of range and a drunken sailor pressing all buttons at once.

Not a programming language construct topic.

> Irrelevant. He is arguing for stopping the system once it has _become clear_ that the _current execution_ might not deliver the expected results.

Then you have to DEFINE what you mean by expected results. That is domain specific, not a programming-language-construct issue in a general-purpose programming language.

If the expected result is defined as: having a result is better than no result, then stopping the system is the worst thing you could do.

If the expected result controls N different systems then it might be better to fry 1 system and keep N-1 systems running than to fry N systems. That's a domain specific choice the system designer should have the right to make. Sacrifice one leg to save the other limbs.

Think about the effect of this: 1 router detects a bug, by the logic in this thread it should then notify all routers running the same software and tell them to shut down immediately. Result: insta-death to entire Internet.
January 20, 2014
On 1/20/2014 6:18 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:
> Think about the effect of this: 1 router detects a bug, by the logic in this
> thread it should then notify all routers running the same software and tell them
> to shut down immediately. Result: insta-death to entire Internet.

Not only has nobody suggested this, I have explicitly written here otherwise, more than once.

I infer you think that my arguments here are based on something I dreamed up in 5 minutes of tip-tapping at my keyboard. They are not. They are what Boeing and the aviation industry use extremely successfully to create incredibly safe airliners, and the track record is there for anyone to see.

It's fine if you believe you've found a better way. But there's a high bar of existing practice and experience to overcome with a better way, and a need to start from a position of understanding that successful existing practice first.
January 20, 2014
On Monday, 20 January 2014 at 19:27:31 UTC, Walter Bright wrote:
> I infer you think that my arguments here are based on something I dreamed up in 5 minutes of tip-tapping at my keyboard. They are not. They are what Boeing and the aviation industry use extremely successfully to create incredibly safe airliners, and the track record is there for anyone to see.

No, but I think you are conflating narrow domains that have an established practice with broader application-development needs, and I wonder what the relevance of having other options than bailing out is to this discussion. I assume that you were making a point of relevance to application programmers? I assume that there is more to this than an anecdote?

And… no, it is not OK to say that one should accept a buggy program being in an inconsistent state until the symptoms surface and only then do a headshot, which is the reasoning behind one-buggy-implementation-running-until-null-crash. "You aren't ill until you look ill"?

But that is not what Boeing does, is it? Because they use a comparator: "we may be ill, but we are unlikely to show the same symptoms, so if we agree we assume that we are well." Which is a much more acceptable "excuse" (in terms of probability of damage). Why? Because a 0.001% chance of implementation-related failure could be reduced to, say, 0.000000001% (depending on the resolution of the output etc.).
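The arithmetic behind the comparator argument is roughly this (a sketch assuming the two channels fail independently, which real comparator designs must argue for):

```cpp
#include <cassert>
#include <cmath>

// If two independently implemented channels each fail with probability p,
// their failures are uncorrelated, and both must agree for the output to
// pass, then an undetected common failure is on the order of p * p. The
// exact factor depends on output resolution, as noted above.
double undetected_failure(double p) { return p * p; }
```

For p = 0.001% (1e-5), that gives on the order of 1e-10, in the ballpark of the figure above.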

Ideally safe-D should conceptually give you isolates so that an application can call a third party library that loads a corrupted file and crash on a null-ptr (because that code path has never been run before) and you catch that crash and continue. Yes, the library is buggy and only handles consistent files well, but as an application programmer that is fine. I only want to successfully load non-corrupt files, there is no need to fix that library. Iff the library/language assures that it behaves like it is run as an isolate (no side effects on the rest of the program). Wasting resources on handling corrupt files gracefully is pointless if you can isolate and contain the problem.

It is fine if HALT is the default in D, defaults should be conservative. It is not fine if the attitude is that HALT is the best option if the programmer thinks otherwise and anticipates trouble.
January 20, 2014
On Monday, 20 January 2014 at 20:01:58 UTC, Ola Fosheim Grøstad wrote:
> Ideally safe-D should conceptually give you isolates so that an application can call a third party library that loads a corrupted file and crash on a null-ptr (because that code path has never been run before) and you catch that crash and continue. Yes, the library is buggy and only handles consistent files well, but as an application programmer that is fine.

The point is: for true isolation you'll need another process. If you are aware that it could die: let it be. Just restart it or throw the file away or whatever.
So given true isolation hlt on null ptr dereference isn't an issue.
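The process-level isolation being described can be sketched with POSIX `fork`/`waitpid` (C++ for illustration; `run_isolated` is a made-up name):

```cpp
#include <cassert>
#include <cstdlib>
#include <sys/wait.h>
#include <unistd.h>

// Run untrusted work in a forked child, so a null dereference (SIGSEGV) or
// a hlt-style abort kills only the child. The parent just observes the exit
// status and moves on: restart it, throw the file away, or whatever.
bool run_isolated(void (*work)()) {
    pid_t pid = fork();
    if (pid == 0) {   // child: do the risky work, exit cleanly on success
        work();
        _exit(0);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```

A child that crashes on a null pointer simply makes `run_isolated` return false; the parent process is untouched.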


January 20, 2014
On 01/20/2014 01:44 AM, Michel Fortin wrote:
>
> That's one way to do it. Note that this means you can't assign null to
> 'a' inside the 'if' branch.  ...

Such an assignment could downgrade the type again. An alternative would be to not use flow analysis at all and require, e.g.:

A? a=foo();
if(A b=a){
    // use b of type 'A' here
}

A solution that sometimes allows A? to be dereferenced will likely have issues with eg. IFTI.
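The D sketch above has a rough present-day analogue in C++ using `std::optional` with an if-initializer: the payload is only reachable under the new name inside the branch, so no flow analysis is needed (C++ for illustration; `foo` is a made-up function):

```cpp
#include <cassert>
#include <optional>

// foo() may or may not produce a value, like the nullable A? above.
std::optional<int> foo(bool have) {
    return have ? std::optional<int>(42) : std::nullopt;
}

int use_or_default(bool have) {
    if (auto b = foo(have)) {  // b is engaged only inside this branch
        return *b;             // checked by construction, not by flow analysis
    }
    return -1;
}
```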
January 20, 2014
On Monday, 20 January 2014 at 20:11:33 UTC, Tobias Pankrath wrote:
> The point is: for true isolation you'll need another process. If you are aware that it could die: let it be. Just restart it or throw the file away or whatever.

That is not an option. I started looking at D in early 2006 because I was looking for a language to create an experimental virtual world server. C++ is out of the question because of all the deficiencies (except for some simulation parts that have to be bug-free) and D could have been a good fit.

Forking is too slow and tedious. File loading was just an example. The "isolate" should have read access to global state (measured in gigabytes), but not write access.

If you cannot have "probable" isolates in safe D, then it is not suitable for "application level" server designs that undergo evolutionary development. I am not expecting true isolates, but "probable". Meaning: the system is more likely to go down for some other reason than a leaky isolate.

Isolates and futures are also very simple and powerful abstractions for getting multi-threading in web services in a trouble free manner.

> So given true isolation hlt on null ptr dereference isn't an issue.

You don't need hardware isolation to do this in a way that works in practice. It should be sufficient to do static analysis and get a list of trouble areas which you can inspect.