March 06, 2012
On 2012-03-06 10:53:19 +0000, Jacob Carlborg <doob@me.com> said:

> On Mac OS X the runtime would only need to catch any exception (as it already does) and print the stack trace. But also re-throw the exception to let the OS handle the logging of the exception (at least I hope that will work).

Actually, if you want a useful crash log, the exception shouldn't be caught at all: reaching the catch handler requires unwinding the stack, which ruins the stack trace in the log file. The stack trace should instead be printed by the exception-handling code when no catch handler can be found, after which the program can crash and let the OS do its thing. For that to work, there must be no catch block around the call to D main in the runtime initialization code.

-- 
Michel Fortin
michel.fortin@michelf.com
http://michelf.com/

March 06, 2012
On 2012-03-06 13:48, Chad J wrote:
> On 03/06/2012 02:54 AM, Jacob Carlborg wrote:
>> On 2012-03-06 08:53, Jacob Carlborg wrote:
>>> On 2012-03-06 03:04, Steven Schveighoffer wrote:
>>>> Certainly for Mac OS X, it should do the most informative appropriate
>>>> thing for the OS it's running on. Does the above happen for D programs
>>>> currently on Mac OS X?
>>>
>>> When an exception is thrown and not caught, the stack trace is printed
>>> to the terminal (if the program is run in a terminal). If the program
>>> ends with a segmentation fault, the stack trace is written to a log file.
>>>
>>
>> Outputting to a log file is handled by the OS and not by druntime.
>>
>
> It sounds like what you'd want to do is walk the stack and print a trace
> to stderr without actually jumping execution. Well, check for being
> caught first. Once that's printed, then trigger an OS error and quit.

I'm just writing how it works, not what I want to do.

-- 
/Jacob Carlborg
March 06, 2012
On Tuesday, 6 March 2012 at 09:11:07 UTC, James Miller wrote:
> If you have a possible null, then check for it *yourself*. Sometimes
> you know it's null, sometimes you don't have any control. However, the
> compiler has no way of knowing that. It's basically an all-or-nothing
> thing with the compiler.
>
> However, the compiler can (and I think does) warn of possible
> null-related errors. It doesn't fail, because, again, it can't be
> certain of what is an error and what is not. And it can't know, since
> that is the Halting Problem.
>
> I'm not sure what the fuss is here, we cannot demand that every little
> convenience be packed into D, at some point we need to face facts that
> we are still programming, and sometimes things go wrong. The best
> argument I've seen so far is to install a handler that catches the
> SEGFAULT on Linux (or does whatever SEH stuff is needed on Windows) and
> prints a stacktrace. If this happens in a long-running process, then,
> to be blunt, tough. Unless you're telling me that the only way to
> reproduce the bug is to run the program for the same amount of time in
> near-identical conditions, then sir, you fail at being a detective.
>
> If you have a specific need for extreme safety and no sharp corners,
> use Java, or some other VM language, PHP comes to mind as well. If you
> want a systems programming language that is geared for performance,
> with modern convenience then stick around, do I have the language for
> you! Stop thinking in hypotheticals, because no language can cover
> every situation; "What if this is running in a space ship for 12 years
> and the segfault is caused by space bees?!" is not something we should
> be thinking about. If a process fails, then it fails, you try to
> figure out what happened (you do have logging on this mysterious
> program, right?) and fix it.
>
> It's not easy, but if it were easy, we'd be out of jobs.
>
> </rant>
>
> --
> James Miller

The only halting problem I see here is trying to find any logic in the above misplaced rant.

The compiler can implement non-nullable types and prevent NPE bugs with zero run-time cost by employing the type system. This is a simple concept that has nothing to do with VMs and implementations for it do exist in other languages.

Even the inventor of the null reference himself confesses that nullability was a grave mistake. [I forget his name, but I'm sure Google can find the video.]

I really wish that people would stop comparing everything to C/C++. Both are ancient pieces of obsolete technology, and the trade-offs they provide are irrelevant today. This is why we use new languages such as D.


March 06, 2012
On Tuesday, 6 March 2012 at 10:19:19 UTC, Timon Gehr wrote:

> This is quite close, but real support for non-nullable types means that they are the default and checked statically, ideally using data flow analysis.

I agree that non-nullable types should be made the default and statically checked, but data flow analysis here is redundant.
Consider:
T foo = ..; // T is not-nullable
T? bar = ..; // T? is nullable
bar = foo; // legal implicit coercion T -> T?
foo = bar; // compile-time type mismatch error
//correct way:
if (bar) { // make sure bar isn't null
  // compiler knows that cast(T)bar is safe
  foo = bar;
}

of course we can employ additional syntax sugar such as:
foo = bar || <default_value>;

furthermore:
foo.method(); // legal
bar.method(); // compile-time error

it's all easily implementable in the type system.
March 06, 2012
On Mar 6, 2012, at 3:14 AM, Don Clugston <dac@nospam.com> wrote:
> 
> Responding to traps is one of the very few examples I know of, where Windows got it completely right,
> and *nix got it absolutely completely wrong. Most notably, the hardware is *designed* for floating point traps to be fully recoverable. It makes perfect sense to catch them and continue.
> But unfortunately the *nix operating systems are completely broken in this regard and there's nothing we can do to fix them.

Does SEH allow recovery at the point of error like signals do?  Sometimes I think it would be enough if the POSIX spec were worded in a way that allowed exceptions to be thrown from signal handlers.
March 06, 2012
On Tue, 06 Mar 2012 14:22:09 +0100, bearophile <bearophileHUGS@lycos.com> wrote:

> Sandeep Datta:
>
>> I am just going to leave this here...
>>
>> *Fast Bounds Checking Using Debug Register*
>>
>> http://www.ecsl.cs.sunysb.edu/tr/TR225.pdf
>
> Is this idea usable in DMD to speed up D code compiled in non-release mode?
>
> Bye,
> bearophile

Array accesses are already bounds checked in a much more reliable way than
what the paper proposes. Furthermore, the solution in the paper needs kernel support.
March 06, 2012
On Tue, 06 Mar 2012 12:14:56 +0100, Don Clugston <dac@nospam.com> wrote:

> On 04/03/12 04:34, Walter Bright wrote:
>> On 3/3/2012 6:53 PM, Sandeep Datta wrote:
>>>> It's been there for 10 years, and turns out to be a solution looking
>>>> for a
>>>> problem.
>>>
>>> I beg to differ, the ability to catch and respond to such asynchronous
>>> exceptions is vital to the stable operation of long running software.
>>>
>>> It is not hard to see how this can be useful in programs which depend
>>> on plugins
>>> to extend functionality (e.g. IIS, Visual Studio, OS with drivers as
>>> plugins
>>> etc). A misbehaving plugin has the potential to bring down the whole
>>> house if
>>> hardware exceptions cannot be safely handled within the host
>>> application. Thus
>>> the inability of handling such exceptions undermines D's ability to
>>> support
>>> dynamically loaded modules of any kind and greatly impairs modularity.
>>>
>>> Also note hardware exceptions are not limited to segfaults there are
>>> other
>>> exceptions like division by zero, invalid operation, floating point
>>> exceptions
>>> (overflow, underflow) etc.
>>>
>>> Plus by using this approach (SEH) you can eliminate the software null
>>> checks and
>>> avoid taking a hit on performance.
>>>
>>> So in conclusion I think it will be worth our while to supply
>>> something like a
>>> NullReferenceException (and maybe NullPointerException for raw
>>> pointers) which
>>> will provide more context than a simple segfault (and that too without
>>> a core
>>> dump). Additional information may include things like a stacktrace (like
>>> Vladimir said in another post) with line numbers, file/module names
>>> etc. Please
>>> take a look at C#'s exception hierarchy for some inspiration (not that
>>> you need
>>> any but it's nice to have some consistency across languages too). I am
>>> just a
>>> beginner in D but I hope D has something like exception chaining in C#
>>> using
>>> which we can chain exceptions as we go to capture the chain of events
>>> which led
>>> to failure.
>>
>> As I said, it already does that (on Windows). There is an access
>> violation exception. Try it on windows, you'll see it.
>>
>> 1. SEH isn't portable. There's no way to make it work under non-Windows
>> systems.
>>
>> 2. Converting SEH to D exceptions is not necessary to make a stack trace
>> dump work.
>>
>> 3. Intercepting and recovering from seg faults, div by 0, etc., all
>> sounds great on paper. In practice, it is almost always wrong. The only
>> exception (!) to the rule is when sandboxing a plugin (as you
>> suggested). Making such a sandbox work is highly system specific, and
>> doesn't always fit into the D exception model (in fact, it never does
>> outside of Windows).
>
> Responding to traps is one of the very few examples I know of, where Windows got it completely right,
> and *nix got it absolutely completely wrong. Most notably, the hardware is *designed* for floating point traps to be fully recoverable. It makes perfect sense to catch them and continue.
> But unfortunately the *nix operating systems are completely broken in this regard and there's nothing we can do to fix them.
>
Yeah, it's true for FPU traps. You need signals+longjmp to handle them.
Though with SEH you shouldn't forget to fninit before continuing, or your
FPU stack might overflow.
March 06, 2012
On 03/06/2012 04:46 PM, foobar wrote:
> On Tuesday, 6 March 2012 at 10:19:19 UTC, Timon Gehr wrote:
>
>> This is quite close, but real support for non-nullable types means
>> that they are the default and checked statically, ideally using data
>> flow analysis.
>
> I agree that non-nullable types should be made the default and
> statically checked but data flow analysis here is redundant.
> consider:
> T foo = ..; // T is not-nullable
> T? bar = ..; // T? is nullable
> bar = foo; // legal implicit coercion T -> T?
> foo = bar; // compile-time type mismatch error
> //correct way:
> if (bar) { // make sure bar isn't null
> // compiler knows that cast(T)bar is safe
> foo = bar;
> }
>

Right. This example already demonstrates some simplistic data flow analysis.


> of course we can employ additional syntax sugar such as:
> foo = bar || <default_value>;
>
> furthermore:
> foo.method(); // legal
> bar.method(); // compile-time error
>
> it's all easily implementable in the type system.

Actually, it requires some thinking, because making initialization of non-null fields safe is not entirely trivial.

For example:
http://pm.inf.ethz.ch/publications/getpdf.php/bibname/Own/id/SummersMuellerTR11.pdf

CTFE and static constructors solve that issue for static data.
March 06, 2012
On Tuesday, 6 March 2012 at 15:46:54 UTC, foobar wrote:
> On Tuesday, 6 March 2012 at 10:19:19 UTC, Timon Gehr wrote:
>
>> This is quite close, but real support for non-nullable types means that they are the default and checked statically, ideally using data flow analysis.
>
> I agree that non-nullable types should be made the default and statically checked but data flow analysis here is redundant.
> consider:
> T foo = ..; // T is not-nullable
> T? bar = ..; // T? is nullable
> bar = foo; // legal implicit coercion T -> T?
> foo = bar; // compile-time type mismatch error
> //correct way:
> if (bar) { // make sure bar isn't null
>   // compiler knows that cast(T)bar is safe
>   foo = bar;
> }
>
> of course we can employ additional syntax sugar such as:
> foo = bar || <default_value>;
>
> furthermore:
> foo.method(); // legal
> bar.method(); // compile-time error
>
> it's all easily implementable in the type system.

I agree with the above and would also suggest something along the lines of:
assert (bar) { // make sure it isn't null in debug builds
   bar.method(); // legal
}

The branchy null-check would then disappear in build configurations with asserts disabled.
March 06, 2012
On 06.03.2012 8:04, Chad J wrote:
> On 03/06/2012 12:07 AM, Jonathan M Davis wrote:
>>
>> If you dereference a null pointer, there is a serious bug in your program.
>> Continuing is unwise. And if it actually goes so far as to be a segfault
>> (since the hardware caught it rather than the program), it is beyond a doubt
>> unsafe to continue. On rare occasion, it might make sense to try and recover
>> from dereferencing a null pointer, but it's like catching an AssertError. It's
>> rarely a good idea. Continuing would mean trying to recover from a logic error
>> in your program. Your program obviously already assumed that the variable
>> wasn't null, or it would have checked for null. So from the point of view of
>> your program's logic, you are by definition in an undefined state, and
>> continuing will have unexpected and potentially deadly behavior.
>>
>> - Jonathan M Davis
>
> This could be said for a lot of things: array-out-of-bounds exceptions, file-not-found exceptions, conversion exception, etc.  If the programmer thought about it, they would have checked the array length, checked for file existence before opening it, been more careful about converting things, etc.
>
It's different: with array-out-of-bounds there is no hardware detection, so it's either checked in software or unchecked (in the best case you'll get an access violation or segfault, but otherwise going past the bounds of the array leads to undefined behavior). Both file-not-found and conversion exceptions often depend on the user's input, in which case they do not necessarily mean a bug in the program.

> To me, the useful difference between fatal and non-fatal things is how well isolated the failure is.  Out of memory errors and writes into unexpected parts of memory are very bad things and can corrupt completely unrelated sections of code.  The other things I've mentioned, null-dereference included, cannot do this.
>
> Null-dereferences and such can be isolated to sections of code.  A section of code might become compromised by the dereference, but the code outside of that section is still fine and can continue working.
>
> Example:
> [...]
> And if riskyShenanigans were to modify global state... well, it's no longer so well isolated anymore.  This is just a disadvantage of global state, and it will be true with many other possible exceptions too.
>
> Long story short: I don't see how an unexpected behavior in one part of a program will necessarily create unexpected behavior in all parts of the program, especially when good encapsulation is practiced.
>
> Thoughts?
>
If riskyShenanigans nullifies a reference in the process, then it must check it before dereferencing. There's obviously a bug, and if the program leaves a proper crash log you shouldn't have problems finding and fixing it. If you don't have access to the function's source, then you cannot guarantee its safety and isolation, so recovering from the exception is unsafe.