January 07, 2014
On 1/6/2014 7:20 PM, deadalnix wrote:
> On Tuesday, 7 January 2014 at 03:18:01 UTC, Walter Bright wrote:
>> Or you could amend the documentation to say that null checks will not be
>> removed even if they occur after a dereference.
> Which won't be true with LDC and GDC.

You're assuming that LDC and GDC are stuck with C semantics.
January 07, 2014
On Tuesday, 7 January 2014 at 04:37:12 UTC, Walter Bright wrote:
> On 1/6/2014 7:20 PM, deadalnix wrote:
>> On Tuesday, 7 January 2014 at 03:18:01 UTC, Walter Bright wrote:
>>> Or you could amend the documentation to say that null checks will not be
>>> removed even if they occur after a dereference.
>> Which won't be true with LDC and GDC.
>
> You're assuming that LDC and GDC are stuck with C semantics.

Unless we plan to rewrite our own optimizer, they are to some extent.
January 07, 2014
On 1/6/2014 8:55 PM, deadalnix wrote:
> On Tuesday, 7 January 2014 at 04:37:12 UTC, Walter Bright wrote:
>> On 1/6/2014 7:20 PM, deadalnix wrote:
>>> On Tuesday, 7 January 2014 at 03:18:01 UTC, Walter Bright wrote:
>>>> Or you could amend the documentation to say that null checks will not be
>>>> removed even if they occur after a dereference.
>>> Which won't be true with LDC and GDC.
>>
>> You're assuming that LDC and GDC are stuck with C semantics.
>
> Unless we plan to rewrite our own optimizer, they are to some extent.

I don't buy that. The back ends are built to compile multiple languages, hence they'll have multiple sets of requirements to contend with.
January 07, 2014
On 7 January 2014 06:03, Walter Bright <newshound2@digitalmars.com> wrote:
> On 1/6/2014 8:55 PM, deadalnix wrote:
>>
>> On Tuesday, 7 January 2014 at 04:37:12 UTC, Walter Bright wrote:
>>>
>>> On 1/6/2014 7:20 PM, deadalnix wrote:
>>>>
>>>> On Tuesday, 7 January 2014 at 03:18:01 UTC, Walter Bright wrote:
>>>>>
>>>>> Or you could amend the documentation to say that null checks will not
>>>>> be
>>>>> removed even if they occur after a dereference.
>>>>
>>>> Which won't be true with LDC and GDC.
>>>
>>>
>>> You're assuming that LDC and GDC are stuck with C semantics.
>>
>>
>> Unless we plan to rewrite our own optimizer, they are to some extent.
>
>
> I don't buy that. The back ends are built to compile multiple languages, hence they'll have multiple sets of requirements to contend with.

Half and half.  In GCC, though the default is to follow C semantics, the front-end language is allowed to overrule the optimiser with its own semantics at certain stages of the compilation.
January 07, 2014
On Tuesday, 7 January 2014 at 06:03:55 UTC, Walter Bright wrote:
> On 1/6/2014 8:55 PM, deadalnix wrote:
>> On Tuesday, 7 January 2014 at 04:37:12 UTC, Walter Bright wrote:
>>> On 1/6/2014 7:20 PM, deadalnix wrote:
>>>> On Tuesday, 7 January 2014 at 03:18:01 UTC, Walter Bright wrote:
>>>>> Or you could amend the documentation to say that null checks will not be
>>>>> removed even if they occur after a dereference.
>>>> Which won't be true with LDC and GDC.
>>>
>>> You're assuming that LDC and GDC are stuck with C semantics.
>>
>> Unless we plan to rewrite our own optimizer, they are to some extent.
>
> I don't buy that. The back ends are built to compile multiple languages, hence they'll have multiple sets of requirements to contend with.

Another case where D is "inherently faster" than C? ;-)
January 07, 2014
On Monday, 6 January 2014 at 23:13:14 UTC, Walter Bright wrote:
> On 1/6/2014 3:02 PM, Alex Burton wrote:
>> All the others should result in an exception at some point.
>> Exceptions allow stack unwinding, which allows people to write code that doesn't
>> leave things in undefined states in the event of an exception.
>
> Hardware exceptions allow for the same thing.

I am not sure what you mean by the above.
To be clear: the program below does not unwind, at least on Linux. Same result with dmd or gdc: Segmentation fault (core dumped).
When I see this from a piece of software I think: ABI problem, or amateur programmer?

import std.stdio;

void main()
{
    class Foo
    {
        void bar() {}
    }

    try
    {
        Foo f;       // f is null: no instance was ever created
        f.bar();     // dereferences null
    }
    catch
    {
        writefln("Sorry, something went wrong");
    }
}

In my code the vast majority of the references to classes can be relied on to point to an instance of the class.
Where it is optional for a reference to be valid, I am happy to explicitly state that with a new type like Optional!Foo f or Nullable!Foo f;

The philosophy of D that you have applied in other areas says that designs are chosen so that code is correct, common mistakes are prevented, and unwanted features inherited from C are discarded.

In my view it would be consistent to make class references difficult to leave null, or to set to null, by default. I am sure you could still cast a null in there if you tried, but the default, natural syntax should not allow it.

In code where this change would cause a compiler error, my experience is that the code is fragile and prone to bugs anyway. So, without a counter-example, I think the worst that could happen if D changed in this way is that people would fix their code and probably find some latent bugs they were not aware of.

Pointers to structs would still be valuable for interfacing with C libraries and for implementing efficient data structures, but the high-level, day-to-day code of the average user, where objects are classes by default, would benefit from having the compiler prevent null class references.

January 07, 2014
On Tuesday, 7 January 2014 at 11:29:18 UTC, alex burton wrote:
>>
>> Hardware exceptions allow for the same thing.
>
> I am not sure what you mean by the above.

You can trap the segfault and access a OS-specific data structure which tells you where it happened, then recover if the runtime supports it.
January 07, 2014
On Tuesday, 7 January 2014 at 11:36:50 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 7 January 2014 at 11:29:18 UTC, alex burton wrote:
>>>
>>> Hardware exceptions allow for the same thing.
>>
>> I am not sure what you mean by the above.
>
> You can trap the segfault and access a OS-specific data structure which tells you where it happened, then recover if the runtime supports it.

Thanks for this.

I tested the same code on Windows and it appears that you can catch exceptions of unknown type using catch with no exception variable. The stack is unwound properly and scope(exit) calls work as expected etc.

After reading about signal handling on unix and structured exception handling on Windows, it sounds possible, though difficult, to implement a similar system on unix: introduce an exception by trapping the seg fault signal, reading the data structure you mention, and then using assembler jump instructions to jump into the exception mechanism.

So I take Walter's statement to mean that:
hardware exceptions (AKA non-software exceptions / SEH on Windows) fix the problem, provided programmers have put catch-unknown-exception statements after their normal catch statements in the appropriate places.
And that a seg fault should result in an exception on Linux too; it just happens that this is not yet implemented, which is why we just get the signal and a crash.
January 07, 2014
On Tuesday, 7 January 2014 at 12:51:51 UTC, alex burton wrote:
> After reading about signal handling in unix and structured exception handling on Windows, it sounds possible though difficult to implement a similar system on unix to introduce an exception by trapping the seg fault signal, reading the data structure you mention and then using assembler jump instructions to jump into the exception mechanism.

If you are on linux and add this file to your project:
dmd2/src/druntime/import/etc/linux/memoryerror.d

(it is part of the regular dmd zip)

You might have to import it and call registerMemoryErrorHandler(), but then it will do the magic tricks to turn a segfault into a D exception.

But this is a bit unreliable so it isn't in any default build.