October 20, 2005
Ben Hinkle wrote:
> "Jarrett Billingsley" <kb3ctd2@yahoo.com> wrote in message news:dj95ji$964$1@digitaldaemon.com...
> 
>>"Ben Hinkle" <bhinkle@mathworks.com> wrote in message news:dj8tdj$212$1@digitaldaemon.com...
>>
>>>I'm able to get both Visual Studio and gdb to break at seg-v's. I'm pretty sure windbg also breaks. Is the reason for wanting an assert exception to get the file and line number without having to start a debugger?
>>
>>Mostly, yes.  It's such a common mistake to try to access a null reference that it should be checked for by the debug build.
>>
>>That, and we've already done away with the need to use the debugger to find segfaults caused by invalid array indices; why not take it the rest of the way?
> 
> 
> Array bounds errors are not caught by the OS. Code silently continues when you index one past an array but trying to access null generates an immediate segv. So adding code to check for array bounds errors is needed because there is no OS support for it. 
> 
> 
I'm sorry, but this sounds a bit funny: "My OS supports segfaults". OTOH, does the D specification say that all D implementations should generate code that supports segfaults? I mean, if you use an old OS like Windows 9x, where the memory manager only has a part-time job, the OS may not halt on every segfault.
October 21, 2005
"Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:dj966n$9km$1@digitaldaemon.com...
> Array bounds errors are not caught by the OS. Code silently continues when you index one past an array but trying to access null generates an immediate segv. So adding code to check for array bounds errors is needed because there is no OS support for it.

That depends on how much memory is allocated and where.  If the array happens to end on a page boundary, indexing one past it might cause a segfault.

But this isn't really a counterargument for adding the feature I proposed.
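
For illustration, a small D sketch of the contrast being discussed (hypothetical code; whether the stray read actually faults depends on where the allocator happens to put the array):

    void main()
    {
        int[] arr = new int[4];

        // One past the end, read through the raw pointer so the bounds check
        // is bypassed: this usually just reads whatever follows the array in
        // memory and carries on, with no complaint from the OS.
        int past = arr.ptr[4];

        // A null reference, on the other hand, is trapped by the hardware/OS
        // immediately as a seg fault.
        Object o = null;
        char[] s = o.toString();
    }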


October 22, 2005
In article <dj8r60$30s1$1@digitaldaemon.com>, Jarrett Billingsley says...

>Since this might severely impact performance, it might even be implemented as a separate compiler switch which could only be used in conjunction with the -debug switch.  Then you could turn it on if you get a segfault and see if it's a null reference that's causing it.

Yes, it would be nice if we had a switch. But with today's superpipelined CPUs it's amazing to see that the condition is almost completely eliminated, which means there is no (or maybe a simple one or two clock cycle) additional overhead. I'm still using SmallEiffel, and there this technique is the default. I even deliver my final exes with this option and some more built in, and don't have any speed problems, even when some parts are heavily CPU-intensive.

>I'm really tired of stepping through code to find segfaults, and I think this would almost entirely eliminate segfaults in D except when dealing with pointers.

Yes, D is so bad at offering even simple debugging hints that I find it still not worth using now, and surely wouldn't recommend it for serious development. Instead of adding one feature after the other, Walter should focus on development support, not language features, at the moment. By the way, when do we get complete tracebacks of the stack after a pre/postcondition is violated, together with writing this to a file (and sending it back to the developer) so we can get the information even if the program is executed on a customer's computer?
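
A rough interim workaround for the logging part (a sketch only; runApp is a hypothetical stand-in for the real program, the catch relies on everything thrown deriving from Object, and the message is just the assert's file and line, not a full stack trace):

    import std.file;

    void runApp(char[][] args)
    {
        // stand-in for the real application; a failing precondition ends up here
        assert(args.length > 1);
    }

    int main(char[][] args)
    {
        try
        {
            runApp(args);
            return 0;
        }
        catch (Object e)   // AssertError and everything else derive from Object
        {
            append("crash.log", e.toString() ~ "\n");
            return 1;
        }
    }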




October 22, 2005
I think putting it under -debug, which is only meant for debug {} sections, would be a mistake.

-[Unknown]


> Jarrett Billingsley wrote:
> 
>> We already have array bounds checking in the debug build, accomplished through a sort of implicit assert, such that
>>
>> int x = array[index];
>>
>> Becomes
>>
>> assert(index >= 0 && index < array.length); int x = array[index];
>>
>> This is to prevent (mostly) the irritating off-by-one errors so common in array handling but which cause a vague memory access violation in nonprotected languages such as C++.  This is a very helpful debugging feature.
>>
>> I think that a logical extension of this would be to check for null references in debug builds as well.  How many times has trying to call a method of a null reference bitten you?  I propose that this:
>>
>> obj.doSomething();
>>
>> Would be implicitly compiled as
>>
>> assert(obj !is null); obj.doSomething();
>>
>> This would severely cut down on the amount of "find the segfault" scavenger hunts that are the bane of any programmer's existence.
>>
>> Since this might severely impact performance, it might even be implemented as a separate compiler switch which could only be used in conjunction with the -debug switch.  Then you could turn it on if you get a segfault and see if it's a null reference that's causing it.
>>
>> I'm really tired of stepping through code to find segfaults, and I think this would almost entirely eliminate segfaults in D except when dealing with pointers.
>>
> 
> This would definitely be useful. I've noticed that ~70% of my programming errors are weird segfaults caused by a few nulls in the wrong place.
> 
> It shouldn't cause too much trouble to include this functionality under the -debug flag. The only downside would be slightly decreased performance. I don't care - why should we care about the speed of debug-mode code in the first place? IMHO only asymptotic time complexity matters.
> 
> BTW. Has Walter any plans to finally implement the compile-time unit tests?
October 23, 2005
Unknown W. Brackets wrote:
> I think making it under -debug, which is only for debug {} sections, would be a mistake.

Probably yes. IMO it should be turned on by default. Only -release and optimized versions should turn it off.
October 25, 2005
"Jarrett Billingsley" <kb3ctd2@yahoo.com> wrote in message news:dj8r60$30s1$1@digitaldaemon.com...
> I think that a logical extension of this would be to check for null references in debug builds as well.  How many times has trying to call a method of a null reference bitten you?  I propose that this:
>
> obj.doSomething();
>
> Would be implicitly compiled as
>
> assert(obj !is null); obj.doSomething();

But the hardware already does this for you.

> This would severely cut down on the amount of "find the segfault" scavenger hunts that are the bane of any programmer's existence.
>
> Since this might severely impact performance, it might even be implemented as a separate compiler switch which could only be used in conjunction with the -debug switch.  Then you could turn it on if you get a segfault and see if it's a null reference that's causing it.
>
> I'm really tired of stepping through code to find segfaults, and I think this would almost entirely eliminate segfaults in D except when dealing with pointers.

All you need to do is compile with debug on (-g) and run the program under the debugger. When the segfault happens, the debugger will put you right on the line that failed.
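
A minimal example of that workflow (the class and method names here are made up; the commands assume dmd and gdb on Linux, use windbg or Visual Studio on Windows):

    // Build and run:
    //   dmd -g crash.d    (compile with symbolic debug info)
    //   gdb ./crash       (then type "run"; gdb stops on the seg fault,
    //                      right on the line below that dereferences null)

    class Foo
    {
        void doSomething() { }
    }

    void main()
    {
        Foo obj;            // never assigned, so it is null
        obj.doSomething();  // seg fault here
    }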


October 25, 2005
"Jarrett Billingsley" <kb3ctd2@yahoo.com> wrote in message news:dj95ji$964$1@digitaldaemon.com...
> That, and we've already done away with the need to use the debugger to find segfaults caused by invalid array indices; why not take it the rest of the way?

The debugger normally will not find array overruns, because they normally won't cause a seg fault. They'll just cause data corruption, which may or may not eventually result in erratic behavior.
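
A small sketch of that silent corruption (hypothetical code; exactly what gets clobbered depends on the stack layout):

    void main()
    {
        int[3] a;
        int neighbour = 42;

        // Off-by-one: writes a[0] .. a[3], one element past the end.  In a
        // -release build (bounds checks removed) this silently scribbles over
        // whatever sits next to `a` on the stack, quite possibly `neighbour`,
        // and the program keeps running on corrupted data.  In a debug build
        // the same loop throws an array bounds error at i == 3 instead.
        for (int i = 0; i <= 3; i++)
            a[i] = 0;
    }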


October 25, 2005
"Jari-Matti Mäkelä" <jmjmak@invalid_utu.fi> wrote in message news:dj98ii$bql$1@digitaldaemon.com...
> Ben Hinkle wrote:
> > Array bounds errors are not caught by the OS. Code silently continues when you index one past an array but trying to access null generates an immediate segv. So adding code to check for array bounds errors is needed because there is no OS support for it.
> I'm sorry, but this sounds a bit funny: "My OS supports segfaults".

It's not quite true: seg faults are generated by the hardware, not the OS. I do not know of any 32 bit system that doesn't operate in 'protected mode', where seg faults happen in hardware and are supported by the OS.

I agree with you that your proposal would be very useful for a real mode operating system like DOS where there are no hardware seg faults or OS support for them, but DOS is dead and 16 bit systems are not supported by D.

> OTOH
> does the D specification say that all D implementations should generate
> code that supports segfaults. I mean if you use an old OS like Windows
> 9x where memory manager only has a part-time job, the OS may not halt on
> every segfault.

If you're running a 32 bit app under Win9x, you *will* get seg faults if you dereference NULL, 100% of the time.


October 25, 2005
Walter Bright wrote:
>>OTOH
>>does the D specification say that all D implementations should generate
>>code that supports segfaults. I mean if you use an old OS like Windows
>>9x where memory manager only has a part-time job, the OS may not halt on
>>every segfault.
> 
> 
> If you're running a 32 bit app under Win9x, you *will* get seg faults if you
> dereference NULL, 100% of the time.
> 
> 

Ok, Win9x does support segfaults in a way. Still, I don't find that information very helpful since it's so easy to hang the whole system with some tiny errors. Several years ago I had to stop using Windows 98 since it didn't always tell me (at least immediately) that I was running some buggy code (illegal memory pointers). Linux does a much better job of memory handling.
October 25, 2005
In article <djlu53$agr$6@digitaldaemon.com>, Walter Bright says...
>
>
>I agree with you that your proposal would be very useful for a real mode operating system like DOS where there are no hardware seg faults or OS support for them, but DOS is dead and 16 bit systems are not supported by D.
>
>
>If you're running a 32 bit app under Win9x, you *will* get seg faults if you dereference NULL, 100% of the time.
>
>

I think the point isn't to detect segfaults but to eliminate the need to use a debugger to find the code that generated them. Alternatively, the program could catch segfaults with its own interrupt handler and print out a line number or some such.
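
A sketch of that alternative (assuming a POSIX-style C runtime; the C declarations are written out by hand here rather than pulled from a particular Phobos module, the SIGSEGV value is the Linux/x86 one, and a real handler would report a location instead of a fixed message):

    extern (C)
    {
        alias void function(int) sighandler_t;
        sighandler_t signal(int signum, sighandler_t handler);
        int printf(char* format, ...);
        void _exit(int status);
    }

    const int SIGSEGV = 11;  // Linux/x86 value; differs on other platforms

    extern (C) void onSegfault(int sig)
    {
        // printf is not strictly async-signal-safe, but for a crash report
        // on the way out it is usually good enough.
        printf("caught SIGSEGV: null reference or wild pointer\n");
        _exit(1);
    }

    void main()
    {
        signal(SIGSEGV, &onSegfault);

        Object o = null;
        o.toString();   // would normally just seg fault; now the handler runs
    }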