June 19, 2007
Reply to Don,
> 
> BTW, the same 'it segfaults anyway' argument could be used to some
> extent for array bounds checking.
> 


char[] buff = new char[10];

buff[$] = 'c'; // one past the end, and (I think) more often than not it won't AV


June 20, 2007
Don Clugston Wrote:

> Walter Bright wrote:
> > Georg Wrede wrote:
> >> Walter Bright wrote:
> >>> Kristian Kilpi wrote:
> >>>
> >>>> The problem is that
> >>>>
> >>>>   assert(obj);
> >>>>
> >>>> does not first check if 'obj' is null.
> >>>
> >>>
> >>> Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.
> >>
> >> Asserts were INVENTED to *avoid segfaults*.
> > 
> > I don't know when asserts first appeared. But I first encountered them in the '80s, when the most popular machine for programming was the x86. The x86 had no hardware protection. When you wrote through a NULL pointer, you scrambled the operating system, and all kinds of terrible, unpredictable things ensued. Asserts were used a lot to try and head off these problems.
> > 
> > Enter the 286. What a godsend it was to develop in protected mode, when if you accessed a NULL pointer you got a seg fault instead of a scrambled system. Nirvana! What was even better, was the debugger would pop you right to where the problem was. It's not only asserts done in hardware, it's asserts with:
> > 
> > 1) zero code size cost
> > 2) zero runtime cost
> > 3) they're there for every pointer dereference
> > 4) they work with the debugger to let you know exactly where the problem is
> > 
> > Seg faults are not an evil thing, they're there to help you. In fact, I'll often *deliberately* code them in so the debugger will pop up when it hits them.
> 
> True, but forgetting to 'new' a class is an extremely common mistake. The first time I ever used classes in D, I didn't 'new' one (I bet this will happen to almost everyone from a C++ background!). Getting an AV with no line number is pretty off-putting. This remains the #1 situation where I use a debugger. And I hate using debuggers to find silly typos. Getting an assert failure with a line number would be enormously more productive.
> 
> BTW, the same 'it segfaults anyway' argument could be used to some extent for array bounds checking.
No, writes out of array bounds can overrun other variables in your program instead of segfaulting, so I don't think it's a good comparison.
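To illustrate, a minimal D sketch (the layout is hypothetical; where the stray byte actually lands depends on the compiler's stack frame):

void main()
{
    char[10] buff;     // fixed-size array on the stack
    int neighbor = 42; // another local, plausibly adjacent in the frame

    // .ptr bypasses the bounds check, just as -release would for buff[10]:
    buff.ptr[10] = 'X'; // one past the end -- no segfault, just corruption

    // neighbor may now silently hold garbage, yet execution continues
}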

I wholeheartedly agree with the line-number complaint; I can't imagine debugging at anything other than source level in this day and age. Though it's always fun to see how well the optimizer handles 8-10 levels of inlining in template methods/classes in my C++ code :) I guess I'm spoiled by MSVC 8, which generates line number info even in release builds.


June 21, 2007
Having only discovered that D even exists about a month ago, I've been lurking until now, but have to respond to this one:

"Deewiant" <deewiant.doesnotlike.spam@gmail.com> wrote in message news:f51ebj$15e1$1@digitalmars.com...

>>> for the novice, where the debugger is a scary place of last resort
...
>>> would be more useful to have an assert
...
>>> all most of us want is a line number

> stubborn professionals don't use debuggers
...
> they're happy with their printf debugging
...
> happier using a debugger, at least in some cases

Speaking as a "stubborn professional" who is "happy with my printf debugging" and happier to use a debugger "at least in some cases" (:P), I would say this:

To the novice, a debugger is indeed a scary place of last resort; however, to the professional the debugger can also be a place of excessive effort that can often be shown to be unnecessary. Thus a debugger is still a last resort, but with "scary" replaced by "overcomplicated"; you get the idea :)

The object of any commercial exercise is to deal with a problem successfully for as little time/work/cost as possible. This goes double when working in a very-tight-deadlines sector (as I have for most of my career). The policy of "zealously use a debugger *in all* circumstances" **fails** this test, because in many, many cases "printf debugging" and line numbers are sufficient. IMO, zealous use of debuggers is potentially as bad as zealous non-use of debuggers; the need for a debugger depends on the job and on how obvious (or not) the problem's location is.

I'd suggest this as an analogy: debuggers can be viewed as being to programming what in-patient invasive surgery is to medicine: the more operations that can be performed with a local, an endoscope and at an out-patient clinic, the better. Save "cut him open and get your hands covered in blood going right inside his guts" surgery for when it is *absolutely* necessary.

Where *unexpected* failures are concerned, it can, as more than one person has already touched on, also depend on practicalities/legalities *over which you have no control* or on ephemeral data such as that from a temperature sensor. In these cases, knowing *exactly* where the program failed, *the very first time it did so*, is essential. Asserts are a good way to do this where a physical possibility of failure exists but is unlikely (you hope!) to be triggered.

I'm sure we've all written things like:

    case 3:
        do_something();
        break;
    default:
        // Should be impossible, but at least trap it if so.
        // assert reports the file and line itself, and std.string.format
        // folds the offending values into the message:
        assert(y <= 6 || x >= 5,
               format("X<5 but Y>6: x=%s y=%s", x, y));
        break;
    }

It's annoying to have to assert() that the object exists first, but better than having to tell your client's overnight low-tech personnel in a country 3,000 miles away how to compile a suite whose source code they do not have, with a compiler they do not own, and then run it under a debugger they do not understand, on data whose trigger conditions no longer exist, all over the phone at 3am your time.
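To make that concrete, a minimal sketch (with a hypothetical Foo class): asserting the reference first fails with a file and line number, whereas plain assert(obj) evaluates the class invariant through a null reference and dies with a bare AV.

class Foo { void go() { } }

void main()
{
    Foo obj; // forgot to 'new' it -- the mistake from earlier in the thread
    assert(obj !is null, "obj was never constructed"); // fails with file/line
    obj.go(); // without the check above, this line would just AV
}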


June 21, 2007
Walter Bright, el 15 de junio a las 21:33 me escribiste:
> Georg Wrede wrote:
> >Walter Bright wrote:
> >>Kristian Kilpi wrote:
> >>
> >>>The problem is that
> >>>
> >>>  assert(obj);
> >>>
> >>>does not first check if 'obj' is null.
> >>
> >>
> >>Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.
> >Asserts were INVENTED to *avoid segfaults*.
> 
> I don't know when asserts first appeared. But I first encountered them in the '80s, when the most popular machine for programming was the x86. The x86 had no hardware protection. When you wrote through a NULL pointer, you scrambled the operating system, and all kinds of terrible, unpredictable things ensued. Asserts were used a lot to try and head off these problems.
> 
> Enter the 286. What a godsend it was to develop in protected mode, when if you accessed a NULL pointer you got a seg fault instead of a scrambled system. Nirvana! What was even better, was the debugger would pop you right to where the problem was. It's not only asserts done in hardware, it's asserts with:
> 
> 1) zero code size cost
> 2) zero runtime cost
> 3) they're there for every pointer dereference
> 4) they work with the debugger to let you know exactly where the problem is
> 
> Seg faults are not an evil thing, they're there to help you. In fact, I'll often *deliberately* code them in so the debugger will pop up when it hits them.

OTOH, wasn't assert included as a language construct so that it can throw an exception, giving the user the option to continue execution and try to fix the situation?

I find this very useful for applications where high availability is crucial; I can always catch any exception (even an assertion failure) and try to continue working.

Sure, you can handle a segfault with a signal handler, but the handling code must be separated from the point of failure, making it almost impossible to take a meaningful action.
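A minimal sketch of that recovery pattern, assuming a present-day D runtime where assertion failures throw core.exception.AssertError (in 2007-era D1 the class lived in std.asserterror); note the spec discourages continuing after an Error, so this is purely illustrative:

import core.exception : AssertError;
import std.stdio : writeln;

void risky(int x)
{
    assert(x != 0, "x must be non-zero");
    writeln(100 / x);
}

void main()
{
    try
        risky(0);
    catch (AssertError e)  // caught at a point of our choosing, unlike a signal
        writeln("recovered from: ", e.msg);
    writeln("still running"); // high-availability code carries on
}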

-- 
LUCA - Leandro Lucarella - Usando Debian GNU/Linux Sid - GNU Generation
------------------------------------------------------------------------
E-Mail / JID:     luca@lugmen.org.ar
GPG Fingerprint:  D9E1 4545 0F4B 7928 E82C  375D 4B02 0FE0 B08B 4FB2
GPG Key:          gpg --keyserver pks.lugmen.org.ar --recv-keys B08B4FB2
------------------------------------------------------------------------
Peperino teaches us that we must offer ourselves up with offerings of wine
if we want to obtain the reward of the middle part of the void.
	-- Peperino Pómoro

June 21, 2007
Martin Howe wrote:
> To the novice, a debugger is indeed a scary place of last resort; however, to the professional the debugger can also be a place of excessive effort that can often be shown to be unnecessary; thus a debugger is stll a last resort, but with "scary" replaced with "over complicated"; you get the idea :)

To a novice, many things might be scary, including D itself, so I won't argue there.
To an experienced programmer, a debugger should be a tool that saves you from multiple add-printf/recompile/restart cycles by letting you intercept the program when it crashes and inspect the stack and heap instantly. It is there to complement asserts and exceptions, not to replace them.
That surgery analogy doesn't quite work here, since printfs are even more invasive: you're basically implanting a piece of a debugger into the organism.

> It's annoying to have to assert() that the object exists first, but better than having to tell your client's overnight low-tech personnel in country 3,000 miles away how to compile a suite whose source code they do not have, with a compiler they do not own and then run with the debugger that they do not understand on data whose trigger conditions no longer exist, all over the phone at 3am your time.

And that's what memory dumps and post-mortem debugging are for: the client sends you the dump and you can debug the exact instance of the crash.

June 21, 2007
Leandro Lucarella wrote:
> Sure you can handle a segfault with a signal handler, but the handling
> code must be separated from the point of failure, making almost impossible
> to take a meaningful action.

You can also catch an AV with an exception handler.

June 22, 2007
"Jascha Wetzel" <firstname@mainia.de> wrote in message news:f5ecms$2ejq$1@digitalmars.com...
> ........

I won't argue with the rest, but
> that surgery analogy doesn't quite work here
Well, FWIW, IMO it does -- the printf is endoscope surgery... your actions are targeted at the area you know is the likely source of the trouble and take minimal effort to implement; thus, for simple cases, it's quicker than the alternative. I use a debugger whenever a program crashes for **no obvious reason**, because a debugger is the ONLY way to avoid multiple edit/printf/run cycles in such cases.

I must admit, the *routine* use of a debugger sounds like something that, with a bit of discipline, might be worth adopting; it still feels like overkill, but then I guess I just haven't worked in a sector where impenetrable errors are common daily occurrences.

> and that's what memory dumps and post-mortem debugging is for.
> the client sends you the dump and you can debug the exact instance of the
> crash.
Now that *is* scary; the sort of thing one expects of the top 5% of bleeding-edge expert programmers (which I freely admit I am not); it certainly wasn't covered even by 3rd-year undergrad stuff. Any good web references you can point me to?


June 22, 2007
Martin Howe wrote:
> Well, FWIW, IMO it does -- the printf is endoscope surgery...

Oh, OK, sorry - I (mis)understood it the other way around.

> I must admit, the *routine* use of debugger sounds like something that with a bit of discipline might be worth adopting; it still feels like overkill, but then I guess I just haven't worked in a sector where impenetrable errors are common daily occurences.

For native executables, some information that VMs give you out of the box, namely stack traces, is provided only by the debugger (or other tools). It's therefore plausible that debuggers are more commonly used in native programming.

>> and that's what memory dumps and post-mortem debugging is for.
>> the client sends you the dump and you can debug the exact instance of the crash.
> Now that *is* scary; the sort of things one expects the top 5% of bleeding-edge expert programmers (which I freely admit to not being one of) to do; certainly wasn't covered by even 3rd-year undergrad stuff; any good web references you can point me to?

Actually, it's less scary for the client than helping the developer find the problem using log information or Java stack traces. In most cases it's also easier than trying to describe how to reproduce the problem.

Every WinXP user knows the "bla.exe has encountered a problem and needs to close" dialog with its "Send Error Report" and "Don't Send" buttons. What it does is create a minidump and send it to MS's crash statistics server. Instead of letting the default dialog pop up, you can replace it with your own, or simply save the dump to a file.
Here is more info on that:
http://www.codeproject.com/debug/postmortemdebug_standalone1.asp
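For a rough idea of the mechanism in D (Windows-only; the dbghelp declarations below are hand-written assumptions for illustration rather than an existing binding, real code would link against dbghelp.lib and add error handling):

version (Windows)
{
    import core.sys.windows.windows;

    // Hand-declared subset of dbghelp.h -- an assumption, check the SDK.
    enum MINIDUMP_TYPE : int { MiniDumpNormal = 0 }
    extern (Windows) BOOL MiniDumpWriteDump(
        HANDLE process, DWORD processId, HANDLE file, MINIDUMP_TYPE type,
        void* exceptionParam, void* userStreamParam, void* callbackParam);

    extern (Windows) int crashFilter(EXCEPTION_POINTERS* info)
    {
        // Save a minidump instead of showing the default dialog. A real
        // handler would also fill a MINIDUMP_EXCEPTION_INFORMATION struct
        // from 'info' so the dump records the faulting context.
        auto file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, null,
                                CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, null);
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(),
                          file, MINIDUMP_TYPE.MiniDumpNormal, null, null, null);
        CloseHandle(file);
        return EXCEPTION_EXECUTE_HANDLER; // terminate without the dialog
    }

    void installCrashHandler()
    {
        SetUnhandledExceptionFilter(&crashFilter);
    }
}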

As of now, D applications catch all exceptions and print their message, so no D application will trigger that crash handler. Upcoming releases of Ddbg will allow you to deal with minidumps properly, though.