November 09, 2012 Re: [dmd-internals] Asserts
Posted in reply to David Held

On 11/9/2012 10:41 PM, David Held wrote:
> Also, the compiler is only deterministic because it isn't yet multi-threaded. That doesn't mean Walter hasn't attempted to make it such on more than one occasion. If the compiler had more immutable data structures, this would probably be an easier effort. ;)

There is some async code in there. If I suspect a problem with it, I've left in the single-threaded logic, and I switch to that in order to make it deterministic.

It is somewhat annoying that modern Windows will start programs at a different address each time, which makes pointer values from one run to the next not the same.

> Finally, what the debugger cannot do is provide you with a history of what happened, except insofar as you are willing to manually capture the state change of various memory locations as you step through the program. Although I tend to use debuggers as the tool of last resort, I will still insist that logs are even more powerful when used with the debugger, because they can often give you a good idea where you should set breakpoints, avoiding having to step through large portions of the code which aren't relevant to the problem at hand (after all, not all bugs are as obvious as a segfault/AV).

Actually, very, very few bugs manifest themselves as seg faults. I mentioned before that I regard the emphasis on NULL pointers as wildly excessive.
November 10, 2012 Re: [dmd-internals] Asserts
Posted in reply to Walter Bright
On Sat, Nov 10, 2012 at 6:44 PM, Walter Bright <walter@digitalmars.com> wrote:
> It is somewhat annoying that modern Windows will start programs at a different address each time, which makes pointer values from one run to the next not the same.
You can always turn ASLR off. On a semi-related note, do you know why optlink seems to emit base relocations by default (for exes)?
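
A few concrete ways to do that, for reference (platform- and toolchain-dependent; exact flags vary by version):

    # gdb already disables address space randomization for the program
    # it launches; the behavior can also be toggled explicitly:
    (gdb) set disable-randomization on

    # On Linux, outside a debugger (setarch is part of util-linux):
    setarch $(uname -m) -R dmd foo.d

    # On Windows, clear the executable's ASLR opt-in flag
    # (editbin ships with the MSVC toolchain):
    editbin /DYNAMICBASE:NO dmd.exe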
November 10, 2012 Re: [dmd-internals] Asserts
Posted in reply to Daniel Murphy
On 11/10/2012 12:22 AM, Daniel Murphy wrote:
> On Sat, Nov 10, 2012 at 6:44 PM, Walter Bright <walter@digitalmars.com> wrote:
>> It is somewhat annoying that modern Windows will start programs at a different address each time, which makes pointer values from one run to the next not the same.
>
> You can always turn ASLR off. On a semi-related note, do you know why optlink seems to emit base relocations by default (for exes)?
No, I don't know why.
November 10, 2012 Re: [dmd-internals] Asserts
Posted in reply to Jonathan M Davis
Some compilers may be deterministic; dmd isn't. I get

    dmd: ../ztc/aa.c:423: void AArray::rehash_x(aaA*, aaA**, size_t): Assertion `0' failed.

completely randomly these days.
November 10, 2012 Re: [dmd-internals] Asserts
Posted in reply to Walter Bright

On 11/9/2012 11:38 PM, Walter Bright wrote:
> [...]
> I'll often use printf because although the debugger knows types, it rarely shows the values in a form I particularly need to track down a problem, which tends to be different every time. And besides, throwing in a few printfs is fast and easy, whereas setting breakpoints and stepping through a program is an awfully tedious process. Or maybe I never learned to use a debugger properly, which is possible since I've been forced to debug systems where no debugger whatsoever was available - I've even debugged programs using an oscilloscope, making clicks on a speaker, blinking an LED, whatever is available.

You're making my point for me, Walter! I have seen some people whiz through the debugger like they live in it, but I would say that level of familiarity tends to be the exception, rather than the rule. And it always makes me a little uncomfortable when I see it (why would someone *need* to be that proficient with the debugger...?). Firing up the debugger, for many people, is a relatively expensive process, because it isn't something that good programmers should be doing very often (unless you subscribe to the school which says that you should always step through new code in the debugger... consider this an alternative to writing unit tests).

> Note that getting a call stack for a seg fault does not suffer from these problems. I just:
>
>     gdb --args dmd foo.d
>
> and whammo, I got my stack trace, complete with files and line numbers.

There are two issues here: 1) bugs which don't manifest as a segfault, and 2) bugs in which a segfault is the manifestation, but the root cause is far away (i.e., not even in the call stack). I will say more on this below.

> [...]
>> Especially when there may be hundreds of instances running, while only a few actually experience a problem, logging usually turns out to be the better choice. Then consider that logging is also more useful for bug reporting, as well as visualizing the code flow even in non-error cases.
>
> Sure, but that doesn't apply to dmd. What's best practice for one kind of program isn't for another.

There are many times when a command-line program offers logging of some sort which has helped me identify a problem (often a configuration error on my part). Some obvious examples are command shell scripts (which, by default, simply tell you everything they are doing... both annoying and useful) and makefiles (large build systems with hundreds of makefiles almost always require a verbose mode to help debug a badly written makefile).

Also, note that when I am debugging a service, I am usually using it in a style which is equivalent to dmd. That is, I get a repro case, I send it in to a standalone instance, and I look at the response and the logs. This is really no different from invoking dmd on a repro case. Even in this scenario, logs are incredibly useful because they tell me the approximate location where something went wrong. Sometimes this is enough to go look in the source and spot the error; other times, I have to attach a debugger. But even when I have to go to the debugger, the logs let me skip 90% of the single-stepping I might otherwise have to do (because they tell me where things *probably worked correctly*).

> [...]
> I've tried that (see the LOG macros in template.c). It doesn't work very well, because the logging data tends to be far too voluminous. I like to tailor it to each specific problem. It's faster for me, and works.

The problem is not that a logging system doesn't work very well, but that a logging system without a configuration system is not first-class, and *that* is what doesn't work very well. If you had something like log4j available, you would be able to tailor the output to something manageable. An all-or-nothing log is definitely too much data when you turn it on.
On 11/9/2012 11:44 PM, Walter Bright wrote:
> [...]
> There is some async code in there. If I suspect a problem with it, I've left in the single thread logic, and switch to that in order to make it deterministic.

But that doesn't tell you what the problem is. It just lets you escape to something functional by giving up on the parallelism. Logs at least tell you the running state in the parallel case, which is often enough to guess at what is wrong. Trying to find a synchronization bug in parallel code is pretty darned difficult in a debugger (for what I hope are obvious reasons).

> [...]
> Actually, very very few bugs manifest themselves as seg faults. I mentioned before that I regard the emphasis on NULL pointers to be wildly excessive.

I would like to define a metric, which I call "bug depth". Suppose that incorrect program behavior is noticed, and the bad behavior is associated with some symbol, S. Now, it could be that there is a problem with the immediate computation of S, whatever that might be (I mean, like in the same lexical scope). Or it could be that S is merely a victim of a bad computation somewhere else (i.e., the computation of S received a bad input from some other computation). Let us call the bad input S'. Now, it again may be the case that S' is a first-order bad actor, or that it is the victim of a bug earlier in the computation, say, from S''. Let us call the root-cause symbol R. Now, there is some trail of dependencies from R to S which explains the manifestation of the bug. And let us call the number of references which must be followed from S to R the "bug depth".

Now that we have this metric, we can talk about "shallow" bugs and "deep" bugs. When a segfault is caused by code immediately surrounding the bad symbol, we can say that the bug causing the segfault is "shallow". And when it is caused by a problem, say, 5 function calls away, in non-trivial functions, it is probably fair to say that the bug is "deep".
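
A tiny illustration of the metric, with invented names - the depth from S back to R here is 2:

    int scale = readScale(config);    // R: returns 0 for a malformed entry
    int width = baseWidth * scale;    // S': silently computes 0
    drawBar(width);                   // S: the failure finally surfaces here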
In my experience, shallow bugs are usually simple mistakes. A programmer failed to check a boundary condition due to laziness, they used the wrong operator, they transposed some symbols, they re-used a variable they shouldn't have, etc. And you know they are simple mistakes when you can show the offending code to any programmer (including ones who don't know the context), and they can spot the bug. These kinds of bugs are easy to identify and fix.

The real problem is when you look at the code where something is failing, and there is no obvious explanation for the failure. Ok, maybe being able to see the state a few frames up the stack will expose the root cause. When this happens, happy day! It's not the shallowest bug, but the stack is the next easiest context in which to look for root causes. The worst kinds of bugs happen when *everyone thinks they did the right thing*, and what really happened is that two coders disagreed on some program invariant. This is the kind of bug which tends to take the longest to figure out, because most of the code and program state looks the way everyone expects it to look. And when you finally discover the problem, it isn't a 1-line fix, because an entire module has been written with this bad assumption, or the code does something fairly complicated that can't be changed easily.

There are several ways to defend against these types of bugs, all of which have a cost. There's the formal route, where you specify all valid inputs and outputs for each function (as documentation). There's the testing route, where you write unit tests for each function. And there's the contract-based route, where you define invariants checked at runtime. In fact, all 3 are valuable, but the return on investment for each one depends on the scale of the program.

Although I think good documentation is essential for a multi-coder project, I would probably do that last. In fact, the technique which is the cheapest but most effective is to simply assert all your invariants inside your functions. Yes, this includes things you think are silly, like checking for NULL pointers. But it also includes things which are less silly, like checking for empty strings, empty containers, and other input assumptions which occur. It's essentially an argument for contract-based programming. D has this feature in the language. It is ironic that it is virtually absent from the compiler itself. There are probably more assert(0)s in the code than any other assert.

DMD has a fair number of open bugs left, and if I had to guess, the easy ones have already been cherry-picked. That means the remainders are far more likely to be deep bugs than shallow ones. And the only way I know how to attack deep bugs (both proactively and reactively) is to start making assumptions explicit (via assertions, exceptions, documentation), and to give the people debugging a visualization of what is happening in the program via logs/debug output. Oftentimes, a log file will show patterns that give you a fuzzy, imprecise sense of what is happening that is still useful, because when a bug shows up, it disrupts the pattern in some obvious way. This is what I mean by "visualizing the flow". It's being able to step back from the bark-staring which is single-stepping, and trying to look at a stand of trees in the forest.

Dave
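
For reference, this is roughly what the contract-programming argument above looks like in D itself - a sketch with invented names, not code from any real module:

    // in/out contracts make the invariants part of the signature, so a
    // violated assumption fails at the boundary where it enters,
    // not five calls later.
    string mangle(string ident, string[] parts)
    in
    {
        assert(ident.length > 0, "empty identifier");  // the "less silly" check
    }
    out (result)
    {
        assert(result.length >= ident.length);
    }
    do
    {
        string result = ident;
        foreach (p; parts)
            result ~= "." ~ p;
        return result;
    }

The same discipline is available in the C++ source as plain asserts at function entry; the contracts just make it explicit in the declaration.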
November 11, 2012 Re: [dmd-internals] Asserts
Posted in reply to David Held

David Held wrote, on November 9 at 22:41:
> On 11/9/2012 10:17 PM, Jonathan M Davis wrote:
>> [...]
>> Logging is extremely useful for applications which are constantly up and/or which involve a lot of network traffic or user interaction (which is typically non-repeatable and often can't be examined with the debugger running). However, that doesn't apply at all to a compiler. Compilers are incredibly deterministic, and errors are very, very repeatable. All you have to do is run the compiler on the same input, and you'll see the problem again, and stopping the compiler in a debugger causes no problems. So, I'd say that logging is completely inappropriate for a compiler.
>
> If there were no logging statements in the compiler, you might have a point. The fact that the dmd source is littered with them puts the lie to your insistence that they are "inappropriate". Obviously, Walter found them very useful at times.
>
> Also, the compiler is only deterministic because it isn't yet multi-threaded. That doesn't mean Walter hasn't attempted to make it such on more than one occasion. If the compiler had more immutable data structures, this would probably be an easier effort. ;)
>
> Finally, what the debugger cannot do is provide you with a history of what happened, except insofar as you are willing to manually capture the state change of various memory locations as you step through the program.

Well, then I guess you don't know gdb's reverse debugging :D

http://sourceware.org/gdb/news/reversible.html

It is limited, though (no reverse debugging of programs using threads, for example); I just wanted to point that out as a curiosity :).

I agree it's always better to catch NULL values as soon as possible.
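
For the curious, the basic workflow looks something like this (gdb 7.0+, only on targets that support process record):

    (gdb) start                # run to main
    (gdb) record               # begin recording execution
    (gdb) continue             # run forward until the crash or assert
    (gdb) reverse-continue     # now run *backwards* to the last breakpoint
    (gdb) reverse-step         # or step backwards one source line at a time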
November 11, 2012 Re: [dmd-internals] Asserts
Posted in reply to David Held

On 11/10/2012 11:23 AM, David Held wrote:
> On 11/9/2012 11:38 PM, Walter Bright wrote:
>> [...]
>> I'll often use printf because although the debugger knows types, it rarely shows the values in a form I particularly need to track down a problem, which tends to be different every time. And besides, throwing in a few printfs is fast and easy, whereas setting breakpoints and stepping through a program is an awfully tedious process. Or maybe I never learned to use a debugger properly, which is possible since I've been forced to debug systems where no debugger whatsoever was available - I've even debugged programs using an oscilloscope, making clicks on a speaker, blinking an LED, whatever is available.
>
> You're making my point for me, Walter! I have seen some people whiz through the debugger like they live in it, but I would say that level of familiarity tends to be the exception, rather than the rule. And it always makes me a little uncomfortable when I see it (why would someone *need* to be that proficient with the debugger...?). Firing up the debugger, for many people, is a relatively expensive process, because it isn't something that good programmers should be doing very often (unless you subscribe to the school which says that you should always step through new code in the debugger... consider this an alternative to writing unit tests).

I think we are agreeing on that. BTW, we are also specifically talking about debugging DMD source code. I would expect someone capable of understanding compiler guts to be capable of using a debugger to generate a stack trace, so for this specific case I don't see any need to cater to compiler devs who are incapable/afraid/ignorant of using a debugger.

>> Note that getting a call stack for a seg fault does not suffer from these problems. I just:
>>
>>     gdb --args dmd foo.d
>>
>> and whammo, I got my stack trace, complete with files and line numbers.
>
> There are two issues here. 1) Bugs which don't manifest as a segfault. 2) Bugs in which a segfault is the manifestation, but the root cause is far away (i.e.: not even in the call stack). I will say more on this below.

True, but I find seg faults from deep bugs to be only 10% of the seg fault bugs; the rest are shallow ones. (The 10% figure is made up on the spot.)

>> [...]
>>> Especially when there may be hundreds of instances running, while only a few actually experience a problem, logging usually turns out to be the better choice. Then consider that logging is also more useful for bug reporting, as well as visualizing the code flow even in non-error cases.
>>
>> Sure, but that doesn't apply to dmd. What's best practice for one kind of program isn't for another.
>
> There are many times when a command-line program offers logging of some sort which has helped me identify a problem (often a configuration error on my part). Some obvious examples are command shell scripts (which, by default, simply tell you everything they are doing... both annoying and useful) and makefiles (large build systems with hundreds of makefiles almost always require a verbose mode to help debug a badly written makefile).
>
> Also, note that when I am debugging a service, I am usually using it in a style which is equivalent to dmd. That is, I get a repro case, I send it in to a standalone instance, and I look at the response and the logs. This is really no different from invoking dmd on a repro case.
> Even in this scenario, logs are incredibly useful because they tell me the approximate location where something went wrong. Sometimes this is enough to go look in the source and spot the error, and other times I have to attach a debugger. But even when I have to go to the debugger, the logs let me skip 90% of the single-stepping I might otherwise have to do (because they tell me where things *probably worked correctly*).

Sure, I just don't see value in adding in code for generating logs, for dmd, unless it is in the service of looking for a particular problem.

>> [...]
>> I've tried that (see the LOG macros in template.c). It doesn't work very well, because the logging data tends to be far too voluminous. I like to tailor it to each specific problem. It's faster for me, and works.
>
> The problem is not that a logging system doesn't work very well, but that a logging system without a configuration system is not first-class, and *that* is what doesn't work very well. If you had something like log4j available, you would be able to tailor the output to something manageable. An all-or-nothing log is definitely too much data when you turn it on.

I am not seeing it as a problem that I tailor the printf's as required, or why that would be harder than tailoring via log4j.

> On 11/9/2012 11:44 PM, Walter Bright wrote:
>> [...]
>> There is some async code in there. If I suspect a problem with it, I've left in the single thread logic, and switch to that in order to make it deterministic.
>
> But that doesn't tell you what the problem is. It just lets you escape to something functional by giving up on the parallelism. Logs at least tell you the running state in the parallel case, which is often enough to guess at what is wrong. Trying to find a synchronization bug in parallel code is pretty darned difficult in a debugger (for what I hope are obvious reasons).

Yes, that's true, but it enables me to quickly determine whether it is a bug in the async logic or not. BTW, my experience in adding logs to debug async code is that they make it work :-).

>> [...]
>> Actually, very very few bugs manifest themselves as seg faults. I mentioned before that I regard the emphasis on NULL pointers to be wildly excessive.
>
> I would like to define a metric, which I call "bug depth". Suppose that incorrect program behavior is noticed, and bad behavior is associated with some symbol, S. Now, it could be that there is a problem with the immediate computation of S, whatever that might be (I mean, like in the same lexical scope). Or, it could be that S is merely a victim of a bad computation somewhere else (i.e.: the computation of S received a bad input from some other computation). Let us call the bad input S'. Now, it again may be the case that S' is a first-order bad actor, or that it is the victim of a bug earlier in the computation, say, from S''. Let us call the root-cause symbol R. Now, there is some trail of dependencies from R to S which explains the manifestation of the bug. And let us call the number of references which must be followed from S to R the "bug depth".
>
> Now that we have this metric, we can talk about "shallow" bugs and "deep" bugs. When a segfault is caused by code immediately surrounding the bad symbol, we can say that the bug causing the segfault is "shallow". And when it is caused by a problem, say, 5 function calls away, in non-trivial functions, it is probably fair to say that the bug is "deep". In my experience, shallow bugs are usually simple mistakes.
> A programmer failed to check a boundary condition due to laziness, they used the wrong operator, they transposed some symbols, they re-used a variable they shouldn't have, etc. And you know they are simple mistakes when you can show the offending code to any programmer (including ones who don't know the context), and they can spot the bug. These kinds of bugs are easy to identify and fix.
>
> The real problem is when you look at the code where something is failing, and there is no obvious explanation for the failure. Ok, maybe being able to see the state a few frames up the stack will expose the root cause. When this happens, happy day! It's not the shallowest bug, but the stack is the next easiest context in which to look for root causes. The worst kinds of bugs happen when *everyone thinks they did the right thing*, and what really happened is that two coders disagreed on some program invariant. This is the kind of bug which tends to take the longest to figure out, because most of the code and program state looks the way everyone expects it to look. And when you finally discover the problem, it isn't a 1-line fix, because an entire module has been written with this bad assumption, or the code does something fairly complicated that can't be changed easily.
>
> There are several ways to defend against these types of bugs, all of which have a cost. There's the formal route, where you specify all valid inputs and outputs for each function (as documentation). There's the testing route, where you write unit tests for each function. And there's the contract-based route, where you define invariants checked at runtime. In fact, all 3 are valuable, but the return on investment for each one depends on the scale of the program.
>
> Although I think good documentation is essential for a multi-coder project, I would probably do that last. In fact, the technique which is the cheapest but most effective is to simply assert all your invariants inside your functions. Yes, this includes things you think are silly, like checking for NULL pointers.

I object to the following pattern:

    assert(p != NULL);
    *p = ...;

i.e. dereferencing p shortly after checking it for null. This, to me, is utterly pointless in dmd. If, however,

    assert(p != NULL);
    s->field = p;

and no dereference is done, then that has merit.
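
The same two fragments, annotated to restate the distinction:

    assert(p != NULL);   // adds nothing: the very next line...
    *p = ...;            // ...already faults loudly on a null p

    assert(p != NULL);   // worthwhile: without it, a null stored here
    s->field = p;        // only blows up later, far from the root cause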
> But it also includes things which are less silly, like checking for empty strings, empty containers, and other input assumptions which occur. It's essentially an argument for contract-based programming. D has this feature in the language. It is ironic that it is virtually absent from the compiler itself. There are probably more assert(0)s in the code than any other assert.
>
> DMD has a fair number of open bugs left, and if I had to guess, the easy ones have already been cherry-picked. That means the remainders are far more likely to be deep bugs than shallow ones. And the only way I know how to attack deep bugs (both proactively and reactively) is to start making assumptions explicit (via assertions, exceptions, documentation), and to give the people debugging a visualization of what is happening in the program via logs/debug output. Oftentimes, a log file will show patterns that give you a fuzzy, imprecise sense of what is happening that is still useful, because when a bug shows up, it disrupts the pattern in some obvious way. This is what I mean by "visualizing the flow". It's being able to step back from the bark-staring which is single-stepping, and trying to look at a stand of trees in the forest.

The bulk of the bugs result from an incomplete understanding of how the various semantics of the language interact with each other. For example, behavior X of the language is accounted for in this section of code, but not that section. What has helped with this is what Andrei calls "lowering" - rewriting complex D code into a simpler equivalent, and then letting the simpler compiler routines deal with it.

What has also helped is refactoring. For example, the code to walk the expression trees tends to be duplicated a lot. By switching to a single walker coupled with an "apply" function, a number of latent bugs were fixed. I'd like to see more of this (see src/apply.c).

Dang, I wish I could use ranges in the D source code :-)
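
The shape of that refactoring, sketched here in D for compactness (the real implementation is the C++ in src/apply.c; the names are simplified):

    class Expression
    {
        Expression e1, e2;   // sub-expressions; null if absent

        // One bottom-up walker replaces the many hand-written tree
        // traversals; a nonzero return from dg stops the walk early.
        int apply(int delegate(Expression e) dg)
        {
            if (e1 !is null && e1.apply(dg)) return 1;
            if (e2 !is null && e2.apply(dg)) return 1;
            return dg(this);
        }
    }

Each analysis then shrinks to a callback, so e.g. "does this expression contain a call?" no longer needs its own recursion.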
November 11, 2012 Re: [dmd-internals] Asserts
Posted in reply to Walter Bright

On 11 Nov 2012, at 20:50, Walter Bright wrote:
> What has also helped is refactoring. For example, the code to walk the expression trees tends to be duplicated a lot. By switching to a single walker coupled with an "apply" function, a number of latent bugs were fixed. I'd like to see more of this (see src/apply.c).

On a related note, and sorry to ping you about this again: Could you please have a look at http://d.puremagic.com/issues/show_bug.cgi?id=8957 and my »Expression::apply, DeclarationExp and a possible nested context bug« thread on this mailing list (http://forum.dlang.org/thread/CAP9J_HXG8mTtnojU9YwYuSGZp1NQCdY0+7oeHyoQ2WhNR-dAuw@mail.gmail.com)?

The question is whether Expression::apply should visit expressions evaluated as part of a DeclarationExp (i.e. initializers). I had to implement a workaround for this in LDC, since it causes an outright crash in its nested context creation code, while for DMD the issue is just a rather obscure wrong-code bug; but from past experience I'd like to avoid unilaterally messing with the frontend as much as possible.

David
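
In terms of the walker sketched earlier, the question is roughly the following (member names invented for illustration):

    class DeclarationExp : Expression
    {
        Expression initializer;   // e.g. the call in: auto x = nested();

        override int apply(int delegate(Expression e) dg)
        {
            // The open question: should apply() descend in here?
            // Per the bug report, LDC's nested-context code needs to
            // see the initializer, while dmd's walker skips it.
            if (initializer !is null && initializer.apply(dg)) return 1;
            return dg(this);
        }
    }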
November 11, 2012 Re: [dmd-internals] Asserts
Posted in reply to David Nadlinger

On 11/11/2012 2:04 PM, David Nadlinger wrote:
> On a related note, and sorry to ping you about this again: Could you please have a look at http://d.puremagic.com/issues/show_bug.cgi?id=8957 and my »Expression::apply, DeclarationExp and a possible nested context bug« thread on this mailing list (http://forum.dlang.org/thread/CAP9J_HXG8mTtnojU9YwYuSGZp1NQCdY0+7oeHyoQ2WhNR-dAuw@mail.gmail.com)?
>
> The question is whether Expression::apply should visit expressions evaluated as part of a DeclarationExp (i.e. initializers). I had to implement a workaround for this in LDC, since it causes an outright crash in its nested context creation code, while for DMD the issue is just a rather obscure wrong-code bug; but from past experience I'd like to avoid unilaterally messing with the frontend as much as possible.

I just elevated its priority.
November 12, 2012 Re: [dmd-internals] Asserts
Posted in reply to Walter Bright

On 11 November 2012 20:50, Walter Bright <walter@digitalmars.com> wrote:
> The bulk of the bugs result from an incomplete understanding of how the various semantics of the language interact with each other. For example, behavior X of the language is accounted for in this section of code, but not that section. What has helped with this is what Andrei calls "lowering" - rewriting complex D code into a simpler equivalent, and then letting the simpler compiler routines deal with it.

A bit off-topic, but I don't agree that lowering has actually helped. The problem is that the compiler frequently lowers things into constructions which are not valid D code - most obviously, variable declarations inside comma expressions, and local ref variables. The inliner does a lot of that as well. A significant fraction of the CTFE bugs are caused by this. Although lowering definitely helps in some cases, I don't think it's a net win.
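
To make the complaint concrete, a sketch of what lowering means (the first rewrite stays within the language; the second, in pseudo-code, is the kind of internal-only construction being objected to; cleanup/doWork/f are invented names):

    // What the programmer writes:
    scope(exit) cleanup();
    doWork();

    // Roughly what the front end lowers it to - still expressible as D:
    try
    {
        doWork();
    }
    finally
    {
        cleanup();
    }

    // The problem case: some rewrites have no valid D spelling at all,
    // e.g. a declaration inside a comma expression (pseudo-code):
    //
    //     (T __tmp = f(), __tmp.method())
    //
    // so later passes like CTFE meet ASTs that no user could have written.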