October 14, 2001 Re: Compiler feature: warnings on known exceptions?
Posted in reply to Robert W. Cunningham

"Robert W. Cunningham" wrote:
>
> As a real-time embedded programmer, I would kill for a compiler smart enough to handle some asserts before generating code. I need fast, tight code that has a prayer of meeting its spec. Asserts help do exactly this (as part of an enforcement mechanism for "Program By Contract" and its ilk), yet there is generally "no room" for them in the ROM (in the runtime code, that is). So, things often work fine under the debugger with asserts present, yet the code fails when burned to ROM. Too many times I've seen this happen, and the cost to the project is always significant.

I don't think that's the fault of the assertions. Enabled asserts don't magically make code work that otherwise wouldn't (in fact, nearly the opposite -- they force code to fail consistently when it might sometimes appear to work, or they force code to fail at a predictable point).

If code is working in debug builds with asserts present, but not in "release" builds, then generally one of the following is happening:

(a) You have a bug which manifests or not according to what addresses various code and data appear at (since presence/absence of asserts moves both code and data around) -- which smells like a loose pointer which sometimes hits something important and sometimes doesn't;

or

(b) You have a compiler that's too aggressive in optimization, and turns working unoptimized code into differently-working or non-working optimized code; try building without asserts and other debugging features, but with optimizations turned off, and see if the problems go away;

or

(c) Code other than assert is either compiled or not on the basis of the debug configuration, and this code is causing a problem;

or

(d) The timing of operations is being slightly altered by the presence or absence of assert tests, or by the speed of optimized vs. unoptimized code, and thereby exposing subtle timing problems;

or

(e) The runtime environment is different from the testing environment in some way -- the debugger stub, emulation of hardware, or different revisions of hardware are conspiring to make your program act differently.

> Give me a language that supports "smarter" asserts, and I'll show you a language that's sure to win in the real-time embedded market.

None of the above explanations can really be helped by a "smarter" compiler.

-Russell B
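
For reference, this failure mode is built into standard C: the assert macro compiles to nothing when NDEBUG is defined, so neither the check nor its cost ever reaches the ROM image. A rough sketch of the mechanism (exact expansions vary by implementation):

    #include <stdio.h>
    #include <stdlib.h>

    /* Roughly what <assert.h> does: in "release" builds (NDEBUG defined),
       the macro expands to no code at all. */
    #ifdef NDEBUG
    #define my_assert(expr) ((void)0)
    #else
    #define my_assert(expr) \
        ((expr) ? (void)0 \
                : (fprintf(stderr, "%s:%d: assertion failed: %s\n", \
                           __FILE__, __LINE__, #expr), abort()))
    #endif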

October 14, 2001 Re: Compiler feature: warnings on known exceptions?
Posted in reply to Russell Borogove

(f) Your hardware is defective. Unstable hardware can have the coolest effects, including optimized builds that work while debug builds don't, and vice versa.

October 14, 2001 Re: Compiler feature: warnings on known exceptions?
Posted in reply to Axel Kittenberger

Axel Kittenberger wrote in message <9qd27s$2drp$1@digitaldaemon.com>...
>(f) Your hardware is defective. Unstable hardware can have the coolest effects, including optimized builds that work while debug builds don't, and vice versa.

My first embedded system project had only so many milliseconds for the interrupt service routine to run. Taking longer would cause cascading interrupts and a crash. To time it, I set an I/O port bit to 1 on entry, and 0 on exit. Connect an oscilloscope to the bit, and voila!
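
A minimal C sketch of that trick, assuming a hypothetical memory-mapped output port; the width of the pulse seen on the scope is the ISR's execution time:

    #include <stdint.h>

    /* PORT_ADDR and TIMING_BIT are hypothetical -- substitute your
       target's spare output pin. */
    #define PORT_ADDR  0x40001000u                /* GPIO output register */
    #define TIMING_BIT (1u << 3)                  /* pin wired to the probe */

    static volatile uint32_t *const port = (volatile uint32_t *)PORT_ADDR;

    void timer_isr(void)
    {
        *port |= TIMING_BIT;                      /* pin high: ISR entered */
        /* ... actual interrupt service work ... */
        *port &= ~TIMING_BIT;                     /* pin low: ISR done */
    }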

October 14, 2001 Re: Compiler feature: warnings on known exceptions?
Posted in reply to Russell Borogove

It is almost always item (d), the changed timing between the debug and ROM environments, that causes most of the runtime problems I see in our systems. Most of the other problem types are caught far earlier by meticulous design and exhaustive analysis.
A related problem is simple path-coverage testing, where we need to proceed to the hardware tests in parallel with the coverage tests. Coverage testing is exhaustive and time-consuming, and since the real hardware is often far faster than the debug environment, most of the late bugs it exposes tend to reveal themselves on the real hardware.
Language features that even slightly assist such development processes would be very welcome.
Now we are adding reconfigurable processing elements and "soft" cores to our designs, and the situation is getting worse. We really need a single language for both hardware and software design and implementation. The hardware folks use VHDL and Verilog; we use C and C++. The same split exists in design and modeling tools: they use schematic capture, we use UML. And in implementation support tools: they have hardware simulators, we have software simulators and debuggers. There have been many efforts to bridge this divide at several levels, yet none have proven practical for use by small teams (3-4 engineers each for hardware and software, excluding management).
Now I find I have functions with better hardware implementations than software can ever achieve (things like state machines and "straight-line" math sequences), and I need to be able to design and build in one environment, yet still be able to move to EITHER domain for implementation. Testing done in one domain (hardware or software) should be transferable to the other at some level, allowing the implementation to switch between domains as needed to meet project goals.
The longer we can put off such decisions, and do so with minimal cost, the faster and better we can create the product. Ideally, it "shouldn't matter" if a given algorithm is implemented in hardware or software: At the design and testing level, it should be the same work for both domains. Either the algorithm is correct or it is not. If it is correct, then we only need verify the implementation. Presently, very little software design is "proven correct" or even "demonstrated" before being implemented.
Not that D alone could ever be the whole solution for this situation! But it may be a step in the right direction. With the right features, it may allow us to start testing earlier and end it later, even to the point of shipping the system with more testing code in it than we normally would (or could). Compiler-tested asserts would be exactly such a thing: if they can be identified in advance, then they never need to be "removed" from the application code.
Sure, you can do some similar stuff using the preprocessor, but that's tedious and error prone, since the preprocessor language is NOT the implementation language! They should be one and the same for this to work cleanly.
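
For reference, the tedious preprocessor approach alluded to here is usually some variant of the classic negative-array-size trick -- a sketch with hypothetical macro names, checked entirely at compile time so it never occupies ROM:

    /* Compile-time assert: if cond is false, the array size is -1 and the
       compiler rejects the program. The __LINE__ paste gives each use a
       unique typedef name. These macro names are hypothetical. */
    #define CT_CAT_(a, b) a##b
    #define CT_CAT(a, b)  CT_CAT_(a, b)
    #define CT_ASSERT(cond) \
        typedef char CT_CAT(ct_assert_, __LINE__)[(cond) ? 1 : -1]

    CT_ASSERT(sizeof(short) <= sizeof(int));   /* costs no code or data */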
-BobC
Russell Borogove wrote:
> "Robert W. Cunningham" wrote:
> >
> > As a real-time embedded programmer, I would kill for a compiler smart enough to handle some asserts before generating code. I need fast, tight code that has a prayer of meeting its spec. Asserts help do exactly this (as part of an enforcement mechanism for "Program By Contract" and its ilk), yet there is generally "no room" for them in the ROM (in the runtime code, that is). So, things often work fine under the debugger with asserts present, yet the code fails when burned to ROM. Too many times I've seen this happen, and the cost to the project is always significant.
>
> I don't think that's the fault of the assertions. Enabled asserts don't magically make code work that otherwise wouldn't (in fact, nearly the opposite -- they force code to fail consistently when it might sometimes appear to work, or they force code to fail at a predictable point).
>
> If code is working in debug builds with asserts present, but not in "release" builds, then generally one of the following is happening:
>
> (a) You have a bug which manifests or not according to what addresses various code and data appear at (since presence/absence of asserts moves both code and data around) -- which smells like a loose pointer which sometimes hits something important and sometimes doesn't;
>
> or
>
> (b) You have a compiler that's too aggressive in optimization, and turns working unoptimized code into differently-working or non-working optimized code; try building without asserts and other debugging features, but with optimizations turned off, and see if the problems go away;
>
> or
>
> (c) Code other than assert is either compiled or not on the basis of the debug configuration, and this code is causing a problem;
>
> or
>
> (d) The timing of operations is being slightly altered by the presence or absence of assert tests, or by the speed of optimized vs. unoptimized code, and thereby exposing subtle timing problems;
>
> or
>
> (e) The runtime environment is different from the testing environment in some way -- the debugger stub, emulation of hardware, or different revisions of hardware are conspiring to make your program act differently.
>
> > Give me a language that supports "smarter" asserts, and I'll show you a language that's sure to win in the real-time embedded market.
>
> None of the above explanations can really be helped by a "smarter" compiler.
>
> -Russell B

October 15, 2001 Re: Compiler feature: warnings on known exceptions?
Posted in reply to Russell Borogove

I would add (f): you put functional code in an assert. From time to time I've made the critical mistake of trying to directly check the return code from a function:

    assert(ImportantFunction(...) == true)

At least in my compiler, when I compiled the Release version, the function was never called. Oops.

--
The Villagers are Online! http://villagersonline.com
.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
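
A sketch of the usual fix: keep the call outside the assert, so only the check disappears in a release build (ImportantFunction here is a hypothetical stand-in):

    #include <assert.h>

    int ImportantFunction(void);        /* hypothetical function under test */

    void example(void)
    {
        /* Wrong: with NDEBUG defined, the entire call vanishes:
           assert(ImportantFunction() == 1);                      */

        /* Safe: the call always executes; only the check compiles away. */
        int ok = ImportantFunction();
        assert(ok == 1);
        (void)ok;                       /* quiet unused-variable warnings */
    }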

October 15, 2001 Re: Compiler feature: warnings on known exceptions?
Posted in reply to Russ Lewis
Russ Lewis wrote:
>
> I would add (f): you put functional code in an assert. From time to time I've made the critical mistake of trying to directly check the return code from a function:
>
> assert(ImportantFunction(...) == true)
>
> At least in my compiler, when I compiled the Release version, the function was never called. Oops.
Ah, good point -- this is certainly the easiest way to make behavior different between debug and release versions.
-RB

October 15, 2001 Re: Compiler feature: warnings on known exceptions?
Posted in reply to Robert W. Cunningham
"Robert W. Cunningham" wrote:
>
> It is almost always item (d), the changed timing between the debug and ROM environments, that causes most of the runtime problems I see in our systems. Most of the other problem types are caught far earlier by meticulous design and exhaustive analysis.
Are your design and analysis phases capable of catching compiler
bugs or bad-pointer implementation bugs, like items (a) and (b)?
If your systems are that sensitive to timing, then aren't you going to run into all sorts of problems down the road, when your model Foo microcontrollers are phased out by the company that makes them and replaced with the Foo-Plus-Turbo models which are binary-compatible but 2 to 5 times faster depending on the instruction mix?
(I'm not really an embedded systems engineer, but I've been programming game consoles and development systems for them, off and on, since '91 or so.)
-RB

October 16, 2001 Re: Compiler feature: warnings on known exceptions?
Posted in reply to Russell Borogove

Russell Borogove wrote:
> "Robert W. Cunningham" wrote:
> >
> > It is almost always item (d), the changed timing between the debug and ROM environments, that causes most of the runtime problems I see in our systems. Most of the other problem types are caught far earlier by meticulous design and exhaustive analysis.
>
> Are your design and analysis phases capable of catching compiler bugs or bad-pointer implementation bugs, like items (a) and (b)?

Yes, sometimes, but we don't rely on them 100%. We prefer to take a defensive approach: We often use two compilers in parallel, and compare the output from both. While rare, I have been on projects that shipped code from more than a single compiler! We also use lots of lint-like tools to ensure we are using the language features we want, and avoid the ones we don't want. Avoiding known compiler bugs is a very difficult problem, especially on larger projects.

When possible, we also create "reference" algorithm implementations in completely different environments (such as MathCad or Mathematica). We use them to generate independent results, which are then used to test the "real" implementation (see the sketch after this post). I have uncovered many bugs in various floating point libraries and hardware this way.

> If your systems are that sensitive to timing, then aren't you going to run into all sorts of problems down the road, when your model Foo microcontrollers are phased out by the company that makes them and replaced with the Foo-Plus-Turbo models which are binary-compatible but 2 to 5 times faster depending on the instruction mix?

Yup, we do. When parts are phased out, we typically make "lifetime buys" of timing-critical components, then plan the follow-on product to be ready for market before those supplies run out.

We have occasionally been caught using bleeding-edge components that somehow never became popular in the market. They make advanced products possible, but they also tend to shorten the overall product cycle. It's a normal design decision: performance vs. price vs. time-to-market. Our business model goes for high margins, which means we have to always be first with the best. But it means nothing if we can't deliver repeatably, reliably, and on schedule. And early part obsolescence (or an earthquake in Taiwan) can throw a wrench into the most carefully made plans.

We avoid this to a large extent in digital circuits by placing more and more stuff into FPGAs. FPGAs only get larger and faster, and they have an excellent track record for being backward-compatible with prior parts and code. Quite often, we get a "free" upgrade when a new FPGA is pin-compatible with an old one, but has several times the number of gates. Unfortunately, it seems almost impossible to get a similar processor upgrade that provides higher performance without other system changes. That's why we will be putting the CPU into an FPGA in many of our upcoming products.

I have visions of a future language (and a compiler for it) that will find the best way to use the resources within an FPGA. When the application is "compiled", part of the code will become software, more of the code will become a CPU custom-crafted to run that software, and the rest will become fixed (or reprogrammable) logic. That is the direction our technology is heading.

Compilers are getting smarter and smarter: Only in the past 5 years has VLIW (Itanium, Crusoe, DSPs) become truly practical for non-trivial applications, and it is all due to the compilers. The same goes for parallel processing and clusters: The compilers (and libraries) have made it possible "for the rest of us". On the hardware side, it is now practically impossible for a hardware designer to create an ASIC without "silicon compilers" for his VHDL and Verilog source code.

Both hardware and software folks now have sophisticated code-generator front-ends that take much of the drudgery out of implementing the repetitive and simple portions of any design. I use one to create device drivers, and the drivers created with the help of such tools are the best I've ever made. (And I've written lots of device drivers over the years, many in hand-tuned assembler.)

Someday, the compilers and generators will meet, and will be combined into a higher-level tool that will have a correspondingly sophisticated design, development and debug/test environment. I can hardly wait. But for the moment, I'm looking for whatever help I can get. Especially from languages such as D!

> (I'm not really an embedded systems engineer, but I've been programming game consoles and development systems for them, off and on, since '91 or so.)

Though I wrote my first computer program in '72, I started professionally programming real-time apps for an embedded 8085 target in '83, when 16K RAM and 32K ROM was truly massive for an embedded system (or any micro). I've burned and erased more UV EPROMs than I can count: In-system ROM emulators didn't become practical until the early 90's.

Now I can often find a cycle-accurate CPU simulator (actually, I tend to avoid CPUs for which such simulators are not available) and can get the majority of my application implemented, tested and debugged long before the hardware is ready.

Now, if only I could get my CPU simulator tied to the hardware simulator and run them together to do a full system simulation with clock cycle accuracy. One day... Yes, the big boys at General Dynamics and Boeing do this every day. I can't wait for the tools to filter down to those of us working at smaller companies.

-BobC
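
A minimal sketch of the "reference implementation" testing idea described above; the routine under test and the table values are hypothetical placeholders for results generated independently (e.g., exported from Mathematica):

    #include <math.h>
    #include <stdio.h>

    /* Production routine under test (a trivial stand-in here). */
    static double fast_sqrt(double x) { return sqrt(x); }

    /* Reference results computed in a completely different environment. */
    static const struct { double in, expected; } ref[] = {
        { 4.0, 2.0 },
        { 2.0, 1.4142135623730951 },
        { 9.0, 3.0 },
    };

    int main(void)
    {
        int failures = 0;
        for (unsigned i = 0; i < sizeof ref / sizeof ref[0]; i++) {
            double got = fast_sqrt(ref[i].in);
            if (fabs(got - ref[i].expected) > 1e-12) {
                printf("MISMATCH: in=%g got=%.17g want=%.17g\n",
                       ref[i].in, got, ref[i].expected);
                failures++;
            }
        }
        return failures != 0;
    }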

October 16, 2001 Re: Compiler feature: warnings on known exceptions?
Posted in reply to Robert W. Cunningham

"Robert W. Cunningham" wrote:
> We often use two compilers in parallel, and compare the output from both...
> We also use lots of lint-like tools to ensure we are using the language features we
> want, and avoid the ones we don't want...
> When possible, we also create "reference" algorithm implementations in completely
> different environments (such as MathCad or Mathematica). We use them to generate
> independent results, which are then used to test the "real" implementation...
Very cool. I wish I could work on projects where we were able to take some of those steps.
-RB

October 17, 2001 Re: Compiler feature: warnings on known exceptions?
Posted in reply to Robert W. Cunningham

Robert W. Cunningham wrote in message <3BCA2362.8805E797@yahoo.com>...
>Sure, you can do some similar stuff using the preprocessor, but that's tedious and error prone, since the preprocessor language is NOT the implementation language! They should be one and the same for this to work cleanly.

Yes, exactly right!