Are D Exceptions Fast and Efficient?
June 28, 2005
Hi there,

What is the cost (performance & memory-wise) of throwing and catching an exception in D? First, why do I ask this:

1) Exceptions are actually a great way to handle errors.
2) Thus, it would be great to handle ALL errors via exceptions.
Gets rid of return codes, errnos, null-checking, etc.
3) However, error-handling MUST be fast and efficient.
Particularly errors that you expect will happen relatively commonly.
4) Hence, none of this will work if exceptions are slow.

Why am I concerned?

Well, in C#, which is pretty cool (but managed) and has structured exception handling, there is one nasty restriction:

Exceptions are _clearly_ only meant for "exceptional" situations. Things that shouldn't happen a lot: running out of memory, database connection failed, etc.

This restriction is due to [a] various authors recommending such a design; understandable, given [b] the fact that when an exception is thrown, your program halts execution for literally a couple of _human_ seconds.

This is simply unacceptable for a simple thing like an "invalid input string." So in C#, we are forced to do "efficient" error checking _before_ a potential exception is thrown. In essence, exceptions in C# are a last resort.

So, to recap, is it possible to handle all errors in D via exceptions (without a
speed cost), or should the C# approach be taken: using exceptions only for
exceptional stuff (a limitation, IMHO)?

Thanks!
--AJG.

================================
2B || !2B, that is the question.
June 28, 2005
In article <d9s10t$1lid$1@digitaldaemon.com>, AJG says...
>
>Well, in C#, [...]
>Exceptions are _clearly_ only meant for "exceptional" situations. Things that
>shouldn't happen a lot: running out of memory, database connection failed, etc.
>[...] when an exception is thrown, your
>program halts execution literally for a couple of _human_ seconds.

I find this very surprising. We've been writing downright exception-happy C# for a couple of years now, including things like dodgy per-cell string parsing inside large data tables, and have never encountered any such problems.

C++ implementations often exhibit a severe pause if they defer loading EH tables in from disk until the first throw, but that's a deterministic-destruction overhead. I wouldn't expect to see it in a GC language like C#. (Or D.)

cheers,
Mike


June 28, 2005
"AJG" <AJG_member@pathlink.com> wrote in message news:d9s10t$1lid$1@digitaldaemon.com...
> What is the cost (performance & memory-wise) of throwing and catching an
> exception in D?

I refer you to the "Error handling" section of D's spec (http://www.digitalmars.com/d/index.html):

"
Errors are not part of the normal flow of a program. Errors are exceptional,
unusual, and unexpected.
D exception handling fits right in with that.
Because errors are unusual, execution of error handling code is not
performance critical.
Exception handling stack unwinding is a relatively slow process.
The normal flow of program logic is performance critical.
Since the normal flow code does not have to check every function call for
error returns, it can be realistically faster to use exception handling for
the errors.
"

For one, D assumes that exceptions are very unusual circumstances.  For two, it says that "error handling code is not performance critical."

Now, it won't pause your app for a few seconds when throwing an exception like in .NET, but it won't be _as fast_ as the rest of your code.

But really, the question is - why would you design an application to throw an exception for anything _other_ than absolutely exceptional circumstances? I think it's more of a design issue than anything.


June 28, 2005
Hi there,

I think I need to be stealing your C# exception code ;) You must be doing something right if it's relatively fast like that. I'm coding in C# right now (a business app) and a particular loop is throwing an IndexOutOfRangeException.

No matter where I catch it, it makes the program choke for at least 2-3 seconds. If I let it go unhandled, it's even worse. This has been the case every other time I've dealt with exceptions, too.

Anyway, I digress. But I simply can't figure out how you've never run into the infamous halt after an exception is thrown.

Cheers,
--AJG.



June 28, 2005
Jarrett Billingsley wrote:
> For one, D assumes that exceptions are very unusual circumstances.  For two, it says that "error handling code is not performance critical."

This is only theory. Throwing and handling exceptions should be (and in fact
is) as fast as possible.

> Now, it won't pause your app for a few seconds when throwing an exception like in .NET, but it won't be _as fast_ as the rest of your code.

> But really, the question is - why would you design an application to throw an exception for anything _other_ than absolutely exceptional circumstances? I think it's more of a design issue than anything.

Why? Exceptions are the natural way of handling almost _every_ _error_. I can't see why someone would refrain from using them just because "they are not performance critical". In real code, exceptions may save a lot of developer time and program time. They are cheaper and faster. Here are two examples:

Example 1.

// error-code version: every iteration pays for the comparison
void doSomething(){
  for(int i=0;i<SOMEBIGCONST;i++)
    if(ERRORCODE == doSubpart())
      handleError();
}

vs.

// exception version: no per-iteration check in the loop body
void doSomething(){
  try{
    for(int i=0;i<SOMEBIGCONST;i++)
      doSubpart();
  }catch(Error e){
    handleError();
  }
}

As you can see, the second one saves SOMEBIGCONST comparisons. And if errors are really rare, then even if throwing and handling an exception isn't "performance critical", you still come out ahead on speed.
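(A back-of-envelope illustration, with numbers assumed purely for the sake of argument: if the per-iteration check costs about 1 ns and a throw/catch costs about 10 µs, then at SOMEBIGCONST = 1,000,000 the checks add up to roughly 1 ms, while the exception version pays its 10 µs only per failure that actually happens. The exception version comes out ahead whenever fewer than about 1 in 10,000 iterations fail.)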

Example 2. (this one isn't so easy, but I hope you'll get the idea)

// error-code version: every level has to check and propagate the error by hand
void doFoo(){
  if(ERROR == doBoo()){
    handleException();
    // and let's say ... try again doBoo()...
    // (I will not write code for that)
  }
}

int doBoo(){
  if(ERROR == doMoo())
    return ERROR;
  return OK;   // OK: some non-error value
}

int doMoo(){
  if(ERROR == doQoo())
    return ERROR;
  return OK;
}

int doQoo(){
  /* and go on with that sickness ... */
}

vs

// exception version: the intermediate levels don't mention the error at all
void doFoo(){
  try{
    doBoo();
  }catch(Error e){
    handleException();
  }
}

void doBoo(){
  doMoo();
}

void doMoo(){
  doQoo();
}

void doQoo(){
  /* and somewhere down here is the part that can
     create the error and throw it */
}

Since doFoo() is the only function that knows how to handle the Error, it is the only one interested in handling it. The rest of the code doesn't have to worry about it, because it doesn't care. Precious CPU cycles are saved.

End of examples.

This is a real example of the fact that "premature optimization is the root of all evil". Saying that using exceptions is not smart because they are not "performance critical" is exactly such premature optimization.

Exceptions were created for handling errors, and I'll be trying to use them as much as I can for that. Using exceptions is faster and cheaper (in all aspects). Only when the profiler says that some part of my code would benefit from using error codes will I switch to them. This is how things should be done, IMO.

Hell - even if they were as slow as a turtle I would use them. Can't you feel the magic? :)
-- 
Dawid Ciężarkiewicz | arael
June 28, 2005
In article <d9s92i$1v18$1@digitaldaemon.com>, Dawid Ciężarkiewicz says...
>
>Jarrett Billingsley wrote:
>> For one, D assumes that exceptions are very unusual circumstances.  For two, it says that "error handling code is not performance critical."
>
>This is only theory. Throwing and handling exceptions should be (and in fact
>is) as fast as possible.

Anyone interested in performance issues related to exception handing might enjoy this paper: http://www.research.att.com/~bs/performanceTR.pdf


Sean


June 28, 2005
"AJG" <AJG_member@pathlink.com> wrote in message news:d9s10t$1lid$1@digitaldaemon.com...
> What is the cost (performance & memory-wise) of throwing and catching an exception in D? First, why do I ask this:

D's exception handling mechanism is the same as C++'s, as D and C++ share the same back end. It'll still be an order of magnitude or more slower than, say, returning an error code.
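(For anyone who wants to put a rough number on that on their own machine, here is a minimal micro-benchmark sketch comparing the two styles. The work functions, the iteration count and the 1-in-1000 failure rate are all made up for illustration, and it assumes a C clock() binding is available - std.c.time in the Phobos of that era, core.stdc.time in later D.)

import std.c.time;    // clock(); assumed binding, see note above
import std.c.stdio;   // printf

const int N = 10_000_000;

int workWithCode(int i) {
    return (i % 1000 == 0) ? -1 : 0;      // pretend 1 call in 1000 fails
}

void workWithThrow(int i) {
    if (i % 1000 == 0)
        throw new Exception("failed");    // same 1-in-1000 failure rate
}

void main() {
    int failures = 0;

    long t0 = clock();
    for (int i = 0; i < N; i++)
        if (workWithCode(i) != 0)
            failures++;
    long t1 = clock();

    failures = 0;
    for (int i = 0; i < N; i++) {
        try {
            workWithThrow(i);
        } catch (Exception e) {
            failures++;
        }
    }
    long t2 = clock();

    printf("error codes: %d ticks, exceptions: %d ticks\n",
           cast(int)(t1 - t0), cast(int)(t2 - t1));
}

Raising or lowering the failure rate shows where each approach wins: with failures rare enough the per-throw cost stops mattering, and with frequent failures it dominates.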


June 29, 2005
Hi Walter,

>> What is the cost (performance & memory-wise) of throwing and catching an exception in D? First, why do I ask this:
>
>D's exception handling mechanism is the same as C++'s is, as D and C++ share the same back end. It'll still be an order or magnitude or more slower than, say, returning an error code.

Thanks for the info. I was wondering whether this has to be the case.

- Is there a way to make exceptions somewhat lighter (and faster)?
- Will D always share the C++ backend for exceptions, or is it at least
theoretically possible to eventually substitute it with a more performant
version?
- What about making the Error class a full-blown stack unwind with stack trace
and everything (i.e. slow), and the Exception class something lighter akin to an
error code (i.e. fast)? I think this would be _immensely_ powerful and useful;
it would be a clean, simple way to unify all error handling into one framework.

Thanks for listening,
--AJG.

PS: I will post an example re: exceptions in a second, could you take a look-see?




================================
2B || !2B, that is the question.
June 29, 2005
Wow, that is a lot of info. I'm only about 20% into it; anyone care to make a succinct analysis and report it for us lazy people?

Also, there's very little mention of specific compilers (and in particular, their specific -Optimize switches). Wouldn't this make a heck of a lot of difference?

--AJG.



================================
2B || !2B, that is the question.
June 29, 2005
Hi Jarrett (and others too, please do read on),

Thanks for the info. I was hoping some of the things in the spec were outdated, and I still hope that's the case, because exceptions are too powerful to relegate to exceptional situations only.

I think in a way the spec contradicts itself. First, it lists all the things it
wants to eliminate:
- Returning a NULL pointer.
- Returning a 0 value.
- Returning a non-zero error code.
- Requiring errno to be checked.
- Requiring that a function be called to check if the previous function failed.

But then it goes on to say what you quoted, which makes it all irrelevant, because you are almost never going to encounter these "errors" anyway.

>But really, the question is - why would you design an application to throw an exception for anything _other_ than absolutely exceptional circumstances? I think it's more of a design issue than anything.

I disagree here. I think errors happen _all the time_, and they _are_ part of the normal program flow. If we use exceptions only as recommended (a very limited subset of errors, the exceptional "catastrophic" ones), we are back to the very "tedious error handling code" we set out to remove in the first place. It's back to the same ol' dirty clutter.

Let me give you an example. Suppose I am building an application that requires the user to input a sequence of integers, and it does some  processing with them; say, from the command-line. Here it is, with no "handling" in place at all:

# import std.conv;
# alias char[] string;
#
# int processInt(int input) {
#     int output;
#     // Processing.
#     return (output);
# }
#
# void main(string[] args) {
#     foreach (int index, string arg; args[1 .. length]) {
#         int temp = toInt(arg);
#         int result = processInt(temp);
#         printf("[#%d] %d => %d\n", index, temp, result);
#     }
# }

Now, in an ideal world, the user will enter perfectly formatted integers, without a single mistake, and it would all be dandy. But that's not going to happen. And you _know_ that, so you _expect_ an error of this kind. It _is_ part of the regular program flow.

Luckily for us, exceptions come into play and save this "quick-and-dirty" utility. If you enter a bunch of letters as an integer, toInt will not choke and segfault. It will throw an exception, which prevents it from going further and damaging anything or producing erroneous output.

Since we did not set up any handling (a catch), the program will exit if any input is badly formatted. However, we can take this a step further and make the utility more robust and more useful:

# import std.conv;
# alias char[] string;
#
# int processInt(int input) {
#     int output;
#     // Processing.
#     return (output);
# }
#
# void main(string[] args) {
#     foreach (int index, string arg; args[1 .. length]) {
#         try {
#             int temp = toInt(arg);
#             int result = processInt(temp);
#             printf("[#%d] %d => %d\n", index, temp, result);
#         } catch (Exception e) {
#             printf("Badly formatted input [#%d]=%.*s\n", index, arg);
#             printf("Exception says: %.*s\n", e.toString);
#         }
#     }
# }

Now, the program will catch any errors neatly, and continue normal operation (which is perfectly valid). Now ask yourself, how many times will a human being type a number incorrectly? I think we know the answer is _all the time_.

What does this mean? We should _use_ the exception mechanism to replace all error handling and checking, but in order to do this, it must be relatively fast. Maybe not as fast as a simple return code, but not an order of magnitude slower either. In this example it doesn't matter much, but what if it's a million inputs? or what if it's a server handling hundreds of thousands of requests?

This is all just IMHO, but I think it makes sense. Look at the traditional alternative:

You must manually check all inputs beforehand, before even attempting to convert them, to see if they are perfectly formatted. You must introduce an error code return and move the output to its own ref parameter, because the error code could be mistaken for correct output.
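For the sake of argument, here is roughly what that would look like - a minimal sketch only, written in the same old-style D as the examples above, with a hypothetical hand-rolled tryToInt (digits-only, no sign or overflow handling) standing in for the checking code you would have to write:

# // Hypothetical error-code version; tryToInt is made up for illustration.
# // Failure comes back through the return value, the converted integer
# // through an out parameter.
# alias char[] string;
#
# bool tryToInt(string s, out int result) {
#     if (s.length == 0) return false;
#     int value = 0;
#     foreach (char c; s) {
#         if (c < '0' || c > '9') return false;   // reject non-digits
#         value = value * 10 + (c - '0');
#     }
#     result = value;
#     return true;
# }
#
# void main(string[] args) {
#     foreach (int index, string arg; args[1 .. length]) {
#         int temp;
#         if (!tryToInt(arg, temp)) {
#             printf("Badly formatted input [#%d]=%.*s\n", index, arg);
#             continue;
#         }
#         printf("[#%d] %d\n", index, temp);
#     }
# }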

I think this is futile and redundant; you would be essentially re-writing the toInt function yourself just so that it won't throw an exception. Must one re-invent the wheel?

The point of this whole thing: Exceptions are good. They are simple to use and powerful. They make your code neater and reduce useless clutter. Moreover, they represent things that occur commonly. So, we should make use of them whenever possible, and D should make them fast so that we can do this without incurring a penalty.

Thanks for listening,
--AJG.

