February 24, 2015
On 2/24/2015 11:07 AM, deadalnix wrote:
> The page fault strategy is used by ML family language's GC and they get really
> good performance out of it. That being said, in ML like language most things are
> immutable, so they are a

I wrote a gc for Java that used the page fault strategy. It was slower and so I went with another strategy.
February 25, 2015
On Tuesday, 24 February 2015 at 23:49:21 UTC, Walter Bright wrote:
> On 2/24/2015 11:07 AM, deadalnix wrote:
>> The page fault strategy is used by ML family language's GC and they get really
>> good performance out of it. That being said, in ML like language most things are
>> immutable, so they are a
>
> I wrote a gc for Java that used the page fault strategy. It was slower and so I went with another strategy.

That is fairly obvious. Java is not exactly putting emphasis on immutability...
February 25, 2015
On Tuesday, 24 February 2015 at 23:02:14 UTC, ponce wrote:
> One (big) problem about error code is that they do get ignored, much too often. It's like manual memory management, everyone think they can do it without errors, but mostly everyone fail at it (me too, and you too).

Explicit return values for errors are usually annoying, yes, but it is possible to have a language construct that isn't ignorable. That would mean you have to explicitly state that you are ignoring the error. E.g.

  open_file("file2")?.write("stuff")  // only write if file is ready
  open_file("file1")?.write("ffuts")  // only write if file is ready

  if ( error ) log_error() // log if some files were not ready

or:

  f = open_file(…)
  g = open_file(…)
  h = open_file(…)
  if( error(f,g,h) ) log_error()
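For what it's worth, something close to a non-ignorable error value can be sketched even in library code. Here's a rough C++ illustration (all names are invented for the sketch, not from the thread): [[nodiscard]] makes the compiler complain whenever the caller drops the result, so ignoring an error has to be written out explicitly.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of a non-ignorable error value. [[nodiscard]]
// makes the compiler warn whenever a Result is dropped on the floor;
// discarding must be spelled out as an explicit cast to void.
template <typename T>
struct [[nodiscard]] Result {
    T value{};
    bool ok = false;
    std::string error;
};

// Hypothetical stand-in for the open_file from the pseudocode above.
Result<int> open_file(const std::string& name) {
    if (name == "file1" || name == "file2")
        return {42, true, ""};              // pretend file handle 42
    return {0, false, "file not ready"};
}
```

With this, `(void)open_file("x");` is the explicit "I'm ignoring this" the post asks for, while a bare `open_file("x");` draws a compiler warning.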


Also, with async programming (futures/promises), errors will be delayed, so you might be better off having them as part of the object.

February 25, 2015
On 24 February 2015 at 04:04, Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>
> I think RC is an important tool on our panoply. More so than Walter. But I have to say you'd do good to understand his arguments better; it doesn't seem you do.

The only argument I can't/haven't addressed is the exception argument, which I have granted to Walter on authority. I think the other issues he's presented (at various times over the years, so I'm not sure what the current state is) can probably be addressed by the language. Scope alone seems like it could win us a lot of the problem cases, but we'd need to experiment with it for a year or so before we'll know.

It does annoy me that I can't comment on the exceptions case, but the
fact is I've never typed 'throw' before, and never scrutinised the
codegen.
I have meant to take some time to familiarise myself with whatever
appears out the other end of the compiler for a long time, but all
compilers seem to be different, architectures are probably different
too, and I just haven't found the time for something that has no use
to me other than to argue this assertion with some authority.
Walter has said I should "just compile some code and look at it",
which probably sounds fine if you have any idea what you're looking
for, or an understanding of the differences between compilers. I
don't, it's a fairly deep and involved process to give myself
comprehensive knowledge of exceptions, and it has no value to me
outside this argument.

That said, I'd still be surprised if it were a terminal argument. Is it really a hard impasse, or is it just a big/awkward burden on the exceptional path? Surely the performance of the unwind path is irrelevant? Does it bleed into the normal execution path?

My whole point is: since we seem to have no options for meaningful
progress in GC land, as evidenced by years of stagnation, I figure
this is the only practical direction we have.
I recognise there has been some GC action recently, but I haven't seen
any activity that changes the fundamental problems.


>> What do you want me to do? I use manual RC throughout my code, but the experience in D today is identical to C++. I can confirm that it's equally terrible to any C++ implementation I've used. We have no tools to improve on it.
>
>
> Surely you have long by now written something similar to std::shared_ptr by now. Please paste it somewhere and post a link to it, thanks. That would be a fantastic discussion starter.

Not really. I don't really use C++; we generally switched from C++
back to C about 8 years ago.
Here's something public of the type I am more familiar with:
https://github.com/TurkeyMan/fuji/blob/master/dist/include/d2/fuji/resource.d
It's not an excellent example; it's quick and dirty, hasn't received
much more attention than getting it working, but I think it's fairly
representative of a typical pattern, especially when interacting with
C code.
I'm not aware of any reasonable strategy I could use to eliminate the
ref fiddling. 'scope' overloads would have solved the problem nicely.

COM is also an excellent candidate for consideration. If COM works
well, then I imagine anything should work.
Microsoft's latest C++ presents a model for this that I'm generally
happy with: a distinct RC pointer type. We could do better by having an
implicit cast to scope(T*) (aka borrowing), which C++ can't express;
scope(T*) would be to T^ and T* as const(T*) is to T* and
immutable(T*).
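A rough C++ sketch of that relationship (all names hypothetical, and the counting deliberately minimal): the owning pointer carries the count, while the borrowed view converts implicitly from it and never touches the count, which is exactly why passing it down a call tree needs no ref fiddling.

```cpp
#include <cassert>

// Hypothetical sketch of "distinct RC pointer + borrow": Rc<T> owns a
// reference count; Scope<T> is a non-owning view that Rc<T> converts
// to implicitly, the analogue of scope(T*) borrowing from T^.
template <typename T>
struct Rc {
    T* ptr = nullptr;
    int* count = nullptr;

    explicit Rc(T* p) : ptr(p), count(new int(1)) {}
    Rc(const Rc& o) : ptr(o.ptr), count(o.count) { ++*count; }
    ~Rc() {
        if (count && --*count == 0) { delete ptr; delete count; }
    }
    Rc& operator=(const Rc&) = delete; // kept minimal for the sketch
    int refs() const { return *count; }
};

template <typename T>
struct Scope {
    T* ptr; // borrowed, never owned: no count to maintain
    Scope(const Rc<T>& rc) : ptr(rc.ptr) {} // implicit, no increment
};

// A callee taking Scope<T> promises not to keep the reference, so the
// caller's count is untouched across the call.
int use(Scope<int> s) { return *s.ptr; }
```

Passing an `Rc<int>` to `use` converts it to a borrow with zero count traffic; only a genuine copy of the `Rc` bumps the count.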


>> And whatever, if it's the same as C++, I can live with it. I've had
>> that my whole career (although it's sad, because we could do much
>> better).
>> The problem, as always, is implicit allocations, and allocations from
>> 3rd party libraries. While the default allocator remains incompatible
>> with workloads I care about, that isolates us from virtually every
>> library that's not explicitly written to care about my use cases.
>> That's the situation in C++ forever. We're familiar with it, and it's
>> shit. I have long hoped we would move beyond that situation in D
>> (truly the #1 fantasy I had when I first dove in to D 6 years back),
>> but it's a massive up-hill battle to even gain mindshare, regardless
>> of actual implementation. There's no shared vision on this matter, the
>> word is basically "GC is great, everyone loves it, use an RC lib".
>
>
> It doesn't seem that way to me at all. Improving resource management is right there on the vision page for H1 2015. Right now people are busy fixing regressions for 2.067 (aiming for March 1; probably we won't make that deadline). In that context, posturing and stomping the ground that others work for you right now is in even more of a stark contrast.

I haven't said that. Review my first post: I never made any claims
about what should/shouldn't be done, or made any demands of any sort.
I just don't have any faith that GC can get us where we want to be. I
was cautiously optimistic for some years, but I lost faith.
I'm also critical (disappointed even) of the treatment of scope. I
hope I'm wrong, like I hoped I was wrong about GC...

There have been lots of people come in here and say "I won't use D because GC", and I've been defensive against those claims, despite being at high risk of similar tendencies myself. I don't think you can say I didn't give GC a fair chance to prove me wrong.


>> How long do we wait for someone to invent a fantastical GC that solves our problems and works in D?
>
>
> This elucubration belongs only to you. Nobody's waiting on that. Please read http://wiki.dlang.org/Vision/2015H1 again.

This is a bit of a red herring; the roadmap has no mention of ARC, or of a practical substitution for the GC. This discussion was originally about ARC over the GC, specifically.

I firmly understand the push for @nogc. I have applauded that effort many times.
I have said in the past, though, that I'm not actually a strong
supporter of @nogc, and I spoke critically of it initially. I'm still
not really thrilled, but I do agree it's a necessary building block,
and I'm very happy it's a key focus. I'm ultimately concerned, however,
that it will lead to a place where the separation of users into 2 camps
(my primary ongoing criticism) is firmly established, and justified by
the effort expended to achieve that end.
Again, don't get me wrong: I agree we need this, because at this point
I see no other satisfactory outcome, but I'm disappointed that we
failed to achieve something more ambitious in terms of memory
management (garbage collection in whatever form, GC or RC), and that
@nogc users will lose a subset of the language.

I'm familiar with the world where some/most libraries aren't available
to me, because people tend to use the most convenient memory
management strategy by default, and that is incompatible with my
environment.
That is the world we will have in D. It's 'satisfactory', i.e. it's
workable for me and my people, but it's not ideal; it's a lost
opportunity, and it's disappointing. Perhaps I've just been
unrealistically hopeful for too long?

Perhaps it will fall to the allocator API to save us from the situation I envision, but I don't have a picture for that in my head. However I try to imagine the end product, it feels like it will be awkward and complicated. The likely result is that people won't use it (just sticking with default GC allocation) unless they are compelled by a high level of experience, or by their intended user base, again suggesting a separation into 2 worlds.

I hope I'm completely wrong. Really!


>> The fact is, people who are capable of approaching this problem in terms of actual code will never even attempt it until there's resounding consensus that it's worth exploring.
>
>
> Could you please let me know how we can rephrase this paragraph on http://wiki.dlang.org/Vision/2015H1:
>
> =============
> Memory Management
>
> We aim to improve D's handling of memory. That includes improving the
> garbage collector itself and also making D eminently usable with limited or
> no use of tracing garbage collection. We aim to make the standard library
> usable in its entirety without a garbage collector. Safe code should not
> require the presence of a garbage collector.
> =============
>
> Could you please let me know exactly what parts you don't understand or agree with so we can change them. Thanks.

I understand what it says, and I generally agree with it.
I agree that @nogc will give us a C-like foundation to work outwards
from, and that is a much better place than where we are today, so I
have come to support the direction.

I wouldn't complain if efficient RC was on the roadmap, but I agree it's outside the scope for the immediate future.

If it were said that I 'disagree' with some part of it (which isn't
true), it would be that it risks leading to an end that I'm not sure
is in our best interest: we will arrive at C vs C++.
As I said above, and many, many times in the past, I see the @nogc
effort leading to a place where libraries are firmly divided between 2
worlds. My criticisms of that kind have never been directly addressed.

My suspicion is that this is mainly motivated by the fact that it is the simplest and lowest-level path; it will give the best low-level building blocks, and I think that's probably a good thing. But I can also imagine more ambitious paths, like replacing the GC with an ambitious ARC implementation as the OP raised (if it's workable; I'm not convinced either way), which would require massive R&D and almost certainly lead to some radical changes. I think resistance is predicated mainly on a rejection of such radical changes, and that's fair enough, but is that rejection of radical change worth dividing D into 2 worlds like C/C++? I don't know of any discussion on this value tradeoff... not to mention the significant loss of language features to the @nogc camp.

I'm not asking for any changes. I just gave an opinion to the OP. I can see there is RC work going on now, that's good. I expect (well, am hopeful) it will eventually lead to some form of ARC. We'll see.


Re-reading my first 8 posts, I still feel they are perfectly
reasonable. I'm not sure where exactly it is that I started 'a thing',
but my feeling is it was only when you gave a long and somewhat
aggressive reply, and then made a cheap attack on my character, that I
began to reply in kind.
I also feel like I'm forced to reply to every point, otherwise I
perpetuate this caricature you're ascribing to me (where I
disappear when it gets 'hard', or rather, more time consuming than I
have time for). Which is often true; it does become more time
consuming than I have time for... does that mean I'm not entitled to
input on this forum?
I'll return to my hole now.
February 25, 2015
On 24 February 2015 at 10:36, deadalnix via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Monday, 23 February 2015 at 09:51:07 UTC, Manu wrote:
>>
>> This is going to sound really stupid... but do people actually use exceptions regularly?
>
>
> I'd say exception are exceptional in most code. That being said, unless the compiler can PROVE that no exception is gonna be thrown, you are stuck with having to generate a code path for unwinding that decrement the refcount.
>
> It means that you'll have code bloat (not that bad IMO, unless you are embeded) but more importantly, it means that most increment/decrement can't be optimized away in the regular path, as you must get the expected count in the unwinding path.

Can the unwind path not be aware that the non-exceptional path had an
increment optimised away? Surely the unwind only needs to perform
matching decrements where an increment was actually generated...
I can easily imagine the burden ARC may place on the exceptional path,
but I can't visualise the influence it has on the normal path.


> Moreover, as you get some work to do on the unwind path, it becomes impossible to do various optimizations like tail calls.

Tail calls are nice, but I don't think I'd lament the loss, especially when nothrow (which is inferred these days) will give them right back.

Embedded code that can't handle the bloat would be required to use
nothrow to meet requirements. That's fine, it's a known constraint of
embedded programming.
Realtime/hot code may also need to use nothrow to guarantee all
optimisations are possible... which would probably be fine in most
cases? I'd find that perfectly acceptable.


> I think Walter is right when he says that switft dropped exception because of ARC.

But what was the definitive deal breaker? Was there one thing, like
it's actually impossible... or was it a matter of cumulative small
things leading them to make a value judgement?
We might make a different value judgement given very different circumstances.


>> I've never used one. When I encounter code that does, I just find it really annoying to debug. I've never 'gotten' exceptions. I'm not sure why error codes are insufficient, other than the obvious fact that they hog the one sacred return value.
>
>
> Return error code have usually been an usability disaster for the simple reason that the do nothing behavior is to ignore the error.

I generally find this preferable to spontaneous crashing. That is, assuming the do-nothing behaviour with exceptions is for them to unwind all the way to the top, which I think is the comparable 'do nothing' case.

I've pondered using 'throw' in D, but the thing that usually kills it
for me is that I can't have a free catch() statement; it needs to be
structured with a try.
I just want to write catch() at a random line where I want unwinding
to stop if anything before it went wrong, i.e. an implicit try{}
around all the code in the scope that comes before.
I don't know if that would tip me over the line, but it would go a
long way to making it more attractive. I just hate what exceptions do
to your code. But also, debuggers are terrible at handling them.


> The second major problem is that you usually have no idea how where the error check is done, forcing the programmer to bubble up the error where it is meaningful to handle it.

That's true, but I've never felt like exceptions are a particularly
good solution to that problem.
I don't find they make the code simpler. In fact, I find they
produce almost the same amount of functional code, except with
additional indentation and brace spam, syntactic baggage, and bonus
allocations.

Consider:
if(tryAndDoThing() == Error.Failed)
  return Error.FailedForSomeReason;

if(tryNextThing() == Error.Failed)
  return Error.FailedForAnotherReason;

Compared to:
try
{
  doThing();
  nextThing();
}
catch(FirstKindOfException e)
{
  throw new Ex(Error.FailedForSomeReason);
}
catch(SecondKindOfException e)
{
  throw new Ex(Error.FailedForAnotherReason);
}

It's long and bloated (4 lines became 13 lines!), it allocates, is not
@nogc, etc.
Sure, it might be that you don't always translate inner exceptions to
high-level concepts like this and just let the inner exception bubble
up... but I often do want to have the meaningful translation of
errors, so this must at least be fairly common.
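For the record, C++ at least offers a middle ground for exactly this translation pattern: std::throw_with_nested chains the low-level exception onto the high-level one, so the meaningful translation doesn't discard the original context. A small sketch (the function and message names are invented):

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>
#include <string>

// Hypothetical low-level operation that fails.
void doThing() { throw std::runtime_error("disk failed"); }

// Translate the inner failure into a high-level error while keeping
// the original exception chained underneath.
void highLevelOp() {
    try {
        doThing();
    } catch (...) {
        std::throw_with_nested(std::runtime_error("FailedForSomeReason"));
    }
}

// Walk the chain and collect all messages, outermost first.
std::string describe(const std::exception& e) {
    std::string msg = e.what();
    try {
        std::rethrow_if_nested(e);
    } catch (const std::exception& inner) {
        msg += " <- " + describe(inner);
    }
    return msg;
}
```

The catch site then sees both the high-level concept and the root cause, without the hand-written rethrow boilerplate above.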


>> I'll agree though that this can't be changed at this point in the game. You say that's a terminal case? Generating code to properly implement a decrement chain during unwind impacts on the non-exceptional code path?
>>
>
> Yes as you can't remove increment/decrement pairs as there are 2 decrement
> path (so there is pair).

I don't follow. If you remove the increment, then the decrement is
just removed in both places...?
I can't imagine how it's different from making sure struct destructors
are called, which must already work, right?


>> I agree. I would suggest if ARC were proven possible, we would like, switch.
>>
>
> I'd like to see ARC support in D, but I do not think it makes sense as a default.

Then we will have 2 distinct worlds. There will be 2 kinds of D code, and they will be incompatible... I think that's worse.


>>> 3. Memory safety is a requirement for any ARC proposal for D. Swift
>>> ignores
>>> memory safety concerns.
>>
>>
>> What makes RC implicitly unsafe?
>
>
> Without ownership, one can leak reference to RCed object that the RC system do not see.

Obviously, ARC is predicated on addressing ownership. I think that's very clear.
February 25, 2015
On 25 February 2015 at 09:02, ponce via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Monday, 23 February 2015 at 09:51:07 UTC, Manu wrote:
>>
>> This is going to sound really stupid... but do people actually use
>> exceptions regularly?
>> I've never used one. When I encounter code that does, I just find it
>> really annoying to debug. I've never 'gotten' exceptions. I'm not sure
>> why error codes are insufficient, other than the obvious fact that
>> they hog the one sacred return value.
>
>
>
> I used to feel like that with exceptions. It's only after a position involving lots of legacy code that they revealed their value.
>
> One (big) problem about error code is that they do get ignored, much too often. It's like manual memory management, everyone think they can do it without errors, but mostly everyone fail at it (me too, and you too).
>
> Exceptions makes a program crash noisily so errors can't be ignored.

This is precisely my complaint, though. In a production environment
where there are tens or hundreds of people working concurrently, it is
absolutely unworkable for code to be crashing all the time for random
reasons that I don't care about.
I've experienced these noisy crashes before, relating to things that I
don't care about at all. It just interrupts my work, and also whoever
else it is that I have to involve to address the problem before I can
continue.

That is a real cost in time and money. I find that situation to be absolutely unacceptable. I'll take the possibility that an ignored error code may not result in a hard-crash every time.


> More importantly, ignoring an error code is invisible, while ignoring
> exceptions require explicit discarding and some thought.
> Simply put, correctly handling error code is more code and more ugly.
> Ignoring exception is no less code than ignoring error codes, but at least
> it will crash.

Right, but as above, this is an expense quantifiable in time and
dollars that I just can't bring myself to balance against this
reasoning. I also prefer my crashes to occur at the point of failure.
Exceptions tend to hide the problem, in my experience.
I find it ridiculously hard to debug exception-laden code. I have
no idea how you're meant to do it; it's really hard to find the source
of the problem!

The only way I know is to use break-on-throw, which never works in exception-laden code, since exception-happy code typically throws for common error cases that happen all the time too.


> Secondly, one real advantage is pure readability improvement. The normal path looks clean and isn't cluttered with error code check. Almost everything can fail!
>
> writeln("Hello");  // can fail

This only fails if runtime state is already broken. The problem is elsewhere, and a hard crash is preferred right here.

> auto f = File("config.txt"); // can fail

Opening files is a very infrequent operation, and one of the poster child examples of where you would always check the error return. I'm not even slightly upset by checking the return value from fopen.


> What matter in composite operations is whether all of them succeeded or not. Example: if the sequence of operations A-B-C failed while doing B, you are interested by the fact A-B-C has failed but not really that B failed specifically.

Not necessarily true. You often want to report what went wrong, not just that "it didn't work".


> So you would have to translate error codes from one formalism
> to another. What happens next is that error codes become conflated in the
> same namespace and reused in other unrelated places. Hence, error codes from
> library leak into code that should be isolated from it.

I don't see that exceptions are any different in this way. I see internal exceptions bleed up to much higher levels, where they've totally lost context, all the time.


> Lastly, exceptions have a hierarchy and allow to distinguish between bugs
> and input errors by convention.
> Eg: Alan just wrote a function in a library that return an error code if it
> fails. The user program by Betty pass it a null pointer. This is a logic
> error as Alan disallowed it by contract. As Walter repeatedly said us, logic
> errors/bugs are not input errors and the only sane way to handle them is to
> crash.
> But since this function error interface is an error code, Alan return
> something like ERR_POINTER_IS_NULL_CONTRACT_VIOLATED since well, no other
> choice. Now the logic error code gets conflated with error codes
> corresponding to input errors (ERR_DISK_FAILED), and both will be handled
> similarly by Betty for sure, and the earth begin to crackle.

I'm not sure quite what you're saying here, but I agree with Walter.
In the case of a hard logic error, the sane thing to do is crash
(i.e. assert), not throw...
I want my crash at the point of failure, not somewhere else.


> Unfortunately exceptions requires exception safety, they may block some optimizations, and they may hamper debugging. That is usually a social blocker for more exception adoption in C++ circles, but once a group really get it, like RAII, you won't be able to take it away from them.

Performance inhibition is one factor in why I've never used exceptions, but it's certainly not the decisive reason.
February 25, 2015
On Thu, Feb 26, 2015 at 02:39:28AM +1000, Manu via Digitalmars-d wrote:
> On 25 February 2015 at 09:02, ponce via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> > On Monday, 23 February 2015 at 09:51:07 UTC, Manu wrote:
> >>
> >> This is going to sound really stupid... but do people actually use exceptions regularly?

I wouldn't say I use them *all* the time, but I do use them. In properly-designed code, they should be pretty rare, so I only really have to deal with them in strategic places, not sprinkled throughout the code.


> >> I've never used one. When I encounter code that does, I just find it really annoying to debug. I've never 'gotten' exceptions. I'm not sure why error codes are insufficient, other than the obvious fact that they hog the one sacred return value.
> >
> > I used to feel like that with exceptions. It's only after a position involving lots of legacy code that they revealed their value.
> >
> > One (big) problem about error code is that they do get ignored, much too often. It's like manual memory management, everyone think they can do it without errors, but mostly everyone fail at it (me too, and you too).
> >
> > Exceptions makes a program crash noisily so errors can't be ignored.
> 
> This is precisely my complaint though. In a production environment where there are 10's, 100's of people working concurrently, it is absolutely unworkable that code can be crashing for random reasons that I don't care about all the time.

It doesn't have to crash if the main loop catches them and logs them to a file where the relevant people can monitor for signs of malfunction.


> I've experienced before these noisy crashes relating to things that I don't care about at all. It just interrupts my work, and also whoever else it is that I have to involve to address the problem before I can continue.

Ideally, it should be the person responsible who's seeing the exceptions, rather than you (if you were not responsible). I see this more as a sign that something is wrong with the deployment process -- doesn't the person(s) committing the change test his changes before committing, which should make production-time exceptions a rare occurrence?


> That is a real cost in time and money. I find that situation to be absolutely unacceptable. I'll take the possibility that an ignored error code may not result in a hard-crash every time.

And the possibility of malfunction caused by ignored errors leading to (possibly non-recoverable) data corruption is more acceptable?


> > More importantly, ignoring an error code is invisible, while ignoring exceptions require explicit discarding and some thought. Simply put, correctly handling error code is more code and more ugly.  Ignoring exception is no less code than ignoring error codes, but at least it will crash.
> 
> Right, but as above, this is an expense quantifiable in time and dollars that I just can't find within me to balance by this reasoning. I also prefer my crashes to occur at the point of failure. Exceptions tend to hide the problem in my experience.
>
> I find it so ridiculously hard to debug exception laden code. I have no idea how you're meant to do it, it's really hard to find the source of the problem!

Isn't the stacktrace attached to the exception supposed to lead you to the point of failure?


> Only way I know is to use break-on-throw, which never works in exception laden code since exception-happy code typically throw for common error cases which happen all the time too.

"Exception-happy" sounds like wrong use of exceptions. No wonder you have problems with them.


> > Secondly, one real advantage is pure readability improvement. The normal path looks clean and isn't cluttered with error code check. Almost everything can fail!
> >
> > writeln("Hello");  // can fail
> 
> This only fails if runtime state is already broken. Problem is elsewhere. Hard crash is preferred right here.

The problem is, without exceptions, it will NOT crash! If stdout is full, for example (e.g., the logfile has filled up the disk), it will just happily move along as if nothing is wrong, and at the end of the day, the full disk will cause some other part of the code to malfunction, but half of the relevant logs aren't there because they were never written to disk in the first place, but the code didn't notice because error codes were ignored. (And c'mon, when was the last time you checked the error code of printf()? I never did, and I suspect you never did either.)
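This contrast can be shown with iostreams, which expose both styles on the same stream (the helper names below are invented for the sketch): with plain status flags, a failed write is invisible unless the caller remembers to check, while a stream with exceptions() enabled refuses to fail silently.

```cpp
#include <cassert>
#include <ios>
#include <ostream>
#include <sstream>

// Error-code style: the write succeeds or silently sets a flag; the
// caller must remember to make this check after every write.
bool write_checked(std::ostream& os, const char* s) {
    os << s;
    return static_cast<bool>(os); // the check everyone forgets
}

// Exception style: a failing stream throws instead of failing
// silently, so the error cannot simply be overlooked.
bool write_throws(std::ostream& os, const char* s) {
    try {
        os.exceptions(std::ios::badbit); // throws at once if already bad
        os << s;
        return true;
    } catch (const std::ios_base::failure&) {
        return false;
    }
}
```

The first function models the printf situation: nothing forces the caller to look at the result, so a full disk goes unnoticed.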


> > auto f = File("config.txt"); // can fail
> 
> Opening files is a very infrequent operation, and one of the poster child examples of where you would always check the error return. I'm not even slightly upset by checking the return value from fopen.

I believe that was just a random example; it's unfair to pick on the specifics. The point is, would you rather write code that looks like this:

	LibAErr_t a_err;
	LibBErr_t b_err;
	LibCErr_t c_err;
	ResourceA *res_a;
	ResourceB *res_b;
	ResourceC *res_c;

	res_a = acquireResourceA();
	if ((a_err = firstOperation(a, b, c)) != LIB_A_OK)
	{
		freeResourceA();
		goto error;
	}

	res_b = acquireResourceB();

	if ((b_err = secondOperation(x, y, z)) != LIB_B_OK)
	{
		freeResourceB();
		freeResourceA();
		goto error;
	}

	res_c = acquireResourceC();

	if ((c_err = thirdOperation(p, q, r)) != LIB_C_OK)
	{
		freeResourceB();
		freeResourceC(); // oops, subtle bug here
		freeResourceA();
		goto error;
	}
	...
	error:
		// deal with problems here

or this:

	try {
		auto res_a = acquireResourceA();
		scope(failure) freeResourceA();

		firstOperation(a, b, c);

		auto res_b = acquireResourceB();
		scope(failure) freeResourceB();

		secondOperation(x, y, z);

		auto res_c = acquireResourceC();
		scope(failure) freeResourceC();

		thirdOperation(p, q, r);

	} catch (Exception e) {
		// deal with problems here
	}


> > What matter in composite operations is whether all of them succeeded or not.  Example: if the sequence of operations A-B-C failed while doing B, you are interested by the fact A-B-C has failed but not really that B failed specifically.
> 
> Not necessarily true. You often want to report what went wrong, not just that "it didn't work".

Isn't that what Exception.msg is for?

Whereas if func1() calls 3 functions, which respectively returns errors of types libAErr_t, libBErr_t, libCErr_t, what should func1() return if, say, the 3rd operation failed? (Keep in mind that if we call functions from 3 different libraries, they are almost guaranteed to return their own error code enums which are never compatible with each other.) Should it return libAErr_t, libBErr_t, or libCErr_t? Or should it do a switch over possible error codes and translate them to a common type libABCErr_t?

From my experience, what usually happens is that func1() will just
return a single failure code if *any* of the 3 functions failed -- it's just too tedious and unmaintainable otherwise -- which means you *can't* tell what went wrong, only that "it didn't work". My favorite whipping boy is the "internal error". Almost every module in my work project has their own error enum, and almost invariably the most common error returned by any function is the one corresponding with "internal error". Any time a function calls another module and it fails, "internal error" is returned -- because people simply don't have the time/energy to translate error codes from one module to another and return something meaningful. So whenever there is a problem, all we know is that "internal error" got returned by some function. As to where the actual problem is, who knows? There are 500 places where "internal error" might have originated from, but we can't tell which of them it might be because almost *everything* returns "internal error".

Whereas with exceptions, .msg tells you exactly what the error message was. And if the libraries have dedicated exception types, you can even catch each type separately and deal with them accordingly, as opposed to getting a libABCErr_t and then having to map that back to the original error code type in order to understand what the problem was.
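To make the contrast with a merged libABCErr_t concrete, here is a minimal C++ sketch (library and message names invented): each library throws its own exception type, the caller distinguishes them by catch clause alone, and .what() still carries the specific detail that an "internal error" code would have flattened away.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Two hypothetical per-library exception types.
struct LibAError : std::runtime_error { using std::runtime_error::runtime_error; };
struct LibBError : std::runtime_error { using std::runtime_error::runtime_error; };

// The caller can tell the libraries apart by type, with the original
// message preserved, instead of receiving one collapsed error code.
std::string classify(int which) {
    try {
        if (which == 0) throw LibAError("bad config value");
        throw LibBError("connection dropped");
    } catch (const LibAError& e) {
        return std::string("A: ") + e.what();
    } catch (const LibBError& e) {
        return std::string("B: ") + e.what();
    }
}
```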

I find it hard to believe that you appear to be saying that you have trouble pinpointing the source of the problem with exceptions, whereas you find it easy to track down the problem with error codes. IME it's completely the opposite.


> > So you would have to translate error codes from one formalism to another. What happens next is that error codes become conflated in the same namespace and reused in other unrelated places. Hence, error codes from the library leak into code that should be isolated from it.
> 
> I don't see that exceptions are any different in this way. I see internal exceptions bleed to much higher levels where they've totally lost context all the time.

Doesn't the stacktrace give you the context?


> > Lastly, exceptions have a hierarchy and allow one to distinguish between bugs and input errors by convention.
> >
> > Eg: Alan just wrote a function in a library that returns an error code if it fails. The user program by Betty passes it a null pointer. This is a logic error, as Alan disallowed it by contract. As Walter has repeatedly told us, logic errors/bugs are not input errors, and the only sane way to handle them is to crash.
> >
> > But since this function's error interface is an error code, Alan returns something like ERR_POINTER_IS_NULL_CONTRACT_VIOLATED since, well, there's no other choice. Now the logic error code gets conflated with error codes corresponding to input errors (ERR_DISK_FAILED), both will be handled similarly by Betty for sure, and the earth begins to crackle.
> 
> I'm not sure quite what you're saying here, but I agree with Walter. In case of hard logic error, the sane thing to do is crash (ie, assert), not throw...
>
> I want my crash at the point of failure, not somewhere else.
[...]

Again, doesn't the exception stacktrace tell you exactly where the point of failure is? Whereas an error code that has percolated up the call stack 20 levels and undergone various mappings (har har, who does that), or collapsed to generic, nondescript values ("internal error" -- much more likely), is unlikely to tell you anything more than "it didn't work". No information about which of the 500 functions 20 calls down the call graph might have been responsible.


T

-- 
You are only young once, but you can stay immature indefinitely. -- azephrahel
February 25, 2015
On Wednesday, 25 February 2015 at 16:39:38 UTC, Manu wrote:
> This is precisely my complaint though. In a production environment
> where there are 10's, 100's of people working concurrently, it is
> absolutely unworkable that code can be crashing for random reasons
> that I don't care about all the time.
> I've experienced before these noisy crashes relating to things that I
> don't care about at all. It just interrupts my work, and also whoever
> else it is that I have to involve to address the problem before I can
> continue.

I see what the problem can be. My feeling is that it is in part a workplace/codebase problem. Bugs that prevent other people from working aren't usually high on the roadmap. This also happens with assertions, and then we have to disable them to get work done; though assertions create no debate.

If the alternative is ignoring error codes, I'm not sure it's better. Anything could happen, and then the database must be cleaned up.

>
> That is a real cost in time and money. I find that situation to be
> absolutely unacceptable. I'll take the possibility that an ignored
> error code may not result in a hard-crash every time.
>

True; e.g. you can ignore every OpenGL error, as the API is kind of hardened against it.

>
> Right, but as above, this is an expense quantifiable in time and
> dollars that I just can't find within me to balance by this reasoning.

To be fair, this cost has to be balanced with the cost of not finding a defect before sending it to the customer.


> I also prefer my crashes to occur at the point of failure. Exceptions
> tend to hide the problem in my experience.

I would just put a breakpoint on the offending throw. Then it's no different.

> I find it so ridiculously hard to debug exception laden code. I have
> no idea how you're meant to do it, it's really hard to find the source
> of the problem!

I've seen this, and it is true of code that needs to retry something periodically inside a giant try/catch.
try/catch at the wrong levels, "recovering" too many things, can also make it harder. Some (most?) things should really fail hard rather than resist errors.

I think gamedev is more of an exception, since "continue anyway" might be a successful strategy in some capacity. Games are supposed to be "finished" at release, which is something customers thankfully don't ask of most software.


> Only way I know is to use break-on-throw, which never works in
> exception laden code since exception-happy code typically throw for
> common error cases which happen all the time too.

What I do is break-on-uncaught. Break-on-throw is more often than not hopelessly noisy, like you said.

But I only deal with mostly reproducible bugs.

> Opening files is a very infrequent operation, and one of the poster
> child examples of where you would always check the error return. I'm
> not even slightly upset by checking the return value from fopen.

But how happy are you to bubble the error condition up the call stack?

I'm of mixed opinion on that; seeing error codes _feels_ nice and we can say to ourselves "I'm treating the error carefully". Much like we can say to ourselves "I'm carefully managing memory" when we manage memory manually. Somehow I like pedestrian work that really feels like work.

But that doesn't mean we do it efficiently or even in the right way, just that we _think_ it's done right.


>
>> What matter in composite operations is whether all of them succeeded or not.
>> Example: if the sequence of operations A-B-C failed while doing B, you are
>> interested by the fact A-B-C has failed but not really that B failed
>> specifically.
>
> Not necessarily true. You often want to report what went wrong, not
> just that "it didn't work".

Fortunately exceptions allow bringing up any information about what went wrong.

We often see on the Internet that "errors are best dealt with where they happen".

I could not disagree more.
Where "fopen" fails, I have no context to know that I'm there because I was trying to save the game.

Error codes force you to bubble up the error to be able to say "saving the game has failed", and then, since you have used error codes without error strings, you cannot even say "saving the game has failed because fopen failed to open <filename>".

Now instead of just bubbling up error codes I must bubble up error messages too (I've done it). Great!

Errors-should-be-dealt-with-where-they-happen is a complete fallacy.

> I don't see that exceptions are any different in this way. I see internal
> exceptions bleed to much higher levels where they've totally lost
> context all the time.

In my experience this is often due to sub-systems saving the ass of others by ignoring errors in the first place, instead of either crashing or rebooting the faulty sub-system.


>
> I'm not sure quite what you're saying here, but I agree with Walter.
> In case of hard logic error, the sane thing to do is crash (ie,
> assert), not throw...

assert throws an Error for this purpose.

>
> Performance inhibition is a factor in considering that I've never used
> exceptions, but it's certainly not the decisive reason.

And it's a valid concern. Some program parts also seldom need exceptions, since they mostly deal with memory and do little I/O.

February 25, 2015
Am 24.02.2015 um 10:53 schrieb Walter Bright:
> On 2/24/2015 1:30 AM, Tobias Pankrath wrote:
>> Are the meaningful performance comparisons
>> between the two pointer types that would enable us to estimate
>> how costly emitting those barriers in D would be?
>
> Even 10% makes it a no-go. Even 1%.
>
> D has to be competitive in the most demanding environments. If you've
> got a server farm, 1% speedup means 1% fewer servers, and that can add
> up to millions of dollars.

You're seeing this completely one-sided. Even if write barriers make code slower by 10%, it's a non-issue if the GC collections get faster by 10% as well. Then on average the program will run at the same speed.
February 25, 2015
On 2/25/15 1:27 PM, Benjamin Thaut wrote:
> Am 24.02.2015 um 10:53 schrieb Walter Bright:
>> On 2/24/2015 1:30 AM, Tobias Pankrath wrote:
>>> Are the meaningful performance comparisons
>>> between the two pointer types that would enable us to estimate
>>> how costly emitting those barriers in D would be?
>>
>> Even 10% makes it a no-go. Even 1%.
>>
>> D has to be competitive in the most demanding environments. If you've
>> got a server farm, 1% speedup means 1% fewer servers, and that can add
>> up to millions of dollars.
>
> You're seeing this completely one-sided. Even if write barriers make code
> slower by 10%, it's a non-issue if the GC collections get faster by 10% as
> well. Then on average the program will run at the same speed.

Hmmmm... not sure the math works out that way. -- Andrei