June 12, 2018
On Tuesday, 12 June 2018 at 14:19:42 UTC, Steven Schveighoffer wrote:
> On 6/10/18 7:59 PM, Bauss wrote:
>> What is the point of nothrow if it can only detect when Exception is thrown and not when Error is thrown?
>> 
>> It seems like the attribute is useless because you can't really use it as protection to write bugless, safe code since the nasty bugs will pass by just fine.
>
> Array invalid index throws Error, and asserts throw Error. I'm not sure how much you could accomplish without such features. In fact, I'd consider these ESSENTIAL to writing safe code.
>
> Bug-free code is a myth :)
>
> -Steve

Both are cases where the compiler __could__ warn you about the possibility of them being thrown, thus allowing you to write code that makes sure it doesn't happen.

Ex.

int a = array[400];

Could yield a warning stating a possible out-of-bounds error.

Where:

int a = array.length >= 401 ? array[400] : 0;

Wouldn't because you're handling the case.

What I'm trying to say is that it would be nice to catch certain situations like that (of course not possible with all of them), because you'll end up having to handle them anyway after the error is thrown.

June 12, 2018
On 6/12/18 11:48 AM, Bauss wrote:
> On Tuesday, 12 June 2018 at 14:19:42 UTC, Steven Schveighoffer wrote:
>> On 6/10/18 7:59 PM, Bauss wrote:
>>> What is the point of nothrow if it can only detect when Exception is thrown and not when Error is thrown?
>>>
>>> It seems like the attribute is useless because you can't really use it as protection to write bugless, safe code since the nasty bugs will pass by just fine.
>>
>> Array invalid index throws Error, and asserts throw Error. I'm not sure how much you could accomplish without such features. In fact, I'd consider these ESSENTIAL to writing safe code.
>>
>> Bug-free code is a myth :)
>>
> 
> Both are cases where the compiler __could__ warn you about the possibility of them being thrown, thus allowing you to write code that makes sure it doesn't happen.
> 
> Ex.
> 
> int a = array[400];
> 
> Could yield a warning stating a possible out-of-bounds error.
> 
> Where:
> 
> int a = array.length >= 401 ? array[400] : 0;
> 
> Wouldn't because you're handling the case.
> 
> What I'm trying to say is that it would be nice to catch certain situations like that (of course not possible with all of them), because you'll end up having to handle them anyway after the error is thrown.

It's trivial to get into situations that are provably not going to throw an error, but for which the compiler is still going to insert the check. I think it would end up being more annoying than useful.
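
For instance (an illustrative sketch):

void example()
{
    int[] arr = [1, 2, 3];
    int sum;
    foreach (i; 0 .. arr.length)
    {
        // i is provably less than arr.length here, yet the runtime
        // bounds check on arr[i] is still emitted (unless disabled,
        // e.g. with -boundscheck=off).
        sum += arr[i];
    }
}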

If I had it my way, array bounds checks would not be an error, they would be an exception (and not be turned off ever for actual arrays).
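
A library-level sketch of that idea; the `at` helper here is hypothetical, not part of druntime or Phobos:

import std.exception : enforce;

E at(E)(E[] arr, size_t i) @safe
{
    // Throws a recoverable Exception instead of a fatal RangeError.
    enforce(i < arr.length, "array index out of bounds");
    return arr[i];
}

// usage: auto x = arr.at(400); // throws instead of aborting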

-Steve
June 12, 2018
On Tuesday, 12 June 2018 at 15:48:58 UTC, Bauss wrote:
>
> Ex.
>
> int a = array[400];
>
> Could yield a warning stating a possible out-of-bounds error.
>
> Where:
>
> int a = array.length >= 401 ? array[400] : 0;
>

That looks to me like a crash guard, similar to something like this:

struct Foo { /* ... */ }

void fn(Foo* foo)
{
  if (foo)
  {
    // do stuff with foo
  }
}

The program now crashes somewhere else when foo is null or invalid.
June 12, 2018
On Monday, 11 June 2018 at 00:47:27 UTC, Jonathan M Davis wrote:
> On Sunday, June 10, 2018 23:59:17 Bauss via Digitalmars-d-learn wrote:
> Errors are supposed to kill the program, not get caught. As such, why does it matter if it can throw an Error?
>
> Now, personally, I'm increasingly of the opinion that the fact that we have Errors is kind of dumb given that if it's going to kill the program, and it's not safe to do clean-up at that point, because the program is in an invalid state, then why not just print the message and stack trace right there and then kill the program instead of throwing anything? But unfortunately, that's not what happens, which does put things in the weird state where code can catch an Error even though it shouldn't be doing that.

Sorry for going off topic, but does this mean that I should revoke a private key every time a server crashes, because it's not possible to erase secrets from RAM?

June 12, 2018
On Tuesday, June 12, 2018 17:38:07 wjoe via Digitalmars-d-learn wrote:
> On Monday, 11 June 2018 at 00:47:27 UTC, Jonathan M Davis wrote:
> > On Sunday, June 10, 2018 23:59:17 Bauss via Digitalmars-d-learn
> > wrote:
> > Errors are supposed to kill the program, not get caught. As
> > such, why does it matter if it can throw an Error?
> >
> > Now, personally, I'm increasingly of the opinion that the fact that we have Errors is kind of dumb given that if it's going to kill the program, and it's not safe to do clean-up at that point, because the program is in an invalid state, then why not just print the message and stack trace right there and then kill the program instead of throwing anything? But unfortunately, that's not what happens, which does put things in the weird state where code can catch an Error even though it shouldn't be doing that.
>
> Sorry for going off topic, but does this mean that I should revoke a private key every time a server crashes, because it's not possible to erase secrets from RAM?

The fact that an Error was thrown means that either the program ran out of a resource that it requires to do its job and assumes is available such that it can't continue without it (e.g. failed memory allocation) and/or that the program logic is faulty. At that point, the program is in an invalid state, and by definition can't be trusted to do the right thing. Once the program is in an invalid state, running destructors, scope statements, etc. could actually make things much worse. They could easily be operating on invalid data and do entirely the wrong thing. Yes, there are cases where someone could look at what's happening and determine that based on what exactly went wrong, some amount of clean-up is safe, but without knowing exactly what went wrong and why, that's not possible.

And remember that regardless of what happens with Errors, other things can kill your program (e.g. segfaults), so if you want a robust server application, you have to deal with crashes regardless. You can't rely on your program always exiting cleanly or doing any proper clean-up, much as you want it to exit cleanly normally. Either way, if your program is crashing frequently enough that the lack of clean-up poses a real problem, then you have serious problems anyway. Certainly, if you're getting enough crashes that having to do something annoying like revoke a private key is happening anything but rarely, then you have far worse problems than having to revoke a private key or whatever else you might have to do because the program didn't shut down cleanly.

- Jonathan M Davis

June 12, 2018
On Monday, 11 June 2018 at 00:47:27 UTC, Jonathan M Davis wrote:
> Why do you care about detecting code that can throw an Error? Errors are supposed to kill the program, not get caught. As such, why does it matter if it can throw an Error?

Error is currently used for three different things:
* This is a problem that could occur in such a wide range of circumstances that it would make it difficult to use nothrow.
* This is a problem severe enough that almost every program would have to abort in these circumstances, so it's reasonable to abort every program here, and damn the few that could handle this type of problem.
* This is a problem that someone thinks you might not want to catch when you write `catch (Exception)`, even if it can't be thrown from many places and it wouldn't kill most programs.

As an example of the first: I have a service that uses length-prefixed messages on raw sockets. Someone tries to connect to this service with curl. The length of the message is read as 0x4854_5450_2F31_2E31 -- ASCII "HTTP/1.1" as an unsigned long.

(Or we read a 32-bit length, but we're running on a system with 128MB of RAM and overcommit turned off.)

The program might be in an invalid state if this allocation fails. It might not. This depends entirely on how it was written. The runtime is in a valid state. But the exception is OutOfMemoryError, which inherits from Error.

Similarly, RangeError. There's little conceptual difference between `try {} catch (RangeError) break` and `if (i >= length) break`. But forbidding dynamic array indexing in nothrow code would be rather extreme.
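
Spelled out, the two forms look like this (illustrative only; catching RangeError requires bounds checks to be compiled in, and is not recommended practice):

void sketch()
{
    import core.exception : RangeError;

    int[] data = [1, 2, 3];

    foreach (i; 0 .. 10)
    {
        try { auto x = data[i]; }
        catch (RangeError) { break; }
    }

    // versus the explicit check:
    foreach (i; 0 .. 10)
    {
        if (i >= data.length) break;
        auto x = data[i];
    }
}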

On the other hand, a Unicode decoding error is a UnicodeException, not a UnicodeError. I guess whoever wrote that thought invalid Unicode data was sufficiently more common than invalid values in length-prefixed data formats to produce a difference in kind. This isn't obviously wrong, but it does look like something that could use justification.
June 12, 2018
On Tuesday, June 12, 2018 23:32:55 Neia Neutuladh via Digitalmars-d-learn wrote:
> On Monday, 11 June 2018 at 00:47:27 UTC, Jonathan M Davis wrote:
> > Why do you care about detecting code that can throw an Error? Errors are supposed to kill the program, not get caught. As such, why does it matter if it can throw an Error?
>
> Error is currently used for three different things:
> * This is a problem that could occur in such a wide range of
> circumstances that it would make it difficult to use nothrow.

This is not a valid reason to use Error. Error is specifically for cases where failure is a bug in the program or where the program cannot recover from the failure and must be terminated. If a program is simply trying to be able to use nothrow, then it needs to use an error-handling mechanism other than exceptions. Not only is this how Errors are designed to work, but the fact that proper clean-up is not guaranteed when a non-Exception Throwable is thrown means that attempting to continue after anything other than an Exception is thrown is incredibly risky, potentially putting your program in an invalid state and causing who knows what bugs. And nothrow functions are a prime case where clean-up is definitely not done for non-Exceptions, because avoiding the extra code necessary to do that clean-up is one of the main reasons that nothrow exists in the first place.
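
For example, a nothrow function can report failure through its return value instead. A minimal sketch, with illustrative names:

struct ParseResult
{
    bool ok;
    int value;
}

ParseResult tryParse(string s) nothrow
{
    import std.conv : to;

    try
        return ParseResult(true, s.to!int);
    catch (Exception)
        return ParseResult(false, 0); // failure as a value, not a Throwable
}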

> * This is a problem severe enough that almost every program would
> have to abort in these circumstances, so it's reasonable to abort
> every program here, and damn the few that could handle this type
> of problem.
> * This is a problem that someone thinks you might not want to
> catch when you write `catch (Exception)`, even if it can't be
> thrown from many places and it wouldn't kill most programs.
>
> As an example of the first: I have a service that uses length-prefixed messages on raw sockets. Someone tries to connect to this service with curl. The length of the message is read as 0x4854_5450_2F31_2E31 -- ASCII "HTTP/1.1" as an unsigned long.
>
> (Or we read a 32-bit length, but we're running on a system with 128MB of RAM and overcommit turned off.)
>
> The program might be in an invalid state if this allocation fails. It might not. This depends entirely on how it was written. The runtime is in a valid state. But the exception is OutOfMemoryError, which inherits from Error.

It's possible to write programs that check and handle running out of memory, but most programs don't, and usually, if a program runs out of memory, it can't do anything about it and can't function properly at that point. As such, D's new was designed with the idea that failed memory allocations are fatal to the program, and any program that wants to be able to handle the case where it runs out of memory but somehow is able to continue to function shouldn't be using the GC for such allocations.

But programs that can even attempt to recover from running out of memory are going to be rare, and having running out of memory throw an Exception would likely cause all kinds of fun problems in the typical case, since if anything catches the Exception, that could easily trigger a chain reaction of nasty stuff. The catch almost certainly wouldn't be properly attempting to recover from running out of memory, and the program would almost certainly assume that allocations always succeeded rather than exiting on allocation failure. So, continuing at that point would effectively put the program in an invalid state. Also, if simply allocating memory could throw an Exception, then that would pretty much kill nothrow, since it would only be viable in @nogc code.

So, while treating all failed memory allocations as fatal is certainly a debatable choice, it does fit what most programs do quite well. But either way, the result is that anyone programming in D who might want to recover from memory allocation failures needs to take that design into account and really should be avoiding the GC for such allocations.

> Similarly, RangeError. There's little conceptual difference between `try {} catch (RangeError) break` and `if (i >= length) break`. But forbidding dynamic array indexing in nothrow code would be rather extreme.

The idea is that it's a bug in your code if you ever index an array with an index that's out-of-bounds. If there's any risk of indexing incorrectly, then the program needs to check for it, or it's a bug in the program. Most indices are not taken from program input, so treating them as input in the general case wouldn't really make sense - plus, of course, treating them as program input in the general case would mean using Exceptions, which would then kill nothrow. In the end, it just makes more sense to treat invalid indices as programming errors. So, in the cases where an index is actually derived from program input, the program must check the index, or it's a bug, and the result will be an Error being thrown.
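
A sketch of that pattern (the function and its parameters here are illustrative):

void handleRecord(string[] fields, size_t column)
{
    import std.exception : enforce;

    // column was derived from program input, so check it explicitly;
    // bad input becomes a recoverable Exception rather than a
    // RangeError (i.e. a bug) from fields[column].
    enforce(column < fields.length, "column index out of range");
    auto value = fields[column];
    // ... use value ...
}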

> On the other hand, a Unicode decoding error is a UnicodeException, not a UnicodeError. I guess whoever wrote that thought invalid Unicode data was sufficiently more common than invalid values in length-prefixed data formats to produce a difference in kind. This isn't obviously wrong, but it does look like something that could use justification.

The difference is that incorrectly indexing an array is considered a bug in your program, whereas bad Unicode is almost always bad program input. Bad input to a program is not a bug in the program. Assuming that the input is valid and treating it that way when it might be invalid would be a bug in the program, but code that validates program input is not buggy because it determines that the input is bad. As such, throwing an Error on bad Unicode doesn't make much sense. The only way that it would make sense to treat invalid Unicode as a bug in the program would be if it were reasonable to assume that all Unicode was validated before ever being passed to std.utf.decode or std.utf.stride.
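
For example, std.utf.validate throws a UTFException (an Exception subclass) on malformed input, which a program is expected to catch and handle. A minimal sketch:

void checkInput(string s)
{
    import std.utf : validate, UTFException;

    try
    {
        validate(s); // throws UTFException on malformed UTF-8
        // ... proceed with known-good input ...
    }
    catch (UTFException e)
    {
        // Bad program input, not a bug: report it and recover.
    }
}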

- Jonathan M Davis

June 13, 2018
On Tuesday, 12 June 2018 at 18:41:07 UTC, Jonathan M Davis wrote:
> On Tuesday, June 12, 2018 17:38:07 wjoe via Digitalmars-d-learn wrote:
>> On Monday, 11 June 2018 at 00:47:27 UTC, Jonathan M Davis wrote:
>> > On Sunday, June 10, 2018 23:59:17 Bauss via Digitalmars-d-learn
>> > wrote:
>> > Errors are supposed to kill the program, not get caught. As
>> > such, why does it matter if it can throw an Error?
>> >
>> > Now, personally, I'm increasingly of the opinion that the fact that we have Errors is kind of dumb given that if it's going to kill the program, and it's not safe to do clean-up at that point, because the program is in an invalid state, then why not just print the message and stack trace right there and then kill the program instead of throwing anything? But unfortunately, that's not what happens, which does put things in the weird state where code can catch an Error even though it shouldn't be doing that.
>>
>> Sorry for going off topic, but does this mean that I should revoke a private key every time a server crashes, because it's not possible to erase secrets from RAM?
>
> The fact that an Error was thrown means that either the program ran out of a resource that it requires to do its job and assumes is available such that it can't continue without it (e.g. failed memory allocation) and/or that the program logic is faulty. At that point, the program is in an invalid state, and by definition can't be trusted to do the right thing. Once

If memory serves, a failed malloc in C can easily be detected by comparing the returned pointer to null prior to accessing it. If the pointer is null, this only means that memory allocation for the requested size failed. I fail to see how this failed malloc attempt could have corrupted the entire program state.
Why would it be inherently unsafe to free memory and try to malloc again?
Maybe it's an optional feature and could just be disabled, or maybe it does mean that the program cannot continue.
I still don't see the need to force quit without the opportunity to decide whether it's an error to abort on or an error that can be fixed at run time.

> the program is in an invalid state, running destructors, scope statements, etc. could actually make things much worse. They could easily be operating on invalid data and do entirely the wrong thing. Yes, there are cases where someone could look at

Could, like erasing the hard drive? But that could have happened already; it could be the reason for the error in the first place. Destructors, scope statements, etc. could also still work flawlessly, and things could become worse because of not exiting gracefully: data not synced to disk, rollbacks not executed, vital shutdown commands omitted.

> what's happening and determine that based on what exactly went wrong, some amount of clean-up is safe, but without knowing exactly what went wrong and why, that's not possible.
>

But Errors have names, or codes, so it should be possible to figure out what went wrong and why. No?
In the case of an out-of-memory error, maybe the condition could be resolved by running the GC and retrying.
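
Roughly what that suggestion would look like (illustrative only; catching an Error skips clean-up, as the reply below discusses, and the GC already attempts a collection before throwing OutOfMemoryError):

int[] tryAllocate(size_t n)
{
    import core.memory : GC;
    import core.exception : OutOfMemoryError;

    try
        return new int[](n);
    catch (OutOfMemoryError)
    {
        GC.collect();        // free what we can
        return new int[](n); // retry once; rethrows on failure
    }
}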

I'm afraid I really can't grasp why it's the end of the world when an Error is thrown.

> And remember that regardless of what happens with Errors, other things can kill your program (e.g. segfaults), so if you want a robust server application, you have to deal with crashes regardless. You can't rely on your program always exiting cleanly or doing any proper clean-up, much as you want it to exit cleanly normally. Either way, if your program is crashing

It is possible to install a signal handler for almost every signal on POSIX, including segfault. The only signal you can't catch is signal 9 - SIGKILL, if memory serves.
So I could, for instance, install a clean-up handler that wipes secrets on segfault via memset, or a for loop, and then terminates.
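
A minimal POSIX-only sketch of that idea (illustrative; a real handler must restrict itself to async-signal-safe operations):

import core.stdc.signal : signal, SIGSEGV;
import core.stdc.stdlib : _Exit;
import core.stdc.string : memset;

__gshared ubyte[32] secretKey; // illustrative secret material

extern (C) void onSegfault(int sig) nothrow @nogc
{
    // Best-effort wipe of the secret, then terminate immediately.
    memset(secretKey.ptr, 0, secretKey.length);
    _Exit(128 + sig);
}

void installHandler()
{
    signal(SIGSEGV, &onSegfault);
}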

If program state, and not necessarily just in my own programs but in any program that stores secrets in RAM, is to be considered invalid when an Error is thrown, and I cannot rely on proper clean-up, then I must consider a secret leaked as soon as it is stored in RAM and an Error is thrown.

Therefore the only conclusion is that such a language is not safe to use for applications that handle sensitive information, such as encrypted email, digital signing, secure IM or anything that requires secrets to perform its work.
This is really sad, because I want to think that improved technology is actually better than its precursors.
What I would hope for is a mechanism to help the developer safely handle these error conditions, or at least terminate gracefully. Now, I understand that nothing can be done about program state that is actually messed up beyond repair and a program terminated by the OS, but just assuming all is FUBAR because of a thrown Error is cheap.

> frequently enough that the lack of clean-up poses a real problem, then you have serious problems anyway. Certainly, if you're getting enough crashes that having to do something annoying like revoke a private key is happening anything but rarely, then you have far worse problems than having to revoke a private key or whatever else you might have to do because the program didn't shut down cleanly.
>
> - Jonathan M Davis

I can't know whether the error was caused by accident or on purpose.
And I don't see how the frequency of failure changes anything about the fact. If a secret is left in RAM, it can be read or become included in a coredump. Whether it leaked the first time, or not at all, I wouldn't know, but a defensive approach would be to assume the worst case the first time.
Also, it doesn't just relate to secrets not being cleaned up; I could imagine something like sending out a UDP packet or a signal on a pin or something similar to have external hardware stop its operation. Emergency stop comes to mind.

Further, does it mean that a unittest runner should run each test case in its own process? Because an assert(false) for a not-yet-implemented test case would render all further test cases (theoretically) undefined, which would make the unittest{} blocks rather useless, too?

Sorry for off topic...

June 12, 2018
On Wednesday, June 13, 2018 02:02:54 wjoe via Digitalmars-d-learn wrote:
> On Tuesday, 12 June 2018 at 18:41:07 UTC, Jonathan M Davis wrote:
> > On Tuesday, June 12, 2018 17:38:07 wjoe via Digitalmars-d-learn
> >
> > wrote:
> >> On Monday, 11 June 2018 at 00:47:27 UTC, Jonathan M Davis
> >>
> >> wrote:
> >> > On Sunday, June 10, 2018 23:59:17 Bauss via
> >> > Digitalmars-d-learn
> >> > wrote:
> >> > Errors are supposed to kill the program, not get caught. As
> >> > such, why does it matter if it can throw an Error?
> >> >
> >> > Now, personally, I'm increasingly of the opinion that the fact that we have Errors is kind of dumb given that if it's going to kill the program, and it's not safe to do clean-up at that point, because the program is in an invalid state, then why not just print the message and stack trace right there and then kill the program instead of throwing anything? But unfortunately, that's not what happens, which does put things in the weird state where code can catch an Error even though it shouldn't be doing that.
> >>
> >> Sorry for going off topic, but does this mean that I should revoke a private key every time a server crashes, because it's not possible to erase secrets from RAM?
> >
> > The fact that an Error was thrown means that either the program ran out of a resource that it requires to do its job and assumes is available such that it can't continue without it (e.g. failed memory allocation) and/or that the program logic is faulty. At that point, the program is in an invalid state, and by definition can't be trusted to do the right thing. Once
>
> If memory serves, a failed malloc in C can easily be detected
> by comparing the returned pointer to null prior to accessing
> it. If the pointer is null, this only means that memory
> allocation for the requested size failed. I fail to see how
> this failed malloc attempt could have corrupted the entire
> program state.
> Why would it be inherently unsafe to free memory and try to
> malloc again?
> Maybe it's an optional feature and could just be disabled, or
> maybe it does mean that the program cannot continue.
> I still don't see the need to force quit without the
> opportunity to decide whether it's an error to abort on or an
> error that can be fixed at run time.

Most programs do not handle the case where they run out of memory and cannot continue at that point. For better or worse, D's GC was designed with that in mind, and it treats failed allocations as an Error. In the vast majority of cases, this is desirable behavior. In those cases when it isn't, alternate memory allocation schemes such as malloc can be used. But regardless of whether the decision to treat failed memory allocations as an Error was a good one or not, the fact remains that as soon as an Error is thrown, you lose the ability to deal with things cleanly, because full clean up is not done when an Error is thrown (and can't be due to things like how nothrow works). So, regardless of whether a failed memory allocation is a condition that can be recovered from in principle, the way that D handles GC allocations makes it unrecoverable in practice - at least as far as GC-allocated memory is concerned.
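
A sketch of that non-GC route (illustrative):

ubyte[] tryAlloc(size_t size) nothrow @nogc
{
    import core.stdc.stdlib : malloc;

    auto p = cast(ubyte*) malloc(size);
    if (p is null)
        return null; // recoverable: the caller decides what to do
    return p[0 .. size]; // caller owns this memory and must free it
}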

> > the program is in an invalid state, running destructors, scope statements, etc. could actually make things much worse. They could easily be operating on invalid data and do entirely the wrong thing. Yes, there are cases where someone could look at
>
> Could, like erasing the hard drive? But that could have happened already; it could be the reason for the error in the first place. Destructors, scope statements, etc. could also still work flawlessly, and things could become worse because of not exiting gracefully: data not synced to disk, rollbacks not executed, vital shutdown commands omitted.

The point is that once the program is in an invalid state, you have no way of knowing whether attempting to do anything else in the program (including running clean up code) is going to make matters better or worse. And since robust programs must be able to deal with crashes anyway, in general, it makes far more sense to forgo any clean up and avoid the risk of doing further damage. Whatever mechanisms are used to deal with a crashed program can then be used just like if the program crashed for any other reason.

> > what's happening and determine that based on what exactly went wrong, some amount of clean-up is safe, but without knowing exactly what went wrong and why, that's not possible.
>
> But Errors have names, or codes, so it should be possible to
> figure out what went wrong and why. No?
> In the case of an out-of-memory error, maybe the condition
> could be resolved by running the GC and retrying.
>
> I'm afraid I really can't grasp why it's the end of the world when an Error is thrown.

Errors are specifically for non-recoverable conditions where it is not considered desirable or reasonable to continue the program - be it because of a bug in the program's logic, a lack of a resource that the program needs and cannot function without, or any other condition where the program is in an invalid state. If the condition is intended to be recoverable, then an Exception is used, not an Error.

And of course, because the program is considered in an invalid state when an Error is thrown (and thus clean up code is skipped), attempting to recover means attempting to continue the program when it's in an invalid state, which could easily do more harm than good.

The entire distinction between Exception and Error has to do with whether the condition is considered to be something that could be recovered from or not. You can debate whether a particular condition should be treated as an Exception or Error, but the distinction between the two in terms of how they should be handled is quite clear. It's perfectly reasonable, acceptable, and desirable to catch Exceptions in order to handle the error condition and recover. It is not reasonable to attempt to catch Errors and attempt to recover. The simple fact that it is an Error makes it that way regardless of whether that particular condition should or should not have been treated as an Error in the first place.
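
In code, the split is simply this (doWork is an illustrative stand-in for application logic):

void doWork()
{
    throw new Exception("transient failure"); // a recoverable condition
}

void run()
{
    import std.stdio : writeln;

    try
        doWork();
    catch (Exception e)
        writeln("recovered: ", e.msg); // handle and continue
    // Errors are deliberately not caught here: they propagate and
    // terminate the program.
}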

> > And remember that regardless of what happens with Errors, other things can kill your program (e.g. segfaults), so if you want a robust server application, you have to deal with crashes regardless. You can't rely on your program always exiting cleanly or doing any proper clean-up, much as you want it to exit cleanly normally. Either way, if your program is crashing
>
> It is possible to install a signal handler for almost every
> signal on POSIX, including segfault. The only signal you can't
> catch is signal 9 - SIGKILL, if memory serves.
> So I could, for instance, install a clean-up handler that wipes
> secrets on segfault via memset, or a for loop, and then
> terminates.
>
> If program state, and not necessarily just in my own programs but in any program that stores secrets in RAM, is to be considered invalid when an Error is thrown, and I cannot rely on proper clean-up, then I must consider a secret leaked as soon as it is stored in RAM and an Error is thrown.
>
> Therefore the only conclusion is that such a language is not
> safe to use for applications that handle sensitive information,
> such as encrypted email, digital signing, secure IM or anything
> that requires secrets to perform its work.
> This is really sad, because I want to think that improved
> technology is actually better than its precursors.
> What I would hope for is a mechanism to help the developer
> safely handle these error conditions, or at least terminate
> gracefully. Now, I understand that nothing can be done about
> program state that is actually messed up beyond repair and a
> program terminated by the OS, but just assuming all is FUBAR
> because of a thrown Error is cheap.

Errors are specifically cases where it's not considered reasonable to handle the error condition. The whole point is that they can't be handled safely, and the best course of action is to kill the program without attempting any clean-up. But remember that Errors are supposed to be rare. If they're happening often enough to cause real problems in production code, then you have much bigger problems than whether the program is attempting clean-up on shutdown or not.

Also, because the fact that an Error is thrown means that the program is in an invalid state, even if it _did_ attempt clean-up, it would be a terrible idea to assume that that clean-up worked properly. So, any program that truly needs to do clean-up after an Error is thrown is really going to need to do something like perform its clean-up when the program restarts. And yes, there are potential problems there, but that's what happens when your program gets into an invalid state and is one of the reasons why you want to thoroughly test software before putting it into production.

And the reality of the matter is that you can't rely on clean-up in _any_ language. There is always a way to kill a program without it being able to do full clean-up, even if that means simply pulling the plug on the computer. Pulling the plug may not affect concerns about leaving data in memory, but you yourself just said that sigkill can't be handled. So, you have a case right there where you can't rely on clean-up happening - and it's not like it's hard to kill a program as long as your user has the permissions to do so. So, any program that really needs to care about clean-up is going to need a way to handle cases like that.

> > frequently enough that the lack of clean-up poses a real problem, then you have serious problems anyway. Certainly, if you're getting enough crashes that having to do something annoying like revoke a private key is happening anything but rarely, then you have far worse problems than having to revoke a private key or whatever else you might have to do because the program didn't shut down cleanly.
> >
> > - Jonathan M Davis
>
> I can't know whether the error was caused by accident or on
> purpose.
> And I don't see how the frequency of failure changes anything
> about the fact. If a secret is left in RAM, it can be read or
> become included in a coredump. Whether it leaked the first
> time, or not at all, I wouldn't know, but a defensive approach
> would be to assume the worst case the first time.
> Also, it doesn't just relate to secrets not being cleaned up; I
> could imagine something like sending out a UDP packet or a
> signal on a pin or something similar to have external hardware
> stop its operation. Emergency stop comes to mind.
>
> Further, does it mean that a unittest runner should run each test case in its own process? Because an assert(false) for a not-yet-implemented test case would render all further test cases (theoretically) undefined, which would make the unittest{} blocks rather useless, too?
>
> Sorry for off topic...

I would not advise calling functions in unittest blocks that just assert false. If they're called directly, it doesn't matter much, since clean-up is done for AssertErrors in unittest blocks, but if it's deeper in the call stack, then you could get into an invalid state. How much that matters depends on how independent your tests are, but it would be better to avoid it.

However, even if things do get into a screwed up state, this is a unittest run that we're talking about here. It's not in production and therefore presumably can't do real damage. All of those test failures should be fixed long before the code hits production, so any screwiness shouldn't cause long term problems. But still, in general, you really don't want to be trusting what happens with a unittest build after an assertion failure occurs outside of a unittest block. It might be valid, or it might not.

- Jonathan M Davis

June 13, 2018
On Wednesday, 13 June 2018 at 00:38:55 UTC, Jonathan M Davis wrote:
> It's possible to write programs that check and handle running out of memory, but most programs don't, and usually, if a program runs out of memory, it can't do anything about it and can't function properly at that point.

Simulations that run out of memory are likely unable to recover from OutOfMemoryError. Transactional programs like webservers are likely to run out of memory due to an unusually large request, and could in principle recover by failing just that one request.

> The idea is that it's a bug in your code if you ever index an array with an index that's out-of-bounds. If there's any risk of indexing incorrectly, then the program needs to check for it, or it's a bug in the program. Most indices are not taken from program input, so treating them as input in the general case wouldn't really make sense

The case I find is almost invariably a hard-coded index into input data, like a CSV file that is supposed to have ten columns but only has eight.

This is often a bug in my program simply because most exceptions I encounter are bugs in my program.
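
Guarding against that up front is straightforward. A sketch, with illustrative names:

void processLine(string line)
{
    import std.array : split;
    import std.exception : enforce;

    auto fields = line.split(",");
    // Check the input's shape before hard-coded indexing, so a short
    // row becomes a recoverable Exception, not a RangeError.
    enforce(fields.length >= 10, "expected at least 10 columns");
    auto value = fields[7];
    // ...
}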

> - plus, of course, treating them as program input in the general case would mean using Exceptions, which would then kill nothrow.

Which goes back to my point about the problems that can be caused by too wide a range of conditions being Errors.