September 16, 2014
On Fri, 01 Aug 2014 01:07:55 -0400, Jonathan M Davis <jmdavisProg@gmx.com> wrote:

> On Thursday, 31 July 2014 at 20:49:18 UTC, Timon Gehr wrote:
>> On 07/31/2014 09:37 PM, Jonathan M Davis wrote:
>>> disable contracts, turn assert(0) into a halt
>>> instruction, and disable bounds checking in @system and @trusted code.
>>> So, if you want to keep the assertions and contracts and whatnot in,
>>
>> Unfortunately, if used pervasively, assertions and contracts and whatnot may actually drag down a program's speed to a degree that breaks the deal.
>>
>> Disabling assertions (and whatnot), assuming assertions to be true (and disabling whatnot) and leaving all assertions and whatnot in are different trade-offs, of which assuming all assertions to be true is the most dangerous one. Why hide this behaviour in '-release'?
>
> I'm afraid that I don't see the problem. If you want assertions left in your release/production builds, then don't use -release. If you want them removed, then use -release. Are you objecting to the fact that the compiler can do further optimizations based on the fact that the assertions are presumed to be true if they're removed? I really don't see any problem with that. You're screwed regardless if the assertions would have failed. By their very nature, if an assertion would have failed, your program is in an invalid state, so if you don't want to risk having code run that would have failed an assertion, then just don't compile with -release. And if you are willing to assume that the assertions won't fail and have them disabled in your release build, then you might as well gain any extra optimizations that can be made from assuming that the assertion is true. You're already making that assumption anyway and potentially letting your program enter an invalid state.

This is not tenable. I realized when developing a library with classes that not using -release means every virtual call is preceded and followed by a call to assert(obj), which calls Object.invariant -- itself a virtual call. Even if you don't define any invariant, it's still called (final classes may fare better, but I'm not sure).

The cost for this is tremendous. You may as well not use classes.
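To make the mechanism concrete, here is a minimal sketch (the Widget class is hypothetical): without -release, the invariant runs on entry to and exit from every public member function call, and assert(obj) triggers it explicitly as well:

import std.stdio;

class Widget
{
    int value;

    invariant()
    {
        // Checked on entry to and exit from every public member function
        // in non-release builds -- and by any explicit assert(obj).
        assert(value >= 0);
    }

    void bump()   // effectively: check invariant; ++value; check invariant
    {
        ++value;
    }
}

void main()
{
    auto w = new Widget;
    foreach (i; 0 .. 1_000_000)
        w.bump();   // two invariant checks per iteration without -release
    writeln(w.value);
}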

-Steve
September 17, 2014
On Tuesday, 16 September 2014 at 00:33:47 UTC, Steven Schveighoffer wrote:
> You may as well not use classes.

Always a good idea ;)
September 18, 2014
On Tuesday, 16 September 2014 at 00:33:47 UTC, Steven Schveighoffer wrote:
> The cost for this is tremendous. You may as well not use classes.

Looks like ldc has a separate option to turn off invariants.
September 18, 2014
On Thu, 18 Sep 2014 08:57:20 -0400, Kagamin <spam@here.lot> wrote:

> On Tuesday, 16 September 2014 at 00:33:47 UTC, Steven Schveighoffer wrote:
>> The cost for this is tremendous. You may as well not use classes.
>
> Looks like ldc has a separate option to turn off invariants.

That's a good thing. I'm actually thinking this should be in DMD as well. Calling invariants by default, when almost no class defines one, is really costly. It's also a virtual call, which means inlining is not possible.
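One rough, hypothetical way to see the cost (the Counter class is made up; this uses std.datetime.stopwatch from a current Phobos): time a tight loop of virtual calls, compiled once with and once without -release:

import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio : writeln;

class Counter
{
    int n;
    void inc() { ++n; }   // virtual; wrapped in invariant checks without -release
}

void main()
{
    auto c = new Counter;
    auto sw = StopWatch(AutoStart.yes);
    foreach (i; 0 .. 10_000_000)
        c.inc();
    writeln(sw.peek());
}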

-Steve
September 18, 2014
On 01/08/2014 05:12, Walter Bright wrote:
> On 7/31/2014 2:21 PM, Sean Kelly wrote:
>> Thoughts?
>
> If a process detects a logic error, then that process is in an invalid
> state that is unanticipated and unknown to the developer. The only
> correct solution is to halt that process, and all processes that share
> memory with it.
>
> Anything less is based on faith and hope. If it is medical, flight
> control, or banking software, I submit that operating on faith and hope
> is not acceptable.
>
> If it's a dvr or game, who cares :-) My dvr crashes regularly needing a
> power off reboot.

"If it's a game, who cares" -> Oh let's see... let's say I'm playing a game, and then there's a bug (which happens often). What would I prefer to happen:

* a small (or big) visual glitch, like pixels out of place, corrupted textures, or 3D model of an object becoming deformed, or the physics of some object behaving erratically, or some broken animation.

* the whole game crashes, and I lose all my progress?

So Walter, which one do you think I prefer? Me, and the rest of the millions of gamers out there?

So yeah, we care, and we are the consumers of an industry worth more than the movie industry. For a software developer to dismiss these concerns is silly and unprofessional.

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
September 18, 2014
On Thu, 18 Sep 2014 17:05:31 +0100
Bruno Medeiros via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> * a small (or big) visual glitch, like pixels out of place, corrupted textures, or 3D model of an object becoming deformed, or the physics of some object behaving erratically, or some broken animation.
or the whole game renders itself unbeatable due to some corrupted data, but you have no way to know it until you've made it to the Final Boss and can never win that fight. ah, so charming!


September 18, 2014
On Thu, Sep 18, 2014 at 05:05:31PM +0100, Bruno Medeiros via Digitalmars-d wrote:
> On 01/08/2014 05:12, Walter Bright wrote:
> >On 7/31/2014 2:21 PM, Sean Kelly wrote:
> >>Thoughts?
> >
> >If a process detects a logic error, then that process is in an invalid state that is unanticipated and unknown to the developer. The only correct solution is to halt that process, and all processes that share memory with it.
> >
> >Anything less is based on faith and hope. If it is medical, flight control, or banking software, I submit that operating on faith and hope is not acceptable.
> >
> >If it's a dvr or game, who cares :-) My dvr crashes regularly needing a power off reboot.
> 
> "If it's a game, who cares" -> Oh let's see... let's say I'm playing a game, and then there's a bug (which happens often). What would I prefer to happen:
> 
> * a small (or big) visual glitch, like pixels out of place, corrupted textures, or 3D model of an object becoming deformed, or the physics of some object behaving erratically, or some broken animation.
> 
> * the whole game crashes, and I lose all my progress?
[...]

What if the program has a bug that corrupts your save game file, but because the program ignores these logic errors, it just bumbles onward and destroys all your progress *without* you even knowing about it until much later?

(I have actually experienced this firsthand, btw. I found it *far* more frustrating than losing all my progress -- with a crash, at least I can restore the game to the last savepoint and have confidence that it isn't irreparably corrupted! I almost threw the computer out the window once when, after a game crash, I restored the savefile, only to discover a few hours later that, due to corruption in the savefile, it was impossible to win the game after all. Logic errors should *never* have made it past the point of detection.)


T

-- 
People tell me I'm stubborn, but I refuse to accept it!
September 18, 2014
On Thu, Sep 18, 2014 at 07:13:48PM +0300, ketmar via Digitalmars-d wrote:
> On Thu, 18 Sep 2014 17:05:31 +0100
> Bruno Medeiros via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> 
> > * a small (or big) visual glitch, like pixels out of place, corrupted textures, or 3D model of an object becoming deformed, or the physics of some object behaving erratically, or some broken animation.
> or the whole game renders itself unbeatable due to some corrupted data, but you have no way to know it until you've made it to the Final Boss and can never win that fight. ah, so charming!

Exactly!!!!

Seriously, this philosophy of ignoring supposedly "minor" bugs in software is what led to the sad state of software today, where nothing is reliable and people have come to expect that software will inevitably crash, and that needing to reboot an OS every now and then just to keep things working is acceptable. Yet, strangely enough, people would scream bloody murder if a car behaved like that.


T

-- 
Skill without imagination is craftsmanship and gives us many useful objects such as wickerwork picnic baskets.  Imagination without skill gives us modern art. -- Tom Stoppard
September 18, 2014
> The point is that using these enforce() statements means that these methods cannot be nothrow, which doesn't seem particularly nice if it can be avoided. Now, on the one hand, one could say that, quite obviously, these methods cannot control their input.  But on the other hand, it's reasonable to say that these methods' input can and should never be anything other than 100% controlled by the programmer.
>
> My take is that, for this reason, these should be asserts and not enforce() statements.  What are your thoughts on the matter?

Asserts are disabled by the -release option. If you want to do the check, but don't want to raise an exception, you can define a special function like this:
import std.stdio;

void enforceNoThrow(bool pred, string msg, string file = __FILE__, uint line = __LINE__)
{
    if (!pred)
    {
        stderr.writefln("Fatal error: %s at %s:%s", msg, file, line);
        assert(0);
    }
}

void doSomething(int min, int middle, int max)
{
    enforceNoThrow(middle >= min && middle <= max, "middle out of bounds");
}

void main()
{
    doSomething(1, 5, 3); // prints the error message to stderr and terminates the program (middle = 5 > max = 3)
}

This way is better for library code than asserts. Asserts should be used for catching internal bugs, but incorrect caller code is an external bug (from the library's point of view).
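For contrast, here is a hedged sketch of that split (the decode/internalStep names are made up): enforce for input arriving from outside the library, assert for conditions the library itself is supposed to guarantee:

import std.exception : enforce;
import std.stdio : writeln;

int decode(const(ubyte)[] data)
{
    // External input: validate with enforce -- the check survives -release.
    enforce(data.length >= 4, "input too short");

    int result = internalStep(data);

    // Internal consistency: assert -- stripped by -release.
    assert(result >= 0, "internalStep broke its own guarantee");
    return result;
}

// (hypothetical helper) sum of the first four bytes -- always >= 0
int internalStep(const(ubyte)[] data)
{
    return data[0] + data[1] + data[2] + data[3];
}

void main()
{
    const(ubyte)[] payload = [1, 2, 3, 4];
    writeln(decode(payload)); // 10
}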

September 19, 2014
On 18/09/14 18:49, H. S. Teoh via Digitalmars-d wrote:

> What if the program has a bug that corrupts your save game file, but
> because the program ignores these logic errors, it just bumbles onward
> and destroys all your progress *without* you even knowing about it until
> much later?

Happened to me with Assassin's Creed 3, twice. It did crash, and when I started the game again the save files were corrupted. Although I have no way of knowing if it crashed when the bug happened or much later.

-- 
/Jacob Carlborg