October 07, 2014
On 10/07/2014 10:09 PM, Walter Bright wrote:
> On 10/7/2014 12:44 PM, Timon Gehr wrote:
>> On 10/07/2014 09:26 PM, Walter Bright wrote:
>>> On 10/7/2014 6:56 AM, Timon Gehr wrote:
>>>> On 10/06/2014 01:01 AM, Walter Bright wrote:
>>>>> Relying on program state after entering an unknown state is
>>>>> undefined by
>>>>> definition.
>>>>
>>>> What definition?
>>>
>>> How can one define the behavior of an unknown state?
>>>
>>
>> Well, how do you define the behaviour of a program that will be fed an
>> unknown
>> input? That way.
>>
>> I don't really understand what this question is trying to get at. Just
>> define
>> the language semantics appropriately.
>>
>> Your reasoning usually goes like
>>
>> a certain kind of event you assume to be bad -> bug -> unknown state ->
>> undefined behaviour.
>
>
> What defined behavior would you suggest would be possible after an
> overflow bug is detected?

At the language level, there are many possibilities. Just look at what type safe languages do. It is not true that this must lead to UB by a "definition" commonly agreed upon by participants in this thread.
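
For a concrete example of such a defined behaviour (a minimal sketch in D; checkedAdd is an illustrative name, not an existing library function): detect the overflow and throw, instead of letting execution continue in a corrupted state. The exception is ordinary, defined, catchable behaviour.

import std.exception : assertThrown, enforce;

/// Adds two ints, throwing instead of silently wrapping on overflow.
int checkedAdd(int a, int b)
{
    immutable long wide = cast(long) a + b; // widen so the sum itself cannot overflow
    enforce(wide >= int.min && wide <= int.max, "integer overflow detected");
    return cast(int) wide;
}

unittest
{
    assert(checkedAdd(1, 2) == 3);
    assertThrown(checkedAdd(int.max, 1)); // overflow has defined, testable behaviour
}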
October 07, 2014
On Mon, Oct 6, 2014 at 6:19 PM, Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> On 10/6/14, 4:46 PM, Jeremy Powers via Digitalmars-d wrote:
>
>> On Mon, Oct 6, 2014 at 7:50 AM, Andrei Alexandrescu via Digitalmars-d
>>     I'm thinking a simple key-value store Variant[string] would
>>     accommodate any state needed for differentiating among exception
>>     kinds whenever that's necessary.
>>
>>
>> And 'kinds' is a synonym for 'types' - You can have different kinds of problems, so you raise them with different kinds of exceptions.
>>
>> s/kind/type/g and the question is: why not leverage the type system?
>>
>
> I've used "kinds" intentionally there. My basic thesis here is I haven't seen any systematic and successful use of exception hierarchies in 20 years. In rare scattered cases I've seen a couple of multiple "catch"es, and even those could have been helped by the use of a more flat handling. You'd think in 20 years some good systematic use of the feature would come forward. It's probably time to put exception hierarchies in the "emperor's clothes" bin.
>
>  For a consumer-of-something-that-throws, having different types of
>> exceptions for different things with different data makes sense.  You have to switch on something to determine what data you can get from the exception anyway.
>>
>
> Oh yah I know the theory. It's beautiful.
>
>
I'm not talking theory (exclusively).  From a practical standpoint, if I ever need information from an exception I need to know what information I can get.  If different exceptions have different information, how do I tell what I can get?  Types fit this as well as or better than anything I can think of.
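
To make the contrast concrete, here is a sketch of both styles in D (the exception names and the handler are hypothetical, purely for illustration):

import std.variant;

// Typed style: the type itself documents exactly what data is available.
class ConfigException : Exception
{
    string path; // the config file that caused the failure
    this(string msg, string path)
    {
        super(msg);
        this.path = path;
    }
}

// Flat style: one exception type carrying a Variant[string] payload.
class FlatException : Exception
{
    Variant[string] info;
    this(string msg, Variant[string] info)
    {
        super(msg);
        this.info = info;
    }
}

void handleBadConfig(string path) { /* hypothetical recovery logic */ }

void caller()
{
    try
    {
        // ... call something that may throw ...
    }
    catch (ConfigException e)
    {
        handleBadConfig(e.path); // the compiler guarantees e.path exists
    }
    catch (FlatException e)
    {
        // With the flat style, what data is present is a runtime question.
        if (auto p = "path" in e.info)
            handleBadConfig((*p).get!string);
    }
}

With the typed style the "what can I get?" question is answered statically; with the flat style it can only be answered by probing the store at run time.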



>      It's commonly accepted that the usability scope of OOP has gotten
>>     significantly narrower since its heydays. However, surprisingly, the
>>     larger community hasn't gotten to the point to scrutinize
>>     object-oriented error handling, which as far as I can tell has never
>>     delivered.
>>
>>
>> Maybe, but what fits better?  Errors/Exceptions have an inherent hierarchy, which maps well to a hierarchy of types.  When catching an Exception, you want to guarantee you only catch the kinds (types) of things you are looking for, and nothing else.
>>
>
> Yah, it's just that most/virtually all of the time I'm looking for all. And nothing else :o).
>
>
Most/virtually all of the time I am looking only for the kind of exceptions I expect and can handle.  If I catch an exception that I was not expecting, this is a program bug (and may result in undefined behavior, memory corruption, etc).  Catching all is almost _never_ what I want.


I have not found a whole lot of use for deep exception hierarchies, but some organization of types/kinds of exceptions is needed.  At the very least you need to know if it is the kind of exception you know how to handle - and without a hierarchy, you need to know every single specific kind of exception anything you call throws.  Which is not tenable.


October 07, 2014
On Mon, Oct 6, 2014 at 6:19 PM, Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> On 10/6/14, 4:46 PM, Jeremy Powers via Digitalmars-d wrote:
>
>> On Mon, Oct 6, 2014 at 7:50 AM, Andrei Alexandrescu via Digitalmars-d
>>     I'm thinking a simple key-value store Variant[string] would
>>     accommodate any state needed for differentiating among exception
>>     kinds whenever that's necessary.
>>
>>
>> And 'kinds' is a synonym for 'types' - You can have different kinds of problems, so you raise them with different kinds of exceptions.
>>
>> s/kind/type/g and the question is: why not leverage the type system?
>>
>
> I've used "kinds" intentionally there. My basic thesis here is I haven't seen any systematic and successful use of exception hierarchies in 20 years. In rare scattered cases I've seen a couple of multiple "catch"es, and even those could have been helped by the use of a more flat handling. You'd think in 20 years some good systematic use of the feature would come forward. It's probably time to put exception hierarchies in the "emperor's clothes" bin.


Sorry, forgot to respond to this part.

As mentioned, I'm not a defender of hierarchies per se - but I've not seen any alternate way to accomplish what they give.  I need to know that I am catching exceptions that I can handle, and not catching exceptions I can't/won't handle.  Different components and layers of code have different ideas of what can and should be handled.

Without particular exception types, how can I know that I am only catching what is appropriate, and not catching and swallowing other problems?


October 07, 2014
On 10/07/2014 06:47 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:
> On Tuesday, 7 October 2014 at 08:19:15 UTC, Nick Sabalausky wrote:
>> But regardless: Yes, there *is* a theoretical side to logic, but logic
>> is also *extremely* applicable to ordinary everyday life. Even more so
>> than math, I would argue.
>
> Yep, however what the human brain is really bad at is reasoning about
> probability.

Yea, true. Probability can be surprisingly unintuitive even for people well-versed in logic.

Ex: A lot of people have trouble understanding that getting "heads" in a coin flip many times in a row does *not* increase the likelihood of the next flip being "tails". And there's a very understandable reason why that's difficult to grasp. I've managed to grok it, yet even I (try as I may) just cannot truly grok the Monty Hall problem. I *can* reliably come up with the correct answer, but *never* through an actual mental model of the problem, *only* by very, very carefully thinking through each step of the problem. And that never changes no matter how many times I think it through.

That really impressed me about the one student depicted in the "21" movie (the one based around the real-life card-counting team): I don't know how much of it was Hollywood artistic license, but when he demonstrated a crystal-clear *intuitive* understanding of the Monty Hall problem - that was *impressive*.


> I agree that primary school should cover modus ponens,
> modus tollens and how you can define equivalence in terms of two
> implications. BUT I think you also need to experiment informally with
> probability at the same time and experience how intuition clouds our
> thinking. It is important to avoid the fallacies of black/white
> reasoning that comes with propositional logic.
>
> Actually, one probably should start with teaching "ad hoc"
> object-oriented modelling in primary schools. Turning what humans are
> really good at, abstraction, into something structured and visual. That
> way you also learn that when you argue a point you are biased, you
> always model certain limited projections of the relations that are
> present in the real world.
>

Interesting points, I hadn't thought of any of that.

>
> Educational research shows that students can handle theory much better
> if they view it as useful. Students have gone from being very bad at
> math, to doing fine when it was applied to something they cared about
> (like building something, or predicting the outcome of soccer matches).
>

Yea, definitely. Self-intimidation has a lot to do with it too. I've talked to several math teachers who say they've had very good success teaching algebra to students who struggled with it *just* by replacing the letter-based variables with empty squares.

People are very good at intimidating themselves into refusing to even think. It's not just students, it's people in general, heck I've seen both my parents do it quite a bit: "Nick! Something popped up on my screen! I don't know what to do!!" "What does it say?" "I dunno! I didn't read it!! How do I get rid of it?!?" /facepalm


> Internalized motivation is really the key to progress in school,

This is something I've always felt needs to be drilled into the head of every educator. Required knowledge for educators, IMO.

Things like "gold stars" are among the worst things you can do. It really drives the point home that it's all tedium and has no *inherent* value. Of course, in the classroom, most of it usually *is* tedium with little inherent value...


> which
> is why the top-down fixed curriculum approach is underperforming
> compared to the enormous potential kids have. They are really good at
> learning stuff they find fun (like games).
>

Yea, and that really proves just how bad the current approach is.

Something I think is an appropriate metaphor for that (and bear with me on this):

Are you familiar with the sitcom "It's Always Sunny in Philadelphia"? It was created by a group of young writers/actors who were just getting their start. After the first season, it had impressed Danny DeVito enough (apparently he was a fan of the show) that he joined the cast.

In an interview with one of the show's creators (on the Season 1&2 DVDs), this co-creator talked about how star-struck they were about having Danny DeVito on board, and how insecure/panicked he was about writing for DeVito...until he realized: (his words, more or less) "Wait a minute, this is *Danny DeVito* - If we can't make **him** funny, then we really suck!"

A school that has trouble teaching kids is like a writer who can't make Danny DeVito funny. Learning is what kids *do*! How much failure does it take to mess *that* up?

"Those who make a distinction between education and entertainment don't know the first thing about either."


> Yes, social factors are more important in the real world than optimal
> decision making,

I was quite disillusioned when I finally discovered that as an adult. Intelligence, knowledge and ability don't count for shit 90+% of the time (in fact, they're frequently a liability - people *expect* group-think and get very agitated and self-righteous when you don't conform to it). Intelligence/knowledge/ability *should* matter a great deal, and people *think* they do. But they don't.

> unless you build something that can fall apart in a
> spectacular way that makes it to the front page of the newspapers. :-)
>

I've noticed that people refuse to recognize (let alone fix) problems, even when directly pointed out, until people start dying. And even then it's kind of a crapshoot as to whether or not anything will actually be done.


October 07, 2014
On 10/07/2014 03:37 PM, Walter Bright wrote:
>
> I believe one of the most
> important things we can teach the young is how to separate truth from
> crap. And this is not done

Hear, hear!

> I.e. logical fallacies and the scientific method should be core curriculum.
>

Yes. My high-school (and maybe junior high, IIRC) science classes covered the scientific method at least. So that much is good (at least, where I was anyway).

> Ironically, I've seen many researchers with PhD's carefully using the
> scientific method in their research, and promptly lapsing into logical
> fallacies with everything else.
>

Yes, people use entirely different mindsets for different topics. Seems to be an inherent part of the mind, and I can certainly see some benefit to that. Unfortunately it can occasionally go wrong, like you describe.

> It's like sales techniques. I've read books on sales techniques and the
> psychology behind them. I don't use or apply them with any skill, but it
> has enabled me to recognize when those techniques are used on me, and
> has the effect of immunizing me against them.
>
> At least learning the logical fallacies helps immunize one against being
> fraudulently influenced.

Definitely. I can always spot a commissioned (or "bonus"-based) salesman a mile away. A lot of their tactics are incredibly irritating, patronizing, and frankly very transparent. (But my dad's a salesman, so maybe that's how I managed to develop a finely-tuned "sales-bullshit detector".) It's interesting (read: disturbing) how convinced they are that you're just being rude and difficult when you don't fall hook, line and sinker for their obvious bullshit and their obvious lack of knowledge.

Here's another interesting tactic you may not be aware of: I'm not sure how widespread this is, but I have direct inside information that it *is* common in car dealerships around my general area. Among themselves, the salesmen have a common saying: "Buyers are liars".

It's an interesting (and disturbing) method of ensuring salesmen police themselves and remain 100% ready and willing to abandon ethics and bullshit the crap out of customers.

Obviously customers *do* lie of course (and that helps the tactic perpetuate itself), but when a *salesman* says it, it really is an almost hilarious case of "The pot calling the grey paint 'black'." A salesman's whole freaking *job* is to be a professional liar! (And there are all sorts of tricks for self-rationalizing it and staying on the good side of the law. But their whole professional JOB is to *bullshit*! And they themselves are dumb enough to buy into their *own* "It's the *buyers* who are dishonest!" nonsense.)

Casinos are similar. Back in college, when my friends and I were all 19 and attending a school only about 2 hours from Canada...well, whaddya expect?...We took a road trip up to Casino Windsor! Within minutes of walking through the place I couldn't even *help* myself from counting the seemingly never-ending stream of blatantly-obvious psychological gimmicks. It was just one after another, everywhere you'd look, and they were so SO OBVIOUS it was like walking inside a salesman's brain. The physical manifestation of a direct insult to people's intelligence. It's really unbelievable how stupid a person has to be to fall for those blatant tricks.

But then again, slots and video poker aren't exactly my thing anyway. I'm from the 80's: If I plunk coins into a machine I expect to get food, beverage, clean laundry, or *actual gameplay*. Repeatedly purchasing the message "You lose" while the entire building itself is treating me like a complete brain-dead idiot isn't exactly my idea of "addictive". If I want to spend money to watch non-interactive animations and occasionally push a button or two to keep it all going, I'll just buy "The Last of Us".

October 07, 2014
On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
> On 10/07/2014 06:47 AM, "Ola Fosheim Grøstad"
> <ola.fosheim.grostad+dlang@gmail.com> wrote:
>> On Tuesday, 7 October 2014 at 08:19:15 UTC, Nick Sabalausky wrote:
>>> But regardless: Yes, there *is* a theoretical side to logic, but logic
>>> is also *extremely* applicable to ordinary everyday life. Even more so
>>> than math, I would argue.
>>
>> Yep, however what the human brain is really bad at is reasoning about
>> probability.
>
> Yea, true. Probability can be surprisingly unintuitive even for people
> well-versed in logic.
> ...

Really?

> Ex: A lot of people have trouble understanding that getting "heads" in a
> coin flip many times in a row does *not* increase the likelihood of the
> next flip being "tails". And there's a very understandable reason why
> that's difficult to grasp.

What is this reason? It would be really spooky if the probability was actually increased in this way. You could win at 'heads or tails' by flipping a coin really many times until you got a sufficiently long run of 'tails', then going to another room and betting that the next flip will be 'heads', and if people didn't intuitively understand that, some would actually try to apply this trick. (Do they?)
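
A brute-force check (a quick D sketch, not rigorous statistics) shows that the trick indeed buys nothing: conditioned on a run of heads having just occurred, the next flip is still heads about half the time.

import std.random, std.stdio;

void main()
{
    auto rng = Random(42);
    enum flips = 1_000_000;
    int run;                  // length of the current run of heads
    int observed, tailsSeen;  // flips observed right after a run of >= 3 heads
    foreach (i; 0 .. flips)
    {
        immutable bool heads = uniform(0, 2, rng) == 0;
        if (run >= 3)
        {
            ++observed;
            if (!heads) ++tailsSeen;
        }
        run = heads ? run + 1 : 0;
    }
    // Prints roughly 0.5: the preceding run of heads changed nothing.
    writefln("P(tails | just saw 3+ heads) ~ %.3f",
             cast(double) tailsSeen / observed);
}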

> I've managed to grok it, yet even I (try
> as I may) just cannot truly grok the Monty Hall problem. I *can*
> reliably come up with the correct answer, but *never* through an actual
> mental model of the problem, *only* by very, very carefully thinking
> through each step of the problem. And that never changes no matter how
> many times I think it through.

It is actually entirely straightforward, but it is popular to present the problem as if it were really complicated, and those who like to present it often seem to understand it poorly themselves. The stage is usually set up to maximise entertainment, not understanding. The presenter is often trying to impress by forcing a quick answer, hoping that you will not think at all and will get it wrong. Sometimes the context is even set up so that such a quick shot is more likely to be wrong, because of an intended false analogy to some completely obvious question that came just before. Carefully thinking it through step by step multiple times afterwards tends only to strengthen the belief that something counter-intuitive is going on; and since there actually isn't, the supposedly counter-intuitive part can never be pinned down, which aggravates the confusion. I.e. I think it is confusing because one approaches the problem with a wrong set of assumptions.

That said, it's just this: when you first randomly choose the door, you would intuitively rather bet that you guessed wrong. The show master is simply offering to tell you which of the other doors hides the car, in case you indeed guessed wrong.

There's not more to it.

>
>> I agree that primary school should cover modus ponens,
>> modus tollens and how you can define equivalence in terms of two
>> implications. BUT I think you also need to experiment informally with
>> probability at the same time and experience how intuition clouds our
>> thinking. It is important to avoid the fallacies of black/white
>> reasoning that comes with propositional logic.
>>
>> Actually, one probably should start with teaching "ad hoc"
>> object-oriented modelling in primary schools. Turning what humans are
>> really good at, abstraction, into something structured and visual. That
>> way you also learn that when you argue a point you are biased, you
>> always model certain limited projections of the relations that are
>> present in the real world.
>>
>
> Interesting points, I hadn't thought of any of that.
> ...

I mostly agree, except I wouldn't go object-oriented, but do something else, because it tends to quickly fail at actually capturing relations that are present in the real world in a straightforward fashion.

>>
>> Educational research shows that students can handle theory much better
>> if they view it as useful. Students have gone from being very bad at
>> math, to doing fine when it was applied to something they cared about
>> (like building something, or predicting the outcome of soccer matches).
>>
>
> Yea, definitely. Self-intimidation has a lot to do with it too. I've talked
> to several math teachers who say they've had very good success teaching algebra
> to students who struggled with it *just* by replacing the letter-based variables
> with empty squares.
>
> People are very good at intimidating themselves into refusing to even think.
> It's not just students, it's people in general, heck I've seen both my parents
> do it quite a bit: "Nick! Something popped up on my screen! I don't know what
> to do!!" "What does it say?" "I dunno! I didn't read it!! How do I get rid of it?!?"
> /facepalm

Sounds familiar. I last ran into this with e.g. category theory (especially monads) and the Monty Hall problem. :-P
In fact, I only now realised that those two seem to be rather related phenomena. Thanks!
October 08, 2014
On 10/7/2014 2:12 PM, Timon Gehr wrote:
> On 10/07/2014 10:09 PM, Walter Bright wrote:
>> What defined behavior would you suggest would be possible after an
>> overflow bug is detected?
>
> At the language level, there are many possibilities. Just look at what type safe
> languages do. It is not true that this must lead to UB by a "definition"
> commonly agreed upon by participants in this thread.

And even in a safe language, how would you know that a bug in the runtime didn't lead to corruption which put your program into the unknown state?

Your assertion rests on some assumptions:

1. the "safe" language doesn't have bugs in its proof or specification
2. the "safe" language doesn't have bugs in its implementation
3. that it is knowable what caused a bug without ever having debugged it
4. that program state couldn't have been corrupted due to hardware failures
5. that it's possible to write a perfect system

all of which are false.


I.e. it is not possible to define the state of a program after it has entered an unknown state that was defined to never happen.
October 08, 2014
On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
[...]
> I've managed to grok it, yet even I (try as I may) just cannot truly grok the Monty Hall problem. I *can* reliably come up with the correct answer, but *never* through an actual mental model of the problem, *only* by very, very carefully thinking through each step of the problem. And that never changes no matter how many times I think it through.
[...]

The secret behind the Monty Hall scenario is that the host is actually leaking extra information to you about where the car might be.

You make a first choice, which has 1/3 chance of being right, then the host opens another door, which is *always* wrong. This last part is where the information leak comes from.  The host's choice is *not* fully random, because if your initial choice was the wrong door, then he is *forced* to pick the other wrong door (because he never opens the right door, for obvious reasons), thereby indirectly revealing which is the right door.  So we have:

1/3 chance: you picked the right door. Then the host can randomly choose
	between the 2 remaining doors. In this case, no extra info is
	revealed.

2/3 chance: you picked the wrong door, and the host has no choice but to
	pick the other wrong door, thereby indirectly revealing the
	right door.

So if you stick with your initial choice, you have 1/3 chance of winning, but if you switch, you have 2/3 chance of winning, because if your initial choice was wrong, which is 2/3 of the time, the host is effectively leaking out the right answer to you.

The supposedly counterintuitive part comes from wrongly assuming that the host has full freedom to pick which door to open, which he does not in the given scenario. Of course, this scenario is also often told in a deliberately misleading way -- the fact that the host *never* opens the right door is often left as an unstated "common sense" assumption, thereby increasing the likelihood that people will overlook this minor but important detail.
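
If any doubt remains, a brute-force simulation (a quick D sketch) reproduces the 1/3 vs. 2/3 split:

import std.random, std.stdio;

void main()
{
    auto rng = Random(1);
    enum trials = 1_000_000;
    int stickWins, switchWins;
    foreach (i; 0 .. trials)
    {
        immutable car  = uniform(0, 3, rng); // door hiding the car
        immutable pick = uniform(0, 3, rng); // contestant's first choice
        // The host opens a goat door that is neither `pick` nor `car`,
        // so switching wins exactly when the first pick was wrong.
        if (pick == car)
            ++stickWins;
        else
            ++switchWins;
    }
    writefln("stick: %.3f  switch: %.3f", // prints ~0.333 vs ~0.667
             cast(double) stickWins / trials,
             cast(double) switchWins / trials);
}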


T

-- 
Written on the window of a clothing store: No shirt, no shoes, no service.
October 08, 2014
On 10/08/2014 02:27 AM, Walter Bright wrote:
> On 10/7/2014 2:12 PM, Timon Gehr wrote:
>> On 10/07/2014 10:09 PM, Walter Bright wrote:
>>> What defined behavior would you suggest would be possible after an
>>> overflow bug is detected?
>>
>> At the language level, there are many possibilities. Just look at what
>> type safe
>> languages do. It is not true that this must lead to UB by a "definition"
>> commonly agreed upon by participants in this thread.
>
> And even in a safe language, how would you know that a bug in the
> runtime didn't lead to corruption which put your program into the
> unknown state?
>
> Your assertion

Which assertion? That there are languages that call themselves type safe?

> rests on some assumptions:
>
> 1. the "safe" language doesn't have bugs in its proof or specification

So what? I can report these if present. That's not undefined behaviour, it is a wrong specification or a bug in the automated proof checker.
(In my experience, however, the developers might not actually acknowledge that the specification violates type safety. UB in @safe code is a joke. But I digress.)

Not specific to our situation where we get an overflow.

> 2. the "safe" language doesn't have bugs in its implementation

So what? I can report these if present. That's not undefined behaviour, it is wrong behaviour.

Not specific to our situation where we get an overflow.

> 3. that it is knowable what caused a bug without ever having debugged it

Why would I need to assume this to make my point?

Not specific to our situation where we get an overflow.

> 4. that program state couldn't have been corrupted due to hardware failures

Not specific to our situation where we detect the problem.

> 5. that it's possible to write a perfect system
>

You cannot disprove this one, and no, I am not assuming this, but it would be extraordinarily silly to write into the official language specification: "a program may do anything at any time, because a conforming implementation might contain bugs".

Also: Not specific to our situation where we detect the problem.

> all of which are false.
>
>
> I.e.

Why "I.e."?

> it is not possible to define the state of a program after it has
> entered an unknown state that was defined to never happen.

By assuming your 5 postulates are false, and filling in the justification for the "i.e." you left out, one will quickly reach the conclusion that it is not possible to define the behaviour of a program at all. Therefore, if we describe programs, our words are meaningless, because this is not "possible". This seems to quickly become a great example of the kind of black/white thinking you warned against in another post in this thread. It has to be allowed to use idealised language, otherwise you cannot say or think anything.

What is _undefined behaviour_ depends on the specification alone, and as flawed and ambiguous as that specification may be, in practice it will still be an invaluable tool for communication among language users/developers.

Can we at least agree that Dicebot's request (to have the behaviour of inadvisable constructs defined, such that an implementation cannot randomly change behaviour and then have the developers close the corresponding bugzilla issue because it was the user's fault anyway) is not unreasonable "by definition" merely because the system will never reach a perfect state anyway, and then retire this discussion?
October 08, 2014
On 10/08/2014 02:37 AM, H. S. Teoh via Digitalmars-d wrote:
> On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
> [...]
>> I've managed to grok it, yet even I (try as I may) just cannot
>> truly grok the Monty Hall problem. I *can* reliably come up with the
>> correct answer, but *never* through an actual mental model of the
>> problem, *only* by very, very carefully thinking through each step of
>> the problem. And that never changes no matter how many times I think
>> it through.
> [...]
>
> The secret behind the Monty Hall scenario is that the host is actually
> leaking extra information to you about where the car might be.
>
> You make a first choice, which has 1/3 chance of being right, then the
> host opens another door, which is *always* wrong. This last part is
> where the information leak comes from.  The host's choice is *not* fully
> random, because if your initial choice was the wrong door, then he is
> *forced* to pick the other wrong door (because he never opens the right
> door, for obvious reasons), thereby indirectly revealing which is the
> right door.  So we have:
>
> 1/3 chance: you picked the right door. Then the host can randomly choose
> 	between the 2 remaining doors. In this case, no extra info is
> 	revealed.
>
> 2/3 chance: you picked the wrong door, and the host has no choice but to
> 	pick the other wrong door, thereby indirectly revealing the
> 	right door.
>
> So if you stick with your initial choice, you have 1/3 chance of
> winning, but if you switch, you have 2/3 chance of winning, because if
> your initial choice was wrong, which is 2/3 of the time, the host is
> effectively leaking out the right answer to you.
>
> The supposedly counterintuitive part comes from wrongly assuming that
> the host has full freedom to pick which door to open, which he does not
> in the given scenario. Of course, this scenario is also often told in a
> deliberately misleading way -- the fact that the host *never* opens the
> right door is often left as an unstated "common sense" assumption,
> thereby increasing the likelihood that people will overlook this minor
> but important detail.
>
>
> T
>

The problem with this explanation is simply that it is too long and calls the overly detailed reasoning a 'secret'. :o)
It's like monad tutorials!