October 08, 2014
On Wednesday, 8 October 2014 at 07:51:39 UTC, eles wrote:
> On Tuesday, 7 October 2014 at 23:49:37 UTC, Timon Gehr wrote:
>> On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
>>> On 10/07/2014 06:47 AM, "Ola Fosheim Grøstad"
>>> <ola.fosheim.grostad+dlang@gmail.com> wrote:
>>>> On Tuesday, 7 October 2014 at 08:19:15 UTC, Nick Sabalausky
>

> against a series with first 100 heads and the 101rd being a

well, 101st
October 08, 2014
On 10/07/2014 08:37 PM, H. S. Teoh via Digitalmars-d wrote:
> On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
> [...]
>> I've managed to grok it, but yet even I (try as I may) just cannot
>> truly grok the monty hall problem. I *can* reliably come up with the
>> correct answer, but *never* through an actual mental model of the
>> problem, *only* by very, very carefully thinking through each step of
>> the problem. And that never changes no matter how many times I think
>> it through.
> [...]
>
> The secret behind the monty hall scenario, is that the host is actually
> leaking extra information to you about where the car might be.
> [...]

Hmm, yea, that is a good thing to realize about it.

I think a big part of what trips me up is applying a broken, contorted version of the "coin toss" reasoning (coin toss == my usual approach to probability).

Because of the "coin toss" problem, I'm naturally inclined to see past events as irrelevant. So, initial impression is: I see two closed doors, an irrelevant open door, and a choice: "Closed door A or Closed door B?".

Obviously, it's a total fallacy to assume "well, if there's two choices then they must be weighted equally." But, naturally, I figure that all three doors initially have equal chances, so I'm already thinking "unweighted, even distribution", and then bam, my mind (falsely) sums up: "Two options, uniform distribution, third door isn't a choice so it's just a distraction. Therefore, 50/50".

Now yes, I *do* see several fallacies/oversights/mistakes in that, but that's how my mind tries to set up the problem. So then I wind up working backwards from that, or forcing myself to abandon it by starting from the beginning and carefully working it through.

Another way to look at it, very similar to yours actually, and I think more or less the way the kid presented it in "21" (but in a typical hollywood "We hate exposition with a passion, so just rush through it as hastily as possible, we're only including it because writers expect us to, so who cares if nobody can follow it" style):

1. Three equal choices: 1/3 I'm right, 2/3 I'm wrong.

2. New choice: Bet that I was right (1/3), bet that I was wrong (2/3)

3. "An extra 33% chance for free? Sure, I'll take it."

Hmm, looking at it now, I guess the second choice is simply *inverting* your first choice. Ahh, now I get what the kid (and you) was saying much better: Choosing "I'll change my guess" is equivalent to choosing *both* of the other two doors.

The fact that he opens one of those other two doors is a complete distraction and totally irrelevant. It makes you think you're only choosing "the other ONE door" when you're really choosing "the other TWO doors". Interesting.

See, this is why I love this NG :)

October 08, 2014
On Wednesday, 8 October 2014 at 08:16:08 UTC, Nick Sabalausky wrote:
> On 10/07/2014 08:37 PM, H. S. Teoh via Digitalmars-d wrote:
>> On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
>> [...]

> equivalent to choosing *both* of the other two doors.

Yeah, I think that's the best way to put it.
October 08, 2014
On 10/07/2014 11:29 PM, Walter Bright wrote:
> On 10/7/2014 3:54 PM, Nick Sabalausky wrote:
>> It's a salesman's whole freaking *job* to be a professional liar!
>
> Poor salesmen are liars. But the really, really good ones are ones who
> are able to match up what a customer needs with the right product for
> him. There, he is providing a valuable service to the customer.
>

Can't say I've personally come across any of the latter (it relies on salesmen knowing what they're talking about and still working sales anyway - which I'm sure does occur for various reasons, but doesn't seem common from what I've seen). But maybe I've just spent far too much time at MicroCenter ;) Great store, but dumb sales staff ;)

> Serve the customer well like that, and you get a repeat customer. I know
> many salesmen who get my repeat business because of that.
>

Certainly reasonable points, and I'm glad to hear there *are* respectable ones out there.

> The prof who taught me accounting used to sell cars. I asked him how to
> tell a good dealership from a bad one. He told me the good ones have
> been in business for more than 5 years, because by then one has run out
> of new suckers and is relying on repeat business.
>

That sounds reasonable on the surface, but it relies on several questionable assumptions:

1. Suckers routinely know they've been suckered.

2. Suckers avoid giving repeat business to those who suckered them (not as reliable an assumption as one might expect)

3. The rate of loss on previous suckers overshadows the rate of new suckers. (Ex: No matter how badly people hate working at McDonald's, they're unlikely to run low on fresh applicants without a major birthrate decline - and even then they'd have 16 years to prepare)

4. Good dealerships don't become bad.

5. There *exists* a good one within a reasonable distance.

6. People haven't become disillusioned and given up on trying to find a good one (whether a good one exists or not, the effect here would be the same).

7. The bad ones aren't able to compete/survive through other means. (Cornering a market, mindshare, convincing ads, misc gimmicks, merchandising or other side-revenue streams, anti-competitive practices, etc.)

Also, the strategy has a potential self-destruct switch: Even if the strategy works, if enough people follow it then even good dealerships might not be able to survive the initial 5 year trial.

Plus, I know of a counter-example around here. From an insider, I've heard horror stories about the shit the managers, finance people, etc would pull. But they're a MAJOR dealer in the area and have been for as long as I can remember.

>> But then again, slots and video poker aren't exactly my thing anyway.
>> I'm from the 80's: If I plunk coins into a machine I expect to get
>> food, beverage, clean laundry, or *actual gameplay*. Repeatedly
>> purchasing the message "You lose" while the entire building itself is
>> treating me like a complete brain-dead idiot isn't exactly my idea of
>> "addictive".
>
> I found gambling to be a painful experience, not entertaining at all.

I actually enjoyed that evening quite a bit: A road trip with friends is always fun, as is seeing new places, and it was certainly a very pretty place inside (for a very good reason, of course). But admittedly, the psychological tricks were somewhat insulting, and by the time I got through the $20 I'd budgeted I had already gotten very, very bored with slot machines and video poker. And blackjack's something I'd gotten plenty of all the way back on the Apple II.

If I want to gamble I'll just go buy more insurance ;) Better odds.

Or the stock market. At least that doesn't have a "house" that "always wins", not to quite the same extent anyway.

October 08, 2014
On 10/08/2014 05:19 AM, Walter Bright wrote:
> On 10/7/2014 6:18 PM, Timon Gehr wrote:
>  > I can report these if present.
>
> Writing a strongly worded letter to the White Star Line isn't going to
> help you when the ship is sinking in the middle of the North Atlantic.
> ...

Maybe it will help the next guy, whose ship will not sink thanks to that report.

> What will help is minimizing the damage that a detected fault may cause.
> You cannot rely on the specification when a fault has been detected.
> "This can't be happening!" are likely the last words of more than a few
> people.
>

Sure, I agree.

Just note that if some programmer is checking for overflow after the fact using the following idiom:

int x = y*z;
if (y != 0 && x/y != z) assert(0); // after-the-fact overflow check

Then the language can be defined such that e.g.:

0. The overflow will throw on its own.

1. Overflow is undefined, i.e. the optimizer is allowed to remove the check and avoid the detection of the bug.

2. Guaranteed wrap-around behaviour makes the code valid and the bug is detected by the assert.

3. Arbitrary-precision integers.

4. ...

If possibility 1 is consciously taken, code is simply less likely to either run as intended or abort. The language implementation may still be buggy, but if it may sink your ship even when it generated code according to the specification, it will likely sink in more cases. Of course you can say that the programmer is at fault for checking for overflow in the wrong fashion, but that does not matter at the point where your ship is sinking. One may still see this choice as the right trade-off, but it is not the only possibility 'by definition'.

October 08, 2014
On 04/10/2014 09:39, Walter Bright wrote:
> On 10/1/2014 6:17 AM, Bruno Medeiros wrote:
>> Walter, you do understand that not all software has to be robust -
>
> Yes, I understand that.
>
>
>> in the critical systems sense - to be quality software? And that in
>> fact, the majority
>> of software is not critical systems software?...
>>
>> I was under the impression that D was meant to be a general purpose
>> language,
>> not a language just for critical systems. Yet, on language design
>> issues, you
>> keep making a series of arguments and points that apply *only* to
>> critical
>> systems software.
>
> If someone writes non-robust software, D allows them to do that.
> However, I won't leave unchallenged attempts to pass such stuff off as
> robust.
>
> Nor will I accept such practices in Phobos, because, as this thread
> clearly shows, there are a lot of misunderstandings about what robust
> software is. Phobos needs to CLEARLY default towards solid, robust
> practice.
>
> It's really too bad that I've never seen any engineering courses on
> reliability.
>
> http://www.drdobbs.com/architecture-and-design/safe-systems-from-unreliable-parts/228701716
>

Well, I set myself a trap to get that response...

Of course, I too want my software to be robust! I doubt that anyone would disagree that Phobos should be designed to be as robust as possible. But "robust" is too general a term to be precise here, which obscures my original point.

I did say robust-in-the-critical-systems-sense... What I was questioning was whether D and Phobos should be designed in a way that takes critical systems software as the main use case, relegating the other kinds of software to secondary importance.

(Note: I don't think such dichotomy and compromise *has* to exist in order to design a great D and Phobos. But in this discussion I feel the choices and vision were heading in a way that would likely harm the development of general purpose software in favor of critical systems.)

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
October 08, 2014
On 04/10/2014 10:05, Walter Bright wrote:
> On 10/1/2014 7:17 AM, Bruno Medeiros wrote:
>> Sean, I fully agree with the points you have been making so far.
>> But if Walter is fixated on thinking that all the practical uses of D
>> will be
>> critical systems, or simple (ie, single-use, non-interactive)
>> command-line
>> applications, it will be hard for him to comprehend the whole point
>> that "simply
>> aborting on error is too brittle in some cases".
>
> Airplane avionics systems all abort on error, yet the airplanes don't
> fall out of the sky.
>
> I've explained why and how this works many times, here it is again:
>
> http://www.drdobbs.com/architecture-and-design/safe-systems-from-unreliable-parts/228701716
>

That's completely irrelevant to the "simply aborting on error is too brittle in some cases" point above, because I wasn't talking about avionics systems, or any kind of mission critical systems at all. In fact, the opposite (non critical systems).

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
October 08, 2014
On 03/10/2014 19:20, Sean Kelly wrote:
> On Friday, 3 October 2014 at 17:38:40 UTC, Brad Roberts via
> Digitalmars-d wrote:
>>
>> The part of Walter's point that is either deliberately overlooked or
>> somewhat misunderstood here is the notion of a fault domain.  In a
>> typical unix or windows based environment, it's a process.  A fault
>> within the process yields the aborting of the process but not all
>> processes.  Erlang introduces within its execution model a concept of
>> a process within the higher level notion of the os level process.
>> Within the erlang runtime its individual processes run independently
>> and can each fail independently.  The erlang runtime guarantees a
>> higher level of separation than a typical threaded java or c++
>> application.  An error within the erlang runtime itself would
>> justifiably cause the entire system to be halted.  Just as within an
>> airplane, to use Walter's favorite analogy, the seat entertainment
>> system is physically and logically separated from flight control
>> systems thus a fault within the former has no impact on the latter.
>
> Yep.  And I think it's a fair assertion that the default fault
> domain in a D program is at the process level, since D is not
> inherently memory safe.  But I don't think the language should
> necessarily make that assertion to the degree that no other
> definition is possible.

Yes to Brad, and then yes to Sean. That nailed the point.

To that I would only add that, when encountering a fault in a process, even an estimate (that is, not a 100% certainty) that the fault only affects a certain domain of the process would still be useful to certain kinds of systems and applications.

I don't think memory-safety is at the core of the issue. Java is memory-safe, yet if you encounter a null pointer exception, you're still not sure if your whole application is now in an unusable state, or if the NPE was just confined to say, the operation the user just tried to do, or some other component of the application. There are no guarantees.

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
October 09, 2014
On Wednesday, 8 October 2014 at 03:20:21 UTC, Walter Bright wrote:
>> Can we at least agree that Dicebot's request for having the behaviour of
>> inadvisable constructs defined such that an implementation cannot randomly
>> change behaviour and then have the developers close down the corresponding
>> bugzilla issue because it was the user's fault anyway is not unreasonable by
>> definition because the system will not reach a perfect state anyway, and then
>> retire this discussion?
>
> I've been working with Dicebot behind the scenes to help resolve the particular issues with the code he's responsible for.
>
> As for D, D cannot offer any guarantees about behavior after a program crash. Nor can any other language.

Just wanted to point out that the resulting solution (== manually switching many of the contracts from asserts to exceptions) is, to me, an unhappy workaround for an overly opinionated language, not an actual solution. I still consider this a problem.
October 09, 2014
Am Thu, 09 Oct 2014 13:10:34 +0000
schrieb "Dicebot" <public@dicebot.lv>:

> On Wednesday, 8 October 2014 at 03:20:21 UTC, Walter Bright wrote:
> >> Can we at least agree that Dicebot's request for having the
> >> behaviour of
> >> inadvisable constructs defined such that an implementation
> >> cannot randomly
> >> change behaviour and then have the developers close down the
> >> corresponding
> >> bugzilla issue because it was the user's fault anyway is not
> >> unreasonable by
> >> definition because the system will not reach a perfect state
> >> anyway, and then
> >> retire this discussion?
> >
> > I've been working with Dicebot behind the scenes to help resolve the particular issues with the code he's responsible for.
> >
> > As for D, D cannot offer any guarantees about behavior after a program crash. Nor can any other language.
> 
> Just wanted to point out that resulting solution (== manually switching many of contracts to exceptions from asserts) to me is an unhappy workaround to deal with overly opinionated language and not actually a solution. I still consider this a problem.

A point which hasn't been discussed yet:

Errors, and therefore asserts, can be used in nothrow functions. This is a pain for compilers because it rules out certain optimizations. When porting GDC to ARM we started to see problems because of this (we can't unwind from nothrow functions on ARM; the program just aborts). And now we have to worsen the codegen for nothrow functions because of it.

I think Walter has sometimes suggested that it would be valid for a compiler not to unwind Errors at all (in release mode), but simply kill the program and dump an error message. That would finally allow us to optimize nothrow functions.