A new approach to the Monty Hall problem

Reams and reams have been written about the Monty Hall problem, but no-one seems to have mentioned a simple fact which, once realised, makes the whole thing seem intuitive.

The Monty Hall show is a (possibly fictional, I'm not sure) TV gameshow. One couple have beaten all the others to the final round with their incredible skill at answering questions on general knowledge and popular culture, and now have a chance to win a Brand New Car. There are three doors. The host explains that earlier, before the couple arrived, a producer on the show rolled a dice. If a 1 or a 4 was rolled, the car was placed behind the red door. If a 2 or a 5 was rolled, it was placed behind the blue door and if a 3 or a 6 was rolled, it was placed behind the yellow door.

The host invites the couple to pick which door they think the car is behind. He then opens one of the other two doors and there's no car behind it! (He knows where the car is, so he can always arrange for this to happen.) Then the host asks the couple if they want to change their mind about which door they think the car is behind. Should they change? Does it make a difference?

Most people's first reaction is that it can't matter. How can it? The car has a one in three chance of being behind each of the doors.

No-one would argue that the car has anything but a probability of 1/3 of being behind the door the couple picked (say it's the red door). But when the host opens the blue door, magic happens. The probability of the car being behind the blue door suddenly goes to zero. The probability can't vanish (otherwise there would only be a 2/3 probability of there being a car at all) and it can't go to the red door, so this ghostly 1/3 probability-of-there-being-a-car goes to the yellow door. The car now has a 2/3 probability of being behind the yellow door. "Poppycock!" most people would say. Probability isn't this "magic stuff" that can travel between doors. But the correct answer is that the couple should change doors - the car really does have a 2/3 probability of being behind the yellow door.

If you're in doubt, you could simulate the situation with a computer program, run it lots of times for the choices "never change doors" and "always change doors", and see what fraction of the time in each case the couple wins the car (a sketch of such a simulation appears after the enumeration below). You will find that changing makes you win 2/3 of the time, and sticking 1/3. Or you could enumerate the possibilities:

1/3: Couple picks correct door in the first place. If they change, they lose.
2/3: Couple picks the wrong door. The other wrong door is then eliminated, so if they change, they win.

So changing has a 2/3 probability of winning. This reasoning sounds like a more plausible argument for changing doors.
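
If you'd like to run the simulation yourself, here is a minimal sketch in Python (the play() function, the door numbering and the trial count are my own illustration, not anything from the show):

import random

def play(switch):
    """Simulate one round; return True if the couple wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)     # the producer hides the car behind a random door
    pick = random.choice(doors)    # the couple's first choice
    # The host opens a door that is neither the couple's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Change to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
for label, switch in (("stick", False), ("change", True)):
    wins = sum(play(switch) for _ in range(trials))
    print(label, wins / trials)    # roughly 0.33 for "stick", 0.67 for "change"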

The key to this matter, and what makes the whole thing confusing to those who don't realise this, is that probability depends on what you know. If you think about this for a while, it becomes obvious. A fair coin, when tossed, has a 50% probability of landing on heads. However, once the event has happened, the probability collapses to 0% (if it landed tails up) or 100% (if it landed heads up). Let event A be the tossing of a coin at noon, with success defined as the coin landing heads up. At five seconds to noon the probability of success is 0.5. At five seconds past noon, when everybody can see that the coin landed heads up, the probability of success is 1.0. If the coin is tossed and it rolls under the sofa, then at five seconds past there is still a 50% chance of success. Although the coin has landed, no-one knows what the result is. Probability depends on what you know. If you know nothing about the coin, the probability of success is 0.5.

Suppose a neutral third party is the only one to see the coin, and says "I'm not going to tell you what it says, but I'm going to roll a dice (behind your back, so you can't see it). If it comes up even, I'll say "heads", whatever the coin says. If it comes up odd, I'll say what the coin says. But I won't tell you whether the dice came up odd or even." Suppose this third party then says "heads". There's a 50% chance that this was because he rolled an even number and a 50% chance that that's what the coin really said. What is the probability of success now? Well, we can enumerate the possibilities again and notice that of the four equally likely possibilities (Heads+even, Heads+odd, Tails+even, Tails+odd) the only one we've eliminated is Tails+odd, since in that case he would have said "tails". Of the remaining three possibilities (which are still equally likely), two of them involve success so the probability of success is 2/3. We can check this as follows: he says "heads" three out of four times, so the overall probability of success is (2/3)*(3/4) + 0*(1/4) = 1/2 (since we know there is 0 probability of success if he says "tails"). This is the answer we expected.
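
If it helps, both the enumeration and a Monte Carlo check of the 2/3 figure can be written in a few lines of Python (a purely illustrative sketch; the variable names are mine):

import random
from fractions import Fraction

# Exact enumeration of the four equally likely (coin, dice-parity) combinations.
outcomes = [(coin, parity) for coin in ("heads", "tails") for parity in ("even", "odd")]
says_heads = [o for o in outcomes if o[1] == "even" or o[0] == "heads"]
successes = [o for o in says_heads if o[0] == "heads"]
print(Fraction(len(successes), len(says_heads)))    # prints 2/3

# A Monte Carlo check of the same conditional probability.
heads_said = coin_was_heads = 0
for _ in range(1_000_000):
    coin = random.choice(("heads", "tails"))
    dice = random.randint(1, 6)
    announced = "heads" if dice % 2 == 0 else coin
    if announced == "heads":
        heads_said += 1
        coin_was_heads += coin == "heads"
print(coin_was_heads / heads_said)                  # roughly 0.667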

We conclude that, by cleverness, we can do a "partial collapse" of the probability by finding out a bit of information (if not all of it). In this case the knowledge that the neutral third party said "heads" doesn't give as much information about the state of the coin as seeing the coin itself - it doesn't tell us for definite whether we have heads or not, but it does impart enough information to change the probability.

This is exactly what happens in the Monty Hall problem. The host imparts some information to the couple about which door the car is behind, but not enough for the couple to tell for definite which door the car is behind - just enough to shift the probability in favour of the door which they would choose if they opted to "change". If it was a complete probability collapse (i.e. if he opened any two doors) no-one would be in any doubt as to whether they should change or not. It's just because the probability has only partially collapsed that people get confused.


Addendum

Justin sent me this email:

I read your paper on "Monte Hall Strikes Back" and absorbed that probability depends on what you know. Following is a question that I made up and am having trouble "partially collapsing". Maybe you can help me out with an insight:

There are two doors, door #1 and door #2 behind which two real numbers are written at random. You get a prize if you choose the door with larger real number. At this point, the probability of winning a prize is 1/2. However, you get a chance. You first choose a door, and Monty shows you the number behind that door. What should you do in order to do better than 1/2? Or is it even possible to do better than 1/2?

The answer to the question hinges on how these two real numbers are chosen. If all real numbers are equally likely, you can never do better than 1/2 because for any real number x the size of the set of real numbers smaller than x is exactly the same size as the set of real numbers larger than x (this is easy to prove, just pair them up: for all y>0, pair up x-y with x+y).

Of course, the game can't work like that in the real world because most of the real numbers are extremely large (either positive or negative) and require more atoms than there are in the universe to write down.

Suppose we have a more reasonable probability distribution for x, [latex]P(x \le r \le x+dx) = f(r)dx[/latex]. Then, given the value behind the first door, [latex]x[/latex], you can calculate the probability of [latex]y[/latex] (the number behind the second door) being less than [latex]x[/latex], [latex]\displaystyle P(y < x) = \int_{-\infty}^x f(r)dr = F(x)[/latex], and switch doors if [latex]P(y<x)<\frac{1}{2}[/latex]. Using this strategy you can calculate your probability of winning before knowing x and without even knowing the distribution! [latex]P(\mathrm{win}; F(x)>\tfrac{1}{2}) = F(x)[/latex] and [latex]P(\mathrm{win}; F(x)<\tfrac{1}{2}) = 1-F(x)[/latex], so [latex]\displaystyle P(\mathrm{win}) = \int_{-\infty}^z f(x)(1-F(x))dx+\int_z^\infty f(x)F(x)dx[/latex] where [latex]F(z) = \frac{1}{2}[/latex]. Integrating by parts gives [latex]P(\mathrm{win}) = \frac{3}{4}[/latex]. However, the probability of winning using the optimal strategy after you know x depends on the distribution and on the value of x, but can be anywhere between 1/2 and 1.
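
To make this concrete, here is a minimal Python sketch that bears out the 3/4 figure, under the assumption (mine, chosen purely for illustration) that both hidden numbers are drawn independently from a standard normal distribution:

import random
from statistics import NormalDist

F = NormalDist().cdf          # cumulative distribution function of the assumed f
trials, wins = 200_000, 0
for _ in range(trials):
    x = random.gauss(0, 1)    # the number behind the door you opened
    y = random.gauss(0, 1)    # the number behind the other door
    stick = F(x) > 0.5        # stick when the other number is more likely to be smaller
    wins += (x > y) if stick else (y > x)
print(wins / trials)          # roughly 0.75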


Addendum 2

John de Pillis, a Professor of Mathematics at the University of California, Riverside, emailed me to let me know about a graphical "proof" that switching doors (or cups in this case) improves your chances of success. If you're still confused, his diagram might help.

This diagram appears in John's book "777 Mathematical Conversation Starters", published by the Mathematical Association of America.

17 Responses to “A new approach to the Monty Hall problem”

  1. mikey says:

    Nice explanation.
    Just FYI, Monty Hall is a real person and hosted the game show 'Let's Make a Deal', on which this problem is based.

    • Bill says:

      I don't even know if this is monitored at all but I find this to be a classic problem of over-analysis and over-thinking. It is true that switching increases the probability but we have to look at each separate choice event. The initial choice is 1/3. If the contestant stays with that selection and does not change then their probability of selecting the prize is still 1/3. But if the contestant switches then they have a 1/2 probability since they are now selecting between two choices, one goat, one prize. That's it. Since the host removes one of the goats it simply renders it a 50/50 of selecting the prize. But this probability only changes once the contestant selects again. Initial selection probability 1/3. If no swap then the probability still remains at 1/3. But if the contestant swaps their choice they are now choosing between 2 choices only, thus a 1/2 chance.

      • Andrew says:

        The probabilities in your analysis don't add up to 1: you're saying the probability is 1/3 that the car is behind your original door but 1/2 that it's behind the other unopened door. What happens the remaining 1/6 of the time?
        Also try doing your analysis again but with the 100 door version of the problem, where the host opens 98 of the goat doors. Do you still feel like the probability of winning goes from 1/100 to 1/2 on switching doors?

  2. The main error in the current discussions of the Monty Hall Paradox/ Problem is that everyone is considering the problem as a “one step problem” whereas it is actually a “two step problem”.
    The first step is the irrelevant part of the problem and is the part in which you choose one out of three doors and then the show host opens the door that has the goat (out of the two doors that you did not choose).
    The second step is the part in which you choose to change doors or not (or, in other words, the step in which you choose one of two doors).

    THE FIRST STEP IS ALWAYS THE SAME, i.e., you pick a door and the host opens, out of the two unchosen doors, the one that has the goat. Therefore THE FIRST STEP DOES NOT ADD ANYTHING UNKNOWN; it is thus irrelevant for the statistical calculations (it has a 1.0 probability).
    The first step shall then be disregarded, for it is just a technique to create the illusion of a more complicated game.
    The problem can be then focused on the second step, which always begins in the same manner: one of the three doors (the one that was opened by the host and which contains the goat) is out of the equation and you have to choose one out of two doors.
    That is it, there is no need for complex calculations, the actual game is quite simple: given those TWO doors, choose one! (the "change your first choice" thing is just to create a feeling of mystery/risk that does not actually exist).
    It is then very simple: a 50/50 chance of winning the car or not.

    If you want to make things more complicated and actually discuss if the person is “changing” the doors, it remains the same, for you do NOT consider 3 choices, you only consider 2 choices (the third door is always opened and, thus, removed from the discussion).
    Then let’s say you initially chose (out of the 2 doors) the door with the car behind and you change, you lose, but if you do not change you win (50/50 chances). The same applies if you chose the door with the goat.

    • Andrew says:

      The first step isn't always the same, since there are two possibilities for what's behind the door that you chose (car or goat). They just look the same when you don't have all the information. The chance of winning isn't 50/50, as you can verify experimentally - changing doors does give you an advantage. Do you still think it's 50/50 if there are 100 doors and the host opens 98 "goat" doors?

      • Mitch says:

        Yes. Because regardless of what door you choose originally, there are two remaining doors that could possibly contain the car. The first step doesn't matter. Flip a coin to see if you stay or switch.

    • Elvis says:

      Brilliantly wrong. The first "step" is extremely consequential in that it will have only a 1/3 chance of being correct--and there is a 2/3 chance that the car is behind one of the doors you didn't choose. Look at it this way: you can have the choice of staying with Door 1 (33.33% of being correct), OR you can have BOTH Doors 2 and 3 (66.66% chance that the car is behind Door 2 or 3), if you switch. When Monty reveals a goat in one of the doors you didn't pick, it doesn't change the odds of 2/3 that the car is behind either of Doors 2 or 3. If he revealed Door 3 was a goat, then there is 2/3 chance that the car is behind Door 2. 50/50 has nothing to do with it.

    • Oldnsenile says:

      Another way to show a fallacy in the official solution is to assume 2 players instead of 1. If player 1 selected door #1 and player 2 selected door #2, then they would have equal probabilities of winning. Also, it should be obvious that they could not both increase their probability of winning by trading doors. I don't know why a computer simulation would support an advantage of switching, unless the formulated model was in error.

      • Andrew says:

        But the host opens door 2 and it's empty, so at that point we know that player 2 has a 0% chance of winning. It doesn't follow that player 1 also has a 0% chance of winning - the probabilities change (and are no longer equal for the two players) when the host opens a door!

      • PalmerEldritch says:

        You can't play the MHP with 2 contestants. There are thousands of simulation programs, are you saying they're all in error?

  3. Probability calculation by using the coins.

    After the host throws open the empty door (let’s say C), which he always knows beforehand as empty (due to the placing of the prize behind a certain door and the player choosing a door for himself), the choice involving two remaining doors (A or B) for the prize and by the player in terms of two sides of coin can be considered as the head H (corresponding to door A) and the tail T (corresponding to door B). These choices are mutually exclusive in terms of H and T, implying that the player or the prize can’t be both at door A and door B.

    Consider also that the prize and the player are represented now by two coins, coin 1 and coin 2, respectively.

    We can then represent all the possible choices involving prize, player and the two remaining doors A and B (after the host has opened the empty door with the prior knowledge about the door being empty) in terms of coins 1 and 2 and their heads and tails.

    From the above, prize at door A will be designated as 1H (head in the case of coin 1) and the player at door A as 2H. Similarly, prize at door B will be 1T and the player at door B as 2T.

    The conditions for the switch to work (leading to a win) are: either the prize is at door A (1H) and the player at door B (2T), or the prize is at door B (1T) and the player at door A (2H). In terms of probability, the probability for the switch to work is
    P(s) = P(1H)*P(2T) + P(1T)*P(2H) ………….. (Eq. 1)

    Similarly, the conditions for the switch not to work (i.e. failing to win) are: either the prize is at door A (1H) and the player at door A (2H), or the prize is at door B (1T) and the player at door B (2T). In terms of probability, the probability of the switch not producing a win is
    P(f) = P(1H)*P(2H) + P(1T)*P(2T) ……………(Eq. 2).

    Since the probability for head or tail for any coin is 1/2, i.e. P(1H) = P(1T) = P(2H) = P(2T) = 1/2.

    The probability therefore for the switch leading to a prize (from Eq. 1), P(s) = 1/2;

    And the probability of the switch leading to no win (using Eq. 2), P(f) = 1/2.

    In other words, there is no advantage in the player making a switch.

    • Andrew says:

      Your analysis takes its conclusion (that P=1/2) as a premise, and is therefore a tautology. Just because there are two doors left and a coin has two sides doesn't mean that the probabilities of those two things work out the same, as I hoped my original post had clarified.

  4. athy says:

    ". . .a mistake concerning the nature of time"; lack of "a clear awareness of the difference between the ACTUAL and the IMAGINED" (Ursula Krober Le Guin, ALWAYS COMING HOME, pages 505 & 499 in the University of California Press 2001 paperback edition).
    The math & logic here show a mistake about the nature of time that is very typical in logico-mathematic thought. The chooser in the Monty Hall problem cannot benefit from the difference between the 1/3 and 2/3 probability choices. That chooser is either right or wrong & that difference cannot help them as an individual because THEY ONLY GET ONE CHANCE -- not the SERIES of chances that would be necessary for them individually to profit from an increase in probability. Unlike the mechanically-minded logician thinking in the world of dead abstractions (in other words a 'Platonist'), these choosers are not living in cloud-cuckoo land where one can roll back time & choose again. The increase in probability would only be beneficial in a SERIES of chances or in a SERIES of choosers considered BY THE LOGICIAN as a whole unit (only by the logician, since the contestants cannot organize ahead of time; theoretically they could form a group by agreement afterwards & offer to share equally with a certain number of other contestants but why would winners join in that? Perhaps among the Hopi or any other integrated group.) Running a computer program or any other multiplying-choices program does not touch the REALITY of the individual chooser's situation: they only get one chance, not a series. If the 2/3 probability door is chosen, it can be wrong; if the 1/3 probability door is chosen, it can be wrong. If they change, they can be wrong; if they don't change, they can be wrong. They are not a series. Their ACTUAL, not imaginary 'Platonic' cloud-cuckoo-land probability, is 50:50. The 1/3 probability that is removed from the opened door ACTUALLY, NON-SERIALLY, in the real world of the INDIVIDUAL contestant, migrates equally to both of the unopened doors, the right & the wrong alike -- their ONE TIME CHANCE is 50:50, and in the real world they only have one time, just like you & me.
    Mitch's CONCLUSION is right: 'flip a coin, it doesn't matter,' i.e., it's 50:50 either way FOR ONE CHANCE, & so is Dr. Sharma's conclusion (I don't bother with his math). Addendum 2 by Professor Emeritus of Mathematics at the University of California, Riverside, John de Pillis (who seems to be an interesting guy otherwise), refers to a graphic "proof" where the bottom line summing up the effects of the 3 possible choices is what DOES NOT HAPPEN IN REALITY, only in the imagining mind of the mathematician. You only get to choose between two at the end; your chances are 50:50 as Dr. Sharma says.
    Now please explain to me why so many brilliantly mathematical people with much higher IQs than mine have gotten it wrong again & again. A confusion between ACTUAL TIME & analytic intellectual imaginary-no-time-mathematics. (I'm going to check Dr. de Pillis on Zeno, another out-of-this-world nut.) I never understood those explanations, and it did not become crystal clear until I started writing this. Ain't writing grand! As Jed McKenna said, "A mind is a terrible place to think in" (or something like that). And then there is the community-mind aspect! I await responses with some eagerness & (possibly) irrationally hopeful expectations.

    • Andrew says:

      How does the thought process that has led to that conclusion fare in the face of the 100-door variant, where the host opens 98 doors?

      Also, what probability is measuring is exactly the proportion of times you get it right in a series, so saying it doesn't apply to single experiments seems to be a misunderstanding of what probability is. I'm also curious as to how, in your view, nature determines whether you are doing a single iteration or a series. You say that your probability of winning in a single experiment is 1/2 but seem to conclude that switching does indeed yield 2/3 probability of winning in repeated experiments. What about if you just do the game twice - what then?

  5. PalmerEldritch says:

    @athy wrote "I never understood those explanations" - you obviously still don't.

    Mathematically, the probability (i.e. the likelihood) that the door you picked to begin with hides the car is 1/3.

    If I roll a dice and it lands 1,2,3, or 4 I put the car behind Door1, if it lands 5 or 6 I put the car behind Door2. Would you say that's a 50/50 on the basis that the "chooser is either right or wrong & that difference cannot help them as an individual because THEY ONLY GET ONE CHANCE"?
    If you do, all I can suggest is: never gamble.

  6. Marty says:

    I think everyone with some mathematical knowledge will understand the classic explanation of the problem. It is not hard to explain or demonstrate theoretically.

    I think the criticisms from some mathematicians and physicists are based upon the fact that the real nature of the universe and physical matter is not completely understood, and that in an empirical setup with real objects the experiment, repeated millions of times, could lead to different results than the theoretical simulation done with a computer or a paper-and-pencil demonstration.

    This could indeed be due to the nature of the physical universe, which theory and statistics can help us to understand but which only empirical experience can confirm.
