Message Forum
Re: Coin flipping puzzler - my solution
Posted By: gremlinn, on host 24.25.223.168
Date: Sunday, September 14, 2003, at 12:06:58
In Reply To: Coin flipping puzzler posted by gremlinn on Friday, September 12, 2003, at 17:30:26:

> Here's a math puzzle that's a lot easier to solve than it might seem at first. Can anyone do it?
>
> A number p is picked (by someone else) at random from a uniform distribution on the interval (0,1). Then a coin is crafted which, when tossed, results in heads with probability p and tails with probability 1-p. The coin is given to us without us being told what p is, and we have no way of deducing any additional information on the value of p besides experimentation.
>
> Given what we know, and since the problem's symmetric with respect to heads/tails, the chance of getting a heads is 1/2 on the first flip. I.e., we're 50% confident of getting heads when we're just about to make the first flip.
>
> The question is this: if we keep flipping the coin and getting heads, after which flip do we become 99% confident (or higher) that the next flip will also be heads?

The trick I used was to replace the experiment of flipping the coin (which is discrete, as it has just two outcomes) with an equivalent *continuous* one. This is often a bad idea, since discrete outcomes are usually more convenient for calculating probabilities, but it works out really nicely this time, for a couple of reasons. For the following, let N be the number of coin flips done so far.

So, after p is chosen, instead of doing the first coin toss, we instead *simulate* the coin toss by picking another random number from a uniform distribution on (0,1) and calling the result "heads" if it's less than p, "tails" otherwise. Obviously this still gives a probability of p for heads and (1-p) for tails. Then simulate the other (N-1) flips likewise, picking them all independently from uniform distributions on (0,1).
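Here's a quick Python sketch of that simulation (my own illustration, not part of the original argument): drawing uniforms on (0,1) and calling a draw "heads" when it falls below p reproduces a p-biased coin.

```python
import random

def simulate_flips(p, n, rng):
    """Simulate n tosses of a p-biased coin: draw a uniform number on
    (0,1) and call it "heads" when the draw is less than p."""
    return [rng.random() < p for _ in range(n)]

rng = random.Random(0)
flips = simulate_flips(0.7, 100_000, rng)
frac_heads = sum(flips) / len(flips)
print(frac_heads)  # close to p = 0.7
```

The function name and the choice p = 0.7 are just for illustration; any p in (0,1) works the same way.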

Now the probability that the (N+1)st coin toss will be "heads" is the probability that yet *another* number drawn from the uniform distribution on (0,1) will be less than p. Remember, though, that we're conditioning on the result that the first N "coin toss" results were less than p. Now if we include the determination of p, we have in actuality (N+1) independently drawn numbers from (0,1), with the first happening to be the largest.

The probability that the (N+1)st simulated coin toss (which is the (N+2)nd random pick of a number on (0,1)) is less than p (the first trial) is thus the same as the probability that the (N+2)nd trial is less than the *maximum* of the first (N+1) trials. This may seem pointless to say, but it's the key to solving the problem. For now it's irrelevant that the largest of the numbers happened to be the first. For (N+2) independent trials (see (*) below) on identical continuous distributions, the chance that the (N+2)nd trial will be *greater* than all the previous ones is 1/(N+2): any of the trials is equally likely to be the largest. And thus the probability that the (N+2)nd trial is *not* greater than all the previous ones (i.e., that it's less than the maximum of the first (N+1), which is p) is 1 - 1/(N+2). [It's important that the distributions are continuous, since we don't have to worry about ties in ranking the outcomes -- these happen with probability zero.]
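A quick numerical sanity check of the key fact (an illustrative sketch, again not part of the original argument): among k i.i.d. continuous draws, the last one is the largest with probability 1/k.

```python
import random

rng = random.Random(1)
k = 5            # total number of draws; in the post's notation, k = N + 2
trials = 200_000
count = 0
for _ in range(trials):
    draws = [rng.random() for _ in range(k)]
    if draws[-1] == max(draws):   # ties have probability zero here
        count += 1
print(count / trials)  # close to 1/k = 0.2
```

By symmetry, the same estimate comes out near 1/k no matter which position you check for the maximum.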

So the confidence of getting an additional head, after N successful "heads" tosses, is 1 - 1/(N+2). This first reaches 99% at N = 98: after 98 coin tosses, there's *exactly* a 99% confidence of getting one more "heads".
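If you'd rather check the answer empirically, here's a Monte Carlo sketch of the whole experiment (my own illustration, with a small N = 3 so that conditioning on all heads stays feasible); the formula predicts 1 - 1/(3+2) = 0.8.

```python
import random

rng = random.Random(2)
N = 3            # number of observed heads; prediction: 1 - 1/(N+2) = 0.8
runs = 500_000
cond_trials = 0  # experiments in which all N flips came up heads
cond_heads = 0   # ...and the (N+1)st flip was also heads
for _ in range(runs):
    p = rng.random()                                # p ~ Uniform(0,1)
    if all(rng.random() < p for _ in range(N)):     # condition on N heads
        cond_trials += 1
        cond_heads += rng.random() < p              # the (N+1)st flip
print(cond_heads / cond_trials)  # close to 0.8
```

Checking N = 98 the same way would work in principle, but only about one run in 100 survives the conditioning at that size, so the small-N case is a more practical check.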

For this problem, it was very important that p was chosen from a uniform distribution, as the key step used the fact that the (N+1) trials (made up of N coin tosses, plus the initial pick of p) were *identically* distributed. So a little bit of trickery was involved, but as you see, very little numerical work. It wouldn't be so easy if p were picked from a non-uniform distribution.
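As a cross-check by a different route (direct integration over the uniform prior, rather than the ranking trick used above): the chance of one more head after N heads is the ratio of the integral of p^(N+1) over (0,1) to the integral of p^N, which is (N+1)/(N+2), matching 1 - 1/(N+2). A tiny exact computation:

```python
from fractions import Fraction

def confidence(N):
    # Integral of p^(N+1) over (0,1) is 1/(N+2); integral of p^N is 1/(N+1).
    # Their ratio is (N+1)/(N+2), i.e. 1 - 1/(N+2).
    return Fraction(1, N + 2) / Fraction(1, N + 1)

print(confidence(98))  # 99/100, i.e. exactly 99%
```

Note that confidence(0) comes out to 1/2, agreeing with the 50% figure for the very first flip stated in the puzzle.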

(*) It might seem like the coin toss simulations are not independent of the choice of p. After all, the probabilities of "heads" and "tails" depend explicitly on p. However, the *real* outcome of the experiment is just what number in (0,1) was picked. The fact that the *interpretation* of that outcome as "heads" or "tails" depends on p is irrelevant. That's a second way that this method was sneaky, I guess.
