Last time we learned some rules for calculating probabilities. But we need a few more rules to get very far.
For example:
We say a coin is fair if it has probability 1/2 of landing heads up and probability 1/2 of landing tails up. What is the probability that if we flip two fair coins, both will land heads up?
Since each coin could land heads up or tails up, there are 4 events to consider here:

$$(H,H), \quad (H,T), \quad (T,H), \quad (T,T)$$

Here $H$ means ‘heads up’ and $T$ means ‘tails up’, and the first entry of each pair says what the first coin does.
It seems plausible that each should be equally likely. If so, each has probability 1/4. So then the answer to our question would be 1/4.
But this is plausible only because we’re assuming that what one coin does doesn’t affect what the other one does! In other words, we’re assuming the two coin flips are ‘independent’.
If the coins were connected in some sneaky way, maybe each time one landed heads up, the other would land tails up. Then the answer to our question would be zero. Of course this seems silly. But it’s good to be very clear about this issue… because sometimes one event does affect another!
For example, suppose there’s a 5% probability of rain each day in the winter in Riverside. What’s the probability that it rains two days in a row? Remember that 5% is 0.05. So, you might guess the answer is

$$0.05 \times 0.05 = 0.0025$$

or 0.25%.
But this is wrong, because if it rains one day, that increases the probability that it will rain the next day. In other words, these events aren’t independent.
But if two events are independent, there’s an easy way to figure out the probability that they both happen: just multiply their probabilities! For example, if the chance that it will rain today in Riverside is 5% and the chance that it will rain tomorrow in Singapore is 60%, the chance that both these things will happen is

$$0.05 \times 0.60 = 0.03$$

or 3%, if these events are independent. I could try to persuade you that this is a good rule, and maybe I will… but for now let’s just state it in a general way.
Independence
So, let’s make a precise definition out of all this! Suppose we have two sets of events, $X$ and $Y$. Remember that $X \times Y$, the Cartesian product of the sets $X$ and $Y$, is the set of all ordered pairs $(i,j)$ where $i \in X$ and $j \in Y$:

$$X \times Y = \{ (i,j) : \; i \in X, \; j \in Y \}$$

So, an event in $X \times Y$ consists of an event in $X$ and an event in $Y$. For example, if

$$X = \{ \text{rain today}, \ \text{no rain today} \}$$

and

$$Y = \{ \text{rain tomorrow}, \ \text{no rain tomorrow} \}$$

then $X \times Y$ has four elements, such as $(\text{rain today}, \ \text{no rain tomorrow})$.
Now we can define ‘independence’. It’s a rule for getting a probability distribution on $X \times Y$ from probability distributions on $X$ and $Y$:
Definition. Suppose $p$ is a probability distribution on a set of events $X$ and $q$ is a probability distribution on a set of events $Y$. If these events are independent, we use the probability distribution $p \times q$ on $X \times Y$ given by

$$(p \times q)(i,j) = p(i) \, q(j)$$

People often call this probability distribution $p q$ instead of $p \times q$.
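In code, this product rule is just one multiplication per pair of events. Here is a minimal sketch in Python (the dictionary representation and the function name are my own choices, not from the post):

```python
def product_distribution(p, q):
    """Given probability distributions p on X and q on Y (as dicts mapping
    events to probabilities), return the product distribution p x q on the
    Cartesian product X x Y, assuming the events are independent."""
    return {(i, j): p[i] * q[j] for i in p for j in q}

# The rain example: 5% chance of rain today in Riverside,
# 60% chance of rain tomorrow in Singapore.
p = {'rain': 0.05, 'no rain': 0.95}
q = {'rain': 0.60, 'no rain': 0.40}

pq = product_distribution(p, q)
print(round(pq[('rain', 'rain')], 4))   # 0.05 * 0.60 = 0.03
print(round(sum(pq.values()), 4))       # the four probabilities sum to 1
```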
Examples
Example 1. Suppose we have a fair coin. This means we have a set of events

$$X = \{ H, T \}$$

and a probability distribution $p$ with

$$p(H) = p(T) = \frac{1}{2}$$

Now suppose we flip it twice. We get a set of four events:

$$X \times X = \{ (H,H), \ (H,T), \ (T,H), \ (T,T) \}$$

Suppose the two coin flips are independent. Then we describe the pair of coin flips using the probability measure $p \times p$ on $X \times X$, with

$$(p \times p)(i,j) = p(i) \, p(j) = \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4}$$

So, each of the four events—“heads, heads” and so on—has probability 1/4. This is fairly boring: you should have known this already!
But now we can do a harder example:
Example 2. Suppose we have an unfair coin that has a 60% chance of landing heads up and a 40% chance of landing tails up. Now we have a new probability distribution on $X = \{H, T\}$, say $q$:

$$q(H) = 0.6, \quad q(T) = 0.4$$

Now say we flip this coin twice. What are the probabilities of the four different events that can happen? Let’s assume the two coin flips are independent. This means we should describe the pair of coin flips with the probability measure $q \times q$ on $X \times X$:

$$(q \times q)(i,j) = q(i) \, q(j)$$

This tells us the answer to our question. We can work it out:

$$(q \times q)(H,H) = 0.6 \times 0.6 = 0.36$$

$$(q \times q)(H,T) = 0.6 \times 0.4 = 0.24$$

$$(q \times q)(T,H) = 0.4 \times 0.6 = 0.24$$

$$(q \times q)(T,T) = 0.4 \times 0.4 = 0.16$$
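If you’d like to check this arithmetic mechanically, a few lines of Python will do it (the variable names here are mine, not the post’s):

```python
q = {'H': 0.6, 'T': 0.4}

# The product distribution q x q for two independent flips of the unfair coin.
qq = {(i, j): q[i] * q[j] for i in q for j in q}

for event in [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]:
    print(event, round(qq[event], 2))
# ('H', 'H') 0.36
# ('H', 'T') 0.24
# ('T', 'H') 0.24
# ('T', 'T') 0.16
```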
Puzzle 1. In this situation what is the probability that when we flip the coin twice it comes up heads exactly once?
Puzzle 2. In this situation what is the probability that when we flip the coin twice it comes up heads at least once?
For these puzzles you need to use what I told you in the section on ‘Probabilities of subsets’ near the end of Part 7.
Puzzle 3. Now suppose we have one fair coin and one coin that has a 60% chance of landing heads up. The first one is described by the probability distribution $p$, while the second is described by $q$. How likely is it that the first lands heads up and the second lands tails up? We can answer questions like this if the coin flips are independent. We do this by multiplying $p$ and $q$ to get a probability measure $p \times q$ on $X \times X$. Remember the rule for how to do this:

$$(p \times q)(i,j) = p(i) \, q(j)$$

where each of $i$ and $j$ can be either $H$ or $T$. What are these probabilities?
Puzzle 4. In this situation what is the probability that exactly one coin lands heads up?
Puzzle 5. In this situation what is the probability that at least one coin lands heads up?
Next time we’ll go a lot further…

It should be pointed out that if you believe that a coin is weighted but that each flip is independent, you can still use that coin to make a 50/50 decision by flipping it twice. The point is that the probabilities of heads-tails and tails-heads are (in the language of the post)

$$p(H) \, p(T)$$

and

$$p(T) \, p(H)$$

respectively, but multiplication (of numbers, anyway!) is commutative, and so each of these possibilities is equally likely. If it comes up heads twice or tails twice, try again.
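That trick (often attributed to von Neumann) is easy to simulate. Here’s a rough Python sketch; the 60% bias below is just an arbitrary choice for illustration:

```python
import random

def biased_flip(p_heads):
    """One flip of a weighted coin that lands heads with probability p_heads."""
    return 'H' if random.random() < p_heads else 'T'

def fair_flip_from_biased(p_heads):
    """Flip the weighted coin twice; heads-tails and tails-heads both have
    probability p_heads * (1 - p_heads), so they can serve as the two fair
    outcomes. On heads-heads or tails-tails, try again."""
    while True:
        first, second = biased_flip(p_heads), biased_flip(p_heads)
        if first != second:
            return first

flips = [fair_flip_from_biased(0.6) for _ in range(100_000)]
print(flips.count('H') / len(flips))  # close to 0.5 despite the 60% bias
```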
Nice! That would make a good puzzle!
You can go further, and use the weighted independent coin to make a decision with any rational probability $p/q$.
Let $n$ be the smallest integer such that $2^n > q$. Use the above procedure to generate $n$ fair binary digits, forming a number $r$ that is equidistributed over the range 1 to $2^n$. If $r > q$, discard the digits and start again. Otherwise, you’ve now got a number between 1 and $q$ inclusive, equidistributed on that range, which will be between 1 and $p$ inclusive with probability $p/q$.
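Here’s a sketch of that procedure in Python, assuming we already have a source of fair bits (for instance, the two-flip trick above); `random.getrandbits` stands in for it here:

```python
import random

def bernoulli(p, q):
    """Return True with probability p/q (0 <= p <= q), using only fair bits.
    Let n be the smallest integer with 2**n > q; draw n fair bits to get a
    number r equidistributed over 1..2**n, retrying whenever r > q."""
    n = q.bit_length()  # smallest n with 2**n > q
    while True:
        r = 1 + sum(random.getrandbits(1) << k for k in range(n))
        if r <= q:           # r is now equidistributed over 1..q
            return r <= p    # happens with probability exactly p/q

trials = 50_000
frac = sum(bernoulli(1, 3) for _ in range(trials)) / trials
print(frac)  # close to 1/3
```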
Off-hand, I don’t see any way arbitrary irrational probabilities can be generated using a procedure which, like the ones above, on average takes only a finite number of steps, using an independent possibly weighted coin. If the chance of the coin coming up heads happens to be irrational, you can generate some set of irrational probabilities (which set?) but not, I think, all of them.
You might be interested in the work of Penrose where he uses quantum spins to get probabilities without ‘fine-tuning the angle of measurement’… but only rational probabilities:
• Roger Penrose, Angular momentum: an approach to combinatorial space-time, in Quantum Theory and Beyond, ed. T. Bastin, Cambridge University Press, Cambridge, 1971. Available in PDF and Postscript.
typo: the definition of the set Y should be about “tomorrow”, not “today”.
Thanks! There were no clever ideas or jokes in this part, I’m afraid…
Last time we talked about independence of a pair of events, but we can easily go on and talk about independence of a longer sequence of events. For example, suppose we have three coins. […]
Now let’s work out the expected value of the payoff to each player. To do this, we’ll assume:
1) Player A uses mixed strategy $p$.

2) Player B uses mixed strategy $q$.
3) Player A and player B’s choices are independent.
If you forget what ‘independent’ means, look at Part 8. […]