Here’s a puzzle from a recent issue of Quanta, an online science magazine:
Puzzle 1: I write down two different numbers that are completely unknown to you, and hold one in my left hand and one in my right. You have absolutely no idea how I generated these two numbers. Which is larger?
You can point to one of my hands, and I will show you the number in it. Then you can decide to either select the number you have seen or switch to the number you have not seen, held in the other hand. Is there a strategy that will give you a greater than 50% chance of choosing the larger number, no matter which two numbers I write down?
At first it seems the answer is no. Whatever number you see, the other number could be larger or smaller. There’s no way to tell. So obviously you can’t get a better than 50% chance of picking the hand with the larger number—even if you’ve seen one of those numbers!
But “obviously” is not a proof. Sometimes “obvious” things are wrong!
It turns out that, amazingly, the answer to the puzzle is yes! You can find a strategy to do better than 50%. But the strategy uses randomness. So, this puzzle is a great illustration of the power of randomness.
If you want to solve it yourself, stop now or read Quanta magazine for some clues—they offered a small prize for the best answer:
• Pradeep Mutalik, Can information rise from randomness?, Quanta, 7 July 2015.
Greg Egan gave a nice solution in the comments to this magazine article, and I’ll reprint it below along with two followup puzzles. So don’t look down there unless you want a spoiler.
I should add: the most common mistake among educated readers seems to be assuming that the first player, the one who chooses the two numbers, chooses them according to some probability distribution. Don’t assume that. They are simply arbitrary numbers.
The history of this puzzle
I’d seen this puzzle before—do you know who invented it? On G+, Hans Havermann wrote:
I believe the origin of this puzzle goes back to (at least) John Fox and Gerald Marnie’s 1958 betting game ‘Googol’. Martin Gardner mentioned it in his February 1960 column in Scientific American. Wikipedia mentions it under the heading ‘Secretary problem’. Gardner suggested that a variant of the game was proposed by Arthur Cayley in 1875.
Actually the game of Googol is a generalization of the puzzle that we’ve been discussing. Martin Gardner explained it thus:
Ask someone to take as many slips of paper as he pleases, and on each slip write a different positive number. The numbers may range from small fractions of 1 to a number the size of a googol (1 followed by a hundred 0s) or even larger. These slips are turned face down and shuffled over the top of a table. One at a time you turn the slips face up. The aim is to stop turning when you come to the number that you guess to be the largest of the series. You cannot go back and pick a previously turned slip. If you turn over all the slips, then of course you must pick the last one turned.
So, the puzzle I just showed you is the special case when there are just 2 slips of paper. I seem to recall that Gardner incorrectly dismissed this case as trivial!
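For readers who want to experiment with the full game, here is a quick simulation sketch (my own, not from Gardner) of the classical cut-off rule for Googol when the number of slips $n$ is known: pass over roughly the first $n/e$ slips, then stop at the first slip that beats everything seen so far. For large $n$ its success probability approaches $1/e \approx 0.37$; the uniform random slip values below are just a convenient stand-in for arbitrary distinct numbers.

import math
import random

def play_googol(n, trials=100_000):
    # Estimate the success probability of the 1/e cut-off rule with n slips.
    cutoff = int(n / math.e)
    wins = 0
    for _ in range(trials):
        slips = [random.random() for _ in range(n)]          # any distinct numbers will do
        best_seen = max(slips[:cutoff], default=float("-inf"))
        choice = slips[-1]                                    # forced to take the last slip if we never stop
        for x in slips[cutoff:]:
            if x > best_seen:
                choice = x
                break
        wins += (choice == max(slips))
    return wins / trials

print(play_googol(20))   # roughly 0.38 for n = 20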
There’s been a lot of work on Googol. Julien Berestycki writes:
I heard about this puzzle a few years ago from Sasha Gnedin. He has a very nice paper about this
• Alexander V. Gnedin, A solution to the game of Googol, Annals of Probability (1994), 1588–1595.
One of the many beautiful ideas in this paper is that it asks what is the best strategy for the guy who writes the numbers! It also cites a paper by Gnedin and Berezowskyi (of oligarchic fame).
Egan’s solution
Okay, here is Greg Egan’s solution, paraphrased a bit:
Pick some function $f$ from the real numbers to the real numbers such that:
• $\lim_{x \to -\infty} f(x) = 0$
• $\lim_{x \to +\infty} f(x) = 1$
• $f$ is strictly increasing: if $x < y$ then $f(x) < f(y)$
There are lots of functions like this, for example
$$f(x) = \frac{e^x}{e^x + 1}.$$
Next, pick one of the first player’s hands at random. If the number you are shown is $x$, compute $f(x)$. Then generate a uniformly distributed random number $z$ between 0 and 1. If $z$ is less than or equal to $f(x)$, guess that $x$ is the larger number, but if $z$ is greater than $f(x)$, guess that the larger number is in the other hand.
The probability of guessing correctly can be calculated as the probability of seeing the larger number initially and then, correctly, sticking with it, plus the probability of seeing the smaller number initially and then, correctly, choosing the other hand.
Say the larger number is $a$ and the smaller one is $b$. Then the probability of guessing correctly is
$$\tfrac{1}{2} f(a) + \tfrac{1}{2} \bigl(1 - f(b)\bigr) \; = \; \tfrac{1}{2} + \tfrac{1}{2} \bigl(f(a) - f(b)\bigr).$$
This is strictly greater than $\tfrac{1}{2}$, since $a > b$ and $f$ is strictly increasing, so $f(a) > f(b)$.
So, you have a more than 50% chance of winning! But as you play the game, there’s no way to tell how much more than 50%. If the numbers in the other player’s hands are both very large, or both very small, your chance will be just slightly more than 50%.
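To make this concrete, here is a minimal simulation sketch (my own illustration, not part of the original solution), using the logistic function as one admissible choice of $f$. The pairs of numbers below are arbitrary; each estimated success rate should come out above 0.5, by a margin that shrinks as the pair moves out into a tail of $f$.

import math
import random

def f(x):
    # A strictly increasing map into (0,1); the logistic function is one valid choice.
    return 1.0 / (1.0 + math.exp(-x))

def play_once(pair):
    # Play one round against a fixed pair of distinct numbers.
    shown, hidden = random.sample(pair, 2)   # pick a hand at random
    z = random.random()                      # uniform random number in [0,1)
    keep = (z <= f(shown))                   # stick with probability f(shown)
    chosen = shown if keep else hidden
    return chosen == max(pair)

def success_rate(pair, trials=200_000):
    return sum(play_once(pair) for _ in range(trials)) / trials

for pair in [(-3.0, 2.0), (-1.0, 1.0), (4.0, 5.0)]:
    print(pair, round(success_rate(pair), 4))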
Followup puzzles
Here are two more puzzles:
Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.
Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?
But watch out—here come Egan’s solutions to those!
Solutions
Egan writes:
Here are my answers to your two puzzles on G+.
Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.
Answer: If we adopt a deterministic strategy, that means there is a function
$$S \colon \mathbb{R} \to \{0,1\}$$
that tells us whether or not we stick with the number $x$ when we see it. If $S(x) = 1$ we stick with it, if $S(x) = 0$ we swap it for the other number.
If the two numbers are $x$ and $y$, with $x > y$, then the probability of success will be:
$$\tfrac{1}{2} S(x) + \tfrac{1}{2} \bigl(1 - S(y)\bigr) \; = \; \tfrac{1}{2} + \tfrac{1}{2} \bigl(S(x) - S(y)\bigr).$$
This is exactly the same as the formula we obtained when we stuck with the number we saw, $x$, with probability $f(x)$, but we have specialised to functions $S$ valued in $\{0,1\}$.
We can only guarantee a more than 50% chance of choosing the larger number if $S$ is monotonically increasing everywhere, i.e. $S(x) > S(y)$ whenever $x > y$. But this is impossible for a function valued in $\{0,1\}$.
To prove this, define $x_0$ to be any number in $\mathbb{R}$ such that $S(x_0) = 1$; such an $x_0$ must exist, otherwise $S$ would be constant on $\mathbb{R}$ and hence not monotonically increasing. Similarly, let $x_1$ be any number in $(x_0, \infty)$. We then have $x_1 > x_0$ but $S(x_1) \le 1 = S(x_0)$, so $S$ is not monotonically increasing after all.
Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?
Answer: As Philip Gibbs noted, a deterministic pseudo-random number generator is still deterministic. Using a specific sequence of algorithmically random bits $b_1, b_2, b_3, \dots$ to construct a number $z$ between $0$ and $1$ means $z$ takes on the specific value:
$$z_0 = \sum_{k=1}^{\infty} \frac{b_k}{2^k}.$$
So rather than sticking with $x$ with probability $f(x)$ for our monotonically increasing function $f$, we end up always sticking with $x$ if $z_0 \le f(x)$, and always swapping if $z_0 > f(x)$. This is just using a function $S$ as in Puzzle 2, with:
$$S(x) = 1 \text{ if } f(x) \ge z_0, \qquad S(x) = 0 \text{ if } f(x) < z_0.$$
So all the same consequences as in Puzzle 2 apply, and we cannot guarantee a more than 50% chance of choosing the larger number.
Puzzle 3 emphasizes the huge gulf between ‘true randomness’, where we only have a probability distribution over numbers, and the situation where we have a specific number $z_0$ generated by any means whatsoever.
We could generate $z_0$ using a pseudorandom number generator, radioactive decay of atoms, an oracle whose randomness is certified by all the Greek gods, or whatever. No matter how randomly $z_0$ is generated, once we have it, we know there exist choices for the first player that will guarantee our defeat!
This may seem weird at first, but if you think about simple games of luck you’ll see it’s completely ordinary. We can have a more than 50% chance of winning such a game even if for any particular play we make the other player has a move that ensures our defeat. That’s just how randomness works.
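To illustrate this numerically, here is a small sketch of my own (again using the logistic function to stand in for $f$): once the value $z_0$ is fixed and known, the first player can put both numbers on the same side of the cut-off, and the guesser’s success rate falls to exactly 50%.

import math
import random

def f(x):
    return 1.0 / (1.0 + math.exp(-x))   # the same strictly increasing f as before

z0 = 0.5                 # a fixed, known value standing in for the 'random' number
pair = (1.0, 2.0)        # both chosen so that f(x) >= z0, i.e. both above the cut-off f^{-1}(z0) = 0

def play_once():
    shown, hidden = random.sample(pair, 2)
    keep = (z0 <= f(shown))              # with z0 fixed this is deterministic: here, always keep
    chosen = shown if keep else hidden
    return chosen == max(pair)

trials = 200_000
print(sum(play_once() for _ in range(trials)) / trials)   # hovers around 0.5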
I’m not sure I agree with this analysis. Don’t we have to marginalize over all possible choices of $f$? Alternatively, if we fix $f$, don’t we have to take into account the probability that the numbers chosen are “very large” or “very small”? How do we know that such considerations won’t force the probability back down to exactly 50%?
Joshua wrote:
No, we pick any specific $f$ obeying the three rules listed, and prove that this gives a way of winning with probability more than 50%, no matter what numbers the first player has chosen. The proof takes this into account.
The winning probability can be driven arbitrarily close to 50%, but it will necessarily exceed 50%.
This is similar to a lottery, also going back to Gardner, with a prize of 1 billion (or whatever) divided by the total number of tickets sold. People could enter by sending a postcard with the number of tickets they wanted.
I’m familiar with the secretary problem in the form of optimizing your wife, in which the term “underlying distribution” takes on a whole new meaning. :-)
Here are three book recommendations. One: “Are You Smart Enough to Work at Google?” by Poundstone. It is a very fun read. He covers the large-n version of the puzzle you mention, but not this very interesting case. The book is written in the context of job-interview puzzle questions, which seem to be the rage at certain companies.
I read it not so much for the puzzles (turns out I am not smart enough to work at Google; I needed to read the answers), but rather to understand how smart companies hire quality employees (don’t hire people who waste time on obscure math blogs when they should be mopping the floor). It also explains, by the way, why we ‘google’ something rather than ‘googol’ it. As it happens, you can be a brilliant business person and a bad speller. Sadly the converse is not true.
Here is my favorite puzzle from the book: You have 367 students in your pre-Calc section take a quiz. They are scored from 0–100. Write the pseudo-code for an algorithm to sort the papers by score. Hint: your mother will do a better job than you. (If you think I got the wording wrong, and wording is everything in puzzles, you may be right. Read the book to find out!)
What I really wanted to ask here in this forum is this: why am I so bad at this puzzle, but seemingly so good at my job? I am an engineer. I spend lots of time using math. In fact, sitting on my desk next to me is “Statistics for Experimenters” (book recommendation #2). I use this about 2–3 times a year. I think I understand it. I design about 10 experiments a year, run them, and analyze the results. Based on those results, I change what I do in my factory. People pay me money to use statistics and probability. So why, when I am confronted with this puzzle, is it so completely alien to me? This is something that baffles me. Is it the case that there are opportunities to think like this all the time in my job, but I am too dumb to even formulate the problem in a way that would allow me to do so? Or is it the case that puzzles like this are just few and far between in the real world?
Last week I saw a truck stuck under a bridge, jammed tight, a rental of course! I wanted to shout: hey, do you want a good puzzle? Instead I just gave them the answer (do you know what it is?). Those puzzles come up all the time for me. But never something like the one this post is about. Any thoughts as to why? Anyone?
Final book recommendation: “Fortune’s Formula”, also by Poundstone. A great read about some amazing mathematicians, including Shannon. These guys worked at AT&T when it was the cool-Google-place-to-be, and they sure seemed to use cleverness all the time to solve problems.
Nice comment!
I enjoyed Poundstone’s book Labyrinths of Reason, and may have looked at Fortune’s Formula in a book store, but not Are You Smart Enough to Work at Google?. I should try it.
I have a student, by the way, who is smart enough to work at Google. I’m reading and correcting his PhD thesis now, even though I’m not smart enough—or something enough—to work at Google myself.
I suspect the reason you find the puzzle here tough is that it deals with a situation that doesn’t come up much in practice. Furthermore, it’s one in which the goal is to win more than 50% of the time no matter what, but possibly by a microscopic amount more than 50%.
In real-life situations one is more interested in doing better by a significant amount in the average case, or perhaps doing better by a significant amount in every situation, rather than doing better in every situation but possibly by only a microscopic amount.
So here’s my question: can one prove that, for every $\epsilon > 0$, there is no strategy that guarantees a winning probability greater than $\tfrac{1}{2} + \epsilon$? To me that seems like a more “realistic/effective” version of the puzzle.
Yes, that’s pretty easy to prove.
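Here is one way to see this, at least for strategies of the kind considered above, where we stick with the number we are shown with probability $p$ of that number (a sketch, not a full treatment of every conceivable strategy). To guarantee a winning probability above $\tfrac{1}{2} + \epsilon$ for every pair we would need
$$\tfrac{1}{2} + \tfrac{1}{2}\bigl(p(x) - p(y)\bigr) > \tfrac{1}{2} + \epsilon, \quad \text{i.e.} \quad p(x) - p(y) > 2\epsilon \quad \text{whenever } x > y.$$
But then along any chain $x_0 < x_1 < \dots < x_n$ with $n > 1/(2\epsilon)$ we would get $p(x_n) > p(x_0) + 2n\epsilon > 1$, which is impossible for a probability.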
It is an interesting problem, with a beautiful solution.
I have a problem: if the numbers in the hands have a finite number of digits, then the function $f$ must be 0 at the minimum value in the hands and 1 at the maximum value in the hands, so that there is not a tail of the distribution that fails to cover the random number (for example, $z$ can be generated in a region where $f^{-1}$ contains none of the numbers in the hands, with the limiting case of a single digit); if the number of digits is infinite, then the reading time is infinite and there is no solution in a finite time, so I think that there is not a real solution.
I don’t understand this ‘problem’. The length of time it takes to compute what to do might perhaps be unbounded, but it’s definitely finite.
I thought that it can happen that two random numbers have the same digits up to the n-th digit, and if n tends to infinity then the reading time tends to infinity (if each digit comparison takes a finite time), and there is not a real solution; the probability that this happens is infinitesimal, but the game rules should apply to all cases.
I think you’re actually agreeing with me: the time it takes to do the computation is unbounded (that is, it can be arbitrarily large) but in any particular case it is finite. Similarly, the probability of winning with this strategy can be arbitrarily close to 50%, but it’s always greater than 50%. This counts as a valid solution to the problem as posed.
Incidentally I posted a related comment regarding the Googol game at Quanta. Gnedin’s solution essentially rules out any improvement on the 1/e-law that is based on the relative ranks when n>2, so the one who is guessing would gain nothing by using randomized strategies, as opposed to the n=2 case. This always seems puzzling to me and I have been wondering if there is any intuitive explanation of why the absolute values observed carry no information for n>2.
Yes, I find that odd, and I would need to carefully check Gnedin’s work to see if it’s really true that randomized strategies can’t help for $n > 2$.
Hmmm. I would choose the right hand.
The argument for Puzzle 3 does not cut the cake. Let’s restate the setting:
Player A chooses two numbers to write on slips.
Player B chooses a function $f$.
Because our players live in a finitary world, the numbers could be bounded-length (e.g. 64-bit) floating point numbers, or maybe a description of a computer program which writes subsequent bits of a real number onto a tape.
I am player B. I generate a nice random number $r$, choose some strictly increasing $f$, and use my favorite cryptographic hash function to derive the cut-off from $r$ and the number I am shown, so I basically set $z$ deterministically from $r$ and that number. Player A knows that I run this strategy, but does not know $r$. Then there is no way of choosing two numbers to write on the slips which prevents player B from having a better than 50% chance.
Player A knows that I run this strategy. Player A has bounded computational resources. We iterate this game, with $r$ fixed. However, Player A is disallowed from using the same number twice across all iterations. If the cryptographic hash function is any good, then Player B is still guaranteed more than 50%.
This technique is often used for de-randomization of algorithms.
Argh, sorry. The function should be built from the hash of $r$ together with the number I am shown, so that the cut-off is well defined for every number I might see.
Because the resulting function $S$ still has the codomain $\{0,1\}$, there must still exist two numbers $x > y$ such that $S(x) \le S(y)$. It might well be that player A is incapable of discovering such a pair of numbers, but the puzzle demands a strategy from player B that works “no matter which two numbers I write down”, which is a universal quantifier on all pairs of numbers, not on pairs of numbers that A needs to find by some kind of deliberate, adversarial strategy.
I think it’s interesting to consider lots of variations of this game, to help build our intuitions. However, this only works if we realize the variations are variations! Maybe the point here is that algorithmic randomness becomes as good as randomness when we impose some restrictions on how player A can choose his numbers?
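For concreteness, here is a rough sketch of how the hash-based idea above might be implemented. This is my own guess at the construction: the choice of hash, the encoding of the key and the shown number, and the comparison rule are all assumptions, not something specified in the comment. As the reply points out, once the key is fixed this is a deterministic rule of the kind covered by Puzzle 2, so losing pairs exist even if player A cannot feasibly find them.

import hashlib
import math
import random

def f(x):
    return 1.0 / (1.0 + math.exp(-x))   # strictly increasing map into (0,1)

secret_key = random.getrandbits(128)     # player B's private randomness r

def cutoff(x):
    # Pseudo-random value in [0,1), derived deterministically from the key and the number shown.
    digest = hashlib.sha256(f"{secret_key}|{x!r}".encode()).digest()
    return int.from_bytes(digest, "big") / 2**256

def stick_with(x):
    # Deterministic (given the key) stick-or-swap rule: keep x iff its derived cut-off is at most f(x).
    return cutoff(x) <= f(x)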
Thanks for sharing this John (and Greg). I find the 1st puzzle’s solution very counterintuitive – it seems to violate some bogus assumption I’ve been making that’s so implicit that I can’t yet even state it. Fascinating.
Nit: John, the otherwise-great presentation of Greg’s solution was a bit confusing to me, because it seems to define x inconsistently. In the paragraph beginning “Next, pick one of the first player’s hands…”), x denotes the number in the hand initially picked (whether or not it’s the larger of the two numbers); while in the next paragraph, x denotes the larger of the two numbers (whether or not it’s in the hand initially picked).
Thanks for catching that mistake—I fixed it.
The assumption is something like “learning one of two arbitrarily chosen distinct numbers provides no help in deciding which is larger,” but maybe one can pinpoint it more clearly.
Greg’s solution becomes more intuitive if looked at this way: the first player (say “he”) shows the second player (“she”) a number. If she thinks the number is “small”, obviously she should switch to the other hand. But if she thinks the number is “large”, she should stick with it.
Now, how can we make the concepts of “looks small” and “looks large” precise? Greg’s function $f$ and the randomly chosen $z$ serve that purpose: the number $x$ “looks small” precisely when $f(x) < z$, and so of course $x$ “looks large” when $f(x) \ge z$.
While this may dispel some of the puzzlement, it engenders a new mystery: why do we need the random $z$ at all? Well, as usual in game theory, randomization immunizes a player against the opponent guessing her strategy. So it’s not so surprising that the deterministic version has a different result.
By the way: I’ve usually seen “monotonically increasing” used in contrast to “strictly increasing”, so for monotonically increasing, $x < y$ implies merely that $f(x) \le f(y)$. (E.g., "baby Rudin" defines it this way.)
I agree that ‘monotonically increasing’ should mean
$$x \le y \implies f(x) \le f(y).$$
This is equivalent to the condition
$$x < y \implies f(x) \le f(y),$$
but it’s a more reasonable way to state that condition, since it’s saying $f$ preserves something, namely the relation $\le$.
The stronger condition that $f$ be ‘strictly increasing’,
$$x < y \implies f(x) < f(y),$$
is what we need in Greg’s solution, so I’ll change my wording in the post.
In the grand scheme of mathematics, not this puzzle, the condition of being ‘monotone increasing’ is more fundamental. A category with at most one morphism between any two objects is called a preorder: a slight generalization of a partially ordered set, where we drop the condition that $x \le y$ and $y \le x$ imply $x = y$. The idea is that we write $x \le y$ when there is a (necessarily unique) morphism from $x$ to $y$. A functor between two such categories is then the same as a monotone increasing map! So, this is close to the foundations of mathematics, at least if you think category-theoretically.
Michael: what you describe is why I like this way of expressing the solution. You can sum up the entire strategy as: Ensure that, the greater the number you see, the greater your chance of sticking with it. It’s very easy to see why that should work, and not much harder to see how to make it happen.
The alternative formulation in your other comment is the version they preferred on the Quanta web site, and apparently the way the solution was first given historically (using a Gaussian, I think). And though it’s not too hard to follow when you break it down into cases, I still think the one-sentence slogan above makes the other formulation more intuitively appealing.
0) I think that’s right, John. It seemed incredible that seeing one of the two numbers could always provide useful information concerning which number of the pair is bigger. Following your lead, I tried to make this more concrete.
1) I was struck by the fact that Greg’s strategy makes no assumptions whatsoever about Player 1’s algorithm for choosing these numbers. It works regardless of the algorithm or the numbers chosen.
2) So I decided to test it on a specific scenario, and ran into trouble right off the bat: Say I’m Player 1, and I happen to like three specific real numbers a < b < c, but I like b the best. So my algorithm is to pick either {a, b} or {b, c} based on a truly fair coin flip, then randomly place one number from the chosen pair in each of my hands. And say that Player 2 (Greg, following his patented solution algorithm) randomly picks the one of my hands which contains b.
3) Given my algorithm, you know that my other hand must contain either a or c, each with probability 0.5. Does Greg, with b as his only input, have any way of picking the hand containing the larger number with probability > 0.5? No! Greg can compute with that number, he can throw darts at it, he can stare at it until the cows come home, but it won’t help him one bit, because unless my poker face cracks, there’s no way for him to predict (ok, retrodict) the flip of a fair coin.
4) But Greg’s solution claims an expected winning percentage greater than 50% regardless of Player 1’s algorithm or the specific numbers chosen. So what gives? I looked in vain for an error in Greg’s solution, but came away more convinced than ever of its correctness. I rethought my specific scenario, but it’s pretty straightforward – you just can’t predict a fair coin flip. Until I saw my error, my confusion was near-total; it was great.
5) Thanks again John/Greg for sharing this fine puzzle. I’m sure that much more confusion is in store, because I still find aspects of it bizarre. I can’t wait:-)
I know you understand what the error is, but to spell it out for anyone reading along casually: if you assume that Player 2 picked the hand that contains b, then you’re only telling half the story of the strategy’s probability of success, which also relies on the counterfactual possibility that Player 2 might have picked the other hand, containing either a or c.
In a game where there are several mutually exclusive events $E_i$ that can occur with non-zero probabilities $P(E_i)$, the probability of success for a certain strategy is the sum:
$$\sum_i P(E_i)\, P(\text{success} \mid E_i).$$
But although that formula is pretty obvious, I think the counterintuitive thing is that the overall value of the strategy can depend on the relationship between more than one of the conditional probabilities $P(\text{success} \mid E_i)$, which in turn depend on mutually exclusive events.
So the fact that in any particular instance of the game we only ever see one of the two numbers does not prevent both numbers from entering into the evaluation of the strategy’s probability of success. The bird’s-eye view from which we judge the strategy, in which all possibilities are taken into account, has access to more information than a player employing the same strategy can access in any single instance of the game.
Thanks Greg! What baffled me was that your argument (which effectively defines each Ei to be the event that Player 1 chooses a specific pair {x,y} of values, and proves that P(success|Ei) > 0.5 for each Ei) seemed to contradict my example.
Only much later did I realize that my example effectively slices the universe along different lines (into a set of Ei’, each defined as the event that Player 2 initially chooses a hand containing a specific value z). Since your argument deals only in Ei, and makes no claims concerning my example’s Ei’, the apparent conflict melts away – they can both be right (and as far as I can tell, are).
Twitter version:
Old arch1’s brain finally cleared
Ei? Ei’! Oh:-)
Just to be clear: I should have said that what I meant by the mutually exclusive events $E_i$ in the context of this game were the two events of Player 2 choosing either of the two hands. It’s not the case that each of the conditional probabilities $P(\text{success} \mid E_i)$ is necessarily greater than 50%. Rather, it’s only their probability-weighted sum that’s always greater than 50%, and that’s why the strategy is successful overall, in spite of the fact that the individual $P(\text{success} \mid E_i)$, corresponding to each choice of hand, need not be.
There might or might not be ways of attributing probabilities to the choice of two numbers by Player 1 (you give an example where this can be done, but in general it can’t). However, because the puzzle asks us to come up with a strategy that works for every individual pair of distinct numbers, it’s better to treat that pair as fixed but unknown, rather than subject to probabilities.
Thanks Greg. I didn’t do a good job of explaining my precise point of confusion. On the chance others share it, I’ll try describing it differently.
It involves two different kinds of conditional probability:
a) The probability of Greg’s strategy succeeding, conditional on Player 1 picking two specific numbers
b) The probability of Greg’s strategy succeeding, conditional on Player 2 initially picking a hand containing a specific number
My would-be counterexample concerned only a conditional probability of type b), which I showed equal to 0.5.
I saw this counterexample as contradicting Greg’s solution, since Greg’s solution proves that the conditional probability of his strategy succeeding exceeds 0.5 regardless of the specific numbers involved (it “works for every individual pair of distinct numbers”), and regardless of Player 1’s method of choosing those two numbers.
My mistake was in not realizing that Greg’s solution concerned only conditional probabilities of type a), NOT ones of type b) (regarding which it was silent). So there was no contradiction, after all (whew!).
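A quick simulation sketch (my own; the particular values of a, b, c and the use of the logistic function for $f$ are arbitrary choices) shows both kinds of conditional probability at once: conditioned on b being the number revealed, the strategy wins only about half the time, yet conditioned on either specific pair being chosen, and hence overall, it wins noticeably more than half the time.

import math
import random

def f(x):
    return 1.0 / (1.0 + math.exp(-x))

a, b, c = -1.0, 0.0, 1.0          # Player 1's three favourite numbers, a < b < c

trials = 200_000
wins = 0
wins_given_b = 0
times_b_shown = 0

for _ in range(trials):
    pair = random.choice([(a, b), (b, c)])    # fair coin picks which pair goes into the hands
    shown, hidden = random.sample(pair, 2)    # fair coin picks the hand we open
    keep = (random.random() <= f(shown))      # Egan's randomized rule
    won = (shown if keep else hidden) == max(pair)
    wins += won
    if shown == b:
        times_b_shown += 1
        wins_given_b += won

print("overall success rate:         ", wins / trials)                   # about 0.62 here
print("success rate when b is shown: ", wins_given_b / times_b_shown)    # about 0.5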
Perhaps this makes no sense (though I remember working on some problems in statistical mechanics comparing the thermodynamic limit to the finite case), but I assumed one could choose integers between 0 and infinity. In that case, if you see any finite number F, you choose the other hand, because there are more integers between F and infinity. More could possibly be said about mapping this to [0,1].
Yet it is still interesting. The result is then: if A is omnipotent, then B has no deterministic strategy. But if B has access to an algorithmically random number, what is the computational power required for A to defeat any deterministic strategy chosen by B in a “reasonable” time?
Numberphile has a nice video about this puzzle: https://www.youtube.com/watch?v=ud_frfkt1t0
Here’s yet another way of looking at Greg’s solution. Instead of mapping $\mathbb{R}$ into $[0,1]$ with a strictly increasing $f$, and then choosing $z$ with a uniform random distribution, use $f$ to transfer the uniform distribution on $[0,1]$ to a distribution on $\mathbb{R}$. (To be precise: the probability measure of an interval $[a,b]$ is defined to be $f(b) - f(a)$.)
Choose $z$ randomly using this distribution. As in my earlier comment, the second player (“she”) uses $z$ to decide if numbers “look large” (greater than $z$) or “look small” (less than $z$). (Let’s ignore events with zero probability, like a number being exactly equal to $z$.) She sticks with the first number shown if and only if it “looks large”.
Writing $x$ for the larger number and $y$ for the smaller, we have three possibilities:
1) $z < y < x$
2) $y < z < x$
3) $y < x < z$
In cases (1) and (3), her judgment of $x$ and $y$ as small or large is the same: both large in case (1), both small in case (3). Which number is in which hand is a coin flip (otherwise “always choose the left (or right) hand” would be a winning strategy), so her strategy is a wash in these cases. But in case (2), her “bigger than $z$” method correctly assesses matters, and she wins. Case (2) happens with probability $f(x) - f(y)$.
In the deterministic variation, $S(x) = 1$ means she thinks $x$ “looks large”. So cases where $S(x) = S(y)$ are a wash, as before. If the first player happens to choose such a pair $(x, y)$ for his challenge, then she will win as often as she will lose.
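In code, this reformulation might look like the following sketch (my own illustration, using the logistic $f$, whose inverse is the logit function): draw the cut-off directly from the distribution whose cumulative distribution function is $f$, then keep the number shown exactly when it exceeds the cut-off.

import math
import random

def f(x):
    return 1.0 / (1.0 + math.exp(-x))     # CDF of the cut-off distribution

def f_inverse(u):
    return math.log(u / (1.0 - u))        # logit: sends uniform u in (0,1) to the cut-off

def play_once(pair):
    u = random.uniform(1e-12, 1.0 - 1e-12)     # stay away from the endpoints of (0,1)
    cut = f_inverse(u)                          # cut-off z with P(a <= z <= b) = f(b) - f(a)
    shown, hidden = random.sample(pair, 2)
    chosen = shown if shown > cut else hidden   # stick iff the shown number "looks large"
    return chosen == max(pair)

pair = (-0.5, 1.5)
trials = 200_000
print(sum(play_once(pair) for _ in range(trials)) / trials)
# should be close to 0.5 + 0.5 * (f(1.5) - f(-0.5)), i.e. a bit over 0.7 for this pair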
Greg, to prove the impossibility assertion in your solution to Puzzle 2, isn’t it enough to observe that the cardinality of S’s domain exceeds that of its range (so it can’t be 1-1, so it can’t be strictly increasing)?
Yes, that will do it.
From the Upshot, a New York Times column, I saw this last night. It is another “pick a number” puzzle, different from the game of Googol. But the graphics the authors used to present their data are very beautiful. Maybe done with R. Readers may enjoy:
Puzzle: Are You Smarter Than 58,164 Other New York Times Readers?
The problem description reminds me of the Monty Hall problem. Is this just the continuous analog?
I don’t see the similarity except for the obvious one: they’re probabilistic games where naively some information doesn’t seem to help you, but actually it does.
The game here relies crucially for its surprise value on the fact that the numbers range over an infinite line, so knowing the first one seems to help not at all in guessing whether the second will be larger or smaller. The Monty Hall problem, which is much more elementary, seems to rely for its surprise value on people’s poor understanding of conditional probabilities. But perhaps there are deeper connections.
I don't think that randomness is really the key here. In particular, when you write towards the beginning that “this puzzle is a great illustration of the power of randomness”, I don't think that this illustrates the power of randomness, because you don't need randomness to destroy the false intuition that 50% can't be beat. And when you write at the end that no matter how randomly $z_0$ is generated, once we have it, there exist choices for the first player that will guarantee our defeat, that's just wrong (unless by ‘defeat’ you mean a 50% chance of success).
As we know from Puzzle 2, there's no deterministic strategy that guarantees us more than 50% probability of success for every method that A might use to choose the numbers. However, there's a simple deterministic strategy that guarantees us at least 50% probability of success for every method that A might use to choose the numbers, while also giving us more than 50% probability of success (in fact, 100% probability!) for some methods that A might use to choose the numbers. So already the idea that 50% can't be beat is flawed.
The simplest such strategy is this: after looking at one number, keep it if it is positive (or zero) and switch if it is negative. If A chooses two positive numbers (or chooses two negative numbers), then we have a 50% chance of winning. But if A chooses one positive number and one negative number, then we have a 100% chance of winning. If A (supposing that A knows our strategy) wants to give us any specific probability between 50% and 100%, then A can choose the sign of the numbers randomly with appropriate odds. But nothing that A can do will reduce our probability of success below 50%.
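A quick simulation sketch of this keep-if-nonnegative strategy (my own; the sample pairs are arbitrary):

import random

def play_once(pair):
    shown, hidden = random.sample(pair, 2)
    chosen = shown if shown >= 0 else hidden   # keep nonnegative numbers, swap away from negative ones
    return chosen == max(pair)

def success_rate(pair, trials=100_000):
    return sum(play_once(pair) for _ in range(trials)) / trials

print(success_rate((3.0, 7.0)))    # both positive: about 0.5
print(success_rate((-4.0, -2.0)))  # both negative: about 0.5
print(success_rate((-1.0, 5.0)))   # mixed signs: 1.0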
Of course, there’s nothing special about zero here as the cut-off between keeping and switching. If I had to play this game in real life (and had to use a deterministic strategy), then I would probably estimate (on a psychological basis) the median number that A is likely to pick, and use that as my cut-off point. I expect that I would do substantially better than 50% in real life in this way, with no randomness. At any rate, I would do no worse.
What randomness gives us is flexibility. Since A might always pick very large (or small) numbers, we need to allow our cut-off point to be, at least possibly, arbitrarily large (or small). And we can't always use, say, an integer as the cut-off point, because A might pick two fractional numbers with no integer between them. So we choose the cut-off point randomly by some method (any method) that gives nonzero probability to every nontrivial interval of real numbers. This is what Greg Egan's solution theoretically accomplishes.
I say ‘theoretically’, because in practice, when we choose the random number $z$, even if we use HotBits because a pseudorandom number generator is not good enough, we're really only going to get $z$ to some level of precision. This means that the cut-off point $f^{-1}(z)$ will also have limited precision, as well as an upper and lower bound. So in practice, a deterministic strategy isn't necessarily worse, and if A knows exactly what we're going to do, then A can (by expending more computational effort than we will) still reduce our probability of success to exactly 50%.
So if I were playing this game in real life, then not only would I estimate (psychologically) A's median number but also A's precision and bounds. Then I would choose a random cut-off point to that precision, within those bounds [1], from a distribution with that median. If A knows what I'm doing, then they can outwit me, but that only means reducing my probability of success back down to 50%. And if A is some random person off the street, then I bet that I can do pretty well.
[1] Actually, I should go a little beyond my estimate in both precision and bounds, going farther as my estimate is less confident.