# Who's the best doubles player?

Here's a simple problem with kind of a surprising answer. Say you have a group of people playing some game that's played in pairs (like doubles tennis). How many games does it take to determine who the best player is?

We'll start by making some simplifying assumptions:

1. Players' skills are purely linear and denoted by integers >= 0.
2. The outcome of any game is deterministic: the team with the higher total skill wins. This implies that the game isn't positional the way that, say, bridge or foosball is.

I generally like to approach problems like this by working the simplest possible version, which in this case is four players. We'll call them A, B, C, and D. Four players allow us to play three distinct games:

1. A,B vs. C,D
2. A,C vs. B,D
3. A,D vs. B,C

Say that A,B wins game 1, A,C wins game 2, and B,C wins game 3. This gives us three equations:

1. A + B > C + D
2. A + C > B + D
3. B + C > A + D

Putting equations (1) and (2) together, we can determine that A>D (because A can beat D with either B or C on his side). Similarly, if we put equations (1) and (3) together, we can determine that B>D. Finally, if we put equations (2) and (3) together, we can determine that C>D. Unfortunately, this doesn't tell us whether A, B, or C is the best.
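These combinations can be checked mechanically. Here's a minimal sketch (my own illustration, not from the original post): each game result is encoded as a coefficient vector over (A, B, C, D), and summing two vectors sums the corresponding inequalities.

```python
# Each result "X,Y beat Z,W" is the inequality X + Y - Z - W > 0,
# encoded as a coefficient vector over (A, B, C, D).
g1 = (1, 1, -1, -1)    # A + B > C + D
g2 = (1, -1, 1, -1)    # A + C > B + D
g3 = (-1, 1, 1, -1)    # B + C > A + D

def combine(u, v):
    """Sum two inequalities term by term (both sides are '> 0')."""
    return tuple(x + y for x, y in zip(u, v))

print(combine(g1, g2))  # (2, 0, 0, -2), i.e. 2A > 2D, so A > D
print(combine(g1, g3))  # (0, 2, 0, -2), so B > D
print(combine(g2, g3))  # (0, 0, 2, -2), so C > D
```

Every pairwise sum eliminates two of the four players, which is why each combination yields a single comparison.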

It may help at this point to try a small numerical example. The results above are consistent with the following skill values:

| Player | Skill |
|--------|-------|
| A      | 10    |
| B      | 9     |
| C      | 6     |
| D      | 1     |

These predict the outcomes we expect:

1. A(10) + B(9) > C(6) + D(1)
2. A(10) + C(6) > B(9) + D(1)
3. B(9) + C(6) > A(10) + D(1)

However, they're also consistent with swapping A and B's skills:

| Player | Skill |
|--------|-------|
| A      | 9     |
| B      | 10    |
| C      | 6     |
| D      | 1     |

These predict exactly the same outcomes:

1. A(9) + B(10) > C(6) + D(1)
2. A(9) + C(6) > B(10) + D(1)
3. B(10) + C(6) > A(9) + D(1)

Finally, if we swap A and C, we get:

1. A(6) + B(9) > C(10) + D(1)
2. A(6) + C(10) > B(9) + D(1)
3. B(9) + C(10) > A(6) + D(1)

Again, all three predicted outcomes are unchanged.
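A brute-force check makes the ambiguity concrete. The sketch below (my addition, assuming skills range over 0 to 10) enumerates every small-integer assignment consistent with the three observed outcomes and records which players end up with the top skill.

```python
from itertools import product

best_players = set()
for a, b, c, d in product(range(11), repeat=4):
    # Keep only assignments consistent with the three observed results:
    # A,B win game 1; A,C win game 2; B,C win game 3.
    if a + b > c + d and a + c > b + d and b + c > a + d:
        skills = {'A': a, 'B': b, 'C': c, 'D': d}
        top = max(skills.values())
        best_players |= {p for p, s in skills.items() if s == top}

print(sorted(best_players))  # ['A', 'B', 'C'] -- but never D
```

A, B, and C each come out on top in some consistent assignment, so the three games cannot identify the best player; D never does, matching the derivation above.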

Notice, however, that we were able to determine the worst player: it's D, because he causes whoever he's paired with to consistently lose. This suggests that there must be some set of circumstances in which we can determine the best player, and indeed there is. If we reverse the outcome of game 3 so that A and D win, we get the following set of equations:

1. A + B > C + D
2. A + C > B + D
3. A + D > B + C

As before, we can combine these equations pairwise to determine that A>D, A>C, and A>B. So now we can determine that A is the best player, but we can't distinguish B, C, and D. In a four-person group, you can always tell who either the best player or the worst player is, but not both.
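Running the same brute-force check with game 3 reversed (again a sketch, with skills assumed to range over 0 to 10) confirms that A tops every consistent assignment:

```python
from itertools import product

best_players = set()
for a, b, c, d in product(range(11), repeat=4):
    # Consistency with the reversed outcome set: A,B win game 1,
    # A,C win game 2, and now A,D win game 3.
    if a + b > c + d and a + c > b + d and a + d > b + c:
        skills = {'A': a, 'B': b, 'C': c, 'D': d}
        top = max(skills.values())
        best_players |= {p for p, s in skills.items() if s == top}

print(best_players)  # {'A'}: A is the unique best in every case
```

This is what the pairwise combinations predict: since A>B, A>C, and A>D all hold strictly, A must be the unique maximum in any consistent assignment.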

The intuition you want for the general case is about resolution. If the difference between the best player and the second-best player is smaller than the difference between some other disjoint pair of players, there's no way to distinguish them. And no matter how many players you have, there will always be some pair of players who are the closest, and no number of matches will tell them apart.

Acknowledgements: I originally worked this problem with Lisa Dusseault.

## 2 Comments

As an example of how awful this kind of problem can get in the real world, take a gander at:

But the matchups among four do tell you what the closest pair is, so you can reduce a field of n sets of four players to n pairs of the possibly most closely matched players, then regroup those for a smaller set, and repeat until you have the single pair of most closely matched players in the entire tournament. Then you can differentiate all other players using them as a benchmark.

This is an awful technique in practice, because the most closely matched pair is almost always a lame duck.
