MANTRA is used by the tournament hub, a meeting place for players of competitive games.
A good algorithm for ranking players of a competitive game on the Internet must satisfy several requirements.
The MANTRA algorithm is a relatively simple system that incorporates these basic considerations. Some of the requirements conflict with each other, so there is no perfect solution. First I will describe how this one works. Then I'll discuss how it satisfies the stated requirements. If you have a suggestion to improve matters, don't be shy.
The ranking algorithm enters the scores from each game into a big matrix which contains the cumulative score of every player against every other player. This data is used to compute the player ranks which minimize the total ranking "conflict".
That means if A always beats B and B always beats C, A will have a higher rank than B and B will have a higher rank than C.
In the real world, however, the situation may be muddier. For example, A may score high against B but low against C, while B scores fairly well against C. In situations such as this, there is no way to rank the people in a straight line without conflicting with the information from the score matrix. The algorithm therefore does the best it can by finding the ranks which minimize the total degree of conflict. Where A, B, and C land therefore depends on the numbers, and possibly on games they have played against other people.
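The score-matrix bookkeeping described above can be sketched in a few lines. The scoring scheme here is an assumption (one point per game won); it is chosen because it makes each diagonal entry a player's overall win total, which matches the example matrices later in the text.

```python
from collections import defaultdict

# Cumulative score matrix: score[x][y] is x's total score against y.
# Assumed scheme: one point per game won, so the winner's row gains a
# point in the loser's column, and the diagonal tallies overall wins.
score = defaultdict(lambda: defaultdict(int))

def record_win(winner, loser):
    score[winner][loser] += 1   # cumulative score of winner against loser
    score[winner][winner] += 1  # diagonal: winner's overall total

# Hypothetical games: A beats B twice.
record_win("A", "B")
record_win("A", "B")
print(score["A"]["B"], score["A"]["A"], score["B"]["A"])
```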
The following example helps illustrate the whole process. It is designed to rank A over B over C, with a little statistical "noise" thrown in to make it more realistic.
Suppose we have the results from a set of games among A, B, and C that produce the following score matrix:
  | A | B | C
A | 5 | 2 | 3
B | 0 | 4 | 4
C | 1 | 2 | 3
The ranking conflict for the six possible orderings can be computed from the matrix.
The conflict between two players is computed by a formula involving the players' ranks and their cumulative scores against each other. Here r[A] is the rank of A, s[A][B] is the cumulative score of A against B (which is 2 in this example), and so on.
Using this formula, the total ranking conflict can be computed for each of the six possible orderings.
In addition to these, the algorithm also considers orderings in which two or more of the people have the same rank. In this particular example, however, that doesn't change the final result. Clearly, ABC has the lowest ranking conflict. The algorithm got the right answer. Amazing!
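The enumeration can be sketched in code. The exact conflict formula is not reproduced here, so the sketch assumes a simple form consistent with the example's outcome: every pair in which x is ranked above y contributes s[y][x], the score evidence that contradicts placing x above y. (Orderings with tied ranks, which the algorithm also considers, are omitted from this sketch.)

```python
from itertools import permutations

# Score matrix from the example: s[x][y] is x's cumulative score
# against y. The diagonal entries are unused by this computation.
s = {
    "A": {"A": 5, "B": 2, "C": 3},
    "B": {"A": 0, "B": 4, "C": 4},
    "C": {"A": 1, "B": 2, "C": 3},
}

def conflict(order):
    # Assumed conflict form: each pair with x ranked above y
    # contributes s[y][x], the evidence against that placement.
    return sum(s[y][x]
               for i, x in enumerate(order)
               for y in order[i + 1:])

for order in permutations("ABC"):
    print("".join(order), conflict(order))
```

Under this assumption, the ordering ABC does indeed come out with the lowest total conflict of the six.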
You might be thinking that the diagonal elements in the score matrix (which we never used) could have given the answer much more easily. After all, they appear to descend in order of rank. However, such an algorithm would essentially be crediting players for the number of games they have won.
Enter player D, the dummy. Consider the following score matrix:
  | A | B | C  | D
A | 5 | 2 | 3  | 0
B | 0 | 4 | 4  | 0
C | 1 | 2 | 45 | 42
D | 0 | 0 | 0  | 0
It is obvious what is happening here. D is C's dummy, or else D and C really like to play each other, despite the fact that C is a much better player. In either case, the MANTRA algorithm will not be influenced in its choice of rankings between A, B, and C by the introduction of player D. Try it and you will see. D does not contribute any terms to the total conflict except in relation to C. The algorithm passes the dummy test.
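The dummy test can be checked with the same assumed conflict form as before (each pair with x ranked above y contributing s[y][x]): adding D leaves the relative order of A, B, and C untouched.

```python
from itertools import permutations

# Score matrix including dummy player D. Only off-diagonal entries are
# listed, since the diagonal plays no role in the conflict computation.
s = {
    "A": {"B": 2, "C": 3, "D": 0},
    "B": {"A": 0, "C": 4, "D": 0},
    "C": {"A": 1, "B": 2, "D": 42},
    "D": {"A": 0, "B": 0, "C": 0},
}

def conflict(order):
    return sum(s[y][x]
               for i, x in enumerate(order)
               for y in order[i + 1:])

best = min(permutations("ABCD"), key=conflict)
# D's terms involve only D's own pairings, so A, B, and C keep the
# relative order they had in the three-player example.
print("".join(p for p in best if p != "D"))
```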
Now suppose D challenges A. You might think player A only stands to lose by the encounter and should therefore play chicken to preserve rank. While that is partially true, A might consider that there are some advantages to having a well established rank. The more people A plays against, the broader and more stable is the support for A's rank. That puts more terms in the conflict sum, making it less likely for some event outside of A's control to cause a loss of rank.
If you have read this far, you might appreciate the addition of a rank uncertainty to the system. That would tell the difference between an upstart and someone with a well established rank. However, I'm feeling like a bit of an upstart myself these days. Let's give the upstarts a chance...