In any good rating system, if two players with the same rating played a large number of games, one would expect each to win half of the games that were not a draw. As the difference in their ratings increases, the probability that the higher-rated player will win increases. In the U.S. system, the difference in ratings at which the better player will win 90.9% of the time is arbitrarily set at 400. So a player with a rating of 1100 will win about 91% of his games against a player rated 700, and a player with a rating of 2000 will win about 91% of her games against a player rated 1600.
For any particular match, it should be possible to calculate, from the difference in the players' ratings, the probability that one of the players will win. Taking “We” to be the “win expectancy” and “ΔR” the difference in the players' ratings,
We (underdog) = 1 / (1 + 10 ^ (ΔR / 400))
[The formula on the original web page is incorrectly formatted; the one above is correct. ^ means raise-to-the-power-of.]
For example, using this formula, if two players differ by, say, 90 rating points, the probability of a win for the higher-rated player is 0.627, and for the lower-rated player, 0.373. If the results of a series of games bear out this expectation, the players' ratings are “correct” and shouldn't change. Players' ratings change only when the results of a match are not what the difference in their ratings led one to expect, and the size of the change is based on how far off the expectation was.
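The formula above is easy to check numerically. This is a minimal sketch (the function name `win_expectancy_underdog` is my own, not from the source) that reproduces the 0.373 / 0.627 split for a 90-point gap and the 90.9% figure for a 400-point gap:

```python
def win_expectancy_underdog(delta_r: float) -> float:
    """Win expectancy of the lower-rated player, given the rating gap delta_r.

    We(underdog) = 1 / (1 + 10 ^ (delta_r / 400))
    """
    return 1.0 / (1.0 + 10 ** (delta_r / 400.0))

# 90-point gap: underdog wins ~0.373, favourite ~0.627
p_under = win_expectancy_underdog(90)
p_fav = 1.0 - p_under
print(round(p_under, 3), round(p_fav, 3))  # 0.373 0.627

# 400-point gap: favourite wins ~90.9% of decisive games
print(round(1.0 - win_expectancy_underdog(400), 3))  # 0.909
```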
So, according to the US Chess formula, the 63% point is a difference of 90 points.
In backgammon, 65% is the difference between a top player and an average player. I believe the BKR formula is based on the one referred to above, so we could expect the entire ratings spread to be maybe 100 or so points each side of average!
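That "100 or so points" estimate can be checked by inverting the win-expectancy formula: for a favourite with win probability p, the implied rating gap is 400 · log10(p / (1 − p)). A quick sketch (assuming the same 400-point scale as above):

```python
import math

def rating_gap_for(p_favourite: float) -> float:
    """Rating gap implied by a favourite's win probability, inverting
    We(favourite) = 1 / (1 + 10 ^ (-gap / 400))."""
    return 400.0 * math.log10(p_favourite / (1.0 - p_favourite))

# A 65% favourite corresponds to a gap of roughly 108 points,
# consistent with a spread of ~100 points each side of average.
print(round(rating_gap_for(0.65)))  # 108
```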
So, okay, you're right - a difference of zero is exaggerated but with such a small spread and a volatility of up to 10% of that per match? ... they might as well be the same, lol.