Board for everybody who is interested in BrainKing itself, its structure, features and future.
If you experience connection or speed problems with BrainKing, please visit Host Tracker and check "BrainKing.com" accessibility from various sites around the world. It may answer whether an issue is caused by BrainKing itself or by your local network (or ISP).
You are not authorized to post messages on this board. Posting on this board requires a minimum membership level of Brain Caballo (Brain Knight).
Eriisa: It is kind of funny. They played against higher rated players to gain their high ratings, and then to see some turn around and purposely not play low rated players just to "protect" their high ratings - it seems kind of silly, doesn't it?
BIG BAD WOLF: I think before you speak you had better not lump all people into that statement. I played "whoever"; it didn't matter if I played a high rated player or a low rated player. My rating got high because of the number of games I won - nothing to do with who I played. In the beginning, most of the people I played were lower rated players. So don't lump everyone into your little statement there.
For me it doesn't change much anyway; I prefer to play in the higher-only tourney bracket, because the games are more challenging. I won't "purposely not play low rated players" because I'm not too bothered if there are lower rated players in the tourney. And as for the "protection" of my higher rating.. 'tis a load of cobblers LOL. I've played people of high or low ratings since I joined the site, and it has still remained relatively high.
BIG BAD WOLF: It seems just as silly to get a ridiculously high rating in a particular game, and then quit playing that game to "protect" your high rating. I guess if you can view it that way, the other scenario can be viewed the same way.
Modified by playBunny (27 September 2005, 02:27:13)
BIG BAD WOLF: It's worse than that my friend. As a top 5 player it is against my interests to play outside the top 20 if I want to protect my rating. A system that makes that kind of thing even thinkable is not a good one.
In a fair system I'd be happy to play anybody, literally. With the widely used ELO Bg formula, if I were playing a single match against a beginner I'd earn 1.25 points for a win or lose 2.75 points for a loss. My share of the 4 available points would be about 30%. But I'd tend to win about 70% of the time, so it would balance out. We'd both stay at our ratings, neither gaining nor losing over the long term.
That's what the formula is supposed to do - maintain the status quo when players are playing true to their rating.
Against an average player I'd earn 1.5 or lose 2.5 but I'd win less often - about 60% - which again makes it balance out.
A fair system doesn't penalise higher rated players when they lose against lower rated ones - the wins make up for it. Under such a system the best protection for your rating is to study the game and play it well.
But, back to BrainKing:
With this new formula I'd win 1 BKR point and lose 15 against the beginner. I'd still only win 70% of matches so ..
After 16 matches I'd have gained 11 x 1 = 11 points.
Yet I'd have lost 5 x 15 = 75 points.
I'm down 64 points - penalised for only having won what is reasonable.
In the proper system it would be win 11 x 1.25 = 13.75 and lose 5 x 2.75 = 13.75. Balance.
Against the average player I'd win 2 BKR points or lose 14. I'd still win 60% so ..
After 16 matches I'd have gained 10 x 2 = 20 points
But would have lost 6 x 14 = 84 points.
Again down 64 points.
In the proper system it would be win 10 x 1.5 = 15 and lose 6 x 2.5 = 15. Balance.
Against a top 20 player I'd win 6 BKR points or lose 9. This time I'd only be expected to win 55% of the matches.
After 16 matches I'd have gained 9 x 6 = 54 points
But I'd lose 7 x 9 = 63 points.
So even against someone closely matched I'd be losing 9 points just for winning only as many as expected.
In the proper system it would be win 9 x 1.75 = 15.75 and lose 7 x 2.25 = 15.75. Balance.
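For anyone who wants to check the arithmetic, here's a minimal Python sketch that simply replays the numbers above. The per-match point values are the ones I've quoted, not an official BrainKing or ELO Bg formula, and the win counts are the rounded 70%/60%/55% of 16 matches.

```python
# Replay of the worked examples above; point values and win counts as quoted.
scenarios = [
    # (label, wins, losses, BKR points for a win/loss, ELO Bg points for a win/loss)
    ("vs beginner", 11, 5, 1, 15, 1.25, 2.75),
    ("vs average",  10, 6, 2, 14, 1.50, 2.50),
    ("vs top 20",    9, 7, 6,  9, 1.75, 2.25),
]

for label, wins, losses, bkr_win, bkr_lose, elo_win, elo_lose in scenarios:
    bkr_net = wins * bkr_win - losses * bkr_lose      # net change under the new BKR values
    elo_net = wins * elo_win - losses * elo_lose      # net change under the fair formula
    print(f"{label}: BKR net {bkr_net:+d}, ELO Bg net {elo_net:+.2f}")

# Prints: "vs beginner" -64 / +0.00, "vs average" -64 / +0.00, "vs top 20" -9 / +0.00
```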
Modified by playBunny (27 September 2005, 01:47:29)
This change is fair in all the skill-only games, but in anything involving luck this new system will always pull the higher rated of two players down regardless of who they play. This will cause a general flattening of the ratings and make them less meaningful.
As shown in the examples below, the formula won't balance the ratings at a point where they reflect the respective skill levels but will continue to penalise the higher rated player until both ratings are equal!
This is the case for everybody, whatever their rating and whatever their skill level. In any set of matches against the same opponent, whichever player is the higher rated of the pair will always go down more than they can possibly earn - because luck will not allow them to earn what the formula says they should be capable of.
Subject: I had this to say on the Backgammon board, but it applies to other games with luck in them too.
Modified by Walter Montego (27 September 2005, 02:17:31)
It used to be that when I played someone a game of Backgammon and we were within a few hundred points of each other in rating, we'd be playing for 8 rating points. Now there's this sliding scale and I find it completely unfair to the higher rated player. I will never be able to play a higher rated player who cares about his rating on this site because of how disadvantageous the odds are now. Someone who's 300 points above me risks 14 rating points to my 2! This is nowhere near the odds of my actual chance of winning. It may not be 1 to 1, but it can't be 7 to 1.
Can we please have a ratings system for Backgammon that reflects the odds of winning and keeps in mind that there's luck involved? This disparity will further segregate the Backgammon playing community or will encourage people not to play rated games at all. I could see having a big difference in the points awarded if we were playing a match to 10 wins or game points, but to have it like this for a single game is ridiculous.
playBunny: Your examples do not take into account the ratings adjustments that would occur after each match -- presumably the point loss would be smaller each time your rating decreased (and the opponent's rating increased). However, I'm not convinced that your conclusion isn't right; we will have to see what comes out of the process. Since Fencer will always have the database of past results, it will always be possible to introduce a different formula if this one proves unworkable.
I always thought the ELO formula was unnecessarily complex, though complexity counts for little in a computerized world. What about a formula that awarded the winner a number of points equal to C x (r/R), where C is a constant (such as 4), r is the loser's rating, and R is the winner's rating? The loser would subtract the same number of points. There could be a minimum adjustment (such as 1) and a maximum (such as 10).
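Here is a minimal sketch of how that proposal would read in code. The constant, the ratio and the clamps are just the values suggested above; this is an illustration of the idea, not an existing BrainKing formula.

```python
def proposed_adjustment(winner_rating: float, loser_rating: float,
                        c: float = 4.0, minimum: float = 1.0,
                        maximum: float = 10.0) -> float:
    """Points the winner would gain (and the loser drop) under the C x (r/R) idea."""
    raw = c * (loser_rating / winner_rating)
    return max(minimum, min(maximum, raw))

print(proposed_adjustment(2200, 1800))   # higher-rated player wins: ~3.3 points
print(proposed_adjustment(1800, 2200))   # lower-rated player wins:  ~4.9 points
```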
alanback: Yes, that's correct. The examples should have said something like "Taking 16 matches" rather than "After 16 matches".
If I had a BKR simulator I could do the numbers properly but .... yes again, the conclusion is correct because the bias exists no matter what the two players' ratings are. The point being that the higher rated player cannot maintain a level against the opponent unless they win highly unrealistic numbers of games - way beyond chance.
The ELO Bg formula, once understood, is actually very elegant. (Though, to a non-mathematician like myself, that elegance has to be studied to get it into the brain, lol.) One of the key points is that it maintains the rating difference between two players who are playing consistently at their respective skill levels. The idea is that a player at 2000 is going to win, for example, 56% against an 1800er; so is the 1800er against a 1600er; and so too the 1600er against a 1400er. It's the difference of 200 that matters, not the ratings themselves.
The formula is a feedback loop that awards points according to this rating difference, such that over time the resulting rating difference reflects the actual performance difference. The winner and loser both adjust by the same amount, but the amount is greater when it's the lower rated player who wins. This keeps the players at the same difference when they play consistently, yet makes them converge when the lower rated one plays consistently beyond their rated ability - but only to a given point: the point where their win rate against the other is what the new rating difference predicts.
As an example, say the 2000er were to play only the 1600er in a lot of matches but the 1600er was winning 44% (ie. something expected of the 1800er). The two ratings would converge until they were 200 points apart (1900 and 1700) and then stay that way - the rating difference now accurately reflecting the performance difference.
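I don't have the exact constants to hand, but the following Python sketch assumes the FIBS-style single-game expectation 1 / (1 + 10^(D*sqrt(n)/2000)) and a 4-point stake, which roughly matches the 1.25/2.75 and 56%/44% figures above, and it shows the convergence I'm describing:

```python
import math

def favourite_win_chance(gap: float, match_length: int = 1) -> float:
    """Assumed FIBS-style chance that the higher-rated player wins a match."""
    return 1.0 / (1.0 + 10.0 ** (-gap * math.sqrt(match_length) / 2000.0))

def drift(high: float, low: float, true_win_rate: float, games: int = 20000):
    """Expected-value feedback loop: the higher-rated player actually wins
    true_win_rate of the single-point matches between the two."""
    for _ in range(games):
        p = favourite_win_chance(high - low)
        gain, loss = 4.0 * (1.0 - p), 4.0 * p      # winner's gain / loser's drop
        delta = true_win_rate * gain - (1.0 - true_win_rate) * loss
        high += delta
        low -= delta
    return high, low

high, low = drift(2000.0, 1600.0, true_win_rate=0.56)
print(round(high), round(low))   # about 1905 and 1695: a gap of roughly 200 points
```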
I'm not enough of a maths-head to picture how your proposal would work [I'd have to write a program to show me how it works - or you can. ] but I don't think it would create the negative feedback effect. It also wouldn't have the same comparability (eg. difference of 200 = 56%:44% wins), though that may or may not be a disadvantage.
Kata Liana: Sorry - I did not mean to "lump" you in with the people who have said they will not play players that are rated a lot lower than they are - I guess I misunderstood when you wrote "I will only play high rated opponents now."
And yes, I agree that people who get a high rating and then stop playing to "protect" their rating are pretty bad too. Fencer has already taken steps to clear some of those players (some of whom have left the site) from the rating list.
When I click "My Profile", I now see a link for "Order list" next to edit and change password.. What is it?
When I click it, it says no orders listed..
Well, as you might know, BrainKing is and always will be mostly focused on Chess, Checkers and similar non-dice games. I don't say I will never improve the rating formula for Backgammon and other dice games but the priority of this task is lower than average, at the moment.
Well, the new ratings are out and they are about what I expected them to be. With the ratings no longer inflated, the point differentials between winning and losing are also not as great as they were before. I think we can live with this.
Luck is definitely a factor in the backgammon ratings, but it is not *the* factor. As many have said, the real factor is that with the same skill difference as in chess, the better player's win percentage is a lot lower. But this has to do not only with luck but also with the length of the game. For example, if the game of go were played here with the same rating system, you would soon see people over 3000, because in go you almost never win against a stronger opponent (while it still happens in chess).
The solutions could be:
1) Implement and play only "Pro backgammon", matches to e.g. 5 points with a doubling cube, which is a much better game anyway. But implementing that would probably be a lot of work for Fencer.
2) Tuning the Elo formula. In the Elo formula there is a constant of 400, which means that it requires a rating difference of 400 points to have a winning expectation of 10/11 = 90.9%. Changing the 400 to e.g. 600 would dilate the rating scale so that you are 600 points higher than your opponent instead of 400 when you win 90.9% of the games (see the sketch after this list). But this is not as great a solution as it looks, because it requires, for each game, an estimate of the relation "skill difference -> winning expectation", which can be done only in a somewhat arbitrary way as the "skill difference" is a subjective value.
3) Accept that because of the nature of one-game backgammon the rating scale will always be compressed, and that the rating differences must be taken as meaning a bigger skill difference than you would expect!
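To make item 2) concrete, here is a tiny sketch of what that constant does; the 400 and 600 are just the values discussed above.

```python
def underdog_win_expectation(rating_gap: float, scale: float) -> float:
    """Standard Elo winning expectation for the lower-rated player."""
    return 1.0 / (1.0 + 10.0 ** (rating_gap / scale))

for scale in (400, 600):
    # The gap needed to make the favourite a 10/11 (90.9%) favourite equals the scale itself.
    print(scale, round(1.0 - underdog_win_expectation(scale, scale), 3))   # -> 0.909 both times
```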
THE HIT MAN: Hear, hear. If I really wanted to know all this argy bargy about blinking rankings and BKR I would have asked a physcisisistissisitttt, ohhhhhhh u know what I mean LMAO
alanback:
Or that they know how to choose their opponents.
Or that the site is attracting better players
Or that you have a grudge that you're number 6 and not in top 5 ;)
wellywales: and for the record darling, as u are well aware, the recent comment made by myself about the BKR was not directed at you at all, but u knew that anyway. Shame a certain somebody else didn't though. If she had scrolled right down the message board she would have seen exactly where it all started.
alanback: Yes, for example YOU - why did alanback's BKR in particular drop so much while others' didn't........?
Fencer, can you give an explanation for this? "Curious as always......."
Pythagoras: The algorithm used by the system previously to compute BKR had a bug -- if the players' ratings were less than 400 points apart, the winner's BKR adjustment was always +8 and the loser's was -8, regardless of whether the favourite or the underdog won. The adjustments should be smaller if the higher-rated player wins, and larger if the lower-rated player wins.
If the ratings difference was larger than 400 points, the system formerly assigned negligible adjustments if the higher rated player won, and relatively large adjustments if the lower rated player won (I'm not sure this has changed!).
Now, the ratings adjustment is always larger if the lower-rated player wins than if the higher-rated player wins.
Formerly, if a high-rated player was careful to play only opponents whose ratings were within 400 points of his own, he was pretty much guaranteed that his rating would continue to rise as long as he won more than half his games. Now, it is very difficult to even maintain a high rating.
Modified by playBunny (27 September 2005, 21:26:27)
Pythagoras: The bug was that the points awarded for matches weren't variable according to the rating difference. Previously, playing someone within 400 points meant a gain or a loss for both players of 8 points and only 8 points.
The new system is that the 16 points are now apportioned according to the rating difference.
The first system favoured the higher-rated player (at whatever level, eg it would favour someone at 1600 playing someone at 1400).
The new system is correct for skill based games but for Backgammon it heavily favours the lower rated of the pair.
Both systems are flawed for Backgammon.
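To make the comparison concrete: the splits quoted earlier (1/15, 2/14 and roughly 6/9) look like a plain Elo update with K = 16 and the usual 400-point scale, so that is what this sketch assumes for the new system. It's a guess on my part, not Fencer's published formula.

```python
def old_adjustment(winner: float, loser: float) -> float:
    """Pre-fix rule as described above: a flat 8 points whenever the two
    ratings are within 400 of each other (the over-400 case isn't modelled)."""
    if abs(winner - loser) >= 400:
        raise NotImplementedError("over-400 behaviour not modelled here")
    return 8.0

def new_adjustment(winner: float, loser: float) -> float:
    """Assumed post-fix rule: 16 points apportioned by an Elo expectation."""
    expected = 1.0 / (1.0 + 10.0 ** ((loser - winner) / 400.0))
    return 16.0 * (1.0 - expected)

print(old_adjustment(2000, 1800))   # old: always 8.0 inside the 400-point band
print(new_adjustment(2200, 1700))   # new, favourite wins: ~0.9 points
print(new_adjustment(1700, 2200))   # new, underdog wins:  ~15.1 points
```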
For backgammon:
At FIBS the average rating is 1500 and the top is 2000+.
At Vog the average rating is 1600 and the top is about 2100.
Thus the top half of the playing pool is spread out over 500 points.
Here the average for backgammon is 2000! And the top players are at about 2200.
This squashes the top half of players into a mere 200 points. A ridiculously small range.
In Hypergammon the average is 1930 and the top 20 starts at 2100. A range of 170 points.
In Nackgammon the average is 1675 and #20 is at 1875, giving a range of 200.
Chess: Average 1675 to #20 at 2207. A range of 530.
It's a Chess formula. It works for Chess. It doesn't work for Backgammon.
alanback: The preponderance of provisionals in the top 20 is a result of that squashing. The startup formula awards opponent's rating + 400 for a win. A new player need only win against a few average players and their rating will be 200 points higher than the top established players'.
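Taking that startup description literally (my reading of it, not a documented formula), the effect is easy to reproduce:

```python
def provisional_rating(opponents_beaten: list[float]) -> float:
    """Average of (opponent's rating + 400) over an opening run of wins,
    which is how the startup rule is described above."""
    return sum(r + 400.0 for r in opponents_beaten) / len(opponents_beaten)

# Beat a handful of 2000-rated "average" backgammon players...
print(provisional_rating([2000.0] * 5))   # -> 2400.0, about 200 above the 2200 top players
```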
Fencer: A crazily high average and a squashed range? Provisionals who shoot way beyond the top just by beating average players? It's very flawed. I wish you didn't hold the Backgammon community in such contempt.
Maybe you and others don't think you do but it sure seems like it.
1] A serious (ie. it has caused much discussion and argument) bug which has been known about for over two years! No action.
2] At least a small addition to the rules to alleviate the upset caused by the bug? Two years and no action..
3] Pro backgammon. No progress. No information. No visible action..
4] A proper rating system. No intention.
5] Your priority for these is "lower than average". Well, considering 1] and 2] it's way below average.
That's what I mean by contempt. And I'm not alone in wishing that it wasn't that way.
playBunny: Yep, that sums it up. You seem to have explained to me why I don't like this rating system when it's used for Backgammon. I didn't know the particulars of it, just that I don't like it and that it seems unfair to the higher rated players. I guess we'll just have to see how it goes for a few weeks. Yours and alanback's predictions about the lowering of the top people's ratings have come to pass. I too was lowered a little bit in Backgammon, but lost nearly 300 points off my Dark Chess rating! Dark Chess has a little bit of luck in it, but not the amount of Backgammon. I think this new or fixed rating system will be OK for Dark Chess, but it stinks for Backgammon. Fencer has Chess at heart and will get around to Backgammon when he has taken care of his other pressing affairs. Hopefully it is higher on his to-do list than the laundry. :)
Modified by Chicago Bulls (27 September 2005, 22:12:06)
Yep, no one should complain to Fencer about delays in taking action. He has to do around 1.500 things per day. And that's only the Brainking-related ones.......
Yeah, it's a bad habit of mine, putting a dot "." in integer numbers to make them easier to read....... The correct rule is to put a space or nothing at all.......
First, resigning on the first move doesn't count as a loss for rating purposes.
Second, besides the number of games that have been played, another factor is the ratings of those who were played... especially for the first few games. This has worked in my favour in Anti Line4. :)
alanback: It could be 16 pts because you have played so many games; as I haven't played as many games in, say, Nack as in Backgammon, the points difference is in excess of 30.