Then you agree with the concept, and just are saying that my "p.a.s.s." is equivalent to what you are calling a "rating".
I am just narrowing the scope of "rating" to two player games, and calling the rating by a different name so as not to confuse the two.
Since Fencer has decided on a system, there is no need to continue this.
To answer his question, I thought we were still in search of a system. I was asked by CzuchCheckers to post it in his "Ponds Plus" DB, and the suggestion was to also post it in here for discussion.
As it is no longer needed, I will no longer post on the topic.
A "weighted average of your pond performance" is not a rating. I came up with "p.a.s.s." as the acronym for it.
You can see that the range of values for your "p.a.s.s." designation will always run from 1.0, the best score possible, up to whatever the limit of the largest pond is.
This is completely different from the range of a person's "rating", which, by default, I think can never drop below some arbitrary floor such as 600.
What does it mean when you are rated 2200? In the United States Chess Federation, this number is higher than 98% of the rated population, so it is the minimum threshold for the Master class.
It also means that when you are pitted against someone 400 points or more below you, your chances of winning are, for practical purposes, overwhelming. So a 2200 player should be able to beat a 1799 player almost all the time.
This is an ideal situation, of course.
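For concreteness, here is a minimal Python sketch (my own illustration, not anything the site uses) of the generic logistic Elo expected-score formula; the USCF's actual implementation differs in its details, but it puts a number on the "400 points below you" claim.

```python
# Generic logistic Elo expected-score formula (a sketch; USCF's exact
# implementation differs in detail).

def elo_expected_score(own_rating: float, opponent_rating: float) -> float:
    """Expected score for the first player: win = 1, draw = 0.5, loss = 0."""
    return 1.0 / (1.0 + 10.0 ** ((opponent_rating - own_rating) / 400.0))

print(round(elo_expected_score(2200, 1800), 3))  # 0.909 -- a heavy favorite
```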
But understand the "rating" measures the likelihood of an outcome of one player versus another.
This is the definition of the term "rating".
In a pond game, you are not diametrically opposed to any one player. "Diametric opposition" is the mathematical framework upon which is built the concept of "ratings".
In the domain of computer science and mathematics, "ponds" would be described as a collective, parallel game of minimal elimination.
As such, it does not fit the defined parameters of what has been labeled "ratings".
We need a new working definition to underscore a player's performance in this group environment.
I offered one.
If you read and understand papers on mathematics, you will agree with me on the nomenclature.
It is not possible to assign an "Elo rating" to a game where there are more than two combatants.
There is no need for a "rating" since it is not a concept for a multiple player game.
A rating measures the likelihood of you defeating another opponent, based on their rating.
There is no system to predict the likelihood of how individuals would perform in a collective pool all acting in parallel, but there is a way to evaluate your performance independent of who your peers are.
A "rating" is very inaccurate at first, and corrects itself over time.
The "p.a.s.s." method does the same thing. Over time, your participation in a variety of pond games will affect your score. It does not matter if you are playing with an entire pond of perfect 1.0 players, or with a bunch of people who have repeatedly fallen out on the first run around.
Over time, the result will be the same...the scores will all settle at each player's own level of performance, as surely as "water seeks its own level" in the natural world.
Likewise, say you came in 1st place out of 300 players, but finished 15th out of 16 in another pond. Should you be penalized too much for the poorer performance?
[(1 x 300) + (15 x 16)]/316 = 1.708
Again, it treats the quality of your performances in a meaningful manner.
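As a quick sanity check of that arithmetic, here is the same weighted average in a few lines of Python (the formula itself is spelled out further down the thread):

```python
# (finishing position, pond size) for the example above: 1st of 300, 15th of 16.
results = [(1, 300), (15, 16)]

weighted = sum(pos * size for pos, size in results)   # (1 x 300) + (15 x 16) = 540
total_players = sum(size for _, size in results)      # 316
print(weighted / total_players)                       # 1.7088..., the 1.708 quoted above
```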
A "rating", per se, does not apply to Ponds. Ratings measure the likely outcome of one player against another when the game has completely opposite goals -- in chess, checkmating the opponent.
Ponds is a "free for all" or "every person for themself" game.
So, a meaningful measurement would be how long you can stay in the pond.
An "average ranking" of your Pond performance, weighted by the number of players per pond, provides the most meaningful information.
In this case, the lower the number, the better.
But how can you qualitatively assess your overall performance?
Easy.
For each pond game you are in, multiply your final position by the number of players in that pond. Add up these products, then divide the total by the sum of all players in every pond you played in.
For example, suppose you came in first place in 3 different pond games, each with 16 players. And you came in 14th place in a pond with 200 players.
How does this compare with someone who came in 2nd place in a pond of 50, 5th place in a pond of 75, and 11th place in a pond of 200?
It is not immediately apparent, so do this:
Player A
3 first place finishes:
(1 x 16) + (1 x 16) + (1 x 16) = 48
1 finish in 14th place
(14 x 200) = 2800
Sum of all pond players = 16 + 16 + 16 + 200 = 248
So add 2800 to 48, and divide by 248
Player A = 2848/248 = 11.48
Now Player B would have a performance rating of (2 x 50) + (5 x 75) + (11 x 200) = 2675 divided by (200 + 75 + 50), so...
Player B = 2675/325 = 8.23
Player B would actually have a better performance than Player A, overall.
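Here is the same computation as a short Python sketch, just to verify the Player A / Player B figures (the function name pass_score is my own label, not anything official):

```python
# Weighted-average "p.a.s.s." sketch: sum of (position x pond size) divided
# by the total number of players across all ponds. Lower is better.

def pass_score(results):
    """results: list of (finishing_position, pond_size) pairs."""
    return sum(pos * size for pos, size in results) / sum(size for _, size in results)

player_a = [(1, 16), (1, 16), (1, 16), (14, 200)]   # three 1sts of 16, one 14th of 200
player_b = [(2, 50), (5, 75), (11, 200)]

print(round(pass_score(player_a), 2))  # 11.48
print(round(pass_score(player_b), 2))  # 8.23
```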
Binary sets would just make it easier for Fencer to code. If you want a complete emulation of a multi-player multi-round event, there are parallel ratings systems such as Glicko2. Good luck trying to encode it though.
My thinking is not constrained, I just offered something that would work. I did not see anyone else offering anything.
The problem with a rating system for a game like Ponds is breaking down the game into binary sets for rating. For example, in chess, it is you against one opponent. The result is rated in a straightforward manner, since your goal and your opponent's are "diametric opposites" -- you try to checkmate, and so do they.
In rating a pond game, it gets more complicated. When someone falls in on round 1, who did they lose to?
Will every player staying in have defeated this person? Will the "faller" have lost to the N-1 who remained?
If so, in a pond of 50 players, the first one to fall out gets saddled with 49 losses.
The winner would also have a huge win count: 50 + 49 + 48 + 47... which would be 50 x 51/2 = 1275 at the end!
For rating purposes, I think it makes sense to track two numbers: cumulative players defeated (as shown above) and binary trials.
Cut the sections down into binary sets. In the case of a section of 64 players, perform the ratings as follows:
Players 1-32 get just one win against corresponding players from 33-64. So 1 beats 33, gets rated, 2 beats 34, gets rated... 32 beats 64, gets rated.
Players 1-16 get credit for a win against players 17-32.
Players 1-8 get credit for a win against players 9-16.
Players 1-4 get credit for wins against 5-8.
Players 1-2 get credit for a win against 3-4.
Player 1 gets credit for a win against 2.
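To make the scheme concrete, here is a rough Python sketch of the pairing rule just described. It assumes the section size is a power of two, as in the 64-player example, and the function name is mine:

```python
# Turn final standings in a section into the list of "binary set" results
# that would actually be rated. Each tuple is (winner_position, loser_position).

def binary_set_pairings(section_size):
    """Assumes section_size is a power of two (64 in the example above)."""
    pairings = []
    half = section_size // 2
    while half >= 1:
        for place in range(1, half + 1):
            pairings.append((place, place + half))  # e.g. 1 beats 33, 2 beats 34, ...
        half //= 2
    return pairings

pairs = binary_set_pairings(64)
print(len(pairs))   # 63 rated results in total for the section
print(pairs[:3])    # [(1, 33), (2, 34), (3, 35)]
print(pairs[-1])    # (1, 2) -- the pond winner's last credited win
```

The winner ends up with six rated wins, the runner-up with five wins and one loss, and everyone in the bottom half with a single loss each.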
That way, ratings will "stay close" to what we have come to experience as "normal" for other games on here.
And, pond winners in larger ponds will get more points than in smaller ponds, yet those who exit early won't have their ratings totally deflated.
Czuch, you will have to tell me how Nash's paper was of some assistance to the S.A.L.T. participants, a much more complex series of negotiations with far more dire consequences, yet is of no value in a pond game.
And again, I said I generated the entire spectrum of bids using ranges and multiple worksheets.
Furthermore, I stated that there is a range of bets for every situation that would allow one person to finish ahead, given the 3 stipulations of:
1. non-discovery of the strategy
2. non-cooperation of others
3. non-suicidal bets
This is not the same as saying there is always a way to win. In fact, if you look at item #3, it is clear there is a way to disrupt this strategy every game. If just one player in each remaining round does something cavalier, it completely negates any gains that can be made.
This is not a "cop-out" as so many of you have said, it is something that was identified from my first post.
This is not "my idea", this is based on a paper that has been public for decades.
Anybody can do the same thing as I did.
As for Fencer's "human factor" remark: Nash's paper was PRECISELY about the human factor! You should really read it before you make such remarks that are so easy to dispute!
Redsales: "There's a rumor that he said he'd wait until the last minute to move each time to delay his opponent's victory..."
FACT: A rumor? Give me a break! And since when is using the time allocated for your game against the rules?
Redsales: "...that is the single poorest show of sportsmanship I've witnessed on this site to date."
OK Steve, what do you call this? A lie lending support to a rumor concluding with drama is...what? It's not aggressive? What is it then? And where did I provoke him?
Redsales: "It's so pathetic that I hope it isn't true. The kindergarteners I teach at in Korea have a leg up on that level of maturity."
OK Steve, what is that? Is this not another attack against me?
Sorry pal: either he removes the posts, or maybe I show Fencer how biased you are as a moderator.
So I essentially played a game with a huge handicap, against the #1 player on the site in chess no less, but it's my game, and I played it the way I wanted.
And, if you happen to look at the matrix for this tournament, you will see Reza is only on move 21 against me in the game he is losing to me, whereas I am on move 38 against my opponent.
His Bishop is of the wrong color, something I tried to explain to you on more than one occasion. I don't have to move any pawns, and he can't win them. As pawns come off the board in pairs, he will be left with King + Bishop vs. King which is a draw, as everyone knows.
So, Redsales, get off my back about something that does not concern you at all.
Thad: Easy Thad. There are say 10 people remaining. You message 8 of your choice and say "bet this" which is an otherwise ridiculously high bid. All who remain suffer, but suffer cooperatively, which sends the 1 you did not message into the pond.
If I stay in it, it will tempt cooperation, as I would be easy to identify as the one to try and undermine. If somehow we only showed who drops into the pond at the end of each round, it would probably work even better.
I think to make this a true "double blind" experiment, I should withdraw from the tournament, and ask if any player would like to make the bets that I propose. When that person splashes into the pond, he or she can reveal themselves, and you guys can make fun of the system at that time.
I disagree. You should also look up "The Hangman's Paradox" for a better explanation of why I made this claim. (hint: by making the claim I am influencing the play of others.)
You have the right idea, but Excel can map out all of the bet ranges for you, and you can eliminate all scenarios where you lose, then just look at your survival numbers, then see the pattern.
Given the rules of the game, players want to keep their money for as long as possible, not spend it all in one turn. When a majority goes against the grain in such a fashion, the only factor of relevance is cooperation.
Answer: If there is NO COOPERATION among the other players, and the OBJECTIVE is to last for as long as possible, you can say with 100% certainty a bet of $19,999 will keep you around until the next turn.
Why is this so?
The only way to "lose" would be if everyone bet $20,000 (I am not sure if this would mean they all zero out anyway), which would imply massive cooperation, which would mean criterion #3 for unravelling the strategy has been invoked.
Granted, this is a ridiculous example, but you can go through a RETROGRADE ANALYSIS of survival through N rounds, then determine a bound you need never cross, then work your way back, then come up with a betting plan.
It is easier than you think, just play with it on paper (or in Excel) and you will see what I mean.
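If it helps, here is a toy Python check of that $19,999 claim. It assumes my reading of the rules (everyone starts with $20,000, and the player(s) with the lowest bet splash each round); the pond size and the random bets are made up.

```python
import random

def falls_in(my_bet, other_bets):
    """True if my bet is (tied for) the lowest, i.e. I splash this round."""
    return my_bet <= min(other_bets)

# 49 non-cooperating opponents betting whatever they like:
others = [random.randint(1, 20_000) for _ in range(49)]
print(falls_in(19_999, others))          # almost surely False

# The only losing scenario: everyone else bets the maximum.
print(falls_in(19_999, [20_000] * 49))   # True
```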
Let me not give away the system but still try and answer your question.
Is there a bet I can place on round 1 that will 100% guarantee that I do not fall into the pond? Say $20,000 is the max: what is the most I can bet and still be around for turn 2?
1) Like a WWII pilot, you learn about them after the fact.
2) You can know with 100% certainty that you will not fall into the pond. The only way you DEFINITELY will fall into the pond is if everyone works together against you. Read Nash's paper. It deals with cooperation and competition.
I will disclose the method to Pedro after filtering out the high level math, but there is nothing in the spreadsheet that is not in Nash's paper.
Nash's paper deals with things far more complex than the pond game. It deals with a wide range of applicability such as Nuclear Arms Race negotiations, trying to get a better raise from your boss, deciding which girl to ask to dance in a crowded room, and plea bargaining a federal conviction.
It works for Pond games too, unless, as previously stated:
1. Some other person also derives the strategy.
2. A "kamikaze" ID enters the pond with the sole objective of disrupting your strategy, even though they will lose sooner by doing so.
3. Everyone else messages what they will play, so it is you vs. the pond-dwellers rather than everyone for themself.
Maybe even a group can pool together and cooperate and thereby influence their own self preservation longer than they could if operating alone.
The point is:
This is a mathematics game, which means it can be modeled. It is "small enough" so that every permutation can be plotted.
I don't understand why there is denial over this simple fact.
The winning strategy, by the way, was the subject of the mathematical paper written by John Nash, who was the subject of the movie A Beautiful Mind, with Russell Crowe playing Nash.
Grenv bid 511 but was outbid by 523. There were others who bid 500+ as well, so they show the example of what happens when several people aggressively pursue the bonus.
But there are two things a player can do:
1. Minimize losses
2. Maximize gains
I am pretty sure some of you are well on the path to the right track, but you are using intuition (how else can you explain the "511"?) rather than deduction (a formula that will let you outperform the field).
I joined a pond of 50, so now I know there is a target on my back
You hit the nail on the head, Mr. Formerly UpchuckKing.
But when will "everyone" do this? Will everyone bid the sum of their money on the first run? How about the second run? Will they bid half of their money?
The idea is to force as many people as possible to bid as high as possible, and still be able to bankrupt them.
Like entering the final round of Jeopardy with more than twice as much money as everyone else. It does not matter what they bid; even if they double up and you wager nothing, you can't lose.
Now use retrograde analysis and keep working your way back from a proven win.
You will see an interesting pattern emerge, independent of what other people bet.
And, since I generated all of the betting scenarios, I discarded the ones that lose, along with the ranges of losing bets, and guess what.
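For anyone who wants to see the pattern without Excel, here is a toy version of that enumeration in Python: a single round of a made-up 3-player pond with a $10 cap, again assuming (my reading of the rules) that the lowest bet is the one that splashes.

```python
from itertools import product

CAP = 10  # toy maximum bet; the real game's numbers are much larger

for my_bet in range(1, CAP + 1):
    scenarios = list(product(range(1, CAP + 1), repeat=2))  # every pair of opponent bets
    survived = sum(1 for a, b in scenarios if my_bet >= min(a, b))
    print(my_bet, f"survives {survived} of {len(scenarios)} scenarios")

# The survival count climbs steadily as the bet approaches the cap, which is
# the kind of pattern the spreadsheet makes obvious at full scale.
```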
I do have one question though.
At what point can you see what everyone else has bet? At the end of each round?
Subject: Re: Re: and had the solver determine a bet placement ordering for me.
Stevie: SIMULATION SIMULATION
Hello! I have not played in a pond before, this was just running numbers to show there is a way to:
a) secure the top point position
b) bankrupt "n" people below you in the long run, where "n" = the round number + 1, or splash the only person who can catch you with bonuses.