4003
Post by: Nurglitch
Plenty has been said in this forum about the application of Statistics to Warhammer 40,000, and for good reason. Where dice are concerned, Statistics is the go-to mathematical tool for assessing and managing one's expectations of which way Tzeentch may be blowing that day. However, not all of the elements are random, and often the outcome of a particular action will depend on your opponent's reaction to it.
One interesting application of Game Theory has been instituted in the move from 4th edition to 5th edition. In 4th edition, rational players (meaning, in this context, players who prefer $2 to $1 at the end of a game, all else being equal) realized several things:
(1) Getting the first turn was an advantage
(2) Preserving units for the last turn was an advantage
This can be demonstrated by a toy-game called Hawk-Dove (or the Prisoner's Dilemma if you swing that way...): In Hawk-Dove the players each have two strategies, named Hawk and Dove. The Hawk tactic is to take. The Dove tactic is to give. The potential outcomes can be represented on a 2x2 table with the following results:
1. Hawk vs Hawk: Each player takes a $1 from the other (1/1)
2. Hawk vs Dove: The Hawk player takes $1 from the Dove, and the Dove gives the Hawk $2 (0/3)
3. Dove vs Dove: Each player gives $2 to the other (2/2)
If it is common knowledge to both players that the game will only be played once, and according to the rules, then it is rational for both players to play Hawk. Though the outcome will be worse for both than if both players played Dove, it will not be as bad for one player if one played Dove and the other played Hawk. In other words, by playing Dove your possible outcomes as a player are either $2 or $0, while by playing Hawk your possible outcomes are $3 or $1.
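For anyone who wants to poke at it, here is a minimal sketch of that payoff table in Python (the dollar figures come from the list above; the function and names are my own):

```python
# One-shot Hawk-Dove payoffs from the list above:
# (my tactic, their tactic) -> my payoff in dollars.
PAYOFF = {
    ("Hawk", "Hawk"): 1,
    ("Hawk", "Dove"): 3,
    ("Dove", "Hawk"): 0,
    ("Dove", "Dove"): 2,
}
TACTICS = ["Hawk", "Dove"]

def dominates(a, b):
    """True if tactic a pays at least as well as b against every
    opposing tactic, and strictly better against at least one."""
    return (all(PAYOFF[(a, o)] >= PAYOFF[(b, o)] for o in TACTICS)
            and any(PAYOFF[(a, o)] > PAYOFF[(b, o)] for o in TACTICS))

print(dominates("Hawk", "Dove"))  # True: $1 > $0 vs Hawk, $3 > $2 vs Dove
print(dominates("Dove", "Hawk"))  # False
```

Hawk pays strictly better than Dove against each opposing tactic, which is the dominance the next paragraphs trade on.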
Suppose we change the game to be a set of Hawk-Dove games; call it the Definitely Iterated Hawk-Dove game. If it is common knowledge to both players that this Definitely Iterated game will be played 10 times, and according to the rules, then it is likewise rational for both players to play Hawk each time the game is repeated, i.e. each turn. The Hawk strategy is thus said to dominate the Dove strategy.
Incidentally, the modification for multiple iterations turns the choice of Hawk or Dove from a strategy, or goal, into a tactic, or means of achieving said goal. The Hawk strategy generalizes from playing Hawk to playing Hawks.
Just as in the 4th edition of Warhammer 40,000, knowing that the game will end on a known turn means that certain strategies will dominate other available strategies.
Likewise consider a game more like Warhammer 40,000, where one player gets to play a turn, and then the other player gets to play a turn. Let's call this game the Sequential Iterated Hawk-Dove. If a player has the first turn, again Hawk dominates Dove, because playing Dove depends on the subsequent player preferring $2 to $3, which is irrational.
In 40k terms this meant that a player with access to a 40k equivalent of the Hawk strategy would always prefer that strategy, because it was the best strategy whether they got the first turn or the second, and whether their opponent took the Hawk strategy or the 40k equivalent of the Dove strategy.
Now there is a version of the Iterated Hawk-Dove game where Dove dominates Hawk. In this version, the turns are indefinitely iterated, so that players don't know which turn is the last turn, and thus cannot work backwards to figure out the dominating strategy on the first turn; they must instead play it from turn to turn. Playing from turn to turn, iteration to iteration, means that players can benefit from the goodwill generated by playing Dove, and retaliate against an opponent who played Hawk last turn. In other words, playing Hawk in one turn means that one risks the other player playing Hawk in the next turn, and doing worse than playing Dove.
To illustrate this, consider an Indefinitely Iterated Hawk-Dove game where two turns have passed with both players playing Dove, meaning they each have $4. If player A decides to play a Hawk tactic in the next turn, in order to take advantage of player B continuing to play Dove, then they will have $7 and $4 respectively. This is good if the game ends, but only lucky because either the likelihood of the game ending was common knowledge and thus both would have chosen Hawk, or it was unknown and player A risked retaliation.
But so what if player B retaliated? Suppose the game ends on turn 5, with both players continuing to defect after turn 3. Player A would win with $9 while player B would lose with $6. Not bad, but player A is rational, and prefers $2 to $1. Had player A continued to co-operate, he would have finished the game with $10. The same goes for player B. That tactical decision would have been irrational.
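To check the arithmetic, here's a quick replay of that scenario (the payoffs come from the table above; the script is just bookkeeping):

```python
# Per-iteration payoffs: (A's tactic, B's tactic) -> (A's $, B's $).
PAYOFF = {("H", "H"): (1, 1), ("H", "D"): (3, 0),
          ("D", "H"): (0, 3), ("D", "D"): (2, 2)}

def totals(strategy_a, strategy_b):
    a = b = 0
    for move_a, move_b in zip(strategy_a, strategy_b):
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        a += pay_a
        b += pay_b
    return a, b

# A defects on turn 3, B retaliates from turn 4, game ends on turn 5:
print(totals("DDHHH", "DDDHH"))  # (9, 6)
# Both co-operate throughout:
print(totals("DDDDD", "DDDDD"))  # (10, 10)
```

Both players finish behind the $10 they would have banked by co-operating to the end.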
So why was it irrational for player A to play Hawk on turn 3, when it won him the game, which would otherwise have been a tie? Firstly, it is taken as an assumption in Hawk-Dove and its variants that player A is rational and prefers the monetary result to the game result, which was established in Hawk-Dove when it was better for players to tie by playing Hawk rather than risk losing. Similarly, trying to play Hawk in the Indefinitely Iterated Hawk-Dove risked losing, because it could well have been player B's strategy to play Hawk on turn 2, meaning the result of the game would have been a loss for player A at $5 and a win for player B at $8.
This symmetry between strategies is important because it means that either player trying to outdo the other by playing Hawk first risks either losing, or doing worse in the game than if they had held off playing Hawk and played Dove instead. Just as in the Hawk-Dove game, the players in the Indefinitely Iterated Hawk-Dove game better serve their rationality by avoiding doing worse than otherwise.
Furthermore there may be a greater game afoot. Imagine players in an Indefinitely Iterated Hawk-Dove tournament, where the risk of doing worse than otherwise could scuttle their chances of winning the tournament, or simply knock them out with what little they managed to squeeze out of their last opponent. Again, whether the goal is to win the iteration, the game, or the tournament, the player does better than otherwise by avoiding loss rather than chasing uncertain gain. And where certainty is available, as in the Hawk-Dove game, the strategy is clear.
I'll see if I can look it up, but DashofPepper had a tournament experience exactly like this, where he essentially got locked into multiple ties because, instead of playing to win, his opponents wisely decided not to risk it and instead avoided losing.
The application to playing games of 40k is clear: If you want to stop losing, stop trying to win and start trying not to lose. Because if both players chose to try and win, and they don't make mistakes, then the game will be settled by the dice, and as mentioned that's an irrational risk if you want to do better than you would otherwise.
In this thread I'm going to use game theory to build on the groundwork laid out here, and introduce additional concepts beyond the ones of rationality, common knowledge, domination, the Nash Equilibrium (not doing worse than otherwise), and reciprocity (good-will and retaliation) that I've mentioned here. I would also like to further modify the Sequential Iterated Hawk-Dove game to build a model of Warhammer 40,000, and to eventually show where the traditional view of Math Hammer as Statistics fits into the mathematical treatment of Warhammer 40,000 as a game.
19377
Post by: Grundz
My favorite part about people who try to hammer it all out is that they don't see the big picture.
I commonly lure them into losing a game by presenting a mathematically attractive option that drags units out of place or works towards the greater plan.
Tactics beat mathammer. People who don't have a concept of the math generally do poorly, but too much emphasis is put on it.
21789
Post by: calypso2ts
I wrote a whole lot initially, but my browser killed it. I would be interested to see how you abstract WH40k to match some of these Game Theoretic principles. More importantly, how they can be transformed to identify winning strategies for these games.
I originally expanded on some of these concepts, but I do not want to step on your toes. Also, I assume you will be building this model into your explanation and I would hate to rehash things.
One interesting read about rationality, though (it is short), is Hey's "Do Rational People Make Mistakes?" Automatically Appended Next Post: Grundz wrote:
Tactics beat mathammer. People who don't have a concept of the math generally do poorly, but too much emphasis is put on it.
This is the 'Hannibal' syndrome, you can win every battle (tactics) while losing the war (strategy). Sometimes it is okay to lose individual encounters when it advances your aggregate strategy.
4003
Post by: Nurglitch
Grundz:
Yes, and in presenting your opponents with a suitably sweetened outcome that they find irresistible, you are using game theoretic mathematics better than they are.
Part of this discussion is to situate exactly that kind of play in game theoretic terms.
calypso2ts:
The plan is not to abstract Warhammer 40,000 to fit game theoretic principles, but to build a model of Warhammer 40,000 using game theoretic principles, while stressing the basics such as the difference between tactics and strategies, and analyzing strategically preferable options.
I might also recommend writing in Notepad and then copy-pasting into your browser. I've been dealing with fishy internet connections for years and it's saved me lots of headache for long posts.
Incidentally, by 'Hannibal', aren't you referring to Pyrrhus? Hannibal didn't win all his battles only to lose the war; he just won most of them and then was recalled to North Africa when the Romans went on the offensive. Pyrrhus, on the other hand, gave his name to the Pyrrhic Victory.
11988
Post by: Dracos
calypso2ts wrote:This is the 'Hannibal' syndrome, you can win every battle (tactics) while losing the war (strategy). Sometimes it is okay to lose individual encounters when it advances your aggregate strategy.
My most recent game is direct anecdotal evidence of this. I was playing a capture and control game wherein my opponent was playing Orks. By turn 3, I had crippled his mobility, but he had a huge central position. Therefore, I sacrificed every unit in my army to prevent him from being able to take my objective, while sending a mobile unit to claim his. By doing this, my army had fewer points left on the board at the end, but I knew that I could simply make him take too long to go through my army to make it to my objective. I lost many battles in that war, but overall won the game 2 objectives to none.
edit: But it's not quite as simple as that either: had he chosen to play for the draw, I would have been hard pressed to remove him from his own objective. Playing for a draw is often a good way to play, as it often has the lowest risk.
22749
Post by: Lycaeus Wrex
Excellent post, which definitely makes for interesting reading. Whether I would personally analyse a game of 40k this deeply is up to the jury to decide, but I appreciate that other people take the time to write up an intriguing read such as this.
L. Wrex
4003
Post by: Nurglitch
As with any kind of competitive analysis, it's probably better to go by gut instinct than to attempt to crunch the numbers under stress. That said, training yourself to do the math naturally means training your intuitions to match the math, so your intuitions are more accurate and reliable.
4003
Post by: Nurglitch
I should reiterate something I'm going to keep coming back to: trying to win is not a sound strategy, and it's better to avoid losing.
11988
Post by: Dracos
Interesting to note that the Hawk-Dove game is a great analogy. Trying not to lose will help you to win against someone who is trying to win, but is a horribly drawish way to play if your opponent also takes the same strategy.
However, if you examine the strategy as part of a tournament strategy, it's actually not as helpful to try to avoid losing. Since avoiding losing results in more drawish games, you are less likely to come out on top at a tournament.
4003
Post by: Nurglitch
Dracos:
I think it would be helpful here to distinguish between losing the game and losing more in terms of acting irrationally. When I say that you should play to avoid losing, I mean you should follow the Nash Equilibrium and act rationally, not that you should attempt to play to a draw.
Remember, in the Hawk-Dove game, playing the Dove strategy is worse than playing the Hawk strategy. While both players would get more by both playing Dove than by both playing Hawk, either would get less by playing Dove if the other played Hawk. Playing Hawk is therefore playing to avoid losing, not playing to win.
As a placeholder, therefore, it is useful to consider playing to win as substituting for the Dove strategy, and playing not to lose as substituting for the Hawk strategy.
Which brings me back to my point about not losing. If playing not to lose really is the equivalent of the Hawk strategy, then playing not to lose shouldn't result in more tied games, particularly if you're playing against someone playing to win.
It's an interesting point about the Hawk-Dove game that empirically speaking beginners play Dove at a far higher rate than experienced players, and the more experienced they are, the more likely they are to simply play Hawk.
Warhammer 40,000 has a far longer feedback loop between the strategy the player employed (if any...) and the results of that strategy, such that many players don't adapt a losing strategy. Indeed, that's why I raised DashofPepper's tournament report as an example of a good player, by all accounts, being flummoxed by an opponent that didn't also try to win.
4003
Post by: Nurglitch
Heh, I'm going to follow this up with something more substantive, but I thought I might mention that one of the Soul Drinker novels has the Adeptus Mechanicus (or an Explorator Fleet thereof) solving fleet-action problems using abaci because they can
To be honest, an abacus is handy. I'm currently using custom Litko pieces and dice to calculate stuff like the expected value of Venerable Dreadnoughts. So much easier than calculating in a linear fashion...
24707
Post by: Hesperus
I like game theory, and I like that you know game theory, but I don't think the game theory is doing any work here at all. Consider this post an attempt to figure out exactly what you're saying.
First, a technical point. The prisoner's dilemma is a simultaneous game. 40k is a sequential game. The solution happens to be the same either way in this case, but simultaneous games in which there's no dominant strategy come out different if you make them sequential games.
Second, I don't really see where you're getting your payoffs. Where do you get these? I guess you're saying that, if the other player plays to win and you play to not lose, you'll beat them. And here your terminology is confusing. Shouldn't the player who plays to win have a better chance of winning? Otherwise he's not actually playing to win; he's just playing like an idiot. Maybe I simply don't understand your terms.
Maybe what you mean by 'play to win' is 'play riskily,' and by 'play to not lose' you mean 'play conservatively?' That makes your argument at least coherent, but it also makes it wrong.
It's wrong because risky plays don't necessarily have lower expected payoffs than conservative ones. Risk-loving people are just as rational as risk-neutral or -averse people. They just have a different preference when it comes to risk.
Now, you could be making an empirical claim, that in general risky plays in 40k do have lower expected payoffs than their conservative alternatives. That's very interesting if it's true, but (a) it doesn't require any game theory analysis, and (b) you haven't offered much evidence to prove it. Indeed, most tournament winners run pretty risky lists and play pretty risky games. On your payoff scheme, they should be losing to conservative players, but they don't.
Please correct me if I've gotten something wrong. I like the idea of applying game theory to 40k; I just don't think it works in this instance. Applying it to list-building has some potential, though. Maybe you can get us past the inane rock-paper-scissors analogy.
28942
Post by: Stormrider
This is an interesting take on playing "not to lose", which is an effective idea, but I am a bit too bold to do that.
I am what you could call an "attack and receive" tactician: I'll either let my opponent overextend themselves and crush their salient (much like the USSR at Kursk), or I advance on an even and broad front (terrain permitting) and gradually grind their force down, not giving them units to kill piecemeal, and making them suffer for every loss they do inflict. It's a bit of a generalist approach, but it's very effective.
Good thread Nurglitch!
31026
Post by: SmackCakes
Nurglitch wrote:1. Hawk vs Hawk: Each player takes a $1 from the other (1/1)
2. Hawk vs Dove: The Hawk player takes $1 from the Dove, and the Dove gives the Hawk $2 (0/3)
3. Dove vs Dove: Each player gives $2 to the other (2/2)
Nurglitch wrote:But so what if player B retaliated? Suppose the game ends on turn 5, with both players continuing to defect after turn 3. Player A would win with $9 while player B would lose with $6. Not bad, but player A is rational, and prefers $2 to $1. Had player A continued to co-operate, he would have finished the game with $10. The same goes for player B. That tactical decision would have been irrational.
So why was it irrational for player A to play Hawk on turn 3, when it won him the game, which would otherwise have been a tie? Firstly, it is taken as an assumption in Hawk-Dove and its variants that player A is rational and prefers the monetary result to the game result
That doesn't make any sense. If I win $10 off you, and you win $10 off me, then neither of us wins any money. If I play Hawk on turn 3, then I win $3.
Firstly, it is taken as an assumption in Hawk-Dove and its variants that player A is rational and prefers the monetary result to the game result
If that were the case, why would he choose $0 over $3? It doesn't make sense.
27582
Post by: Smarteye
Nice comparison!
Admittedly I had to read it 2-3 times to understand it, but very thought-provoking nonetheless.
20243
Post by: Grey Templar
Nice post
I can see how this is a good strategy.
"Try not to lose"
25580
Post by: Maelstrom808
Playing not to lose has been a core ideal I apply in pretty much every game I've played: 40k, A&A, RTS and TBS video games, etc. For 40k I fell into it naturally playing Necrons and have extended it to every other army I've dabbled in (Nids, IG, Tau). Form a strong defensive core first and foremost. Add in glass cannon offensive elements that are designed to quickly cripple the opponent's own offensive capability, then let them make the mistakes and seize an opening if one appears, but don't risk the core to create an opening. Overall I'd say I draw a LOT, win occasionally, and rarely ever lose. I also don't have the "I must win to have fun" perspective that I find in many people.
6153
Post by: Xenoscry
First let me start by saying I love this thread.
It's well thought out and well written. In my opinion the core concept is a relatively simple one but has the ability to fundamentally change the way people play their games. I am extremely interested to see what you do with this and will be watching this closely.
Second, I have been playing Tau since 2002 and learned the "don't play to win, play not to lose" philosophy fairly early on. I have always loved Fire Warriors and do my best to field 2-3 full squads in any given game. At first it was because, from a fluff standpoint, the Fire Warriors are supposed to be the backbone of the Tau military and more seen on the battlefield than any other type of unit. But over time I found I was losing many games because my opponents just weren't failing saves, and for all the pulse rifle's strength and range, it won't kill anyone if they don't fail that armor save. I wrote posts on other forums and talked to everyone I could find at hobby stores. Eventually I came across a post on Warseer where a guy began talking about how Tau players really shouldn't be trying to table their opponents.
"The name of the game in warhammer isn't who killed the most. It's who dies the least."
Since then I have always kept that in mind during my games, and in doing so my ability to win while playing the troop-heavy army I enjoy has increased dramatically. I no longer attempt to simply wipe out my opponent, but try to weaken him so that my defensive strengths play through (good armor, good saves, good mobility). All the while I'm thinking that keeping more of my army alive than his increases my ability to take points, further damage his force, and control the scale on which statistical abnormalities affect my position.
While I have applied this theory to the Tau it can work for everyone and I'm just glad to see people talking about it... FOR THE GREATER GOOD! lol ;o)
4003
Post by: Nurglitch
Hesperus:
I do believe I spent some time discussing where sequential games fit into this, specifically the Sequentially Iterated Hawk-Dove game, the fact that Warhammer is a sequential game, and that I would be building a model once I'd covered enough material that the conceptual machinery would be available.
In other words, that first post was just about introducing basic concepts and encouraging people to use them in juxtaposition with Warhammer 40,000. If you feel that I haven't managed to show how game theory can be used to analyze Warhammer 40,000, you'd be right, because I haven't gotten to that point yet.
So instead, as an abstract exercise to get people thinking about the two, I linked playing to win with Dove, and playing not to lose with Hawk. The payoffs follow according to the Hawk-Dove games. Risk here is in terms of not acting rationally and stating a revealed preference for less rather than more. I suppose the problem with introductions is that they don't also contain a body and a conclusion.
The important thing to take home, so to speak, is that one can think about Warhammer 40,000 in game theoretic terms, rather than any specific conclusions about strategy. I think I might go back and tag out that post to clarify.
SmackCakes:
The Hawk-Dove game and its descendants are not zero-sum games. Giving the other player $10 does not proportionately decrease your own score.
You may have noticed that I made no mention of the resources available to each player, and incidentally a concept of resources is one of the things that will distinguish the Hawk-Dove toy game from the game theoretic model I will be building.
Regarding the reason for a player rationally playing Dove in any single iteration of an Indefinitely Iterated Hawk-Dove game, I had explained that while a player may appear to benefit from the Hawk strategy in the Hawk-Dove game, the conditions of the Indefinitely Iterated Hawk-Dove game makes that benefit short-term and near-sighted.
There are three things that make it rational to play Dove, with possible payoffs of $2 or $0, instead of Hawk, with possible payoffs of $3 or $1.
First there is the symmetry between the players: both are rational, and if doing something is a good idea for one player, it will be a good idea for the other. This means that players need to consider sets of outcomes, rather than particular outcomes.
Secondly, there is something called backwards induction, which is how one works back from a goal to establish how best to achieve that goal.
Thirdly, and tying backwards induction to symmetry, the players prefer more at the end of the game to any momentary gain at the end of any turn or iteration.
So [1, 3] > [0, 2] in the Hawk-Dove game because it is a single iteration and a player playing Hawk will never do as badly as when they play Dove.
In the Indefinitely Iterated Hawk-Dove game the players don't know when the game will end. So while a player will do better than their opponent by playing Hawk earlier, their opponent knows the same thing thanks to common knowledge and symmetry, and neither will do as well as when they continuously play Dove, since $2 > $1. Over five games, a pair of reliable Dove-players will rack up $10 each, whereas a pair of reliable Hawk-players will rack up $5 each. The Hawk-players are therefore acting irrationally, because they prefer $10 to $5.
Notice that the symmetry in the Hawk-Dove game means that the actual result of play will be $1 for each player, which is preferable to the $0 either would get by playing Dove.
Let me return to the notion of backwards induction and the general problem of circularity. Lots of people have difficulty with game theory because it uses cyclical reasoning, something that people sometimes mistake for the circular reasoning fallacy. Indeed, many people get caught in a vicious loop the first time they are introduced to the Hawk-Dove game, because they want to think of it like "If I know that he knows that I know that he knows, etc."
As mentioned, revealed preference circumvents all that blather about minds and thoughts, as it fixes the outcomes as an indexed table. This is why so many toy games are of the 2x2 grid variety with two strategies/tactics: it's easier to demonstrate concepts with a parsimonious number of elements.
24707
Post by: Hesperus
So my earlier post was a little rambling. I'll say things succinctly here.
I disagree.
As far as I can tell, your thesis is: Playing conservatively beats playing riskily.
That's fine, but the parts of game theory you introduce don't support it. They don't hurt it, either. They don't do anything either way.
Evidence suggests your thesis is wrong, at least at higher levels of play. If playing conservatively beat playing riskily most of the time, most tournament winners would be people who played conservatively (assuming those are the only two ways to play). But most tournament winners don't play conservatively.
Here's my opposing position: playing riskily gives you a higher chance to either win big or lose big, and playing conservatively gives you a chance to either win small or tie. In a tournament setting you have to win big, so it makes more sense to play riskily. In non-tournament settings it's a matter of personal preference.
I do think it would be interesting to apply game theory to list writing, i.e. the metagame, because that is a simultaneous game. It'll be complicated, though.
4003
Post by: Nurglitch
Hesperus:
I'm not sure how to make this clear: I arbitrarily connected playing to win and playing not to lose with the Hawk-Dove game to encourage people to think about Warhammer 40,000 in game theoretic terms. I apologize for misleading you into thinking I was making anything more than an abstract claim for the sake of promoting discussion.
If you want to argue with any concrete claims I will actually be making, please stay tuned for when I actually make them.
Also I don't see any point to discussing list writing in the Tactics forum: that's Army List forum stuff. If you'd like to start a thread there about applying game theory to army list writing, please feel free to do so.
752
Post by: Polonius
If I'm reading you correctly, it sounds like you're saying that the best game theory approach is to simply make choices that result in the "best" average or worst-case scenario, as opposed to those strategies that result in the "best" best-case scenarios.
It seems that in many games, the ability to gain positive value through skill is relatively slight, while the ability to lose value through mistakes is huge. Is that where you're heading?
It would seem that actual game play in 40k (aggressive vs. conservative) is not entirely related: there are times that playing aggressively is actually the way to avoid the largest loss. Playing horde orks against tau gunline is not the time to be coy.
It seems the other concept being alluded to here is the difference between "winning" and "achieving the best result." The rational player wants to have the best possible value at the end of the day, instead of worrying about winning any given game.
24707
Post by: Hesperus
Yeah, sorry, you posted while I was writing my last post and I didn't notice. If you're not actually trying to model anything yet, then cool. It's gonna be really hard to model a game as complex as 40k, but you seem to have the background to do it. I'm interested to see what you can do.
35046
Post by: Perkustin
Lol thought it said 'Methhammer' that is all.
Post +1... lol
35368
Post by: CurrentlyUnknown
Hm, maybe I'm misreading Nurglitch's post, but my understanding is that using the "model" that he describes in the first post isn't really an abstraction that should be applied to 40k, either at a single game, or tournament level. It is far too simplistic, and does not accurately describe the game, or a series of games.
His point, I believe, was to give a framework to show how game theory can provide "answers" for rational-acting players in situations. For example, backwards induction is something that can be utilized to assess tournament play. That said, taking the same tourney scenario and simply shoving it into the prisoner's dilemma (or hawk-dove, because that's how I roll) game doesn't really provide the information you'd want.
29681
Post by: EvilEggCracker
So... Essentially it's all situational, and we should try not to lose?
Withering hail of lasguns > Game Theory.
My men will advance no matter what, with their bayonets. Strategically smart or not, I will take the entire damn table.
24707
Post by: Hesperus
If your dichotomy is risky v. disciplined, then you're using risky in a way I'm not. Your way is something more like "reckless," which is definitely a bad way to play. Of course you always want to play the odds, because then (shockingly) the odds are in your favor. I was using risk in the way typically used by economists, and especially game theorists. A "riskier" choice in that sense is something with a lower probability of success, but a higher payoff.
Say, for example, you have a scoring unit on an objective and your opponent has a scoring unit on another objective (and that's it). It's the bottom of the 7th turn. You can sit on your objective and tie, or assault him on his objective, with a 25% chance of wiping him out, a 25% chance of being wiped out, and a 50% chance of both units surviving and contesting. Assume a win is worth 2, a tie is worth 1, and a loss is worth 0. Which should you do? Well, you could rationally pick either one, because they have the same expected value (of 1). A risk-loving person would go for it, while a risk-averse person would take the guaranteed tie. Neither choice is right or better.
Now, you're saying that disciplined play usually works better. That's probably true: you can't lose the forest for the trees. But a very risky player can be just as disciplined as a very conservative one. He simply has a different kind of battle plan. He can still stick to it, taking calculated risks, though he'll probably have to plan for more contingencies than a conservative player.
Seems like you have two real mottos. 1. Stick to the plan! That's good advice. 2. Do as well as you can! Also good advice. If you think you can only get a draw, then play for it. If you think you can pull a massacre, play for that. But if you think you can pull a massacre, you don't play to not lose.
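Here's the arithmetic on that example, for anyone who wants it (probabilities and payoffs as above; the snippet is just a check):

```python
# Assaulting: 25% win (2), 50% both survive and contest, i.e. tie (1), 25% loss (0).
ev_assault = 0.25 * 2 + 0.50 * 1 + 0.25 * 0
ev_sit = 1.0  # the guaranteed tie
print(ev_assault, ev_sit)  # 1.0 1.0 -- identical, so risk preference decides
```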
4003
Post by: Nurglitch
I think I'm going to continue with the purpose of this thread now...
As mentioned, I wanted to discuss more game theoretic machinery in order to reinforce the principles discussed in relation to Hawk-Dove, and perhaps learn from my mistake in proposing abstracted applications over the specifics of Warhammer 40,000.
But first we need to go back to a fundamental aspect of Warhammer 40,000, and that is its sequential structure. Warhammer 40,000 alternates turns between players rather than combining them simultaneously, and likewise orders movement, shooting, and close combat as sequences. Close combat itself is simultaneous, in that the results don't sequence into separate combats and models may attack at the same Initiative step, but it is also a sequence, defined by the Initiative steps.
Both players' pieces interact in close combat, making it a complex piece of simultaneity and sequence nested in the turn sequence. This complexity will be addressed after I've reviewed some implications using the Sequentially Iterated Hawk-Dove game.
In the Sequentially Iterated Hawk-Dove game, the players retain common knowledge of potential outcomes despite the choices made by the players happening in sequence. As noted, backwards induction tells us that players will rationally prefer to play Hawk if they know when the game will end, and prefer Dove if they do not.
Although the second player will appear to have the advantage of knowing the first player's move, both players will know the potential reactions to their moves via backwards induction, so they will have the information necessary to anticipate the potential reciprocity, and hence the dominating strategies.
Now it may seem that the players of Warhammer 40,000 do not have several of the elements available to Sequentially Iterated Hawk-Dove players. They do not appear to have perfect information, for instance, but that is to confuse uncertainty about any particular outcome with knowledge of all the potential outcomes in any given turn, which form the parameters of action. They have complete information about the game, just as players of the Hawk-Dove games do.
However, the Hawk-Dove games lack an element present in Warhammer 40,000: the random element of dice. Players of the Sequentially Iterated Hawk-Dove games can assess the potential outcomes of either player's choices exactly, and hence the expected utility of the strategies and tactics available to them: for every choice, there is one of four outcomes, according to the payoff table of the Hawk-Dove game.
Players of Warhammer 40,000 are swamped with information about the expected value of their choices as well as the expected utility. The expected utility of an action in Warhammer 40,000 has ceilings (except in the case of Blood Talons...) and floors, with the floor typically set at 0 (though sometimes negative, in the case of Gets Hot), and the space in between is divided by all the potentials. For example, a squad of five Tactical Space Marines shooting their Boltguns at a target 12" or less away has 11 outcomes, ranging from 0 to 10 results (damage/wounds), with the expected value usually closer to 0 than to 10.
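As a sketch of what one of those distributions looks like, here's the math for the ten bolter shots in Python. I've had to assume a target profile the example doesn't give, so take the numbers as illustrative: BS4 (hitting on 3+), Strength 4 against Toughness 4 (wounding on 4+), and a 3+ armour save (failed 1/3 of the time):

```python
from math import comb

# Per-shot chance of an unsaved wound under the assumed profile:
# hit on 3+ (2/3), wound on 4+ (1/2), fail a 3+ save (1/3).
p = (2 / 3) * (1 / 2) * (1 / 3)  # = 1/9

n = 10  # five Marines rapid-firing at 12" or less
for k in range(n + 1):
    prob = comb(n, k) * p**k * (1 - p) ** (n - k)
    print(f"{k:2d} wounds: {prob:6.2%}")
print("expected wounds:", n * p)  # ~1.11
```

The expectation comes out around 1.1 wounds, with almost all the probability mass at the low end: the ceiling of 10 barely matters.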
In a practical sense, then, the players often lack information about the outcomes of their own actions and those of their opponent, but in game theoretic terms they should be able (given enough time and calculating machinery) to determine the payoff table associated with each turn of the game. This complete payoff table would suggest complete information, were it not the case that Warhammer 40,000 is concerned with scarcity of resources and space, as well as time (time being the number of turns or iterations).
So let's return to a perceptive 'mistake' made by a poster in this thread, who wondered why the players of the Iterated Hawk-Dove might prefer to exchange $10 in a zero-sum game for $3 and a win. In the Hawk-Dove game the players can play Hawk or Dove without regard for whether they have the resources to do so, or the space to make it happen. They just have to worry about the dimension of time limiting the dominating strategies.
Played using Victory Points as the payoff, Warhammer 40,000 becomes a zero-sum game, and Kill Points and random numbers of Objectives can thus be regarded as features that broaden the number of strategies available beyond those of purely zero-sum games.
Regardless, this scarcity of resources and space means that while players can use backwards induction and the payoff table to think one game turn ahead, they will not have complete information for all the potential game turns ahead. In the Sequentially Iterated Hawk-Dove game, playing Hawk does not prevent one's opponent from playing Hawk in retaliation the next turn.
By contrast, destroying a Land Raider on the first turn prevents it from acting in subsequent turns and stops it from synergizing with other units. The ultimate payoff is therefore more than 2x that of destroying the Land Raider on the second turn, because as well as potentially stopping the Land Raider from acting twice, it also limits the scope and options for retaliation.
A first approximation of this situation can be demonstrated by rebuilding the Sequentially Iterated Hawk-Dove game so that both players draw on a limited pot of $10, have five turns each, and receive their payoff on the turn after playing a tactic. This means that the player with the first turn gets no payoff until their opponent chooses a tactic, and so on.
Indefinite iterations and variable first turn will be added to the model later.
I'll reiterate that, as usual, rationality is measured by how much money the players accrue over the course of the game rather than whether they win.
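To make those modifications concrete, here is one possible reading of them in Python. The bookkeeping of the shared pot and the one-turn payoff delay is my own interpretation, so treat this as a sketch of the setup rather than the model itself:

```python
# Sequentially Iterated Hawk-Dove (5) with a shared $10 pot.
# Assumed bookkeeping: a round's payoffs resolve only once both tactics
# are in (i.e. one player-turn after each tactic is played), the first
# player's payoff resolves first, and payouts stop when the pot is dry.
PAYOFF = {("H", "H"): (1, 1), ("H", "D"): (3, 0),
          ("D", "H"): (0, 3), ("D", "D"): (2, 2)}

def play(moves_a, moves_b, pot=10):
    scores = [0, 0]
    for move_a, move_b in zip(moves_a, moves_b):
        for player, payout in enumerate(PAYOFF[(move_a, move_b)]):
            paid = min(payout, pot)  # the pot caps what can be collected
            scores[player] += paid
            pot -= paid
    return scores, pot

print(play("DDDDD", "DDDDD"))  # ([6, 4], 0): mutual Dove drains the pot in round 3
print(play("HHHHH", "HHHHH"))  # ([5, 5], 0): mutual Hawk spends it exactly over 5 rounds
```

Notice what scarcity does under this reading: mutual Dove no longer pays $10 each, and the player whose payoff resolves first collects more of the pot, which is the sort of first-turn effect the model is meant to capture.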
24707
Post by: Hesperus
See, this is what I was concerned about to start. People who haven't taken game theory are gonna have a heck of a time understanding what Nurglitch is talking about. The prisoner's dilemma (which is identical to the hawk and dove game, incidentally) is not directly applicable to 40k. It's a simultaneous game and 40k is a sequential game (more specifically, an indefinitely iterated (repeated) sequential game). In the prisoner's dilemma you don't pick which player you're going to be, because each player has the same dominant strategy: squeal. There's never a time in the prisoner's dilemma when you shouldn't squeal. You might keep quiet in one instance of an indefinitely iterated series of prisoner's dilemmas to build some goodwill with your opponent, so maybe you can get to the cooperative outcome. But I have no idea how to connect that to decisions in 40k.
So starsdawn, good try. It's confusing stuff. But it's not Sun Tzu, and it's not... whatever you were doing there.
And Nurglitch: As I said before, I think this is an interesting idea, but it's gonna be crazy hard to actually use. I'm having trouble following you and I have a background in econ. Use small words, and don't use examples unless you're actually modeling something. I think it's just confusing people. And if I can help in any way, I'd love to. It's a huge project.
4003
Post by: Nurglitch
Deleted in Compliance with the Moderators
Some of the concepts explored in the Hawk-Dove game.
Rational - Players are rational when they consistently prefer more to less.
Players - Games have a minimum of two.
Iterations - Number of times a game procedure is repeated.
Tactics - Options in a game.
Strategies - Sets of tactics ordered over iterations. Synonymous with tactics when considering single-iteration simultaneous games such as Hawk-Dove and the Prisoner's Dilemma.
Payoff table - A table cross-referencing the utility of simultaneous tactics. It will have a minimum of four cells.
Revealed Preference - The utility of each intersection of each player's tactics. In Hawk-Dove the revealed preferences are [$0, $1, $2, $3] for each player.
The Nash Equilibrium - The strategy that best replies to the set of potential opposing strategies.
Domination - One strategy dominates other strategies if it is rational regardless of any opposing strategy.
Game tree - A flow-chart describing the relation of players to strategies and payoffs in sequential games.
Reciprocity - The property of a strategy reacting to the payoffs of previous iterations.
Common knowledge - Information about the game and its states shared by the players.
19941
Post by: SpankHammer III
Hi all
First off, wow, my head hurts. OK, I don't know game theory; I've heard of it, but that's about it. I can't say whether it's applicable or not, but Nurglitch, I'm interested in what you come up with.
Aren't you going to have a problem with the whole rational thing? As scubman said, people aren't rational, but you also have the problem in 40k of people not recognising something's worth. You mentioned that a rational player will always pick $2 over $1, but what if the player doesn't know $2 is greater than $1? As was mentioned, newer players often try to table the other player rather than attempting to hold objectives/limit losses.
I would ask you to try and keep this layman-friendly for those of us without math qualifications; however, if this thread isn't designed for laymen, then that's cool and I'll be quiet.
29681
Post by: EvilEggCracker
Nurglitch wrote:EvilEggCracker:
Well, I suppose someone had to go ahead and embarrass themselves in this thread. Way to take one for the team.
Not at all. I just feel that this entire discussion is inherently flawed - it's theory.
You have to remember that you're applying it to a game with random variables. Strategy and tactics are all too fluid to define as "Hawk" or "Dove". Ultimately, this discussion is pointless and has little-to-no relevance to the actual game.
I just wanted to put that point across in a more "fun" way.
Edit: Also, if you played a mass infantry Imperial Guard army, you'd understand why I'd want to strap the bayonets on and start marching.
11988
Post by: Dracos
Then why do you insist in detracting from this thread? I for one am enjoying what he has to say, and your posts only serve as a minor annoyance and nothing else. You should stop posting here.
23589
Post by: Sageheart
Hesperus wrote:I like game theory, and I like that you know game theory, but I don't think the game theory is doing any work here at all. Consider this post an attempt to figure out exactly what you're saying.
First, a technical point. The prisoner's dilemma is a simultaneous game. 40k is a sequential game. The solution happens to be the same either way in this case, but simultaneous games in which there's no dominant strategy come out different if you make them sequential games.
Second, I don't really see where you're getting your payoffs. Where do you get these? I guess you're saying that, if the other player plays to win and you play to not lose, you'll beat them. And here your terminology is confusing. Shouldn't the player who plays to win have a better chance of winning? Otherwise he's not actually playing to win; he's just playing like an idiot. Maybe I simply don't understand your terms.
Maybe what you mean by 'play to win' is 'play riskily,' and by 'play to not lose' you mean 'play conservatively?' That makes your argument at least coherent, but it also makes it wrong.
It's wrong because risky plays don't necessarily have lower expected payoffs than conservative ones. Risk-loving people are just as rational as risk-neutral or -averse people. They just have a different preference when it comes to risk.
Now, you could be making an empirical claim, that in general risky plays in 40k do have lower expected payoffs than their conservative alternatives. That's very interesting if it's true, but (a) it doesn't require any game theory analysis, and (b) you haven't offered much evidence to prove it. Indeed, most tournament winners run pretty risky lists and play pretty risky games. On your payoff scheme, they should be losing to conservative players, but they don't.
Please correct me if I've gotten something wrong. I like the idea of applying game theory to 40k; I just don't think it works in this instance. Applying it to list-building has some potential, though. Maybe you can get us past the inane rock-paper-scissors analogy.
I'm very much on the same thought process as this. The hawk-dove talk is interesting, but I don't see exactly how it works. I see the idea of playing conservatively as legit, definitely if you think of the article "Way of the Water Warrior", which seems to show a viable way to play a conservative list that requires careful planning, but that doesn't seem to be playing not to lose.
And what exactly do you define as risk or rationality in a game such as 40k? Is rationality keeping objectives in mind? What comes to mind is a Land Raider wreaking havoc on my forces: I rationally need to stop it, since it is hurting my chances of having troop choices to grab objectives, but it could be a risk to take out that Land Raider. Is that too situational for this? I like what you're saying, but I have trouble grasping how exactly it comes into play when I am looking at the field of battle and the various choices I can make. Automatically Appended Next Post: Could you also maybe try explaining the Prisoner's Dilemma in a bit more detail, not to derail the OP? I am really interested in this but I am having trouble grasping it.
28090
Post by: liam0404
Firstly, I personally would like to thank Nurglitch for this fantastic thread. As a Comp Science graduate, this is right up my street.
As I'm currently on a train I will reserve my full comments for later, but I do have one question: wouldn't the game tree for a game as complex as 40k be absolutely horrific in scale? You'd need some very tangible heuristics to search it effectively, and I'm not sure what you could use. Automatically Appended Next Post: @sageheart
The basic prisoner's dilemma is as follows.
You have prisoner A and prisoner B, both arrested on suspicion of the same crime. Both are interrogated separately. If they both implicate each other, both receive 10 years. If they both remain silent, they both get 6 months. If one implicates the other, but the other remains silent, the accuser walks free while his partner is jailed for life.
24707
Post by: Hesperus
To help clarify:
The reason it's called 'the prisoner's dilemma' is because, if both prisoners act rationally, they'll end up with a result that's worse for both than if they'd both acted irrationally.
Pretend you're prisoner A. You're thinking about what to do. If prisoner B squeals, then you have a choice between squealing and getting 10 years, or staying silent and getting life. If B stays silent, you have a choice between squealing and going free or staying silent and getting 6 months. In either case, it's better for you to squeal.
In game theory terms, squealing is the dominant strategy: no matter what the other player does, you should do it.
This means that, assuming both actors are rational, both will squeal, which means both will get 10 years.
Notice that, if both had stayed silent, both would have gotten 6 months, the jointly-optimal result.
A strategy is a set of choices that covers all possible events in the game. A Nash equilibrium is a set of strategies such that neither player ever regrets one of his choices.
In the prisoner's dilemma, the Nash equilibrium is:
For A: If B squeals, squeal. If B stays silent, squeal.
For B: If A squeals, squeal. If A stays silent, squeal.
It's a Nash equilibrium because both players are satisfied with all of the choices in their strategy: given what the other player does, they don't regret doing what they do.
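If it helps, here's that best-reply logic in a few lines of Python, with liam0404's sentences encoded as negative utilities (life approximated as -99; the encoding is mine, not part of the dilemma):

```python
# (my action, their action) -> my utility, in negative years of jail.
PAYOFF = {
    ("squeal", "squeal"): -10,
    ("squeal", "silent"): 0,     # the accuser walks free
    ("silent", "squeal"): -99,   # life, approximated
    ("silent", "silent"): -0.5,  # six months
}
ACTIONS = ["squeal", "silent"]

def best_reply(their_action):
    return max(ACTIONS, key=lambda a: PAYOFF[(a, their_action)])

for theirs in ACTIONS:
    print(f"vs {theirs}: best reply = {best_reply(theirs)}")
# Squeal is the best reply both times: it's dominant, so (squeal, squeal)
# is the equilibrium, even though mutual silence pays both players better.
```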
Does that help? Also, Nurglitch, correct me if I messed anything up. It's been a few years.
23589
Post by: Sageheart
I have heard of this before. Thanks for clarifying. It makes me understand the concept Nurglitch is explaining a lot more. The Hawk-Dove confused me, but this makes more sense to me.
So the idea behind this (tell me if I'm wrong; I really want to understand this as well as possible) is that if both players are acting rationally, both will squeal, since it leads to the best results no matter what the other player does. And this becomes the way to play, with the focus on not losing rather than on winning. If you had not talked, you had the outcome of life or freedom but no middle ground.
My friend, a neuroscience/math teacher at NYU, explained something like this to me in regards to MLs. He was talking about how some people mathhammer 2 Orks with rokkits as being just as optimal as 1 SM with a ML.
He was explaining that this is not true, since the purpose is to hit one tank: you just want to get that hit, and if we assume an SM always hits, and the Orks get either no hits, one hit, or two hits, you would rather have the SM, since you get that one hit no matter what. I believe that is how he explained it. If you have a chance of failing, it isn't always the best option, since it is more risky.
If I have this correctly understood, I would then wonder how exactly one puts this concept into game terms, and "squeals" in the game. Does one do that by not taking risks?
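For what it's worth, that point survives even with real to-hit rolls. A quick sketch (the profiles are assumed for illustration, not stated above: a BS4 Marine hitting on 3+, BS2 Orks hitting on 5+):

```python
# One Marine missile launcher vs two Ork rokkits: expected hits and
# the chance of landing at least one hit.
p_marine, p_ork, n_orks = 2 / 3, 1 / 3, 2

expected_marine = p_marine                     # ~0.667
expected_orks = n_orks * p_ork                 # ~0.667 -- identical
at_least_one_marine = p_marine                 # ~0.667
at_least_one_orks = 1 - (1 - p_ork) ** n_orks  # ~0.556 -- worse

print(expected_marine, expected_orks)
print(at_least_one_marine, at_least_one_orks)
```

Same expected hits, but the single Marine shot is more likely to land at least one hit, which is the risk difference being described.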
13192
Post by: Ian Sturrock
Anyone wanting a slightly more in-depth but very readable intro to game theory (and, in fact, a kind of history of economics and economists in general, most of whom were a right rum bunch) could do worse than look at a pop science book called Doctor Strangelove's Game.
4003
Post by: Nurglitch
Ken Binmore's Game Theory: A Very Short Introduction is also very accessible for people, and contains many amusing anecdotes.
Something Mr. Binmore points out about toy games like the Hawk-Dove game is that they're simply little things used to illustrate game theoretic concepts, and I related that in the original post. Neither the Hawk-Dove game nor the Prisoner's Dilemma apply directly to Warhammer 40,000.
Where they do apply is in getting people to understand the concepts when they're applied in new and novel ways.
Which is why I originally proposed a version of the Hawk-Dove game called the Sequentially Iterated Hawk-Dove game, wherein the dominant strategy is to play the Hawk tactic for whatever number of finite iterations.
The Sequentially Iterated Hawk-Dove game has more in common with Warhammer 40,000 than its predecessors, the Hawk-Dove game and the Iterated Hawk-Dove game. Specifically, it is sequential, or "IGO-UGO" in the vernacular.
In my last post I proposed two other properties of Warhammer 40,000 that an accurate game theoretic model should have, namely an account of resources (time, space, material), and uncertainty in non-player random elements.
While the latter is covered by Statistics, the former plays out differently depending on what's known as the evolutionary perspective on game theory, as opposed to the rational perspective we've been exploring so far. I'll go into it further in my next post.
Remember that the game theoretic account of rationality is simple: it merely assumes that players consistently prefer $2 to $1, all else being equal. That's all there is, consistency and a preference for better. As assumptions of rationality go, this one is pretty spectacularly successful.
Remember that rationality is fixed to the goal of the game, rather than to any sub-goals, so that in Warhammer 40,000 the rational player consistently prefers to win rather than tie, and tie rather than lose. Victory points, kill points, and objectives are all ways of expressing and mediating this preference.
And even if one can't trust one's opponent to be rational even according to this broad definition, then one can simply go with the dominating strategy rather than seek the Nash Equilibrium.
Regardless, I believe the modifications offered concerning the Sequentially Iterated Hawk-Dove game in a previous post could bear some examination. The modifications are again simple, to demonstrate what they do: A limit to the money being drawn on (a shared pot of $10), and a single player-turn delay in the result.
27872
Post by: Samus_aran115
Okay, so I'm guessing this is just a discussion on mathammer? Okay, fine
I can appreciate math hammer. It does have its uses. But I really hate when people justify how much something sucks with mathammer. Or justifying how good something they have is. I don't care. Stfu and play, and stop hiding behind your math.
I.e: "Dude, why are you taking terminators? On average, my inquisitor lord's acolytes will wound one per turn, and my lord's force weapon will auto kill one guy per turn. Then you take your morale check, which you'll probably fail...."
Feth off buddy. I think you forgot to factor in that my bolters are AP 5 and will have probably torn apart your stupid 5+ save guys before they even get into assault, and that I'll probably be shooting lascannons into the unit from my land raider.
That's just an example I came across today
23589
Post by: Sageheart
I'm not sure if this is really mathhammer as much as the metaphysical, psychological world of WH40k. Almost like the weird dimension of the stack in Magic: the Gathering. I don't mean this insultingly; it's just that this is about abstract theories of game mechanics rather than just number bashing.
that was prob a bad example.
24707
Post by: Hesperus
@Nurglitch: 2 things.
1. Any good books introducing evolutionary game theory? I spent [too much money] on a college degree and my game theory class didn't even mention it.
2. I'm thinking you should edit the first post to start with: "This thread is NOT about regular mathhammer. This thread is a (very) brief introduction to game theory and explores its application to 40k." Maybe that'll cut down on the irrelevant posts.
24707
Post by: Hesperus
I guess my post wasn't clear. I've taken a college-level game theory course. Apparently it was a crappy course, because I've never heard of the evolutionary perspective on game theory. Are there any books on that specifically?
Oh, and if it's just applying evolutionary psychology to game theory, I'm considerably less interested.
24707
Post by: Hesperus
I'm asking Nurglitch, really. He mentioned the 'evolutionary approach' in his last post. I don't know anything about it, and I'm curious.
411
Post by: whitedragon
Hesperus wrote:I'm asking Nurglitch, really. He mentioned the 'evolutionary approach' in his last post. I don't know anything about it, and I'm curious.
http://plato.stanford.edu/entries/game-evolutionary/
You're welcome. Google is your friend.
19941
Post by: SpankHammer III
Nurglitch, thank you for explaining the rational/irrational part again, e.g. that 2 being better than 1 was to do with the overall outcome of the game (win better than lose) rather than the individual elements of play. That has cleared it up a lot for me.
32391
Post by: txscotch12
Gotta say the whole playing-not-to-lose thing works. I made a Skarbrand/Slaanesh list counting on the fact that I would have higher initiative than my opponent. Turns out, however, that there are several Eldar units which are actually faster on the charge than Slaanesh. Realizing this, I decided to try for the draw and not the win. Making sure my single base had plenty of defense, I ate the incoming Eldar units as they advanced on it. The game went to turn 6, which gave me time to get a single deep-striking unit close enough to his base, which had been left mostly undefended due to his aggressive play style. Thanks to playing for the draw, I was able to squeak out a win in a game I should have lost by all rights.
752
Post by: Polonius
I know Nurglitch is going to be working on a model for 40k, but I think that a lot of people are missing some of the underlying principles here, and not just in terms of hawk/dove. One key principle here is the idea that as rational people we want to maximize the outcome of the game. This is one area where 40k differs from hawk/dove, in that the cash result is the only outcome worth measuring in hawk/dove, while 40k can have varying different positive outcomes for players.
Approaching this from the mindset of a competitive player, whose goal is probably to win tournaments, you at least have a defined outcome: position at the end of the tournament. Even then, you need to ask yourself what your true goal is: achieve the highest possible placing, or maximize the chances of finishing first? As tournaments get larger, the two approaches blend into one, but at smaller events avoiding the hardest opponents is more critical to placing highly than winning every game.
Let's assume you want to maximize your chances of finishing first. If I take anything from hawk/dove, it is that you should try to control your own fate. Taking a list that has great strengths, but huge exploitable weaknesses, is a massive risk. Rather than maximizing the number of armies you can easily table, focusing on reducing the number of armies that can easily defeat you is more critical.
40k, probably more than most games of skill, does not allow for masterful moves that gain major advantage. Games are won or lost based on exploitation of mistakes, not by brilliant moves. I'm not sure what Nurglitch is going to write, but that's my message: playing to not lose isn't about style, but about realizing that the player who makes the fewest negative moves will win. No amount of brilliance can overcome a mistake properly exploited by your opponent.
24707
Post by: Hesperus
I know! I know! But I won't answer.
Here's a hint, though: Nurglitch gave it away in his first post.
4003
Post by: Nurglitch
Polonius:
I'm not saying that the rational person seeks to maximize per se, since defining rationality as consistently preferring more to less applies equally well to satisficing as it does to maximizing. We should probably come back to this later once the machinery has been developed a little more, but for now I'll suggest it's worth keeping in mind that we'd need to add special conditions to distinguish these goals in the terms of Hawk-Dove.
4003
Post by: Nurglitch
Here's the solution to the Dominant Strategy in the Sequentially Iterated Hawk-Dove (5) game.
I'll start again with Hawk-Dove. In Hawk-Dove, choosing Hawk is the dominant strategy because it is the best answer to whichever strategy the opposing player may play, scoring [3:2] against Dove and [1:0] against Hawk, in favour of Hawk each time.
If we consider Hawk-Dove as a game consisting of one iteration, we can extend the Hawk-Dove game to some finite number of iterations. This means that we can see the various end-states of the game and work backwards. In plain Hawk-Dove we only have to start from the last (and only) iteration, and we're done at Hawk. With no reciprocity to consider, merely rationality and common knowledge of the choices and costs involved, the choice is simple.
In the Definitely Iterated Hawk-Dove game, the version lasting five turns, we start at the last turn and work back. In the last turn there is likewise no reciprocity, while rationality and common knowledge hold, so the player should play Hawk. On the fourth turn, when one might otherwise worry about reciprocity of action on the fifth turn, one knows that the next turn ends the game, so it'll be Hawk. Knowing that the other player will play Hawk next turn regardless, the fourth turn reduces to the plain Hawk-Dove game. Using backwards induction you work back to turn 1, which is again Hawk, for the reasons discussed.
Because the ending is fixed, no reciprocity is available, since to play Dove at any point is to do worse than to play Hawk, and the dominant strategy is [H, H, H, H, H].
A strategy of [D, D, D, D, H] would score $1 to $13 in favour of the [H, H, H, H, H] strategy. In fact, the best you can do against it is $5 to $5, using itself. Therefore it is the dominant strategy. The same reason to play H on the last turn applies on each preceding turn, because you can work backwards from the fifth iteration, making the dominant strategy known.
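To make that arithmetic easy to check, here's a quick Python sketch (the per-turn payoffs are the ones from the opening post; the script is just my illustration, not part of the model):

# Score a pair of strategies over the five-turn Definitely Iterated
# Hawk-Dove. PAYOFF maps (player 1's choice, player 2's choice) to the
# dollars each takes home that turn.
PAYOFF = {('D', 'D'): (2, 2), ('D', 'H'): (0, 3),
          ('H', 'D'): (3, 0), ('H', 'H'): (1, 1)}

def score(p1, p2):
    """Sum each player's take over a sequence of simultaneous turns."""
    t1 = t2 = 0
    for a, b in zip(p1, p2):
        pay = PAYOFF[(a, b)]
        t1 += pay[0]
        t2 += pay[1]
    return t1, t2

print(score('DDDDH', 'HHHHH'))  # (1, 13): the $1 to $13 result above
print(score('HHHHH', 'HHHHH'))  # (5, 5): the best you can do against all-Hawk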
28444
Post by: DarknessEternal
Is this thread ever going to get to a point where it correlates to 40k?
4003
Post by: Nurglitch
DarknessEternal: It will, eventually. We're currently on the business of applying the game-theoretic concepts to sequential games, having first applied them to simultaneous games. If people would stay on topic and post about it rather than riding their personal hobby horses, this might go somewhere a little faster.
Now that everyone understands how to derive the dominant strategy in a finite sequential game, it's time to start building in the other concepts of reciprocity, resources, uncertainty, and so on. Reciprocity is the result of uncertainty with regard to the ending of the game, as knowing when the game ends makes Hawk the dominant strategy for both the Hawk-Dove game and the Iterated Hawk-Dove games. If it is not public knowledge when the game is going to end, and hence what the best strategy on the last turn is, then players have to go turn by turn instead of using backwards induction to determine the dominant strategy.
But before I go into that, it's important to discuss the concept of the game tree, in the terms of both the Iterated Hawk-Dove game and the Sequentially Iterated Hawk-Dove game. A game tree is essentially a flow-chart describing the options available to players at each turn or iteration, allowing players both to look ahead to the consequences of their choices and to look back. The game tree for an Iterated Hawk-Dove game of three iterations will have three nodes, with each node having two branches (Hawk and Dove), whereas the Sequentially Iterated Hawk-Dove game of three iterations will have six nodes, with each node having two branches (again, Hawk and Dove). Making a game tree will help people understand why Hawk is the dominant strategy for Definitely Iterated Hawk-Dove games, and discover the dominant strategy when the number of iterations is not definite. This is important because, as mentioned, standard 5th edition Warhammer 40,000 games end randomly and so iterate indefinitely.
Can anyone tell me how many nodes the game tree of Warhammer 40,000 will have? The branches are more complex, since each unit in an army has three sets of sequentially ordered choices (Move, Shoot, Assault), but we'll start building in resources (units) and uncertainty (dice) in a bit. Once these things have been discussed, we'll be able to discuss reciprocity in Warhammer 40,000 in more exact terms.
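For the toy games, the node and path counts are quick to sketch in Python (this just follows the node-and-branch counting above; the function and names are my own):

# One node per iteration in the simultaneous game, one per player turn
# in the sequential game; each node branches into Hawk or Dove.
def tree_stats(iterations, sequential=False):
    nodes = iterations * (2 if sequential else 1)
    paths = 2 ** nodes  # Hawk/Dove paths, at two branches per node
    return nodes, paths

print(tree_stats(3))                   # (3, 8): Iterated Hawk-Dove
print(tree_stats(3, sequential=True))  # (6, 64): Sequentially Iterated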
12157
Post by: DarkHound
If I'm reading this right, 40k would have at least 10 nodes (if we aren't counting deployment as a turn for each player), up to a maximum of 14 nodes.
11988
Post by: Dracos
I'm not sure if a node for 40k would be where each unit has a choice, or if it would be for each phase.
Come to think of it, I don't think it would make sense to base the nodes off the units since the game is played sequentially based on phases, therefore:
If the nodes are for each phase, and the branches are choices you can make for each unit, then there would be 3 nodes: Move, Shoot, Assault. The set of branches would be a complex array of units and all their possible actions.
edit: I forgot deployment, as above, so that has to be worked in somehow. However, while each of these 3 nodes repeats every player turn, deployment would be a singular node that is not repeated. So based on my 3 nodes above, it would be one alternating iteration of 1 node, then an indefinite alternating iteration of 3 nodes.
okay I was wrong!
4003
Post by: Nurglitch
Darkhound:
I think deployment counts as an iteration/turn since the players make choices that affect the rest of the game.
Excluding that, then, you are correct that a game will have between 10 and 14 nodes, corresponding to 5 to 7 game turns (10 to 14 player turns). The game ends automatically after 14, continues only on a 4+ after 12, and only on a 3+ after 10.
Including deployment, the game will have 12-16 nodes.
This dice-oriented uncertainty makes the game indefinitely iterated, although I'd suggest that we could also treat it as a definitely iterated game with uncertainty of ending sooner.
11988
Post by: Dracos
So it's only each turn that counts as a node, and not each phase?
4003
Post by: Nurglitch
Dracos:
Nodes are equivalent to each turn and reflect game states. Branches are equivalent to the choices that are made affecting the game state of the next turn, and hence link nodes.
Speaking more exactly, a game of Warhammer 40k will have between 12 and 16 sets of nodes, with each set of nodes containing unit states, as well as overall game states.
While uncertainty gets introduced by the dice, making an otherwise definite game indefinite, resources are tracked by node, and resources include units.
Edit: I suppose you could also divide up the nodes by the turn-states, since the turn sequence breaks the turn up into a set of sequential 'turns' that the player gets to take without alternating with the other player.
11988
Post by: Dracos
I thought that dividing it up into phases would have more value (although also more complexity) since each phase has a new board position (or game state).
edit: I guess if we do it this way you would end up with deployment being 2-4 nodes (normal deployment, then possibly Scouts and Infiltrators), then each player turn being 3 nodes, so the game overall would have 32-46 nodes.
21789
Post by: calypso2ts
Nurglitch wrote:
The plan is not to abstract Warhammer 40,000 to fit game theoretic principles, but to build a model of Warhammer 40,000 using game theoretic principles,
All models are abstractions, simplifications, and assumptions.
Nurglitch wrote:
Incidentally, by 'Hannibal', aren't you referring Pyrrhus? Hannibal didn't win all his battles to lose the war...
Hannibal crushed every army he faced in Italy; his strategy of crushing all opponents to force a surrender, however, was a losing strategy. The Romans would not surrender; instead they waited him out, knowing he could beat them in an open field but not by assaulting a city. Eventually he was forced to retreat without ever achieving his goals, even though he executed his strategy (the tactics) perfectly. A Pyrrhic victory is the same idea.
4003
Post by: Nurglitch
Dracos:
You'd be right. After all, the point of building this model is to have it reflect Warhammer 40,000 in all relevant ways, so that players can plug in their armies, missions, and so on, and calculate strategic equilibria.
Ordinary deployment, Infiltration, Scout moves, and so on would be branches, since they're the actions players take to bridge nodes. It's important that each node has an identical set of bridging decisions, and setting actions as nodes would confuse things, because actions connect nodes.
The Movement phase bridges a sub-node of any turn to the sub-node representing the Shooting phase, but the outcome of all of these actions is perpendicular, if you will, to the game state of the next turn.
So: Game (Tree), Player Turn (Node), Actions (Branches).
Node 1 (Player Turn 1)
Unit 1 (resource): Movement (branch) -> Shooting (branch) -> Assaulting (branch) -> Node 2.1, 2.2, 2.3.
Node 2.1
Node 2.2
etc
Even during the phases, actions are made in sequence: Movement is done unit by unit, shooting is unit by unit, and assault is ordered by Initiative.
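If it helps to see that structure as data, here's a rough sketch (the class and field names are my own invention, purely illustrative):

from dataclasses import dataclass, field

@dataclass
class Branch:
    """One unit-resource's Move -> Shoot -> Assault decisions for a turn."""
    unit: str
    movement: str
    shooting: str
    assault: str

@dataclass
class Node:
    """A player turn: a game state plus the branches leading out of it."""
    turn: int
    game_state: dict = field(default_factory=dict)
    branches: list = field(default_factory=list)    # Branch objects
    successors: list = field(default_factory=list)  # Nodes 2.1, 2.2, ...

node1 = Node(turn=1)
node1.branches.append(Branch('Unit 1', 'move 6"', 'shoot', 'assault'))
node1.successors.append(Node(turn=2))  # one successor node per outcome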
12157
Post by: DarkHound
Node 2.1-2.x are each the state of the board at the start of the next turn, so x is a near infinite number?
4003
Post by: Nurglitch
Deleted in Compliance with the Moderators
Darkhound:
The number of states of the game will be finite, and not really that big as big goes. The Hawk-Dove game is good for demonstrating game-theoretic concepts precisely because it is as small as it can possibly be while remaining a game rather than a decision problem.
Basically, if you take the number of game states as being relational rather than absolute, then you can cut them down significantly. In addition, thanks to the sequential ordering of the phases, or sub-nodes, each previous sub-node affects both the game state of the turn and the game state of the next sub-node.
Think of it like handling re-rolls statistically: they don't increase the potential outcomes, but they do modify the curve of whether those outcomes can be expected to happen.
In this we're aided by rules preventing moving infantry from firing heavy weapons, preventing units from shooting at more than one target (barring special rules...), and preventing units from charging targets that they didn't shoot.
So shooting-wise, we only really need to concern ourselves with three states, depending on the unit-resource and its target: nothing, the expected value, and the potential value.
So a unit of twenty Ork Shoota Boyz could charge a unit of ten Tactical Space Marines, getting 12 Orks attacking, and cause [0, 2.97, 36] casualties, provided they don't cause more than three casualties via shooting [0, 2.18, 40] and close the gap between them and the Tactical Marines by at least 5" in the Movement phase.
That choice to assault will be nested in the choice to shoot (risking shooting them out of charge range), and in the choice to move, and will be one of the options for addressing a different unit in this fashion.
In other words, because assaulting and shooting are directed at a single unit (again, barring special rules like Fire Control, Target Locks, and Power of the Machine Spirit), we can reduce the player's choice down from a massive amount of minutiae to the question of whether or not to address another resource in the game.
By 'address' here I mean 'act in relation', so addressing can mean moving in relation, shooting in relation, assaulting in relation, etc.
By phrasing the question in terms of 'resources', it can also cover resources such as objectives, and even reflexive address (a unit can address itself, as when the player considers what happens if it addresses no other unit: a unit of Grots going to ground in some ruins would be addressing themselves).
By making the choices a matter of addressing resources, we can parse a mass of detail down to match the choices facing players.
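For what it's worth, those [floor, expected, ceiling] triples can be generated mechanically. A minimal sketch (my assumed probabilities: Shoota Boyz hit on 5+ shooting and 4+ in combat, wound Marines on 4+, and a 3+ save fails 1/3 of the time; these inputs give 2.22 and 3.0, near but not exactly the 2.18 and 2.97 quoted above, so those figures presumably rest on slightly different assumptions):

# [floor, expected, ceiling] casualties for a pool of attacks.
def casualty_triple(attacks, p_hit, p_wound, p_unsaved):
    expected = attacks * p_hit * p_wound * p_unsaved
    return [0, round(expected, 2), attacks]

shooting = casualty_triple(40, 1/3, 1/2, 1/3)  # 20 Shoota Boyz, 2 shots each
assault = casualty_triple(36, 1/2, 1/2, 1/3)   # 12 Boyz at 3 attacks on the charge
print(shooting, assault)  # [0, 2.22, 40] [0, 3.0, 36]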
12157
Post by: DarkHound
So what's the application for all this?
11988
Post by: Dracos
I like how you guys are attacking the theoretical premise for building the model. You're not even able to prove the premise false, but detractors in this thread keep trying to apply the premise directly to 40k when it has been repeated ad nauseam that we (primarily Nurglitch, with input from the peanut gallery) are working up to it. Can't you just give it time and let us get there?
If you don't think it will work, then leave the thread alone.
BTW in chess you can't work backwards to make a best move. You have to calculate precisely going forward. You can have a strategy that involves getting pieces into a position, but the actual calculation always must be done forwards.
Strategy is your general plan, the "what". Tactics are how that plan is executed, the "how".
4003
Post by: Nurglitch
Darkhound:
The application is building a game theoretic model of Warhammer 40,000 to aid in strategic and tactical analysis.
Deleted in Compliance with Moderators
4003
Post by: Nurglitch
I figured it might help if people had a look at what was being made here.
24707
Post by: Hesperus
I'm assuming the model is going to take the uncertainty in 40k into account. The winning strategy will have different moves for each possible outcome. For instance, say part of the strategy is, "Unit X shoots unit Y." You'll then have, "If Y loses 0 models, do A. If Y loses 1 model, do B. If Y loses 2 models, do C... If Y is wiped out, do [n]." The strategy is going to be ridiculously complex, but the theory can accommodate such complexity. I'm very interested to see how Nurglitch plans to make this practical, though. Any complete strategy is going to be insanely complex: as I suggested, you'll need different tactics based on every possible outcome of every shooting or assault action. You'll also need different sets of tactics for each possible enemy turn, given each of your possible moves.
Let's try an extremely simplified game. You have 2 units, the enemy has 2 units, and you can only move forward or back 6" with your units (and run in the same directions). Say your opponent has first turn, and you have no control over deployment. Pretend you're on a huge table and will never have the opportunity to shoot or assault each other. How many possible outcomes do we have at the end of the first turn? With each of his units, your opponent can: (1) not move at all, (2) move forward and run forward 1", (3) move forward and run forward 2", ... (7) move forward and run forward 6", (8) move backward and not run, (9) move backward and run backward 1", ... (14) move backward and run backward 6". I'll ignore moving forward and running backward, moving backward and running forward, and not moving but running in either direction, because they're unlikely to do those. So your opponent has 14^2 possible moves. That's 196 possible initial states for you. You have the same options, so the total possible number of states is 14^4 = 38,416 possible outcomes.
Let's say the game always ends on the 5th turn. There are 38,416^5 possible outcomes after the 5th turn. That's roughly 8.4 x 10^22 outcomes. How are you going to model a real game of 40k? You'll have googols of possible outcomes.
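The combinatorics are quick to sanity-check (a back-of-envelope Python sketch of the counts above):

moves_per_unit = 14                        # the 14 move/run options listed
states_per_game_turn = moves_per_unit ** 4 # two units per side, both sides
print(states_per_game_turn)                # 38416
print(f"{states_per_game_turn ** 5:.3e}")  # ~8.4e+22 outcomes after 5 turns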
4003
Post by: Nurglitch
Deleted in Compliance with the Moderators
So I'd like to take this opportunity to beg people reading this thread:
If you want to be involved in this thread, please make sure you have something accurate and useful to say about the interaction of game theory with Warhammer 40,000. If you don't agree with the premises of game theory, or with its application to Warhammer 40,000, I'm begging you not to post in this thread.
Automatically Appended Next Post:
Hesperus:
That's a good question (namely: that there are too damn many permutations of absolute game states on any given turn), particularly when a benefit of game theory is supposed to be simplifying large sets of information.
Fortunately the game itself already parses these states down to a series of decisions made by players. This means that by following how the elements of the game are stacked one atop the other, we can chart a game tree through those decisions.
In a word, we form information sets from the value of relations in the game.
The decision for Movement for any unit is [direction, distance, points of coherency]. The direction is its address, which can be defined in relation to other resources (units are resources; so are terrain, objectives, and the board edges); the distance is the value in relation to the address; and points of coherency is the value in relation to being addressed by enemy fire while in that formation.
Direction and speed will affect the Shooting phase by limiting what weapons the unit can fire, and what targets it can address if it does fire. They will also affect the Assault phase: whether and where the unit can charge opponents. The value in both shooting and assault is expressed as the set of worst, average, and best values in casualties inflicted. This is where traditional Mathhammer-as-statistics goes (as well as all the other instances where dice are used to determine the next step or value).
I hope you'll agree, then, that the game tree players need to be aware of should match exactly the game tree described by the game rules. The values will range from game to game, given terrain, size of army, etc., but the mechanics are the same (until we plug in special rules). The difference is that for the model we tag the values of player decisions algebraically, in order to discuss them with mathematical rigour and therefore arrive at certifiably valid conclusions about the strategies and tactics available to players.
Here's an example of these concepts applied to Hawk-Dove:
Game Turn 1 Player Turn 1 & Game Turn 1 Player Turn 2
Player 1 & Player 2
DD[2,2]
DH[0,3]
HD[3,0]
HH[1,1]
Player 1 & Player 2 both know the calculation for domination:
If XX[bb] + XY[ba] > YY[aa] + YX[ab] then Domination.
Similarly, the decision to shoot at unit A rather than unit B in Warhammer 40,000 may have a dominant strategy to it, such that we can single out some property or set of properties and set them to one side as 'no-brainers'. This should also winnow the number of conditional decisions, and therefore of live options, facing any player in any game.
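As a sketch of that calculation (the dictionary and function are mine; strict dominance is checked against every possible reply, which also satisfies the sum test above):

# PAYOFF maps (my choice, their choice) to my take.
PAYOFF = {('D', 'D'): 2, ('D', 'H'): 0, ('H', 'D'): 3, ('H', 'H'): 1}

def dominates(x, y):
    """True if x strictly beats y against every possible reply."""
    return all(PAYOFF[(x, z)] > PAYOFF[(y, z)] for z in ('D', 'H'))

# The sum test above: XX + XY > YY + YX, i.e. 1 + 3 > 2 + 0.
print(PAYOFF[('H', 'H')] + PAYOFF[('H', 'D')] >
      PAYOFF[('D', 'D')] + PAYOFF[('D', 'H')])  # True
print(dominates('H', 'D'))  # True: Hawk dominates Dove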
12804
Post by: Cpt. Icanus
I just want to bring up a point which may make the Prisoner's Dilemma (or Hawk-Dove, as you call it) seem unsuited: in a PD, neither player is trying to get a better payoff than the other player, only the best possible payoff for themselves. (Which still allows for the PD as a model in principle anyway, so I guess I'm just being a smartass.)
If this has come up on the former pages, I beg your pardon.
Otherwise I find this a great go at out-of-game theory for in-game decision-making! Admirable work.
4003
Post by: Nurglitch
I think I established that the blunt application of Hawk-Dove to Warhammer 40,000 wasn't going to work on page 1, and that I was using it to illustrate game theoretic concepts. I have since moved on.
It's probably worth noting why I prefer Hawk-Dove to the Prisoner's Dilemma, for rhetorical reasons. In the Prisoner's Dilemma you have the option to co-operate with your opponent (Dove) or to defect from cooperation with your opponent (Hawk). The problem is that this sounds a lot like the goodwill and retribution of reciprocity, which happen over a sequence of nodes, not simultaneously within them.
That's why I pointed out earlier that phases can't be nodes: there's no opportunity for reciprocity within them, despite the opportunity for defection and cooperation, which can lead people to over-generalize conclusions from single-iteration games to multiple-iteration games.
In one sense, then, tactics in Warhammer 40,000 are a list sequencing the actions of units in the phase, intersecting another list sequencing the actions of the same units in the turn. Strategies would be the index of action between an iteration/node and the iterations/nodes that follow it. This sequencing also lets us pare down all the permutations to combinations.
To hazard a conjecture, on which I'd be very happy to hear some input: I'd suggest that to some degree target selection will involve something like the reciprocity mechanism we first noticed in the Indefinitely Iterated Hawk-Dove. I think this is because some moves are trade-offs between exposing oneself to enemy retribution and gaining the payoff of killing them first.
24707
Post by: Hesperus
Nah, I don't think it's a reciprocity mechanism. You're not going to garner goodwill by refraining from killing a guy's Rhino. The fact that you'll expose a unit to enemy fire will just affect the expected value of the move: [value of damage]*[chance of damage] - [value of unit/of part of unit]*[chance of damage to unit].
4003
Post by: Nurglitch
I think it's more so the other kind of reciprocity, retaliation: since you're trying to kill the other guy, he's also risking himself to kill you. There's value to shooting first if you get lucky. Also, whether you get lucky or unlucky, there may be a cost.
4776
Post by: scuddman
I'm not sure why they were deleted, but my last post referenced game theory links about incorporating uncertainty into game theory.
I think instead of deleting or censoring posters, you should respond to your criticism. This is how models become better.
Considering that other theorists took Nash's equilibrium and added the concept of uncertainty to it, I think there is no reason why your model shouldn't incorporate such elements.
Also, if I disprove a bunch of your basic premises, it is on you as a model maker to explain why your model is still useful despite its shortcomings, and to acknowledge those shortcomings. No model is perfect. I don't expect it to be... but when I say your model isn't robust enough, and it isn't robust enough to be useful, it's on YOU to fix it. Shoving the criticism under the carpet doesn't make it less true.
22547
Post by: ChrisCP
? I've never heard of posts being deleted on dakka, it's usually giant red text...
4003
Post by: Nurglitch
scuddman:
If you have any objections to the premises of this thread, please start your own thread on that subject.
Back on topic: the model does incorporate an account of uncertainty. Three of them, actually. See my post on p.2 that starts with a discussion of Ken Binmore's handy little book.
In terms of uncertainty, the three accounts deal with (1) the random element of dice, (2) the lack of complete information due to the random end-game, and (3) the finitude of resources. Each game node accounts for this uncertainty by valuing actions by both potential and expected value.
It's probably important to distinguish uncertainty from risk here. Risk here is what you risk, so that being less willing to accept additional risk makes one risk-averse.
21737
Post by: murdog
EvilEggCracker wrote:So... Essentially... we should try not to lose?
Withering hail of lasguns > Game Theory.
rofl
Sorry I couldn't help it - it's pretty funny, if you don't take it seriously.
Good Thread Nurgs.
4003
Post by: Nurglitch
Now that the game nodes are up, I thought I might work backwards a bit and talk about deployment, since deployment affects the game in several important ways.
I'd imagine a similar structure for deployment as for the game turn, with normal deployment taking the place of movement, then infiltration deployment, and finally Scout moves. The interesting difference is that dice are involved in determining who deploys first and who has the first turn.
I think we've already seen people subconsciously crunching the numbers for this, as we've seen certain strategic equilibria in action: enemy army entirely contained in Drop Pods? Reserve all your forces to prevent a first strike.
The dice involved in determining who deploys first are complicated because it isn't an automatic you-win-you-go-first situation. The player who wins the toss gets to decide whether they go first or second. Likewise, the player going second can decide to try to seize the initiative. Which means you can have:
(A) Player 1 Deploys 1st, 1st Turn, Player 2 Deploys 2nd, 2nd Turn (5/6)
(B) Player 1 Deploys 1st, 2nd Turn, Player 2 Deploys 2nd, 1st Turn (1/6)
There's something similar going on with Turns 5, 6, and 7, though: the game ends on a 2 or less after Turn 5, a 3 or less after Turn 6, and automatically after Turn 7.
29514
Post by: doctorludo
OK,
There's a lot to go through here, so I may have missed something obvious, but I have a bit of a problem with the argument as it's laid out in the first post.
The outcome of 40k is binary: a win or a loss. You can't grade it, unless you're playing a campaign or going for the biggest win.
So, taking a hypothetical example:
Dove-Dove = $10 each.
Hawk-Dove = $7 to the hawk, $4 to the dove.
Dove-Dove is optimal if value is what matters. But in 40k, value is irrelevant. The hawk strategy wins because 7 > 4.
To translate to 40k: both players holding 2 objectives each is fine, but one player holding 1 objective whilst their opponent holds none is better for the player holding 1.
This is, of course, why 40k is unrealistic. In reality, preserving your own force would obviously be important.
And, if I've missed something really basic, I apologise.
4003
Post by: Nurglitch
Yes, you're missing the fact that we're not trying to interpret Warhammer 40,000 purely in the context of the Hawk-Dove game, or whatever version you're talking about. The Hawk-Dove game is not the be-all and end-all of Game Theory. In other words: all the posts after the first one.
In fact the Hawk-Dove game is a really bad model for Warhammer 40,000, despite being a nice introduction to game theory. That's why I introduced all the other variants of the Hawk-Dove game: to show how the same concepts apply but the results change, much as arithmetic works the same with different numbers, but the results change.
Also, "realism" is irrelevant in this thread, since it's about applying one model to another, which is good because it means that, unlike empirical science for example, we can have a complete model of Warhammer 40,000.
At this point I'm busy labelling parts of the 40k game structure with their game-theoretic names (game turn = node, for example) so we can have a game tree for Warhammer 40,000 and start discussing particular applications of tactics and strategies.
29514
Post by: doctorludo
So, for example, you want to know how any single decision in a game of 40k could be modelled as a node in a decision tree?
4003
Post by: Nurglitch
doctorludo:
No, not quite. As you may have noticed from reading the thread, a node is equal to a single game turn in Warhammer 40,000.
Quite the opposite, really, if we consider game theory as a top-down perspective on decisions, since I want to model the set of decisions constituting a game of Warhammer 40,000 as a game tree.
24707
Post by: Hesperus
I'm a little confused. Are you modeling each player turn as a node, or each opposing player's turn as a node, or what? I don't understand why a player would treat his own turn as a single node, since he makes a sequence of separate decisions during that turn. But maybe I missed something: is it a way to simplify the decisions, so we don't have an inconceivable number of possible outcomes?
4003
Post by: Nurglitch
Hesperus:
Yes, player turns are interpreted as nodes. The player treats his own turn as a single node because each player turn is made in anticipation of the following player turn.
Remember also that the player-turn decisions depend upon each other: you can only assault the unit that you shot at in that turn's Shooting phase, for example. The node is thus broken up by the sequence of the decisions, as well as by the units whose actions need deciding.
Nodes are connected by actions, in this case the sets of actions that are decided on during the turn sequence. Since the actions are relational rather than absolute, there are a lot fewer of them.
But there are a lot of decisions to be made even if decisions are made relationally. So we can construct the following parameters to help players understand the potential reciprocity their action can prompt in the other player: the upper limit, the lower limit, and the average utility of any action. With these three parameters we can make reasonable inferences about the upper, lower, and average utility parameters of our opponent in the following turn.
Basically the game node considers both player turns, the one in play and the hypothetical one next turn. Because you're indexing potential states of your own choices against the potential states of your opponent's choices in the following turn, you get a much more limited game tree.
In fact, I think it's limited enough to treat the game as a 7x7 payoff grid, corresponding to its maximum 7 game turns.
Edited: Corrected "average utility" for 'expected value'.
24707
Post by: Hesperus
Okay, I'm still not sure how you're going to do the estimating, but 7x7 isn't going to work. The turns are sequential, not simultaneous, so to depict the game in normal form you have to have a cell for each combination of choices, right? So if, for example, a player could pick option 1 or option 2 in each turn, you'd need a cell for [1,1,1,1,1,1,1], [1,1,1,1,1,1,2], [1,1,1,1,1,2,1], and so on. I assume you'll provide more than two options for each player each turn, but even if you don't, you'll need a 128x128 table (2^7 strategies per player).
I find extensive form more intuitive for sequential games, but I guess that's just preference.
4003
Post by: Nurglitch
I agree that the payoff grid is more suited to simultaneous games and the game tree to sequential games. But think of each actual player turn as a cell, indexing the player's choices against her opponent's potential retaliation, with the game ordered from left to right and top to bottom, so that movement right across the cells towards the last cells of the game favours the player on the x axis (call him Player 1), and movement down across the cells towards the last cells of the game favours the player on the y axis (call him Player 2).
Think of it like a hybrid of the two, with cells (1, 6) and (6, 1) being something like tabling one's opponent, and cell (6, 6) being a tie. The movement between the cells on the x,y plane is like that of a game tree, while within the cells, within the nodes, we have something more like a traditional payoff table, where a player's potential actions are compared to the other player's potential retaliation.
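Purely as my own illustration of that hybrid (the layout and names are guesses at the structure described here, not an agreed notation):

# A 7x7 grid of game-turn cells. Each cell holds a small payoff table
# indexing a player's actions against the opponent's potential
# retaliation; resolving turns moves play right (toward Player 1's wins)
# or down (toward Player 2's wins) across the grid.
TURNS = 7
grid = [[{} for _ in range(TURNS)] for _ in range(TURNS)]

# e.g. within the first cell, index an action against a likely reply,
# storing (my expected casualties inflicted, his expected retaliation):
grid[0][0][('advance on objective', 'stand and shoot')] = (1.67, 1.67)
print(grid[0][0])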
29514
Post by: doctorludo
OK, I'm going to stick my neck out here and say that I think, from one approach, this is impossible unless the nodes are simplified into abstractions (e.g. play aggressively, play defensively, etc.).
But, before you shoot me down: the principles of decision theory have a lot to add to wargaming. I am aware that game theory and decision theory are not one and the same thing, but they share concepts such as a measure of utility, and the use of statistics and probability to model uncertainty.
My problem is this: at the most detailed level, each node would have to take into account what the player does with each model, in terms of movement, shooting, assault positioning, etc. Furthermore, it must take into account wound allocation and enemy equipment in order to make sense of this. So a squad of 5 Orks fighting a Space Marine Devastator squad would have to decide whether to get into charge range, whether to shoot, and whether to Waaagh!, and the Space Marine player would decide which model to remove. I suspect that even a fairly simple game of 40k would be beyond the processing power of a home computer to model in all its iterations.
BUT, that's probably not what you're proposing. We could simplify the grid by allowing a certain number of actions: "Close with nearest enemy unit", "move towards nearest cover", "shoot nearest enemy unit", etc. Such simplifications could produce an estimate of optimal decision-making in a given situation, sacrificing accuracy for feasibility. There would still be an awful lot of branches at each node, but it would be possible. This is akin to heuristic reasoning in humans.
So, do we need to consider the level of simplification which is necessary? What are the decisions for each node? Can we summarise the table at the end of the preceding turn to influence optimal decisions?
Is this more akin to what you are proposing?
4003
Post by: Nurglitch
doctorludo:
I became aware of game theory through decision theory (and mainly through the horde of fallacies my Decision Theory prof was making in regard to game theory), so I grok where you're coming from.
I discussed this in private messages with another poster and figure I'll answer by quoting some stuff I wrote in those messages (slightly edited):
Nurglitch wrote:Basically the move is from all possible game states to all possible decision states, parsing by relevance, and from all possible decision states to the most likely, parsing by reliability, and adding the uppermost and lowermost boundaries so players can understand where their decision can be expected to land on the bell curve. A lower bound of 0, and upper bound of 20, and an average utility of 2.33 means that anything above 2.33 is gravy.
Given 2.33 as the average, you can then compute the value of retaliation with 0.66 likelihood of two casualties, and 0.33 likelihood of three casualties.
So basically instead of all possible states of the game, we're only considering the parts that we can reasonably expect while retaining our capacity to resolve further detail.
Take movement, for example. While the number of outcome-states for a moving infantry unit is very large, it can be crunched down into three things: Its relation to other units, its relation to terrain, and its relation to itself.
29514
Post by: doctorludo
Fair enough...
My decision theory prof was clear enough, but my experience is with descriptive theories, and a basic understanding of the normative and prescriptive models.
I'm working in healthcare, where the limits of normative/prescriptive approaches are an important part of application.
But we seem to be discussing different subjects; I'll read with interest.
4003
Post by: Nurglitch
doctorludo:
Let me put it this way: a game tree of the player decisions in the game is going to be much more user-friendly than a game tree of all possible game states. Similarly, the game tree is going to be friendlier still if it accounts for these decisions relationally rather than absolutely. It'll also describe the game more accurately by sorting features by relevance.
29514
Post by: doctorludo
Yes, I understand that. You aren't proposing the model I was criticising. This sentence is what seems to sum it up:
"Take movement, for example. While the number of outcome-states for a moving infantry unit is very large, it can be crunched down into three things: Its relation to other units, its relation to terrain, and its relation to itself."
So, if I understand correctly, the decisions to move aren't based on the utility of individual discrete positions, but the position as expressed as a function of these elements, which allows for mathematical operation and/or ranking.
4003
Post by: Nurglitch
doctorludo:
Yup. The notion being to capitalize on the game theoretic approach of thinking in sets of outcomes rather than getting bogged down in particulars and counter-factuals.
16387
Post by: Manchu
As a friendly reminder, this thread is about a specific approach to Math Hammer and NOT about the validity of Math Hammer in general. Off-topic posts will be deleted.
32377
Post by: Tread'Ead
I'm very interested in this thread and have been passively reading it.
I'd like to point out that it is possible for a game of 40k to end before turn 5, and that possibility should be included in the model.
The extreme end of aggressive play is what can lead to tabling the other player before turn 5. The risks in doing so will be high, but ending the game early is the maximum payoff.
4003
Post by: Nurglitch
Tread'Ead:
That's a good point and one that I hadn't thought of. Thanks for bringing it up!
4003
Post by: Nurglitch
To follow up on Tread'Ead's point, I think it's interesting that in every game of 40k there are two ways to win. The first way to win will be either capturing more objectives than the other person or scoring more kill points, and the second way to win will be annihilating your opponent.
A player only has to accomplish one of these goals (tournaments that score on weird combinations notwithstanding, and stuff we'll be able to adapt anyway). That means there may be points in the game where the strategy of annihilating your opponent will be a more reliable route to winning the game than either scoring objectives or kill points.
Likewise, in the Movement, Shooting, and Assault phases of a player turn there are going to be actions that become 'live' as the turn proceeds. I expect it'll be important for an effective strategy to account for these foreshortenings and twists of the game.
If anyone is wondering what I'm talking about, consider the situation of a unit's shooting target removing itself from charge range of that unit via selective casualty removal. Similarly, if one has killed all of the priority targets for that turn, it may be useful for units that haven't shot anything yet to run or engage Star Engines.
Now we're getting into the Warhammer territory that people are more familiar with, the problems of target priority, the issue of wasting a unit's firepower against certain targets, and the choice of using Rapid Fire weapons instead of charging.
4003
Post by: Nurglitch
Okay, so simple problem for the sake of an example. Suppose you have two Imperial Guard Infantry squads facing off on a 2'x2' board with an objective at the exact centre. Is there a dominant strategy? What is it?
24707
Post by: Hesperus
Random game length?
12157
Post by: DarkHound
Nurglitch wrote:Okay, so simple problem for the sake of an example. Suppose you have two Imperial Guard Infantry squads facing off on a 2'x2' board with an objective at the exact centre. Is there a dominant strategy? What is it?
Stand and shoot him until he is dead, then worry about the objective. If you move on the objective before he does, he gets a free round to shoot at you. If both of you move on the objective, you contest it and the winner is down to the dice (an irrational risk). If he stands and shoots, then you are guaranteed at least the tie.
4003
Post by: Nurglitch
Hesperus:
Good point: Random game length, miniaturized Spearhead deployment to fit the 2'x2' board.
24707
Post by: Hesperus
Let's see...
I guess we assume you start deployed 12" from the objective.
Regardless of where your opponent is, it's best for you to be within 3" of the objective when the game ends. So you need to be there at the end of turn 5.
To be certain of that, you need to be <10" away at the end of turn 4 (thanks to run). You can be a little further away if you're willing to gamble. This gets a little complicated, though: If you're 9" or more away, you won't be in position to rapid fire your opponent in turn 5, assuming he's close enough. For player 2, the opponent will definitely be in (move and) rapid fire range on turn 5. However, if Player 1 doesn't have to run on turn 5 and Player 2 is within 9", Player 1 can rapid fire Player 2 on turn 5. That means it's best for Player 1 to be within 9" at the end of turn 3, but best for Player 2 to be just outside 9".
That means it's best for Player 1 to move just inside 9" away in turn 4, and best for Player 2 to move to just outside 9" away on turn 4.
Obviously, neither side wants to get shot first, so neither should move forward in turns 1-3.
The game will tie unless it goes to a sixth turn, in which case Player 1 will probably win (he can charge Player 2 and probably break him).
And this causes a problem. If Player 2 has a 50% shot at a tie, he may try a different strategy: don't move at all and take 2+ turns of shooting at Player 1. He has some non-zero chance of breaking him and winning that way. I don't feel like doing the math, and I don't know how much we're valuing a win compared to a tie, but if the numbers come out that way (and Player 2 is risk-neutral), that's his best strategy.
If it is, we'll have to calculate Player 1's payoffs, given his chances of losing/winning if he moves forward and Player 2 stands and shoots. It may be that the best strategy for both players is to not move at all and take the tie. Depends on the payoffs.
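The move-and-run part, at least, is easy to tabulate. A small sketch (assuming the unit always runs and never shoots, covering 6" plus d6" a turn):

from itertools import product

def p_cover(distance, turns):
    """Chance of covering `distance` inches in `turns` move-and-run turns."""
    rolls = list(product(range(1, 7), repeat=turns))
    made = sum(1 for r in rolls if 6 * turns + sum(r) >= distance)
    return made / len(rolls)

print(p_cover(7, 1))  # 1.0: from just inside 10", 7" of travel is guaranteed
print(p_cover(9, 1))  # 0.667: from 12" out, closing to within 3" needs a 3+ run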
4003
Post by: Nurglitch
Let's call a loss "0", a tie "1", and a win "2" for the sake of reckoning end-game values. These are zero-sum, so there's two points to go around.
Working backwards:
Game Turn 7 ends automatically
Game Turn 6 ends on 3+
Game Turn 5 ends on 4
Game Turn 4
Game Turn 3
Game Turn 2
Game Turn 1
Deployment (5/6 Player 1 deploys first, acts first, 1/6 Player 1 deploys first, acts second)
Deployment-wise, taking a Spearhead-style of deployment at least 12" from the Objective they can deploy within 17" of each other, and beyond 24". So they can either deploy so that the player acting first gets the first shot, or they can deploy so that no-one can shoot first.
So the value of deploying and shooting is [Enemy Squad, 0, 1.68, 10][5/6] or [Enemy Squad, 0, 1.34, 8][1/6] and the value of deploying out of range is 0, and the value of deploying in range and moving is some negative value or -[Enemy Squad, 0, 1.68, 10].
I'll continue this in a bit...
34565
Post by: TheRedArmy
I read over this entire thread last night, and I am watching with excited interest. I really think this model has the potential to become useful to those that know the model, know how to comprehend the model, and know how to utilize the model in actual game play.
What I'd like to do is summarize the thread thus far, for people who might be getting lost (there are quite a few), or people who want the easy way out (also quite a few). I could very well be wrong in my assumptions thus far, but I'd like to try to make it simpler (there's no real simple way of understanding this in enough detail to be useful, so let's at least make clear what it does). Again, Nurglitch, please correct me on anything that I might be wrong about. No sense in making more confusion.
The Hawk-Dove (or Prisoner's Dilemma) game: this is used primarily to illustrate a simple model. With two options being chosen simultaneously, the option that would benefit both players the most (either $2 or 6 months apiece) is virtually never chosen, because no matter what the opponent does, it is better to go "hawk" or "squeal". While both players end up with less than if both had co-operated (two hawks get $1 apiece while two squealers get 10 years), it's the only logical move, even though you end up with less.
I believe this applies to Warhammer in the sense that it's the potential outcomes of actions that need to be weighed when considering what to do.
The node and branch system: a system by which we can break the game down into reasonable parts in order to analyze possible endings to the game, possible board positions, possible unit differences, and possible advantages we can glean from our current position. The key is perhaps that they deal in "possibilities". It's possible that 3 railguns will explode 3 Leman Russ tanks in a single phase, but I probably shouldn't expect that to happen. However, this would cover what I should do if I do happen to get into that type of situation, or if the railguns do nothing, or if I get something in between (the most likely case).
What this is supposed to do: this is a system by which you can look at the game and see what we can expect to happen in the future by working backwards from the expected end of the game. By working backwards from the ending we want (be it draw or win) to our current position, we can follow the expected sets of moves for each player and see how to get to that ending.
Let's assume that for a given game of Warhammer there are 30 possible reasonable endings to a given game turn. That means the total number of possible turn endings is between 30^5 and 30^7. That's between 24,300,000 and 21,870,000,000 possible board endings for a standard game of Warhammer. While that seems daunting (and in reality it is much more), I think the system Nurglitch is attempting to lay out here has the potential to simplify that to the point where it can still be accurate, useful, and detailed (enough).
Again, Nurglitch, please correct me if I am misinforming people.
4003
Post by: Nurglitch
TheRedArmy:
Looks accurate to me. Thanks!
Speaking of which, I should apologize for not continuing where I left off earlier, but I'm trying to fight the urge to procrastinate as the end of term heaves into view with its due dates and finals and other stuff. I'll get back to it unless someone else completes it first.
34565
Post by: TheRedArmy
Nurglitch wrote:TheRedArmy:
Looks accurate to me. Thanks!
Speaking of which, I should apologize for not continuing where I left off earlier, but I'm trying to fight the urge to procrastinate as the end of term heaves into view with its due dates and finals and other stuff. I'll get back to it unless someone else completes it first.
Glad to see that I actually understand the material. And no one can blame you for end-of-term tests/papers/BS. Do what you need to do.
4776
Post by: scuddman
Nurglitch wrote:Let's call a loss "0", a tie "1", and a win "2" for the sake of reckoning end-game values. These are zero-sum, so there's two points to go around.
Working backwards:
Game Turn 7 ends automatically
Game Turn 6 ends on 3+
Game Turn 5 ends on 4
Game Turn 4
Game Turn 3
Game Turn 2
Game Turn 1
Deployment (5/6 Player 1 deploys first, acts first, 1/6 Player 1 deploys first, acts second)
Deployment-wise, taking a Spearhead-style of deployment at least 12" from the Objective they can deploy within 17" of each other, and beyond 24". So they can either deploy so that the player acting first gets the first shot, or they can deploy so that no-one can shoot first.
So the value of deploying and shooting is [Enemy Squad, 0, 1.68, 10][5/6] or [Enemy Squad, 0, 1.34, 8][1/6] and the value of deploying out of range is 0, and the value of deploying in range and moving is some negative value or -[Enemy Squad, 0, 1.68, 10].
I'll continue this in a bit...
Almost. The game ends on turn 5 on a 1 or 2, and continues on a 3+.
The game ends on turn 6 on a 1, 2, or 3, and continues on a 4+.
There is also the weirdness that the game ends when one side completely obliterates the other, which in theory can happen on any of the turns between 1 and 7.
Regardless of what I just said, since in the grand scheme such little things are trivial (although Nurglitch doesn't explicitly state his assumptions, which I think is bad), the premise works.
However, there are 3 scenarios in the main rulebook, so the values change depending on the scenario when you weigh them, but all you need to do is adjust the weights <- so I guess this is where Nurglitch is going with this.
"Okay, so simple problem for the sake of an example. Suppose you have two Imperial Guard Infantry squads facing off on a 2'x2' board with an objective at the exact centre. Is there a dominant strategy? What is it? Random game length, spearhead deployment" I'll assume for simplicity, that I must deploy at least 6" away from the objective.
It's a good simple example, so if I'm thinking about it, the strategy begins with the roll of the die to see who deploys first. If I make the assumption that I have 12" deployment and am allowed to deploy on the objective,
If I am going first then I will deploy within scoring of the objective (I can move 6 to score) while being as far back as possible. My thinking is that my range essentially covers most of the board, I'm already scoring the moment I move (thus I'm winning), and I'm shooting first. Since I begin with both a positional and numerical advantage (since I shoot first probably), I want to buy as much time for myself to shoot and hold the objective and maintain the status quo.
If I am going 2nd, I recognize that my opponent should do the above, so the first thing I check is to see if he has done so. In the event he has not, I would deploy accordingly. If my opponent has done the above, he intends to get into a shooting standoff with me. I know at this point he has numerical shooting advantage if things go his way, so I look for some other way to gain an advantage (i.e. hth, cover, etc) If I can deploy within assault range, it is very possible for me to win if I steal the initiative or he has a bad run of the dice on the first turn since charging gives a numerical advantage in hth.
Why wouldn't I stand off and get into a shoot war when going 2nd? Two reasons.
1. The most likely "other" outcome to this strategy is a draw. If my opponent doesn't win, odds are I get a draw. If we're assuming a competitive tournament, I don't value a draw nearly as much as I do a win, so it is to my favor to maximize chance of winning, not drawing.
2. Pressing forward gives me both the advantage of a shot at disrupting his strategy, and me having a huge numerical advantage in the event I steal the initiative. It also has the highest chance of losing (i.e., most of the time), but in this particular case of going 2nd, it also has the highest chance of winning since this strategy almost never becomes a draw.
[Enemy Squad, 0, 1.68, 10][5/6] <- Please notate what you're writing.
I assume 0 is models you lose, 1.68 is models enemy lose (although the number should be 1.67, not 1.68), 10 is the number of models the enemy has, and 5/6 is the probability your opponent does not steal the initiative.
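For reference, that 1.67 is easy to reproduce (a sketch under the usual assumptions: 10 lasgun shots outside rapid fire range, BS3 hitting on 4+, S3 wounding T3 on 4+, and a 5+ save failing 2/3 of the time):

shots, p_hit, p_wound, p_unsaved = 10, 1/2, 1/2, 2/3
expected = shots * p_hit * p_wound * p_unsaved
print([0, round(expected, 2), shots])  # [0, 1.67, 10]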
25081
Post by: Lysenis
NO NO, don't stop!!!!
All of this is pretty mind-bending, but so far it has been a HUGE help in explaining game theory to people.
The thought that Hawk-Dove is like 40k in one aspect, and yet on another look is not, is most interesting. The idea of nodes applies a sense of reasoning behind the madness, and the overall randomness of the dice (which some may argue controls it all) is just mind-blowing and a major headache at the same time.
So please do not stop this thread; I need more input!
5394
Post by: reds8n
Thread is being locked due to thread necromancy.