Ambiguous Decision Making

Discussion in 'Game Design' started by Erenan, May 15, 2014.

  1. Erenan

    Erenan Well-Known Member

    In the past few days, I have been doing a lot of thinking about the ontology of interactive systems, and part of that has been concerned with ambiguous decision making. In the interest of not being insulated too much inside my head, I thought it might be worthwhile to create a thread for discussing its finer details. We've had some discussions about it before, but as far as I can recall, we've never really had an in-depth discussion concerning what precisely decision making entails (except maybe here and here, but neither thread yielded much conversation). Necessary and sufficient conditions and logical implications and all that. Here's how I understand it at the present moment:

    Ambiguous decisions:
    1. Require an agent to select one of two or more options.
    2. Yield irreversible consequences with respect to achieving an unambiguously defined goal.
    3. Are made under circumstances in which the agent's certainty with respect to the consequences is greater than 0% and less than 100%.
    Some consequences of this definition:
    • Ambiguous decision making implies the existence of both win conditions and loss conditions. That is, ambiguous decision making can only occur in a contest. There is no such thing as a toy or a puzzle that involves ambiguous decision making. Games are a proper subset of contests.
    • Ambiguous decision making implies that a true game-end value hierarchy cannot exist. That is, 2nd place, 3rd place, and so on do not reliably and accurately measure the quality of ambiguous decision making beyond a binary success/failure. We discussed this here.
    • As a consequence of point 3, deterministic systems of perfect information (e.g. Chess, Go, Othello, Tic-Tac-Toe, etc.) are games only insofar as the agent is not aware of a solution*. If a solution is known, the certainty referred to in point 3 is at 100%, and therefore the "decisions" made are not ambiguous at all. As games are defined as being contests of ambiguous decision making, any contest involving only "decisions" in which agents simply perform the actions already known to be the correct moves for achieving the win conditions is not a game. Therefore, at least in the context of deterministic games of perfect information, ambiguous decision making (and therefore also deterministic perfect information games) constitutively depends upon more than the details of the system. It depends also upon characteristics of the agent, specifically the agent lacking the knowledge of a perfect set of optimal moves.
    Does anyone disagree with any of the above or agree but have something to add to it? Is there a way that the above could be worded better? Is there something I forgot? Any glaring errors that I have overlooked? Anyone agree with any of the above consequences and care to suggest a formal proof thereof?

    One point I am presently very unsure of is the implications of chance and imperfect information on the third consequence. When outcomes are uncertain as a result of randomness or hidden information rather than on just a limited knowledge of the decision tree, we often (or always?) get risk management scenarios rather than just pure tactical decisions. When this is true, is the level of certainty in point 3 always less than 100%? You could theoretically be 100% certain about the probabilities of the possible outcomes. Does this necessarily generate an optimal move, and if so, then does this render the decision-making non-ambiguous?

    * There is an avenue of thought in which I qualify this statement, but I think at the present moment it's beyond the scope of this thread since it has more to do with general ontology than with decision making specifically. I will be exploring this avenue separately.
    Last edited: May 18, 2014
    Dasick, Lemon, keithburgun and 2 others like this.
  2. Disquisitor Sam

    Disquisitor Sam Well-Known Member

    I'm very confident that uncertainty from output randomness specifically, in most commonly used formats (e.g. dice and the like), adds no actual uncertainty to the system. It adds only probability. We have an entire theory of mathematics dedicated to solving such equations. The key word there is "solve." If I'm playing an RPG, for example, and I have a 46% chance-to-hit axe and a 65% chance-to-hit sword, this is not a real decision. There is uncertainty as to whether or not each swing will hit, but there is certainty as to which is the more reliable option and thus the correct choice.

    No matter what we do to obfuscate the equation, effective randomness (dice especially) can be trivially solved. If our 65% sword deals 2d8 damage and our 46% axe deals 2d12 damage, it may appear to be a decision to a brain that hasn't mastered probability theory. But it's obviously calculable, even if some players choose not to calculate it. As it happens, the sword has a DPR of 5.85 while the axe has a DPR of 5.98. In our case, this solution doesn't even affect very much. In fact, even if we swapped the DPR of the sword and the axe, we might still value the axe for its "spikiness." Even that isn't really ambiguous, because it's pretty easy to tell in most systems whether a 0.13 DPR loss is even significant, and if not, clearly the axe still wins.
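    The arithmetic above is quick to check in code. This sketch uses exactly the numbers from the example (65% to hit for 2d8, 46% to hit for 2d12); nothing else is assumed:

    ```python
    def expected_roll(num_dice, sides):
        """Expected value of the total of num_dice dice with the given number of sides."""
        return num_dice * (sides + 1) / 2

    def dpr(hit_chance, num_dice, sides):
        """Expected damage per round: hit probability times expected damage on a hit."""
        return hit_chance * expected_roll(num_dice, sides)

    sword = dpr(0.65, 2, 8)   # 0.65 * 9  = 5.85
    axe   = dpr(0.46, 2, 12)  # 0.46 * 13 = 5.98
    print(f"sword DPR: {sword:.2f}, axe DPR: {axe:.2f}")
    ```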

    Let's try a more relevant and reasonable approach: resolving attacks in wargames. If you have these 6 guys attack those 4 guys, you're probably rolling upwards of 30 to 40 dice in a game like Warhammer 40,000. That's an extremely complicated probability calculation that is just not possible to process in your head - you can only learn by trial and error and the resultant "feeling" you get for trying those attacks. This again appears to be ambiguous to people who don't care about computing the answer. That doesn't stop that individual action from being solvable, however. The outcome table is a little more complicated than our DPR example, but it's just a matter of presenting the information in an intuitive way. A nice-looking table of possible results with associated probabilities makes the choice really easy. And it's not only possible to compute this stuff - you really should. Being able to look at the deployment of the units and instead see their potential matchups and conflicts expressed in probabilistic outcome tables makes the actual game state much more transparent. Excessive dice just serve to hide the inherently solved states from people who'd rather play the game than work spreadsheets.
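    The "nice looking table of possible results" is just a binomial distribution when the dice are independent. A minimal sketch (the dice count and the 4+-on-a-d6 hit number are illustrative, not taken from any particular ruleset):

    ```python
    from math import comb

    def hit_distribution(n_dice, p_hit):
        """Exact probability of scoring k hits when rolling n dice,
        each hitting independently with probability p_hit."""
        return {k: comb(n_dice, k) * p_hit**k * (1 - p_hit)**(n_dice - k)
                for k in range(n_dice + 1)}

    # e.g. 30 attack dice, each hitting on 4+ on a d6 (p = 1/2):
    dist = hit_distribution(30, 0.5)
    for k in range(12, 19):  # the bulk of the probability mass
        print(f"{k:2d} hits: {dist[k]:6.2%}")
    ```

    This is the kind of table that is tedious to build by hand at the table but trivial for a computer, which is the point being made above.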

    Now there may still be decisions left in a system like this. The continuous space problem is much less solvable, but only because it's not essentially a single equation. It's quite fair to say that many, if not most (if not all?), configurations of deployments are solvable from turn one. You simply use the finite move and attack ranges, play out the strongest probabilities, and see which side can force the win most often. Take special note that when I say "most often" I mean that the exact same set of moves will lead to differing outcomes due to dice, but that doesn't make the decisions in the solved flowchart any less good. And when you do get boned by dice, you enter a new configuration branch of the overall tree that is just as solvable, just with different win/loss percentages. Wargames of this type only have the appearance of being ambiguous because it's an unappealingly enormous amount of work to map an entire possibility tree. That doesn't stop them from being solvable, and doing so is still your strongest path of improvement.

    Risk management for dice is completely invalid, since it relies on players being too lazy to go through the computational busywork of actually solving the game. And not all players are lazy. Did you lose because the other player managed his risk better? Or did he know the actual probability tables and just follow his flowchart? How would you even know, considering that his flowchart sometimes doesn't work even though it consists of the best moves he can make?

    This is what I meant by dice probability pushing the games they appear in towards solvability. Someone a while ago refuted me using the example of chess-dice, wherein two people roll six-sided dice against each other, with the winner of a previous game of Chess adding 1 to their roll. The argument was that this takes away none of the ambiguity of the Chess game, thus making it no more solvable than it was before. While this is technically true, the Chess game is no longer the entire game. The game now includes an additional decision, namely "Should I win or lose this Chess game?", the answer to which is obviously "Win." Chess-dice is like a 0.1% larger system than Chess alone, and that additional 0.1% is a solved state, which pushes the whole thing towards solvability, even by that tiny bit.

    The take-away here isn't really even that systems that involve output dice randomness are 100% solvable. It's that the parts of the system that the dice touch are solvable, and would be better served to be deterministic in keeping with the intended probabilities. This is good for both designers and players: designers can very clearly see whether their system is actually interesting (i.e. a rich possibility space and decision engine) when the shroud of randomness is removed, and the players can actually play the game without being lied to by the game state.
    keithburgun, Decaf and Nachtfischer like this.
  3. 5ro4

    5ro4 Active Member

    @Disquisitor Sam: Based on what you said, does anything add actual uncertainty to the system?
    I mean, with enough processing power, no decision is uncertain.
    Suppose a game that is complex, but not so complex that you couldn't solve it by devoting 50 years to branching out every possible decision. Every single player who plays the game instead of just doing that is lazy. Not as lazy as someone who plays instead of building a spreadsheet, but lazy anyway.
    Dasick likes this.
  4. Disquisitor Sam

    Disquisitor Sam Well-Known Member

    What you're asking is "Is it possible to make a system that is unsolvable?"

    That is an extremely complicated question that I'm not entirely prepared to answer. I will say that you can make a system that might take five years to solve, ten years to solve, hundreds of years to solve, or thousands of years to solve.

    A system whose uncertainty is completely based on output probability is solvable today.
  5. Nachtfischer

    Nachtfischer Well-Known Member

    Theoretical Solvability

    Every finite game can be solved. That includes, at least, every contest of decision-making that is discrete in both time (e.g. turn-based) and space (e.g. grid-based). Matches have a defined beginning and an outcome (and therefore an end). You also have a finite number of available moves per turn.
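    As an illustration of the claim, a finite game can be solved mechanically by backward induction. The game below is an assumed toy example (a simple subtraction game, not one from the thread): players alternately remove 1-3 stones from a pile, and whoever takes the last stone wins.

    ```python
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def is_win(stones):
        """True if the player to move can force a win from this position."""
        if stones == 0:
            return False  # the previous player took the last stone and won
        # A position is winning if any move leads to a losing position
        # for the opponent.
        return any(not is_win(stones - take)
                   for take in (1, 2, 3) if take <= stones)

    # Positions that are multiples of 4 are losses for the player to move:
    print([n for n in range(1, 13) if not is_win(n)])  # [4, 8, 12]
    ```

    The same exhaustive-search idea applies in principle to any finite game; the only obstacle for something like Chess is the size of the tree, which is the "practical solvability" distinction drawn below.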

    Now, we can look at something like Starcraft or League of Legends. We say those systems operate in "continuous space", but actually they don't. It's just extremely granular (or "high resolution") space. Every unit in Starcraft has a discrete, numerically defined position. It's not as simple as "this marine is at 37/291", but you can map it to that, because a computer with finite memory can't represent infinitely many (real) numbers. The resolution might be high enough to push the system to some "practically unsolvable" state (if it can't be reduced to something far more simplistic because, e.g., 90% of positions clearly do not matter), but theoretically it's still solvable, at least as far as space is concerned.

    [Side note: In the physical world, you could argue that we have "true" continuous space. So continuous-space board games would be theoretically unsolvable. As Sam mentioned above, though, infinitesimally small distances basically never matter and it would probably be pretty unreasonable to have them matter. So these games can be reduced to something not actually having continuous space.]

    The thing is, real-time games also operate in, surprise, "continuous time". Now, this is almost a philosophical question, but I don't think we can look at time as easily as at space. Time just is, by definition, continuous. The thing is, we interact with the game in true continuous time. That's why - and we've criticized this before - Starcraft APM is basically of unlimited usefulness. Now, you could of course argue that a computer can only take in a finite number of commands per second. But I think at this point that's probably already more than a human being will ever be capable of giving. If you will, in a purely technical sense, real-time video games are solvable due to processing constraints. I wouldn't call them "practically solvable", though (again, unless you can morph them into a simpler system because 0.00001 seconds never have any meaning, which alone seems really hard to determine).

    [Side note: Again, physical real-time games take place in "true" continuous time. So they're theoretically unsolvable.]

    • Finite games (i.e. games that offer a finite number of possible moves at any of a finite number of decision points) are theoretically solvable.
    • Non-finite games (i.e. games that either offer an infinite number of possible moves at least at one decision point or have an infinite number of decision points or both) are theoretically unsolvable.
    • "Continuous space" video games are theoretically solvable due to memory constraints that technically make them finite.
    • Continuous space physical games are in and of themselves theoretically unsolvable (infinite number of possible moves), unless reduced to a discretized version.
    • "Continuous time" video games are theoretically solvable due to processing constraints that technically make them finite.
    • Continuous time physical games are in and of themselves theoretically unsolvable (infinite number of decision points), unless reduced to a discretized version.
    Practical Solvability

    I think it's safe to say that what we should really care about in game design is practical solvability. You can make any game (finite or not, although it's probably safe to assume it will be finite if it's supposed to be a contest) practically unsolvable by making the possibility space - that is, the number of possible game states - large enough.

    [Side note: This includes single-player video games with procedural generation. All of them are theoretically solvable. The thing is, they can present you with a huge number of possible situations. So, again, they can be made practically unsolvable by widening the number of possible game states.]


    I think the concept of "depth" is a closely related topic. When we say a game is "deep", we probably mean something like "this game has a huge number of possible game states that matter". In many continuous space/time games, although they theoretically have a (practically) infinite number of game states, the smallest differences between the states barely matter. Extreme example: "Is the x-coordinate of the unit's position 178.28197397 or 178.28197398?" Well, mostly the answer is: "Who cares?" And you can often reduce the number of states that actually matter down much further. In fact, many turn-based games probably have more states that matter than many non-discrete games. That's why we usually think and talk about Chess, Go or Through The Desert when it comes to "deep games" around here.
    Last edited: May 16, 2014
  6. Leartes

    Leartes Well-Known Member

    You start out with a great example but I don't think you draw the right conclusions. From a risk management point of view you have to always consider the situation. The axe is far from always best and I can imagine a lot of cases where it is super hard to compute which one is better even with the simple numbers you give here.
    Say you fight a lot of 2 hp guys: clear choice for the sword. For simplicity's sake, assume he can kill you in 3 hits. At how many hp does the optimal play switch from sword to axe? What happens if his damage is also randomized via dice and there is no option to heal in the game? I guess taking damage has a non-linear cost on your chance to win if there are several encounters.
    In this simple example you could still compute everything quite easily, but in a wargame setting this can get out of hand really quickly. Imo people put way too much emphasis on easy-to-compute stuff like the expected result or the chance that some specific event occurs.
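    The switch-point question can be made concrete with a small dynamic program. Everything about the combat model here is an assumed simplification (you attack first, a miss deals 0, you die after taking three hits and so get at most three attacks); only the weapon numbers come from Sam's example:

    ```python
    from functools import lru_cache
    from itertools import product

    def dmg_dist(num_dice, sides):
        """Exact distribution of the total shown on num_dice dice."""
        dist = {}
        for rolls in product(range(1, sides + 1), repeat=num_dice):
            total = sum(rolls)
            dist[total] = dist.get(total, 0) + (1 / sides) ** num_dice
        return dist

    # Sword = 65% to hit for 2d8, axe = 46% to hit for 2d12.
    WEAPONS = {"sword": (0.65, dmg_dist(2, 8)),
               "axe":   (0.46, dmg_dist(2, 12))}

    @lru_cache(maxsize=None)
    def win_prob(enemy_hp, attacks_left):
        """Best achievable chance of killing the enemy before running out of attacks."""
        if enemy_hp <= 0:
            return 1.0
        if attacks_left == 0:
            return 0.0
        return max(
            (1 - p_hit) * win_prob(enemy_hp, attacks_left - 1)
            + p_hit * sum(q * win_prob(enemy_hp - dmg, attacks_left - 1)
                          for dmg, q in dist.items())
            for p_hit, dist in WEAPONS.values()
        )

    def best_weapon(enemy_hp, attacks_left=3):
        """Which weapon maximizes the win chance from this state?"""
        def value(p_hit, dist):
            return ((1 - p_hit) * win_prob(enemy_hp, attacks_left - 1)
                    + p_hit * sum(q * win_prob(enemy_hp - dmg, attacks_left - 1)
                                  for dmg, q in dist.items()))
        return max(WEAPONS, key=lambda w: value(*WEAPONS[w]))

    # Against 2 hp targets the sword wins, as noted above:
    print(best_weapon(2))  # sword
    for hp in (10, 20, 30):
        print(hp, best_weapon(hp))
    ```

    The point of the sketch is exactly Leartes's: the answer depends on the state (enemy hp, attacks remaining), so "the axe has higher DPR" is not by itself a solution.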

    So I guess I agree that there is a lot of easy-to-compute stuff in dice, but I disagree that you can solve the game because of that. You certainly get an edge, and the predominant question to me is how much room for improvement is left after you have done the easy probability stuff. Games* are not only about optimizing your own turn but also about minimizing your opponent's turn. This type of robust optimization is computationally really hard in general.
    Dasick likes this.
  7. Nachtfischer

    Nachtfischer Well-Known Member

    Just to make sure: your point was that the "axe vs. sword" situation is "unsolvable" because it depends on the situation? Because actually having a solution means knowing the best move in every situation. So a decision depending on the situation doesn't actually affect solvability.

    Unless you just meant that the situation is "not as easily solved as one could assume". Then I agree.
  8. Leartes

    Leartes Well-Known Member

    I am confused nacht :D

    Solvability requires an optimization criterion to solve towards. If your criterion is maximum expected damage per round, then it is simple. If your optimization criterion is maximal win chance versus every possible action (with associated dice rolls) the opponent can take, then it is a very hard problem. This is mostly due to the fact that there are so many different outcomes.
    I get the impression that Sam wants to break the outcomes down into favorable and unfavorable, but imo this is an oversimplification. Taking 9 vs 10 dmg and winning an encounter might be decisive for the next encounter.
  9. Nachtfischer

    Nachtfischer Well-Known Member

    I'd say the only optimization criterion when we're talking about "solving a game" is maximizing the probability* of a win. "Damage per round" can be an equivalent to that, but it doesn't have to be.

    * This does not mean that the game needs (output) randomness. The thing is, if we haven't solved a game completely (e.g. if a game is in fact practically unsolvable) then the only thing we can ever do is build heuristics of what the best move probably is in every situation. We're going by "rules of thumb". That's what decision-making is after all. And the accuracy of our heuristics represents our skill, our understanding of the system.
  10. Erenan

    Erenan Well-Known Member

    EDIT: Changed "games" to "deterministic games of perfect information"

    Note that the kind of solvable we're interested in with deterministic games of perfect information is not "can you construct a complete map of the decision tree?" but rather "can you find a subset of the decision tree that includes the root node and in which every leaf node is a win?"

    That is, even in a game with an infinite possibility space, it can be possible to find a solution. Consider Tic-Tac-Toe played on an infinite grid instead of a 3x3 one. Every turn offers the player an infinite number of options. Nevertheless, it is trivial to see that player 1 will win on turn 3 every time unless he makes a mistake. This game is easily solvable, in the sense we care about.

    This is why point 3 is worded the way it is. It's not:

    0% < knowledge of the decision tree < 100%

    It's instead:

    0% < certainty of the decision's consequences with respect to achieving the goal < 100%

    So I was wrong when I said that ambiguous decision making depends upon the agent having incomplete knowledge of the game's decision tree. It depends upon the agent lacking a specific set of data in his knowledge of the game's decision tree.
    Last edited: May 16, 2014
    Dasick likes this.
  11. Disquisitor Sam

    Disquisitor Sam Well-Known Member

    Let's analyze those two questions:
    1. "Can you construct a complete map of the decision tree?"
    2. "Can you find a subset of the decision tree that includes the root node and in which every leaf node is a win?"
    I don't think that question 2 is exactly applicable in a dice wargame. The exact same decision tree from the exact same root node often leads to different outcomes. The big question is whether or not that decision tree can be improved upon. That's the paradox of output randomness in that it appears to make a game unsolvable by virtue of making the answer to question 2 incontestably "no," but the answer to question 1 quickly shifts towards "yes."

    Consider this situation: 10 bowmen can fire off a volley and wipe out 10 footsoldiers with 90% surety, but if they miss enough, the footsoldiers will close range and wipe out the bowmen with a similarly high surety depending on how many are left. I can't really say "yes" to question 2 due to those probabilities. But I also can't really improve my decisions, making the answer to question 1 "yes." I mean, what else should I have done?
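    The volley's outcome table can be made explicit under an assumed model (not from the post): each of the 10 bowmen independently kills one footsoldier with some probability p, so the number of survivors is 10 minus a binomial count. The kill chance of 0.8 is an arbitrary illustration, not the 90% figure above:

    ```python
    from math import comb

    def survivors_dist(n, p):
        """P(k footsoldiers survive a volley of n shots, each killing with probability p)."""
        return {n - k: comb(n, k) * p**k * (1 - p)**(n - k)
                for k in range(n + 1)}

    dist = survivors_dist(10, 0.8)
    print(f"P(total wipe) = {dist[0]:.2%}")   # probability all 10 are killed
    print(f"P(3+ survive) = {sum(v for k, v in dist.items() if k >= 3):.2%}")
    ```

    Whatever the exact model, the point stands: the whole scenario reduces to a table like this, and the "decision" is just reading it.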

    While this is a simplified example, adding more units and terrain doesn't expand the problem space as much as one might think. Even considering non-grid based wargames that utilize continuous space, the problem space is still highly constrained. This is because the number of options players have in their decision tree is very low, making the number of meaningful game states similarly much lower than it would seem. Bowmen can shoot arrows within a range, or they can fight in melee. One is probabilistically superior and is desirable in all but a few specific situations (the identification of which is part of mapping the decision tree). Movement seems like it would be a lot less mappable, but the space is much less continuous than it appears - all you need to do is divide the total map area by the movement range of the units, and you have something that is functionally similar to a grid. Unless movement is asynchronous or in real time, the perfect information a player has constrains movement to basically three options: toward, away, and toward strategic ground. Turn order and starting position will determine whether certain strategies involving movement are even possible, let alone "good." Your entire decision tree is made up of pairing off matchups of opposed units (informed almost entirely by probability) and controlling or denying strategic ground (i.e. ground that boosts dice rolls). Optimizing those things is quite easy since the permutations of unit matchups, area control tactics, even deployment decisions add up to a number so low that it doesn't even begin to approach the number of say potential games of Chess. There just aren't enough meaningful things you can do.

    So returning to your two questions, which of the two is more applicable in terms of doing well in dice wargames? How ambiguous are the decisions really when such calculations are known? Maybe a better question yet: is having a best move in a situation the same as having a solution for that situation? And what happens when you have an entire series of best moves?
    Erenan likes this.
  12. Erenan

    Erenan Well-Known Member

    Right, I was talking specifically about deterministic games of perfect information. Sorry for not being clear.
  13. Leartes

    Leartes Well-Known Member

    OK, let's just assume that all combat outcomes are deterministic - like in Chess. Then Chess is a very simplistic tile-based wargame. If you move to continuous space, you have an exponential number of possible relations of all units to each other in a single turn. Over the course of the game and the different outcomes, this tree is huge without rolling dice at all.
    I can only guess that you have not encountered a wargame that uses this space in a meaningful way. But the space is there, so I disagree with your general claims against wargames.
  14. deluks917

    deluks917 Well-Known Member

    I do not see why every discrete contest of decision making has to have a finite maximal number of turns. Is this even true for Puzzle Strike? It seems like you could make up games that never end under current rules. Of course, Chess is only finite because of the 50-move rule, and something like that could be added to basically every game, so maybe my point is moot.

    Anyway, I do not see how one can make a rigorous definition of "ambiguous." In basically every game there actually are a finite number of correct choices at each turn (usually 1), and anything else is a mistake.

    Even trying to say whether a decision is practically ambiguous or not depends on the player, the timer, etc. Take Flash Duel endgames. In many cases with 3 cards left in the deck, if you had enough time or smarts, you could pretty easily calculate your exact odds of winning for each of your possible moves. But most people cannot do this in the time allowed.

    However often FD endgame decisions are not even ambiguous in practice, never-mind theory. Say I can see that a certain move I make gives me a 100% chance to win. Then for me this is the only choice. On the other hand a different player may be unable to see this. So for her/him does the situation count as ambiguous?
    Last edited: May 17, 2014
    5ro4 likes this.
  15. Nachtfischer

    Nachtfischer Well-Known Member

    Yes. You've solved that decision, another player might not have. You've basically already "sucked more value out of the game" than others. :p
  16. Disquisitor Sam

    Disquisitor Sam Well-Known Member

    For the purposes of the conversation, I tend to imagine Warhammer and derivatives as the archetype for wargames. Most of them that I've seen tend to be very simulation-heavy in that manner.

    With that in mind, continuous space doesn't actually do as much for that type of game system as you might think. The problem is that comparatively few of the possible spatial relationships between units are actually meaningfully distinct or reasonable outcomes. Sure, the two armies could switch sides within a few turns, for example, but that would basically never happen because a situation where that was both desirable and achievable to both sides is probably not reasonably possible. Yes, continuous space provides for a theoretically infinite number of possible configurations, but the vast majority of them you will never see because the game's strategy makes them impractical or even nonsensical.

    Even among the configurations that are strategically viable, there isn't much wiggle room. Let's say you have some cavalry-type units with a large move range, and they are positioned within charging distance of some otherwise strong enemy units that cavalry happens to decimate (by the numbers). They don't even have to be exactly where they are - they could be a millimeter, a centimeter, or in some cases even several inches away from that position, but the strategic relevance of the position remains the same: they can make that charge, and from a probabilistic standpoint, they should (in many wargames, players refer to such an attack as the cavalry "earning their points" or "hitting above their weight"). In this way you can group huge swaths of continuous space (perhaps a few millimeters in all 360° directions) into one single meaningful game state, or as we'd call it on a true grid, a single tile. In fact, you could think of it as functionally being a grid after all - if cavalry have a move speed that can take them the entire length of the battlefield in three turns, then the battlefield is basically a 3x3 grid for them. There will certainly be some edge cases that turn it into maybe a 3.11x3.11 grid, but the difference between a 3x3 and a 3.11x3.11 will only come up in cases where they must make their full move action every time they move. In most cases, they close with their targets before spending every last millimeter of move. The whole thing is a little less constrained than a universal grid, especially when you consider units with different move speeds, but it's not so liberating as it might first appear, and certainly not infinitely so.

    The root of the problem is that the game is turn-based. Turn-based games don't really leverage continuous space as effectively as real-time or even asynchronous games do. In real-time games, the ability to subdivide continuous space is augmented by the ability to also subdivide continuous time. If you take cavalry that can traverse the game board in three turns and translate that to real-time, you get a unit that can cross the field in three arbitrary time units, but doesn't have to commit to their full move speed if something happens along the way. Choosing whether or not to close with an enemy unit is a much more difficult decision when you don't get to succeed automatically, as in the case of a turn-based wargame. In turn-based play, your opponent usually has to have some kind of special unit type or invoke some kind of special rule to interrupt the cavalry's legal charge. In real-time play, almost anything could be used to intercept them. Continuous space has more meaningful and viable states when combined with continuous time, and this is probably the reason many turn-based games are done on a grid - they usually fit together better.

    The biggest difference between the possibility space of Chess and wargames of this type is that the heuristics of wargames are usually limited to "stay at range" vs. "stay in melee" where the movement of all units is a straight line, and they all move at once on a given turn. Contrast with Chess where all unit types have different types (read: directions) of legal movement, and you only get to move one at a time. In fact, I bet that single change - constraining wargame players to moving only one unit per turn - would instantly cause its possibility space to skyrocket. It's too bad that a lot of players of those types of games would never accept it as it's "not realistic."
    Erenan likes this.
  17. Erenan

    Erenan Well-Known Member

    Yes, I haven't really played wargames, but in any case, I don't really see what it is that you disagree with exactly. The second question I posited ("can you find a subset of the decision tree that includes the root node and in which every leaf node is a win?") as a condition for non-ambiguity was haphazard and surely inaccurate. The real meat of the point I was trying to make is that knowing the complete decision tree is not necessary. As far as I can tell - and again this is fairly haphazard, so take it with a grain of salt - what is necessary is that the player knows the optimal move with 100% certainty. If that is the case, then it is not an ambiguous decision. Therefore, if the player knows the optimal move with 100% certainty for every decision from game start to game end, then the player was not playing a game, as he was never faced with an ambiguous decision.

    Knowing a subset of the decision tree that guarantees a path from the root node to a winning outcome no matter what the opponent does is sufficient for this. Whether or not it is necessary is an interesting question. Of course, in some games optimal play by both players leads to a draw, so obviously non-ambiguity doesn't guarantee a win, so what I had said definitely requires correction. What is the appropriate correction?

    Specifically, I guess the questions I'm asking are "what constitutes an optimal move?" and "what are the characteristics and extent of the knowledge the player must have in order to know with 100% certainty what the optimal move is?"

    Come to think of it, I guess what is necessary for a game to be solved is that the player knows the optimal move (and knows with 100% certainty that it is in fact the optimal move) for every position reachable given that the player has always played optimal moves up to that point. You don't need to know what to do in obscure corner cases reachable only by playing sub-optimally, just so long as you know how to play optimally (and again, know for sure that such play is optimal) from start to finish.
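    The notion of a solved game sketched above - knowing the optimal move, with certainty, at every reachable position - is essentially what a minimax (here negamax) search over the full decision tree provides. A minimal sketch in Python; the node representation, function names, and toy tree below are my own inventions, purely for illustration:

    ```python
    def negamax(node):
        """Game-theoretic value of a position for the player to move.

        A node is either a terminal payoff (+1 win, 0 draw, -1 loss,
        always from the perspective of whoever would move there) or a
        list of child nodes.
        """
        if isinstance(node, int):              # terminal position
            return node
        # The mover picks the child that is worst for the opponent.
        return max(-negamax(child) for child in node)

    def optimal_move(node):
        """Index of a child achieving the game-theoretic value.

        A player who can compute this (and knows it is correct) for
        every reachable position never faces an ambiguous decision in
        the sense discussed in this thread.
        """
        return max(range(len(node)), key=lambda i: -negamax(node[i]))

    # Tiny hypothetical tree: we move at the root, the opponent replies,
    # and each terminal payoff is from the perspective of the next mover
    # (i.e. us again).
    tree = [
        [+1, -1],   # option 0: the opponent has a reply that loses for us
        [+1, +1],   # option 1: every opponent reply still wins for us
    ]
    ```

    Here negamax(tree) returns +1 and optimal_move(tree) returns 1: option 1 is a guaranteed win no matter what the opponent does, so in the terms above the "decision" at the root is not ambiguous.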
    Last edited: May 18, 2014
    Disquisitor Sam likes this.
  18. keithburgun

    keithburgun Administrator, Lead Designer Staff Member

    This is kind of what I've been trying to say all along in response to the whole "risk management" argument.

    BTW: can I just say that this is a great thread? This is the kind of thread that is the reason I set these forums up. I wish I had more time to go through all of it, but I'm way behind on everything.
    Erenan and Disquisitor Sam like this.
  19. EnDevero

    EnDevero Well-Known Member

    I'd say an optimal move is the move that leaves the player with the most paths to victory (the most options, basically). How does that sound? At their core, strategy games are about making sure you have more options open than your opponent, right?
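    That "most paths to victory" heuristic can be made concrete by counting the winning leaves left open under each candidate move. This is only a naive sketch of the idea - it ignores whose turn it is at each level, and the tree encoding is invented for illustration:

    ```python
    def count_wins(node):
        """Count the winning leaf outcomes reachable below a node.

        Nodes are nested lists; leaves are 'W' (win) or 'L' (loss).
        """
        if node == 'W':
            return 1
        if node == 'L':
            return 0
        return sum(count_wins(child) for child in node)

    def most_paths_move(node):
        """Pick the move that keeps the most winning leaves open."""
        return max(range(len(node)), key=lambda i: count_wins(node[i]))

    # Hypothetical position with three candidate moves:
    tree = [['W', 'L'], ['W', ['W', 'L']], ['L', 'L']]
    ```

    most_paths_move(tree) is 1, since two winning leaves stay open under it. Note that this count can disagree with true minimax-optimal play (an opponent gets to close paths too), which is one reason it is only a heuristic proxy for "optimal."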

    This seems to be the main explanation for why people often desire randomness in games. It can force players to deal with more corner cases. As players get better at a game, they constrict the possibility space of each match, and randomness counteracts that constriction. (Because to play a game is to attempt to control a system; randomness is the refusal of control).

    By the way, this leads me to another point. When we talk about games as being about "decision-making", that is left pretty ambiguous, so Keith created the term "interesting decision-making". That kind of gives a clearer picture, but still leaves a lot to interpretation. Along with that, Keith's theory says that contests measure, but it doesn't specify exactly what contests measure. I mean, a thermometer is not a contest, yet it can measure something about a person. So I'd like to propose that a contest should be regarded as a measurement of an agent's control over a system. (For example, if you had to control your body temperature to make the thermometer reach a certain point, that would then be a contest). If that's agreed upon, it would then make games about measuring "control through decision-making". I guess you could call this "intelligent control" or something like that, I dunno. It's nothing revolutionary, but it makes it a lot easier to talk about this stuff. Basically:

    1) It allows us to talk about the "contest" part of games as much as the "decision-making" part and understand the meaningful distinction. Think along the lines of "randomness inherently refuses the player's control" versus "randomness is arbitrary noise so the player can't draw enough data to make an informed decision without relying on probability". Both get at a similar issue, but the former addresses contests and systems in general while the latter is about the specific effects on the player's mental process.

    2) It makes the importance of randomness in these discussions more apparent. Games are inherently about order and chaos, and that should be reflected in our terminology.
    Disquisitor Sam and Dasick like this.
  20. Lemon

    Lemon Well-Known Member

    I've been playing a lot of Go recently and thinking about why it has such a deep possibility space. I think it comes down to the following factors:

    • Moves are granular. You have 19 degrees of granularity on each of 2 axes, and often at least half a dozen choices are valid moves - especially at the beginning.
    • Moves accomplish more than one objective. Each stone can protect your stones, expand your territory, attack your opponent's stones, and reduce their territory. The best moves accomplish several of these at once, and a large part of the game is trying to nudge your opponent into a situation where they can achieve less per move than you.
    • Moves have a large window of consequence. Because stones (usually) remain on the board for the whole game, each move you make will affect the strength of all future moves. A skilled player will make moves that have impact from when they are played until the end of the game. Sometimes a pro player will make a move that doesn't seem to make sense until dozens of turns in the future.

    These 3 factors are at least one way to create ambiguous decisions. You have a wide range of possible choices, which each affect various goals in various ways, and will have consequences for many turns into the future.
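    The granularity point can be put in rough numbers. Using commonly cited ballpark averages for branching factor and game length (the figures below are estimates, not measurements), the game-tree sizes of Chess and Go differ by hundreds of orders of magnitude:

    ```python
    import math

    # (average branching factor b, typical game length d in moves);
    # ballpark figures often quoted for each game.
    games = {
        "Chess": (35, 80),
        "Go": (250, 150),
    }

    for name, (b, d) in games.items():
        # Game-tree size is roughly b**d; report it as a power of ten.
        exponent = d * math.log10(b)
        print(f"{name}: ~{b}^{d} = 10^{exponent:.0f} move sequences")
    ```

    That works out to roughly 10^123 move sequences for Chess and 10^360 for Go - in line with the estimates usually quoted - which gives a sense of how much those 361 granular placement options compound over a long game.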
