Elemental Feedback Structures for Game Design

This paper was one of the two papers nominated for the best paper award at the GAMEON-NA conference at Georgia Tech, Atlanta. It is published in the proceedings of the conference. The full reference is: J. Dormans (2009) "Machinations: Elemental Feedback Patterns for Game Design" in Joseph Saur & Margaret Loper (eds) GAME-ON-NA 2009: 5th International North American Conference on Intelligent Games and Simulation, pp. 33-40.


There are many ways to look at games. They are entertainment devices geared towards pleasurable interaction on one hand, and they are cultural media contributing to and reflecting on contemporary culture on the other. At the same time, games consist of complex rule systems that model fictional environments and facilitate player agency in those environments. This allows us to approach games in many different ways. Without suggesting that one approach is better than another, this paper treats games as complex state machines: interactive devices that can be in many different states, and whose current state affects the transition to a new state. In particular, this paper focuses on the role feedback loops play in games and sets out to identify the most elemental types of feedback that can occur in games.

It is an approach that focuses on game systems and neglects players. This is a move that can be, and has been, critiqued: without players games would be, quite literally, meaningless. The formal rule systems of games are subject to constant change and reinterpretation. A formal approach always runs the risk of turning a blind eye towards this dynamic and important dimension of games (see Malaby 2007). However, a game designer first and foremost builds game systems. It is this system that codifies the player’s possible interactions and generates individual game experiences. The aim of this paper is to understand the elemental structures that contribute to quality gameplay and that ultimately facilitate the expressive and dynamic nature of games.

This is a structural approach to game systems; it focuses on the structure of game systems and patterns that might be found in these structures. It is not what I call a formal approach, as it lacks the mathematical rigor required to deserve such a label. Yet it intends to do exactly what many others would loosely call a formal approach: to provide a common, abstract and precise language that can be used to increase our understanding of games. It is not the first attempt in this respect. The call for such a formal or structural language has been expressed before, with mixed results (LeBlanc 1999, Church 1999, Kreimeier 2002, Grünvogel 2005, Koster 2005, Bura 2006). To date, none of these has been so successful as to become an industry or academic standard. I feel this is largely because they tend either to be too mathematical for the diverse population of game designers and scholars, or were not explored or presented in enough detail. Most importantly, for a framework like this to be of any success, it requires designers to make an investment by learning the paradigm. Only an obvious return justifies this investment: using the framework should improve design and speed up the design process.

Many of the concepts proposed in these works have found their way into the approach presented in this paper. But I also drew inspiration from fields as diverse as linguistics, semiotics, the science of complexity and modern control theory. This paper expands the ideas presented in an earlier paper (Dormans 2008), which used a similar approach based on UML diagrams. The response to that work, as well as my work with students and my own experience as a game designer, inspired me to focus more closely on feedback patterns in games and their relation to the flow of the game, which I understand here as a particular progression through different game states that is the result of playing a game.

When one sets out to model anything as complex as games, the problem always is that a model can never do justice to the true complexity of what one tries to model. This is true for most models, and the best models succeed in stripping down the complexity of the original by leaving out, or abstracting away, many important details. This is certainly the case with the models I present here. However, any model is a tool that can help us understand and work with complex systems. The model presented in this paper certainly is such a tool. To use the model to best effect, one must understand the concepts that informed its creation. Like any model, this model only facilitates understanding; it is never a substitute for it.

This paper starts by investigating games as state machines in the first two sections. It discusses state machine diagrams and Petri nets as possible methods for modeling games. In the next two sections it explores the important structural notion of feedback loops and devises a diagram language based on Petri nets to express feedback structures in games. An example of how understanding a game’s feedback structure can be utilized to improve its design follows. Finally, a small number of elemental feedback patterns that can be found in many games are presented.

I am not the first author to build a set of game design patterns. Staffan Björk and Jussi Holopainen (2005) do exactly that. Their collection of over two hundred patterns has a broader scope than what I present here, but describes games mostly from the outside. In contrast, this approach tries to construct game design patterns inside-out. It takes theoretical concepts concerning state machines as its starting point, and tries to identify patterns in the structure of games from there.

The notation for feedback structures developed in this paper is adapted from the interactive diagrams that were developed for my research into feedback structures. These diagrams, an online tool to create them, as well as an extended list and a more thorough description of the patterns presented here, can be found on the website that accompanies this paper:

Games as state machines

A game can be understood as a state machine: there is an initial state or condition, and actions of the player (and often the game, too) can bring about new states until an end state is reached (Grünvogel 2005). In the case of many single-player video games, either the player wins or the game ends prematurely. The game’s state usually reflects the player’s location, the location of other players, allies and enemies, and the current distribution of vital game resources. From a game’s state the player’s progress towards a goal can be read. State machines can be diagrammed. In these diagrams circles represent states and arrows represent transitions between states. Often these transitions are marked with labels that indicate what brings the transition about. For example, figure 1 represents a state machine of a fairly straightforward and relatively simple generic adventure game.

Figure 1: Adventure game state machine

Many things have been omitted from this diagram. For example, the way the player moves through the world has been left out, which is no trivial aspect in an action adventure game with a strong emphasis on exploring (as is the case with most Legend of Zelda games). Still, we can easily abstract it away from this diagram as movement does not seem to bring any relevant changes to the game state (other than the requirement of being in a certain location to be able to execute a particular action).

The question is whether or not a formal representation of a game is of any use. Looking at the diagram, this game does not look complex at all. The set of possible trajectories through the state machine is very limited: the only possibilities are ‘abcde’ and ‘abdce’. This game is a machine that cannot produce any other result. It is, to use Jesper Juul’s categories, a game of progression, and not a game of emergence (Juul 2005). To be fair, most adventure games have a much larger set of states and player actions that trigger state transitions. There might be side quests for the player to follow, or even optional paths that lack the symmetry of the two branches in figure 1. A game like this might grow in complexity very fast (see for example figure 2), but the set of possible trajectories remains ultimately finite. Yet this is what a lot of games have done in the past.
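The finite character of such a machine is easy to make concrete in code. The sketch below is a minimal transition table for a machine of the kind shown in figure 1; the state names are hypothetical inventions of mine, but the transition labels a-e correspond to the two trajectories ‘abcde’ and ‘abdce’ discussed in the text.

```python
# A minimal sketch of a game as a finite state machine. State names are
# hypothetical; only the transition labels a-e follow the paper's figure 1.
ADVENTURE = {
    "start":      {"a": "entrance"},
    "entrance":   {"b": "crossroads"},
    "crossroads": {"c": "got_key", "d": "got_sword"},
    "got_key":    {"d": "ready"},
    "got_sword":  {"c": "ready"},
    "ready":      {"e": "victory"},
}

def play(transitions):
    """Run a sequence of transition labels from the initial state."""
    state = "start"
    for label in transitions:
        state = ADVENTURE[state][label]  # an illegal move raises KeyError
    return state

print(play("abcde"))  # -> victory
print(play("abdce"))  # -> victory
```

Any other label sequence fails, which is exactly the point: the machine can produce only these two results.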

Figure 2: A more complex game state machine, but one that still produces a finite set of possibilities

To really change the character of the output of the machine, a structural change needs to be made to the setup of the game. One possibility is to introduce recursion into the diagram. Recursion simply means that a transition takes you back to a previous state, allowing you to loop through the diagram in different and more varied ways. Chomsky has shown that by including recursion in a state machine, the set of possible results quickly becomes infinite (1957: 18-25). For example, we could make the supply of keys and locked doors in our previous game endless, allowing the player to loop back indefinitely (see figure 3). The set of possible results is now {abcde, abdce, ababcde, ababdce, abababcde, abababdce, etc.}.

Figure 3: An adventure game with recursion
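The effect of recursion on the result set can be enumerated directly. This sketch assumes the loop structure described above: any number of repetitions of the ‘ab’ loop, followed by either ‘cde’ or ‘dce’.

```python
def trajectories(max_loops):
    """Enumerate complete trajectories through the recursive machine:
    repeat the a-b loop 1..max_loops times, then finish with cde or dce."""
    for n in range(1, max_loops + 1):
        prefix = "ab" * n
        yield prefix + "cde"
        yield prefix + "dce"

print(list(trajectories(2)))
# -> ['abcde', 'abdce', 'ababcde', 'ababdce']
```

Raising `max_loops` never exhausts the family: each extra loop adds two more trajectories, so the full set is infinite.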

Of course, this has little meaning in the context of the game, unless the number of keys the player collected somehow affects his chance of defeating the end boss. In the latter case we might want to consider creating different states for each number of keys the player has collected, but this would create an infinite number of states, which is impossible to diagram. In a real implementation of a game, the number of keys would be stored in a variable, but state machine diagrams have no method of representing variables. This is problematic, as the state of many games is best expressed using variables like these.

Consider the board game Risk. In this game the state can be expressed by each individual player’s current possession of lands, armies and cards. The number of different distributions of lands, armies and cards over different players is, for all practical purposes, too large to be diagrammed in a useful manner using classic state machine diagrams. Even if we abstract away the location of countries on the board and reduce the state to the number of lands, armies and cards in the player’s possession, the number of possibilities is still too large for a classic state machine diagram. It is impossible to model such a game using a classic state machine diagram; games might be state machines, but they are rarely finite ones.

A different look at game states

Game states are usually much better expressed using a mix of variables and states. Not only does such a mixture allow us to model the large number of states encountered in most games, it also shifts attention towards the transitions between the states, which correspond to player actions. If we take Risk again as our example, we can construct a diagram for this game with only four states and seven transitions (figure 4), in which each transition affects the number of lands, armies and cards.

Figure 4: A state diagram for Risk

The diagram shows a lot of recursion, and as a result an infinite number of different paths through the state machine is possible. A diagram that focuses on transitions is clearly more capable of capturing the nature of games and the varied sessions of play. However, the diagram omits important rules and mechanics. For example, in Risk you can only play cards if you have a valid set of three, the number of armies you gain from a building action depends on the number of lands you control, and the chances of reaching victory in a battle are affected by the number of armies you have. It is possible to write down these rules in or next to the diagram, although this will do little to make the diagram more accessible to most game designers.

Petri nets are an alternative modeling technique suited for game machines (cf. Bura 2006). Petri nets work with a system of nodes and transitions. A particular type of node, the place, can hold a number of tokens. In a Petri net a place can never be connected directly to another place; instead a place must be connected to a transition, and a transition must be connected to a place. In a classic Petri net places are represented as empty circles, transitions as squares and tokens as solid circles. In a Petri net tokens flow from place to place; the distribution of tokens over the places represents the current state of the Petri net (see figure 5). This way the number of states a Petri net can express is infinitely larger than that of non-looping, finite state machine diagrams. Petri nets put much more focus on the transitions and have a natural way of representing integer values through the distribution of tokens over the places in the network. However, Petri nets can be somewhat difficult to read, as the transitions are often identified using names and are defined with formal mathematical definitions.

Figure 5: Four iterations of the same Petri net showing movement of tokens through the network
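A Petri net of this classic kind can be simulated in a few lines. The sketch below implements only the basic firing rule described above (a transition fires when every input place holds a token, consuming one from each input and producing one in each output); the place and transition names are illustrative.

```python
# A minimal Petri net interpreter: a marking maps places to token counts,
# a transition is a pair (input places, output places).
def fire(marking, transition):
    """Fire the transition if enabled; return True if it fired."""
    inputs, outputs = transition
    if all(marking[p] >= 1 for p in inputs):
        for p in inputs:
            marking[p] -= 1   # consume one token from each input place
        for p in outputs:
            marking[p] += 1   # produce one token in each output place
        return True
    return False

marking = {"p1": 2, "p2": 0, "p3": 0}
t1 = (["p1"], ["p2"])          # moves a token from p1 to p2
t2 = (["p1", "p2"], ["p3"])    # needs tokens in both p1 and p2

fire(marking, t1)
fire(marking, t2)
print(marking)  # -> {'p1': 0, 'p2': 0, 'p3': 1}
```

The state of the net is nothing but the marking, which is why a small net can express an unbounded number of states.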

Game economy

One way to enhance the readability of Petri nets is to reduce the number of possible transitions to a small set of basic operations that still allows us to represent game mechanics. Although this still entails an abstraction of a game’s true logic, I argue it is possible to come up with a set that can express a game’s characteristic flow, and that can therefore be used to create a useful model of the game. This set is based on the idea that all games can be understood in terms of their internal economy.

According to the literature, most, if not all, games have an internal economy, and this economy plays a vital role in their emergent behavior (Adams & Rollings 2007). A game's economic system is dominated by the flow of resources. In games resources can be anything: from money and property in Monopoly, via ammo and health in first-person shooters, to experience points and equipment in role-playing games. Even more abstract aspects of games, such as player skill level and strategic position, can be modeled through the use of resources. Once we have identified a game’s most important resources, we can look at how these resources are produced and consumed, and how they interact. In the case of Risk, we might think of lands, armies and cards as resources, where both lands and cards can be used to produce armies, and armies can be risked to gain lands and cards.

Adams and Rollings identify four basic economic functions for games: sources, drains, converters and traders (ibid.: 331-340). Sources create resources; drains destroy them. Converters replace one type of resource with another, whereas traders allow the exchange of resources between players or game elements. These economic functions set up a network of economic transactions that determines the flow of a game. I found that, together with a concept for pools, or places where resources can gather, these economic functions are indeed the essential operations that can represent most game mechanics. Of these structures, sources and drains are the most elemental: it can easily be shown that a converter can be constructed from a combination of a source and a drain, whereas a trader can be created from a set of interlinked pools. Figure 6 explains the diagrammatic language I use to express these elements. This language is loosely based on Petri nets. It incorporates the ideas of places and tokens, and it specifies a number of base transitions that represent the elementary operations required to build an internal economy. In addition, special links represent communication of a pool’s status; these can affect the settings of a particular operation. A special set of indicators can be used to mark different types of unpredictability (see feedback signatures below).

Figure 6: ‘Game economy’ diagrams
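The four economic functions can be sketched as operations on pools of resources. The function names below mirror Adams and Rollings’ terms, but the implementation details are my own illustration; note how the converter is literally a drain on one resource coupled to a source of another, as claimed above.

```python
# Pools are plain dicts mapping resource names to counts (illustrative).
def source(pool, resource, n=1):
    """Create n units of a resource in a pool."""
    pool[resource] = pool.get(resource, 0) + n

def drain(pool, resource, n=1):
    """Destroy up to n units of a resource in a pool."""
    pool[resource] = max(0, pool.get(resource, 0) - n)

def converter(pool, src, dst, n=1):
    """Replace one resource with another: a drain coupled to a source."""
    if pool.get(src, 0) >= n:
        drain(pool, src, n)
        source(pool, dst, n)

def trader(pool_a, pool_b, give, take):
    """Exchange one unit of resources between two pools (two players)."""
    if pool_a.get(give, 0) >= 1 and pool_b.get(take, 0) >= 1:
        drain(pool_a, give); source(pool_b, give)
        drain(pool_b, take); source(pool_a, take)

player = {"lands": 3, "cards": 0}
source(player, "armies", 3)           # e.g. a Risk-style build action
converter(player, "lands", "armies")  # trade a land for an army
print(player)  # -> {'lands': 2, 'cards': 0, 'armies': 4}
```

The resource names here are toy examples, not part of the diagram language itself.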

Figure 7 is a diagram of the internal economy of Risk. As you can see in this diagram, unlike Petri nets, it is possible to directly connect multiple transitions. The different colors denote different mechanics and structural features of the game. The black elements and connections represent Risk’s main mechanic of risking armies in battle to gain lands. The light green connections and elements in the middle of the diagram indicate the building mechanism, which in turn takes the number of lands as an input. The dark grey elements at the bottom indicate the bonus armies gained from capturing continents. The dark grey elements at the top represent the card mechanism in Risk: a successful attack will get the player a card, and particular sets of three cards will get the player more armies as well. The random knot indicates that this mechanism is subject to the randomness generated by the drawing of a card. The light grey elements at the top and the right indicate the effects generated through multiplayer dynamics: loss of armies and lands, which is informed by the number of lands and continents the player has (for reasons of clarity a similar connection from armies to the multiplayer dynamics mechanism is omitted).

In the game of Risk the main resources are easily identifiable: armies, lands and cards are represented by actual playing pieces, positions on the board and playing cards. However, sometimes resources can be more abstract. To stay with the example of Risk, strategic position can be seen as another resource in the game, as can player skill. In games like Go, Chess or Checkers, strategic position is often more advantageous than the number of playing pieces under your control. Likewise, in a platform game, the avatar’s altitude can be a vital resource in reaching the end of a level or gaining an advantage over enemies. The use of abstract resources can be vital in understanding a game. In Boulder Dash, for example, a level’s relative instability is an important factor. One might say that the player is constantly converting movement and collected diamonds into more instability; should the instability exceed the player’s skill level, he loses. Abstract resources can be modeled just like other resources. To give an example, jumping in a platform game can act like a source of the abstract resource altitude, while at other moments in the game altitude might be converted into victory points or spent to reduce the risk of difficult actions.

Figure 7: A diagram for Risk


Just as recursion is an important structural characteristic of state machines that increases the number of sequences a state machine can produce, feedback is an equivalent structural characteristic of a game’s economy. Here feedback has nothing to do with giving the player information about the game or its state; rather, feedback is understood in its original meaning, where the output of a process feeds back into the same process, often strengthening the process further. A classic example of feedback in games can be found in Monopoly, where money is spent to buy property, which in turn generates more money, with which the player can buy more property, etcetera. The concept of feedback comes from classic control theory and was introduced to the game design community by Marc LeBlanc (1999).

As is the case in classic control theory (DiStefano III et al. 1967, Andrei 2005), Marc LeBlanc distinguishes between two types of feedback: positive and negative. Positive feedback strengthens itself and destabilizes a system. It occurs whenever a small deviation creates a stronger deviation, which creates a stronger deviation in turn, as in the Monopoly example above. Positive feedback can apply to positive game effects but also to negative ones, as is the case with losing pieces in Chess, which increases the chance of losing more pieces, etcetera. LeBlanc suggests that positive feedback drives the game to a conclusion and magnifies early successes (LeBlanc 1999, see also Salen & Zimmerman 2003: 224-225). Negative feedback is the opposite of positive feedback. It stabilizes a game by diminishing differences between players, by applying a penalty to the player who has done something that takes him closer to his goal of winning the game, or by giving advantages to the trailing players. LeBlanc points out that in most multiplayer games that allow direct interaction some sort of negative feedback is already in place, as most sensible players will target the leader more than any other player. As one might expect, negative feedback can prolong a game and magnifies late successes (ibid.).
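The self-strengthening character of the Monopoly loop can be illustrated with a toy simulation. All numbers below (starting money, property price, rent) are made up, and the buy-whenever-possible strategy is a deliberate simplification, not a model of actual Monopoly rules.

```python
def simulate(turns, start_money=10, price=5, rent=1):
    """Toy positive feedback loop: money buys property, property pays rent,
    rent buys more property. Returns property holdings per turn."""
    money, holdings = start_money, 0
    history = []
    for _ in range(turns):
        while money >= price:      # spend all affordable money on property
            money -= price
            holdings += 1
        money += holdings * rent   # each property feeds money back in
        history.append(holdings)
    return history

print(simulate(6))  # holdings per turn; the total never shrinks
```

Because the output of the loop (rent) is fed back into its input (buying), holdings can only grow, and the growth compounds over time: a small early advantage keeps amplifying itself.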

Control theory, in almost all cases, strives for negative feedback while avoiding positive feedback, as it aims to create stable systems. A large part of control theory concerns itself with determining and optimizing the stability of the system. For games the situation is, of course, very different. Positive feedback loops are much more frequent in games because, in general, designers understand that players do not want to play a game that drags on forever. Yet negative feedback is also wanted, as most games with only positive feedback will seem too random to many players, who will be unable to catch the player who took an early lead. Monopoly is a good example of this effect, as an early, lucky break is an accurate predictor of who will win in the end. And despite the lack of negative feedback, the game still seems to drag on forever.

Marc LeBlanc’s observations have been picked up by influential game designers and theorists. They feature prominently in the work of Katie Salen and Eric Zimmerman (2003), Ernest Adams and Andrew Rollings (2007), and Tracy Fullerton (2008), who use them as a promising analytical lens for game design. In an earlier paper I discussed feedback in relation to emergence in games (Dormans 2008), following suggestions by Jochen Fromm (2005), who states that true emergence can only occur in systems with multiple feedback loops. From these discussions it becomes clear that feedback goes a long way in explaining the flow of a game; many games can be characterized by their particular setup of feedback loops. But, to my knowledge, none of these discussions has attempted to expand on LeBlanc’s original idea. So far, no one has looked beyond positive and negative feedback in any detail or tried to identify the most elemental patterns of feedback in games. This is what I intend to do here.

Feedback signatures

The beauty of the diagrams I propose is that they are very effective in capturing feedback loops. Feedback is realized by a closed flow of resources and/or state connections. In figure 7, four feedback loops are clearly visible. The first involves the capture of lands and the positive effect this has on the number of armies you can build, with which you can capture more lands; in the diagram this loop is closed by the green building mechanism. The second feedback loop involves the cards (the top dark grey mechanic), which are rewarded for winning lands and which can be converted into more armies once a set of three cards is collected. The third feedback loop is formed by the lower dark grey mechanics representing the capture of continents. Finally, the top, light grey, multiplayer dynamics mechanism constitutes the fourth feedback loop.

The first three feedback loops are positive: more lands or cards will lead to more armies, which will lead to more lands and cards. Yet they are not the same. The feedback of cards is much slower than the feedback of lands, but at the same time it is also much stronger. Feedback from capturing continents operates fast and strongly. These are important characteristics of the feedback loops that have a big impact on the dynamics of the game. Players are more willing to risk an attack when it is likely that the next card they get completes a valuable set: it does not improve their chances of winning a battle, but it will increase the reward if they do. Likewise, the chance of capturing a continent can inspire a player to take more risk than he should. Merely identifying all feedback loops as positive is not enough to explain these gameplay effects.

Table 1 lists seven characteristics that are used to describe the signature of a feedback loop. At first glance some of these characteristics seem to overlap, but they do not: it is easy to confuse positive feedback with constructive feedback and negative feedback with destructive feedback. However, positive destructive feedback exists, as is the case with losing pieces in a game of Chess. Likewise, the board game Power Grid employs a mechanism in which the game leaders have to invest more resources to build up and fuel their network of power plants: negative constructive feedback.

Table 1: Characteristics of feedback



Type
  Positive: Enhances differences, destabilizes the game.
  Negative: Dampens differences, stabilizes/balances the game.
Effect
  Constructive: Operates on a game effect that helps you win.
  Destructive: Operates on a game effect that will make you lose.
Return
  High: The net gain is high.
  Low: The net gain is low.
  Insufficient: The gain does not outweigh the investment (net gain is negative).
Investment
  High: Many resources must be invested before the feedback is activated.
  Low: Few resources must be invested before the feedback is activated.
Speed
  Fast: The effects of the feedback are fast or immediate.
  Slow: The effects of the feedback take time or several iterations to kick in.
Range
  Short: The feedback operates directly, over a few steps.
  Long: The feedback operates indirectly, over many steps.
Durability
  None: The feedback works only once.
  Limited: The feedback works only over a short period of time.
  Extended: The feedback works over a long period.
  Permanent: The effects of the feedback are permanent.

The strength of a feedback loop is an informal indication of its impact on the game. Strength cannot be attributed to a single characteristic; it is the result of several. For example, permanent feedback with a low return can still have a strong effect on the game.

In many games the characteristics of feedback are affected by outside factors such as chance, skill and social interaction (see table 2). Feedback in a multiplayer game that allows direct player interaction, like Risk, can change over time. As LeBlanc already pointed out, it is often negative, as players act more strongly against, or even conspire against, the leader. At the same time it can also be positive, as in certain circumstances it can be beneficial to prey on the weaker players. In other cases random chance can affect the nature of the feedback, as is the case in many board games that involve dice.

Table 2: Determinability
Deterministic: Given a certain game state, the feedback will always act the same.
Random: The feedback depends on random factors. The randomness can affect the feedback’s speed and/or return, or the possibility of feedback occurring at all; the return might also be infrequent. Random feedback is difficult for the player to assess, and increases the chance of deadlocks.
Multiplayer dynamics: The type, strength, and/or game effect of the feedback are affected by the direct interaction between players.
Meta-dynamics: The type, strength, and/or game effect of the feedback are affected by the strategic interaction between players.
Player skill: The type, strength, and/or game effect of the feedback are affected by the player’s manual skill in executing the action.

The skill of a player in performing a particular task can also be a decisive factor in the nature of feedback, as is the case in many computer games. For example, in a shooter game there often exists a feedback loop between the ammunition a player invests to defeat opponents and the ammunition these opponents drop when they are killed. The player’s skill is an important factor in this feedback, as a skillful player will waste less ammunition; his investment is lower than that of a less skillful player. Here player skill is a factor on the operational or tactical level of the game. In games of chance or tactical skill, and even in games that involve only deterministic feedback, a whole set of strategic skills can be quite decisive for the outcome. However, that is a result of a player’s understanding of the game’s feedback structures as a whole, and as such it is not an element that can or needs to be modeled within the structure.
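The ammunition loop just described can be sketched as a toy model in which skill determines how many shots a kill costs. The cost formula, drop size and skill scale are invented for illustration; only the shape of the loop (invest ammo, get ammo back) comes from the text.

```python
def fight(enemies, ammo, skill, drop=4):
    """Run the ammo feedback loop. skill is between 0 (poor) and 1
    (perfect); lower skill wastes more shots per kill."""
    for _ in range(enemies):
        shots_needed = 2 + int((1 - skill) * 6)  # less skill, more waste
        if ammo < shots_needed:
            break                    # out of ammo: the loop deadlocks
        ammo -= shots_needed         # the investment
        ammo += drop                 # the return, fed back into the loop
    return ammo

# For a skilled player each kill yields a net gain; for an unskilled
# player the same loop is a drain that eventually runs dry.
print(fight(enemies=10, ammo=20, skill=0.9))
print(fight(enemies=10, ammo=20, skill=0.1))
```

The same structure thus acts as positive feedback for one player and insufficient-return feedback for another, which is exactly why skill belongs in the loop’s signature.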

Games that feature only deterministic feedback can still show surprising emergent behavior and unexpected outcomes. In fact, it is my conviction that a well-designed game is built on only a handful of feedback loops, relies on chance, multiplayer dynamics and skill only when it needs to, and refrains from using randomness as an easy source of uncertainty.

Combining the characteristics of feedback with its determinability, it is possible to describe a signature for each feedback loop and a feedback profile for different games. While a profile like this can be very helpful in identifying the nature of feedback in a game, it does little to reveal the interaction between different feedback loops. This is where diagrams, such as figure 7, excel. Many of the characteristics of feedback loops described above can be read from the diagrams. The effect of the feedback is directly related to the constructive or destructive nature of the feedback loop, whereas return and investment depend on the number of resources involved. Range can be read from the number of elements involved in the feedback loop, speed from the number of iterations required to activate the feedback. Slightly more difficult to read are the return and the type of the feedback, but this is possible, too. In the diagram for Risk I have already included a symbol to mark chance factors; this is extended with symbols for multiplayer dynamics, meta-dynamics and player skill (see table 2). The type of feedback (positive or negative) is perhaps the most difficult to read from a static representation and requires careful inspection of the diagram. The plus symbols in the diagrams in this paper do not indicate positive feedback, only that there is a positive correlation between the number of resources in the pool and the value it is affecting, which can induce negative or positive feedback.

Feedback analysis

An analysis of a game’s feedback loops can be used to identify structural strengths and flaws in its design. Feedback is an important tool for creating interesting and varied gameplay, and most successful games incorporate one or several feedback loops. Structural flaws, or ‘bad smells’ in analogy to software engineering, are constructions that are best avoided. If we take Risk again as our example, we can identify one of its problems from play experience: building as often as you can is an effective, almost dominant, strategy. In fact, the game has a rule that forbids players to build for more than three turns in a row to counter this strategy. Inspection of the feedback structure of the game suggests another way of resolving the problem. Attacking feeds into a triple positive feedback structure (lands, cards and continents), which is a strength of its design, but apparently the feedback is not effective enough. Adjusting the feedback of lands will help only a little, as building is part of the same feedback loop and will probably strengthen the unwanted behavior. Either the feedback through cards or the feedback through continents needs to be improved. The card feedback loop involves two random factors: the success of the attack and the blind draw of the card itself. This makes the feedback unpredictable and very hard for the player to assess. In general, involving too much randomness in the same loop is best avoided, especially when this randomness affects different steps in the loop. It is very hard to balance and predict the feedback of such a loop, so reducing the randomness, for example by allowing the winner a pick from three open cards, will help a lot.

Alternatively, the feedback through the capture of continents can be improved. The problem with this feedback is that it is strong, permanent, direct and fast: it is very obvious and will inspire a strong reaction from opposing players; in other words, it acts as a red flag. Combined with a relatively high investment, this makes it a difficult strategy, but one that is very rewarding if it succeeds. The strength and the obviousness of the feedback, which invite a strong negative reaction, create a feedback loop that is too crude: it is either on and going strong, or it is off. Either the player succeeds in taking and keeping a continent and has a very good shot at winning, or the other players quickly take the continent away. By making the feedback less strong, and perhaps increasing the number of continents (or rather regions) for players to conquer, a far more subtle feedback loop is created that pays out more often without unbalancing the game too much.

Elemental feedback patterns

Looking at feedback structures in games, many recurrent patterns emerge. Below is a short list of patterns with some examples. Some patterns are diagrammed as well, although there are often multiple ways to implement a pattern. These descriptions are informal; they are presented here as a sample of the feedback patterns that can be found in games. Extended descriptions, which follow more closely the format for design patterns used in software engineering (Gamma et al. 1995) and the game design pattern libraries inspired by those patterns (Kreimeier 2002, Björk and Holopainen 2005), including interactive diagrams and multiple sample implementations, can be found on the website that accompanies this research.

Dynamic Engine – A resource called energy is produced by a source and can be spent to improve its flow. This constitutes constructive permanent positive feedback (see figure 8). Settlers of Catan has a dynamic engine at its heart that is affected by some randomness: randomly selected tiles produce resources for players that have villages and cities next to the tile. Building villages improves a player’s chances of getting resources, while upgrading villages to cities increases the resource output of a tile when it is selected.

Figure 8: Dynamic Engine (left) and Converter Engine (right)
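The dynamic engine can be reduced to a few lines of code. The sketch below assumes a greedy player who reinvests all available energy; the cost and gain numbers are arbitrary, but the accelerating production rate shows the constructive positive feedback at work.

```python
def dynamic_engine(turns, upgrade_cost=4, upgrade_gain=1):
    """Simulate a dynamic engine with a player who always reinvests."""
    energy, rate = 0, 1
    for _ in range(turns):
        energy += rate                 # the source produces energy
        while energy >= upgrade_cost:  # spend energy to improve the flow...
            energy -= upgrade_cost
            rate += upgrade_gain       # ...closing the positive feedback loop
    return energy, rate

# production accelerates: the rate itself keeps growing over time
print(dynamic_engine(10))  # after 10 turns the rate has climbed from 1 to 6
```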

Converter Engine – If a player can change one type of resource (energy) into another (fuel) and then change it back into energy to generate a surplus of energy, the game includes a converter engine (see figure 8). Power Grid is an example: in this game players spend money to buy fuel and burn fuel to make money. The surplus is used to invest in better power plants, among other things. The risk of a converter engine is the chance of deadlock: if both resources dry up, the engine dies out and cannot be revived. Consider combining a converter engine with a weak static engine to prevent deadlocks (as is the case in Power Grid). A converter engine offers more opportunities to create positive feedback and is therefore well suited as part of engine building, a higher-level pattern in which players compete to build efficient economic engines. Most real-time strategy games follow a complex engine building pattern with destructive feedback between the players, not unlike Risk (cf. Salen & Zimmerman 2003: 222).
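A minimal sketch of a converter engine, with all rates invented for illustration: two energy convert into one fuel, and one fuel burns back into three energy, yielding a surplus of one energy per cycle. Starting with too little energy and no static income demonstrates the deadlock warned about above, and a weak static engine revives it.

```python
def converter_engine(turns, energy=4, static_income=0):
    """Simulate a converter engine, optionally backed by a weak static engine."""
    fuel = 0
    for _ in range(turns):
        energy += static_income  # weak static engine guards against deadlock
        if energy >= 2:          # convert energy into fuel...
            energy -= 2
            fuel += 1
        if fuel >= 1:            # ...and burn fuel back into energy
            fuel -= 1
            energy += 3          # 2 in, 3 out: a surplus of 1 per cycle
    return energy

converter_engine(10)                             # the engine runs and grows
converter_engine(10, energy=1)                   # deadlock: stuck at 1 forever
converter_engine(10, energy=1, static_income=1)  # the static engine revives it
```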

Playing Style Reinforcement – Slow, positive, constructive feedback on a player’s actions, which also have another game effect, causes the player’s avatar or units to develop over time. As the actions themselves feed back into this mechanism, the avatar or units specialize over time, getting better at a particular task. As long as there are multiple viable strategies and specializations, the player’s avatar or units will, over time, come to reflect the player’s preferences and playing style. Often this mechanic employs experience points as a resource (see figure 9). Playing Style Reinforcement is a common pattern that can be encountered in most games that include ‘role-playing elements’.

Figure 9: Playing Style Reinforcement
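In code, the reinforcement loop is only a few lines. The skill names and growth rate below are hypothetical; the point is that whichever action the player favours slowly pulls ahead, so the avatar comes to mirror the playing style.

```python
def reinforce(actions, growth=0.1):
    """Each use of a skill feeds experience back into that same skill."""
    skill = {"melee": 1.0, "magic": 1.0}  # hypothetical starting skills
    for action in actions:
        skill[action] *= 1.0 + growth     # slow positive feedback on use
    return skill

# a player who favours melee ends up with an avatar specialized in melee
profile = reinforce(["melee"] * 8 + ["magic"] * 2)
```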

Escalating Complications – The basis of many action games is confronting the player with a task that keeps growing more difficult: the player’s actions towards completing a goal also apply feedback to the skill needed to complete the task, reducing the effectiveness of that skill (see figure 10). A classic example can be found in Space Invaders, where destroying an alien makes the others move slightly faster, increasing the difficulty of destroying the remaining aliens.

Figure 10: Escalating Complications (left) and Escalating Complexity (right)
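Following the Space Invaders example, Escalating Complications can be sketched as below; the speed-up factor is an assumption, not a value from the actual game.

```python
def alien_speeds(aliens=10, speed=1.0, speedup=1.2):
    """Record how fast the remaining aliens move as the wave is destroyed."""
    speeds = []
    while aliens:
        speeds.append(speed)  # current difficulty of hitting the aliens
        aliens -= 1           # progress towards the goal...
        speed *= speedup      # ...feeds back into the task's difficulty
    return speeds

# each kill makes the survivors faster: the list is strictly increasing
speeds = alien_speeds()
```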

Escalating Complexity – Another basic set-up for action games is a system in which the positive feedback to the game’s complexity is automatic and steady, while the player’s actions work to reduce complexity. As the game progresses, complexity is created increasingly faster (see figure 10). This way, skilled players can manage the difficulty longer than players with less skill. Games like this remain balanced for a while and then quickly spin out of control. Tetris is an excellent example of this pattern. When the positive feedback mechanism is unsteady or unpredictable, the pace of the game can vary a lot.
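A sketch loosely modelled on Tetris (all numbers are invented): complexity arrives at a steadily accelerating pace, the player removes a fixed amount per turn, and the game ends when complexity exceeds a limit. Higher skill postpones the loss, but never prevents it, which is exactly the balanced-then-out-of-control curve described above.

```python
def turns_survived(skill, limit=20.0):
    """Count turns until accelerating complexity overwhelms the player."""
    complexity, pace, turn = 0.0, 1.0, 0
    while complexity < limit:
        turn += 1
        pace *= 1.1  # automatic, steady positive feedback on complexity
        # new complexity arrives ever faster; the player works it down
        complexity = max(0.0, complexity + pace - skill)
    return turn

# a more skilled player manages the difficulty longer, but still loses
assert turns_survived(skill=3.0) > turns_survived(skill=1.0)
```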


Conclusions

There are some limitations to the use of these diagrams. The idea of an internal economy is better suited to some games than to others; in particular, it works very well for board games. In games where the economy is more abstract, these diagrams can be difficult to construct, and many games can be diagrammed in multiple ways depending on the analyst’s focus. Still, feedback loops can go a long way in explaining the flow of a game. Gameplay can be related to more characteristics of feedback loops than positive and negative feedback alone. In addition, the delicate interaction between multiple feedback loops must be taken into account to get a complete picture of the dynamics of games as (infinite) state machines. The list of feedback patterns presented in this paper, and on the accompanying website, is neither definitive nor complete; my research into feedback structures is still ongoing. Currently, patterns are still being harvested from the analysis of existing games and games under production, and by exploring theoretical possibilities suggested by the framework. Future effort is aimed at collecting more patterns and at establishing methods and practices for using the patterns to improve the design process of new games. To this end I will be looking at correlating particular game design patterns with specific game design goals, in order to provide more grip on the elusive nature of gameplay and its (serious) application.

It is important to note that although the examples were illustrated in both words and diagrams, there are many different ways of implementing these patterns; a pattern description is not a prescription of a particular implementation. The patterns can be combined in many different ways, and each individual game provides its own particular opportunities for interesting combinations. Personally, I would lean towards the simplest possible implementation, as it is usually my objective to create complex, rather than complicated, games. It is best to consider this framework as a set of building blocks (pools, resources and economic functions) that can be used to build an infinite number of different structures, some of which are recurrent patterns that can be used to analyze existing games and explore new concepts.


Acknowledgements

I would like to thank Stéphane Bura for being a great inspiration for the work presented here. Not only did he urge me to explore the diagrams I developed further using an interactive tool; his work, his comments and our discussions also set me on the right track to discover some of the key concepts presented here. Furthermore, I would like to thank Jacob Brunekreef for reading and commenting on a previous draft of this paper. Finally, I am grateful to the Hogeschool van Amsterdam for supporting my PhD research and allowing me to test this work with game design students.


References

Adams, Ernest & Rollings, Andrew (2007) Fundamentals of Game Design. Upper Saddle River: Pearson Education, Inc.

Andrei, Neculai (2005) “Modern Control Theory: A historical perspective”. Retrieved May 24, 2009, from

Björk, S. & Holopainen, J. (2005) Patterns in Game Design. Boston: Charles River Media.

Bura, Stéphane (2006) “A Game Grammar”. Retrieved May 24, 2009, from

Church, Doug (1999) "Formal Abstract Design Tools" on Gamasutra. Retrieved May 24, 2009, from

Chomsky, Noam (1957) Syntactic Structures. The Hague, Mouton Publishers.

DiStefano III, Joseph J., Stubberud, Allen R. & Williams, Ivan J. (1967) Theory and Problems of Feedback and Control Systems. New York, McGraw-Hill.

Dormans, Joris (2008) “Visualizing Game Mechanics and Emergent Gameplay”. Paper presented at the Meaningful Play Conference, East Lansing, Michigan. Retrieved May 24, 2009, from

Fromm, Jochen (2005) Types and Forms of Emergence. Retrieved September 8, 2008, from

Fullerton, Tracy (2008) Game Design Workshop: A Playcentric Approach to Creating Innovative Games, 2nd Edition. Morgan Kaufman.

Gamma, Erich, Helm, Richard, Johnson, Ralph & Vlissides, John (1995) Design Patterns: Elements of Reusable Object-Oriented Software. Boston, Addison Wesley.

Grünvogel, Stefan M. (2005) “Formal Models and Game Design”. Retrieved May 25, 2009, from

Juul, Jesper (2005) Half-Real: Video Games between Real Rules and Fictional Worlds. Cambridge: The MIT Press.

Koster, Raph (2005) “A Grammar of Gameplay: game atoms: can games be diagrammed?” Presentation at the Game Developers Conference 2005. Retrieved September 8, 2008, from

Kreimeier, Bernd (2002) “The Case For Game Design Patterns”. Paper on Gamasutra. Retrieved May 25, 2009 from

LeBlanc, Marc (1999) “Formal Design Tools: Feedback Systems and the Dramatic Structure of Completion”, presentation at the Game Developers Conference. Retrieved May 24, 2009, from

Malaby, Thomas M. (2007) “Beyond Play: A New Approach to Games”. Games and Culture 2007. No 2, 95-113.

Salen, Katie & Zimmerman, Eric (2003) Rules of Play: Game Design Fundamentals. Cambridge: The MIT Press.