## Definition of a strategic game

To describe a situation in which decision-makers interact, we need to specify

- who the decision-makers are
- what each decision-maker can do
- each decision-maker’s payoff from each possible outcome.

A strategic game is one way to specify these components. Precisely, a *strategic game* consists of

- a set of **players**
- for each player, a set of **actions** (sometimes called strategies)
- for each player, a **payoff function** that gives the player's payoff to each list of the players' actions.

An essential feature of this definition is that each player's payoff depends on the list of all the players' actions. In particular, a player's payoff does not depend only on her own action.

When using the theory of strategic games to study oligopoly, we will specify the components as follows:

- **Players**: The set of firms.
- **Actions**: The set of outputs, or the set of prices, or the set of advertising budgets, or any other variable chosen by the firm, or any combination of these variables.
- **Payoffs**: The firms' profits.

However, the notion of a strategic game can be, and has been, used to study a very wide variety of situations, from tariff wars between countries to electoral competition to the design of legal regimes to sibling rivalry to the mating habits of hermaphroditic fish. In particular, the definition of a strategic game does not put any restrictions on the nature of the players' actions. For example, an action can be a single variable (like an output, or a price), or can be a list of variables (like an (output, price) pair), or can be a complicated contingency plan (if X happens, choose x, while if Y happens, choose y, ...).

A list of actions, one for each player in the game, is called an action profile (or, sometimes, a strategy profile or strategy combination).

We can compactly represent a strategic game with two players in a table, like the following one.

| Player 1 \ Player 2 | $L$ | $R$ |
|---|---|---|
| $T$ | 2, 2 | 0, 3 |
| $B$ | 3, 0 | 1, 1 |

This table represents a strategic game in which player 1's actions are $T$ and $B$ and player 2's actions are $L$ and $R$. The first number in each box is player 1's payoff to the pair of actions that define the box, while the second number in each box is player 2's payoff to the pair of actions that define the box. Thus, for example, if player 1 chooses the action $B$ and player 2 chooses the action $L$ then player 1's payoff is 3 and player 2's payoff is 0.
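A table like this is straightforward to represent in code. The following sketch (the dictionary layout and variable names are my own, not part of the original text) stores the game above as a mapping from action profiles to payoff pairs:

```python
# A two-player strategic game: map each action profile (a1, a2)
# to the pair (player 1's payoff, player 2's payoff).
# The numbers match the table above.
game = {
    ("T", "L"): (2, 2),
    ("T", "R"): (0, 3),
    ("B", "L"): (3, 0),
    ("B", "R"): (1, 1),
}

# Player 1's payoff when she plays B and player 2 plays L:
print(game[("B", "L")][0])  # → 3
# Player 2's payoff at the same profile:
print(game[("B", "L")][1])  # → 0
```

Storing payoffs by profile, rather than per player, mirrors the definition: each payoff function takes the whole list of actions as input.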

# Nash equilibrium

## Definition

What actions will be chosen by the players in a strategic game? We assume that

- each player chooses the action that is best for her, given her beliefs about the other players’ actions.

How do players form beliefs about each other? We consider here the case in which every player is experienced: she has played the game sufficiently many times that she knows the actions the other players will choose. Thus we assume that

- every player’s belief about the other players’ actions is correct.

The notion of equilibrium that embodies these two principles is called Nash equilibrium (after John Nash, who suggested it in the early 1950s). (The notion is sometimes referred to as a “Cournot-Nash equilibrium”.) Precisely:

- A **Nash equilibrium** of a strategic game is an action profile (list of actions, one for each player) with the property that no player can increase her payoff by choosing a different action, given the other players' actions.

Note that nothing in the definition suggests that a strategic game necessarily has a Nash equilibrium, or that if it does, it has a single Nash equilibrium. A strategic game may have no Nash equilibrium, may have a single Nash equilibrium, or may have many Nash equilibria.

## Finding Nash equilibria: games with finitely many actions for each player

Consider, for example, the game

| Player 1 \ Player 2 | $L$ | $R$ |
|---|---|---|
| $T$ | 2, 2 | 0, 3 |
| $B$ | 3, 0 | 1, 1 |

There are four action profiles, $(T, L)$, $(T, R)$, $(B, L)$, and $(B, R)$; we can examine each in turn to check whether it is a Nash equilibrium.

- $(T, L)$: By choosing $B$ rather than $T$, player 1 obtains a payoff of 3 rather than 2, given player 2's action. Thus $(T, L)$ is not a Nash equilibrium. (Player 2 can also increase her payoff, from 2 to 3, by choosing $R$ rather than $L$.)
- $(T, R)$: By choosing $B$ rather than $T$, player 1 obtains a payoff of 1 rather than 0, given player 2's action. Thus $(T, R)$ is not a Nash equilibrium.
- $(B, L)$: By choosing $R$ rather than $L$, player 2 obtains a payoff of 1 rather than 0, given player 1's action. Thus $(B, L)$ is not a Nash equilibrium.
- $(B, R)$: Neither player can increase her payoff by choosing an action different from her current one. Thus this action profile is a Nash equilibrium.

We conclude that the game has a unique Nash equilibrium, $(B, R)$.
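The profile-by-profile check above is entirely mechanical, so it can be automated. Here is a minimal sketch (the function and variable names are my own, not part of the original text):

```python
def is_nash(game, actions1, actions2, profile):
    """Check whether the action profile (a1, a2) is a Nash equilibrium:
    no player can raise her own payoff by a unilateral deviation."""
    a1, a2 = profile
    # Player 1 compares her payoff to every alternative, holding a2 fixed.
    if any(game[(d1, a2)][0] > game[(a1, a2)][0] for d1 in actions1):
        return False
    # Player 2 compares her payoff to every alternative, holding a1 fixed.
    if any(game[(a1, d2)][1] > game[(a1, a2)][1] for d2 in actions2):
        return False
    return True

# The game from the table above.
game = {("T", "L"): (2, 2), ("T", "R"): (0, 3),
        ("B", "L"): (3, 0), ("B", "R"): (1, 1)}

equilibria = [p for p in game if is_nash(game, ["T", "B"], ["L", "R"], p)]
print(equilibria)  # → [('B', 'R')]
```

Exhaustive checking like this works only because each player has finitely many actions; the best response method discussed below handles the infinite case.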

Notice that in this equilibrium both players are worse off than they are in the action profile $(T, L)$. Thus they would like to achieve $(T, L)$; but their individual incentives point them to $(B, R)$.

This game is called the **Prisoner’s dilemma**; it has been used to model a very wide variety of situations. The story that gives the game its name is the following. Two suspects in a major crime are in separate cells. There is enough evidence to convict each of them of a minor offense, but not enough evidence to convict either of them of the major crime unless one of them acts as an informer against the other (finks). If they are both quiet, each will be convicted of the minor offense and spend one year in prison. If one and only one of them finks, she will be freed and used as a witness against the other, who will spend four years in prison. If they both fink, each will spend three years in prison.

Assign each player the payoff of 0 for a four-year jail term, the payoff of 1 for a three-year term, the payoff of 2 for a one-year term, and the payoff of 3 for freedom, and associate $T$ and $B$ for player 1 with the actions Quiet and Fink, and $L$ and $R$ for player 2 with the actions Quiet and Fink. Then the game above represents this situation.
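The translation from jail terms to the payoff table can be spelled out explicitly. In this sketch (the outcome labels and variable names are my own), each outcome of the story is converted to the payoffs above:

```python
# Payoff assigned to each jail outcome: 0 for four years, 1 for three years,
# 2 for one year, 3 for freedom.
payoff = {"four years": 0, "three years": 1, "one year": 2, "free": 3}

# Outcomes of the story, indexed by (player 1's action, player 2's action),
# where Quiet corresponds to T (or L) and Fink to B (or R).
outcomes = {
    ("Quiet", "Quiet"): ("one year", "one year"),
    ("Quiet", "Fink"):  ("four years", "free"),
    ("Fink", "Quiet"):  ("free", "four years"),
    ("Fink", "Fink"):   ("three years", "three years"),
}

game = {actions: (payoff[o1], payoff[o2])
        for actions, (o1, o2) in outcomes.items()}
print(game[("Fink", "Quiet")])  # → (3, 0), matching (B, L) in the table
```

Any payoff numbers that preserve the same ordering of outcomes would represent the situation equally well; the numbers 0 through 3 are just a convenient choice.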

We conclude from our analysis of the Nash equilibrium of this game that the outcome will be that both players Fink and wind up in jail for three years.

## Finding Nash equilibria: best response functions

In a game in which each player has infinitely many possible actions, we cannot find a Nash equilibrium by examining all action profiles in turn. To develop an alternative method of finding Nash equilibria, we first reformulate the definition of a Nash equilibrium for a two-player game. (The general definition above applies to games with any number of players; for simplicity I now restrict to games with two players.) Call the action of player 1 that maximizes her payoff, given that player 2's action is $a_2$, player 1's *best response* to $a_2$. Similarly, call the action of player 2 that maximizes her payoff, given that player 1's action is $a_1$, player 2's *best response* to $a_1$. (I am assuming that each player has a single best response.)

Given this definition of best responses, a pair of actions $(a_1, a_2)$ is a Nash equilibrium if and only if

player 1's action $a_1$ is a best response to player 2's action $a_2$, and player 2's action $a_2$ is a best response to player 1's action $a_1$.

That is, in order to find a Nash equilibrium we need to find a pair of actions such that each is a best response to the other. If we denote player 1's best response to $a_2$ by $b_1(a_2)$ and player 2's best response to $a_1$ by $b_2(a_1)$, then we can write the condition for a Nash equilibrium more compactly:

the pair of actions $(a_1^*, a_2^*)$ is a Nash equilibrium if and only if $a_1^* = b_1(a_2^*)$ and $a_2^* = b_2(a_1^*)$.

The method of finding the players' best response functions and then solving the two simultaneous equations is most useful when considering a game in which each player has infinitely many actions, but it can be applied also to a game in which each player has finitely many actions. Consider, for example, the Prisoner's dilemma:

| Player 1 \ Player 2 | $L$ | $R$ |
|---|---|---|
| $T$ | 2, 2 | 0, 3 |
| $B$ | 3, 0 | 1, 1 |

Player 1's best response to $L$ is $B$, and her best response to $R$ is also $B$. Similarly, player 2's best response to $T$ is $R$ and her best response to $B$ is $R$. Thus we have

$$b_1(L) = B, \quad b_1(R) = B, \quad b_2(T) = R, \quad b_2(B) = R.$$

We see that the only pair of actions $(a_1, a_2)$ with the property that $a_1 = b_1(a_2)$ and $a_2 = b_2(a_1)$ is $(B, R)$, the Nash equilibrium that we found previously.
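The best response calculation translates directly into code. In this sketch (function names $b_1$, $b_2$ follow the text; everything else is my own choice), each best response is computed by maximizing the relevant player's payoff, and a Nash equilibrium is a pair that solves both equations simultaneously:

```python
# The Prisoner's dilemma from the table above.
game = {("T", "L"): (2, 2), ("T", "R"): (0, 3),
        ("B", "L"): (3, 0), ("B", "R"): (1, 1)}
A1, A2 = ["T", "B"], ["L", "R"]

def b1(a2):
    """Player 1's best response to a2: the action maximizing her payoff."""
    return max(A1, key=lambda a1: game[(a1, a2)][0])

def b2(a1):
    """Player 2's best response to a1: the action maximizing her payoff."""
    return max(A2, key=lambda a2: game[(a1, a2)][1])

# A Nash equilibrium is a profile (a1, a2) with a1 = b1(a2) and a2 = b2(a1).
equilibria = [(a1, a2) for a1 in A1 for a2 in A2
              if a1 == b1(a2) and a2 == b2(a1)]
print(equilibria)  # → [('B', 'R')]
```

In a game with infinitely many actions the same two equations would be solved analytically (for example, by calculus) rather than by enumeration, but the fixed-point condition is identical.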

This is a copy of a document, no longer available online, by [Martin J. Osborne](https://www.economics.utoronto.ca/osborne/index.html).