Regret Minimization in Games with Incomplete Information (CFR)

The paper Regret Minimization in Games with Incomplete Information introduces counterfactual regret and shows how minimizing counterfactual regret through self-play can be used to reach a Nash equilibrium. The algorithm is called Counterfactual Regret Minimization (CFR).

The paper Monte Carlo Sampling for Regret Minimization in Extensive Games introduces Monte Carlo Counterfactual Regret Minimization (MCCFR), where we sample from the game tree and estimate the regrets.

We have tried to keep our Python implementation easy to understand, like a tutorial. We run it on a very simple imperfect-information game called Kuhn poker.


Introduction

We implement Monte Carlo Counterfactual Regret Minimization (MCCFR) with chance sampling (CS). On each iteration, it explores part of the game tree: it tries all player actions, but samples chance events. Chance events are things like dealing cards; each chance event is sampled once per iteration. It then calculates, for each action, the regret of following the current strategy instead of taking that action, and it updates the strategy based on these regrets for the next iteration, using regret matching. Finally, it computes the average of the strategies over all the iterations, which is very close to a Nash equilibrium if we run enough iterations.
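
In code, the outer loop of this procedure looks roughly as follows (a comment-only sketch; the real implementations are the CFR.iterate and CFR.walk_tree methods below):

# Sketch of the overall procedure (see `CFR.iterate` / `CFR.walk_tree` below):
#
#   cfr = CFR(create_new_history=..., epochs=...)  # game-specific history factory
#   for t in range(epochs):
#       for i in players:
#           # walk the tree: sample each chance event once, try every player
#           # action, accumulate counterfactual regrets, then regret-match
#           cfr.walk_tree(cfr.create_new_history(), i, 1, 1)
#   # the normalized cumulative strategies (InfoSet.get_average_strategy)
#   # approximate a Nash equilibrium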

We will first introduce the mathematical notation and theory.

Player

A player is denoted by $i \in N$, where $N$ is the set of players.

History

History $h \in H$ is a sequence of actions including chance events, and $H$ is the set of all histories.

$Z \subseteq H$ is the set of terminal histories (game over).

Action

Action $a$, $A(h) = \{a: (h, a) \in H\}$ where $h \in H$ is a non-terminal history.

Information Set $I_i$

Information set $I_i \in \mathcal{I}_i$ for player $i$ is similar to a history $h \in H$ but contains only the actions visible to player $i$. That is, the history $h$ will contain actions/events such as the cards dealt to the opposing player, while $I_i$ will not.

$\mathcal{I}_i$ is known as the information partition of player $i$.

$h \in I$ denotes that $h$ is one of the histories belonging to the information set $I$; all such histories look the same from the player's point of view.

Strategy

The strategy of player $i$, $\sigma_i \in \Sigma_i$, is a distribution over actions $A(I_i)$, where $\Sigma_i$ is the set of all strategies for player $i$. The strategy on the $t$-th iteration is denoted by $\sigma^t_i$.

A strategy is defined as the probability of taking an action $a$ at a given information set $I$, denoted $\sigma_i(I)(a)$.

$\sigma$ is the strategy profile, which consists of the strategies of all players: $\sigma_1, \sigma_2, \ldots$

$\sigma_{-i}$ is the set of strategies of all players except player $i$.

Probability of History

$\pi^\sigma(h)$ is the probability of reaching the history $h$ with the strategy profile $\sigma$. $\pi^\sigma_{-i}(h)$ is the probability of reaching $h$ without player $i$'s contribution; i.e. as if player $i$ took the actions to follow $h$ with a probability of $1$.

$\pi^\sigma_i(h)$ is the probability of reaching $h$ with only player $i$'s contribution. That is,

$$\pi^\sigma(h) = \pi^\sigma_i(h) \pi^\sigma_{-i}(h)$$

The probability of reaching an information set $I$ is,

$$\pi^\sigma(I) = \sum_{h \in I} \pi^\sigma(h)$$

Utility (Payoff)

The terminal utility is the utility (or payoff) $u_i(h)$ of a player $i$ for a terminal history $h \in Z$.

$u_i(\sigma)$ is the expected utility (payoff) for player $i$ with strategy profile $\sigma$.

Nash Equilibrium

Nash equilibrium is a state where none of the players can increase their expected utility (or payoff) by changing their strategy alone.

For two players, a Nash equilibrium is a strategy profile where

$$\begin{aligned}
u_1(\sigma) &\ge \max_{\sigma^*_1 \in \Sigma_1} u_1(\sigma^*_1, \sigma_2) \\
u_2(\sigma) &\ge \max_{\sigma^*_2 \in \Sigma_2} u_2(\sigma_1, \sigma^*_2)
\end{aligned}$$

An $\epsilon$-Nash equilibrium is,

$$\begin{aligned}
u_1(\sigma) + \epsilon &\ge \max_{\sigma^*_1 \in \Sigma_1} u_1(\sigma^*_1, \sigma_2) \\
u_2(\sigma) + \epsilon &\ge \max_{\sigma^*_2 \in \Sigma_2} u_2(\sigma_1, \sigma^*_2)
\end{aligned}$$

Regret Minimization

Regret is the utility (or payoff) that the player didn't get because she didn't follow the optimal strategy or take the best action.

Average overall regret for player $i$ is the average regret of not following the optimal strategy in all $T$ rounds of iterations,

$$R^T_i = \frac{1}{T} \max_{\sigma^*_i \in \Sigma_i} \sum_{t=1}^T \Big( u_i(\sigma^*_i, \sigma^t_{-i}) - u_i(\sigma^t) \Big)$$

where $\sigma^t$ is the strategy profile of all players in iteration $t$, and $(\sigma^*_i, \sigma^t_{-i})$ is the strategy profile $\sigma^t$ with player $i$'s strategy replaced with $\sigma^*_i$. That is, $R^T_i$ is the mean regret of not playing the optimal strategy.

The average strategy is the average of the strategies followed in each round: for all $I \in \mathcal{I}$, $a \in A(I)$,

$$\color{cyan}{\bar{\sigma}^T_i(I)(a)} = \frac{\sum_{t=1}^T \pi_i^{\sigma^t}(I) \sigma^t_i(I)(a)}{\sum_{t=1}^T \pi_i^{\sigma^t}(I)}$$

If $R^T_i < \epsilon$ for all players then $\bar{\sigma}^T_i(I)(a)$ is a $2\epsilon$-Nash equilibrium.

Since $u_1 = -u_2$ because it's a zero-sum game, we can add $R^T_1$ and $R^T_2$, and the second terms cancel out:

$$R^T_1 + R^T_2 = \frac{1}{T} \max_{\sigma^*_1 \in \Sigma_1} \sum_{t=1}^T u_1(\sigma^*_1, \sigma^t_2) + \frac{1}{T} \max_{\sigma^*_2 \in \Sigma_2} \sum_{t=1}^T u_2(\sigma^t_1, \sigma^*_2) < 2\epsilon$$

The average of utilities over a set of strategies is equal to the utility of the average strategy,

$$\frac{1}{T} \sum_{t=1}^T u_i(\sigma^*_i, \sigma^t_{-i}) = u_i(\sigma^*_i, \bar{\sigma}^T_{-i})$$

Therefore,

$$\max_{\sigma^*_1 \in \Sigma_1} u_1(\sigma^*_1, \bar{\sigma}^T_2) + \max_{\sigma^*_2 \in \Sigma_2} u_2(\bar{\sigma}^T_1, \sigma^*_2) < 2\epsilon$$

From the definition of $\max$,

$$\max_{\sigma^*_1 \in \Sigma_1} u_1(\sigma^*_1, \bar{\sigma}^T_2) + u_2(\bar{\sigma}^T_1, \bar{\sigma}^T_2) \le \max_{\sigma^*_1 \in \Sigma_1} u_1(\sigma^*_1, \bar{\sigma}^T_2) + \max_{\sigma^*_2 \in \Sigma_2} u_2(\bar{\sigma}^T_1, \sigma^*_2) < 2\epsilon$$

Then, since $u_2 = -u_1$,

$$\max_{\sigma^*_1 \in \Sigma_1} u_1(\sigma^*_1, \bar{\sigma}^T_2) < u_1(\bar{\sigma}^T_1, \bar{\sigma}^T_2) + 2\epsilon$$

This satisfies the definition of a $2\epsilon$-Nash equilibrium (and the same holds symmetrically for player 2). The proof for games with more than two players is similar.

So we need to minimize $R^T_i$ to get close to a Nash equilibrium.

Counterfactual Regret

Counterfactual value $\color{pink}{v_i(\sigma, I)}$ is the expected utility for player $i$ if player $i$ tried to reach $I$ (took the actions leading to $I$ with a probability of $1$),

$$\color{pink}{v_i(\sigma, I)} = \sum_{z \in Z_I} \pi^\sigma_{-i}(z[I]) \pi^\sigma(z[I], z) u_i(z)$$

where $Z_I$ is the set of terminal histories reachable from $I$, and $z[I]$ is the prefix of $z$ up to $I$. $\pi^\sigma(z[I], z)$ is the probability of reaching $z$ from $z[I]$.

Immediate counterfactual regret is,

$$\color{coral}{R^T_{i,imm}(I)} = \frac{1}{T} \max_{a \in A(I)} \sum_{t=1}^T \color{coral}{r^t_i(I, a)}$$

where

$$\color{coral}{r^t_i(I, a)} = \color{pink}{v_i(\sigma^t |_{I \rightarrow a}, I)} - \color{pink}{v_i(\sigma^t, I)}$$

and $\sigma |_{I \rightarrow a}$ is the strategy profile $\sigma$ with the modification of always taking action $a$ at information set $I$.

The paper proves that (Theorem 3),

$$R^T_i \le \sum_{I \in \mathcal{I}_i} \color{coral}{R^{T,+}_{i,imm}(I)}$$

where

$$\color{coral}{R^{T,+}_{i,imm}(I)} = \max \Big( \color{coral}{R^T_{i,imm}(I)}, 0 \Big)$$

So we can minimize the average overall regret by minimizing the immediate counterfactual regret at each information set.

Regret Matching

The strategy is calculated using regret matching.

The regret for each information set and action pair, $\color{orange}{R^T_i(I, a)}$, is maintained,

$$\color{orange}{R^T_i(I, a)} = \frac{1}{T} \sum_{t=1}^T \color{coral}{r^t_i(I, a)}$$

and the strategy is calculated with regret matching,

$$\color{lightgreen}{\sigma_i^{T+1}(I)(a)} =
\begin{cases}
\dfrac{\color{orange}{R^{T,+}_i(I, a)}}{\sum_{a' \in A(I)} \color{orange}{R^{T,+}_i(I, a')}}, & \text{if } \sum_{a' \in A(I)} \color{orange}{R^{T,+}_i(I, a')} > 0 \\[1em]
\dfrac{1}{\lvert A(I) \rvert}, & \text{otherwise}
\end{cases}$$

where $\color{orange}{R^{T,+}_i(I, a)} = \max \Big(\color{orange}{R^T_i(I, a)}, 0 \Big)$
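Regret matching is easy to see in action on a normal-form game. Here is a self-contained toy sketch of our own (separate from the implementation below): two regret-matching learners self-play rock-paper-scissors, and their average strategies approach the uniform Nash equilibrium.

ACTIONS = ['R', 'P', 'S']
# PAYOFF[a][b]: player 0's utility when playing `a` against `b` (zero-sum)
PAYOFF = {'R': {'R': 0, 'P': -1, 'S': 1},
          'P': {'R': 1, 'P': 0, 'S': -1},
          'S': {'R': -1, 'P': 1, 'S': 0}}


def regret_matching(regret):
    # strategy proportional to positive regret; uniform when none is positive
    pos = {a: max(r, 0.) for a, r in regret.items()}
    total = sum(pos.values())
    return {a: pos[a] / total if total > 0 else 1 / len(pos) for a in pos}


regret = [{a: 0. for a in ACTIONS} for _ in range(2)]
cumulative = [{a: 0. for a in ACTIONS} for _ in range(2)]
regret[0]['R'] = 1.  # break the symmetric start

for t in range(100_000):
    sigma = [regret_matching(r) for r in regret]
    for i in range(2):
        # expected utility of each pure action vs the opponent's current strategy
        va = {a: sum(sigma[1 - i][b] * (PAYOFF[a][b] if i == 0 else -PAYOFF[b][a])
                     for b in ACTIONS)
              for a in ACTIONS}
        v = sum(sigma[i][a] * va[a] for a in ACTIONS)
        for a in ACTIONS:
            regret[i][a] += va[a] - v          # r^t(a) = v(a) - v
            cumulative[i][a] += sigma[i][a]    # for the average strategy

total = sum(cumulative[0].values())
print({a: round(c / total, 3) for a, c in cumulative[0].items()})
# -> approximately {'R': 0.333, 'P': 0.333, 'S': 0.333}

Note that it is the average strategy, not the final iterate, that approaches the equilibrium; the current strategies keep cycling.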

The paper Regret Minimization in Games with Incomplete Information proves that if the strategy is selected according to the equation above, $R^T_i$ shrinks in proportion to $\frac{1}{\sqrt{T}}$, and therefore the average strategy reaches an $\epsilon$-Nash equilibrium.

Monte Carlo CFR (MCCFR)

Computing $\color{coral}{r^t_i(I, a)}$ requires expanding the full game tree on each iteration.

The paper Monte Carlo Sampling for Regret Minimization in Extensive Games shows we can sample from the game tree and estimate the regrets.

$\mathcal{Q} = \{Q_1, \ldots, Q_r\}$ is a set of subsets of $Z$ ($Q_j \subseteq Z$), where we look at only a single block $Q_j$ in an iteration. The union of all subsets spans $Z$ ($Q_1 \cup \ldots \cup Q_r = Z$). $q_j$ is the probability of picking block $Q_j$.

$q(z)$ is the probability of picking $z$ in the current iteration; i.e. $q(z) = \sum_{j: z \in Q_j} q_j$, the sum of $q_j$ over all blocks containing $z$.

Then we get the sampled counterfactual value for block $j$,

$$\color{pink}{\tilde{v}_i(\sigma, I | j)} = \sum_{z \in Q_j \cap Z_I} \frac{1}{q(z)} \pi^\sigma_{-i}(z[I]) \pi^\sigma(z[I], z) u_i(z)$$

The paper shows that the sampled counterfactual value is an unbiased estimate of the true counterfactual value,

$$\mathbb{E}_{j \sim q_j} \Big[ \color{pink}{\tilde{v}_i(\sigma, I | j)} \Big] = \color{pink}{v_i(\sigma, I)}$$

with a simple proof.
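
For the chance-sampling variant implemented below, each block $Q_j$ corresponds (under our reading of the scheme) to one joint outcome of the chance events, so $q(z)$ is simply the probability of the chance events on the path to $z$,

$$q(z) = \prod_{\substack{(h, a) \sqsubseteq z \\ P(h) = c}} \sigma_c(h)(a)$$

where $\sigma_c(h)(a)$ denotes the fixed probability of chance outcome $a$ at $h$, and $(h, a) \sqsubseteq z$ ranges over the steps of $z$. In Kuhn poker the only chance event is the initial deal, so $q(z)$ is identical for every terminal history visited in an iteration, which is why the implementation below can drop the $\frac{1}{q(z)}$ factor (see the note at the top of the CFR class).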

Therefore we can sample a part of the game tree and calculate the regrets. We calculate an estimate of the regrets,

$$\color{coral}{\tilde{r}^t_i(I, a)} = \color{pink}{\tilde{v}_i(\sigma^t |_{I \rightarrow a}, I)} - \color{pink}{\tilde{v}_i(\sigma^t, I)}$$

And use that to update $\color{orange}{R^T_i(I, a)}$ and calculate the strategy $\color{lightgreen}{\sigma_i^{T+1}(I)(a)}$ on each iteration. Finally, we calculate the overall average strategy $\color{cyan}{\bar{\sigma}^T_i(I)(a)}$.

Here is a Kuhn poker implementation to try CFR on.

Let’s dive into the code!

from typing import NewType, Dict, List, Callable, cast

from labml import monit, tracker, logger, experiment
from labml.configs import BaseConfigs, option

A player $i \in N$ where $N$ is the set of players

Player = NewType('Player', int)

Action $a$, $A(h) = \{a: (h, a) \in H\}$ where $h \in H$ is a non-terminal history

Action = NewType('Action', str)

History

History $h \in H$ is a sequence of actions including chance events, and $H$ is the set of all histories.

This class should be extended with game-specific logic.

class History:

Whether it’s a terminal history; i.e. game over. $h \in Z$

    def is_terminal(self):
        raise NotImplementedError()

Utility of player $i$ for a terminal history. $u_i(h)$ where $h \in Z$

    def terminal_utility(self, i: Player) -> float:
        raise NotImplementedError()

Get the current player, denoted by $P(h)$, where $P$ is known as the player function.

If $P(h) = c$, the current event is a chance event $c$, such as dealing cards or opening community cards in poker.

    def player(self) -> Player:
        raise NotImplementedError()

Whether the next step is a chance step; something like dealing a new card. $P(h) = c$

    def is_chance(self) -> bool:
        raise NotImplementedError()

Sample a chance event when $P(h) = c$.

    def sample_chance(self) -> Action:
        raise NotImplementedError()

Add an action to the history.

    def __add__(self, action: Action):
        raise NotImplementedError()

Get the information set key for the current player

    def info_set_key(self) -> str:
        raise NotImplementedError()

Create a new information set for the current player

    def new_info_set(self) -> 'InfoSet':
        raise NotImplementedError()

Human readable representation

    def __repr__(self):
        raise NotImplementedError()

Information Set $I_i$

class InfoSet:

Unique key identifying the information set

    key: str

$\sigma_i$, the strategy of player $i$

    strategy: Dict[Action, float]

Total regret of not taking each action $a \in A(I_i)$,

$$T \color{orange}{R^T_i(I, a)} = \sum_{t=1}^T \color{coral}{r^t_i(I, a)}$$

We maintain $T \color{orange}{R^T_i(I, a)}$ instead of $\color{orange}{R^T_i(I, a)}$ since the $\frac{1}{T}$ term cancels out anyway when computing the strategy $\color{lightgreen}{\sigma_i^{T+1}(I)(a)}$

    regret: Dict[Action, float]

We maintain the cumulative strategy, $\sum_{t=1}^T \pi_i^{\sigma^t}(I) \sigma^t_i(I)(a)$, to compute the overall average strategy $\color{cyan}{\bar{\sigma}^T_i(I)(a)}$

    cumulative_strategy: Dict[Action, float]

Initialize

    def __init__(self, key: str):
        self.key = key
        self.regret = {a: 0 for a in self.actions()}
        self.cumulative_strategy = {a: 0 for a in self.actions()}
        self.calculate_strategy()

Actions $A(I_i)$

    def actions(self) -> List[Action]:
        raise NotImplementedError()

Load information set from a saved dictionary

    @staticmethod
    def from_dict(data: Dict[str, any]) -> 'InfoSet':
        raise NotImplementedError()

Save the information set to a dictionary

    def to_dict(self):
        return {
            'key': self.key,
            'regret': self.regret,
            'average_strategy': self.cumulative_strategy,
        }

Load data from a saved dictionary

    def load_dict(self, data: Dict[str, any]):
        self.regret = data['regret']
        self.cumulative_strategy = data['average_strategy']
        self.calculate_strategy()

Calculate strategy

Calculate the current strategy using regret matching,

$$\color{lightgreen}{\sigma_i^{T+1}(I)(a)} =
\begin{cases}
\dfrac{\color{orange}{R^{T,+}_i(I, a)}}{\sum_{a' \in A(I)} \color{orange}{R^{T,+}_i(I, a')}}, & \text{if } \sum_{a' \in A(I)} \color{orange}{R^{T,+}_i(I, a')} > 0 \\[1em]
\dfrac{1}{\lvert A(I) \rvert}, & \text{otherwise}
\end{cases}$$

where $\color{orange}{R^{T,+}_i(I, a)} = \max \Big(\color{orange}{R^T_i(I, a)}, 0 \Big)$

    def calculate_strategy(self):

        regret = {a: max(r, 0) for a, r in self.regret.items()}

        regret_sum = sum(regret.values())

if $\sum_{a'\in A(I)}\color{orange}{R^{T,+}_i(I, a')} \gt 0$,

        if regret_sum > 0:

            self.strategy = {a: r / regret_sum for a, r in regret.items()}

Otherwise,

        else:

$\lvert A(I) \rvert$

            count = len(self.regret)

            self.strategy = {a: 1 / count for a in regret}

Get average strategy

    def get_average_strategy(self):

        cum_strategy = {a: self.cumulative_strategy.get(a, 0.) for a in self.actions()}

        strategy_sum = sum(cum_strategy.values())

If $\sum_{t=1}^T \pi_i^{\sigma^t}(I) > 0$,

        if strategy_sum > 0:

            return {a: s / strategy_sum for a, s in cum_strategy.items()}

Otherwise,

        else:

$\lvert A(I) \rvert$

            count = len(cum_strategy)

            return {a: 1 / count for a in cum_strategy}

Human readable representation

    def __repr__(self):
        raise NotImplementedError()
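
To make these interfaces concrete, here is a minimal game definition of our own (a sketch, not part of the original tutorial): one-shot matching pennies, where the second player moves without seeing the first player's move. It reuses the History and InfoSet base classes from above.

class MatchingPenniesInfoSet(InfoSet):
    def actions(self) -> List[Action]:
        # both players choose between heads and tails
        return [Action('H'), Action('T')]

    @staticmethod
    def from_dict(data: Dict[str, any]) -> 'InfoSet':
        info_set = MatchingPenniesInfoSet(data['key'])
        info_set.load_dict(data)
        return info_set

    def __repr__(self):
        return f'InfoSet({self.key!r}, {self.strategy})'


class MatchingPenniesHistory(History):
    def __init__(self, history: str = ''):
        # the sequence of moves so far, e.g. '', 'H', 'HT'
        self.history = history

    def is_terminal(self):
        return len(self.history) == 2

    def terminal_utility(self, i: Player) -> float:
        # player 0 wins when the two moves match; zero-sum
        u = 1. if self.history[0] == self.history[1] else -1.
        return u if i == 0 else -u

    def player(self) -> Player:
        return cast(Player, len(self.history) % 2)

    def is_chance(self) -> bool:
        return False  # this toy game has no chance events

    def sample_chance(self) -> Action:
        raise RuntimeError('There are no chance events in matching pennies')

    def __add__(self, action: Action):
        return MatchingPenniesHistory(self.history + action)

    def info_set_key(self) -> str:
        # player 1 must not see player 0's move, so all histories at
        # player 1's turn share a single information set key
        return 'p0' if self.player() == 0 else 'p1?'

    def new_info_set(self) -> 'InfoSet':
        return MatchingPenniesInfoSet(self.info_set_key())

    def __repr__(self):
        return self.history

Running CFR(create_new_history=MatchingPenniesHistory, epochs=10_000).iterate() inside a labml experiment should drive both players' average strategies toward the uniform equilibrium $(\tfrac{1}{2}, \tfrac{1}{2})$.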

Counterfactual Regret Minimization (CFR) Algorithm

We do chance sampling (CS) where all the chance events (nodes) are sampled and all other events (nodes) are explored.

Since we are doing chance sampling, $q(z)$ is the same for all terminal histories, so we can ignore the term; the constant factor cancels out during regret matching (it is common to the numerator and denominator).

class CFR:

$\mathcal{I}$, the set of all information sets.

    info_sets: Dict[str, InfoSet]
  • create_new_history creates a new empty history
  • epochs is the number of iterations to train for, i.e. $T$
  • n_players is the number of players
    def __init__(self, *,
                 create_new_history: Callable[[], History],
                 epochs: int,
                 n_players: int = 2):
        self.n_players = n_players
        self.epochs = epochs
        self.create_new_history = create_new_history

A dictionary for $\mathcal{I}$, the set of all information sets

        self.info_sets = {}

Tracker for analytics

        self.tracker = InfoSetTracker()

Returns the information set $I$ of the current player for a given history $h$

    def _get_info_set(self, h: History):
        info_set_key = h.info_set_key()
        if info_set_key not in self.info_sets:
            self.info_sets[info_set_key] = h.new_info_set()
        return self.info_sets[info_set_key]

Walk Tree

This function walks the game tree.

  • h is the current history $h$
  • i is the player $i$ that we are computing regrets of
  • pi_i is $\pi^{\sigma^t}_i(h)$
  • pi_neg_i is $\pi^{\sigma^t}_{-i}(h)$

It returns the expected utility for the history $h$,

$$\sum_{z \in Z_h} \pi^{\sigma^t}(h, z) u_i(z)$$

where $Z_h$ is the set of terminal histories with prefix $h$.

While walking the tree, it updates the total regrets $\color{orange}{R^T_i(I, a)}$.

    def walk_tree(self, h: History, i: Player, pi_i: float, pi_neg_i: float) -> float:

If it’s a terminal history $h \in Z$ return the terminal utility $u_i(h)$.

        if h.is_terminal():
            return h.terminal_utility(i)

If it's a chance event $P(h) = c$, sample a chance event and continue to the next step.

        elif h.is_chance():
            a = h.sample_chance()
            return self.walk_tree(h + a, i, pi_i, pi_neg_i)

Get current player’s information set for $h$

        I = self._get_info_set(h)

To store $\sum_{z \in Z_h} \pi^\sigma(h, z) u_i(z)$

        v = 0

To store the expected utility for each action $a \in A(h)$

        va = {}

Iterate through all actions

        for a in I.actions():

If the current player is $i$,

            if i == h.player():

                va[a] = self.walk_tree(h + a, i, pi_i * I.strategy[a], pi_neg_i)

Otherwise,

            else:

                va[a] = self.walk_tree(h + a, i, pi_i, pi_neg_i * I.strategy[a])

            v = v + I.strategy[a] * va[a]

If the current player is $i$, update the cumulative strategies and total regrets

        if h.player() == i:

Update cumulative strategies

            for a in I.actions():
                I.cumulative_strategy[a] = I.cumulative_strategy[a] + pi_i * I.strategy[a]

Update the total regrets with the regret of each action, weighted by the opponents' and chance's reach probability $\pi^{\sigma^t}_{-i}(h)$

            for a in I.actions():
                I.regret[a] += pi_neg_i * (va[a] - v)

Update the strategy $\color{lightgreen}{\sigma^t(I)(a)}$

            I.calculate_strategy()

Return the expected utility for player $i$, $\sum_{z \in Z_h} \pi^{\sigma^t}(h, z) u_i(z)$

        return v

Iteratively update $\color{lightgreen}{\sigma^t(I)(a)}$

This updates the strategies for $T$ iterations.

    def iterate(self):

Loop for epochs iterations

        for t in monit.iterate('Train', self.epochs):

Walk tree and update regrets for each player

            for i in range(self.n_players):
                self.walk_tree(self.create_new_history(), cast(Player, i), 1, 1)

Track data for analytics

            tracker.add_global_step()
            self.tracker(self.info_sets)
            tracker.save()

Save checkpoints every $1,000$ iterations

            if (t + 1) % 1_000 == 0:
                experiment.save_checkpoint()

Print the information sets

        logger.inspect(self.info_sets)

Information set tracker

This is a small helper class to track data from information sets

class InfoSetTracker:

Set tracking indicators

    def __init__(self):
        tracker.set_histogram('strategy.*')
        tracker.set_histogram('average_strategy.*')
        tracker.set_histogram('regret.*')

Track the data from all information sets

    def __call__(self, info_sets: Dict[str, InfoSet]):
        for I in info_sets.values():
            avg_strategy = I.get_average_strategy()
            for a in I.actions():
                tracker.add({
                    f'strategy.{I.key}.{a}': I.strategy[a],
                    f'average_strategy.{I.key}.{a}': avg_strategy[a],
                    f'regret.{I.key}.{a}': I.regret[a],
                })

Configurable CFR module

class CFRConfigs(BaseConfigs):
    create_new_history: Callable[[], History]
    epochs: int = 100_000
    cfr: CFR = 'simple_cfr'

Initialize CFR algorithm

@option(CFRConfigs.cfr)
def simple_cfr(c: CFRConfigs):
    return CFR(create_new_history=c.create_new_history,
               epochs=c.epochs)
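
Finally, a hypothetical sketch of how one might wire this up (the names new_history and MatchingPenniesConfigs are our own; the linked Kuhn poker implementation follows a similar pattern with the labml experiment API):

def new_history() -> History:
    return MatchingPenniesHistory()  # the toy game sketched earlier


class MatchingPenniesConfigs(CFRConfigs):
    create_new_history = new_history


def main():
    # assumes the standard labml experiment API
    experiment.create(name='matching_pennies')
    conf = MatchingPenniesConfigs()
    experiment.configs(conf)
    with experiment.start():
        conf.cfr.iterate()


if __name__ == '__main__':
    main()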