Repeated Games
Exported
GameTheory.RepeatedGame — Type

    RepeatedGame{N,T,TD}

Type representing an N-player repeated game.

Fields

- `sg::NormalFormGame{N, T}`: The stage game used to create the repeated game.
- `delta::TD`: The common discount rate at which all players discount the future.
GameTheory.RepeatedGame — Method

    RepeatedGame(p1, p2, delta)

Helper constructor that builds a repeated game for two players.

Arguments

- `p1::Player`: The first player.
- `p2::Player`: The second player.
- `delta::TD`: The common discount rate at which all players discount the future.

Returns

- `::RepeatedGame`: The repeated game.
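A minimal usage sketch (the Prisoner's Dilemma payoffs are illustrative, matching the `AS` example on this page):

```julia
using GameTheory

# Stage-game payoff matrix for each player (rows: own actions,
# columns: the opponent's actions).
pd_payoff = [9.0 1.0; 10.0 3.0]
p1 = Player(pd_payoff)
p2 = Player(pd_payoff)

# Both players discount the future at the common rate delta = 0.75.
rpd = RepeatedGame(p1, p2, 0.75)
```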
GameTheory.AS — Method

    AS(rpd; maxiter=1000, plib=default_library(2, Float64), tol=1e-5,
       u=nothing, verbose=false)

Compute the set of payoff pairs of all pure-strategy subgame-perfect equilibria with public randomization for any two-player repeated game with perfect monitoring and discounting, using the AS algorithm of Abreu and Sannikov (2014).

If the payoffs are `Integer` or `Rational` and the discount factor is `Rational` (and the library specified by `plib` supports exact arithmetic), this function performs exact arithmetic and returns a `Matrix{Rational{BigInt}}`. Otherwise, it defaults to `Float64` computation.

Arguments

- `rpd::RepeatedGame{2, T, TD}`: Two-player repeated game with `T<:Real`, `TD<:Real`.
- `maxiter::Integer`: Maximum number of iterations.
- `plib`: Allows users to choose a particular package for the geometry computations. (See the Polyhedra.jl docs for more info.) By default, it uses `default_library`.
- `tol::Float64`: Tolerance in differences of set.
- `u`: The punishment payoff pair if any player deviates. By default, the minimax payoff pair is used. If there is a better guess, you can specify it by passing a `Vector` of length 2 (it will be clamped to the minimum payoffs internally if it is otherwise lower).
- `verbose::Bool`: If `true`, print convergence information. Defaults to `false`.

Returns

- `::Matrix{S}`: Vertices of the set of payoff pairs, where `S` is `Rational{BigInt}` if the payoffs are `Integer` or `Rational` and the discount factor is `Rational`, and `Float64` otherwise.
Examples

For a Prisoner's Dilemma stage game:

```julia
julia> pd_payoff = [9.0 1.0; 10.0 3.0];

julia> g = NormalFormGame(pd_payoff);  # symmetric 2-player game

julia> println(g)
2×2 NormalFormGame{2, Float64}:
 [9.0, 9.0]   [1.0, 10.0]
 [10.0, 1.0]  [3.0, 3.0]

julia> rpd = RepeatedGame(g, 0.75);

julia> vertices = AS(rpd; tol=1e-9)
4×2 Matrix{Float64}:
 3.0   3.0
 3.0   9.75
 9.0   9.0
 9.75  3.0
```

If payoffs are `Integer` or `Rational` and the discount factor is `Rational`, the computation will be in exact arithmetic:
```julia
julia> g_int = NormalFormGame(Int, g)  # convert to Int-valued game
2×2 NormalFormGame{2, Int64}

julia> rpd_rat = RepeatedGame(g_int, 3//4);  # discount factor as Rational

julia> using CDDLib

julia> vertices_rat = AS(rpd_rat; tol=1e-9, plib=CDDLib.Library());

julia> typeof(vertices_rat)
Matrix{Rational{BigInt}} (alias for Array{Rational{BigInt}, 2})

julia> size(vertices_rat)
(6, 2)
```

There is a utility `uniquetolrows` which can be used to remove approximately duplicate rows:

```julia
julia> uniquetolrows(vertices_rat, 1e-8)
4×2 Matrix{BigFloat}:
 9.0   9.0
 9.75  3.0
 3.0   9.75
 3.0   3.0
```

GameTheory.outerapproximation — Method

    outerapproximation(rpd; nH=32, tol=1e-8, maxiter=500, check_pure_nash=true,
                       verbose=false, nskipprint=50,
                       plib=CDDLib.Library(),
                       lp_solver=GameTheory.clp_optimizer_silent)

Approximates the set of equilibrium values for a repeated game with the outer hyperplane approximation described by Judd, Yeltekin, and Conklin (2002).
Arguments

- `rpd::RepGame2`: Two-player repeated game.
- `nH::Int`: Number of subgradients used for the approximation.
- `tol::Float64`: Tolerance in differences of set.
- `maxiter::Int`: Maximum number of iterations.
- `check_pure_nash`: Whether to check whether a pure-strategy Nash equilibrium exists.
- `verbose::Bool`: Whether to display updates about iterations and distance.
- `nskipprint::Int`: Number of iterations between printing information (assuming `verbose=true`).
- `plib::Polyhedra.Library`: Allows users to choose a particular package for the geometry computations. (See the Polyhedra.jl docs for more info.) By default, it uses `CDDLib.Library()`.
- `lp_solver`: Linear programming solver to be used internally. Pass a `MathOptInterface.AbstractOptimizer` type (such as `Clp.Optimizer`) if no option is needed, or a function (such as `GameTheory.clp_optimizer_silent`) to supply options.
Returns
- `vertices::Matrix{Float64}`: Vertices of the outer approximation of the value set.
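A usage sketch (the stage game is the Prisoner's Dilemma from the `AS` example above; the exact vertices returned depend on `nH` and the solver, so none are shown here):

```julia
using GameTheory

g = NormalFormGame([9.0 1.0; 10.0 3.0])
rpd = RepeatedGame(g, 0.75)

# Outer hyperplane approximation with 64 subgradients; the defaults
# (CDDLib for geometry, the silent Clp optimizer for the LPs) are used.
hp_pts = outerapproximation(rpd; nH=64, tol=1e-9)

# Each row of the result is a payoff pair, so there are 2 columns.
size(hp_pts, 2)
```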
GameTheory.uniquetolrows — Method

    uniquetolrows(V, tol)

Remove near-duplicate rows from matrix `V` using tolerance-based deduplication.

Arguments

- `V::AbstractMatrix{T}`: Input matrix where `T<:Real`.
- `tol::Real`: Tolerance for considering points as duplicates.

Returns

- `::Matrix`: Matrix of floats with duplicate rows removed within tolerance.
GameTheory.unpack — Method

    unpack(rpd)

Helper function that unpacks the elements of a repeated game.

Arguments

- `rpd::RepeatedGame`: The repeated game.

Returns

- `::Tuple{NormalFormGame, TD}`: A tuple containing the stage game and the discount factor delta.
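For illustration (a minimal sketch; `unpack` returns the stored stage game and discount factor):

```julia
using GameTheory

g = NormalFormGame([9.0 1.0; 10.0 3.0])
rpd = RepeatedGame(g, 0.75)

# Destructure the repeated game into its stage game and discount factor.
sg, delta = unpack(rpd)
```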
GameTheory.worst_value_1 — Function

See `worst_value_i` for documentation.

GameTheory.worst_value_2 — Function

See `worst_value_i` for documentation.
GameTheory.worst_value_i — Function

    worst_value_i(rpd, H, C, i, lp_solver=clp_optimizer_silent)

Given a constraint w ∈ W, this finds the worst possible payoff for agent i.

Arguments

- `rpd::RepGame2`: Two-player repeated game.
- `H::Matrix{Float64}`: Matrix of shape `(nH, 2)` containing the subgradients, where `nH` is the number of subgradients.
- `C::Vector{Float64}`: The array containing the hyperplane levels.
- `i::Int`: The player of interest.
- `lp_solver`: Linear programming solver to be used internally. Pass a `MathOptInterface.AbstractOptimizer` type (such as `Clp.Optimizer`) if no option is needed, or a function (such as `GameTheory.clp_optimizer_silent`) to supply options.

Returns

- `out::Float64`: Worst possible payoff for player `i`.
Internal
GameTheory.RepGame2 — Type

    RepGame2

Type representing a 2-player repeated game; alias for `RepeatedGame{2}`.
GameTheory._best_dev_gains — Method

    _best_dev_gains(g)

Calculate the payoff gains from deviating from the current action to the best response for each player.

Arguments

- `g::NormalFormGame{2, T}`: Two-player NormalFormGame.

Returns

- `::Tuple{Matrix{T}, Matrix{T}}`: Tuple of best deviation gain matrices for the two players. For example, for the first matrix `best_dev_gains1`, `best_dev_gains1[i, j]` is the payoff gain for player 1 from deviating to the best response from the i-th action, given that player 2 chooses the j-th action.
GameTheory._coefficient_type — Method

    _coefficient_type(rpd)

Helper function that determines the coefficient type for computations based on the repeated game's payoff and delta types.

Arguments

- `rpd::RepeatedGame`: The repeated game.

Returns

- `::Type`: `Rational{BigInt}` if payoffs are rational/integer and delta is rational, `Float64` otherwise.
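A small illustration of the rule above (this is an internal function, so the qualified name is used):

```julia
using GameTheory

# Int payoffs with a Rational discount factor: exact arithmetic is possible.
rpd_rat = RepeatedGame(NormalFormGame([9 1; 10 3]), 3//4)
T_exact = GameTheory._coefficient_type(rpd_rat)    # Rational{BigInt}, per the rule above

# Float64 payoffs (or a Float64 delta) fall back to floating point.
rpd_float = RepeatedGame(NormalFormGame([9.0 1.0; 10.0 3.0]), 0.75)
T_float = GameTheory._coefficient_type(rpd_float)  # Float64
```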
GameTheory._payoff_points — Method

    _payoff_points(::Type{T}, g)

Return a matrix with each row being a payoff pair point in the two-dimensional space.

Arguments

- `g::NormalFormGame{2}`: Two-player NormalFormGame.

Returns

- `v::Matrix{T}`: Matrix of size n by 2, where n is the number of action profiles. Each row corresponds to one payoff pair.
GameTheory.initialize_LP_matrices — Method

    initialize_LP_matrices(rpd, H)

Initialize matrices for the linear programming problems.

Arguments

- `rpd::RepeatedGame`: Two-player repeated game.
- `H::Matrix{Float64}`: Matrix of shape `(nH, 2)` containing the subgradients used to approximate the value set, where `nH` is the number of subgradients.

Returns

- `c::Vector{Float64}`: Vector of length `nH` used to determine which subgradient should be used, where `nH` is the number of subgradients.
- `A::Matrix{Float64}`: Matrix of shape `(nH+2, 2)` with `nH` set constraints, to be filled with 2 additional incentive compatibility constraints.
- `b::Vector{Float64}`: Vector of length `nH+2` to be filled with the values for the constraints.
GameTheory.initialize_sg_hpl — Method

    initialize_sg_hpl(nH, o, r)

Initializes subgradients, extreme points, and hyperplane levels for the approximation of the convex value set of a 2-player repeated game.

Arguments

- `nH::Int`: Number of subgradients used for the approximation.
- `o::Vector{Float64}`: Origin for the approximation.
- `r::Float64`: Radius for the approximation.

Returns

- `C::Vector{Float64}`: Vector of length `nH` containing the hyperplane levels.
- `H::Matrix{Float64}`: Matrix of shape `(nH, 2)` containing the subgradients.
- `Z::Matrix{Float64}`: Matrix of shape `(nH, 2)` containing the extreme points of the value set.
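A sketch of calling this method directly (it is internal, so the qualified name is used; the origin and radius here are arbitrary, and only the documented return shapes are checked):

```julia
using GameTheory

# 8 subgradients around the origin (0, 0) with radius 2.
C, H, Z = GameTheory.initialize_sg_hpl(8, [0.0, 0.0], 2.0)

length(C), size(H), size(Z)  # (8, (8, 2), (8, 2))
```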
GameTheory.initialize_sg_hpl — Method

    initialize_sg_hpl(rpd, nH)

Initializes subgradients, extreme points, and hyperplane levels for the approximation of the convex value set of a 2-player repeated game by choosing an appropriate origin and radius.

Arguments

- `rpd::RepeatedGame`: Two-player repeated game.
- `nH::Int`: Number of subgradients used for the approximation.

Returns

- `C::Vector{Float64}`: Vector of length `nH` containing the hyperplane levels.
- `H::Matrix{Float64}`: Matrix of shape `(nH, 2)` containing the subgradients.
- `Z::Matrix{Float64}`: Matrix of shape `(nH, 2)` containing the extreme points of the value set.
GameTheory.unitcircle — Method

    unitcircle(npts)

Places `npts` equally spaced points along the two-dimensional unit circle and returns the points, with x coordinates in the first column and y coordinates in the second column.

Arguments

- `npts::Int`: Number of points to be placed.

Returns

- `pts::Matrix{Float64}`: Matrix of shape `(npts, 2)` containing the coordinates of the points.
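For example (an internal helper, so the qualified name is used; the starting angle of the points is an implementation detail, so only the shape and unit norms are checked):

```julia
using GameTheory
using LinearAlgebra: norm

# 4 equally spaced points on the unit circle, one per row.
pts = GameTheory.unitcircle(4)

# Every row should lie at distance 1 from the origin.
row_norms = [norm(pts[i, :]) for i in 1:size(pts, 1)]
```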