API Reference

Exported

Model Construction

ContinuousDPs.ContinuousDP (Type)
ContinuousDP{N,Tf,Tg,TR,Tlb,Tub,TI}

Type representing a continuous-state dynamic program with N-dimensional state space.

Fields

  • f::Tf: Reward function f(s, x).
  • g::Tg: State transition function g(s, x, e).
  • discount::Float64: Discount factor.
  • shocks::TR<:AbstractVecOrMat: Discretized shock nodes.
  • weights::Vector{Float64}: Probability weights for the shock nodes.
  • x_lb::Tlb: Lower bound of the action variable as a function of state.
  • x_ub::Tub: Upper bound of the action variable as a function of state.
  • interp::TI<:Interp{N}: Object that contains the information about the interpolation scheme.

ContinuousDPs.ContinuousDP (Method)
ContinuousDP(f, g, discount, shocks, weights, x_lb, x_ub, basis)

Constructor for ContinuousDP.

Arguments

  • f: Reward function f(s, x).
  • g: State transition function g(s, x, e).
  • discount::Real: Discount factor.
  • shocks::AbstractVecOrMat: Discretized shock nodes.
  • weights::Vector{Float64}: Probability weights for the shock nodes.
  • x_lb: Lower bound of the action variable as a function of state.
  • x_ub: Upper bound of the action variable as a function of state.
  • basis::Basis: Object that contains the interpolation basis information.
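
As an illustration, a one-dimensional stochastic optimal growth model could be assembled as follows. The primitives (f, g, the shock discretization, and the Chebyshev basis) are assumptions for this sketch, not part of the package documentation:

```julia
using ContinuousDPs, BasisMatrices, QuantEcon

# Reward: log utility over consumption c = s - x, guarded away from zero
f(s, x) = log(max(s - x, 1e-10))
# Transition: savings x become next-period state, scaled by shock e
g(s, x, e) = e * x^0.65

shocks, weights = qnwlogn(5, 0.0, 0.02)   # 5-node discretized lognormal shock
x_lb(s) = 0.0                             # cannot save a negative amount
x_ub(s) = s                               # cannot save more than the state
basis = Basis(ChebParams(25, 1e-2, 2.0))  # 25 Chebyshev nodes on [0.01, 2.0]

cdp = ContinuousDP(f, g, 0.9, shocks, weights, x_lb, x_ub, basis)
```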

ContinuousDPs.ContinuousDP (Method)
ContinuousDP(cdp::ContinuousDP; f=cdp.f, g=cdp.g, discount=cdp.discount,
             shocks=cdp.shocks, weights=cdp.weights,
             x_lb=cdp.x_lb, x_ub=cdp.x_ub, basis=cdp.interp.basis)

Construct a copy of cdp, optionally replacing selected model components.


Solving the Model

QuantEcon.solve (Function)
solve(cdp, method=PFI; v_init=zeros(cdp.interp.length), tol=sqrt(eps()),
      max_iter=500, verbose=2, print_skip=50, kwargs...)

Solve the continuous-state dynamic program by the specified method.

Arguments

  • cdp::ContinuousDP: The dynamic program to solve.
  • method::Type{<:DPAlgorithm}: Solution method: VFI for value function iteration, PFI for policy function iteration, or LQA for linear-quadratic approximation. Default is PFI.
  • v_init::Vector{Float64}: Initial value function values at interpolation nodes.
  • tol::Real: Convergence tolerance.
  • max_iter::Integer: Maximum number of iterations.
  • verbose::Integer: Level of feedback (0 for no output, 1 for warnings only, 2 for warnings and convergence messages during iteration).
  • print_skip::Integer: If verbose == 2, how many iterations between print messages.
  • point::Tuple{ScalarOrArray, ScalarOrArray, ScalarOrArray}: Keyword argument required when method is LQA. Specify the steady state (s, x, e) around which the LQ approximation is constructed.

Returns

  • res::CDPSolveResult: Solution object of the dynamic program.
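
For example, with a cdp built as in the constructor section (the growth-model primitives below are illustrative assumptions, not from the package documentation), policy function iteration runs as:

```julia
using ContinuousDPs, BasisMatrices, QuantEcon

f(s, x) = log(max(s - x, 1e-10))           # illustrative reward
g(s, x, e) = e * x^0.65                    # illustrative transition
shocks, weights = qnwlogn(5, 0.0, 0.02)
basis = Basis(ChebParams(25, 1e-2, 2.0))
cdp = ContinuousDP(f, g, 0.9, shocks, weights, s -> 0.0, s -> s, basis)

res = solve(cdp, PFI; verbose=0)           # policy function iteration, silent
```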

ContinuousDPs.LQA (Type)
LQA

Linear-quadratic approximation algorithm for solve.

Use as solve(cdp, LQA; point=(s, x, e)) to approximate the model around a reference point and solve the resulting LQ problem.


Evaluation and Simulation

ContinuousDPs.set_eval_nodes! (Function)
set_eval_nodes!(res, s_nodes_coord)

Set the evaluation nodes and recompute the value/policy functions.

Arguments

  • res::CDPSolveResult: Solution object to update in place.
  • s_nodes_coord::NTuple{N,AbstractVector}: Coordinate vectors of the new evaluation nodes.

QuantEcon.simulate (Function)
simulate([rng=GLOBAL_RNG], res, s_init, ts_length)

Generate a sample path of state variable(s) from a solved model.

Arguments

  • rng::AbstractRNG: Random number generator.
  • res::CDPSolveResult: Solution object of the dynamic program.
  • s_init: Initial value of state variable(s).
  • ts_length::Integer: Length of simulation.

Returns

  • s_path::VecOrMat: Generated sample path of state variable(s).
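
A sketch of usage, assuming the illustrative growth model from the constructor example:

```julia
using ContinuousDPs, BasisMatrices, QuantEcon, Random

f(s, x) = log(max(s - x, 1e-10))
g(s, x, e) = e * x^0.65
shocks, weights = qnwlogn(5, 0.0, 0.02)
basis = Basis(ChebParams(25, 1e-2, 2.0))
cdp = ContinuousDP(f, g, 0.9, shocks, weights, s -> 0.0, s -> s, basis)
res = solve(cdp, PFI; verbose=0)

# Simulate 100 periods of the state from s = 1.0 with a fixed seed
s_path = simulate(MersenneTwister(1234), res, 1.0, 100)
```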

QuantEcon.simulate! (Function)
simulate!([rng=GLOBAL_RNG], s_path, res, s_init)

Generate a sample path of state variable(s) from a solved model.

Arguments

  • rng::AbstractRNG: Random number generator.
  • s_path::VecOrMat: Array to store the generated sample path.
  • res::CDPSolveResult: Solution object of the dynamic program.
  • s_init: Initial value of state variable(s).

Returns

  • s_path::VecOrMat: Generated sample path of state variable(s).

LQ Approximation

ContinuousDPs.approx_lq (Function)
approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star,
          discount)

Construct a QuantEcon.LQ instance that approximates the dynamic program around a steady state.

Arguments

  • s_star::ScalarOrArray{T}: Steady-state value of the state variable(s).
  • x_star::ScalarOrArray{T}: Steady-state value of the action variable(s).
  • f_star::Real: Reward function evaluated at the steady state.
  • Df_star::AbstractVector{T}: Gradient of the reward function f at the steady state, Df_star = [f_s', f_x'].
  • DDf_star::AbstractMatrix{T}: Hessian of the reward function f at the steady state, DDf_star = [f_ss f_sx; f_xs f_xx].
  • g_star::ScalarOrArray{T}: State transition function evaluated at the steady state.
  • Dg_star::AbstractMatrix{T}: Jacobian of the transition function g at the steady state, Dg_star = [g_s, g_x].
  • discount::Real: Discount factor.

Returns

  • lq::QuantEcon.LQ: The LQ approximation.
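
As a sketch, consider the illustrative model f(s, x) = -(s^2 + x^2), g(s, x, e) = s + x + e, whose derivatives at the steady state (s, x, e) = (0, 0, 0) are easy to compute by hand:

```julia
using ContinuousDPs, QuantEcon

s_star, x_star = 0.0, 0.0
f_star   = 0.0                         # f at the steady state
Df_star  = [0.0, 0.0]                  # [f_s, f_x]
DDf_star = [-2.0 0.0; 0.0 -2.0]        # [f_ss f_sx; f_xs f_xx]
g_star   = 0.0                         # g at the steady state
Dg_star  = [1.0 1.0]                   # [g_s g_x]

lq = approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star, 0.95)
```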

Internal

ContinuousDPs.CDPSolveResult (Type)
CDPSolveResult{Algo,N,TCDP,TE}

Type storing the solution of a continuous-state dynamic program obtained by algorithm Algo.

Fields

  • cdp::TCDP<:ContinuousDP{N}: The dynamic program that was solved.
  • tol::Float64: Convergence tolerance used by the solver.
  • max_iter::Int: Maximum number of iterations allowed.
  • C::Vector{Float64}: Basis coefficient vector for the fitted value function.
  • converged::Bool: Whether the algorithm converged.
  • num_iter::Int: Number of iterations performed.
  • eval_nodes::TE<:VecOrMat: Nodes at which the solution is evaluated. Defaults to cdp.interp.S.
  • eval_nodes_coord::NTuple{N,Vector{Float64}}: Coordinate vectors of the evaluation nodes along each dimension. Defaults to cdp.interp.Scoord.
  • V::Vector{Float64}: Value function evaluated at eval_nodes.
  • X::Vector{Float64}: Policy function evaluated at eval_nodes.
  • resid::Vector{Float64}: Approximation residuals at eval_nodes.

ContinuousDPs.CDPSolveResult (Method)
(res::CDPSolveResult)(s_nodes)

Evaluate the solved model at user-supplied state nodes.

Returns (V, X, resid), where V is the value function, X is the greedy policy, and resid is the Bellman residual at s_nodes.
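
For instance, with a solved res for a one-dimensional model (the setup below repeats the illustrative growth model and is an assumption of this sketch), the solution can be evaluated on a finer grid:

```julia
using ContinuousDPs, BasisMatrices, QuantEcon

f(s, x) = log(max(s - x, 1e-10))
g(s, x, e) = e * x^0.65
shocks, weights = qnwlogn(5, 0.0, 0.02)
basis = Basis(ChebParams(25, 1e-2, 2.0))
cdp = ContinuousDP(f, g, 0.9, shocks, weights, s -> 0.0, s -> s, basis)
res = solve(cdp, PFI; verbose=0)

s_fine = collect(range(0.1, stop=1.9, length=50))
V, X, resid = res(s_fine)   # value, policy, and Bellman residual on s_fine
```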


ContinuousDPs.Interp (Type)
Interp{N,TB,TS,TM,TL}

Type representing an interpolation scheme on an N-dimensional domain.

Fields

  • basis::TB<:Basis{N}: Object that contains the interpolation basis information.
  • S::TS<:VecOrMat: Vector or matrix containing the interpolation nodes (collocation points).
  • Scoord::NTuple{N,Vector{Float64}}: Coordinate vectors of the interpolation nodes along each dimension.
  • length::Int: Total number of interpolation nodes on the tensor grid.
  • size::NTuple{N,Int}: Number of interpolation nodes along each dimension.
  • lb::NTuple{N,Float64}: Lower bounds of the domain.
  • ub::NTuple{N,Float64}: Upper bounds of the domain.
  • Phi::TM<:AbstractMatrix: Basis matrix evaluated at the interpolation nodes.
  • Phi_lu::TL<:Factorization: LU factorization of Phi.

ContinuousDPs.Interp (Method)
Interp(basis)

Construct an Interp from a Basis.

Arguments

  • basis::Basis: Object that contains the interpolation basis information.

ContinuousDPs._s_wise_max! (Method)
_s_wise_max!(cdp, s, C, sp)

Find the optimal value and action at a given state s.

Arguments

  • cdp::ContinuousDP: The dynamic program.
  • s: State point at which to maximize.
  • C: Basis coefficient vector for the value function.
  • sp::Matrix{Float64}: Workspace for next-state evaluations.

Returns

  • v::Float64: Optimal value at s.
  • x::Float64: Optimal action at s.

ContinuousDPs._solve! (Method)
_solve!(cdp, res, verbose, print_skip; point)

Implement the linear-quadratic approximation algorithm. See solve for further details.


ContinuousDPs.evaluate! (Method)
evaluate!(res)

Evaluate the value function and the policy function at the evaluation nodes.

Arguments

  • res::CDPSolveResult: Solution object to update in place.

ContinuousDPs.evaluate_policy! (Method)
evaluate_policy!(cdp, X, C)

Compute the value function for a given policy and update the basis coefficients.

Arguments

  • cdp::ContinuousDP: The dynamic program.
  • X::Vector{Float64}: Policy function vector.
  • C::Vector{Float64}: A buffer array to hold the basis coefficients.

Returns

  • C::Vector{Float64}: Updated basis coefficient vector.

ContinuousDPs.operator_iteration! (Method)
operator_iteration!(T, C, tol, max_iter; verbose=2, print_skip=50)

Iterate an operator on the basis coefficients until convergence.

Arguments

  • T::Function: Operator that updates basis coefficients (one step of VFI or PFI).
  • C::Vector{Float64}: Initial basis coefficient vector.
  • tol::Float64: Convergence tolerance.
  • max_iter::Integer: Maximum number of iterations.
  • verbose::Integer: Level of feedback (0 for no output, 1 for warnings only, 2 for warnings and convergence messages during iteration).
  • print_skip::Integer: If verbose == 2, how many iterations between print messages.

Returns

  • converged::Bool: Whether the iteration converged.
  • i::Int: Number of iterations performed.

ContinuousDPs.policy_iteration_operator! (Method)
policy_iteration_operator!(cdp, C, X)

Perform one step of policy function iteration and update the basis coefficients.

Arguments

  • cdp::ContinuousDP: The dynamic program.
  • C::Vector{Float64}: Basis coefficient vector for the value function.
  • X::Vector{Float64}: A buffer array to hold the updated policy function.

Returns

  • C::Vector{Float64}: Updated basis coefficient vector.

ContinuousDPs.s_wise_max! (Method)
s_wise_max!(cdp, ss, C, Tv, X)

Find optimal value and action for each grid point.

Arguments

  • cdp::ContinuousDP: The dynamic program.
  • ss::AbstractArray{Float64}: Interpolation nodes.
  • C::Vector{Float64}: Basis coefficient vector for the value function.
  • Tv::Vector{Float64}: A buffer array to hold the updated value function.
  • X::Vector{Float64}: A buffer array to hold the updated policy function.

Returns

  • Tv::Vector{Float64}: Updated value function vector.
  • X::Vector{Float64}: Updated policy function vector.

ContinuousDPs.s_wise_max! (Method)
s_wise_max!(cdp, ss, C, Tv)

Find optimal value for each grid point.

Arguments

  • cdp::ContinuousDP: The dynamic program.
  • ss::AbstractArray{Float64}: Interpolation nodes.
  • C::Vector{Float64}: Basis coefficient vector for the value function.
  • Tv::Vector{Float64}: A buffer array to hold the updated value function.

Returns

  • Tv::Vector{Float64}: Updated value function vector.

ContinuousDPs.s_wise_max (Method)
s_wise_max(cdp, ss, C)

Find optimal value and action for each grid point.

Arguments

  • cdp::ContinuousDP: The dynamic program.
  • ss::AbstractArray{Float64}: Interpolation nodes.
  • C::Vector{Float64}: Basis coefficient vector for the value function.

Returns

  • Tv::Vector{Float64}: Value function vector.
  • X::Vector{Float64}: Policy function vector.

QuantEcon.bellman_operator! (Method)
bellman_operator!(cdp, C, Tv)

Apply the Bellman operator and update the basis coefficients. Values are stored in Tv.

Arguments

  • cdp::ContinuousDP: The dynamic program.
  • C::Vector{Float64}: Basis coefficient vector for the value function.
  • Tv::Vector{Float64}: Vector to store values.

Returns

  • C::Vector{Float64}: Updated basis coefficient vector.

QuantEcon.compute_greedy! (Method)
compute_greedy!(cdp, C, X)
compute_greedy!(cdp, ss, C, X)

Compute the greedy policy for the given basis coefficients.

Arguments

  • cdp::ContinuousDP: The dynamic program.
  • ss::AbstractArray{Float64}: Interpolation nodes.
  • C::Vector{Float64}: Basis coefficient vector for the value function.
  • X::Vector{Float64}: A buffer array to hold the updated policy function.

Returns

  • X::Vector{Float64}: Updated policy function vector.