API Reference
Exported
Model Construction
ContinuousDPs.ContinuousDP — Type
ContinuousDP{N,Tf,Tg,TR,Tlb,Tub,TI}

Type representing a continuous-state dynamic program with an N-dimensional state space.
Fields
- `f::Tf`: Reward function `f(s, x)`.
- `g::Tg`: State transition function `g(s, x, e)`.
- `discount::Float64`: Discount factor.
- `shocks::TR<:AbstractVecOrMat`: Discretized shock nodes.
- `weights::Vector{Float64}`: Probability weights for the shock nodes.
- `x_lb::Tlb`: Lower bound of the action variable as a function of the state.
- `x_ub::Tub`: Upper bound of the action variable as a function of the state.
- `interp::TI<:Interp{N}`: Object that contains the information about the interpolation scheme.
ContinuousDPs.ContinuousDP — Method
ContinuousDP(f, g, discount, shocks, weights, x_lb, x_ub, basis)

Constructor for ContinuousDP.
Arguments
- `f`: Reward function `f(s, x)`.
- `g`: State transition function `g(s, x, e)`.
- `discount::Real`: Discount factor.
- `shocks::AbstractVecOrMat`: Discretized shock nodes.
- `weights::Vector{Float64}`: Probability weights for the shock nodes.
- `x_lb`: Lower bound of the action variable as a function of the state.
- `x_ub`: Upper bound of the action variable as a function of the state.
- `basis::Basis`: Object that contains the interpolation basis information.
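As an illustration, a ContinuousDP for a stochastic optimal growth model might be constructed as follows. This is a sketch: the model, parameter values, and grid bounds are illustrative, and `qnwlogn` and `ChebParams` come from the QuantEcon.jl and BasisMatrices.jl packages respectively.

```julia
using ContinuousDPs, QuantEcon, BasisMatrices

# Illustrative stochastic optimal growth model
f(s, x) = log(s - x)          # reward: log utility of consumption s - x
g(s, x, e) = x^0.2 * e        # transition: saved capital x hit by shock e
discount = 0.9
shocks, weights = qnwlogn(5, 0.0, 0.02)  # 5-node discretized lognormal shock
x_lb(s) = 0.0                 # action bounds as functions of the state
x_ub(s) = s
basis = Basis(ChebParams(25, 0.2, 2.0))  # Chebyshev basis, 25 nodes on [0.2, 2.0]

cdp = ContinuousDP(f, g, discount, shocks, weights, x_lb, x_ub, basis)
```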
ContinuousDPs.ContinuousDP — Method
ContinuousDP(cdp::ContinuousDP; f=cdp.f, g=cdp.g, discount=cdp.discount,
             shocks=cdp.shocks, weights=cdp.weights,
             x_lb=cdp.x_lb, x_ub=cdp.x_ub, basis=cdp.interp.basis)

Construct a copy of cdp, optionally replacing selected model components.
Solving the Model
QuantEcon.solve — Function
solve(cdp, method=PFI; v_init=zeros(cdp.interp.length), tol=sqrt(eps()),
      max_iter=500, verbose=2, print_skip=50, kwargs...)

Solve the continuous-state dynamic program by the specified method.
Arguments
- `cdp::ContinuousDP`: The dynamic program to solve.
- `method::Type{<:DPAlgorithm}`: Solution method: `VFI` for value function iteration, `PFI` for policy function iteration, or `LQA` for linear-quadratic approximation. Default is `PFI`.
- `v_init::Vector{Float64}`: Initial value function values at the interpolation nodes.
- `tol::Real`: Convergence tolerance.
- `max_iter::Integer`: Maximum number of iterations.
- `verbose::Integer`: Level of feedback (0 for no output, 1 for warnings only, 2 for warning and convergence messages during iteration).
- `print_skip::Integer`: If `verbose == 2`, number of iterations between print messages.
- `point::Tuple{ScalarOrArray,ScalarOrArray,ScalarOrArray}`: Keyword argument required when `method` is `LQA`. Specifies the steady state `(s, x, e)` around which the LQ approximation is constructed.
Returns
res::CDPSolveResult: Solution object of the dynamic program.
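For example, given a previously constructed ContinuousDP `cdp`, a typical call might look like the following (a sketch; the keyword values are illustrative):

```julia
# Solve by policy function iteration (the default method)
res = solve(cdp, PFI; tol=1e-8, verbose=0)

res.converged  # whether the iteration converged
res.V          # value function at the evaluation nodes
res.X          # policy function at the evaluation nodes

# For LQA, a steady-state point must be supplied as a keyword argument:
# res_lqa = solve(cdp, LQA; point=(s_star, x_star, e_star))
```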
QuantEcon.VFI — Type
VFI

Value function iteration algorithm for solve.
QuantEcon.PFI — Type
PFI

Policy function iteration algorithm for solve.
ContinuousDPs.LQA — Type
LQA

Linear-quadratic approximation algorithm for solve.
Use as solve(cdp, LQA; point=(s, x, e)) to approximate the model around a reference point and solve the resulting LQ problem.
Evaluation and Simulation
ContinuousDPs.set_eval_nodes! — Function
set_eval_nodes!(res, s_nodes_coord)

Set the evaluation nodes and recompute the value and policy functions.
Arguments
- `res::CDPSolveResult`: Solution object to update in place.
- `s_nodes_coord::NTuple{N,AbstractVector}`: Coordinate vectors of the new evaluation nodes.
QuantEcon.simulate — Function
simulate([rng=GLOBAL_RNG], res, s_init, ts_length)

Generate a sample path of the state variable(s) from a solved model.
Arguments
- `rng::AbstractRNG`: Random number generator.
- `res::CDPSolveResult`: Solution object of the dynamic program.
- `s_init`: Initial value of the state variable(s).
- `ts_length::Integer`: Length of the simulation.
Returns
s_path::VecOrMat: Generated sample path of state variable(s).
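A usage sketch, assuming `res` is a CDPSolveResult for a model with a scalar state (the seed and initial state are illustrative):

```julia
using Random

rng = MersenneTwister(1234)            # fixed seed for reproducibility
s_path = simulate(rng, res, 1.0, 100)  # path of length 100 from s_init = 1.0
```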
QuantEcon.simulate! — Function
simulate!([rng=GLOBAL_RNG], s_path, res, s_init)

Generate a sample path of the state variable(s) from a solved model, storing it in s_path.
Arguments
- `rng::AbstractRNG`: Random number generator.
- `s_path::VecOrMat`: Array to store the generated sample path.
- `res::CDPSolveResult`: Solution object of the dynamic program.
- `s_init`: Initial value of the state variable(s).
Returns
s_path::VecOrMat: Generated sample path of state variable(s).
LQ Approximation
ContinuousDPs.approx_lq — Function
approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star,
          discount)

Construct a QuantEcon.LQ instance that approximates the dynamic program around a steady state.
Arguments
- `s_star::ScalarOrArray{T}`: Steady-state value of the state variable(s).
- `x_star::ScalarOrArray{T}`: Steady-state value of the action variable(s).
- `f_star::Real`: Reward function evaluated at the steady state.
- `Df_star::AbstractVector{T}`: Gradient of the reward function `f` at the steady state, `Df_star = [f_s', f_x']`.
- `DDf_star::AbstractMatrix{T}`: Hessian of the reward function `f` at the steady state, `DDf_star = [f_ss f_sx; f_xs f_xx]`.
- `g_star::ScalarOrArray{T}`: State transition function evaluated at the steady state.
- `Dg_star::AbstractMatrix{T}`: Jacobian of the transition function `g` at the steady state, `Dg_star = [g_s, g_x]`.
- `discount::Real`: Discount factor.
Returns
lq::QuantEcon.LQ: The LQ approximation.
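The inputs encode the standard second-order expansion of the reward and first-order expansion of the transition around the steady state. Schematically (the stacking convention follows the argument descriptions above; this is a sketch, not the package's exact internal convention):

```math
f(s, x) \approx f^* + Df^{*\prime} z + \tfrac{1}{2} z' \, DDf^* \, z,
\qquad
z = \begin{bmatrix} s - s^* \\ x - x^* \end{bmatrix},
```

```math
g(s, x, e) \approx g^* + Dg^* z \quad \text{(evaluated at the steady-state shock)},
```

from which the quadratic objective and linear law of motion of the QuantEcon.LQ problem can be read off.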
Internal
ContinuousDPs.CDPSolveResult — Type
CDPSolveResult{Algo,N,TCDP,TE}

Type storing the solution of a continuous-state dynamic program obtained by algorithm Algo.
Fields
- `cdp::TCDP<:ContinuousDP{N}`: The dynamic program that was solved.
- `tol::Float64`: Convergence tolerance used by the solver.
- `max_iter::Int`: Maximum number of iterations allowed.
- `C::Vector{Float64}`: Basis coefficient vector for the fitted value function.
- `converged::Bool`: Whether the algorithm converged.
- `num_iter::Int`: Number of iterations performed.
- `eval_nodes::TE<:VecOrMat`: Nodes at which the solution is evaluated. Defaults to `cdp.interp.S`.
- `eval_nodes_coord::NTuple{N,Vector{Float64}}`: Coordinate vectors of the evaluation nodes along each dimension. Defaults to `cdp.interp.Scoord`.
- `V::Vector{Float64}`: Value function evaluated at `eval_nodes`.
- `X::Vector{Float64}`: Policy function evaluated at `eval_nodes`.
- `resid::Vector{Float64}`: Approximation residuals at `eval_nodes`.
ContinuousDPs.CDPSolveResult — Method
(res::CDPSolveResult)(s_nodes)

Evaluate the solved model at user-supplied state nodes.
Returns (V, X, resid), where V is the value function, X is the greedy policy, and resid is the Bellman residual at s_nodes.
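For instance, with a one-dimensional state the solution can be evaluated on a finer grid than the collocation nodes (a sketch; the grid bounds are illustrative):

```julia
s_fine = collect(range(0.2, 2.0, length=200))  # illustrative evaluation grid
V_fine, X_fine, resid_fine = res(s_fine)       # value, policy, and residual on the grid
```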
ContinuousDPs.Interp — Type
Interp{N,TB,TS,TM,TL}

Type representing an interpolation scheme on an N-dimensional domain.
Fields
- `basis::TB<:Basis{N}`: Object that contains the interpolation basis information.
- `S::TS<:VecOrMat`: Vector or matrix that contains the interpolation nodes (collocation points).
- `Scoord::NTuple{N,Vector{Float64}}`: Coordinate vectors of the interpolation nodes along each dimension.
- `length::Int`: Total number of interpolation nodes on the tensor grid.
- `size::NTuple{N,Int}`: Number of interpolation nodes along each dimension.
- `lb::NTuple{N,Float64}`: Lower bounds of the domain.
- `ub::NTuple{N,Float64}`: Upper bounds of the domain.
- `Phi::TM<:AbstractMatrix`: Basis matrix evaluated at the interpolation nodes.
- `Phi_lu::TL<:Factorization`: LU factorization of `Phi`.
ContinuousDPs.Interp — Method
Interp(basis)

Construct an Interp from a Basis.
Arguments
basis::Basis: Object that contains the interpolation basis information.
ContinuousDPs._s_wise_max! — Method
_s_wise_max!(cdp, s, C, sp)

Find the optimal value and action at a given state s.
Arguments
- `cdp::ContinuousDP`: The dynamic program.
- `s`: State point at which to maximize.
- `C`: Basis coefficient vector for the value function.
- `sp::Matrix{Float64}`: Workspace for next-state evaluations.
Returns
- `v::Float64`: Optimal value at `s`.
- `x::Float64`: Optimal action at `s`.
ContinuousDPs._solve! — Method
_solve!(cdp, res, verbose, print_skip; point)

Implement linear-quadratic approximation. See solve for further details.
ContinuousDPs._solve! — Method
_solve!(cdp, res, verbose, print_skip)

Implement policy function iteration. See solve for further details.
ContinuousDPs._solve! — Method
_solve!(cdp, res, verbose, print_skip)

Implement value function iteration. See solve for further details.
ContinuousDPs.evaluate! — Method
evaluate!(res)

Evaluate the value function and the policy function at the evaluation nodes.
Arguments
res::CDPSolveResult: Solution object to update in place.
ContinuousDPs.evaluate_policy! — Method
evaluate_policy!(cdp, X, C)

Compute the value function for a given policy and update the basis coefficients.
Arguments
- `cdp::ContinuousDP`: The dynamic program.
- `X::Vector{Float64}`: Policy function vector.
- `C::Vector{Float64}`: A buffer array to hold the basis coefficients.
Returns
C::Vector{Float64}: Updated basis coefficient vector.
ContinuousDPs.operator_iteration! — Method
operator_iteration!(T, C, tol, max_iter; verbose=2, print_skip=50)

Iterate an operator on the basis coefficients until convergence.
Arguments
- `T::Function`: Operator that updates the basis coefficients (one step of VFI or PFI).
- `C::Vector{Float64}`: Initial basis coefficient vector.
- `tol::Float64`: Convergence tolerance.
- `max_iter::Integer`: Maximum number of iterations.
- `verbose::Integer`: Level of feedback (0 for no output, 1 for warnings only, 2 for warning and convergence messages during iteration).
- `print_skip::Integer`: If `verbose == 2`, number of iterations between print messages.
Returns
- `converged::Bool`: Whether the iteration converged.
- `i::Int`: Number of iterations performed.
ContinuousDPs.policy_iteration_operator! — Method
policy_iteration_operator!(cdp, C, X)

Perform one step of policy function iteration and update the basis coefficients.
Arguments
- `cdp::ContinuousDP`: The dynamic program.
- `C::Vector{Float64}`: Basis coefficient vector for the value function.
- `X::Vector{Float64}`: A buffer array to hold the updated policy function.
Returns
C::Vector{Float64}: Updated basis coefficient vector.
ContinuousDPs.s_wise_max! — Method
s_wise_max!(cdp, ss, C, Tv, X)

Find the optimal value and action at each grid point.
Arguments
- `cdp::ContinuousDP`: The dynamic program.
- `ss::AbstractArray{Float64}`: Interpolation nodes.
- `C::Vector{Float64}`: Basis coefficient vector for the value function.
- `Tv::Vector{Float64}`: A buffer array to hold the updated value function.
- `X::Vector{Float64}`: A buffer array to hold the updated policy function.
Returns
- `Tv::Vector{Float64}`: Updated value function vector.
- `X::Vector{Float64}`: Updated policy function vector.
ContinuousDPs.s_wise_max! — Method
s_wise_max!(cdp, ss, C, Tv)

Find the optimal value at each grid point.
Arguments
- `cdp::ContinuousDP`: The dynamic program.
- `ss::AbstractArray{Float64}`: Interpolation nodes.
- `C::Vector{Float64}`: Basis coefficient vector for the value function.
- `Tv::Vector{Float64}`: A buffer array to hold the updated value function.
Returns
Tv::Vector{Float64}: Updated value function vector.
ContinuousDPs.s_wise_max — Method
s_wise_max(cdp, ss, C)

Find the optimal value and action at each grid point.
Arguments
- `cdp::ContinuousDP`: The dynamic program.
- `ss::AbstractArray{Float64}`: Interpolation nodes.
- `C::Vector{Float64}`: Basis coefficient vector for the value function.
Returns
- `Tv::Vector{Float64}`: Value function vector.
- `X::Vector{Float64}`: Policy function vector.
QuantEcon.bellman_operator! — Method
bellman_operator!(cdp, C, Tv)

Apply the Bellman operator and update the basis coefficients. The updated values are stored in Tv.
Arguments
- `cdp::ContinuousDP`: The dynamic program.
- `C::Vector{Float64}`: Basis coefficient vector for the value function.
- `Tv::Vector{Float64}`: Vector to store the values.
Returns
C::Vector{Float64}: Updated basis coefficient vector.
QuantEcon.compute_greedy! — Method
compute_greedy!(cdp, C, X)
compute_greedy!(cdp, ss, C, X)

Compute the greedy policy for the given basis coefficients.
Arguments
- `cdp::ContinuousDP`: The dynamic program.
- `ss::AbstractArray{Float64}`: Interpolation nodes.
- `C::Vector{Float64}`: Basis coefficient vector for the value function.
- `X::Vector{Float64}`: A buffer array to hold the updated policy function.
Returns
X::Vector{Float64}: Updated policy function vector.