# ContinuousDPs.jl

Routines for solving continuous-state dynamic programs by the Bellman equation collocation method.
## Installation

To install the package, open the Julia package manager (Pkg) and type:

```
add https://github.com/QuantEcon/ContinuousDPs.jl
```

## Supported problem class
ContinuousDPs.jl solves infinite-horizon dynamic programs of the form
\[V(s) = \max_{x\in[x_{lb}(s), x_{ub}(s)]} \left \{ f(s,x) + \beta \mathbb{E}_{\varepsilon} \left [ V(g(s,x,\varepsilon)) \right ] \right \}\]
where

- $s \in \mathbb{R}^d$ is the state (continuous, possibly multi-dimensional),
- $x \in \mathbb{R}$ is the action (continuous, one-dimensional),
- $f(s, x)$ is the reward function,
- $g(s, x, \varepsilon)$ is the state transition function,
- $\varepsilon$ is a random shock (i.i.d. across periods, independent of the state and the action),
- $\beta \in (0, 1)$ is the discount factor, and
- $x_{\mathrm{lb}}(s)$ and $x_{\mathrm{ub}}(s)$ are state-dependent action bounds.
This package employs the Bellman equation collocation method (Miranda and Fackler 2002, Chapter 9): The value function $V$ is approximated by a linear combination of basis functions (Chebyshev polynomials, B-splines, or linear functions) and is required to satisfy the Bellman equation at the collocation nodes.
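The collocation idea can be illustrated without the package on a deterministic cake-eating toy problem. The sketch below is a hypothetical illustration, not the package's algorithm (which uses a proper optimizer rather than grid search): the value function is approximated by Chebyshev polynomials, and the coefficients are iterated until the Bellman equation holds at the collocation nodes.

```julia
using LinearAlgebra

# Chebyshev basis row: T_0, ..., T_{n-1} at s, with [a, b] mapped to [-1, 1]
function cheb_row(s, a, b, n)
    z = 2 * (s - a) / (b - a) - 1
    T = zeros(n)
    T[1] = 1.0
    n >= 2 && (T[2] = z)
    for j in 3:n
        T[j] = 2 * z * T[j-1] - T[j-2]  # Chebyshev recurrence
    end
    return T
end

# Collocation solution of V(s) = max_x { log(x) + beta * V(s - x) },
# with the next state kept inside [a, b]; grid search over x stands in
# for a proper optimizer.
function solve_cake(; beta = 0.9, a = 0.05, b = 1.0, n = 10, maxit = 1000)
    # Chebyshev collocation nodes mapped to [a, b]
    s_nodes = [(a + b) / 2 + (b - a) / 2 * cos((2i - 1) * pi / (2n)) for i in 1:n]
    Phi = reduce(vcat, [cheb_row(s, a, b, n)' for s in s_nodes])  # n x n basis matrix
    Vhat(c, s) = dot(cheb_row(s, a, b, n), c)  # approximate value function
    c = zeros(n)
    for _ in 1:maxit
        # update the value at each collocation node ...
        v = [maximum(log(x) + beta * Vhat(c, s - x)
                     for x in range(1e-3 * s, stop = s - a, length = 200))
             for s in s_nodes]
        # ... then refit coefficients so V satisfies the Bellman equation
        # at the nodes
        c_new = Phi \ v
        norm(c_new - c) < 1e-9 && return c_new
        c = c_new
    end
    return c
end

c = solve_cake()
```

The fitted value function `s -> dot(cheb_row(s, 0.05, 1.0, 10), c)` is increasing in the cake size, as the theory requires.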
To solve the problem, construct a `ContinuousDP` instance by passing the primitives of the model:

```julia
cdp = ContinuousDP(f, g, discount, shocks, weights, x_lb, x_ub, basis)
```

where

- `f`, `g`, `x_lb`, and `x_ub` are callable objects representing the reward function, the state transition function, and the lower and upper action bound functions, respectively,
- `discount` is the discount factor,
- `shocks` and `weights` specify a discretization of the distribution of $\varepsilon$ (a vector of nodes and their probability weights), and
- `basis` is a `Basis` object from BasisMatrices.jl that contains the interpolation basis information.
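The `shocks`/`weights` pair is simply a finite quadrature rule for $\varepsilon$, so any discretization works. As one illustration (a hypothetical stand-alone helper, not part of the package), Gauss–Hermite nodes and weights for a normal shock can be built with the Golub–Welsch eigenvalue method:

```julia
using LinearAlgebra

# Gauss-Hermite quadrature for N(mu, sigma^2) via Golub-Welsch:
# nodes are eigenvalues of the Jacobi matrix of the probabilists'
# Hermite recurrence He_{k+1}(z) = z*He_k(z) - k*He_{k-1}(z),
# weights are the squared first components of the eigenvectors.
function qnwnorm_sketch(n, mu, sigma)
    J = SymTridiagonal(zeros(n), sqrt.(1.0:n-1))
    E = eigen(J)
    z = E.values               # quadrature nodes for N(0, 1)
    w = abs2.(E.vectors[1, :]) # weights, summing to 1
    return mu .+ sigma .* z, w
end

z_nodes, weights = qnwnorm_sketch(7, 0.0, 0.1)
shocks = exp.(z_nodes)  # lognormal shock nodes, paired with `weights`
```

A deterministic quadrature like this typically needs far fewer nodes than a Monte Carlo discretization for the same accuracy.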
Then call `solve(cdp)` to obtain the value function, the policy function, and the Bellman equation residuals.
## Example usage
A stochastic optimal growth model from a QuantEcon lecture:

```julia
using BasisMatrices
using ContinuousDPs
using Random

# For reproducible results
seed = 42
rng = MersenneTwister(seed);

# Specify the parameters
function OptimalGrowthModel(;
        alpha = 0.4, beta = 0.96, s_min = 1e-5, s_max = 4.,
        mu = 0.0, sigma = 0.1
    )
    f(s, x) = log(x)                 # reward: log utility of consumption
    g(s, x, e) = (s - x)^alpha * e   # transition: next period's output
    x_lb(s) = s_min                  # lower action bound
    x_ub(s) = s                      # upper action bound: consume at most s
    return (; alpha, beta, s_min, s_max, mu, sigma,
            f, g, x_lb, x_ub)  # NamedTuple
end

p = OptimalGrowthModel();

# Set shocks and weights: Monte Carlo discretization of the lognormal shock
shock_size = 250
shocks = exp.(p.mu .+ p.sigma * randn(rng, shock_size))
weights = fill(1/shock_size, shock_size);

# Construct a `Basis` object: Chebyshev basis with n nodes on [s_min, s_max]
n = 30
basis = Basis(ChebParams(n, p.s_min, p.s_max))

# Solve the DP
cdp = ContinuousDP(p.f, p.g, p.beta, shocks, weights, p.x_lb, p.x_ub, basis);
res = solve(cdp);

# Evaluate the solution on a finer grid
grid_size = 200
grid_y = collect(range(p.s_min, stop=p.s_max, length=grid_size))
set_eval_nodes!(res, grid_y);
```

See the demo notebooks for further examples.
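Incidentally, this particular specification is the Brock–Mirman model, whose exact solution is known in closed form, so it makes a handy accuracy benchmark. The snippet below stands alone and does not use the package:

```julia
# Closed-form Brock-Mirman solution for log utility and a Cobb-Douglas
# transition: the agent consumes a constant fraction (1 - alpha*beta)
# of the state, and V(s) = A + B*log(s) for some constant A.
alpha, beta = 0.4, 0.96             # same parameters as above
x_star(s) = (1 - alpha * beta) * s  # exact optimal policy
B = 1 / (1 - alpha * beta)          # exact slope of V(s) in log(s)
```

Comparing `x_star.(grid_y)` with the computed policy on the evaluation grid gives a quick check of the collocation solution.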