QuantEcon
API documentation
Index
QuantEcon.ARMA
QuantEcon.CFEUtility
QuantEcon.CRRAUtility
QuantEcon.DPSolveResult
QuantEcon.DiscreteDP
QuantEcon.DiscreteDP
QuantEcon.DiscreteDP
QuantEcon.DiscreteRV
QuantEcon.EllipticalUtility
QuantEcon.LAE
QuantEcon.LQ
QuantEcon.LQ
QuantEcon.LSS
QuantEcon.LinInterp
QuantEcon.LogUtility
QuantEcon.MPFI
QuantEcon.MarkovChain
QuantEcon.MarkovChain
QuantEcon.PFI
QuantEcon.RBLQ
QuantEcon.SimplexGrid
QuantEcon.VAREstimationMethod
QuantEcon.VFI
Base.:*
Base.rand
Base.rand
DSP.Periodograms.periodogram
Graphs.period
QuantEcon.F_to_K
QuantEcon.K_to_F
QuantEcon.RQ_sigma
QuantEcon.RQ_sigma
QuantEcon._compute_sequence
QuantEcon._compute_sequence
QuantEcon._generate_a_indptr!
QuantEcon._has_sorted_sa_indices
QuantEcon._random_stochastic_matrix
QuantEcon._solve!
QuantEcon._solve!
QuantEcon._solve!
QuantEcon.allcomb3
QuantEcon.ar_periodogram
QuantEcon.autocovariance
QuantEcon.b_operator
QuantEcon.backward_induction
QuantEcon.bellman_operator
QuantEcon.bellman_operator!
QuantEcon.bellman_operator!
QuantEcon.bellman_operator!
QuantEcon.bisect
QuantEcon.brent
QuantEcon.brenth
QuantEcon.ckron
QuantEcon.communication_classes
QuantEcon.compute_deterministic_entropy
QuantEcon.compute_fixed_point
QuantEcon.compute_greedy
QuantEcon.compute_greedy!
QuantEcon.compute_loglikelihood
QuantEcon.compute_sequence
QuantEcon.construct_1D_grid
QuantEcon.construct_1D_grid
QuantEcon.construct_1D_grid
QuantEcon.construct_prior_guess
QuantEcon.construct_prior_guess
QuantEcon.construct_prior_guess
QuantEcon.d_operator
QuantEcon.discrete_approximation
QuantEcon.discrete_var
QuantEcon.divide_bracket
QuantEcon.do_quad
QuantEcon.entropy_grad!
QuantEcon.entropy_hess!
QuantEcon.entropy_obj
QuantEcon.estimate_mc_discrete
QuantEcon.evaluate_F
QuantEcon.evaluate_policy
QuantEcon.evaluate_policy
QuantEcon.expand_bracket
QuantEcon.filtered_to_forecast!
QuantEcon.fix
QuantEcon.getZ
QuantEcon.getZ
QuantEcon.getZ
QuantEcon.go_backward
QuantEcon.golden_method
QuantEcon.gridmake
QuantEcon.gridmake!
QuantEcon.gth_solve
QuantEcon.gth_solve!
QuantEcon.hamilton_filter
QuantEcon.hamilton_filter
QuantEcon.hp_filter
QuantEcon.impulse_response
QuantEcon.interp
QuantEcon.is_aperiodic
QuantEcon.is_irreducible
QuantEcon.is_stable
QuantEcon.is_stable
QuantEcon.k_array_rank
QuantEcon.lae_est
QuantEcon.log_likelihood
QuantEcon.m_quadratic_sum
QuantEcon.min_var_trace
QuantEcon.moment_sequence
QuantEcon.n_states
QuantEcon.next_k_array!
QuantEcon.nnash
QuantEcon.num_compositions
QuantEcon.polynomial_moment
QuantEcon.prior_to_filtered!
QuantEcon.qnwbeta
QuantEcon.qnwcheb
QuantEcon.qnwdist
QuantEcon.qnwequi
QuantEcon.qnwgamma
QuantEcon.qnwlege
QuantEcon.qnwlogn
QuantEcon.qnwnorm
QuantEcon.qnwsimp
QuantEcon.qnwtrap
QuantEcon.qnwunif
QuantEcon.quadrect
QuantEcon.random_discrete_dp
QuantEcon.random_markov_chain
QuantEcon.random_probvec
QuantEcon.random_stochastic_matrix
QuantEcon.recurrent_classes
QuantEcon.remove_constants
QuantEcon.replicate
QuantEcon.ridder
QuantEcon.robust_rule
QuantEcon.robust_rule_simple
QuantEcon.rouwenhorst
QuantEcon.s_wise_max
QuantEcon.s_wise_max!
QuantEcon.s_wise_max!
QuantEcon.s_wise_max!
QuantEcon.s_wise_max!
QuantEcon.simplex_grid
QuantEcon.simplex_index
QuantEcon.simulate
QuantEcon.simulate!
QuantEcon.simulate_indices
QuantEcon.simulate_indices!
QuantEcon.simulation
QuantEcon.smooth
QuantEcon.smooth
QuantEcon.smooth
QuantEcon.solve
QuantEcon.solve_discrete_lyapunov
QuantEcon.solve_discrete_riccati
QuantEcon.spectral_density
QuantEcon.standardize_var
QuantEcon.standardize_var
QuantEcon.stationary_distributions
QuantEcon.stationary_distributions
QuantEcon.stationary_values
QuantEcon.stationary_values!
QuantEcon.tauchen
QuantEcon.todense
QuantEcon.todense
QuantEcon.update!
QuantEcon.update_values!
QuantEcon.var_quadratic_sum
QuantEcon.warn_persistency
QuantEcon.@def_sim
Exported
QuantEcon.ARMA
— TypeRepresents a scalar ARMA(p, q) process
If $\phi$ and $\theta$ are scalars, then the model is understood to be
\[ X_t = \phi X_{t-1} + \epsilon_t + \theta \epsilon_{t-1}\]
where $\epsilon_t$ is a white noise process with standard deviation sigma.
If $\phi$ and $\theta$ are arrays or sequences, then the interpretation is the ARMA(p, q) model
\[ X_t = \phi_1 X_{t-1} + ... + \phi_p X_{t-p} + \epsilon_t + \theta_1 \epsilon_{t-1} + \ldots + \theta_q \epsilon_{t-q}\]
where
- $\phi = (\phi_1, \phi_2, \ldots , \phi_p)$
- $\theta = (\theta_1, \theta_2, \ldots , \theta_q)$
- $\sigma$ is a scalar, the standard deviation of the white noise
Fields
phi::Vector : AR parameters $\phi_1, \ldots, \phi_p$
theta::Vector : MA parameters $\theta_1, \ldots, \theta_q$
p::Integer : Number of AR coefficients
q::Integer : Number of MA coefficients
sigma::Real : Standard deviation of white noise
ma_poly::Vector : MA polynomial (filtering representation)
ar_poly::Vector : AR polynomial (filtering representation)
Examples
using QuantEcon
phi = 0.5
theta = [0.0, -0.8]
sigma = 1.0
lp = ARMA(phi, theta, sigma)
require(joinpath(dirname(@__FILE__),"..", "examples", "arma_plots.jl"))
quad_plot(lp)
QuantEcon.CFEUtility
— Type
Type used to evaluate constant Frisch elasticity (CFE) utility. CFE utility takes the form
v(l) = ξ l^(1 + 1/ϕ) / (1 + 1/ϕ)
Additionally, this code assumes that if l < 1e-10 then
v(l) = ξ (1e-10^(1 + 1/ϕ) / (1 + 1/ϕ) - 1e-10^(1/ϕ) * (1e-10 - l))
QuantEcon.CRRAUtility
— Type
Type used to evaluate CRRA utility. CRRA utility takes the form
u(c) = ξ c^(1 - γ) / (1 - γ)
Additionally, this code assumes that if c < 1e-10 then
u(c) = ξ (1e-10^(1 - γ) / (1 - γ) + 1e-10^(-γ) * (c - 1e-10))
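For concreteness, here is a stand-alone sketch of the documented formula, including the linear extrapolation below the 1e-10 cutoff; the function name crra_u and the default parameter values are illustrative and are not the package's constructor.
# Illustrative implementation of the CRRA form documented above
# (the name `crra_u` and defaults γ = 2, ξ = 1 are assumptions of this sketch)
function crra_u(c; γ=2.0, ξ=1.0)
    if c < 1e-10
        # linear extrapolation below the cutoff, as described above
        return ξ * (1e-10^(1 - γ) / (1 - γ) + 1e-10^(-γ) * (c - 1e-10))
    else
        return ξ * c^(1 - γ) / (1 - γ)
    end
end

crra_u(1.0)   # -1.0 when γ = 2 (the log case γ = 1 is not handled in this sketch)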
QuantEcon.DiscreteDP
— Type
DiscreteDP type for specifying parameters for a discrete dynamic programming model
Parameters
R::Array{T,NR} : Reward Array
Q::Array{T,NQ} : Transition Probability Array
beta::Float64 : Discount Factor
a_indices::Vector{Tind} : Action Indices. Empty unless using SA formulation
a_indptr::Vector{Tind} : Action Index Pointers. Empty unless using SA formulation
Returns
ddp::DiscreteDP
: DiscreteDP object
QuantEcon.DiscreteDP
— MethodDiscreteDP type for specifying parameters for discrete dynamic programming model Dense Matrix Formulation
Parameters
R::Array{T,NR} : Reward Array
Q::Array{T,NQ} : Transition Probability Array
beta::Float64 : Discount Factor
Returns
ddp::DiscreteDP
: Constructor for DiscreteDP object
QuantEcon.DiscreteDP
— MethodDiscreteDP type for specifying parameters for discrete dynamic programming model State-Action Pair Formulation
Parameters
R::Array{T,NR} : Reward Array
Q::Array{T,NQ} : Transition Probability Array
beta::Float64 : Discount Factor
s_indices::Vector{Tind} : State Indices. Empty unless using SA formulation
a_indices::Vector{Tind} : Action Indices. Empty unless using SA formulation
a_indptr::Vector{Tind} : Action Index Pointers. Empty unless using SA formulation
Returns
ddp::DiscreteDP
: Constructor for DiscreteDP object
QuantEcon.DiscreteRV
— TypeGenerates an array of draws from a discrete random variable with vector of probabilities given by q
.
Fields
q::AbstractVector : A vector of non-negative probabilities that sum to 1
Q::AbstractVector : The cumulative sum of q
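Example (a short sketch, assuming the exported constructor and the Base.rand methods listed in the index above):
using QuantEcon

d = DiscreteRV([0.2, 0.3, 0.5])   # probabilities over outcomes 1, 2, 3
rand(d)                           # a single draw, an index in 1:3
rand(d, 1000)                     # 1000 draws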
QuantEcon.EllipticalUtility
— Type
Type used to evaluate the elliptical utility function. Elliptical utility takes the form
v(l) = b (1 - l^μ)^(1 / μ)
QuantEcon.LAE
— TypeA look ahead estimator associated with a given stochastic kernel p
and a vector of observations X
.
Fields
p::Function : The stochastic kernel. Signature is p(x, y) and it should be vectorized in both inputs
X::Matrix : A vector containing observations. Note that this can be passed as any kind of AbstractArray and will be coerced into an n x 1 vector.
QuantEcon.LQ
— TypeLinear quadratic optimal control of either infinite or finite horizon
The infinite horizon problem can be written
\[\min \mathbb{E} \sum_{t=0}^{\infty} \beta^t r(x_t, u_t)\]
with
\[r(x_t, u_t) := x_t' R x_t + u_t' Q u_t + 2 u_t' N x_t\]
The finite horizon form is
\[\min \mathbb{E} \sum_{t=0}^{T-1} \beta^t r(x_t, u_t) + \beta^T x_T' R_f x_T\]
Both are minimized subject to the law of motion
\[x_{t+1} = A x_t + B u_t + C w_{t+1}\]
Here $x$ is n x 1, $u$ is k x 1, $w$ is j x 1 and the matrices are conformable for these dimensions. The sequence ${w_t}$ is assumed to be white noise, with zero mean and $\mathbb{E} w_t w_t' = I$, the j x j identity.
For this model, the time $t$ value (i.e., cost-to-go) function $V_t$ takes the form
\[x' P_T x + d_T\]
and the optimal policy is of the form $u_T = -F_T x_T$. In the infinite horizon case, $V, P, d$ and $F$ are all stationary.
Fields
Q::ScalarOrArray : k x k payoff coefficient for control variable u. Must be symmetric and nonnegative definite
R::ScalarOrArray : n x n payoff coefficient matrix for state variable x. Must be symmetric and nonnegative definite
A::ScalarOrArray : n x n coefficient on state in state transition
B::ScalarOrArray : n x k coefficient on control in state transition
C::ScalarOrArray : n x j coefficient on random shock in state transition
N::ScalarOrArray : k x n cross product in payoff equation
bet::Real : Discount factor in [0, 1]
capT::Union{Int, Void} : Terminal period in finite horizon problem
rf::ScalarOrArray : n x n terminal payoff in finite horizon problem. Must be symmetric and nonnegative definite
P::ScalarOrArray : n x n matrix in value function representation $V(x) = x'Px + d$
d::Real : Constant in value function representation
F::ScalarOrArray : Policy rule that specifies optimal control in each period
QuantEcon.LQ
— TypeMain constructor for LQ type
Specifies default arguments for all fields not part of the payoff function or transition equation.
Arguments
Q::ScalarOrArray : k x k payoff coefficient for control variable u. Must be symmetric and nonnegative definite
R::ScalarOrArray : n x n payoff coefficient matrix for state variable x. Must be symmetric and nonnegative definite
A::ScalarOrArray : n x n coefficient on state in state transition
B::ScalarOrArray : n x k coefficient on control in state transition
;C::ScalarOrArray(zeros(size(R, 1))) : n x j coefficient on random shock in state transition
;N::ScalarOrArray(zeros(size(B, 1), size(A, 2))) : k x n cross product in payoff equation
;bet::Real(1.0) : Discount factor in [0, 1]
capT::Union{Int, Void}(Void) : Terminal period in finite horizon problem
rf::ScalarOrArray(fill(NaN, size(R)...)) : n x n terminal payoff in finite horizon problem. Must be symmetric and nonnegative definite.
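Example (a minimal scalar sketch; it assumes C is a positional optional argument and bet a keyword, as the argument list above suggests, and that stationary_values returns the triple (P, F, d)):
using QuantEcon

# minimize E Σ βᵗ (x_t² + u_t²)  subject to  x_{t+1} = x_t + u_t + 0.1 w_{t+1}
Q, R = 1.0, 1.0
A, B, C = 1.0, 1.0, 0.1
lq = LQ(Q, R, A, B, C; bet=0.95)

P, F, d = stationary_values(lq)                    # assumed return order (P, F, d)
x_path, u_path, w_path = compute_sequence(lq, 1.0, 50)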
QuantEcon.LSS
— TypeA type that describes the Gaussian Linear State Space Model of the form:
\[ x_{t+1} = A x_t + C w_{t+1} \\ y_t = G x_t + H v_t\]
where ${w_t}$ and ${v_t}$ are independent and standard normal with dimensions k
and l
respectively. The initial conditions are $\mu_0$ and $\Sigma_0$ for $x_0 \sim N(\mu_0, \Sigma_0)$. When $\Sigma_0=0$, the draw of $x_0$ is exactly $\mu_0$.
Fields
A::Matrix : Part of the state transition equation. It should be n x n
C::Matrix : Part of the state transition equation. It should be n x m
G::Matrix : Part of the observation equation. It should be k x n
H::Matrix : Part of the observation equation. It should be k x l
k::Int : Dimension
n::Int : Dimension
m::Int : Dimension
l::Int : Dimension
mu_0::Vector : This is the mean of the initial draw and is of length n
Sigma_0::Matrix : This is the variance of the initial draw and is n x n and also should be positive definite and symmetric
QuantEcon.LinInterp
— TypeLinear interpolation in one dimension
Fields
breaks::AbstractVector : A sorted array of grid points on which to interpolate
vals::AbstractVector : The function values associated with each of the grid points
Examples
breaks = cumsum(0.1 .* rand(20))
vals = 0.1 .* sin.(breaks)
li = LinInterp(breaks, vals)
# do interpolation via `call` method on a LinInterp object
li(0.2)
# use broadcasting to evaluate at multiple points
li.([0.1, 0.2, 0.3])
QuantEcon.LogUtility
— Type
Type used to evaluate log utility. Log utility takes the form
u(c) = \log(c)
Additionally, this code assumes that if c < 1e-10 then
u(c) = log(1e-10) + 1e10*(c - 1e-10)
QuantEcon.MPFI
— TypeThis refers to the Modified Policy Iteration solution algorithm.
References
https://lectures.quantecon.org/jl/discrete_dp.html
QuantEcon.MarkovChain
— TypeFinite-state discrete-time Markov chain.
Methods are available that provide useful information such as the stationary distributions, and communication and recurrent classes, and allow simulation of state transitions.
Fields
p::AbstractMatrix : The transition matrix. Must be square, all elements must be nonnegative, and all rows must sum to unity.
state_values::AbstractVector : Vector containing the values associated with the states.
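Example (a small sketch using the simulation and stationary-distribution methods documented elsewhere in this reference):
using QuantEcon

P = [0.9 0.1;
     0.2 0.8]
mc = MarkovChain(P, ["unemployed", "employed"])

stationary_distributions(mc)   # vector of stationary distributions (here a single one)
simulate(mc, 5)                # a length-5 path of state values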
QuantEcon.MarkovChain
— MethodReturns the controlled Markov chain for a given policy sigma
.
Parameters
ddp::DiscreteDP : Object that contains the model parameters
ddpr::DPSolveResult : Object that contains result variables
Returns
mc : MarkovChain Controlled Markov chain.
QuantEcon.PFI
— TypeThis refers to the Policy Iteration solution algorithm.
References
https://lectures.quantecon.org/jl/discrete_dp.html
QuantEcon.RBLQ
— TypeRepresents infinite horizon robust LQ control problems of the form
\[ \min_{u_t} \sum_t \beta^t {x_t' R x_t + u_t' Q u_t }\]
subject to
\[ x_{t+1} = A x_t + B u_t + C w_{t+1}\]
and with model misspecification parameter $\theta$.
Fields
Q::Matrix{Float64} : The cost (payoff) matrix for the controls. See above for more. $Q$ should be k x k and symmetric and positive definite
R::Matrix{Float64} : The cost (payoff) matrix for the state. See above for more. $R$ should be n x n and symmetric and non-negative definite
A::Matrix{Float64} : The matrix that corresponds with the state in the state space system. $A$ should be n x n
B::Matrix{Float64} : The matrix that corresponds with the control in the state space system. $B$ should be n x k
C::Matrix{Float64} : The matrix that corresponds with the random process in the state space system. $C$ should be n x j
beta::Real : The discount factor in the robust control problem
theta::Real : The robustness factor in the robust control problem
k, n, j::Int : Dimensions of input matrices
QuantEcon.SimplexGrid
— TypeSimplexGrid
Iterator version of simplex_grid
, i.e., iterator that iterates over the integer points in the (m-1)-dimensional simplex $\{x \mid x_1 + \cdots + x_m = n, x_i \geq 0\}$, or equivalently, the m-part compositions of n, in lexicographic order.
Fields
m::Int : Dimension of each point. Must be a positive integer.
n::Int : Number which the coordinates of each point sum to. Must be a nonnegative integer.
Examples
julia> sg = SimplexGrid(3, 4);
julia> for x in sg
@show x
end
x = [0, 0, 4]
x = [0, 1, 3]
x = [0, 2, 2]
x = [0, 3, 1]
x = [0, 4, 0]
x = [1, 0, 3]
x = [1, 1, 2]
x = [1, 2, 1]
x = [1, 3, 0]
x = [2, 0, 2]
x = [2, 1, 1]
x = [2, 2, 0]
x = [3, 0, 1]
x = [3, 1, 0]
x = [4, 0, 0]
QuantEcon.VFI
— TypeThis refers to the Value Iteration solution algorithm.
References
https://lectures.quantecon.org/jl/discrete_dp.html
DSP.Periodograms.periodogram
— FunctionComputes the periodogram
\[I(w) = \frac{1}{n} | \sum_{t=0}^{n-1} x_t e^{itw} |^2\]
at the Fourier frequencies $w_j := \frac{2 \pi j}{n}, j = 0, \ldots, n - 1$, using the fast Fourier transform. Only the frequencies $w_j$ in $[0, \pi]$ and corresponding values $I(w_j)$ are returned. If a window type is given then smoothing is performed.
Arguments
x::Array : An array containing the data to smooth
window_len::Int(7) : An odd integer giving the length of the window
window::AbstractString("hanning") : A string giving the window type. Possible values are flat, hanning, hamming, bartlett, or blackman
Returns
w::Array{Float64} : Fourier frequencies at which the periodogram is evaluated
I_w::Array{Float64} : The periodogram at frequencies w
Graphs.period
— MethodReturn the period of the Markov chain mc
.
Arguments
mc::MarkovChain
: MarkovChain instance.
Returns
::Int : Period of mc.
QuantEcon.F_to_K
— MethodCompute agent 2's best cost-minimizing response $K$, given $F$.
Arguments
rlq::RBLQ : Instance of RBLQ type
F::Matrix{Float64} : A k x n array representing agent 1's policy
Returns
K::Matrix{Float64} : Agent's best cost-minimizing response corresponding to $F$
P::Matrix{Float64} : The value function corresponding to $F$
QuantEcon.K_to_F
— Method
Compute agent 1's best cost-minimizing response $F$, given $K$.
Arguments
rlq::RBLQ : Instance of RBLQ type
K::Matrix{Float64} : A k x n array representing the worst case matrix
Returns
F::Matrix{Float64} : Agent's best cost-minimizing response corresponding to $K$
P::Matrix{Float64} : The value function corresponding to $K$
QuantEcon.RQ_sigma
— Method
Method of RQ_sigma that extracts sigma from a DPSolveResult. See the other docstring for details.
QuantEcon.RQ_sigma
— Method
Given a policy sigma, return the reward vector R_sigma and the transition probability matrix Q_sigma.
Parameters
ddp::DiscreteDP : Object that contains the model parameters
sigma::AbstractVector{Int} : policy rule vector
Returns
R_sigma::Array{Float64} : Reward vector for sigma, of length n.
Q_sigma::Array{Float64} : Transition probability matrix for sigma, of shape (n, n).
QuantEcon.ar_periodogram
— FunctionCompute periodogram from data x
, using prewhitening, smoothing and recoloring. The data is fitted to an AR(1) model for prewhitening, and the residuals are used to compute a first-pass periodogram with smoothing. The fitted coefficients are then used for recoloring.
Arguments
x::Array : An array containing the data to smooth
window_len::Int(7) : An odd integer giving the length of the window
window::AbstractString("hanning") : A string giving the window type. Possible values are flat, hanning, hamming, bartlett, or blackman
Returns
w::Array{Float64} : Fourier frequencies at which the periodogram is evaluated
I_w::Array{Float64} : The periodogram at frequencies w
QuantEcon.autocovariance
— Method
Compute the autocovariance function from the ARMA parameters over the integers range(num_autocov) using the spectral density and the inverse Fourier transform.
Arguments
arma::ARMA : Instance of ARMA type
;num_autocov::Integer(16) : The number of autocovariances to calculate
QuantEcon.b_operator
— Method
The $B$ operator, mapping $P$ into
\[ B(P) := R - \beta^2 A'PB(Q + \beta B'PB)^{-1}B'PA + \beta A'PA\]
and also returning
\[ F := (Q + \beta B'PB)^{-1} \beta B'PA\]
Arguments
rlq::RBLQ : Instance of RBLQ type
P::Matrix{Float64} : size is n x n
Returns
F::Matrix{Float64} : The $F$ matrix as defined above
new_p::Matrix{Float64} : The matrix $P$ after applying the $B$ operator
QuantEcon.backward_induction
— Methodbackward_induction(ddp, J[, v_term=zeros(num_states(ddp))])
Solve by backward induction a $J$-period finite horizon discrete dynamic program with stationary reward $r$ and transition probability functions $q$ and discount factor $\beta \in [0, 1]$.
The optimal value functions $v^{\ast}_1, \ldots, v^{\ast}_{J+1}$ and policy functions $\sigma^{\ast}_1, \ldots, \sigma^{\ast}_J$ are obtained by $v^{\ast}_{J+1} = v_{J+1}$, and
\[v^{\ast}_j(s) = \max_{a \in A(s)} r(s, a) + \beta \sum_{s' \in S} q(s'|s, a) v^{\ast}_{j+1}(s') \quad (s \in S)\]
and
\[\sigma^{\ast}_j(s) \in \operatorname*{arg\,max}_{a \in A(s)} r(s, a) + \beta \sum_{s' \in S} q(s'|s, a) v^*_{j+1}(s') \quad (s \in S)\]
for $j= J, \ldots, 1$, where the terminal value function $v_{J+1}$ is exogenously given by v_term
.
Parameters
ddp::DiscreteDP{T} : Object that contains the Model Parameters
J::Integer : Number of decision periods
v_term::AbstractVector{<:Real}=zeros(num_states(ddp)) : Terminal value function of length equal to n (the number of states)
Returns
vs::Matrix{S} : Array of shape (n, J+1) where vs[:,j] contains the optimal value function at period j = 1, ..., J+1.
sigmas::Matrix{Int} : Array of shape (n, J) where sigmas[:,j] contains the optimal policy function at period j = 1, ..., J.
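Example (a sketch with a made-up two-state, two-action problem in the dense formulation; -Inf in R marks an infeasible state-action pair):
using QuantEcon

R = [5.0 10.0;
     -1.0 -Inf]
Q = Array{Float64}(undef, 2, 2, 2)
Q[1, 1, :] = [0.5, 0.5]; Q[1, 2, :] = [0.0, 1.0]
Q[2, 1, :] = [0.0, 1.0]; Q[2, 2, :] = [0.5, 0.5]
beta = 0.95

ddp = DiscreteDP(R, Q, beta)
vs, sigmas = backward_induction(ddp, 10)   # 10-period finite-horizon solution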
QuantEcon.bellman_operator!
— MethodThe Bellman operator, which computes and returns the updated value function $Tv$ for a value function $v$.
Parameters
ddp::DiscreteDP : Object that contains the model parameters
v::AbstractVector{T<:AbstractFloat} : The current guess of the value function
Tv::AbstractVector{T<:AbstractFloat} : A buffer array to hold the updated value function. Initial value not used and will be overwritten
sigma::AbstractVector : A buffer array to hold the policy function. Initial values not used and will be overwritten
Returns
Tv::typeof(Tv) : Updated value function vector
sigma::typeof(sigma) : Updated policy function vector
QuantEcon.bellman_operator!
— Method
Apply the Bellman operator using v=ddpr.v, Tv=ddpr.Tv, and sigma=ddpr.sigma.
Notes
Updates ddpr.Tv and ddpr.sigma in place.
QuantEcon.bellman_operator!
— MethodThe Bellman operator, which computes and returns the updated value function $Tv$ for a given value function $v$.
This function will fill the input v with Tv and the input sigma with the corresponding policy rule.
Parameters
ddp::DiscreteDP : The ddp model
v::AbstractVector{T<:AbstractFloat} : The current guess of the value function. This array will be overwritten
sigma::AbstractVector : A buffer array to hold the policy function. Initial values not used and will be overwritten
Returns
Tv::Vector : Updated value function vector
sigma::typeof(sigma) : Policy rule
QuantEcon.bellman_operator
— MethodThe Bellman operator, which computes and returns the updated value function $Tv$ for a given value function $v$.
Parameters
ddp::DiscreteDP : The ddp model
v::AbstractVector : The current guess of the value function
Returns
Tv::Vector : Updated value function vector
QuantEcon.bisect
— Method
Find the root of f on the bracketing interval [x1, x2] via bisection.
Arguments
f::Function : The function you want to bracket
x1::T : Lower border for search interval
x2::T : Upper border for search interval
;maxiter::Int(500) : Maximum number of bisection iterations
;xtol::Float64(1e-12) : The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
;rtol::Float64(2*eps()) : The routine converges when a root is known to lie within rtol times the value returned. Should be ≥ 0
Returns
x::T : The found root
Exceptions
- Throws an ArgumentError if [x1, x2] does not form a bracketing interval
- Throws a ConvergenceError if the maximum number of iterations is exceeded
References
Matches the bisect function from scipy/scipy/optimize/Zeros/bisect.c
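Example (a quick sketch, assuming the call pattern bisect(f, x1, x2) described above):
using QuantEcon

f(x) = x^2 - 2.0
bisect(f, 0.0, 2.0)   # ≈ 1.41421356..., the positive root of f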
QuantEcon.brent
— Method
Find the root of f on the bracketing interval [x1, x2] via Brent's algorithm.
Arguments
f::Function : The function you want to bracket
x1::T : Lower border for search interval
x2::T : Upper border for search interval
;maxiter::Int(500) : Maximum number of bisection iterations
;xtol::Float64(1e-12) : The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
;rtol::Float64(2*eps()) : The routine converges when a root is known to lie within rtol times the value returned. Should be ≥ 0
Returns
x::T : The found root
Exceptions
- Throws an ArgumentError if [x1, x2] does not form a bracketing interval
- Throws a ConvergenceError if the maximum number of iterations is exceeded
References
Matches the brentq function from scipy/scipy/optimize/Zeros/bisectq.c
QuantEcon.brenth
— Method
Find a root of f on the bracketing interval [x1, x2] via a modified Brent's method.
This routine uses a hyperbolic extrapolation formula instead of the standard inverse quadratic formula. Otherwise it is the original Brent's algorithm, as implemented in the brent function.
Arguments
f::Function : The function you want to bracket
x1::T : Lower border for search interval
x2::T : Upper border for search interval
;maxiter::Int(500) : Maximum number of bisection iterations
;xtol::Float64(1e-12) : The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
;rtol::Float64(2*eps()) : The routine converges when a root is known to lie within rtol times the value returned. Should be ≥ 0
Returns
x::T : The found root
Exceptions
- Throws an ArgumentError if [x1, x2] does not form a bracketing interval
- Throws a ConvergenceError if the maximum number of iterations is exceeded
References
Matches the brenth function from scipy/scipy/optimize/Zeros/bisecth.c
QuantEcon.ckron
— Functionckron(arrays::AbstractArray...)
Repeatedly apply Kronecker products to the arrays. Equivalent to reduce(kron, arrays).
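Example (a small check of the stated equivalence):
using QuantEcon

A = [1 2; 3 4]
B = [0 1; 1 0]
C = [1 1; 1 1]
ckron(A, B, C) == kron(kron(A, B), C)   # true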
QuantEcon.communication_classes
— MethodFind the communication classes of the Markov chain mc
.
Arguments
mc::MarkovChain
: MarkovChain instance.
Returns
::Vector{Vector{Int}} : Vector of vectors that describe the communication classes of mc.
QuantEcon.compute_deterministic_entropy
— MethodGiven $K$ and $F$, compute the value of deterministic entropy, which is $\sum_t \beta^t x_t' K'K x_t$ with $x_{t+1} = (A - BF + CK) x_t$.
Arguments
rlq::RBLQ : Instance of RBLQ type
F::Matrix{Float64} : The policy function, a k x n array
K::Matrix{Float64} : The worst case matrix, a j x n array
x0::Vector{Float64} : The initial condition for state
Returns
e::Float64 : The deterministic entropy
QuantEcon.compute_fixed_point
— MethodRepeatedly apply a function to search for a fixed point
Approximates $T^∞ v$, where $T$ is an operator (function) and $v$ is an initial guess for the fixed point. Will terminate either when T^{k+1}(v) - T^k(v) < err_tol or when max_iter iterations have been exceeded.
Provided that $T$ is a contraction mapping or similar, the return value will be an approximation to the fixed point of $T$.
Arguments
T : A function representing the operator $T$
v::TV : The initial condition. An object of type $TV$
;err_tol(1e-3) : Stopping tolerance for iterations
;max_iter(50) : Maximum number of iterations
;verbose(2) : Level of feedback (0 for no output, 1 for warnings only, 2 for warning and convergence messages during iteration)
;print_skip(10) : if verbose == 2, how many iterations to apply between print messages
Returns
- '::TV': The fixed point of the operator $T$. Has type $TV$
Example
using QuantEcon
T(x, μ) = 4.0 * μ * x * (1.0 - x)
x_star = compute_fixed_point(x->T(x, 0.3), 0.4) # (4μ - 1)/(4μ)
QuantEcon.compute_greedy!
— MethodCompute the $v$-greedy policy
Parameters
ddp::DiscreteDP : Object that contains the model parameters
ddpr::DPSolveResult : Object that contains result variables
Returns
sigma::Vector{Int} : Array containing the v-greedy policy rule
Notes
modifies ddpr.sigma and ddpr.Tv in place
QuantEcon.compute_greedy
— MethodCompute the $v$-greedy policy.
Arguments
v::AbstractVector : Value function vector of length n
ddp::DiscreteDP : Object that contains the model parameters
Returns
sigma : v-greedy policy vector, of length n
QuantEcon.compute_loglikelihood
— Method
Computes the log-likelihood of the entire set of observations.
Arguments
kn::Kalman : Kalman specifying the model. Initial value must be the prior for the t=1 period observation, i.e. $x_{1|0}$.
y::AbstractMatrix : n x T matrix of observed data. n is the number of observed variables in one period. Each column is a vector of observations at each period.
Returns
logL::Real : log-likelihood of all observations
QuantEcon.compute_sequence
— FunctionCompute and return the optimal state and control sequence, assuming innovation $N(0,1)$
Arguments
lq::LQ : instance of LQ type
x0::ScalarOrArray : initial state
ts_length::Integer(100) : maximum number of periods for which to return the process. If the lq instance is of finite horizon type, the sequences are returned only for min(ts_length, lq.capT)
Returns
x_path::Matrix{Float64} : An n x T+1 matrix, where the t-th column represents $x_t$
u_path::Matrix{Float64} : A k x T matrix, where the t-th column represents $u_t$
w_path::Matrix{Float64} : An n x T+1 matrix, where the t-th column represents lq.C*N(0,1)
QuantEcon.d_operator
— MethodThe $D$ operator, mapping $P$ into
\[ D(P) := P + PC(\theta I - C'PC)^{-1} C'P\]
Arguments
rlq::RBLQ : Instance of RBLQ type
P::Matrix{Float64} : size is n x n
Returns
dP::Matrix{Float64} : The matrix $P$ after applying the $D$ operator
QuantEcon.discrete_var
— FunctionCompute a finite-state Markov chain approximation to a VAR(1) process of the form
\[ y_{t+1} = b + By_{t} + \Psi^{\frac{1}{2}}\epsilon_{t+1}\]
where $\epsilon_{t+1}$ is a vector of independent standard normal innovations of length M
P, X = discrete_var(b, B, Psi, Nm, n_moments, method, n_sigmas)
Arguments
b::Union{Real, AbstractVector} : constant vector of length M. M=1 corresponds to the scalar case
B::Union{Real, AbstractMatrix} : M x M matrix of impact coefficients
Psi::Union{Real, AbstractMatrix} : M x M variance-covariance matrix of the innovations. discrete_var only accepts non-singular variance-covariance matrices Psi.
Nm::Integer > 3 : Desired number of discrete points in each dimension
Optional
n_moments::Integer : Desired number of moments to match. The default is 2.
method::VAREstimationMethod : Specify the method used to determine the grid points. Accepted inputs are Even(), Quantile(), or Quadrature(). Please see the paper for more details.
n_sigmas::Real : If the Even() option is specified, n_sigmas is used to determine the number of unconditional standard deviations used to set the endpoints of the grid. The default is sqrt(Nm-1).
Returns
P : Nm^M x Nm^M probability transition matrix. Each row corresponds to a discrete conditional probability distribution over the state M-tuples in X
X : M x Nm^M matrix of states. Each column corresponds to an M-tuple of values which correspond to the state associated with each row of P
NOTES
- discrete_var only constructs tensor product grids where each dimension contains the same number of points. For this reason it is recommended that this code not be used for problems of more than about 4 or 5 dimensions due to curse of dimensionality issues.
- Future updates will allow for singular variance-covariance matrices and sparse grid specifications.
Reference
- Farmer, L. E., & Toda, A. A. (2017). "Discretizing nonlinear, non‐Gaussian Markov processes with exact conditional moments," Quantitative Economics, 8(2), 651-683.
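Example (a sketch for a bivariate VAR(1) using the call shown above; the coefficient values are made up for illustration):
using QuantEcon

b   = [0.0, 0.0]            # intercepts
B   = [0.9 0.1;             # impact coefficients
       0.0 0.8]
Psi = [0.01 0.0;            # innovation variance-covariance (non-singular)
       0.0  0.01]
Nm  = 5                     # grid points per dimension

P, X = discrete_var(b, B, Psi, Nm)   # 25x25 transition matrix and 2x25 state matrix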
QuantEcon.divide_bracket
— MethodGiven a function f
defined on the interval [x1, x2]
, subdivide the interval into n
equally spaced segments, and search for zero crossings of the function. nroot
will be set to the number of bracketing pairs found. If it is positive, the arrays xb1[1..nroot]
and xb2[1..nroot]
will be filled sequentially with any bracketing pairs that are found.
Arguments
f::Function : The function you want to bracket
x1::T : Lower border for search interval
x2::T : Upper border for search interval
n::Int(50) : The number of sub-intervals to divide [x1, x2] into
Returns
x1b::Vector{T} : Vector of lower borders of bracketing intervals
x2b::Vector{T} : Vector of upper borders of bracketing intervals
References
This is zbrack from Numerical Recipes in C++.
QuantEcon.do_quad
— MethodApproximate the integral of f
, given quadrature nodes
and weights
Arguments
f::Function : A callable function that is to be approximated over the domain spanned by nodes.
nodes::Array : Quadrature nodes
weights::Array : Quadrature weights
args...(Void) : additional positional arguments to pass to f
;kwargs...(Void) : additional keyword arguments to pass to f
Returns
out::Float64 : The scalar that approximates the integral of f on the hypercube formed by [a, b]
QuantEcon.estimate_mc_discrete
— MethodAccepts the simulation of a discrete state Markov chain and estimates the transition probabilities
Let $S = s_1, s_2, \ldots, s_N$ with $s_1 < s_2 < \ldots < s_N$ be the discrete states of a Markov chain. Furthermore, let $P$ be the corresponding stochastic transition matrix.
Given a history of observations, $\{X\}_{t=0}^{T}$ with $x_t \in S \forall t$, we would like to estimate the transition probabilities in $P$ with $p_{ij}$ as the ith row and jth column of $P$. For $x_t = s_i$ and $x_{t-1} = s_j$, let $P(x_t | x_{t-1})$ be defined as $p_{i,j}$ element of the stochastic matrix. The likelihood function is then given by
\[ L(\{X\}^t; P) = \text{Prob}(x_1) \prod_{t=2}^{T} P(x_t | x_{t-1})\]
The maximum likelihood estimate is then just given by the number of times a transition from $s_i$ to $s_j$ is observed divided by the number of times $s_i$ was observed.
Note: Because of the estimation procedure used, only states that are observed in the history appear in the estimated Markov chain... It can't divine whether there are unobserved states in the original Markov chain.
For more info, refer to:
- http://www.stat.cmu.edu/~cshalizi/462/lectures/06/markov-mle.pdf
- https://stats.stackexchange.com/questions/47685/calculating-log-likelihood-for-given-mle-markov-chains
Arguments
X::Vector{T}
: Simulated history of Markov states
Returns
mc::MarkovChain{T}
: A Markov chain holding the state values and transition matrix
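Example (a round-trip sketch: simulate a chain with the simulate method documented in this reference, then recover an estimate of its transition matrix):
using QuantEcon

mc_true = MarkovChain([0.9 0.1; 0.2 0.8], [1, 2])
X = simulate(mc_true, 10_000)        # simulated history of state values
mc_hat = estimate_mc_discrete(X)
mc_hat.p                             # ≈ [0.9 0.1; 0.2 0.8] for a long sample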
QuantEcon.evaluate_F
— MethodGiven a fixed policy $F$, with the interpretation $u = -F x$, this function computes the matrix $P_F$ and constant $d_F$ associated with discounted cost $J_F(x) = x' P_F x + d_F$.
Arguments
rlq::RBLQ : Instance of RBLQ type
F::Matrix{Float64} : The policy function, a k x n array
Returns
P_F::Matrix{Float64} : Matrix for discounted cost
d_F::Float64 : Constant for discounted cost
K_F::Matrix{Float64} : Worst case policy
O_F::Matrix{Float64} : Matrix for discounted entropy
o_F::Float64 : Constant for discounted entropy
QuantEcon.evaluate_policy
— MethodMethod of evaluate_policy
that extracts sigma from a DPSolveResult
See other docstring for details
QuantEcon.evaluate_policy
— MethodCompute the value of a policy.
Parameters
ddp::DiscreteDP : Object that contains the model parameters
sigma::AbstractVector{T<:Integer} : Policy rule vector
Returns
v_sigma::Array{Float64} : Value vector of sigma, of length n.
QuantEcon.expand_bracket
— MethodGiven a function f
and an initial guessed range x1
to x2
, the routine expands the range geometrically until a root is bracketed by the returned values x1
and x2
(in which case zbrac returns true) or until the range becomes unacceptably large (in which case a ConvergenceError
is thrown).
Arguments
f::Function : The function you want to bracket
x1::T : Initial guess for lower border of bracket
x2::T : Initial guess for upper border of bracket
;ntry::Int(50) : The maximum number of expansion iterations
;fac::Float64(1.6) : Expansion factor (higher ⟶ larger interval size jumps)
Returns
x1::T : The lower end of an actual bracketing interval
x2::T : The upper end of an actual bracketing interval
References
This method is zbrac from Numerical Recipes in C++.
Exceptions
- Throws a
ConvergenceError
if the maximum number of iterations is exceeded
QuantEcon.filtered_to_forecast!
— MethodUpdates the moments of the time $t$ filtering distribution to the moments of the predictive distribution, which becomes the time $t+1$ prior
Arguments
k::Kalman
An instance of the Kalman filter
QuantEcon.golden_method
— MethodApplies Golden-section search to search for the maximum of a function in the interval (a, b)
https://en.wikipedia.org/wiki/Golden-section_search
QuantEcon.gridmake
— Functiongridmake(arrays::Union{AbstractVector,AbstractMatrix}...)
Expand one or more vectors (or matrices) into a matrix where rows span the cartesian product of combinations of the input arrays. Each column of the input arrays will correspond to one column of the output matrix. The first array varies the fastest (see example)
Example
julia> x = [1, 2, 3]; y = [10, 20]; z = [100, 200];
julia> gridmake(x, y, z)
12×3 Matrix{Int64}:
1 10 100
2 10 100
3 10 100
1 20 100
2 20 100
3 20 100
1 10 200
2 10 200
3 10 200
1 20 200
2 20 200
3 20 200
QuantEcon.gridmake!
— Methodgridmake!(out::AbstractMatrix, arrays::AbstractVector...)
Like gridmake, but fills a pre-populated array. out must have size (prod(map(length, arrays)), length(arrays)).
QuantEcon.gth_solve
— MethodThis routine computes the stationary distribution of an irreducible Markov transition matrix (stochastic matrix) or transition rate matrix (generator matrix) $A$.
More generally, given a Metzler matrix (square matrix whose off-diagonal entries are all nonnegative) $A$, this routine solves for a nonzero solution $x$ to $x (A - D) = 0$, where $D$ is the diagonal matrix for which the rows of $A - D$ sum to zero (i.e., $D_{ii} = \sum_j A_{ij}$ for all $i$). One (and only one, up to normalization) nonzero solution exists corresponding to each recurrent class of $A$, and in particular, if $A$ is irreducible, there is a unique solution; when there is more than one solution, the routine returns the solution that contains in its support the first index $i$ such that no path connects $i$ to any index larger than $i$. The solution is normalized so that its 1-norm equals one. This routine implements the Grassmann-Taksar-Heyman (GTH) algorithm (Grassmann, Taksar, and Heyman 1985), a numerically stable variant of Gaussian elimination, where only the off-diagonal entries of $A$ are used as the input data. For a nice exposition of the algorithm, see Stewart (2009), Chapter 10.
Arguments
A::Matrix{T}
: Stochastic matrix or generator matrix. Must be of shape n x n.
Returns
x::Vector{T}
: Stationary distribution of $A$.
References
- W. K. Grassmann, M. I. Taksar and D. P. Heyman, "Regenerative Analysis and Steady State Distributions for Markov Chains, " Operations Research (1985), 1107-1116.
- W. J. Stewart, Probability, Markov Chains, Queues, and Simulation, Princeton University Press, 2009.
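Example (a small stochastic matrix; the exact stationary distribution is [2/3, 1/3]):
using QuantEcon

P = [0.9 0.1;
     0.2 0.8]
x = gth_solve(P)   # ≈ [0.6667, 0.3333]
sum(x)             # 1.0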
QuantEcon.hamilton_filter
— MethodThis function applies "Hamilton filter" to AbstractVector
.
http://econweb.ucsd.edu/~jhamilto/hp.pdf
Arguments
y::AbstractVector : data to be filtered
h::Integer : Time horizon that we are likely to predict incorrectly. Original paper recommends 2 for annual data, 8 for quarterly data, 24 for monthly data.
p::Integer : Number of lags in regression. Must be greater than h.
Note: For seasonal data, it's desirable for p and h to be integer multiples of the number of observations in a year. e.g. For quarterly data, h = 8 and p = 4 are recommended.
Returns
y_cycle::Vector : cyclical component
y_trend::Vector : trend component
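Example (a sketch on simulated quarterly data, assuming the positional order (y, h, p) and the return order (cycle, trend) implied by the argument and return lists above):
using QuantEcon

y = cumsum(randn(200)) .+ 0.1 .* (1:200)   # a trending series, e.g. log GDP
cycle, trend = hamilton_filter(y, 8, 4)    # h = 8, p = 4 for quarterly data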
QuantEcon.hamilton_filter
— MethodThis function applies "Hamilton filter" to <:AbstractVector
under random walk assumption.
http://econweb.ucsd.edu/~jhamilto/hp.pdf
Arguments
y::AbstractVector : data to be filtered
h::Integer : Time horizon that we are likely to predict incorrectly. Original paper recommends 2 for annual data, 8 for quarterly data, 24 for monthly data.
Note: For seasonal data, it's desirable for h to be an integer multiple of the number of observations in a year. e.g. For quarterly data, h = 8 is recommended.
Returns
y_cycle::Vector : cyclical component
y_trend::Vector : trend component
QuantEcon.hp_filter
— Method
Apply the Hodrick-Prescott filter to an AbstractVector.
Arguments
y::AbstractVector : data to be detrended
λ::Real : penalty on variation in trend
Returns
y_cyclical::Vector : cyclical component
y_trend::Vector : trend component
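Example (a sketch assuming the positional order (y, λ) and the return order (cyclical, trend) implied by the lists above):
using QuantEcon

y = cumsum(randn(200)) .+ 0.1 .* (1:200)
cycle, trend = hp_filter(y, 1600.0)   # λ = 1600 is conventional for quarterly data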
QuantEcon.impulse_response
— MethodGet the impulse response corresponding to our model.
Arguments
arma::ARMA : Instance of ARMA type
;impulse_length::Integer(30) : Length of horizon for calculating impulse response. Must be at least as long as the p fields of arma
Returns
psi::Vector{Float64} : psi[j] is the response at lag j of the impulse response. We take psi[1] as unity.
QuantEcon.interp
— Methodinterp(grid::AbstractVector, function_vals::AbstractVector)
Linear interpolation in one dimension
Examples
breaks = cumsum(0.1 .* rand(20))
vals = 0.1 .* sin.(breaks)
li = interp(breaks, vals)
# Do interpolation by treating `li` as a function you can pass scalars to
li(0.2)
# use broadcasting to evaluate at multiple points
li.([0.1, 0.2, 0.3])
QuantEcon.is_aperiodic
— MethodIndicate whether the Markov chain mc
is aperiodic.
Arguments
mc::MarkovChain
: MarkovChain instance.
Returns
::Bool
QuantEcon.is_irreducible
— MethodIndicate whether the Markov chain mc
is irreducible.
Arguments
mc::MarkovChain
: MarkovChain instance.
Returns
::Bool
QuantEcon.is_stable
— Methodis_stable(A)
General function for testing for stability of matrix $A$. Just checks that eigenvalues are less than 1 in absolute value.
Arguments
A::Matrix
The matrix we want to check
Returns
stable::Bool
Whether or not the matrix is stable
QuantEcon.is_stable
— MethodTest for stability of linear state space system. First removes the constant row and column.
Arguments
lss::LSS
The linear state space system
Returns
stable::Bool
Whether or not the system is stable
QuantEcon.k_array_rank
— Methodk_array_rank([T=Int], a)
Given an array a
of k distinct positive integers, sorted in ascending order, return its ranking in the lexicographic ordering of the descending sequences of the elements, following Combinatorial number system.
Notes
InexactError
exception will be thrown, or an incorrect value will be returned without warning if overflow occurs during the computation. It is the user's responsibility to ensure that the rank of the input array fits within the range of T
; a sufficient condition for it is binomial(BigInt(a[end]), BigInt(length(a))) <= typemax(T)
.
Arguments
T::Type{<:Integer} : The numeric type of ranking to be returned.
a::Vector{<:Integer} : Array of length k.
Returns
idx::T : Ranking of a.
QuantEcon.lae_est
— MethodA vectorized function that returns the value of the look ahead estimate at the values in the array y
.
Arguments
l::LAE : Instance of LAE type
y::Array : Array that becomes the y in l.p(l.x, y)
Returns
psi_vals::Vector : Density at (x, y)
QuantEcon.m_quadratic_sum
— MethodComputes the quadratic sum
\[ V = \sum_{j=0}^{\infty} A^j B A^{j'}\]
$V$ is computed by solving the corresponding discrete lyapunov equation using the doubling algorithm. See the documentation of solve_discrete_lyapunov
for more information.
Arguments
A::Matrix{Float64} : An n x n matrix as described above. We assume in order for convergence that the eigenvalues of $A$ have moduli bounded by unity
B::Matrix{Float64} : An n x n matrix as described above. We assume in order for convergence that the eigenvalues of $B$ have moduli bounded by unity
max_it::Int(50) : Maximum number of iterations
Returns
gamma1::Matrix{Float64}
: Represents the value $V$
QuantEcon.moment_sequence
— Method
Create an iterator to calculate the population mean and variance-covariance matrix for both $x_t$ and $y_t$, starting at the initial condition (mu_0, Sigma_0). Each iteration produces a 4-tuple of items (mu_x, mu_y, Sigma_x, Sigma_y) for the next period.
Arguments
lss::LSS
An instance of the Gaussian linear state space model
QuantEcon.n_states
— MethodNumber of states in the Markov chain mc
QuantEcon.next_k_array!
— Methodnext_k_array!(a)
Given an array a
of k distinct positive integers, sorted in ascending order, return the next k-array in the lexicographic ordering of the descending sequences of the elements, following Combinatorial number system. a
is modified in place.
Arguments
a::Vector{<:Integer}
: Array of length k.
Returns
a::Vector{<:Integer} : View of a.
Examples
julia> n, k = 4, 2;
julia> a = collect(1:2);
julia> while a[end] <= n
@show a
next_k_array!(a)
end
a = [1, 2]
a = [1, 3]
a = [2, 3]
a = [1, 4]
a = [2, 4]
a = [3, 4]
QuantEcon.nnash
— MethodCompute the limit of a Nash linear quadratic dynamic game.
Player i
minimizes
\[ \sum_{t=1}^{\infty}(x_t' r_i x_t + 2 x_t' w_i u_{it} +u_{it}' q_i u_{it} + u_{jt}' s_i u_{jt} + 2 u_{jt}' m_i u_{it})\]
subject to the law of motion
\[ x_{t+1} = A x_t + b_1 u_{1t} + b_2 u_{2t}\]
and a perceived control law $u_j(t) = - f_j x_t$ for the other player.
The solution computed in this routine is the $f_i$ and $p_i$ of the associated double optimal linear regulator problem.
Arguments
A : Corresponds to the above equation, should be of size (n, n)
B1 : As above, size (n, k_1)
B2 : As above, size (n, k_2)
R1 : As above, size (n, n)
R2 : As above, size (n, n)
Q1 : As above, size (k_1, k_1)
Q2 : As above, size (k_2, k_2)
S1 : As above, size (k_1, k_1)
S2 : As above, size (k_2, k_2)
W1 : As above, size (n, k_1)
W2 : As above, size (n, k_2)
M1 : As above, size (k_2, k_1)
M2 : As above, size (k_1, k_2)
;beta::Float64(1.0) : Discount rate
;tol::Float64(1e-8) : Tolerance level for convergence
;max_iter::Int(1000) : Maximum number of iterations allowed
Returns
F1::Matrix{Float64} : (k_1, n) matrix representing feedback law for agent 1
F2::Matrix{Float64} : (k_2, n) matrix representing feedback law for agent 2
P1::Matrix{Float64} : (n, n) matrix representing the steady-state solution to the associated discrete matrix Riccati equation for agent 1
P2::Matrix{Float64} : (n, n) matrix representing the steady-state solution to the associated discrete matrix Riccati equation for agent 2
QuantEcon.num_compositions
— Methodnum_compositions(m, n)
The total number of m-part compositions of n, which is equal to (n + m - 1) choose (m - 1).
Arguments
m::Int : Number of parts of composition
n::Int : Integer to decompose
Returns
::Int
: Total number of m-part compositions of n
QuantEcon.prior_to_filtered!
— MethodUpdates the moments (cur_x_hat
, cur_sigma
) of the time $t$ prior to the time $t$ filtering distribution, using current measurement $y_t$. The updates are according to
\[ \hat{x}^F = \hat{x} + \Sigma G' (G \Sigma G' + R)^{-1} (y - G \hat{x}) \\ \Sigma^F = \Sigma - \Sigma G' (G \Sigma G' + R)^{-1} G \Sigma\]
Arguments
k::Kalman : An instance of the Kalman filter
y : The current measurement
QuantEcon.qnwbeta
— MethodComputes nodes and weights for beta distribution.
Arguments
n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension
a::Union{Real, Vector{Real}} : First parameter of the beta distribution, along each dimension
b::Union{Real, Vector{Real}} : Second parameter of the beta distribution, along each dimension
Returns
nodes::Array{Float64} : An array of quadrature nodes
weights::Array{Float64} : An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwcheb
— Method
Computes multivariate Gauss-Chebyshev quadrature nodes and weights.
Arguments
n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension
a::Union{Real, Vector{Real}} : Lower endpoint along each dimension
b::Union{Real, Vector{Real}} : Upper endpoint along each dimension
Returns
nodes::Array{Float64} : An array of quadrature nodes
weights::Array{Float64} : An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwdist
— Methodqnwdist(
d::Distributions.ContinuousUnivariateDistribution, N::Int,
q0::Real=0.001, qN::Real=0.999, method::Union{T,Type{T}}=Quantile
) where T
Construct N
quadrature weights and nodes for distribution d
from the quantile q0
to the quantile qN
. method
can be one of:
Even : nodes will be evenly spaced between the quantiles
Quantile : nodes will be placed at evenly spaced quantile values
To construct the weights, consider splitting the nodes into cells centered at each node. Specifically, let the notation z_i mean the i-th node and let z_{i-1/2} be the point halfway between nodes z_{i-1} and z_i. Then, weights are determined as follows:
weights[1] = cdf(d, z_{1+1/2})
weights[N] = 1 - cdf(d, z_{N-1/2})
weights[i] = cdf(d, z_{i+1/2}) - cdf(d, z_{i-1/2})
for all i in 2:N-1
In effect, this strategy assigns node i all the probability associated with a random variable occurring within node i's cell.
The weights always sum to 1, so they can be used as a proper probability distribution. This means that E[f(x) | x ~ d] ≈ dot(f.(nodes), weights).
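Example (a sketch assuming the return order (nodes, weights)):
using QuantEcon, Distributions, LinearAlgebra

d = Normal(0.0, 1.0)
nodes, weights = qnwdist(d, 9)   # 9 nodes between the 0.001 and 0.999 quantiles
sum(weights)                     # 1.0
dot(weights, nodes.^2)           # ≈ Var(d) = 1, up to discretization error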
QuantEcon.qnwequi
— Function
Generates equidistributed sequences with the property that the average value of an integrable function evaluated over the sequence converges to the integral as n goes to infinity.
Arguments
n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension
a::Union{Real, Vector{Real}} : Lower endpoint along each dimension
b::Union{Real, Vector{Real}} : Upper endpoint along each dimension
kind::AbstractString("N") : One of the following:
- N - Niederreiter (default)
- W - Weyl
- H - Haber
- R - pseudo Random
Returns
nodes::Array{Float64} : An array of quadrature nodes
weights::Array{Float64} : An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwgamma
— Function
Computes nodes and weights for the gamma distribution.
Arguments
n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension
a::Union{Real, Vector{Real}} : Shape parameter of the gamma distribution, along each dimension. Must be positive. Default is 1
b::Union{Real, Vector{Real}} : Scale parameter of the gamma distribution, along each dimension. Must be positive. Default is 1
Returns
nodes::Array{Float64} : An array of quadrature nodes
weights::Array{Float64} : An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwlege
— Method
Computes multivariate Gauss-Legendre quadrature nodes and weights.
Arguments
n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension
a::Union{Real, Vector{Real}} : Lower endpoint along each dimension
b::Union{Real, Vector{Real}} : Upper endpoint along each dimension
Returns
nodes::Array{Float64} : An array of quadrature nodes
weights::Array{Float64} : An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwlogn
— Method
Computes quadrature nodes and weights for the multivariate lognormal distribution.
Arguments
n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension
mu::Union{Real, Vector{Real}} : Mean along each dimension
sig2::Union{Real, Vector{Real}, Matrix{Real}}(eye(length(n))) : Covariance structure
Returns
nodes::Array{Float64} : An array of quadrature nodes
weights::Array{Float64} : An array of corresponding quadrature weights
Notes
See also the documentation for qnwnorm
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwnorm
— MethodComputes nodes and weights for multivariate normal distribution.
Arguments
n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension
mu::Union{Real, Vector{Real}} : Mean along each dimension
sig2::Union{Real, Vector{Real}, Matrix{Real}}(eye(length(n))) : Covariance structure
Returns
nodes::Array{Float64} : An array of quadrature nodes
weights::Array{Float64} : An array of corresponding quadrature weights
Notes
This function has many methods. I try to describe them here.
n or mu can be a vector or a scalar. If just one is a scalar, the other is repeated to match the length of the other. If both are scalars, then the number of repeats is inferred from sig2.
sig2 can be a matrix, vector or scalar. If it is a matrix, it is treated as the covariance matrix. If it is a vector, it is considered the diagonal of a diagonal covariance matrix. If it is a scalar it is repeated along the diagonal as many times as necessary, where the number of repeats is determined by the length of either n and/or mu (whichever is a vector).
If all 3 are scalars, then 1d nodes are computed. mu and sig2 are treated as the mean and variance of a 1d normal distribution.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
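Example (a sketch assuming the return order (nodes, weights); with scalar inputs a one-dimensional rule is produced):
using QuantEcon, LinearAlgebra

nodes, weights = qnwnorm(10, 0.0, 1.0)   # 10-node rule for N(0, 1)
dot(weights, nodes.^2)                   # ≈ 1.0, the variance
dot(weights, nodes.^4)                   # ≈ 3.0, the fourth moment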
QuantEcon.qnwsimp
— MethodComputes multivariate Simpson quadrature nodes and weights.
Arguments
n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension
a::Union{Real, Vector{Real}} : Lower endpoint along each dimension
b::Union{Real, Vector{Real}} : Upper endpoint along each dimension
Returns
nodes::Array{Float64} : An array of quadrature nodes
weights::Array{Float64} : An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwtrap
— MethodComputes multivariate trapezoid quadrature nodes and weights.
Arguments
n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension
a::Union{Real, Vector{Real}} : Lower endpoint along each dimension
b::Union{Real, Vector{Real}} : Upper endpoint along each dimension
Returns
nodes::Array{Float64} : An array of quadrature nodes
weights::Array{Float64} : An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwunif
— MethodComputes quadrature nodes and weights for multivariate uniform distribution.
Arguments
n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension
a::Union{Real, Vector{Real}} : Lower endpoint along each dimension
b::Union{Real, Vector{Real}} : Upper endpoint along each dimension
Returns
nodes::Array{Float64} : An array of quadrature nodes
weights::Array{Float64} : An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.quadrect
— FunctionIntegrate the d-dimensional function f
on a rectangle with lower and upper bound for dimension i defined by a[i]
and b[i]
, respectively; using n[i]
points.
Arguments
f::Function : The function to integrate over. This should be a function that accepts as its first argument a matrix representing points along each dimension (each dimension is a column). Other arguments that need to be passed to the function are caught by args... and kwargs...
n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension
a::Union{Real, Vector{Real}} : Lower endpoint along each dimension
b::Union{Real, Vector{Real}} : Upper endpoint along each dimension
kind::AbstractString("lege") : Specifies which type of integration to perform. Valid values are:
"lege" : Gauss-Legendre
"cheb" : Gauss-Chebyshev
"trap" : trapezoid rule
"simp" : Simpson rule
"N" : Niederreiter equidistributed sequence
"W" : Weyl equidistributed sequence
"H" : Haber equidistributed sequence
"R" : Monte Carlo
args...(Void) : additional positional arguments to pass to f
;kwargs...(Void) : additional keyword arguments to pass to f
Returns
out::Float64 : The scalar that approximates the integral of f on the hypercube formed by [a, b]
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.random_discrete_dp
— Functionrandom_discrete_dp([rng], num_states, num_actions[, beta];
k=num_states, scale=1)
Generate a DiscreteDP randomly. The reward values are drawn from the normal distribution with mean 0 and standard deviation scale
.
Arguments
rng::AbstractRNG=GLOBAL_RNG : Random number generator.
num_states::Integer : Number of states.
num_actions::Integer : Number of actions.
beta::Real=rand(rng) : Discount factor. Randomly chosen from [0, 1) if not specified.
;k::Integer(num_states) : Number of possible next states for each state-action pair. Equal to num_states if not specified.
scale::Real(1) : Standard deviation of the normal distribution for the reward values.
Returns
ddp::DiscreteDP
: An instance of DiscreteDP.
QuantEcon.random_markov_chain
— Functionrandom_markov_chain([rng], n[, k])
Return a randomly sampled MarkovChain instance with n
states, where each state has k
states with positive transition probability.
Arguments
rng::AbstractRNG=GLOBAL_RNG : Random number generator.
n::Integer : Number of states.
k::Integer=n : Number of nonzero entries in each column of the matrix. Set to n if none specified.
Returns
mc::MarkovChain
: MarkovChain instance.
Examples
julia> using QuantEcon, Random
julia> rng = MersenneTwister(1234);
julia> mc = random_markov_chain(rng, 3);
julia> mc.p
3×3 LinearAlgebra.Transpose{Float64,Array{Float64,2}}:
0.590845 0.175952 0.233203
0.460085 0.106152 0.433763
0.794026 0.0601209 0.145853
julia> mc = random_markov_chain(rng, 3, 2);
julia> mc.p
3×3 LinearAlgebra.Transpose{Float64,Array{Float64,2}}:
0.0 0.200586 0.799414
0.701386 0.0 0.298614
0.753163 0.246837 0.0
QuantEcon.random_stochastic_matrix
— Functionrandom_stochastic_matrix([rng], n[, k])
Return a randomly sampled n x n stochastic matrix with k nonzero entries in each row.
Arguments
- rng::AbstractRNG=GLOBAL_RNG: Random number generator.
- n::Integer: Number of states.
- k::Integer=n: Number of nonzero entries in each row of the matrix. Set to n if none specified.
Returns
- p::Array: Stochastic matrix.
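For example (a usage sketch; the entries depend on the RNG):
julia> using QuantEcon, Random
julia> p = random_stochastic_matrix(MersenneTwister(0), 4, 2);
julia> sum(p, dims=2)   # each row of a stochastic matrix sums to one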
QuantEcon.recurrent_classes
— MethodFind the recurrent classes of the Markov chain mc
.
Arguments
- mc::MarkovChain: MarkovChain instance.
Returns
- ::Vector{Vector{Int}}: Vector of vectors that describe the recurrent classes of mc.
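A minimal sketch with a hand-checkable chain (state 1 is absorbing, state 2 is transient):
julia> using QuantEcon
julia> mc = MarkovChain([1.0 0.0; 0.5 0.5]);
julia> recurrent_classes(mc)   # expected: a single recurrent class, [[1]]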
QuantEcon.remove_constants
— MethodFinds the row and column, if any, that correspond to the constant term in an LSS system and removes them to get the matrix that needs to be checked for stability.
Arguments
- lss::LSS: The linear state space system
Returns
- A::Matrix: The matrix A with the constant row and column removed
QuantEcon.replicate
— FunctionSimulate num_reps
observations of $x_T$ and $y_T$ given $x_0 \sim N(\mu_0, \Sigma_0)$.
Arguments
- lss::LSS: An instance of the Gaussian linear state space model.
- t::Int = 10: The period that we want to replicate values for.
- num_reps::Int = 100: The number of replications we want
Returns
- x::Matrix: An n x num_reps matrix, where the j-th column is the j-th observation of $x_T$
- y::Matrix: A k x num_reps matrix, where the j-th column is the j-th observation of $y_T$
QuantEcon.ridder
— MethodFind a root of the function f on the bracketing interval [x1, x2] via Ridder's algorithm.
Arguments
- f::Function: The function for which you want to find a root
- x1::T: Lower border of the search interval
- x2::T: Upper border of the search interval
- ;maxiter::Int(500): Maximum number of bisection iterations
- ;xtol::Float64(1e-12): The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
- ;rtol::Float64(2*eps()): The routine converges when a root is known to lie within rtol times the value returned. Should be >= 0
Returns
- x::T: The found root
Exceptions
- Throws an ArgumentError if [x1, x2] does not form a bracketing interval
- Throws a ConvergenceError if the maximum number of iterations is exceeded
References
Matches ridder
function from scipy/scipy/optimize/Zeros/ridder.c
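For example (a sketch; tolerance keywords are left at their defaults):
julia> using QuantEcon
julia> f(x) = x^2 - 2;
julia> ridder(f, 1.0, 2.0)   # ≈ sqrt(2) ≈ 1.4142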
QuantEcon.robust_rule
— MethodSolves the robust control problem.
The algorithm here tricks the problem into a stacked LQ problem, as described in chapter 2 of Hansen-Sargent's text "Robustness". The optimal control with observed state is
\[ u_t = - F x_t\]
And the value function is $-x'Px$
Arguments
- rlq::RBLQ: Instance of RBLQ type
Returns
- F::Matrix{Float64}: The optimal control matrix from above
- P::Matrix{Float64}: The positive semi-definite matrix defining the value function
- K::Matrix{Float64}: the worst-case shock matrix $K$, where $w_{t+1} = K x_t$ is the worst case shock
QuantEcon.robust_rule_simple
— FunctionSolve the robust LQ problem
A simple algorithm for computing the robust policy $F$ and the corresponding value function $P$, based around straightforward iteration with the robust Bellman operator. This function is easier to understand but one or two orders of magnitude slower than robust_rule. For more information see the docstring of that method.
Arguments
- rlq::RBLQ: Instance of RBLQ type
- P_init::Matrix{Float64}(zeros(rlq.n, rlq.n)): The initial guess for the value function matrix
- ;max_iter::Int(80): Maximum number of iterations that are allowed
- ;tol::Real(1e-8): The tolerance for convergence
Returns
- F::Matrix{Float64}: The optimal control matrix from above
- P::Matrix{Float64}: The positive semi-definite matrix defining the value function
- K::Matrix{Float64}: the worst-case shock matrix $K$, where $w_{t+1} = K x_t$ is the worst case shock
QuantEcon.rouwenhorst
— FunctionRouwenhorst's method to approximate AR(1) processes.
The process follows
\[ y_t = \mu + \rho y_{t-1} + \epsilon_t\]
where $\epsilon_t \sim N (0, \sigma^2)$
Arguments
- N::Integer: Number of points in the Markov process
- ρ::Real: Persistence parameter in the AR(1) process
- σ::Real: Standard deviation of the random component of the AR(1) process
- μ::Real(0.0): Mean of the AR(1) process
Returns
- mc::MarkovChain{Float64}: Markov chain holding the state values and transition matrix
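A usage sketch (the parameter values are invented for illustration):
julia> using QuantEcon
julia> mc = rouwenhorst(5, 0.9, 0.02);
julia> mc.state_values   # evenly spaced grid produced by the Rouwenhorst construction
julia> mc.p              # 5×5 transition matrix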
QuantEcon.simplex_grid
— Methodsimplex_grid(m, n)
Construct an array consisting of the integer points in the (m-1)-dimensional simplex $\{x \mid x_1 + \cdots + x_m = n, x_i \geq 0\}$, or equivalently, the m-part compositions of n, which are listed in lexicographic order. The total number of the points (hence the length of the output array) is L = (n+m-1)!/(n!*(m-1)!) (i.e., (n+m-1) choose (m-1)).
Arguments
- m::Int: Dimension of each point. Must be a positive integer.
- n::Int: Number which the coordinates of each point sum to. Must be a nonnegative integer.
Returns
- out::Matrix{Int}: Array of shape (m, L) containing the integer points in the simplex, aligned in lexicographic order.
Notes
A grid of the (m-1)-dimensional unit simplex with n subdivisions along each dimension can be obtained by simplex_grid(m, n) / n
.
Examples
julia> simplex_grid(3, 4)
3×15 Matrix{Int64}:
0 0 0 0 0 1 1 1 1 2 2 2 3 3 4
0 1 2 3 4 0 1 2 3 0 1 2 0 1 0
4 3 2 1 0 3 2 1 0 2 1 0 1 0 0
References
A. Nijenhuis and H. S. Wilf, Combinatorial Algorithms, Chapter 5, Academic Press, 1978.
QuantEcon.simplex_index
— Methodsimplex_index(x, m, n)
Return the index of the point x in the lexicographic order of the integer points of the (m-1)-dimensional simplex $\{x \mid x_0 + \cdots + x_{m-1} = n\}$.
Arguments
- x::Vector{Int}: Integer point in the simplex, i.e., an array of m nonnegative integers that sum to n.
- m::Int: Dimension of each point. Must be a positive integer.
- n::Int: Number which the coordinates of each point sum to. Must be a nonnegative integer.
Returns
- idx::Int: Index of x.
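For example, using the grid shown in the simplex_grid(3, 4) example above (a sketch; the index is assumed to be 1-based and to match the column position in that grid):
julia> using QuantEcon
julia> simplex_index([1, 2, 1], 3, 4)   # 8 — (1, 2, 1) is the 8th column of simplex_grid(3, 4)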
QuantEcon.simulate!
— MethodFill X
with sample paths of the Markov chain mc
as columns. The resulting matrix has the state values of mc
as elements.
Arguments
- X::Matrix: Preallocated matrix to be filled with sample paths of the Markov chain mc. The element types in X should be the same as the type of the state values of mc
- mc::MarkovChain: MarkovChain instance.
- ;init=rand(1:n_states(mc)): Can be one of the following
  - blank: random initial condition for each chain
  - scalar: same initial condition for each chain
  - vector: cycle through the elements, applying each as an initial condition until all columns have an initial condition (allows for more columns than initial conditions)
QuantEcon.simulate
— MethodSimulate one sample path of the Markov chain mc
. The resulting vector has the state values of mc
as elements.
Arguments
- mc::MarkovChain: MarkovChain instance.
- ts_length::Int: Length of simulation
- ;init::Int=rand(1:n_states(mc)): Initial state
Returns
- X::Vector: Vector containing the sample path, with length ts_length
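A short sketch (the transition matrix and state labels are invented; the realized path is random):
julia> using QuantEcon
julia> mc = MarkovChain([0.9 0.1; 0.2 0.8], [:low, :high]);
julia> simulate(mc, 5; init=1)   # e.g. [:low, :low, :low, :high, :high]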
QuantEcon.simulate_indices!
— MethodFill X
with sample paths of the Markov chain mc
as columns. The resulting matrix has the indices of the state values of mc
as elements.
Arguments
- X::Matrix{Int}: Preallocated matrix to be filled with indices of the sample paths of the Markov chain mc.
- mc::MarkovChain: MarkovChain instance.
- ;init=rand(1:n_states(mc)): Can be one of the following
  - blank: random initial condition for each chain
  - scalar: same initial condition for each chain
  - vector: cycle through the elements, applying each as an initial condition until all columns have an initial condition (allows for more columns than initial conditions)
QuantEcon.simulate_indices
— MethodSimulate one sample path of the Markov chain mc
. The resulting vector has the indices of the state values of mc
as elements.
Arguments
- mc::MarkovChain: MarkovChain instance.
- ts_length::Int: Length of simulation
- ;init::Int=rand(1:n_states(mc)): Initial state
Returns
- X::Vector{Int}: Vector containing the sample path, with length ts_length
QuantEcon.simulation
— MethodCompute a simulated sample path assuming Gaussian shocks.
Arguments
- arma::ARMA: Instance of ARMA type
- ;ts_length::Integer(90): Length of simulation
- ;impulse_length::Integer(30): Horizon for calculating impulse response (see also docstring for impulse_response)
Returns
- X::Vector{Float64}: Simulation of the ARMA model arma
QuantEcon.smooth
— FunctionSmooth the data in x using convolution with a window of requested size and type.
Arguments
- x::Array: An array containing the data to smooth
- window_len::Int(7): An odd integer giving the length of the window
- window::AbstractString("hanning"): A string giving the window type. Possible values are flat, hanning, hamming, bartlett, or blackman
Returns
- out::Array: The array of smoothed data
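A usage sketch (the noisy series below is made up for illustration; window_len and window are passed positionally as documented above):
julia> using QuantEcon
julia> x = sin.(range(0, 4pi, length=200)) .+ 0.1 .* randn(200);
julia> x_smooth = smooth(x, 11, "hamming");   # smoothed version of the noisy sine wave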
QuantEcon.smooth
— MethodVersion of smooth
where window_len
and window
are keyword arguments
QuantEcon.smooth
— MethodArguments
- kn::Kalman: Kalman specifying the model. Initial value must be the prior for the t=1 period observation, i.e. $x_{1|0}$.
- y::AbstractMatrix: n x T matrix of observed data. n is the number of observed variables in one period. Each column is a vector of observations at each period.
Returns
- x_smoothed::AbstractMatrix: k x T matrix of smoothed mean of states. k is the number of states.
- logL::Real: log-likelihood of all observations
- sigma_smoothed::AbstractArray: k x k x T array of smoothed covariance matrices of states.
QuantEcon.solve
— MethodSolve the dynamic programming problem.
Parameters
- ddp::DiscreteDP: Object that contains the model parameters
- method::Type{T<:Algo}(VFI): Type name specifying the solution method. Acceptable arguments are VFI for value function iteration, PFI for policy function iteration, or MPFI for modified policy function iteration
- ;max_iter::Int(250): Maximum number of iterations
- ;epsilon::Float64(1e-3): Value for epsilon-optimality. Only used if method is VFI
- ;k::Int(20): Number of iterations for partial policy evaluation in modified policy iteration (irrelevant for other methods).
Returns
- ddpr::DPSolveResult{Algo}: Optimization result represented as a DPSolveResult. See DPSolveResult for details.
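As a small worked sketch (the two-state rewards R and transition array Q below are invented for illustration and are not from the original documentation):
julia> using QuantEcon
julia> R = [5.0 10.0; -1.0 2.0];            # R[s, a]
julia> Q = Array{Float64}(undef, 2, 2, 2);  # Q[s, a, s']
julia> Q[1, 1, :] = [0.5, 0.5]; Q[1, 2, :] = [0.0, 1.0];
julia> Q[2, 1, :] = [1.0, 0.0]; Q[2, 2, :] = [0.5, 0.5];
julia> ddp = DiscreteDP(R, Q, 0.95);
julia> res = solve(ddp, PFI);
julia> res.v, res.sigma   # optimal values and optimal policy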
QuantEcon.solve_discrete_lyapunov
— FunctionSolves the discrete Lyapunov equation.
The problem is given by
\[ AXA' - X + B = 0\]
$X$ is computed by using a doubling algorithm. In particular, we iterate to convergence on $X_j$ with the following recursions for $j = 1, 2, \ldots$ starting from $X_0 = B, a_0 = A$:
\[ a_j = a_{j-1} a_{j-1} \\ X_j = X_{j-1} + a_{j-1} X_{j-1} a_{j-1}'\]
Arguments
- A::Matrix{Float64}: An n x n matrix as described above. We assume in order for convergence that the eigenvalues of $A$ have moduli bounded by unity
- B::Matrix{Float64}: An n x n matrix as described above. We assume in order for convergence that the eigenvalues of $B$ have moduli bounded by unity
- max_it::Int(50): Maximum number of iterations
Returns
- gamma1::Matrix{Float64}: Represents the value $X$
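A hedged sketch with a stable diagonal A, so the fixed point can be checked directly against the equation above:
julia> using QuantEcon, LinearAlgebra
julia> A = [0.5 0.0; 0.0 0.3];
julia> B = Matrix(1.0I, 2, 2);
julia> X = solve_discrete_lyapunov(A, B);
julia> maximum(abs, A * X * A' - X + B)   # ≈ 0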
QuantEcon.solve_discrete_riccati
— FunctionSolves the discrete-time algebraic Riccati equation
The problem is defined as
\[ X = A'XA - (N + B'XA)'(B'XB + R)^{-1}(N + B'XA) + Q\]
via a modified structured doubling algorithm. An explanation of the algorithm can be found in the reference below.
Arguments
- A: k x k array.
- B: k x n array
- R: n x n array, should be symmetric and positive definite
- Q: k x k array, should be symmetric and non-negative definite
- N::Matrix{Float64}(zeros(size(R, 1), size(Q, 1))): n x k array
- tolerance::Float64(1e-10): Tolerance level for convergence
- max_iter::Int(50): The maximum number of iterations allowed
Note that A, B, R, Q can either be real (i.e. k, n = 1) or matrices.
Returns
- X::Matrix{Float64}: The fixed point of the Riccati equation; a k x k array representing the approximate solution
References
Chiang, Chun-Yueh, Hung-Yuan Fan, and Wen-Wei Lin. "STRUCTURED DOUBLING ALGORITHM FOR DISCRETE-TIME ALGEBRAIC RICCATI EQUATIONS WITH SINGULAR CONTROL WEIGHTING MATRICES." Taiwanese Journal of Mathematics 14, no. 3A (2010): pp-935.
QuantEcon.spectral_density
— MethodCompute the spectral density function.
The spectral density is the discrete time Fourier transform of the autocovariance function. In particular,
\[ f(w) = \sum_k \gamma(k) \exp(-ikw)\]
where $\gamma$ is the autocovariance function and the sum is over the set of all integers.
Arguments
- arma::ARMA: Instance of ARMA type
- ;two_pi::Bool(true): Compute the spectral density function over $[0, \pi]$ if false and $[0, 2 \pi]$ otherwise.
- ;res(1200): If res is a scalar then the spectral density is computed at res frequencies evenly spaced around the unit circle, but if res is an array then the function computes the response at the frequencies given by the array
Returns
- w::Vector{Float64}: The normalized frequencies at which h was computed, in radians/sample
- spect::Vector{Float64}: The frequency response
QuantEcon.stationary_distributions
— FunctionCompute stationary distributions of the Markov chain mc
, one for each recurrent class.
Arguments
- mc::MarkovChain{T}: MarkovChain instance.
Returns
- stationary_dists::Vector{Vector{T1}}: Vector of vectors that represent stationary distributions, where the element type T1 is Rational if T is Int (and equal to T otherwise).
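For example, with a two-state irreducible chain whose stationary distribution can be computed by hand (a sketch; the matrix is invented for illustration):
julia> using QuantEcon
julia> mc = MarkovChain([0.4 0.6; 0.2 0.8]);
julia> stationary_distributions(mc)   # ≈ [[0.25, 0.75]]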
QuantEcon.stationary_distributions
— MethodCompute the moments of the stationary distributions of $x_t$ and $y_t$ if possible. Computation is by iteration, starting from the initial conditions lss.mu_0 and lss.Sigma_0
Arguments
- lss::LSS: An instance of the Gaussian linear state space model
- ;max_iter::Int = 200: The maximum number of iterations allowed
- ;tol::Float64 = 1e-5: The tolerance level one wishes to achieve
Returns
- mu_x::Vector: Represents the stationary mean of $x_t$
- mu_y::Vector: Represents the stationary mean of $y_t$
- Sigma_x::Matrix: Represents the var-cov matrix
- Sigma_y::Matrix: Represents the var-cov matrix
QuantEcon.stationary_values!
— MethodComputes value and policy functions in infinite horizon model.
Arguments
- lq::LQ: instance of LQ type
Returns
- P::ScalarOrArray: n x n matrix in value function representation $V(x) = x'Px + d$
- d::Real: Constant in value function representation
- F::ScalarOrArray: Policy rule that specifies optimal control in each period
Notes
This function updates the P, d, and F fields on the lq instance in addition to returning them
QuantEcon.stationary_values
— MethodNon-mutating routine for solving for P, d, and F in the infinite horizon model.
See the docstring for stationary_values! for more explanation
QuantEcon.tauchen
— MethodTauchen's (1986) method for approximating an AR(1) process with a finite-state Markov chain
The process follows
\[ y_t = \mu + \rho y_{t-1} + \epsilon_t\]
where $\epsilon_t \sim N (0, \sigma^2)$
Arguments
- N::Integer: Number of points in the Markov process
- ρ::Real: Persistence parameter in the AR(1) process
- σ::Real: Standard deviation of the random component of the AR(1) process
- μ::Real(0.0): Mean of the AR(1) process
- n_std::Real(3): The number of standard deviations to each side the process should span
Returns
- mc::MarkovChain: Markov chain holding the state values and transition matrix
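A usage sketch (parameter values are invented; the grid span follows the n_std default documented above):
julia> using QuantEcon
julia> mc = tauchen(5, 0.9, 0.02);
julia> mc.state_values    # grid of 5 states for the approximating chain
julia> sum(mc.p, dims=2)  # each row of the transition matrix sums to one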
QuantEcon.update!
— MethodUpdates cur_x_hat and cur_sigma given an array y of length k. The full update, from one period to the next.
Arguments
- k::Kalman: An instance of the Kalman filter
- y: An array representing the current measurement
QuantEcon.update_values!
— MethodUpdate P and d from the value function representation in the finite horizon case
Arguments
- lq::LQ: instance of LQ type
Returns
- P::ScalarOrArray: n x n matrix in value function representation $V(x) = x'Px + d$
- d::Real: Constant in value function representation
Notes
This function updates the P and d fields on the lq instance in addition to returning them
QuantEcon.var_quadratic_sum
— MethodComputes the expected discounted quadratic sum
\[ q(x_0) = \mathbb{E} \sum_{t=0}^{\infty} \beta^t x_t' H x_t\]
Here ${x_t}$ is the VAR process $x_{t+1} = A x_t + C w_t$ with ${w_t}$ standard normal and $x_0$ the initial condition.
Arguments
- A::Union{Float64, Matrix{Float64}}: The n x n matrix described above (scalar if n = 1)
- C::Union{Float64, Matrix{Float64}}: The n x n matrix described above (scalar if n = 1)
- H::Union{Float64, Matrix{Float64}}: The n x n matrix described above (scalar if n = 1)
- beta::Float64: Discount factor in (0, 1)
- x_0::Union{Float64, Vector{Float64}}: The initial condition. A conformable array (of length n) or a scalar if n = 1
Returns
- q0::Float64: Represents the value $q(x_0)$
Notes
The formula for computing $q(x_0)$ is $q(x_0) = x_0' Q x_0 + v$ where
- $Q$ is the solution to $Q = H + \beta A' Q A$ and
- $v = \frac{trace(C' Q C) \beta}{1 - \beta}$
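A scalar sketch (parameter values invented), which can be checked against the formula above: with A = 0.9, C = 0.1, H = 1, beta = 0.95, the scalar Q = 1/(1 - 0.95*0.81) ≈ 4.338 and v = 0.95*0.01*Q/0.05 ≈ 0.824, so q(1) ≈ 5.16.
julia> using QuantEcon
julia> var_quadratic_sum(0.9, 0.1, 1.0, 0.95, 1.0)   # ≈ 5.16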
QuantEcon.@def_sim
— Macro@def_sim sim_name default_type_params begin
obs_typedef
end
Given a type definition for a single observation in a simulation (obs_typedef), evaluate that type definition as is, but also create a second type named sim_name as well as various methods on the new type.
The fields of sim_name will have the same name as the fields of obs_typedef, but will be arrays of whatever the type of the corresponding obs_typedef field was. The intention is for sim_name to be a struct of arrays (see https://en.wikipedia.org/wiki/AoS_and_SoA). If you want an array of structs, simply collect an array of instances of the type defined in obs_typedef. The struct of arrays storage format has better cache efficiency and data locality if you want to operate on all values of a particular field at once, rather than all the fields of a particular value.
In addition to the new type sim_name
, the following methods will be defined:
- sim_name(sz::NTuple{N,Int}): This is a constructor for sim_name that allocates arrays of size sz for each field. If obs_typedef included any type parameters, then the default values (specified in default_type_params) will be used.
- Base.endof(::sim_name): equal to the length of any of its fields
- Base.length(::sim_name): equal to the length of any of its fields
- The iterator protocol for sim_name. The type of each element of the iterator is the type defined in obs_typedef. This amounts to defining the following methods: Base.start(::sim_name)::Int, Base.next(::sim_name, ::Int)::Tuple{Observation,Int}, Base.done(::sim_name, ::Int)::Bool
- Base.getindex(sim::sim_name, ix::Int): This implements linear indexing for sim_name and will return an instance of the type defined in obs_typedef
Example
NOTE: the using MacroTools and the call to MacroTools.prettify are not necessary; they are only used here to clean up the output so it is easier to read
julia> using MacroTools
julia> macroexpand(:(@def_sim Simulation (T => Float64,) struct Observation{T<:Number}
c::T
k::T
i_z::Int
end
)) |> MacroTools.prettify
quote
struct Simulation{prairiedog, T <: Number}
c::Array{T, prairiedog}
k::Array{T, prairiedog}
i_z::Array{Int, prairiedog}
end
function Simulation{prairiedog}(sz::NTuple{prairiedog, Int})
c = Array{Float64, prairiedog}(sz)
k = Array{Float64, prairiedog}(sz)
i_z = Array{Int, prairiedog}(sz)
Simulation(c, k, i_z)
end
struct Observation{T <: Number}
c::T
k::T
i_z::Int
end
Base.endof(sim::Simulation) = length(sim.c)
Base.length(sim::Simulation) = endof(sim)
Base.start(sim::Simulation) = 1
Base.next(sim::Simulation, ix::Int) = (sim[ix], ix + 1)
Base.done(sim::Simulation, ix::Int) = ix >= length(sim)
function Base.getindex(sim::Simulation, ix::Int)
$(Expr(:boundscheck, true))
if ix > length(sim)
throw(BoundsError("$(length(sim))-element Simulation at index $(ix)"))
end
$(Expr(:boundscheck, :pop))
$(Expr(:inbounds, true))
out = Observation(sim.c[ix], sim.k[ix], sim.i_z[ix])
$(Expr(:inbounds, :pop))
return out
end
end
Internal
QuantEcon.DPSolveResult
— TypeDPSolveResult
is an object for retaining results and associated metadata after solving the model
Parameters
ddp::DiscreteDP
: DiscreteDP object
Returns
ddpr::DPSolveResult
: DiscreteDP results object
QuantEcon.VAREstimationMethod
— TypeTypes specifying the method for discrete_var
Base.:*
— MethodDefine matrix multiplication between a 3-dimensional array and a vector.
Matrix multiplication over the last dimension of $A$
Base.rand
— MethodMake multiple draws from the discrete distribution represented by a DiscreteRV
instance
Arguments
- d::DiscreteRV: The DiscreteRV type representing the distribution
- k::Int: Number of draws to make
Returns
- out::Vector{Int}: k draws from d
Base.rand
— MethodMake a single draw from the discrete distribution.
Arguments
- d::DiscreteRV: The DiscreteRV type representing the distribution
Returns
- out::Int: One draw from the discrete distribution
QuantEcon._compute_sequence
— MethodPrivate method implementing compute_sequence
when state is a scalar
QuantEcon._compute_sequence
— MethodPrivate method implementing compute_sequence when the state is a vector
QuantEcon._generate_a_indptr!
— MethodGenerate a_indptr
; stored in out
. s_indices
is assumed to be in sorted order.
Parameters
- num_states::Integer
- s_indices::AbstractVector{T}
- out::AbstractVector{T}: with length = num_states + 1
QuantEcon._has_sorted_sa_indices
— MethodCheck whether s_indices
and a_indices
are sorted in lexicographic order.
Parameters
s_indices
, a_indices
: Vectors
Returns
bool: Whether s_indices
and a_indices
are sorted.
QuantEcon._random_stochastic_matrix
— Method_random_stochastic_matrix([rng], n, m; k=n)
Generate a "non-square column stochstic matrix" of shape (n, m)
, which contains as columns m
probability vectors of length n
with k
nonzero entries.
Arguments
rng::AbstractRNG=GLOBAL_RNG
: Random number generator.n::Integer
: Number of states.m::Integer
: Number of probability vectors.;k::Integer(n)
: Number of nonzero entries in each column of the matrix. Set ton
if none specified.
Returns
p::Array
: Array of shape(n, m)
containingm
probability vectors of lengthn
as columns.
QuantEcon._solve!
— MethodModified Policy Function Iteration
QuantEcon._solve!
— MethodPolicy Function Iteration
NOTE: The epsilon is ignored in this method. It is only here so dispatch can go from solve(::DiscreteDP, ::Type{Algo})
to any of the algorithms. See solve
for further details
QuantEcon._solve!
— MethodImplements Value Iteration. NOTE: See solve for further details
QuantEcon.allcomb3
— MethodReturn combinations of each column of matrix A. It simplifies allcomb2 by using gridmake from QuantEcon
Arguments
- A::AbstractMatrix: N x M Matrix
Returns
- N^M x M Matrix, combination of each row of A.
Example
julia> allcomb3([1 4 7;
2 5 8;
3 6 9]) # numerical input
27×3 Array{Int64,2}:
1 4 7
1 4 8
1 4 9
1 5 7
1 5 8
1 5 9
1 6 7
1 6 8
1 6 9
2 4 7
⋮
2 6 9
3 4 7
3 4 8
3 4 9
3 5 7
3 5 8
3 5 9
3 6 7
3 6 8
3 6 9
QuantEcon.construct_1D_grid
— MethodConstruct a one-dimensional quantile grid of states
Arguments
- Sigma::AbstractMatrix: variance-covariance matrix of the standardized process
- Nm::Integer: number of grid points
- M::Integer: number of variables (M=1 corresponds to AR(1))
- n_sigmas::Real: number of standard errors determining the end points of the grid
- method::Quantile: method for grid making
Returns
- y1D: M x Nm matrix of variable grid
- y1Dbounds: bounds of each grid bin
QuantEcon.construct_1D_grid
— MethodConstruct a one-dimensional quadrature grid of states
Arguments
- ::ScalarOrArray: not used
- Nm::Integer: number of grid points
- M::Integer: number of variables (M=1 corresponds to AR(1))
- n_sigmas::Real: not used
- method::Quadrature: method for grid making
Returns
- y1D: M x Nm matrix of variable grid
- weights: weights on each grid
QuantEcon.construct_1D_grid
— MethodConstruct a one-dimensional evenly spaced grid of states
Arguments
- Sigma::ScalarOrArray: variance-covariance matrix of the standardized process
- Nm::Integer: number of grid points
- M::Integer: number of variables (M=1 corresponds to AR(1))
- n_sigmas::Real: number of standard errors determining the end points of the grid
- method::Even: method for grid making
Returns
- y1D: M x Nm matrix of variable grid
- nothing: nothing of type Void
QuantEcon.construct_prior_guess
— MethodConstruct prior guess for the quantile grid method
Arguments
- cond_mean::AbstractVector: conditional mean of each variable
- Nm::Integer: number of grid points
- ::AbstractMatrix: grid of variable
- y1Dbounds::AbstractMatrix: bounds of each grid bin
- method::Quantile: method for grid making
QuantEcon.construct_prior_guess
— MethodConstruct prior guess for the quadrature grid method
Arguments
- cond_mean::AbstractVector: conditional mean of each variable
- Nm::Integer: number of grid points
- y1D::AbstractMatrix: grid of variable
- weights::AbstractVector: weights of grid y1D
- method::Quadrature: method for grid making
QuantEcon.construct_prior_guess
— MethodConstruct prior guess for the evenly spaced grid method
Arguments
- cond_mean::AbstractVector: conditional mean of each variable
- Nm::Integer: number of grid points
- y1D::AbstractMatrix: grid of variable
- ::AbstractMatrix: bounds of each grid bin
- method::Even: method for grid making
QuantEcon.discrete_approximation
— FunctionCompute a discrete state approximation to a distribution with known moments, using the maximum entropy procedure proposed in Tanaka and Toda (2013)
p, lambda_bar, moment_error = discrete_approximation(D, T, Tbar, q, lambda0)
Arguments
- D::AbstractVector: vector of grid points of length N, where N is the number of points at which an approximation is to be constructed.
- T::Function: A function that accepts a single AbstractVector of length N and returns an L x N matrix of moments evaluated at each grid point, where L is the number of moments to be matched.
- Tbar::AbstractVector: length L vector of moments of the underlying distribution which should be matched
Optional
- q::AbstractVector: length N vector of prior weights for each point in D. The default is for each point to have an equal weight.
- lambda0::AbstractVector: length L vector of initial guesses for the dual problem variables. The default is a vector of zeros.
Returns
- p: (1 x N) vector of probabilities assigned to each grid point in D.
- lambda_bar: length L vector of dual problem variables which solve the maximum entropy problem
- moment_error: vector of errors in moments (defined by moments of discretization minus actual moments) of length L
QuantEcon.entropy_grad!
— MethodCompute gradient of objective function
Returns
- grad: length L gradient vector of the objective function evaluated at lambda
QuantEcon.entropy_hess!
— MethodCompute hessian of objective function
Returns
- hess: L x L hessian matrix of the objective function evaluated at lambda
QuantEcon.entropy_obj
— MethodCompute the maximum entropy objective function used in discrete_approximation
obj = entropy_obj(lambda, Tx, Tbar, q)
Arguments
- lambda::AbstractVector: length L vector of values of the dual problem variables
- Tx::AbstractMatrix: L x N matrix of moments evaluated at the grid points specified in discrete_approximation
- Tbar::AbstractVector: length L vector of moments of the underlying distribution which should be matched
- q::AbstractVector: length N vector of prior weights for each point in the grid.
Returns
- obj: scalar value of the objective function evaluated at lambda
QuantEcon.fix
— Functionfix(x)
Round x
towards zero. For arrays there is a mutating version fix!
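For example (a sketch; applying fix elementwise to an array is assumed based on the mention of fix! above):
julia> using QuantEcon
julia> fix(1.7)           # 1, rounded towards zero
julia> fix(-1.7)          # -1
julia> fix([1.2, -2.8])   # elementwise: values 1 and -2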
QuantEcon.getZ
— MethodSimple method to return an element $Z$ in the Riccati equation solver whose type is Float64
(to be accepted by the cond()
function)
Arguments
- BB::Float64: result of $B' B$
- gamma::Float64: parameter in the Riccati equation solver
- R::Float64
Returns
- ::Float64: element $Z$ in the Riccati equation solver
QuantEcon.getZ
— MethodSimple method to return an element $Z$ in the Riccati equation solver whose type is Float64
(to be accepted by the cond()
function)
Arguments
- BB::Union{Vector, Matrix}: result of $B' B$
- gamma::Float64: parameter in the Riccati equation solver
- R::Float64
Returns
- ::Float64: element $Z$ in the Riccati equation solver
QuantEcon.getZ
— MethodSimple method to return an element $Z$ in the Riccati equation solver whose type is Matrix (to be accepted by the cond()
function)
Arguments
- BB::Matrix: result of $B' B$
- gamma::Float64: parameter in the Riccati equation solver
- R::Matrix
Returns
- ::Matrix: element $Z$ in the Riccati equation solver
QuantEcon.go_backward
— MethodArguments
- kn::Kalman: Kalman specifying the model.
- x_fi::Vector: filtered mean of state for period $t$
- sigma_fi::Matrix: filtered covariance matrix of state for period $t$
- sigma_fo::Matrix: forecast of covariance matrix of state for period $t+1$ conditional on period $t$ observations
- x_s1::Vector: smoothed mean of state for period $t+1$
- sigma_s1::Matrix: smoothed covariance of state for period $t+1$
Returns
- x_s1::Vector: smoothed mean of state for period $t$
- sigma_s1::Matrix: smoothed covariance of state for period $t$
QuantEcon.gth_solve!
— MethodSame as gth_solve
, but overwrite the input A
, instead of creating a copy.
QuantEcon.log_likelihood
— MethodComputes the log-likelihood of period $t$
Arguments
- kn::Kalman: Kalman specifying the model. Current values must be the forecast for period $t$ observation conditional on the $t-1$ observation.
- y::AbstractVector: observations at period $t$
Returns
logL::Real
: log-likelihood of observations at period $t$
QuantEcon.min_var_trace
— MethodFind a unitary matrix U such that the diagonal components of U'AU are as close as possible to a multiple of the identity matrix
Arguments
- A::AbstractMatrix: square matrix
Returns
- U: unitary matrix
- fval: minimum value
QuantEcon.polynomial_moment
— MethodCompute the moment defining function used in discrete_approximation
T = polynomial_moment(X, mu, scaling_factor, mMoments)
Arguments
- X::AbstractVector: length N vector of grid points
- mu::Real: location parameter (conditional mean)
- scaling_factor::Real: scaling factor for numerical stability (typically the largest grid point)
- n_moments::Integer: number of polynomial moments
Returns
- T: moment defining function used in discrete_approximation
QuantEcon.random_probvec
— Methodrandom_probvec([rng], k[, m])
Return m randomly sampled probability vectors of size k.
Arguments
- rng::AbstractRNG=GLOBAL_RNG: Random number generator.
- k::Integer: Size of each probability vector.
- m::Integer: Number of probability vectors.
Returns
- a::Array: Matrix of shape (k, m), or Vector of shape (k,) if m is not specified, containing probability vector(s) as column(s).
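For example (a sketch; the entries depend on the RNG):
julia> using QuantEcon, Random
julia> a = random_probvec(MersenneTwister(0), 3, 4);
julia> sum(a, dims=1)   # every column is a probability vector and sums to one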
QuantEcon.s_wise_max!
— MethodPopulate out with max_a vals(s, a), where vals is represented as a Vector of size (num_sa_pairs,).
QuantEcon.s_wise_max!
— MethodPopulate out with max_a vals(s, a), where vals is represented as a Vector of size (num_sa_pairs,).
Also fills out_argmax with the Cartesian index associated with the indmax in each row
QuantEcon.s_wise_max!
— MethodPopulate out with max_a vals(s, a), where vals is represented as an AbstractMatrix of size (num_states, num_actions).
Also fills out_argmax with the column number associated with the indmax in each row
QuantEcon.s_wise_max!
— MethodPopulate out with max_a vals(s, a), where vals is represented as an AbstractMatrix of size (num_states, num_actions).
QuantEcon.s_wise_max
— MethodReturn the Vector max_a vals(s, a), where vals is represented as an AbstractMatrix of size (num_states, num_actions).
QuantEcon.standardize_var
— MethodReturn the standardized VAR(1) representation
Arguments
- b::AbstractVector: M x 1 constant term vector
- B::AbstractMatrix: M x M matrix of impact coefficients
- Psi::AbstractMatrix: M x M variance-covariance matrix of innovations
- M::Integer: number of variables of the VAR(1) model
Returns
- A::Matrix: impact coefficients of the standardized VAR(1) process
- C::AbstractMatrix: variance-covariance matrix of standardized model innovations
- mu::AbstractVector: mean of the standardized VAR(1) process
- Sigma::AbstractMatrix: variance-covariance matrix of the standardized VAR(1) process
QuantEcon.standardize_var
— MethodReturn the standardized AR(1) representation
Arguments
- b::Real: constant term
- B::Real: impact coefficient
- Psi::Real: variance of innovation
- M::Integer == 1: must be one since the function is for AR(1)
Returns
- A::Real: impact coefficient of the standardized AR(1) process
- C::Real: standard deviation of the innovation
- mu::Real: mean of the standardized AR(1) process
- Sigma::Real: variance of the standardized AR(1) process
QuantEcon.todense
— MethodIf A
is already dense, return A
as is
QuantEcon.todense
— MethodCustom version of full, which allows conversion to type T
QuantEcon.warn_persistency
— MethodCheck persistency when method is Quadrature and give a warning if needed
Arguments
- B::Union{Real, AbstractMatrix}: impact coefficient
- method::VAREstimationMethod: method for grid making
Returns
- nothing