QuantEcon
API documentation
Exported
QuantEcon.ARMA — Type
ARMA
Represents a scalar ARMA(p, q) process.
If $\phi$ and $\theta$ are scalars, then the model is understood to be
\[ X_t = \phi X_{t-1} + \epsilon_t + \theta \epsilon_{t-1}\]
where $\epsilon_t$ is a white noise process with standard deviation sigma.
If $\phi$ and $\theta$ are arrays or sequences, then the interpretation is the ARMA(p, q) model
\[ X_t = \phi_1 X_{t-1} + ... + \phi_p X_{t-p} + \epsilon_t + \theta_1 \epsilon_{t-1} + \ldots + \theta_q \epsilon_{t-q}\]
where
- $\phi = (\phi_1, \phi_2, \ldots , \phi_p)$
- $\theta = (\theta_1, \theta_2, \ldots , \theta_q)$
- $\sigma$ is a scalar, the standard deviation of the white noise
Fields
- phi::Vector: AR parameters $\phi_1, \ldots, \phi_p$.
- theta::Vector: MA parameters $\theta_1, \ldots, \theta_q$.
- p::Integer: Number of AR coefficients.
- q::Integer: Number of MA coefficients.
- sigma::Real: Standard deviation of white noise.
- ma_poly::Vector: MA polynomial -- filtering representation.
- ar_poly::Vector: AR polynomial -- filtering representation.
Examples
julia> phi = 0.5;
julia> theta = [0.0, -0.8];
julia> sigma = 1.0;
julia> lp = ARMA(phi, theta, sigma)
ARMA([0.5], [0.0, -0.8], 1, 2, 1.0, [1.0, 0.0, -0.8], [1.0, -0.5])
QuantEcon.CFEUtility — Type
CFEUtility
Type used to evaluate constant Frisch elasticity (CFE) utility. CFE utility takes the form
\[v(l) = \xi l^{1 + 1/\phi} / (1 + 1/\phi)\]
Additionally, this code assumes that if l < 1e-10 then
\[v(l) = \xi ((10^{-10})^{1 + 1/\phi} / (1 + 1/\phi) - (10^{-10})^{1/\phi} * (10^{-10} - l))\]
Fields
- ϕ::Float64: Frisch elasticity of labor supply.
- ξ::Float64: Scaling parameter for the utility function.
QuantEcon.CRRAUtility — Type
CRRAUtility
Type used to evaluate CRRA utility. CRRA utility takes the form
\[u(c) = \xi c^{1 - \gamma} / (1 - \gamma)\]
Additionally, this code assumes that if c < 1e-10 then
\[u(c) = \xi ((10^{-10})^{1 - \gamma} / (1 - \gamma) + (10^{-10})^{-\gamma} * (c - 10^{-10}))\]
Fields
- γ::Float64: Coefficient of relative risk aversion.
- ξ::Float64: Scaling parameter for the utility function.
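A minimal usage sketch (not part of the original docstring): it assumes the default positional constructor over the documented fields (γ, ξ) and that the utility type is callable on consumption values, as its description ("Type used to evaluate CRRA utility") suggests.
julia> using QuantEcon

julia> u = CRRAUtility(2.0, 1.0);   # assumed positional constructor: γ = 2.0, ξ = 1.0

julia> u(2.0)                       # by the formula above: 1.0 * 2.0^(1 - 2) / (1 - 2) = -0.5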
QuantEcon.DiscreteDP — Type
DiscreteDP
DiscreteDP type for specifying parameters for discrete dynamic programming model.
Fields
- R::Array{T,NR}: Reward array.
- Q::Array{T,NQ}: Transition probability array.
- beta::Float64: Discount factor.
- a_indices::Vector{Tind}: Action indices. Empty unless using SA formulation.
- a_indptr::Vector{Tind}: Action index pointers. Empty unless using SA formulation.
QuantEcon.DiscreteDP — Method
DiscreteDP(R, Q, beta)
DiscreteDP constructor for specifying parameters for discrete dynamic programming model using dense matrix formulation.
Arguments
- R::Array{T,NR}: Reward array.
- Q::Array{T,NQ}: Transition probability array.
- beta::Float64: Discount factor.
Returns
ddp::DiscreteDP: Constructor for DiscreteDP object.
QuantEcon.DiscreteDP — Method
DiscreteDP(R, Q, beta, s_indices, a_indices)
DiscreteDP constructor for specifying parameters for discrete dynamic programming model using state-action pair formulation.
Arguments
- R::Array{T,NR}: Reward array.
- Q::Array{T,NQ}: Transition probability array.
- beta::Float64: Discount factor.
- s_indices::Vector{Tind}: State indices.
- a_indices::Vector{Tind}: Action indices.
Returns
ddp::DiscreteDP: Constructor for DiscreteDP object.
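A small end-to-end sketch (not from the original docstrings): it builds a two-state, two-action model in the dense formulation and solves it with solve and the PFI algorithm type documented in this reference; the result fields follow the DPSolveResult conventions used in the entries below.
julia> using QuantEcon

julia> R = [1.0 0.0; 0.0 1.0];            # reward R[s, a]

julia> Q = zeros(2, 2, 2);                # transition probabilities Q[s, a, s']

julia> Q[1, 1, :] = [0.9, 0.1]; Q[1, 2, :] = [0.1, 0.9];

julia> Q[2, 1, :] = [0.9, 0.1]; Q[2, 2, :] = [0.1, 0.9];

julia> ddp = DiscreteDP(R, Q, 0.95);

julia> res = solve(ddp, PFI);             # policy iteration (see QuantEcon.solve and PFI)

julia> res.v, res.sigma;                  # optimal value function and policy indices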
QuantEcon.DiscreteRV — Type
DiscreteRV
Generates an array of draws from a discrete random variable with vector of probabilities given by q.
Fields
- q::TV1: A vector of non-negative probabilities that sum to 1, where TV1<:AbstractVector.
- Q::TV2: The cumulative sum of q, where TV2<:AbstractVector.
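A brief sketch of assumed usage, consistent with the Base.rand methods listed for this type: draws are returned as indices into q.
julia> using QuantEcon

julia> d = DiscreteRV([0.2, 0.3, 0.5]);

julia> rand(d);        # a single draw: an index in 1:3, drawn with the probabilities in q

julia> rand(d, 4);     # a vector of 4 such draws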
QuantEcon.EllipticalUtility — Type
EllipticalUtility
Type used to evaluate elliptical utility function. Elliptical utility takes the form
\[v(l) = b (1 - l^\mu)^{1 / \mu}\]
Fields
- b::Float64: Scaling parameter for the utility function.
- μ::Float64: Curvature parameter for the utility function.
QuantEcon.Kalman — Type
Kalman
Represents a Kalman filter for a linear Gaussian state space model.
Fields
- A: State transition matrix.
- G: Observation matrix.
- Q: State noise covariance matrix.
- R: Observation noise covariance matrix.
- k: Number of observed variables.
- n: Number of state variables.
- cur_x_hat: Current estimate of state mean.
- cur_sigma: Current estimate of state covariance.
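A minimal filtering sketch (assumptions: the constructor takes the matrices in the order (A, G, Q, R), and set_state! and update!, which appear in this reference, set the prior and perform one updating step).
julia> using QuantEcon, LinearAlgebra

julia> A = [0.9 0.0; 0.0 0.5]; G = [1.0 0.0];            # state transition and observation matrices

julia> Q = 0.01 * Matrix(1.0I, 2, 2); R = fill(0.1, 1, 1);   # state and observation noise covariances

julia> kn = Kalman(A, G, Q, R);                          # assumed constructor order (A, G, Q, R)

julia> set_state!(kn, zeros(2), Matrix(1.0I, 2, 2));     # prior mean and covariance for t = 1

julia> update!(kn, [0.3]);                               # one filtering/forecast update given observation y_1

julia> kn.cur_x_hat;                                     # updated state estimate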
QuantEcon.LAE — Type
LAE
A look ahead estimator associated with a given stochastic kernel p and a vector of observations X.
Fields
- p::Function: The stochastic kernel. Signature is p(x, y) and it should be vectorized in both inputs.
- X::Matrix: A vector containing observations. Note that this can be passed as any kind of AbstractArray and will be coerced into an n x 1 vector.
QuantEcon.LQ — Type
LQ(Q, R, A, B, C, N; bet=1.0, capT=nothing, rf=fill(NaN, size(R)...))
Main constructor for LQ type.
Specifies default arguments for all fields not part of the payoff function or transition equation.
Arguments
- Q::ScalarOrArray: k x k payoff coefficient for control variable u. Must be symmetric and nonnegative definite.
- R::ScalarOrArray: n x n payoff coefficient matrix for state variable x. Must be symmetric and nonnegative definite.
- A::ScalarOrArray: n x n coefficient on state in state transition.
- B::ScalarOrArray: n x k coefficient on control in state transition.
- C::ScalarOrArray(fill(zero(eltype(R)), size(R, 1))): n x j coefficient on random shock in state transition.
- N::ScalarOrArray(zero(B'A)): k x n cross product in payoff equation.
- ;bet::Real(1.0): Discount factor in [0, 1].
- ;capT::Union{Int, Nothing}(nothing): Terminal period in finite horizon problem.
- ;rf::ScalarOrArray(fill(NaN, size(R)...)): n x n terminal payoff in finite horizon problem. Must be symmetric and nonnegative definite.
Returns
lq::LQ: Instance of LQ type with initialized fields.
QuantEcon.LQ — Type
LQ
Linear quadratic optimal control of either infinite or finite horizon.
The infinite horizon problem can be written
\[\min \mathbb{E} \sum_{t=0}^{\infty} \beta^t r(x_t, u_t)\]
with
\[r(x_t, u_t) := x_t' R x_t + u_t' Q u_t + 2 u_t' N x_t\]
The finite horizon form is
\[\min \mathbb{E} \sum_{t=0}^{T-1} \beta^t r(x_t, u_t) + \beta^T x_T' R_f x_T\]
Both are minimized subject to the law of motion
\[x_{t+1} = A x_t + B u_t + C w_{t+1}\]
Here $x$ is n x 1, $u$ is k x 1, $w$ is j x 1 and the matrices are conformable for these dimensions. The sequence $\{w_t\}$ is assumed to be white noise, with zero mean and $\mathbb{E} w_t w_t' = I$, the j x j identity.
For this model, the time $t$ value (i.e., cost-to-go) function $V_t$ takes the form
\[x' P_T x + d_T\]
and the optimal policy is of the form $u_T = -F_T x_T$. In the infinite horizon case, $V, P, d$ and $F$ are all stationary.
Fields
- Q::ScalarOrArray: k x k payoff coefficient for control variable u. Must be symmetric and nonnegative definite.
- R::ScalarOrArray: n x n payoff coefficient matrix for state variable x. Must be symmetric and nonnegative definite.
- A::ScalarOrArray: n x n coefficient on state in state transition.
- B::ScalarOrArray: n x k coefficient on control in state transition.
- C::ScalarOrArray: n x j coefficient on random shock in state transition.
- N::ScalarOrArray: k x n cross product in payoff equation.
- bet::Real: Discount factor in [0, 1].
- capT::Union{Int, Nothing}: Terminal period in finite horizon problem.
- rf::ScalarOrArray: n x n terminal payoff in finite horizon problem. Must be symmetric and nonnegative definite.
- P::ScalarOrArray: n x n matrix in value function representation $V(x) = x'Px + d$.
- d::Real: Constant in value function representation.
- F::ScalarOrArray: Policy rule that specifies optimal control in each period.
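A scalar illustration, offered as a sketch under the assumption that scalar ScalarOrArray inputs and the positional defaults for C and N are accepted, as the constructor docstring above indicates; it uses compute_sequence, documented later in this reference.
julia> using QuantEcon

julia> lq = LQ(1.0, 1.0, 1.0, 1.0; bet=0.95);     # minimize sum of bet^t (x_t^2 + u_t^2) with x_{t+1} = x_t + u_t

julia> x_path, u_path, w_path = compute_sequence(lq, 1.0, 10);

julia> size(x_path), size(u_path);                # (1, 11) and (1, 10), per the compute_sequence entry below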
QuantEcon.LSS — Type
LSS
A type that describes the Gaussian Linear State Space Model of the form:
\[ x_{t+1} = A x_t + C w_{t+1} \\ y_t = G x_t + H v_t\]
where $\{w_t\}$ and $\{v_t\}$ are independent and standard normal with dimensions k and l respectively. The initial conditions are $\mu_0$ and $\Sigma_0$ for $x_0 \sim N(\mu_0, \Sigma_0)$. When $\Sigma_0=0$, the draw of $x_0$ is exactly $\mu_0$.
Fields
- A::Matrix: Part of the state transition equation. It should be n x n.
- C::Matrix: Part of the state transition equation. It should be n x m.
- G::Matrix: Part of the observation equation. It should be k x n.
- H::Matrix: Part of the observation equation. It should be k x l.
- k::Int: Dimension.
- n::Int: Dimension.
- m::Int: Dimension.
- l::Int: Dimension.
- mu_0::Vector: This is the mean of initial draw and is of length n.
- Sigma_0::Matrix: This is the variance of the initial draw and is n x n and also should be positive definite and symmetric.
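A small sketch, assuming a constructor that accepts A, C, G positionally with default observation noise and initial conditions (check the LSS constructor methods); it pairs the type with moment_sequence, documented later in this reference.
julia> using QuantEcon

julia> A = [0.8 0.0; 0.1 0.7]; C = 0.1 * [1.0 0.0; 0.0 1.0]; G = [1.0 1.0];

julia> lss = LSS(A, C, G);                        # assumed short constructor; H, mu_0, Sigma_0 left at defaults

julia> ms = moment_sequence(lss);

julia> mu_x, mu_y, Sigma_x, Sigma_y = first(ms);  # population moments at the initial period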
QuantEcon.LinInterp — Type
LinInterp
Linear interpolation in one dimension.
Fields
- breaks::AbstractVector: A sorted array of grid points on which to interpolate.
- vals::AbstractVector: The function values associated with each of the grid points.
Examples
julia> breaks = cumsum(0.1 .* rand(20));
julia> vals = 0.1 .* sin.(breaks);
julia> li = LinInterp(breaks, vals);
julia> li(0.2) # do interpolation via `call` method on a LinInterp object
0.019866933079506122
julia> li.([0.1, 0.2, 0.3]) # use broadcasting to evaluate at multiple points
3-element Vector{Float64}:
0.009983341664682815
0.019866933079506122
0.02955202066613396
QuantEcon.LogUtility — Type
LogUtility
Type used to evaluate log utility. Log utility takes the form
\[u(c) = \xi \log(c)\]
Additionally, this code assumes that if c < 1e-10 then
\[u(c) = \xi (\log(10^{-10}) + 10^{10} (c - 10^{-10}))\]
Fields
ξ::Float64: Scaling parameter for the utility function.
QuantEcon.MPFI — Type
MPFI
Modified Policy Iteration solution algorithm.
References
https://lectures.quantecon.org/jl/discrete_dp.html
QuantEcon.MVNSampler — Type
MVNSampler
A sampler for multivariate normal distributions.
Fields
- mu::Vector: Mean vector of the multivariate normal distribution.
- Sigma::Matrix: Covariance matrix of the multivariate normal distribution.
- Q::Matrix: Cholesky factor of the covariance matrix used for sampling.
QuantEcon.MVNSampler — Method
MVNSampler(mu, Sigma)
Construct a sampler for the multivariate normal distribution with mean vector mu and covariance matrix Sigma.
Arguments
- mu::Vector: Mean vector of the multivariate normal distribution.
- Sigma::Matrix: Covariance matrix of the multivariate normal distribution. Must be symmetric and positive semidefinite.
Returns
- MVNSampler: A sampler object that can be used with rand to generate samples.
Examples
julia> using QuantEcon, LinearAlgebra, Random
julia> n = 3;
julia> mu = zeros(n);
julia> r = -0.2;
julia> Sigma = fill(r, (n, n)); Sigma[diagind(Sigma)] = ones(n);
julia> d = MVNSampler(mu, Sigma);
julia> rng = MersenneTwister(12345);
julia> rand(rng, d)
3-element Vector{Float64}:
0.8087089406385097
-2.6078862871910893
-1.2034459855748247
julia> rand(rng, d, 4)
3×4 Matrix{Float64}:
0.585714 -0.286877 0.835413 0.8792
0.228359 -0.104968 0.543674 -0.388309
1.16821 -0.0262369 -1.10658 -1.84924
QuantEcon.MarkovChain — Type
MarkovChain
Finite-state discrete-time Markov chain.
Methods are available that provide useful information such as the stationary distributions, and communication and recurrent classes, and allow simulation of state transitions.
Fields
- p::AbstractMatrix: The transition matrix. Must be square, all elements must be nonnegative, and all rows must sum to unity.
- state_values::AbstractVector: Vector containing the values associated with the states.
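A short usage sketch tying the type to the chain utilities documented in this reference (stationary_distributions, period, is_irreducible, and simulate all appear in this API).
julia> using QuantEcon

julia> mc = MarkovChain([0.9 0.1; 0.2 0.8], ["low", "high"]);

julia> is_irreducible(mc), period(mc)
(true, 1)

julia> stationary_distributions(mc);   # unique stationary distribution, approximately [2/3, 1/3]

julia> simulate(mc, 5);                # a length-5 path of state values ("low"/"high")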
QuantEcon.MarkovChain — Method
MarkovChain(ddp, ddpr)
Returns the controlled Markov chain for a given policy sigma.
Arguments
- ddp::DiscreteDP: Object that contains the model parameters.
- ddpr::DPSolveResult: Object that contains result variables.
Returns
mc::MarkovChain: Controlled Markov chain.
QuantEcon.PFI — Type
PFI
Policy Iteration solution algorithm.
References
https://lectures.quantecon.org/jl/discrete_dp.html
QuantEcon.RBLQ — Type
RBLQ
Represents infinite horizon robust LQ control problems of the form
\[ \min_{u_t} \sum_t \beta^t {x_t' R x_t + u_t' Q u_t }\]
subject to
\[ x_{t+1} = A x_t + B u_t + C w_{t+1}\]
and with model misspecification parameter $\theta$.
Fields
- Q::Matrix{Float64}: The cost (payoff) matrix for the controls. See above for more. $Q$ should be k x k and symmetric and positive definite.
- R::Matrix{Float64}: The cost (payoff) matrix for the state. See above for more. $R$ should be n x n and symmetric and non-negative definite.
- A::Matrix{Float64}: The matrix that corresponds with the state in the state space system. $A$ should be n x n.
- B::Matrix{Float64}: The matrix that corresponds with the control in the state space system. $B$ should be n x k.
- C::Matrix{Float64}: The matrix that corresponds with the random process in the state space system. $C$ should be n x j.
- beta::Real: The discount factor in the robust control problem.
- theta::Real: The robustness factor in the robust control problem.
- k, n, j::Int: Dimensions of input matrices.
QuantEcon.SimplexGrid — Type
SimplexGrid
Iterator version of simplex_grid, i.e., iterator that iterates over the integer points in the (m-1)-dimensional simplex $\{x \mid x_1 + \cdots + x_m = n, x_i \geq 0\}$, or equivalently, the m-part compositions of n, in lexicographic order.
Fields
- m::Int: Dimension of each point. Must be a positive integer.
- n::Int: Number which the coordinates of each point sum to. Must be a nonnegative integer.
Examples
julia> sg = SimplexGrid(3, 4);
julia> for x in sg
@show x
end
x = [0, 0, 4]
x = [0, 1, 3]
x = [0, 2, 2]
x = [0, 3, 1]
x = [0, 4, 0]
x = [1, 0, 3]
x = [1, 1, 2]
x = [1, 2, 1]
x = [1, 3, 0]
x = [2, 0, 2]
x = [2, 1, 1]
x = [2, 2, 0]
x = [3, 0, 1]
x = [3, 1, 0]
x = [4, 0, 0]
QuantEcon.VFI — Type
VFI
Value Iteration solution algorithm.
References
https://lectures.quantecon.org/jl/discrete_dp.html
DSP.Periodograms.periodogram — Function
periodogram(x)
periodogram(x, window, window_len=7)
Computes the periodogram
\[I(w) = \frac{1}{n} | \sum_{t=0}^{n-1} x_t e^{itw} |^2\]
at the Fourier frequencies $w_j := 2 \frac{\pi j}{n}, j = 0, \ldots, n - 1$, using the fast Fourier transform. Only the frequencies $w_j$ in $[0, \pi]$ and corresponding values $I(w_j)$ are returned. If a window type is given then smoothing is performed.
Arguments
- x::Vector: A vector containing the data to analyze.
- window::AbstractString: A string giving the window type (optional). Possible values are flat, hanning, hamming, bartlett, or blackman.
- window_len::Int: An odd integer giving the length of the window (default: 7).
Returns
- w::Vector{Float64}: Fourier frequencies at which the periodogram is evaluated.
- I_w::Vector{Float64}: The periodogram at frequencies w.
Graphs.period — Method
period(mc)
Return the period of the Markov chain mc.
Arguments
mc::MarkovChain: MarkovChain instance.
Returns
::Int: Period of mc.
QuantEcon.F_to_K — Method
F_to_K(rlq, F)
Compute agent 2's best cost-minimizing response $K$, given $F$.
Arguments
- rlq::RBLQ: Instance of RBLQ type.
- F::Matrix{Float64}: A k x n array representing agent 1's policy.
Returns
- K::Matrix{Float64}: Agent's best cost minimizing response corresponding to $F$.
- P::Matrix{Float64}: The value function corresponding to $F$.
QuantEcon.K_to_F — Method
K_to_F(rlq, K)
Compute agent 1's best cost-minimizing response $F$, given $K$.
Arguments
- rlq::RBLQ: Instance of RBLQ type.
- K::Matrix{Float64}: A j x n array representing the worst case matrix.
Returns
- F::Matrix{Float64}: Agent's best cost minimizing response corresponding to $K$.
- P::Matrix{Float64}: The value function corresponding to $K$.
QuantEcon.RQ_sigma — Method
RQ_sigma(ddp, ddpr)
Method of RQ_sigma that extracts sigma from a DPSolveResult.
See other docstring for details.
Arguments
- ddp::DiscreteDP: Object that contains the model parameters.
- ddpr::DPSolveResult: Object that contains result variables.
Returns
- R_sigma::Array{Float64}: Reward vector for sigma, of length n.
- Q_sigma::Array{Float64}: Transition probability matrix for sigma, of shape (n, n).
QuantEcon.RQ_sigma — Method
RQ_sigma(ddp, sigma)
Given a policy sigma, return the reward vector R_sigma and the transition probability matrix Q_sigma.
Arguments
- ddp::DiscreteDP: Object that contains the model parameters.
- sigma::AbstractVector{Int}: Policy rule vector.
Returns
- R_sigma::Array{Float64}: Reward vector for sigma, of length n.
- Q_sigma::Array{Float64}: Transition probability matrix for sigma, of shape (n, n).
QuantEcon.ar_periodogram — Function
ar_periodogram(x, window="hanning", window_len=7)
Compute periodogram from data x, using prewhitening, smoothing and recoloring. The data is fitted to an AR(1) model for prewhitening, and the residuals are used to compute a first-pass periodogram with smoothing. The fitted coefficients are then used for recoloring.
Arguments
- x::Array: An array containing the data to analyze.
- window::AbstractString: A string giving the window type (default: "hanning"). Possible values are flat, hanning, hamming, bartlett, or blackman.
- window_len::Int: An odd integer giving the length of the window (default: 7).
Returns
- w::Vector{Float64}: Fourier frequencies at which the periodogram is evaluated.
- I_w::Vector{Float64}: The periodogram at frequencies w.
QuantEcon.autocovariance — Method
autocovariance(arma; num_autocov=16)
Compute the autocovariance function from the ARMA parameters at lags 0, 1, ..., num_autocov - 1, using the spectral density and the inverse Fourier transform.
Arguments
- arma::ARMA: Instance of ARMA type.
- ;num_autocov::Integer(16): The number of autocovariances to calculate.
Returns
::Vector{Float64}: The autocovariance function.
QuantEcon.b_operator — Method
b_operator(rlq, P)
The $B$ operator, mapping $P$ into
\[ B(P) := R - \beta^2 A'PB(Q + \beta B'PB)^{-1}B'PA + \beta A'PA\]
and also returning
\[ F := (Q + \beta B'PB)^{-1} \beta B'PA\]
Arguments
- rlq::RBLQ: Instance of RBLQ type.
- P::Matrix{Float64}: Size is n x n.
Returns
- F::Matrix{Float64}: The $F$ matrix as defined above.
- new_p::Matrix{Float64}: The matrix $P$ after applying the $B$ operator.
QuantEcon.backward_induction — Method
backward_induction(ddp, J[, v_term=zeros(num_states(ddp))])
Solve by backward induction a $J$-period finite horizon discrete dynamic program with stationary reward $r$ and transition probability functions $q$ and discount factor $\beta \in [0, 1]$.
The optimal value functions $v^{\ast}_1, \ldots, v^{\ast}_{J+1}$ and policy functions $\sigma^{\ast}_1, \ldots, \sigma^{\ast}_J$ are obtained by $v^{\ast}_{J+1} = v_{J+1}$, and
\[v^{\ast}_j(s) = \max_{a \in A(s)} r(s, a) + \beta \sum_{s' \in S} q(s'|s, a) v^{\ast}_{j+1}(s') \quad (s \in S)\]
and
\[\sigma^{\ast}_j(s) \in \operatorname*{arg\,max}_{a \in A(s)} r(s, a) + \beta \sum_{s' \in S} q(s'|s, a) v^*_{j+1}(s') \quad (s \in S)\]
for $j= J, \ldots, 1$, where the terminal value function $v_{J+1}$ is exogenously given by v_term.
Arguments
- ddp::DiscreteDP{T}: Object that contains the model parameters.
- J::Integer: Number of decision periods.
- v_term::AbstractVector{<:Real}=zeros(num_states(ddp)): Terminal value function of length equal to n (the number of states).
Returns
- vs::Matrix{S}: Array of shape (n, J+1) where vs[:,j] contains the optimal value function at period j = 1, ..., J+1.
- sigmas::Matrix{Int}: Array of shape (n, J) where sigmas[:,j] contains the optimal policy function at period j = 1, ..., J.
QuantEcon.bellman_operator! — Method
bellman_operator!(ddp, v, Tv, sigma)
The Bellman operator, which computes and returns the updated value function $Tv$ for a value function $v$.
Arguments
- ddp::DiscreteDP: Object that contains the model parameters.
- v::AbstractVector{T<:AbstractFloat}: The current guess of the value function.
- Tv::AbstractVector{T<:AbstractFloat}: A buffer array to hold the updated value function. Initial value not used and will be overwritten.
- sigma::AbstractVector: A buffer array to hold the policy function. Initial values not used and will be overwritten.
Returns
- Tv::typeof(Tv): Updated value function vector.
- sigma::typeof(sigma): Updated policy function vector.
QuantEcon.bellman_operator! — Method
bellman_operator!(ddp, ddpr)
Apply the Bellman operator using v=ddpr.v, Tv=ddpr.Tv, and sigma=ddpr.sigma.
Arguments
- ddp::DiscreteDP: Object that contains the model parameters.
- ddpr::DPSolveResult: Object that contains result variables.
Returns
- Tv::typeof(ddpr.Tv): Updated value function vector.
- sigma::typeof(ddpr.sigma): Updated policy function vector.
Notes
Updates ddpr.Tv and ddpr.sigma in place.
QuantEcon.bellman_operator! — Method
bellman_operator!(ddp, v, sigma)
The Bellman operator, which computes and returns the updated value function $Tv$ for a given value function $v$.
This function will fill the input v with Tv and the input sigma with the corresponding policy rule.
Arguments
- ddp::DiscreteDP: The ddp model.
- v::AbstractVector{T<:AbstractFloat}: The current guess of the value function. This array will be overwritten.
- sigma::AbstractVector: A buffer array to hold the policy function. Initial values not used and will be overwritten.
Returns
- Tv::Vector: Updated value function vector.
- sigma::typeof(sigma): Policy rule.
QuantEcon.bellman_operator — Method
bellman_operator(ddp, v)
The Bellman operator, which computes and returns the updated value function $Tv$ for a given value function $v$.
Arguments
- ddp::DiscreteDP: The ddp model.
- v::AbstractVector: The current guess of the value function.
Returns
- Tv::Vector: Updated value function vector.
QuantEcon.bisect — Method
bisect(f, x1, x2; maxiter=500, xtol=1e-12, rtol=2*eps())
Find a root of the function f on the bracketing interval [x1, x2] via bisection.
Arguments
- f::Function: The function whose root you want to find.
- x1::T: Lower border for search interval.
- x2::T: Upper border for search interval.
- ;maxiter::Int(500): Maximum number of bisection iterations.
- ;xtol::Float64(1e-12): The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
- ;rtol::Float64(2*eps()): The routine converges when a root is known to lie within rtol times the value returned. Should be ≥ 0.
Returns
- x::T: The found root.
Exceptions
- Throws an ArgumentError if [x1, x2] does not form a bracketing interval.
- Throws a ConvergenceError if the maximum number of iterations is exceeded.
References
Matches bisect function from scipy/scipy/optimize/Zeros/bisect.c.
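A quick sketch of the bracketing interface shared by bisect and the brent and brenth methods below; the expected root is the floating-point approximation of $\sqrt{2}$.
julia> using QuantEcon

julia> f(x) = x^2 - 2;       # f(1) < 0 < f(2), so [1, 2] is a bracketing interval

julia> bisect(f, 1.0, 2.0);  # ≈ 1.4142135623730951

julia> brent(f, 1.0, 2.0);   # same root via Brent's algorithm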
QuantEcon.brent — Method
brent(f, xa, xb; maxiter=500, xtol=1e-12, rtol=2*eps())
Find a root of the function f on the bracketing interval [xa, xb] via Brent's algorithm.
Arguments
- f::Function: The function whose root you want to find.
- xa::T: Lower border for search interval.
- xb::T: Upper border for search interval.
- ;maxiter::Int(500): Maximum number of iterations.
- ;xtol::Float64(1e-12): The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
- ;rtol::Float64(2*eps()): The routine converges when a root is known to lie within rtol times the value returned. Should be ≥ 0.
Returns
- x::T: The found root.
Exceptions
- Throws an ArgumentError if [xa, xb] does not form a bracketing interval.
- Throws a ConvergenceError if the maximum number of iterations is exceeded.
References
Matches brentq function from scipy/scipy/optimize/Zeros/bisectq.c.
QuantEcon.brenth — Method
brenth(f, xa, xb; maxiter=500, xtol=1e-12, rtol=2*eps())
Find a root of the function f on the bracketing interval [xa, xb] via a modified Brent's algorithm.
This routine uses a hyperbolic extrapolation formula instead of the standard inverse quadratic formula. Otherwise it is the original Brent's algorithm, as implemented in the brent function.
Arguments
- f::Function: The function whose root you want to find.
- xa::T: Lower border for search interval.
- xb::T: Upper border for search interval.
- ;maxiter::Int(500): Maximum number of iterations.
- ;xtol::Float64(1e-12): The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
- ;rtol::Float64(2*eps()): The routine converges when a root is known to lie within rtol times the value returned. Should be ≥ 0.
Returns
- x::T: The found root.
Exceptions
- Throws an ArgumentError if [xa, xb] does not form a bracketing interval.
- Throws a ConvergenceError if the maximum number of iterations is exceeded.
References
Matches brenth function from scipy/scipy/optimize/Zeros/bisecth.c.
QuantEcon.ckron — Function
ckron(arrays::AbstractArray...)
Repeatedly apply Kronecker products to the arrays. Equivalent to reduce(kron, arrays).
QuantEcon.communication_classes — Method
communication_classes(mc)
Find the communication classes of the Markov chain mc.
Arguments
mc::MarkovChain: MarkovChain instance.
Returns
::Vector{Vector{Int}}: Vector of vectors that describe the communication classes of mc.
QuantEcon.compute_deterministic_entropy — Method
compute_deterministic_entropy(rlq, F, K, x0)
Given $K$ and $F$, compute the value of deterministic entropy, which is $\sum_t \beta^t x_t' K'K x_t$ with $x_{t+1} = (A - BF + CK) x_t$.
Arguments
- rlq::RBLQ: Instance of RBLQ type.
- F::Matrix{Float64}: The policy function, a k x n array.
- K::Matrix{Float64}: The worst case matrix, a j x n array.
- x0::Vector{Float64}: The initial condition for state.
Returns
e::Float64: The deterministic entropy.
QuantEcon.compute_fixed_point — Method
compute_fixed_point(T, v; err_tol=1e-4, max_iter=100, verbose=2, print_skip=10)
Repeatedly apply a function to search for a fixed point.
Approximates $T^\infty v$, where $T$ is an operator (function) and $v$ is an initial guess for the fixed point. Iteration terminates either when $\|T^{k+1}(v) - T^k(v)\| < \mathrm{err\_tol}$ or when max_iter iterations have been exceeded.
Provided that $T$ is a contraction mapping or similar, the return value will be an approximation to the fixed point of $T$.
Arguments
- T::Function: A function representing the operator $T$.
- v::TV: The initial condition. An object of type TV.
- err_tol::Real(1e-4): Stopping tolerance for iterations.
- max_iter::Integer(100): Maximum number of iterations.
- verbose::Integer(2): Level of feedback (0 for no output, 1 for warnings only, 2 for warning and convergence messages during iteration).
- print_skip::Integer(10): If verbose == 2, how many iterations to apply between print messages.
Returns
::TV: The fixed point of the operator T. Has type TV.
Examples
julia> T(x, μ) = 4.0 * μ * x * (1.0 - x);
julia> x_star = compute_fixed_point(x->T(x, 0.3), 0.4); # (4μ - 1)/(4μ)
Compute iterate 10 with error 0.0023564830444494367
Compute iterate 20 with error 0.0002222571812867391
Converged in 24 steps
julia> x_star
0.16702641162980347
QuantEcon.compute_greedy! — Method
compute_greedy!(ddp, ddpr)
Compute the $v$-greedy policy.
Arguments
- ddp::DiscreteDP: Object that contains the model parameters.
- ddpr::DPSolveResult: Object that contains result variables.
Returns
- sigma::Vector{Int}: Array containing v-greedy policy rule.
Notes
Modifies ddpr.sigma and ddpr.Tv in place.
QuantEcon.compute_greedy — Method
compute_greedy(ddp, v)
Compute the $v$-greedy policy.
Arguments
- ddp::DiscreteDP: Object that contains the model parameters.
- v::AbstractVector: Value function vector of length n.
Returns
- sigma::Vector{Int}: v-greedy policy vector, of length n.
QuantEcon.compute_loglikelihood — Method
compute_loglikelihood(kn, y)
Computes the log-likelihood of the entire set of observations.
Arguments
- kn::Kalman: Kalman instance specifying the model. Initial value must be the prior for the t=1 period observation, i.e. $x_{1|0}$.
- y::AbstractMatrix: n x T matrix of observed data. n is the number of observed variables in one period. Each column is a vector of observations at each period.
Returns
logL::Real: Log-likelihood of all observations.
QuantEcon.compute_sequence — Function
compute_sequence(lq, x0, ts_length=100)
Compute and return the optimal state and control sequence, assuming innovation $N(0,1)$.
Arguments
- lq::LQ: Instance of LQ type.
- x0::ScalarOrArray: Initial state.
- ts_length::Integer(100): Maximum number of periods for which to return process. If lq instance is finite horizon type, the sequences are returned only for min(ts_length, lq.capT).
Returns
- x_path::Matrix{Float64}: An n x T+1 matrix, where the t-th column represents $x_t$.
- u_path::Matrix{Float64}: A k x T matrix, where the t-th column represents $u_t$.
- w_path::Matrix{Float64}: An n x T+1 matrix, where the t-th column represents lq.C*N(0,1).
QuantEcon.d_operator — Method
d_operator(rlq, P)
The $D$ operator, mapping $P$ into
\[ D(P) := P + PC(\theta I - C'PC)^{-1} C'P\]
Arguments
- rlq::RBLQ: Instance of RBLQ type.
- P::Matrix{Float64}: Size is n x n.
Returns
dP::Matrix{Float64}: The matrix $P$ after applying the $D$ operator.
QuantEcon.discrete_var — Function
discrete_var(b, B, Psi, Nm, n_moments=2, method=Even(), n_sigmas=sqrt(Nm-1))
Compute a finite-state Markov chain approximation to a VAR(1) process of the form
\[ y_{t+1} = b + By_{t} + \Psi^{\frac{1}{2}}\epsilon_{t+1}\]
where $\epsilon_{t+1}$ is a vector of independent standard normal innovations of length M.
Arguments
- b::Union{Real, AbstractVector}: Constant vector of length M. M = 1 corresponds to the scalar case.
- B::Union{Real, AbstractMatrix}: M x M matrix of impact coefficients.
- Psi::Union{Real, AbstractMatrix}: M x M variance-covariance matrix of the innovations. discrete_var only accepts non-singular variance-covariance matrices Psi.
- Nm::Integer > 3: Desired number of discrete points in each dimension.
Optional
- n_moments::Integer: Desired number of moments to match. The default is 2.
- method::VAREstimationMethod: Specify the method used to determine the grid points. Accepted inputs are Even(), Quantile(), or Quadrature(). Please see the paper for more details.
- n_sigmas::Real: If the Even() option is specified, n_sigmas is used to determine the number of unconditional standard deviations used to set the endpoints of the grid. The default is sqrt(Nm-1).
Returns
- P: Nm^M x Nm^M probability transition matrix. Each row corresponds to a discrete conditional probability distribution over the state M-tuples in X.
- X: M x Nm^M matrix of states. Each column corresponds to an M-tuple of values which correspond to the state associated with each row of P.
NOTES
- discrete_var only constructs tensor product grids where each dimension contains the same number of points. For this reason it is recommended that this code not be used for problems of more than about 4 or 5 dimensions due to curse of dimensionality issues.
- Future updates will allow for singular variance-covariance matrices and sparse grid specifications.
Reference
- Farmer, L. E., & Toda, A. A. (2017). "Discretizing nonlinear, non‐Gaussian Markov processes with exact conditional moments," Quantitative Economics, 8(2), 651-683.
QuantEcon.divide_bracket — Method
divide_bracket(f, x1, x2, n=50)
Given a function f defined on the interval [x1, x2], subdivide the interval into n equally spaced segments and search for sign changes of the function. Each sign change brackets a root; all of the bracketing pairs that are found are returned.
Arguments
- f::Function: The function you want to bracket.
- x1::T: Lower border for search interval.
- x2::T: Upper border for search interval.
- n::Int(50): The number of sub-intervals to divide [x1, x2] into.
Returns
- x1b::Vector{T}: Vector of lower borders of bracketing intervals.
- x2b::Vector{T}: Vector of upper borders of bracketing intervals.
References
This is zbrack from Numerical Recipes in C++.
QuantEcon.do_quad — Method
do_quad(f, nodes, weights, args...; kwargs...)
Approximate the integral of f, given quadrature nodes and weights.
Arguments
- f::Function: A callable function that is to be approximated over the domain spanned by nodes.
- nodes::Array: Quadrature nodes.
- weights::Array: Quadrature weights.
- args...(Void): Additional positional arguments to pass to f.
- ;kwargs...(Void): Additional keyword arguments to pass to f.
Returns
- out::Float64: The scalar that approximates the integral of f on the hypercube formed by [a, b].
QuantEcon.estimate_mc_discrete — Method
estimate_mc_discrete(X)
Accepts the simulation of a discrete state Markov chain and estimates the transition probabilities.
Let $S = s_1, s_2, \ldots, s_N$ with $s_1 < s_2 < \ldots < s_N$ be the discrete states of a Markov chain. Furthermore, let $P$ be the corresponding stochastic transition matrix.
Given a history of observations, $\{X\}_{t=0}^{T}$ with $x_t \in S \forall t$, we would like to estimate the transition probabilities in $P$ with $p_{ij}$ as the ith row and jth column of $P$. For $x_t = s_i$ and $x_{t-1} = s_j$, let $P(x_t | x_{t-1})$ be defined as $p_{i,j}$ element of the stochastic matrix. The likelihood function is then given by
\[ L(\{X\}^t; P) = \text{Prob}(x_1) \prod_{t=2}^{T} P(x_t | x_{t-1})\]
The maximum likelihood estimate is then just given by the number of times a transition from $s_i$ to $s_j$ is observed divided by the number of times $s_i$ was observed.
Note: Because of the estimation procedure used, only states that are observed in the history appear in the estimated Markov chain... It can't divine whether there are unobserved states in the original Markov chain.
For more info, refer to:
- http://www.stat.cmu.edu/~cshalizi/462/lectures/06/markov-mle.pdf
- https://stats.stackexchange.com/questions/47685/calculating-log-likelihood-for-given-mle-markov-chains
Arguments
X::Vector{T}: Simulated history of Markov states.
Returns
mc::MarkovChain{T}: A Markov chain holding the state values and transition matrix.
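A round-trip sketch (assuming simulate returns the history of state values, as the simulate entries in this API indicate): simulate a known chain, then recover an estimate of its transition matrix.
julia> using QuantEcon

julia> mc = MarkovChain([0.9 0.1; 0.2 0.8], [1.0, 2.0]);

julia> X = simulate(mc, 100_000);        # observed history of state values

julia> mc_hat = estimate_mc_discrete(X);

julia> mc_hat.p;                         # should be close to [0.9 0.1; 0.2 0.8]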
QuantEcon.evaluate_F — Method
evaluate_F(rlq, F)
Given a fixed policy $F$, with the interpretation $u = -F x$, this function computes the matrix $P_F$ and constant $d_F$ associated with discounted cost $J_F(x) = x' P_F x + d_F$.
Arguments
- rlq::RBLQ: Instance of RBLQ type.
- F::Matrix{Float64}: The policy function, a k x n array.
Returns
- K_F::Matrix{Float64}: Worst case policy.
- P_F::Matrix{Float64}: Matrix for discounted cost.
- d_F::Float64: Constant for discounted cost.
- O_F::Matrix{Float64}: Matrix for discounted entropy.
- o_F::Float64: Constant for discounted entropy.
QuantEcon.evaluate_policy — Method
evaluate_policy(ddp, ddpr)
Method of evaluate_policy that extracts sigma from a DPSolveResult.
See other docstring for details.
Arguments
- ddp::DiscreteDP: Object that contains the model parameters.
- ddpr::DPSolveResult: Object that contains result variables.
Returns
- v_sigma::Array{Float64}: Value vector of sigma, of length n.
QuantEcon.evaluate_policy — Method
evaluate_policy(ddp, sigma)
Compute the value of a policy.
Arguments
- ddp::DiscreteDP: Object that contains the model parameters.
- sigma::AbstractVector{T<:Integer}: Policy rule vector.
Returns
- v_sigma::Array{Float64}: Value vector of sigma, of length n.
QuantEcon.expand_bracket — Method
expand_bracket(f, x1, x2; ntry=50, fac=1.6)
Given a function f and an initial guessed range x1 to x2, the routine expands the range geometrically until a root is bracketed by the returned values x1 and x2, or until the range becomes unacceptably large (in which case a ConvergenceError is thrown).
Arguments
- f::Function: The function you want to bracket.
- x1::T: Initial guess for lower border of bracket.
- x2::T: Initial guess for upper border of bracket.
- ;ntry::Int(50): The maximum number of expansion iterations.
- ;fac::Float64(1.6): Expansion factor (higher ⟶ larger interval size jumps).
Returns
- x1::T: The lower end of an actual bracketing interval.
- x2::T: The upper end of an actual bracketing interval.
References
This method is zbrac from Numerical Recipes in C++.
Exceptions
- Throws a ConvergenceError if the maximum number of iterations is exceeded.
QuantEcon.filtered_to_forecast! — Method
filtered_to_forecast!(k)
Updates the moments of the time $t$ filtering distribution to the moments of the predictive distribution, which becomes the time $t+1$ prior.
Arguments
k::Kalman: An instance of the Kalman filter.
Returns
nothing: This function modifies the Kalman filter in place.
QuantEcon.golden_method — Method
golden_method(f, a, b; tol=eps()*10, maxit=1000)
Applies Golden-section search to search for the maximum of a function elementwise over intervals specified by vectors.
Arguments
- f::Function: Function to maximize. Should accept a vector and return a vector of function values.
- a::AbstractVector: Vector of lower bounds for each search interval.
- b::AbstractVector: Vector of upper bounds for each search interval.
- tol::Real=eps()*10: Convergence tolerance.
- maxit::Int=1000: Maximum number of iterations.
Returns
- x::AbstractVector: Vector of points where the function is maximized in each interval.
- fx::AbstractVector: Vector of maximum function values.
QuantEcon.golden_method — Method
golden_method(f, a, b; tol=10*eps(), maxit=1000)
Applies Golden-section search to search for the maximum of a function in the interval (a, b).
See: https://en.wikipedia.org/wiki/Golden-section_search
Arguments
- f::Function: Function to maximize. Should accept a real number and return a real number.
- a::Real: Lower bound of the search interval.
- b::Real: Upper bound of the search interval.
- tol::Float64=10*eps(): Convergence tolerance.
- maxit::Int=1000: Maximum number of iterations.
Returns
- x::Real: Point where the function is maximized.
- fx::Float64: Maximum function value.
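A one-line sketch of the scalar method; the maximizer is known analytically, so the expected result is recorded in the comment.
julia> using QuantEcon

julia> x, fx = golden_method(x -> -(x - 2.0)^2, 0.0, 5.0);   # maximum at x = 2 with value 0

julia> isapprox(x, 2.0; atol=1e-6)
true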
QuantEcon.gridmake — Function
gridmake(arrays::Union{AbstractVector,AbstractMatrix}...)
Expand one or more vectors (or matrices) into a matrix where rows span the cartesian product of combinations of the input arrays. Each column of the input arrays will correspond to one column of the output matrix. The first array varies the fastest (see example).
Examples
julia> x = [1, 2, 3]; y = [10, 20]; z = [100, 200];
julia> gridmake(x, y, z)
12×3 Matrix{Int64}:
1 10 100
2 10 100
3 10 100
1 20 100
2 20 100
3 20 100
1 10 200
2 10 200
3 10 200
1 20 200
2 20 200
3 20 200
QuantEcon.gridmake! — Method
gridmake!(out::AbstractMatrix, arrays::AbstractVector...)
Like gridmake, but fills a pre-populated array. out must have size (prod(map(length, arrays)), length(arrays)).
QuantEcon.gth_solve — Method
gth_solve(A)
This routine computes the stationary distribution of an irreducible Markov transition matrix (stochastic matrix) or transition rate matrix (generator matrix) $A$.
More generally, given a Metzler matrix (square matrix whose off-diagonal entries are all nonnegative) $A$, this routine solves for a nonzero solution $x$ to $x (A - D) = 0$, where $D$ is the diagonal matrix for which the rows of $A - D$ sum to zero (i.e., $D_{ii} = \sum_j A_{ij}$ for all $i$).
One (and only one, up to normalization) nonzero solution exists corresponding to each recurrent class of $A$, and in particular, if $A$ is irreducible, there is a unique solution; when there is more than one solution, the routine returns the solution that contains in its support the first index $i$ such that no path connects $i$ to any index larger than $i$. The solution is normalized so that its 1-norm equals one.
This routine implements the Grassmann-Taksar-Heyman (GTH) algorithm (Grassmann, Taksar, and Heyman 1985), a numerically stable variant of Gaussian elimination, where only the off-diagonal entries of $A$ are used as the input data. For a nice exposition of the algorithm, see Stewart (2009), Chapter 10.
Arguments
A::Matrix{T}: Stochastic matrix or generator matrix. Must be of shape n x n.
Returns
x::Vector{T}: Stationary distribution of $A$.
References
- W. K. Grassmann, M. I. Taksar and D. P. Heyman, "Regenerative Analysis and Steady State Distributions for Markov Chains, " Operations Research (1985), 1107-1116.
- W. J. Stewart, Probability, Markov Chains, Queues, and Simulation, Princeton University Press, 2009.
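A two-state sketch; for this stochastic matrix the stationary distribution is [2/3, 1/3], which the comment records.
julia> using QuantEcon

julia> gth_solve([0.9 0.1; 0.2 0.8]);   # ≈ [0.6667, 0.3333]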
QuantEcon.hamilton_filter — Method
hamilton_filter(y, h, p)
Apply Hamilton filter to AbstractVector.
http://econweb.ucsd.edu/~jhamilto/hp.pdf
Arguments
- y::AbstractVector: Data to be filtered.
- h::Integer: Time horizon that we are likely to predict incorrectly. Original paper recommends 2 for annual data, 8 for quarterly data, 24 for monthly data.
- p::Integer: Number of lags in regression. Must be greater than h. Note: For seasonal data, it's desirable for p and h to be integer multiples of the number of observations in a year. E.g., for quarterly data, h = 8 and p = 4 are recommended.
Returns
- y_cycle::Vector: Cyclical component.
- y_trend::Vector: Trend component.
QuantEcon.hamilton_filter — Method
hamilton_filter(y, h)
Apply Hamilton filter to AbstractVector under random walk assumption.
http://econweb.ucsd.edu/~jhamilto/hp.pdf
Arguments
- y::AbstractVector: Data to be filtered.
- h::Integer: Time horizon that we are likely to predict incorrectly. Original paper recommends 2 for annual data, 8 for quarterly data, 24 for monthly data. Note: For seasonal data, it's desirable for h to be an integer multiple of the number of observations in a year. E.g., for quarterly data, h = 8 is recommended.
Returns
- y_cycle::Vector: Cyclical component.
- y_trend::Vector: Trend component.
QuantEcon.hp_filter — Method
hp_filter(y, λ)
Apply Hodrick-Prescott filter to AbstractVector.
Arguments
- y::AbstractVector: Data to be detrended.
- λ::Real: Penalty on variation in trend.
Returns
- y_cyclical::Vector: Cyclical component.
- y_trend::Vector: Trend component.
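A short sketch on synthetic data; the return order (cycle, then trend) follows the Returns list above, and λ = 1600 is the conventional value for quarterly data.
julia> using QuantEcon

julia> y = collect(1.0:0.5:40.0) .+ 0.3 .* sin.(1:79);   # trend plus a small cyclical wiggle

julia> cycle, trend = hp_filter(y, 1600.0);

julia> length(cycle) == length(trend) == length(y)
true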
QuantEcon.impulse_response — Method
impulse_response(arma; impulse_length=30)
Get the impulse response corresponding to our model.
Arguments
- arma::ARMA: Instance of ARMA type.
- ;impulse_length::Integer(30): Length of horizon for calculating impulse response. Must be at least as long as the p fields of arma.
Returns
- psi::Vector{Float64}: psi[j] is the response at lag j of the impulse response. We take psi[1] as unity.
QuantEcon.interp — Method
interp(grid, function_vals)
Linear interpolation in one dimension.
Arguments
- grid::AbstractVector: A vector of grid points (will be sorted if not already sorted).
- function_vals::AbstractVector: The function values associated with each grid point.
Returns
LinInterp: A LinInterp object that can be called to perform linear interpolation.
Examples
julia> breaks = cumsum(0.1 .* rand(20));
julia> vals = 0.1 .* sin.(breaks);
julia> li = interp(breaks, vals);
julia> li(0.2) # Do interpolation by treating `li` as a function you can pass scalars to
0.019866933079506122
julia> li.([0.1, 0.2, 0.3]) # use broadcasting to evaluate at multiple points
3-element Vector{Float64}:
0.009983341664682815
0.019866933079506122
0.02955202066613396
QuantEcon.is_aperiodic — Method
is_aperiodic(mc)
Indicate whether the Markov chain mc is aperiodic.
Arguments
mc::MarkovChain: MarkovChain instance.
Returns
::Bool: True if the Markov chain is aperiodic, false otherwise.
QuantEcon.is_irreducible — Method
is_irreducible(mc)
Indicate whether the Markov chain mc is irreducible.
Arguments
mc::MarkovChain: MarkovChain instance.
Returns
::Bool: True if the Markov chain is irreducible, false otherwise.
QuantEcon.is_stable — Method
is_stable(A)
General function for testing for stability of matrix $A$. Just checks that eigenvalues are less than 1 in absolute value.
Arguments
A::Matrix: The matrix we want to check.
Returns
stable::Bool: Whether or not the matrix is stable.
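A quick sketch of the eigenvalue check described above.
julia> using QuantEcon

julia> is_stable([0.5 0.0; 0.0 0.9])   # spectral radius 0.9 < 1
true

julia> is_stable([1.1 0.0; 0.0 0.5])   # eigenvalue 1.1 violates the condition
false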
QuantEcon.is_stable — Method
is_stable(lss)
Test for stability of linear state space system. First removes the constant row and column.
Arguments
lss::LSS: The linear state space system.
Returns
stable::Bool: Whether or not the system is stable.
QuantEcon.k_array_rank — Method
k_array_rank([T=Int], a)
Given an array a of k distinct positive integers, sorted in ascending order, return its ranking in the lexicographic ordering of the descending sequences of the elements, following the [Combinatorial number system](https://en.wikipedia.org/wiki/Combinatorial_number_system).
Notes
An InexactError exception will be thrown, or an incorrect value will be returned without warning, if overflow occurs during the computation. It is the user's responsibility to ensure that the rank of the input array fits within the range of T; a sufficient condition for it is binomial(BigInt(a[end]), BigInt(length(a))) <= typemax(T).
Arguments
- T::Type{<:Integer}: The numeric type of ranking to be returned.
- a::Vector{<:Integer}: Array of length k.
Returns
idx::T: Ranking of a.
QuantEcon.lae_est — Method
lae_est(l, y)
A vectorized function that returns the value of the look ahead estimate at the values in the array y.
Arguments
- l::LAE: Instance of LAE type.
- y::Array: Array that becomes the y in l.p(l.x, y).
Returns
psi_vals::Vector: Density at (x, y).
QuantEcon.lcp_lemke! — Method
lcp_lemke!(z, tableau, basis, M, q; d=ones(T, size(M, 1)),
max_iter=10^6, piv_options=PivOptions())
Same as lcp_lemke, but allows passing preallocated arrays z (to store the solution), tableau and basis (for workspace).
If M is an n x n matrix, z must be a Vector{T} of length n, tableau a Matrix{T} of size (n, 2n+2), and basis a Vector{<:Integer} of length n, where T<:AbstractFloat.
QuantEcon.lcp_lemke — Method
lcp_lemke(M, q; d=ones(eltype(M), size(M, 1)), max_iter=10^6,
piv_options=PivOptions())
Solve the linear complementarity problem
\[\begin{aligned} &z \geq 0 \\ &M z + q \geq 0 \\ &z (M z + q) = 0 \end{aligned}\]
by Lemke's algorithm (with the lexicographic pivoting rule).
Arguments
- M::AbstractMatrix: Matrix of size (n, n).
- q::AbstractVector: Vector of size (n,).
- d::AbstractVector: Covering vector, of size (n,). Must be strictly positive. Defaults to the vector of ones.
- max_iter::Integer(10^6): Maximum number of iterations to perform.
- piv_options::PivOptions: PivOptions instance to set the following tolerance values:
  - tol_piv: Pivot tolerance (default=1.0e-7).
  - tol_ratio_diff: Tolerance used in the lexicographic pivoting (default=1.0e-13).
Returns
- LCPResult: Object consisting of the fields:
  - z::Vector: Vector of size (n,) containing the solution.
  - success::Bool: True if the algorithm succeeded in finding a solution.
  - status::Int: An integer representing the exit status of the result:
    - 0: Solution found successfully
    - 1: Iteration limit reached
    - 2: Secondary ray termination
  - num_iter::Int: Number of iterations performed.
Examples
julia> M = [1 0 0; 2 1 0; 2 2 1];
julia> q = [-8, -12, -14];
julia> res = lcp_lemke(M, q);
julia> res.success
true
julia> res.z
3-element Vector{Float64}:
8.0
0.0
0.0
julia> w = M * res.z + q
3-element Vector{Float64}:
0.0
4.0
2.0
julia> res.z' * w
0.0
References
- K. G. Murty, Linear Complementarity, Linear and Nonlinear Programming, 1988.
QuantEcon.m_quadratic_sum — Method
m_quadratic_sum(A, B; max_it=50)
Computes the quadratic sum
\[ V = \sum_{j=0}^{\infty} A^j B A^{j'}\]
$V$ is computed by solving the corresponding discrete Lyapunov equation using the doubling algorithm. See the documentation of solve_discrete_lyapunov for more information.
Arguments
- A::Matrix{Float64}: An n x n matrix as described above. We assume in order for convergence that the eigenvalues of $A$ have moduli bounded by unity.
- B::Matrix{Float64}: An n x n matrix as described above. We assume in order for convergence that the eigenvalues of $B$ have moduli bounded by unity.
- max_it::Int(50): Maximum number of iterations.
Returns
gamma1::Matrix{Float64}: Represents the value $V$.
QuantEcon.moment_sequence — Method
moment_sequence(lss)
Create an iterator to calculate the population mean and variance-covariance matrix for both $x_t$ and $y_t$, starting at the initial condition (lss.mu_0, lss.Sigma_0). Each iteration produces a 4-tuple of items (mu_x, mu_y, Sigma_x, Sigma_y) for the next period.
Arguments
lss::LSS: An instance of the Gaussian linear state space model.
Returns
iterator: An iterator that yields 4-tuples(mu_x, mu_y, Sigma_x, Sigma_y)for each period.
QuantEcon.n_states — Method
n_states(mc)
Number of states in the Markov chain mc.
QuantEcon.next_k_array! — Method
next_k_array!(a)
Given an array a of k distinct positive integers, sorted in ascending order, return the next k-array in the lexicographic ordering of the descending sequences of the elements, following the [Combinatorial number system](https://en.wikipedia.org/wiki/Combinatorial_number_system). a is modified in place.
Arguments
a::Vector{<:Integer}: Array of length k.
Returns
a::Vector{<:Integer}: View of a.
Examples
julia> n, k = 4, 2;
julia> a = collect(1:2);
julia> while a[end] <= n
@show a
next_k_array!(a)
end
a = [1, 2]
a = [1, 3]
a = [2, 3]
a = [1, 4]
a = [2, 4]
a = [3, 4]
QuantEcon.nnash — Method
nnash(A, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2;
      beta=1.0, tol=1e-8, max_iter=1000)
Compute the limit of a Nash linear quadratic dynamic game.
Player i minimizes
\[ \sum_{t=1}^{\infty}\beta^{t-1}(x_t' R_i x_t + 2 x_t' W_i u_{it} +u_{it}' Q_i u_{it} + u_{jt}' S_i u_{jt} + 2 u_{jt}' M_i u_{it})\]
subject to the law of motion
\[ x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t}\]
and a perceived control law $u_j(t) = - f_j x_t$ for the other player.
The solution computed in this routine is the $f_i$ and $p_i$ of the associated double optimal linear regulator problem.
Arguments
- A: Corresponds to the above equation, should be of size (n, n).
- B1: As above, size (n, k_1).
- B2: As above, size (n, k_2).
- R1: As above, size (n, n).
- R2: As above, size (n, n).
- Q1: As above, size (k_1, k_1).
- Q2: As above, size (k_2, k_2).
- S1: As above, size (k_1, k_1).
- S2: As above, size (k_2, k_2).
- W1: As above, size (n, k_1).
- W2: As above, size (n, k_2).
- M1: As above, size (k_2, k_1).
- M2: As above, size (k_1, k_2).
- ;beta::Float64(1.0): Discount rate.
- ;tol::Float64(1e-8): Tolerance level for convergence.
- ;max_iter::Int(1000): Maximum number of iterations allowed.
Returns
- F1::Matrix{Float64}: (k_1, n) matrix representing feedback law for agent 1.
- F2::Matrix{Float64}: (k_2, n) matrix representing feedback law for agent 2.
- P1::Matrix{Float64}: (n, n) matrix representing the steady-state solution to the associated discrete matrix Riccati equation for agent 1.
- P2::Matrix{Float64}: (n, n) matrix representing the steady-state solution to the associated discrete matrix Riccati equation for agent 2.
QuantEcon.num_compositions — Method
num_compositions(m, n)
The total number of m-part compositions of n, which is equal to (n + m - 1) choose (m - 1).
Arguments
- m::Int: Number of parts of composition.
- n::Int: Integer to decompose.
Returns
::Int: Total number of m-part compositions of n.
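A quick check that ties this count to the SimplexGrid(3, 4) example earlier in this reference, which lists 15 points.
julia> using QuantEcon

julia> num_compositions(3, 4)   # binomial(4 + 3 - 1, 3 - 1) = 15
15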
QuantEcon.prior_to_filtered! — Method
prior_to_filtered!(k, y)
Updates the moments (cur_x_hat, cur_sigma) of the time $t$ prior to the time $t$ filtering distribution, using current measurement $y_t$. The updates are according to
\[ \hat{x}^F = \hat{x} + \Sigma G' (G \Sigma G' + R)^{-1} (y - G \hat{x}) \\ \Sigma^F = \Sigma - \Sigma G' (G \Sigma G' + R)^{-1} G \Sigma\]
Arguments
- k::Kalman: An instance of the Kalman filter.
- y: The current measurement.
Returns
nothing: This function modifies the Kalman filter in place.
QuantEcon.qnwbeta — Method
qnwbeta(n, a, b)
Computes nodes and weights for beta distribution.
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension.
- a::Union{Real, Vector{Real}}: First parameter of the beta distribution, along each dimension.
- b::Union{Real, Vector{Real}}: Second parameter of the beta distribution, along each dimension.
Returns
- nodes::Array{Float64}: An array of quadrature nodes.
- weights::Array{Float64}: An array of corresponding quadrature weights.
Notes
If any of the parameters to this function are scalars while others are vectors of length n, then the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwcheb — Method
qnwcheb(n, a, b)
Computes multivariate Gauss-Chebyshev quadrature nodes and weights.
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension.
- a::Union{Real, Vector{Real}}: Lower endpoint along each dimension.
- b::Union{Real, Vector{Real}}: Upper endpoint along each dimension.
Returns
- nodes::Array{Float64}: An array of quadrature nodes.
- weights::Array{Float64}: An array of corresponding quadrature weights.
Notes
If any of the parameters to this function are scalars while others are vectors of length n, then the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwdist — Method
qnwdist(d, N; q0=0.001, qN=0.999, method=Quantile)
Construct N quadrature weights and nodes for distribution d from the quantile q0 to the quantile qN.
Arguments
- d::Distributions.ContinuousUnivariateDistribution: The distribution for which to construct quadrature nodes and weights.
- N::Int: Number of desired quadrature nodes.
- q0::Real(0.001): Lower quantile bound.
- qN::Real(0.999): Upper quantile bound.
- method::Union{T,Type{T}}(Quantile): Method for node placement. Can be one of:
  - Even: nodes will be evenly spaced between the quantiles.
  - Quantile: nodes will be placed at evenly spaced quantile values.
Returns
- nodes::Array{Float64}: An array of quadrature nodes.
- weights::Array{Float64}: An array of corresponding quadrature weights.
Notes
To construct the weights, consider splitting the nodes into cells centered at each node. Specifically, let z_i denote the ith node and let z_{i-1/2} be the midpoint between nodes z_{i-1} and z_i. Then, weights are determined as follows:
- weights[1] = cdf(d, z_{1+1/2})
- weights[N] = 1 - cdf(d, z_{N-1/2})
- weights[i] = cdf(d, z_{i+1/2}) - cdf(d, z_{i-1/2}) for all i in 2:N-1
In effect, this strategy assigns node i all the probability of the random variable occurring within node i's cell.
The weights always sum to 1, so they can be used as a proper probability distribution. This means that E[f(x) | x ~ d] ≈ dot(f.(nodes), weights).
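An illustrative sketch, assuming Distributions.jl is available and choosing a standard normal as the target distribution:

using QuantEcon, Distributions, LinearAlgebra

d = Normal(0.0, 1.0)                # illustrative target distribution
nodes, weights = qnwdist(d, 15)     # 15 nodes between the 0.001 and 0.999 quantiles
sum(weights)                        # ≈ 1: a proper probability distribution
dot(weights, nodes)                 # ≈ mean(d) = 0
dot(weights, nodes .^ 2)            # ≈ var(d) = 1, up to discretization error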
QuantEcon.qnwequi — Function
qnwequi(n, a, b, kind)Generates equidistributed sequences with the property that the average value of an integrable function evaluated over the sequence converges to the integral as n goes to infinity.
Arguments
n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension.a::Union{Real, Vector{Real}}: Lower endpoint along each dimension.b::Union{Real, Vector{Real}}: Upper endpoint along each dimension.kind::AbstractString("N"): One of the following:
- N - Niederreiter (default)
- W - Weyl
- H - Haber
- R - pseudo-random
Returns
nodes::Array{Float64}: An array of quadrature nodes.weights::Array{Float64}: An array of corresponding quadrature weights.
Notes
If any of the parameters to this function are scalars while others are vectors of length n, then the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwgamma — Function
qnwgamma(n, a, b)Computes nodes and weights for gamma distribution.
Arguments
n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension.a::Union{Real, Vector{Real}}: Shape parameter of the gamma distribution, along each dimension. Must be positive. Default is 1.b::Union{Real, Vector{Real}}: Scale parameter of the gamma distribution, along each dimension. Must be positive. Default is 1.
Returns
nodes::Array{Float64}: An array of quadrature nodes.weights::Array{Float64}: An array of corresponding quadrature weights.
Notes
If any of the parameters to this function are scalars while others are vectors of length n, then the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwlege — Method
qnwlege(n, a, b)Computes multivariate Gauss-Legendre quadrature nodes and weights.
Arguments
n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension.a::Union{Real, Vector{Real}}: Lower endpoint along each dimension.b::Union{Real, Vector{Real}}: Upper endpoint along each dimension.
Returns
nodes::Array{Float64}: An array of quadrature nodes.weights::Array{Float64}: An array of corresponding quadrature weights.
Notes
If any of the parameters to this function are scalars while others are vectors of length n, then the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwlogn — Method
qnwlogn(n, mu, sig2)Computes quadrature nodes and weights for multivariate lognormal distribution.
Arguments
n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension.mu::Union{Real, Vector{Real}}: Mean along each dimension.sig2::Union{Real, Vector{Real}, Matrix{Real}}(eye(length(n))): Covariance structure.
Returns
nodes::Array{Float64}: An array of quadrature nodes.weights::Array{Float64}: An array of corresponding quadrature weights.
Notes
See also the documentation for qnwnorm.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwmonomial1 — Method
qnwmonomial1(vcv)Computes monomial integration nodes and weights for multivariate normal distribution using a first-order monomial rule.
Arguments
vcv::AbstractMatrix: Variance-covariance matrix.
Returns
nodes::Array{Float64}: An array of quadrature nodes.weights::Array{Float64}: An array of corresponding quadrature weights.
QuantEcon.qnwmonomial2 — Method
qnwmonomial2(vcv)Computes monomial integration nodes and weights for multivariate normal distribution using a second-order monomial rule.
Arguments
vcv::AbstractMatrix: Variance-covariance matrix.
Returns
nodes::Array{Float64}: An array of quadrature nodes.weights::Array{Float64}: An array of corresponding quadrature weights.
QuantEcon.qnwnorm — Method
qnwnorm(n, mu, sig2)Computes nodes and weights for multivariate normal distribution.
Arguments
n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension.mu::Union{Real, Vector{Real}}: Mean along each dimension.sig2::Union{Real, Vector{Real}, Matrix{Real}}(eye(length(n))): Covariance structure.
Returns
nodes::Array{Float64}: An array of quadrature nodes.weights::Array{Float64}: An array of corresponding quadrature weights.
Notes
This function has several methods, described below.
n or mu can be a vector or a scalar. If just one of them is a scalar, it is repeated to match the length of the other. If both are scalars, then the number of repetitions is inferred from sig2.
sig2 can be a matrix, vector, or scalar. If it is a matrix, it is treated as the covariance matrix. If it is a vector, it is treated as the diagonal of a diagonal covariance matrix. If it is a scalar, it is repeated along the diagonal as many times as necessary, where the number of repetitions is determined by the length of whichever of n or mu is a vector.
If all three are scalars, then 1d nodes are computed and mu and sig2 are treated as the mean and variance of a 1d normal distribution.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
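A minimal sketch of the dispatch rules described above; the specific parameter values are illustrative:

using QuantEcon, LinearAlgebra

# 1-d: scalar n, mu, and sig2 give nodes and weights for N(0, 0.04)
nodes, weights = qnwnorm(7, 0.0, 0.04)
dot(weights, nodes .^ 2)            # ≈ 0.04

# 2-d: vector n and mu, full covariance matrix for sig2
Sigma = [1.0 0.5; 0.5 2.0]
nodes2, weights2 = qnwnorm([5, 5], [0.0, 1.0], Sigma)
size(nodes2)                        # (25, 2): one row per node, one column per dimension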
QuantEcon.qnwsimp — Method
qnwsimp(n, a, b)Computes multivariate Simpson quadrature nodes and weights.
Arguments
n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension.a::Union{Real, Vector{Real}}: Lower endpoint along each dimension.b::Union{Real, Vector{Real}}: Upper endpoint along each dimension.
Returns
nodes::Array{Float64}: An array of quadrature nodes.weights::Array{Float64}: An array of corresponding quadrature weights.
Notes
If any of the parameters to this function are scalars while others are vectors of length n, then the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwtrap — Method
qnwtrap(n, a, b)Computes multivariate trapezoid quadrature nodes and weights.
Arguments
n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension.a::Union{Real, Vector{Real}}: Lower endpoint along each dimension.b::Union{Real, Vector{Real}}: Upper endpoint along each dimension.
Returns
nodes::Array{Float64}: An array of quadrature nodes.weights::Array{Float64}: An array of corresponding quadrature weights.
Notes
If any of the parameters to this function are scalars while others are vectors of length n, then the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwunif — Method
qnwunif(n, a, b)Computes quadrature nodes and weights for multivariate uniform distribution.
Arguments
n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension.a::Union{Real, Vector{Real}}: Lower endpoint along each dimension.b::Union{Real, Vector{Real}}: Upper endpoint along each dimension.
Returns
nodes::Array{Float64}: An array of quadrature nodes.weights::Array{Float64}: An array of corresponding quadrature weights.
Notes
If any of the parameters to this function are scalars while others are vectors of length n, then the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.quadrect — Function
quadrect(f, n, a, b, kind, args...; kwargs...)Integrate the d-dimensional function f on a rectangle with lower and upper bound for dimension i defined by a[i] and b[i], respectively; using n[i] points.
Arguments
f::Function: The function to integrate over. This should be a function that accepts as its first argument a matrix representing points along each dimension (each dimension is a column). Other arguments that need to be passed to the function are caught by args... and kwargs....n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension.a::Union{Real, Vector{Real}}: Lower endpoint along each dimension.b::Union{Real, Vector{Real}}: Upper endpoint along each dimension.kind::AbstractString("lege"): Specifies which type of integration to perform. Valid values are:
- "lege": Gauss-Legendre
- "cheb": Gauss-Chebyshev
- "trap": trapezoid rule
- "simp": Simpson rule
- "N": Niederreiter equidistributed sequence
- "W": Weyl equidistributed sequence
- "H": Haber equidistributed sequence
- "R": Monte Carlo
args...(Void): Additional positional arguments to pass to f.;kwargs...(Void): Additional keyword arguments to pass to f.
Returns
out::Float64: The scalar that approximates integral offon the hypercube formed by[a, b].
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
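A small sketch with an illustrative integrand and bounds: integrating f(x, y) = x*y over the unit square, whose exact value is 1/4.

using QuantEcon

f(X) = X[:, 1] .* X[:, 2]                   # each dimension is a column of X
val = quadrect(f, [11, 11], [0.0, 0.0], [1.0, 1.0], "lege")
isapprox(val, 0.25; atol=1e-8)              # true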
QuantEcon.random_discrete_dp — Function
random_discrete_dp([rng], num_states, num_actions[, beta];
k=num_states, scale=1)Generate a DiscreteDP randomly. The reward values are drawn from the normal distribution with mean 0 and standard deviation scale.
Arguments
rng::AbstractRNG=GLOBAL_RNG: Random number generator.num_states::Integer: Number of states.num_actions::Integer: Number of actions.beta::Real=rand(rng): Discount factor. Randomly chosen from[0, 1)if not specified.;k::Integer(num_states): Number of possible next states for each state-action pair. Equal tonum_statesif not specified.scale::Real(1): Standard deviation of the normal distribution for the reward values.
Returns
ddp::DiscreteDP: An instance of DiscreteDP.
QuantEcon.random_markov_chain — Function
random_markov_chain([rng], n[, k])Return a randomly sampled MarkovChain instance with n states, where each state has k states with positive transition probability.
Arguments
rng::AbstractRNG=GLOBAL_RNG: Random number generator.n::Integer: Number of states.k::Integer=n: Number of nonzero entries in each row of the transition matrix. Set to n if none specified.
Returns
mc::MarkovChain: MarkovChain instance.
Examples
julia> using QuantEcon, Random
julia> rng = MersenneTwister(1234);
julia> mc = random_markov_chain(rng, 3);
julia> mc.p
3×3 LinearAlgebra.Transpose{Float64,Array{Float64,2}}:
0.590845 0.175952 0.233203
0.460085 0.106152 0.433763
0.794026 0.0601209 0.145853
julia> mc = random_markov_chain(rng, 3, 2);
julia> mc.p
3×3 LinearAlgebra.Transpose{Float64,Array{Float64,2}}:
0.0 0.200586 0.799414
0.701386 0.0 0.298614
 0.753163 0.246837 0.0
QuantEcon.random_stochastic_matrix — Function
random_stochastic_matrix([rng], n[, k])Return a randomly sampled n x n stochastic matrix with k nonzero entries for each row.
Arguments
rng::AbstractRNG=GLOBAL_RNG: Random number generator.n::Integer: Number of states.k::Integer=n: Number of nonzero entries in each row of the matrix. Set to n if none specified.
Returns
p::Array: Stochastic matrix.
QuantEcon.recurrent_classes — Method
recurrent_classes(mc)Find the recurrent classes of the Markov chain mc.
Arguments
mc::MarkovChain: MarkovChain instance.
Returns
::Vector{Vector{Int}}: Vector of vectors that describe the recurrent classes ofmc.
QuantEcon.remove_constants — Method
remove_constants(lss)Finds the row and column, if any, that correspond to the constant term in an LSS system and removes them to get the matrix that needs to be checked for stability.
Arguments
lss::LSS: The linear state space system.
Returns
A::Matrix: The matrix A with constant row and column removed.
QuantEcon.replicate — Function
replicate(lss, t, num_reps)Simulate num_reps observations of $x_T$ and $y_T$ given $x_0 \sim N(\mu_0, \Sigma_0)$.
Arguments
lss::LSS: An instance of the Gaussian linear state space model.t::Int = 10: The period that we want to replicate values for.num_reps::Int = 100: The number of replications we want.
Returns
x::Matrix: An n x num_reps matrix, where the j-th column is the j-th observation of $x_T$.y::Matrix: A k x num_reps matrix, where the j-th column is the j-th observation of $y_T$.
QuantEcon.ridder — Method
ridder(f, xa, xb; maxiter=500, xtol=1e-12, rtol=2*eps())Find a root of f on the bracketing interval [xa, xb] via Ridder's algorithm.
Arguments
f::Function: The function whose root you want to find.xa::T: Lower border of the search interval.xb::T: Upper border of the search interval.;maxiter::Int(500): Maximum number of iterations.;xtol::Float64(1e-12): The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.;rtol::Float64(2*eps()): The routine converges when a root is known to lie within rtol times the value returned. Should be >= 0.
Returns
x::T: The found root.
Exceptions
- Throws an ArgumentError if [xa, xb] does not form a bracketing interval.
- Throws a ConvergenceError if the maximum number of iterations is exceeded.
References
Matches ridder function from scipy/scipy/optimize/Zeros/ridder.c.
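A minimal sketch; the function and bracket are illustrative:

using QuantEcon

root = ridder(x -> x^2 - 2, 1.0, 2.0)   # the bracket [1, 2] contains sqrt(2)
isapprox(root, sqrt(2); atol=1e-10)     # true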
QuantEcon.robust_rule — Method
robust_rule(rlq)Solves the robust control problem.
The algorithm here tricks the problem into a stacked LQ problem, as described in chapter 2 of Hansen-Sargent's text "Robustness". The optimal control with observed state is
\[ u_t = - F x_t\]
And the value function is $-x'Px$.
Arguments
rlq::RBLQ: Instance ofRBLQtype.
Returns
F::Matrix{Float64}: The optimal control matrix from above.P::Matrix{Float64}: The positive semi-definite matrix defining the value function.K::Matrix{Float64}: The worst-case shock matrix $K$, where $w_{t+1} = K x_t$ is the worst case shock.
QuantEcon.robust_rule_simple — Function
robust_rule_simple(rlq, P=zeros(Float64, rlq.n, rlq.n); max_iter=80, tol=1e-8)Solve the robust LQ problem.
A simple algorithm for computing the robust policy $F$ and the corresponding value function $P$, based around straightforward iteration with the robust Bellman operator. This function is easier to understand but one or two orders of magnitude slower than robust_rule(). For more information see the docstring of that method.
Arguments
rlq::RBLQ: Instance ofRBLQtype.P::Matrix{Float64}(zeros(Float64, rlq.n, rlq.n)): The initial guess for the value function matrix.;max_iter::Int(80): Maximum number of iterations that are allowed.;tol::Real(1e-8): The tolerance for convergence.
Returns
F::Matrix{Float64}: The optimal control matrix from above.P::Matrix{Float64}: The positive semi-definite matrix defining the value function.K::Matrix{Float64}: The worst-case shock matrix $K$, where $w_{t+1} = K x_t$ is the worst case shock.
QuantEcon.rouwenhorst — Function
rouwenhorst(N, ρ, σ, μ=0.0)Rouwenhorst's method to approximate AR(1) processes.
The process follows
\[ y_t = \mu + \rho y_{t-1} + \epsilon_t\]
where $\epsilon_t \sim N (0, \sigma^2)$.
Arguments
N::Integer: Number of points in the Markov process.ρ::Real: Persistence parameter in the AR(1) process.σ::Real: Standard deviation of the random component of the AR(1) process.μ::Real(0.0): Mean of the AR(1) process.
Returns
mc::MarkovChain{Float64}: Markov chain holding the state values and transition matrix.
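A short sketch with illustrative parameter values:

using QuantEcon

# 5-state approximation of y_t = 0.9 y_{t-1} + ε_t with σ = 0.02
mc = rouwenhorst(5, 0.9, 0.02)
mc.state_values          # 5 grid points for the discretized process
sum(mc.p, dims=2)        # each row of the transition matrix sums to 1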
QuantEcon.set_state! — Method
set_state!(k, x_hat, Sigma)Set the current state estimate of the Kalman filter.
Arguments
k::Kalman: An instance of the Kalman filter.x_hat: The state mean estimate.Sigma: The state covariance estimate.
Returns
nothing: This function modifies the Kalman filter in place.
QuantEcon.simplex_grid — Method
simplex_grid(m, n)Construct an array consisting of the integer points in the (m-1)-dimensional simplex $\{x \mid x_1 + \cdots + x_m = n, x_i \geq 0\}$, or equivalently, the m-part compositions of n, which are listed in lexicographic order. The total number of the points (hence the length of the output array) is L = (n+m-1)!/(n!*(m-1)!) (i.e., (n+m-1) choose (m-1)).
Arguments
m::Int: Dimension of each point. Must be a positive integer.n::Int: Number which the coordinates of each point sum to. Must be a nonnegative integer.
Returns
out::Matrix{Int}: Array of shape (m, L) containing the integer points in the simplex, aligned in lexicographic order.
Notes
A grid of the (m-1)-dimensional unit simplex with n subdivisions along each dimension can be obtained by simplex_grid(m, n) / n.
Examples
julia> simplex_grid(3, 4)
3×15 Matrix{Int64}:
0 0 0 0 0 1 1 1 1 2 2 2 3 3 4
0 1 2 3 4 0 1 2 3 0 1 2 0 1 0
 4 3 2 1 0 3 2 1 0 2 1 0 1 0 0
References
A. Nijenhuis and H. S. Wilf, Combinatorial Algorithms, Chapter 5, Academic Press, 1978.
QuantEcon.simplex_index — Method
simplex_index(x, m, n)Return the index of the point x in the lexicographic order of the integer points of the (m-1)-dimensional simplex $\{x \mid x_0 + \cdots + x_{m-1} = n\}$.
Arguments
x::Vector{Int}: Integer point in the simplex, i.e., an array of m nonnegative integers that sum to n.m::Int: Dimension of each point. Must be a positive integer.n::Int: Number which the coordinates of each point sum to. Must be a nonnegative integer.
Returns
idx::Int: Index of x.
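A small sketch relating simplex_index to simplex_grid; the point chosen is illustrative:

using QuantEcon

grid = simplex_grid(3, 4)       # columns are the 3-part compositions of 4 in lexicographic order
x = grid[:, 5]                  # the 5th point of the grid
simplex_index(x, 3, 4) == 5     # true: the index recovers the point's position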
QuantEcon.simulate! — Method
simulate!(X, mc; init=rand(1:n_states(mc)))Fill X with sample paths of the Markov chain mc as columns. The resulting matrix has the state values of mc as elements.
Arguments
X::Matrix: Preallocated matrix to be filled with sample paths
of the Markov chain mc. The element types in X should be the same as the type of the state values of mc.
mc::MarkovChain: MarkovChain instance.;init=rand(1:n_states(mc)): Can be one of the following:- blank: random initial condition for each chain
- scalar: same initial condition for each chain
- vector: cycle through the elements, applying each as an initial condition until all columns have an initial condition (allows for more columns than initial conditions)
QuantEcon.simulate — Method
simulate(mc, ts_length; init=rand(1:n_states(mc)))Simulate one sample path of the Markov chain mc. The resulting vector has the state values of mc as elements.
Arguments
mc::MarkovChain: MarkovChain instance.ts_length::Int: Length of simulation.;init::Int=rand(1:n_states(mc)): Initial state.
Returns
X::Vector: Vector containing the sample path, with length ts_length.
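A brief sketch using an illustrative two-state chain:

using QuantEcon

mc = MarkovChain([0.9 0.1; 0.2 0.8], [1.0, 2.0])  # transition matrix and state values
x = simulate(mc, 100; init=1)                     # sample path of length 100 starting in state 1
x[1] == 1.0                                       # the first element is the initial state's value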
QuantEcon.simulate_indices! — Method
simulate_indices!(X, mc; init=rand(1:n_states(mc)))Fill X with sample paths of the Markov chain mc as columns. The resulting matrix has the indices of the state values of mc as elements.
Arguments
X::Matrix{Int}: Preallocated matrix to be filled with indices
of the sample paths of the Markov chain mc.
mc::MarkovChain: MarkovChain instance.;init=rand(1:n_states(mc)): Can be one of the following:- blank: random initial condition for each chain
- scalar: same initial condition for each chain
- vector: cycle through the elements, applying each as an initial condition until all columns have an initial condition (allows for more columns than initial conditions)
QuantEcon.simulate_indices — Method
simulate_indices(mc, ts_length; init=rand(1:n_states(mc)))Simulate one sample path of the Markov chain mc. The resulting vector has the indices of the state values of mc as elements.
Arguments
mc::MarkovChain: MarkovChain instance.ts_length::Int: Length of simulation.;init::Int=rand(1:n_states(mc)): Initial state.
Returns
X::Vector{Int}: Vector containing the sample path, with length ts_length.
QuantEcon.simulation — Method
simulation(arma; ts_length=90, impulse_length=30)Compute a simulated sample path assuming Gaussian shocks.
Arguments
arma::ARMA: Instance ofARMAtype.;ts_length::Integer(90): Length of simulation.;impulse_length::Integer(30): Horizon for calculating impulse response (see also docstring forimpulse_response)
Returns
X::Vector{Float64}: Simulation of the ARMA modelarma.
QuantEcon.smooth — Function
smooth(x, window_len, window="hanning")Smooth the data in x using convolution with a window of requested size and type.
Arguments
x::Array: An array containing the data to smooth.window_len::Int: An odd integer giving the length of the window.window::AbstractString: A string giving the window type (default: "hanning"). Possible values areflat,hanning,hamming,bartlett, orblackman.
Returns
out::Array: The array of smoothed data.
QuantEcon.smooth — Method
smooth(x; window_len=7, window="hanning")Version of smooth where window_len and window are keyword arguments.
Arguments
x::Array: An array containing the data to smooth.;window_len::Int(7): An odd integer giving the length of the window.;window::AbstractString("hanning"): A string giving the window type.
Returns
out::Array: The array of smoothed data.
QuantEcon.smooth — Method
smooth(kn, y)Computes smoothed state estimates using the Kalman smoother.
Arguments
kn::Kalman:Kalmaninstance specifying the model. Initial value must be the prior for t=1 period observation, i.e. $x_{1|0}$.y::AbstractMatrix:n x Tmatrix of observed data.nis the number of observed variables in one period. Each column is a vector of observations at each period.
Returns
x_smoothed::AbstractMatrix:k x Tmatrix of smoothed mean of states.kis the number of states.logL::Real: Log-likelihood of all observations.sigma_smoothed::AbstractArray:k x k x Tarray of smoothed covariance matrix of states.
QuantEcon.solve — Method
solve(ddp[, method=VFI]; max_iter=250, epsilon=1e-3, k=20)Solve the dynamic programming problem.
Arguments
ddp::DiscreteDP: Object that contains the model parameters.method::Type{<:DDPAlgorithm}(VFI): Type name specifying solution method. Acceptable arguments areVFIfor value function iteration orPFIfor policy function iteration orMPFIfor modified policy function iteration.;max_iter::Int(250): Maximum number of iterations.;epsilon::Float64(1e-3): Value for epsilon-optimality. Only used ifmethodisVFI.;k::Int(20): Number of iterations for partial policy evaluation in modified policy iteration (irrelevant for other methods).
Returns
ddpr::DPSolveResult{<:DDPAlgorithm}: Optimization result represented as aDPSolveResult. SeeDPSolveResultfor details.
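A minimal sketch of setting up and solving a DiscreteDP; the two-state, two-action numbers below are purely illustrative:

using QuantEcon

# R[s, a] rewards; -Inf marks an infeasible state-action pair
R = [5.0 10.0;
     -1.0 -Inf]
Q = zeros(2, 2, 2)               # Q[s, a, s′] transition probabilities
Q[1, 1, :] = [0.5, 0.5];  Q[1, 2, :] = [0.0, 1.0]
Q[2, 1, :] = [0.0, 1.0];  Q[2, 2, :] = [0.5, 0.5]

ddp = DiscreteDP(R, Q, 0.95)
res = solve(ddp, PFI)            # or VFI / MPFI
res.v, res.sigma                 # optimal values and optimal policy (action index per state)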
QuantEcon.solve_discrete_lyapunov — Function
solve_discrete_lyapunov(A, B, max_it=50)Solves the discrete Lyapunov equation.
The problem is given by
\[ AXA' - X + B = 0\]
$X$ is computed by using a doubling algorithm. In particular, we iterate to convergence on $X_j$ with the following recursions for $j = 1, 2, \ldots$ starting from $X_0 = B, a_0 = A$:
\[ a_j = a_{j-1} a_{j-1} \\ X_j = X_{j-1} + a_{j-1} X_{j-1} a_{j-1}'\]
Arguments
A::Matrix{Float64}: Ann x nmatrix as described above. We assume in order for convergence that the eigenvalues of $A$ have moduli bounded by unity.B::Matrix{Float64}: Ann x nmatrix as described above. We assume in order for convergence that the eigenvalues of $B$ have moduli bounded by unity.max_it::Int(50): Maximum number of iterations.
Returns
gamma1::Matrix{Float64}: Represents the value $X$.
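A quick sketch verifying the fixed point; the matrices are illustrative, and A must have eigenvalues inside the unit circle:

using QuantEcon, LinearAlgebra

A = [0.8  0.0;
     0.05 0.7]
B = Matrix(1.0I, 2, 2)
X = solve_discrete_lyapunov(A, B)
maximum(abs.(A * X * A' .- X .+ B))   # ≈ 0: X solves AXA' - X + B = 0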
QuantEcon.solve_discrete_riccati — Function
solve_discrete_riccati(A, B, Q, R, N=zeros(size(R, 1), size(Q, 1)); tolerance=1e-10, max_it=50)Solves the discrete-time algebraic Riccati equation.
The problem is defined as
\[ X = A'XA - (N + B'XA)'(B'XB + R)^{-1}(N + B'XA) + Q\]
via a modified structured doubling algorithm. An explanation of the algorithm can be found in the reference below.
Arguments
A:k x karray.B:k x narray.R:n x n, should be symmetric and positive definite.Q:k x k, should be symmetric and non-negative definite.N::Matrix{Float64}(zeros(size(R, 1), size(Q, 1))):n x karray.tolerance::Float64(1e-10): Tolerance level for convergence.max_it::Int(50): The maximum number of iterations allowed.
Note that A, B, R, Q can either be real (i.e. k, n = 1) or matrices.
Returns
X::Matrix{Float64}: The fixed point of the Riccati equation; ak x karray representing the approximate solution.
References
Chiang, Chun-Yueh, Hung-Yuan Fan, and Wen-Wei Lin. "Structured Doubling Algorithm for Discrete-Time Algebraic Riccati Equations with Singular Control Weighting Matrices." Taiwanese Journal of Mathematics 14, no. 3A (2010): 935.
QuantEcon.spectral_density — Method
spectral_density(arma; res=1200, two_pi=true)Compute the spectral density function.
The spectral density is the discrete time Fourier transform of the autocovariance function. In particular,
\[ f(w) = \sum_k \gamma(k) \exp(-ikw)\]
where $\gamma$ is the autocovariance function and the sum is over the set of all integers.
Arguments
arma::ARMA: Instance of ARMA type.;two_pi::Bool(true): Compute the spectral density function over $[0, \pi]$ if false and $[0, 2 \pi]$ otherwise.;res(1200): If res is a scalar then the spectral density is computed at res frequencies evenly spaced around the unit circle, but if res is an array then the function computes the response at the frequencies given by the array.
Returns
w::Vector{Float64}: The normalized frequencies at which h was computed, in radians/sample.spect::Vector{Float64}: The frequency response.
QuantEcon.stationary_distributions — Function
stationary_distributions(mc)Compute stationary distributions of the Markov chain mc, one for each recurrent class.
Arguments
mc::MarkovChain{T}: MarkovChain instance.
Returns
stationary_dists::Vector{Vector{T1}}: Vector of vectors that represent stationary distributions, where the element typeT1isRationalifTisInt(and equal toTotherwise).
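A minimal sketch with an illustrative irreducible chain:

using QuantEcon

mc = MarkovChain([0.9 0.1; 0.2 0.8])
stationary_distributions(mc)[1]      # ≈ [2/3, 1/3], the unique stationary distribution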
QuantEcon.stationary_distributions — Method
stationary_distributions(lss; max_iter, tol)Compute the moments of the stationary distributions of $x_t$ and $y_t$ if possible. Computation is by iteration, starting from the initial conditions lss.mu_0 and lss.Sigma_0.
Arguments
lss::LSS: An instance of the Gaussian linear state space model.;max_iter::Int = 200: The maximum number of iterations allowed.;tol::Float64 = 1e-5: The tolerance level one wishes to achieve.
Returns
mu_x::Vector: Represents the stationary mean of $x_t$.mu_y::Vector: Represents the stationary mean of $y_t$.Sigma_x::Matrix: Represents the var-cov matrix.Sigma_y::Matrix: Represents the var-cov matrix.
QuantEcon.stationary_values! — Method
stationary_values!(lq)Computes value and policy functions in infinite horizon model.
Arguments
lq::LQ: Instance ofLQtype.
Returns
P::ScalarOrArray: n x n matrix in value function representation $V(x) = x'Px + d$.d::Real: Constant in value function representation.F::ScalarOrArray: Policy rule that specifies optimal control in each period.
Notes
This function updates the P, d, and F fields on the lq instance in addition to returning them.
QuantEcon.stationary_values — Method
stationary_values(k)Compute the stationary covariance matrix and Kalman gain for the filter.
Arguments
k::Kalman: An instance of the Kalman filter.
Returns
Sigma_inf: The stationary state covariance matrix.K_inf: The stationary Kalman gain matrix.
QuantEcon.stationary_values — Method
stationary_values(lq)Non-mutating routine for solving for P, d, and F in infinite horizon model.
Arguments
lq::LQ: Instance ofLQtype.
Returns
P::ScalarOrArray: n x n matrix in value function representation $V(x) = x'Px + d$.F::ScalarOrArray: Policy rule that specifies optimal control in each period.d::Real: Constant in value function representation.
See docstring for stationary_values! for more explanation.
QuantEcon.tauchen — Method
tauchen(N, ρ, σ, μ=0.0, n_std=3)Tauchen's (1986) method for approximating an AR(1) process with a finite Markov chain.
The process follows
\[ y_t = \mu + \rho y_{t-1} + \epsilon_t\]
where $\epsilon_t \sim N (0, \sigma^2)$.
Arguments
N::Integer: Number of points in the Markov process.ρ::Real: Persistence parameter in the AR(1) process.σ::Real: Standard deviation of the random component of the AR(1) process.μ::Real(0.0): Mean of the AR(1) process.n_std::Real(3): The number of standard deviations to each side the process should span.
Returns
mc::MarkovChain: Markov chain holding the state values and transition matrix.
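A short sketch with illustrative parameter values:

using QuantEcon

# 9-state approximation of y_t = 0.5 + 0.8 y_{t-1} + ε_t with σ = 0.1, spanning ±3 std
mc = tauchen(9, 0.8, 0.1, 0.5, 3)
mc.state_values          # state grid; the AR(1) process mean is 0.5 / (1 - 0.8) = 2.5
sum(mc.p, dims=2)        # each row of the transition matrix sums to 1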
QuantEcon.update! — Method
update!(k, y)Updates cur_x_hat and cur_sigma given array y of length k. The full update, from one period to the next.
Arguments
k::Kalman: An instance of the Kalman filter.y: An array representing the current measurement.
Returns
nothing: This function modifies the Kalman filter in place.
QuantEcon.update_values! — Method
update_values!(lq)Update P and d from the value function representation in finite horizon case.
Arguments
lq::LQ: Instance ofLQtype.
Returns
P::ScalarOrArray:n x nmatrix in value function representation $V(x) = x'Px + d$.d::Real: Constant in value function representation.
Notes
This function updates the P and d fields on the lq instance in addition to returning them.
QuantEcon.var_quadratic_sum — Method
var_quadratic_sum(A, C, H, beta, x_0)Computes the expected discounted quadratic sum.
\[ q(x_0) = \mathbb{E} \sum_{t=0}^{\infty} \beta^t x_t' H x_t\]
Here ${x_t}$ is the VAR process $x_{t+1} = A x_t + C w_t$ with ${w_t}$ standard normal and $x_0$ the initial condition.
Arguments
A::Union{Float64, Matrix{Float64}}: Then x nmatrix described above (scalar ifn = 1).C::Union{Float64, Matrix{Float64}}: Then x nmatrix described above (scalar ifn = 1).H::Union{Float64, Matrix{Float64}}: Then x nmatrix described above (scalar ifn = 1).beta::Float64: Discount factor in(0, 1).x_0::Union{Float64, Vector{Float64}}: The initial condition. A conformable array (of lengthn) or a scalar ifn = 1.
Returns
q0::Float64: Represents the value $q(x_0)$.
Notes
The formula for computing $q(x_0)$ is $q(x_0) = x_0' Q x_0 + v$ where
- $Q$ is the solution to $Q = H + \beta A' Q A$ and
- $v = \frac{trace(C' Q C) \beta}{1 - \beta}$.
QuantEcon.@def_sim — Macro
@def_sim sim_name default_type_params begin
obs_typedef
endGiven a type definition for a single observation in a simulation (obs_typedef), evaluate that type definition as is, but also create a second type named sim_name as well as various methods on the new type.
The fields of sim_name will have the same names as the fields of obs_typedef, but will be arrays of whatever the type of the corresponding obs_typedef field was. The intention is for sim_name to be a struct of arrays (see https://en.wikipedia.org/wiki/AoS_and_SoA). If you want an array of structs, simply collect an array of instances of the type defined in obs_typedef. The struct-of-arrays storage format has better cache efficiency and data locality if you want to operate on all values of a particular field at once, rather than on all the fields of a particular value.
In addition to the new type sim_name, the following methods will be defined:
sim_name(sz::NTuple{N,Int}). This is a constructor forsim_namethat allocates arrays of sizeszfor each field. Ifobs_typedefincluded any type parameters, then the default values (specified indefault_type_params) will be used.Base.endof(::sim_name): Equal to the length of any of its fields.Base.length(::sim_name): Equal to the length of any of its fields.- The iterator protocol for
sim_name. The type of each element of the iterator is the type defined inobs_typedef. This amounts to defining the following methods:Base.start(::sim_name)::IntBase.next(::sim_name, ::Int)::Tuple{Observation,Int}Base.done(::sim_name, ::Int)::Bool
Base.getindex(sim::sim_name, ix::Int). This implements linear indexing forsim_nameand will return an instance of the type defined inobs_typedef.
Examples
NOTE: the using MacroTools statement and the call to MacroTools.prettify are not necessary; they are only used here to clean up the output so it is easier to read.
julia> using MacroTools
julia> macroexpand(:(@def_sim Simulation (T => Float64,) struct Observation{T<:Number}
c::T
k::T
i_z::Int
end
)) |> MacroTools.prettify
quote
struct Simulation{prairiedog, T <: Number}
c::Array{T, prairiedog}
k::Array{T, prairiedog}
i_z::Array{Int, prairiedog}
end
function Simulation{prairiedog}(sz::NTuple{prairiedog, Int})
c = Array{Float64, prairiedog}(sz)
k = Array{Float64, prairiedog}(sz)
i_z = Array{Int, prairiedog}(sz)
Simulation(c, k, i_z)
end
struct Observation{T <: Number}
c::T
k::T
i_z::Int
end
Base.endof(sim::Simulation) = length(sim.c)
Base.length(sim::Simulation) = endof(sim)
Base.start(sim::Simulation) = 1
Base.next(sim::Simulation, ix::Int) = (sim[ix], ix + 1)
Base.done(sim::Simulation, ix::Int) = ix >= length(sim)
function Base.getindex(sim::Simulation, ix::Int)
$(Expr(:boundscheck, true))
if ix > length(sim)
throw(BoundsError("$(length(sim))-element Simulation at index $(ix)"))
end
$(Expr(:boundscheck, :pop))
$(Expr(:inbounds, true))
out = Observation(sim.c[ix], sim.k[ix], sim.i_z[ix])
$(Expr(:inbounds, :pop))
return out
end
endInternal
QuantEcon.DPSolveResult — Type
DPSolveResultObject for retaining results and associated metadata after solving the model.
Fields
v::Vector{Tval}: Value function vector.Tv::Array{Tval}: Temporary value function array.num_iter::Int: Number of iterations.sigma::Array{Int,1}: Policy function vector.mc::MarkovChain: Controlled Markov chain.
QuantEcon.LCPResult — Type
LCPResultStruct containing the result from lcp_lemke.
Fields
z::Vector: Solution vector.success::Bool: True if the algorithm succeeded in finding a solution.status::Int: An integer representing the exit status of the result:- 0: Solution found successfully
- 1: Iteration limit reached
- 2: Secondary ray termination
num_iter::Int: The number of iterations performed.
QuantEcon.PivOptions — Type
PivOptionsStruct to hold tolerance values for pivoting.
Fields
tol_piv::Float64: Pivot tolerance (default=1.0e-7).tol_ratio_diff::Float64: Tolerance used in the lexicographic pivoting (default=1.0e-13).
QuantEcon.VAREstimationMethod — Type
VAREstimationMethodTypes specifying the method for discrete_var.
QuantEcon._compute_sequence — Method
Private method implementing compute_sequence when state is a scalar.
QuantEcon._compute_sequence — Method
Private method implementing compute_sequence when state is a vector.
QuantEcon._generate_a_indptr! — Method
_generate_a_indptr!(num_states, s_indices, out)Generate a_indptr; stored in out. s_indices is assumed to be in sorted order.
Arguments
num_states::Integer: Number of states.s_indices::AbstractVector{T}: State indices vector (must be sorted).out::AbstractVector{T}: Output vector with length =num_states+ 1.
Returns
out::AbstractVector{T}: Action index pointers vector.
QuantEcon._get_solution! — Method
_get_solution!(z, tableau, basis)Fetch the solution from tableau and basis.
Arguments
z::Vector{T}: Empty vector of size(n,)to store the solution. Modified in place.tableau::Matrix{T}: Matrix of size(n, 2*n+2)containing the terminal tableau.basis::Vector{<:Integer}: Vector of size(n,)containing the terminal basis.
Returns
z: Modified vector storing the solution.
QuantEcon._has_sorted_sa_indices — Method
_has_sorted_sa_indices(s_indices, a_indices)Check whether s_indices and a_indices are sorted in lexicographic order.
Arguments
s_indices::AbstractVector: State indices vector.a_indices::AbstractVector: Action indices vector.
Returns
result::Bool: Whethers_indicesanda_indicesare sorted.
QuantEcon._initialize_tableau! — Method
_initialize_tableau!(tableau, basis, M, q, d)Initialize the tableau and basis arrays in place.
With covering vector $d$ and artificial variable $z0$, the LCP is written as
$q = w - M z - d z0$
where the variables are ordered as $(w, z, z0)$. Thus, tableau[:, 1:n] stores $I$, tableau[:, n+1:2n] stores $-M$, tableau[:, 2n+1] stores $-d$, and tableau[:, end] stores $q$, while basis stores 1, ..., n (variables $w$).
Arguments
tableau::Matrix{T}: Empty matrix of size(n, 2n+2)to store the tableau. Modified in place.basis::Vector{<:Integer}: Empty vector of size(n,)to store the basic variables. Modified in place.M::AbstractMatrix: Matrix of size(n, n).q::AbstractVector: Vector of size(n,).d::AbstractVector: Vector of size(n,).
Returns
tableau, basis: Initialized tableau and basis.
QuantEcon._lex_min_ratio_test! — Method
_lex_min_ratio_test!(tableau, pivot, slack_start, argmins;
tol_piv=1.0e-10, tol_ratio_diff=1.0e-15)Perform the lexico-minimum ratio test.
Arguments
tableau::AbstractMatrix: Array containing the tableau.pivot::Integer: Pivot column index.slack_start::Integer: First column index for slack variables (assumed to form an identity over columnsslack_start : slack_start + nrows - 1).argmins::AbstractVector{<:Integer}: Empty array used to store the row indices. Its length must be no smaller than the number of the rows oftableau.tol_piv::Real(1.0e-10): Pivot tolerance below which a number is considered to be nonpositive.tol_ratio_diff::Real(1.0e-15): Tolerance to determine a tie between ratio values.
Returns
found::Bool:falseif there is no positive entry in the pivot column.row_min::Int: Index of the row with the lexico-minimum ratio.
QuantEcon._min_ratio_test_no_tie_breaking! — Method
_min_ratio_test_no_tie_breaking!(tableau, pivot, test_col,
argmins, num_candidates,
tol_piv, tol_ratio_diff)Perform the minimum ratio test, without tie breaking, for the candidate rows in argmins[1:num_candidates]. Return the number num_argmins of the rows minimizing the ratio and store their indices in argmins[1:num_argmins].
Arguments
tableau::AbstractMatrix: Array containing the tableau.pivot::Integer: Pivot column index used as denominator.test_col::Integer: Index of the column used in the test.argmins::AbstractVector{<:Integer}: Array containing the indices of the candidate rows. Modified in place to store the indices of minimizing rows.num_candidates::Integer: Number of candidate rows inargmins.tol_piv::Real: Pivot tolerance below which a number is considered to be nonpositive.tol_ratio_diff::Real: Tolerance to determine a tie between ratio values.
Returns
num_argmins::Int: Number of minimizing rows; their indices occupyargmins[1:num_argmins].
QuantEcon._pivoting! — Method
_pivoting!(tableau, pivot_col, pivot_row, col_buf)Perform a pivoting step. Modify tableau in place.
Arguments
tableau::AbstractMatrix: Array containing the tableau.pivot_col::Integer: Pivot column index.pivot_row::Integer: Pivot row index.col_buf::Vector: Workspace vector to hold a copy of the pivot column, of typeeltype(tableau)and lengthsize(tableau, 1). Pass, for example,col_buf = similar(tableau, size(tableau, 1)).
Returns
tableau::AbstractMatrix: View totableau.
QuantEcon._random_stochastic_matrix — Method
_random_stochastic_matrix([rng], n, m; k=n)Generate a "non-square column stochastic matrix" of shape (n, m), which contains as columns m probability vectors of length n with k nonzero entries.
Arguments
rng::AbstractRNG=GLOBAL_RNG: Random number generator.n::Integer: Number of states.m::Integer: Number of probability vectors.;k::Integer(n): Number of nonzero entries in each column of the matrix. Set tonif none specified.
Returns
p::Array: Array of shape(n, m)containingmprobability vectors of lengthnas columns.
QuantEcon._solve! — Method
_solve!(ddp, ddpr, max_iter, epsilon, k)Modified Policy Function Iteration.
Arguments
ddp::DiscreteDP: Object that contains the model parameters.ddpr::DPSolveResult{MPFI}: Object that contains result variables.max_iter::Integer: Maximum number of iterations.epsilon::Real: Value for epsilon-optimality.k::Integer: Number of iterations for partial policy evaluation.
Returns
ddpr::DPSolveResult{MPFI}: Updated result object.
QuantEcon._solve! — Method
_solve!(ddp, ddpr, max_iter, epsilon, k)Policy Function Iteration.
NOTE: The epsilon is ignored in this method. It is only here so dispatch can go from solve(::DiscreteDP, ::Type{Algo}) to any of the algorithms. See solve for further details.
Arguments
ddp::DiscreteDP: Object that contains the model parameters.ddpr::DPSolveResult{PFI}: Object that contains result variables.max_iter::Integer: Maximum number of iterations.epsilon::Real: Value for epsilon-optimality (not used for PFI).k::Integer: Number of iterations (not used for PFI).
Returns
ddpr::DPSolveResult{PFI}: Updated result object.
QuantEcon._solve! — Method
_solve!(ddp, ddpr, max_iter, epsilon, k)Implements Value Iteration.
NOTE: See solve for further details.
Arguments
ddp::DiscreteDP: Object that contains the model parameters.ddpr::DPSolveResult{VFI}: Object that contains result variables.max_iter::Integer: Maximum number of iterations.epsilon::Real: Value for epsilon-optimality.k::Integer: Number of iterations (not used for VFI).
Returns
ddpr::DPSolveResult{VFI}: Updated result object.
QuantEcon.allcomb3 — Method
allcomb3(A)Return combinations of each column of matrix A. It simplifies allcomb2 by using gridmake from QuantEcon.
Arguments
A::AbstractMatrix:N x MMatrix.
Returns
N^M x M matrix whose rows are the combinations formed by taking one element from each column of A.
Example
julia> allcomb3([1 4 7;
2 5 8;
3 6 9]) # numerical input
27×3 Array{Int64,2}:
1 4 7
1 4 8
1 4 9
1 5 7
1 5 8
1 5 9
1 6 7
1 6 8
1 6 9
2 4 7
⋮
2 6 9
3 4 7
3 4 8
3 4 9
3 5 7
3 5 8
3 5 9
3 6 7
3 6 8
 3 6 9
QuantEcon.construct_1D_grid — Method
construct_1D_grid(Sigma, Nm, M, n_sigmas, method)Construct one-dimensional quantile grid of states.
Argument
Sigma::AbstractMatrix: Variance-covariance matrix of the standardized process.Nm::Integer: Number of grid points.M::Integer: Number of variables (M=1 corresponds to AR(1)).n_sigmas::Real: Number of standard deviations determining the end points of the grid.method::Quantile: Method for grid making.
Return
y1D:M x Nmmatrix of variable grid.y1Dbounds: Bounds of each grid bin.
QuantEcon.construct_1D_grid — Method
construct_1D_grid(Sigma, Nm, M, n_sigmas, method)Construct one-dimensional quadrature grid of states.
Argument
::ScalarOrArray: Not used.Nm::Integer: Number of grid points.M::Integer: Number of variables (M=1corresponds to AR(1)).n_sigmas::Real: Not used.method::Quadrature: Method for grid making.
Return
y1D:M x Nmmatrix of variable grid.weights: Weights on each grid.
QuantEcon.construct_1D_grid — Method
construct_1D_grid(Sigma, Nm, M, n_sigmas, method)Construct one-dimensional evenly spaced grid of states.
Argument
Sigma::ScalarOrArray: Variance-covariance matrix of the standardized process.Nm::Integer: Number of grid points.M::Integer: Number of variables (M=1 corresponds to AR(1)).n_sigmas::Real: Number of standard deviations determining the end points of the grid.method::Even: Method for grid making.
Return
y1D:M x Nmmatrix of variable grid.nothing:nothingof typeNothing.
QuantEcon.construct_prior_guess — Method
construct_prior_guess(cond_mean, Nm, y1D, y1Dbounds, method)Construct prior guess for quantile grid method.
Arguments
cond_mean::AbstractVector: Conditional mean of each variable.Nm::Integer: Number of grid points.::AbstractMatrix: Grid of variable.y1Dbounds::AbstractMatrix: Bounds of each grid bin.method::Quantile: Method for grid making.
QuantEcon.construct_prior_guess — Method
construct_prior_guess(cond_mean, Nm, y1D, weights, method)Construct prior guess for quadrature grid method.
Arguments
cond_mean::AbstractVector: Conditional mean of each variable.Nm::Integer: Number of grid points.y1D::AbstractMatrix: Grid of variable.weights::AbstractVector: Weights of gridy1D.method::Quadrature: Method for grid making.
QuantEcon.construct_prior_guess — Method
construct_prior_guess(cond_mean, Nm, y1D, nothing, method)Construct prior guess for evenly spaced grid method.
Arguments
cond_mean::AbstractVector: Conditional mean of each variable.Nm::Integer: Number of grid points.y1D::AbstractMatrix: Grid of variable.::AbstractMatrix: Bounds of each grid bin.method::Even: Method for grid making.
QuantEcon.discrete_approximation — Function
discrete_approximation(D, T, Tbar, q=ones(length(D))/length(D), lambda0=zeros(Tbar))Compute a discrete state approximation to a distribution with known moments, using the maximum entropy procedure proposed in Tanaka and Toda (2013).
Arguments
D::AbstractVector: Vector of grid points of lengthN. N is the number of points at which an approximation is to be constructed.T::Function: A function that accepts a singleAbstractVectorof lengthNand returns anL x Nmatrix of moments evaluated at each grid point, where L is the number of moments to be matched.Tbar::AbstractVector: LengthLvector of moments of the underlying distribution which should be matched.
Optional
q::AbstractVector: LengthNvector of prior weights for each point in D. The default is for each point to have an equal weight.lambda0::AbstractVector: LengthLvector of initial guesses for the dual problem variables. The default is a vector of zeros.
Returns
p: (1 x N) vector of probabilities assigned to each grid point inD.lambda_bar: LengthLvector of dual problem variables which solve the maximum entropy problem.moment_error: Vector of errors in moments (defined by moments of discretization minus actual moments) of lengthL.
QuantEcon.entropy_grad! — Method
entropy_grad!(grad, lambda, Tx, Tbar, q)Compute gradient of objective function.
Returns
grad: LengthLgradient vector of the objective function evaluated atlambda.
QuantEcon.entropy_hess! — Method
entropy_hess!(hess, lambda, Tx, Tbar, q)Compute hessian of objective function.
Returns
hess:L x Lhessian matrix of the objective function evaluated atlambda.
QuantEcon.entropy_obj — Method
entropy_obj(lambda, Tx, Tbar, q)Compute the maximum entropy objective function used in discrete_approximation.
obj = entropy_obj(lambda, Tx, Tbar, q)Arguments
lambda::AbstractVector: LengthLvector of values of the dual problem variables.Tx::AbstractMatrix:L x Nmatrix of moments evaluated at the grid points specified in discrete_approximation.Tbar::AbstractVector: LengthLvector of moments of the underlying distribution which should be matched.q::AbstractVector: LengthNvector of prior weights for each point in the grid.
Returns
obj: Scalar value of objective function evaluated atlambda.
QuantEcon.fix — Function
fix(x)Round x towards zero. For arrays there is a mutating version fix!.
QuantEcon.getZ — Method
getZ(R, gamma, BB)Simple method to return an element $Z$ in the Riccati equation solver whose type is Float64 (to be accepted by the cond() function).
Arguments
R::Float64: Input scalar.gamma::Float64: Parameter in the Riccati equation solver.BB::Float64: Result of $B' B$.
Returns
::Float64: Element $Z$ in the Riccati equation solver.
QuantEcon.getZ — Method
getZ(R, gamma, BB)Simple method to return an element $Z$ in the Riccati equation solver whose type is Float64 (to be accepted by the cond() function).
Arguments
R::Float64: Input scalar.gamma::Float64: Parameter in the Riccati equation solver.BB::Union{Vector, Matrix}: Result of $B' B$.
Returns
::Float64: Element $Z$ in the Riccati equation solver.
QuantEcon.getZ — Method
getZ(R, gamma, BB)Simple method to return an element $Z$ in the Riccati equation solver whose type is Matrix (to be accepted by the cond() function).
Arguments
R::Matrix: Input matrix.gamma::Float64: Parameter in the Riccati equation solver.BB::Matrix: Result of $B' B$.
Returns
::Matrix: Element $Z$ in the Riccati equation solver.
QuantEcon.go_backward — Method
go_backward(k, x_fi, sigma_fi, sigma_fo, x_s1, sigma_s1)Helper function for backward recursion in Kalman smoothing.
Arguments
k::Kalman:Kalmaninstance specifying the model.x_fi::Vector: Filtered mean of state for period $t$.sigma_fi::Matrix: Filtered covariance matrix of state for period $t$.sigma_fo::Matrix: Forecast of covariance matrix of state for period $t+1$ conditional on period $t$ observations.x_s1::Vector: Smoothed mean of state for period $t+1$.sigma_s1::Matrix: Smoothed covariance of state for period $t+1$.
Returns
x_s::Vector: Smoothed mean of state for period $t$.sigma_s::Matrix: Smoothed covariance of state for period $t$.
QuantEcon.gth_solve! — Method
gth_solve!(A)Same as gth_solve, but overwrite the input A, instead of creating a copy.
QuantEcon.log_likelihood — Method
log_likelihood(k, y)Computes log-likelihood of period $t$.
Arguments
k::Kalman:Kalmaninstance specifying the model. Current values must be the forecast for period $t$ observation conditional on $t-1$ observation.y::AbstractVector: Response observations at period $t$.
Returns
logL::Real: Log-likelihood of observations at period $t$.
QuantEcon.min_var_trace — Method
min_var_trace(A)Find a unitary matrix U such that the diagonal components of U'AU are as close as possible to a multiple of the identity matrix.
Arguments
A::AbstractMatrix: Square matrix.
Returns
U: Unitary matrix.fval: Minimum value.
QuantEcon.polynomial_moment — Method
polynomial_moment(X, mu, scaling_factor, n_moments)Compute the moment defining function used in discrete_approximation.
Arguments
X::AbstractVector: LengthNvector of grid points.mu::Real: Location parameter (conditional mean).scaling_factor::Real: Scaling factor for numerical stability. (typically largest grid point).n_moments::Integer: Number of polynomial moments.
Return
T: Moment defining function used indiscrete_approximation.
QuantEcon.random_probvec — Method
random_probvec([rng], k[, m])Return m randomly sampled probability vectors of size k.
Arguments
rng::AbstractRNG=GLOBAL_RNG: Random number generator.k::Integer: Size of each probability vector.m::Integer: Number of probability vectors.
Returns
a::Array: Matrix of shape(k, m), or Vector of shape(k,)ifmis not specified, containing probability vector(s) as column(s).
QuantEcon.s_wise_max! — Method
s_wise_max!(a_indices, a_indptr, vals, out)Populate out with max_a vals(s, a), where vals is represented as a Vector of size (num_sa_pairs,).
Arguments
a_indices::AbstractVector: Action indices vector.a_indptr::AbstractVector: Action index pointers vector.vals::AbstractVector: Vector of values of size(num_sa_pairs,).out::AbstractVector: Output vector to be populated with maximum values.
Returns
out::AbstractVector: Vector of maximum values across actions for each state.
QuantEcon.s_wise_max! — Method
s_wise_max!(a_indices, a_indptr, vals, out, out_argmax)Populate out with max_a vals(s, a), where vals is represented as a Vector of size (num_sa_pairs,).
Also fills out_argmax with the cartesian index associated with the argmax in each row.
Arguments
a_indices::AbstractVector: Action indices vector.a_indptr::AbstractVector: Action index pointers vector.vals::AbstractVector: Vector of values of size(num_sa_pairs,).out::AbstractVector: Output vector to be populated with maximum values.out_argmax::AbstractVector: Output vector to be populated with argmax indices.
Returns
out::AbstractVector: Vector of maximum values across actions for each state.out_argmax::AbstractVector: Vector of argmax indices for each state.
QuantEcon.s_wise_max! — Method
s_wise_max!(vals, out, out_argmax)Populate out with max_a vals(s, a), where vals is represented as a AbstractMatrix of size (num_states, num_actions).
Also fills out_argmax with the column number associated with the argmax in each row.
Arguments
vals::AbstractMatrix: Matrix of values of size(num_states, num_actions).out::AbstractVector: Output vector to be populated with maximum values.out_argmax::AbstractVector: Output vector to be populated with argmax indices.
Returns
out::AbstractVector: Vector of maximum values across actions for each state.out_argmax::AbstractVector: Vector of argmax column indices for each state.
QuantEcon.s_wise_max! — Method
s_wise_max!(vals, out)Populate out with max_a vals(s, a), where vals is represented as a AbstractMatrix of size (num_states, num_actions).
Arguments
vals::AbstractMatrix: Matrix of values of size(num_states, num_actions).out::AbstractVector: Output vector to be populated with maximum values.
Returns
out::AbstractVector: Vector of maximum values across actions for each state.
QuantEcon.s_wise_max — Method
s_wise_max(vals)Return the Vector max_a vals(s, a), where vals is represented as a AbstractMatrix of size (num_states, num_actions).
Arguments
vals::AbstractMatrix: Matrix of values of size(num_states, num_actions).
Returns
out::Vector: Vector of maximum values across actions for each state.
QuantEcon.standardize_var — Method
standardize_var(b, B, Psi, M)Return standardized VAR(1) representation.
Arguments
b::AbstractVector:M x 1constant term vector.B::AbstractMatrix:M x Mmatrix of impact coefficients.Psi::AbstractMatrix:M x Mvariance-covariance matrix of innovations.M::Integer: Number of variables of the VAR(1) model.
Returns
A::Matrix: Impact coefficients of standardized VAR(1) process.C::AbstractMatrix: Variance-covariance matrix of standardized model innovations.mu::AbstractVector: Mean of the standardized VAR(1) process.Sigma::AbstractMatrix: Variance-covariance matrix of the standardized VAR(1) process.
QuantEcon.standardize_var — Method
standardize_var(b, B, Psi, M)Return standardized AR(1) representation.
Arguments
b::Real: Constant term.B::Real: Impact coefficient.Psi::Real: Variance of innovation.M::Integer == 1: Must be one since the function is for AR(1).
Returns
A::Real: Impact coefficient of standardized AR(1) process.C::Real: Standard deviation of the innovation.mu::Real: Mean of the standardized AR(1) process.Sigma::Real: Variance of the standardized AR(1) process.
QuantEcon.todense — Method
If A is already dense, return A as is.
QuantEcon.todense — Method
Custom version of full, which allows conversion to type T.
QuantEcon.warn_persistency — Method
warn_persistency(B, method)Check persistency when method is Quadrature and give warning if needed.
Arguments
B::Union{Real, AbstractMatrix}: Impact coefficient.method::VAREstimationMethod: Method for grid making.
Returns
nothing