QuantEcon
API documentation
Index
Exported
QuantEcon.ARMA — Type.
Represents a scalar ARMA(p, q) process
If phi and theta are scalars, then the model is understood to be

X_t = phi X_{t-1} + epsilon_t + theta epsilon_{t-1}

where epsilon_t is a white noise process with standard deviation sigma.
If phi and theta are arrays or sequences, then the interpretation is the ARMA(p, q) model

X_t = phi_1 X_{t-1} + ... + phi_p X_{t-p} +
      epsilon_t + theta_1 epsilon_{t-1} + ... + theta_q epsilon_{t-q}

where

phi = (phi_1, phi_2, ..., phi_p)
theta = (theta_1, theta_2, ..., theta_q)
sigma is a scalar, the standard deviation of the white noise
Fields
- phi::Vector: AR parameters phi_1, ..., phi_p
- theta::Vector: MA parameters theta_1, ..., theta_q
- p::Integer: Number of AR coefficients
- q::Integer: Number of MA coefficients
- sigma::Real: Standard deviation of white noise
- ma_poly::Vector: MA polynomial – filtering representation
- ar_poly::Vector: AR polynomial – filtering representation
Examples
using QuantEcon
phi = 0.5
theta = [0.0, -0.8]
sigma = 1.0
lp = ARMA(phi, theta, sigma)
require(joinpath(dirname(@__FILE__),"..", "examples", "arma_plots.jl"))
quad_plot(lp)

QuantEcon.DiscreteDP — Type.
DiscreteDP type for specifying parameters for discrete dynamic programming model
Parameters
- R::Array{T,NR}: Reward Array
- Q::Array{T,NQ}: Transition Probability Array
- beta::Float64: Discount Factor
- a_indices::Nullable{Vector{Tind}}: Action Indices. Null unless using SA formulation
- a_indptr::Nullable{Vector{Tind}}: Action Index Pointers. Null unless using SA formulation
Returns
- ddp::DiscreteDP: DiscreteDP object
QuantEcon.DiscreteDP — Method.
DiscreteDP type for specifying parameters for discrete dynamic programming model; state-action pair formulation
Parameters
- R::Array{T,NR}: Reward Array
- Q::Array{T,NQ}: Transition Probability Array
- beta::Float64: Discount Factor
- s_indices::Nullable{Vector{Tind}}: State Indices. Null unless using SA formulation
- a_indices::Nullable{Vector{Tind}}: Action Indices. Null unless using SA formulation
- a_indptr::Nullable{Vector{Tind}}: Action Index Pointers. Null unless using SA formulation
Returns
- ddp::DiscreteDP: Constructor for DiscreteDP object
QuantEcon.DiscreteDP — Method.
DiscreteDP type for specifying parameters for discrete dynamic programming model; dense matrix formulation
Parameters
- R::Array{T,NR}: Reward Array
- Q::Array{T,NQ}: Transition Probability Array
- beta::Float64: Discount Factor
Returns
- ddp::DiscreteDP: Constructor for DiscreteDP object
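Example
A minimal sketch based on the standard two-state, two-action example from the quant-econ lectures (the specific values are illustrative, not part of this docstring):
using QuantEcon
R = [5.0 10.0; -1.0 -Inf]     # R[s, a]; -Inf marks an infeasible action
Q = zeros(2, 2, 2)            # Q[s, a, s']
Q[1, 1, :] = [0.5, 0.5]
Q[1, 2, :] = [0.0, 1.0]
Q[2, 1, :] = [0.0, 1.0]
Q[2, 2, :] = [0.5, 0.5]       # arbitrary valid distribution for the infeasible pair
beta = 0.95
ddp = DiscreteDP(R, Q, beta)
res = solve(ddp, PFI)         # res.v: value function, res.sigma: optimal policy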
QuantEcon.DiscreteRV — Type.
Generates an array of draws from a discrete random variable with vector of probabilities given by q.
Fields
- q::AbstractVector: A vector of non-negative probabilities that sum to 1
- Q::AbstractVector: The cumulative sum of q
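Example
A quick sketch (probabilities illustrative):
using QuantEcon
q = [0.1, 0.4, 0.5]
d = DiscreteRV(q)
draw(d)          # a single draw from {1, 2, 3}
draw(d, 100)     # a Vector{Int} of 100 draws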
QuantEcon.ECDF — Type.
One-dimensional empirical distribution function given a vector of observations.
Fields
- observations::Vector: The vector of observations
QuantEcon.LAE — Type.
A look ahead estimator associated with a given stochastic kernel p and a vector of observations X.
Fields
- p::Function: The stochastic kernel. Signature is p(x, y) and it should be vectorized in both inputs
- X::Matrix: A vector containing observations. Note that this can be passed as any kind of AbstractArray and will be coerced into an n x 1 vector.
QuantEcon.LQ — Type.
Linear quadratic optimal control of either infinite or finite horizon
The infinite horizon problem can be written

min E sum_{t=0}^{infty} beta^t r(x_t, u_t)

with

r(x_t, u_t) := x_t' R x_t + u_t' Q u_t + 2 u_t' N x_t

The finite horizon form is

min E sum_{t=0}^{T-1} beta^t r(x_t, u_t) + beta^T x_T' R_f x_T

Both are minimized subject to the law of motion

x_{t+1} = A x_t + B u_t + C w_{t+1}

Here x is n x 1, u is k x 1, w is j x 1 and the matrices are conformable for these dimensions. The sequence {w_t} is assumed to be white noise, with zero mean and E w_t w_t' = I, the j x j identity.
For this model, the time t value (i.e., cost-to-go) function V_t takes the form

x' P_T x + d_T

and the optimal policy is of the form u_T = -F_T x_T. In the infinite horizon case, V, P, d and F are all stationary.
Fields
- Q::ScalarOrArray: k x k payoff coefficient for control variable u. Must be symmetric and nonnegative definite
- R::ScalarOrArray: n x n payoff coefficient matrix for state variable x. Must be symmetric and nonnegative definite
- A::ScalarOrArray: n x n coefficient on state in state transition
- B::ScalarOrArray: n x k coefficient on control in state transition
- C::ScalarOrArray: n x j coefficient on random shock in state transition
- N::ScalarOrArray: k x n cross product in payoff equation
- bet::Real: Discount factor in [0, 1]
- capT::Union{Int, Void}: Terminal period in finite horizon problem
- rf::ScalarOrArray: n x n terminal payoff in finite horizon problem. Must be symmetric and nonnegative definite
- P::ScalarOrArray: n x n matrix in value function representation V(x) = x'Px + d
- d::Real: Constant in value function representation
- F::ScalarOrArray: Policy rule that specifies optimal control in each period
QuantEcon.LQ — Type.
Main constructor for LQ type
Specifies default arguments for all fields not part of the payoff function or transition equation.
Arguments
- Q::ScalarOrArray: k x k payoff coefficient for control variable u. Must be symmetric and nonnegative definite
- R::ScalarOrArray: n x n payoff coefficient matrix for state variable x. Must be symmetric and nonnegative definite
- A::ScalarOrArray: n x n coefficient on state in state transition
- B::ScalarOrArray: n x k coefficient on control in state transition
- ;C::ScalarOrArray(zeros(size(R, 1))): n x j coefficient on random shock in state transition
- ;N::ScalarOrArray(zeros(size(B, 1), size(A, 2))): k x n cross product in payoff equation
- ;bet::Real(1.0): Discount factor in [0, 1]
- ;capT::Union{Int, Void}(Void): Terminal period in finite horizon problem
- ;rf::ScalarOrArray(fill(NaN, size(R)...)): n x n terminal payoff in finite horizon problem. Must be symmetric and nonnegative definite.
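Example
A minimal sketch of a scalar problem (the values, and the use of the keyword defaults for C and N, are illustrative assumptions): minimize E sum beta^t (x_t^2 + u_t^2) subject to x_{t+1} = x_t + u_t.
using QuantEcon
Q, R = 1.0, 1.0            # payoffs on control and state
A, B = 1.0, 1.0            # law of motion x' = x + u (C defaults to zero shock)
lq = LQ(Q, R, A, B; bet=0.95)
stationary_values!(lq)     # populates lq.P, lq.d, lq.F
x_path, u_path, w_path = compute_sequence(lq, 1.0, 50)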
QuantEcon.LSS — Type.
A type that describes the Gaussian Linear State Space Model of the form:

x_{t+1} = A x_t + C w_{t+1}
y_t = G x_t

where {w_t} and {v_t} are independent and standard normal with dimensions k and l respectively. The initial conditions are mu_0 and Sigma_0 for x_0 ~ N(mu_0, Sigma_0). When Sigma_0=0, the draw of x_0 is exactly mu_0.
Fields
- A::Matrix: Part of the state transition equation. It should be n x n
- C::Matrix: Part of the state transition equation. It should be n x m
- G::Matrix: Part of the observation equation. It should be k x n
- k::Int: Dimension
- n::Int: Dimension
- m::Int: Dimension
- mu_0::Vector: This is the mean of the initial draw and is of length n
- Sigma_0::Matrix: This is the variance of the initial draw and is n x n and also should be positive definite and symmetric
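Example
A minimal sketch, assuming the three-argument constructor with default initial conditions (all values illustrative):
using QuantEcon
A = [0.8 0.0; 0.0 0.5]    # n x n state transition
C = 0.1 * eye(2)          # n x m shock loading
G = [1.0 0.0]             # k x n observation matrix
lss = LSS(A, C, G)
mu_x, mu_y, Sigma_x, Sigma_y = stationary_distributions(lss)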
QuantEcon.LinInterp — Type.
Linear interpolation in one dimension
Fields
- breaks::AbstractVector: A sorted array of grid points on which to interpolate
- vals::AbstractVector: The function values associated with each of the grid points
Examples
breaks = cumsum(0.1 .* rand(20))
vals = 0.1 .* sin.(breaks)
li = LinInterp(breaks, vals)
# do interpolation via `call` method on a LinInterp object
li(0.2)
# use broadcasting to evaluate at multiple points
li.([0.1, 0.2, 0.3])

QuantEcon.MPFI — Type.
This refers to the Modified Policy Iteration solution algorithm.
References
http://quant-econ.net/jl/ddp.html
QuantEcon.MarkovChain — Type.
Finite-state discrete-time Markov chain.
Methods are available that provide useful information such as the stationary distributions, communication classes, and recurrent classes, and allow simulation of state transitions.
Fields
- p::AbstractMatrix: The transition matrix. Must be square, all elements must be nonnegative, and all rows must sum to unity.
- state_values::AbstractVector: Vector containing the values associated with the states.
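Example
A minimal sketch (transition matrix and state values are illustrative):
using QuantEcon
P = [0.9 0.1; 0.2 0.8]
mc = MarkovChain(P, ["low", "high"])
simulate(mc, 5)                   # e.g. ["low", "low", "low", "high", "high"]
stationary_distributions(mc)      # one distribution per recurrent class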
QuantEcon.MarkovChain — Method.
Returns the controlled Markov chain for a given policy sigma.
Parameters
- ddp::DiscreteDP: Object that contains the model parameters
- ddpr::DPSolveResult: Object that contains result variables
Returns
- mc::MarkovChain: Controlled Markov chain.
QuantEcon.PFI — Type.
This refers to the Policy Iteration solution algorithm.
References
http://quant-econ.net/jl/ddp.html
QuantEcon.RBLQ — Type.
Represents infinite horizon robust LQ control problems of the form

min_{u_t} sum_t beta^t {x_t' R x_t + u_t' Q u_t}

subject to

x_{t+1} = A x_t + B u_t + C w_{t+1}

and with model misspecification parameter theta.
Fields
- Q::Matrix{Float64}: The cost (payoff) matrix for the controls. See above for more. Q should be k x k and symmetric and positive definite
- R::Matrix{Float64}: The cost (payoff) matrix for the state. See above for more. R should be n x n and symmetric and non-negative definite
- A::Matrix{Float64}: The matrix that corresponds with the state in the state space system. A should be n x n
- B::Matrix{Float64}: The matrix that corresponds with the control in the state space system. B should be n x k
- C::Matrix{Float64}: The matrix that corresponds with the random process in the state space system. C should be n x j
- beta::Real: The discount factor in the robust control problem
- theta::Real: The robustness factor in the robust control problem
- k, n, j::Int: Dimensions of input matrices
QuantEcon.VFI — Type.
This refers to the Value Iteration solution algorithm.
References
http://quant-econ.net/jl/ddp.html
LightGraphs.period — Method.
Return the period of the Markov chain mc.
Arguments
- mc::MarkovChain: MarkovChain instance.
Returns
- ::Int: Period of mc.
QuantEcon.F_to_K — Method.
Compute agent 2's best cost-minimizing response K, given F.
Arguments
- rlq::RBLQ: Instance of RBLQ type
- F::Matrix{Float64}: A k x n array representing agent 1's policy
Returns
- K::Matrix{Float64}: Agent's best cost-minimizing response corresponding to F
- P::Matrix{Float64}: The value function corresponding to F
QuantEcon.K_to_F — Method.
Compute agent 1's best cost-minimizing response F, given K.
Arguments
- rlq::RBLQ: Instance of RBLQ type
- K::Matrix{Float64}: A k x n array representing the worst case matrix
Returns
- F::Matrix{Float64}: Agent's best cost-minimizing response corresponding to K
- P::Matrix{Float64}: The value function corresponding to K
QuantEcon.RQ_sigma — Method.
Method of RQ_sigma that extracts sigma from a DPSolveResult
See other docstring for details
QuantEcon.RQ_sigma — Method.
Given a policy sigma, return the reward vector R_sigma and the transition probability matrix Q_sigma.
Parameters
- ddp::DiscreteDP: Object that contains the model parameters
- sigma::Vector{Int}: policy rule vector
Returns
- R_sigma::Array{Float64}: Reward vector for sigma, of length n.
- Q_sigma::Array{Float64}: Transition probability matrix for sigma, of shape (n, n).
QuantEcon.ar_periodogram — Function.
Compute periodogram from data x, using prewhitening, smoothing and recoloring. The data is fitted to an AR(1) model for prewhitening, and the residuals are used to compute a first-pass periodogram with smoothing. The fitted coefficients are then used for recoloring.
Arguments
- x::Array: An array containing the data to smooth
- window_len::Int(7): An odd integer giving the length of the window
- window::AbstractString("hanning"): A string giving the window type. Possible values are flat, hanning, hamming, bartlett, or blackman
Returns
- w::Array{Float64}: Fourier frequencies at which the periodogram is evaluated
- I_w::Array{Float64}: The periodogram at frequencies w
QuantEcon.autocovariance — Method.
Compute the autocovariance function from the ARMA parameters over the integers range(num_autocov) using the spectral density and the inverse Fourier transform.
Arguments
- arma::ARMA: Instance of ARMA type
- ;num_autocov::Integer(16): The number of autocovariances to calculate
QuantEcon.b_operator — Method.
The B operator, mapping P into

B(P) := R - beta^2 A'PB(Q + beta B'PB)^{-1}B'PA + beta A'PA

and also returning

F := (Q + beta B'PB)^{-1} beta B'PA

Arguments
- rlq::RBLQ: Instance of RBLQ type
- P::Matrix{Float64}: size is n x n
Returns
- F::Matrix{Float64}: The F matrix as defined above
- new_p::Matrix{Float64}: The matrix P after applying the B operator
QuantEcon.bellman_operator! — Method.
The Bellman operator, which computes and returns the updated value function Tv for a value function v.
Parameters
- ddp::DiscreteDP: Object that contains the model parameters
- v::Vector{T<:AbstractFloat}: The current guess of the value function
- Tv::Vector{T<:AbstractFloat}: A buffer array to hold the updated value function. Initial value not used and will be overwritten
- sigma::Vector: A buffer array to hold the policy function. Initial values not used and will be overwritten
Returns
- Tv::Vector: Updated value function vector
- sigma::Vector: Updated policy function vector
QuantEcon.bellman_operator! — Method.
The Bellman operator, which computes and returns the updated value function Tv for a given value function v.
This function will fill the input v with Tv and the input sigma with the corresponding policy rule
Parameters
- ddp::DiscreteDP: The ddp model
- v::Vector{T<:AbstractFloat}: The current guess of the value function. This array will be overwritten
- sigma::Vector: A buffer array to hold the policy function. Initial values not used and will be overwritten
Returns
- Tv::Vector: Updated value function vector
- sigma::Vector{T<:Integer}: Policy rule
QuantEcon.bellman_operator! — Method.
Apply the Bellman operator using v=ddpr.v, Tv=ddpr.Tv, and sigma=ddpr.sigma
Notes
Updates ddpr.Tv and ddpr.sigma inplace
QuantEcon.bellman_operator — Method.
The Bellman operator, which computes and returns the updated value function Tv for a given value function v.
Parameters
- ddp::DiscreteDP: The ddp model
- v::Vector: The current guess of the value function
Returns
Tv::Vector: Updated value function vector
QuantEcon.bisect — Method.
Find the root of f on the bracketing interval [x1, x2] via bisection
Arguments
- f::Function: The function you want to bracket
- x1::T: Lower border for search interval
- x2::T: Upper border for search interval
- ;maxiter::Int(500): Maximum number of bisection iterations
- ;xtol::Float64(1e-12): The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
- ;rtol::Float64(2*eps()): The routine converges when a root is known to lie within rtol times the value returned. Should be ≥ 0
Returns
x::T: The found root
Exceptions
- Throws an ArgumentError if [x1, x2] does not form a bracketing interval
- Throws a ConvergenceError if the maximum number of iterations is exceeded
References
Matches bisect function from scipy/scipy/optimize/Zeros/bisect.c
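Example
A quick sketch (function and interval are illustrative):
using QuantEcon
bisect(x -> x^2 - 2, 1.0, 2.0)    # ≈ 1.4142, the root of x^2 - 2 on [1, 2]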
QuantEcon.brent — Method.
Find the root of f on the bracketing interval [x1, x2] via Brent's algorithm
Arguments
- f::Function: The function you want to bracket
- x1::T: Lower border for search interval
- x2::T: Upper border for search interval
- ;maxiter::Int(500): Maximum number of bisection iterations
- ;xtol::Float64(1e-12): The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
- ;rtol::Float64(2*eps()): The routine converges when a root is known to lie within rtol times the value returned. Should be ≥ 0
Returns
x::T: The found root
Exceptions
- Throws an ArgumentError if [x1, x2] does not form a bracketing interval
- Throws a ConvergenceError if the maximum number of iterations is exceeded
References
Matches brentq function from scipy/scipy/optimize/Zeros/brentq.c
QuantEcon.brenth — Method.
Find a root of f on the bracketing interval [x1, x2] via a modified Brent's method
This routine uses a hyperbolic extrapolation formula instead of the standard inverse quadratic formula. Otherwise it is the original Brent's algorithm, as implemented in the brent function.
Arguments
- f::Function: The function you want to bracket
- x1::T: Lower border for search interval
- x2::T: Upper border for search interval
- ;maxiter::Int(500): Maximum number of bisection iterations
- ;xtol::Float64(1e-12): The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
- ;rtol::Float64(2*eps()): The routine converges when a root is known to lie within rtol times the value returned. Should be ≥ 0
Returns
x::T: The found root
Exceptions
- Throws an ArgumentError if [x1, x2] does not form a bracketing interval
- Throws a ConvergenceError if the maximum number of iterations is exceeded
References
Matches brenth function from scipy/scipy/optimize/Zeros/brenth.c
QuantEcon.ckron — Function.
ckron(arrays::AbstractArray...)
Repeatedly apply Kronecker products to the arrays. Equivalent to reduce(kron, arrays)
QuantEcon.communication_classes — Method.
Find the communication classes of the Markov chain mc.
Arguments
mc::MarkovChain: MarkovChain instance.
Returns
::Vector{Vector{Int}}: Vector of vectors that describe the communication
classes of mc.
QuantEcon.compute_deterministic_entropy — Method.
Given K and F, compute the value of deterministic entropy, which is sum_t beta^t x_t' K'K x_t with x_{t+1} = (A - BF + CK) x_t.
Arguments
- rlq::RBLQ: Instance of RBLQ type
- F::Matrix{Float64}: The policy function, a k x n array
- K::Matrix{Float64}: The worst case matrix, a j x n array
- x0::Vector{Float64}: The initial condition for state
Returns
- e::Float64: The deterministic entropy
QuantEcon.compute_fixed_point — Method.
Repeatedly apply a function to search for a fixed point
Approximates T^∞ v, where T is an operator (function) and v is an initial guess for the fixed point. Will terminate either when T^{k+1}(v) - T^k v < err_tol or max_iter iterations have been exceeded.
Provided that T is a contraction mapping or similar, the return value will be an approximation to the fixed point of T.
Arguments
- T: A function representing the operator T
- v::TV: The initial condition. An object of type TV
- ;err_tol(1e-3): Stopping tolerance for iterations
- ;max_iter(50): Maximum number of iterations
- ;verbose(2): Level of feedback (0 for no output, 1 for warnings only, 2 for warning and convergence messages during iteration)
- ;print_skip(10): if verbose == 2, how many iterations to apply between print messages
Returns
- ::TV: The fixed point of the operator T. Has type TV
Example
using QuantEcon
T(x, μ) = 4.0 * μ * x * (1.0 - x)
x_star = compute_fixed_point(x->T(x, 0.3), 0.4) # (4μ - 1)/(4μ)

QuantEcon.compute_greedy! — Method.
Compute the v-greedy policy
Parameters
- ddp::DiscreteDP: Object that contains the model parameters
- ddpr::DPSolveResult: Object that contains result variables
Returns
- sigma::Vector{Int}: Array containing v-greedy policy rule
Notes
modifies ddpr.sigma and ddpr.Tv in place
QuantEcon.compute_greedy — Method.
Compute the v-greedy policy.
Arguments
- v::Vector: Value function vector of length n
- ddp::DiscreteDP: Object that contains the model parameters
Returns
- sigma: v-greedy policy vector, of length n
QuantEcon.compute_sequence — Function.
Compute and return the optimal state and control sequence, assuming innovations are N(0, 1)
Arguments
- lq::LQ: instance of LQ type
- x0::ScalarOrArray: initial state
- ts_length::Integer(100): maximum number of periods for which to return process. If lq instance is finite horizon type, the sequences are returned only for min(ts_length, lq.capT)
Returns
- x_path::Matrix{Float64}: An n x T+1 matrix, where the t-th column represents x_t
- u_path::Matrix{Float64}: A k x T matrix, where the t-th column represents u_t
- w_path::Matrix{Float64}: An n x T+1 matrix, where the t-th column represents lq.C*N(0,1)
QuantEcon.d_operator — Method.
The D operator, mapping P into

D(P) := P + PC(theta I - C'PC)^{-1} C'P

Arguments
- rlq::RBLQ: Instance of RBLQ type
- P::Matrix{Float64}: size is n x n
Returns
dP::Matrix{Float64}: The matrix P after applying the D operator
QuantEcon.divide_bracket — Function.
Given a function f defined on the interval [x1, x2], subdivide the interval into n equally spaced segments, and search for zero crossings of the function. nroot will be set to the number of bracketing pairs found. If it is positive, the arrays xb1[1..nroot] and xb2[1..nroot] will be filled sequentially with any bracketing pairs that are found.
Arguments
- f::Function: The function you want to bracket
- x1::T: Lower border for search interval
- x2::T: Upper border for search interval
- n::Int(50): The number of sub-intervals to divide [x1, x2] into
Returns
- x1b::Vector{T}: Vector of lower borders of bracketing intervals
- x2b::Vector{T}: Vector of upper borders of bracketing intervals
References
This is zbrak from Numerical Recipes in C++
QuantEcon.do_quad — Method.
Approximate the integral of f, given quadrature nodes and weights
Arguments
- f::Function: A callable function that is to be approximated over the domain spanned by nodes.
- nodes::Array: Quadrature nodes
- weights::Array: Quadrature weights
- args...(Void): additional positional arguments to pass to f
- ;kwargs...(Void): additional keyword arguments to pass to f
Returns
- out::Float64: The scalar that approximates the integral of f on the hypercube formed by [a, b]
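Example
A short sketch (assuming, as described above, that f receives the full array of nodes):
using QuantEcon
nodes, weights = qnwlege(65, -1.0, 1.0)   # Gauss-Legendre rule on [-1, 1]
do_quad(x -> cos.(x), nodes, weights)     # ≈ 2*sin(1) ≈ 1.6829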
QuantEcon.draw — Method.
Make multiple draws from the discrete distribution represented by a DiscreteRV instance
Arguments
- d::DiscreteRV: The DiscreteRV type representing the distribution
- k::Int: Number of draws to make
Returns
- out::Vector{Int}: k draws from d
QuantEcon.draw — Method.
Make a single draw from the discrete distribution
Arguments
- d::DiscreteRV: The DiscreteRV type representing the distribution
Returns
out::Int: One draw from the discrete distribution
QuantEcon.evaluate_F — Method.
Given a fixed policy F, with the interpretation u = -F x, this function computes the matrix P_F and constant d_F associated with discounted cost J_F(x) = x' P_F x + d_F.
Arguments
- rlq::RBLQ: Instance of RBLQ type
- F::Matrix{Float64}: The policy function, a k x n array
Returns
- P_F::Matrix{Float64}: Matrix for discounted cost
- d_F::Float64: Constant for discounted cost
- K_F::Matrix{Float64}: Worst case policy
- O_F::Matrix{Float64}: Matrix for discounted entropy
- o_F::Float64: Constant for discounted entropy
QuantEcon.evaluate_policy — Method.
Compute the value of a policy.
Parameters
- ddp::DiscreteDP: Object that contains the model parameters
- sigma::Vector{T<:Integer}: Policy rule vector
Returns
- v_sigma::Array{Float64}: Value vector of sigma, of length n.
QuantEcon.evaluate_policy — Method.
Method of evaluate_policy that extracts sigma from a DPSolveResult
See other docstring for details
QuantEcon.expand_bracket — Method.
Given a function f and an initial guessed range x1 to x2, the routine expands the range geometrically until a root is bracketed by the returned values x1 and x2 (in which case zbrac returns true) or until the range becomes unacceptably large (in which case a ConvergenceError is thrown).
Arguments
- f::Function: The function you want to bracket
- x1::T: Initial guess for lower border of bracket
- x2::T: Initial guess for upper border of bracket
- ;ntry::Int(50): The maximum number of expansion iterations
- ;fac::Float64(1.6): Expansion factor (higher ⟶ larger interval size jumps)
Returns
- x1::T: The lower end of an actual bracketing interval
- x2::T: The upper end of an actual bracketing interval
References
This method is zbrac from Numerical Recipes in C++
Exceptions
- Throws a ConvergenceError if the maximum number of iterations is exceeded
QuantEcon.filtered_to_forecast! — Method.
Updates the moments of the time t filtering distribution to the moments of the predictive distribution, which becomes the time t+1 prior
Arguments
- k::Kalman: An instance of the Kalman filter
QuantEcon.gridmake — Function.
gridmake(arrays::Union{AbstractVector,AbstractMatrix}...)
Expand one or more vectors (or matrices) into a matrix where rows span the cartesian product of combinations of the input arrays. Each column of the input arrays will correspond to one column of the output matrix. The first array varies the fastest (see example)
Example
julia> x = [1, 2, 3]; y = [10, 20]; z = [100, 200];
julia> gridmake(x, y, z)
12x3 Array{Int64,2}:
1 10 100
2 10 100
3 10 100
1 20 100
2 20 100
3 20 100
1 10 200
2 10 200
3 10 200
1 20 200
2 20 200
3 20 200

QuantEcon.gridmake! — Method.
gridmake!(out::AbstractMatrix, arrays::AbstractVector...)
Like gridmake, but fills a pre-allocated array. out must have size (prod(map(length, arrays)), length(arrays))
QuantEcon.gth_solve — Method.
This routine computes the stationary distribution of an irreducible Markov transition matrix (stochastic matrix) or transition rate matrix (generator matrix) A.
More generally, given a Metzler matrix (square matrix whose off-diagonal entries are all nonnegative) A, this routine solves for a nonzero solution x to x (A - D) = 0, where D is the diagonal matrix for which the rows of A - D sum to zero (i.e., D_{ii} = sum_j A_{ij} for all i). One (and only one, up to normalization) nonzero solution exists corresponding to each recurrent class of A, and in particular, if A is irreducible, there is a unique solution; when there is more than one solution, the routine returns the solution that contains in its support the first index i such that no path connects i to any index larger than i. The solution is normalized so that its 1-norm equals one.
This routine implements the Grassmann-Taksar-Heyman (GTH) algorithm (Grassmann, Taksar, and Heyman 1985), a numerically stable variant of Gaussian elimination, where only the off-diagonal entries of A are used as the input data. For a nice exposition of the algorithm, see Stewart (2009), Chapter 10.
Arguments
A::Matrix{T}: Stochastic matrix or generator matrix. Must be of shape n x n.
Returns
- x::Vector{T}: Stationary distribution of A.
References
W. K. Grassmann, M. I. Taksar and D. P. Heyman, "Regenerative Analysis and Steady State Distributions for Markov Chains," Operations Research (1985), 1107-1116.
W. J. Stewart, Probability, Markov Chains, Queues, and Simulation, Princeton
University Press, 2009.
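Example
A minimal sketch (the matrix is illustrative):
using QuantEcon
P = [0.4 0.6; 0.2 0.8]    # irreducible stochastic matrix
gth_solve(P)              # ≈ [0.25, 0.75]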
QuantEcon.impulse_response — Method.
Get the impulse response corresponding to our model.
Arguments
- arma::ARMA: Instance of ARMA type
- ;impulse_length::Integer(30): Length of horizon for calculating impulse response. Must be at least as long as the p fields of arma
Returns
- psi::Vector{Float64}: psi[j] is the response at lag j of the impulse response. We take psi[1] as unity.
QuantEcon.interp — Method.
interp(grid::AbstractVector, function_vals::AbstractVector)
Linear interpolation in one dimension
Examples
breaks = cumsum(0.1 .* rand(20))
vals = 0.1 .* sin.(breaks)
li = interp(breaks, vals)
# Do interpolation by treating `li` as a function you can pass scalars to
li(0.2)
# use broadcasting to evaluate at multiple points
li.([0.1, 0.2, 0.3])

QuantEcon.is_aperiodic — Method.
Indicate whether the Markov chain mc is aperiodic.
Arguments
mc::MarkovChain: MarkovChain instance.
Returns
::Bool
QuantEcon.is_irreducible — Method.
Indicate whether the Markov chain mc is irreducible.
Arguments
mc::MarkovChain: MarkovChain instance.
Returns
::Bool
QuantEcon.lae_est — Method.
A vectorized function that returns the value of the look ahead estimate at the values in the array y.
Arguments
- l::LAE: Instance of LAE type
- y::Array: Array that becomes the y in l.p(l.x, y)
Returns
psi_vals::Vector: Density at(x, y)
QuantEcon.m_quadratic_sum — Method.
Computes the quadratic sum

V = sum_{j=0}^{infty} A^j B A^{j'}

V is computed by solving the corresponding discrete Lyapunov equation using the doubling algorithm. See the documentation of solve_discrete_lyapunov for more information.
Arguments
- A::Matrix{Float64}: An n x n matrix as described above. We assume in order for convergence that the eigenvalues of A have moduli bounded by unity
- B::Matrix{Float64}: An n x n matrix as described above. We assume in order for convergence that the eigenvalues of B have moduli bounded by unity
- max_it::Int(50): Maximum number of iterations
Returns
gamma1::Matrix{Float64}: Represents the value V
QuantEcon.moment_sequence — Method.
Create an iterator to calculate the population mean and variance-covariance matrix for both x_t and y_t, starting at the initial condition (lss.mu_0, lss.Sigma_0). Each iteration produces a 4-tuple of items (mu_x, mu_y, Sigma_x, Sigma_y) for the next period.
Arguments
- lss::LSS: An instance of the Gaussian linear state space model
QuantEcon.n_states — Method.
Number of states in the Markov chain mc
QuantEcon.nnash — Method.
Compute the limit of a Nash linear quadratic dynamic game.
Player i minimizes

sum_{t=1}^{inf} (x_t' r_i x_t + 2 x_t' w_i u_{it} + u_{it}' q_i u_{it} + u_{jt}' s_i u_{jt} + 2 u_{jt}' m_i u_{it})

subject to the law of motion

x_{t+1} = A x_t + b_1 u_{1t} + b_2 u_{2t}

and a perceived control law u_j(t) = - f_j x_t for the other player.
The solution computed in this routine is the f_i and p_i of the associated double optimal linear regulator problem.
Arguments
- A: Corresponds to the above equation, should be of size (n, n)
- B1: As above, size (n, k_1)
- B2: As above, size (n, k_2)
- R1: As above, size (n, n)
- R2: As above, size (n, n)
- Q1: As above, size (k_1, k_1)
- Q2: As above, size (k_2, k_2)
- S1: As above, size (k_1, k_1)
- S2: As above, size (k_2, k_2)
- W1: As above, size (n, k_1)
- W2: As above, size (n, k_2)
- M1: As above, size (k_2, k_1)
- M2: As above, size (k_1, k_2)
- ;beta::Float64(1.0): Discount rate
- ;tol::Float64(1e-8): Tolerance level for convergence
- ;max_iter::Int(1000): Maximum number of iterations allowed
Returns
- F1::Matrix{Float64}: (k_1, n) matrix representing feedback law for agent 1
- F2::Matrix{Float64}: (k_2, n) matrix representing feedback law for agent 2
- P1::Matrix{Float64}: (n, n) matrix representing the steady-state solution to the associated discrete matrix Riccati equation for agent 1
- P2::Matrix{Float64}: (n, n) matrix representing the steady-state solution to the associated discrete matrix Riccati equation for agent 2
QuantEcon.periodogram — Function.
Computes the periodogram

I(w) = (1 / n) | sum_{t=0}^{n-1} x_t e^{itw} |^2

at the Fourier frequencies w_j := 2 pi j / n, j = 0, ..., n - 1, using the fast Fourier transform. Only the frequencies w_j in [0, pi] and corresponding values I(w_j) are returned. If a window type is given then smoothing is performed.
Arguments
- x::Array: An array containing the data to smooth
- window_len::Int(7): An odd integer giving the length of the window
- window::AbstractString("hanning"): A string giving the window type. Possible values are flat, hanning, hamming, bartlett, or blackman
Returns
- w::Array{Float64}: Fourier frequencies at which the periodogram is evaluated
- I_w::Array{Float64}: The periodogram at frequencies w
QuantEcon.prior_to_filtered! — Method.
Updates the moments (cur_x_hat, cur_sigma) of the time t prior to the time t filtering distribution, using current measurement y_t.
The updates are according to

x_{hat}^F = x_{hat} + Sigma G' (G Sigma G' + R)^{-1} (y - G x_{hat})
Sigma^F = Sigma - Sigma G' (G Sigma G' + R)^{-1} G Sigma

Arguments
- k::Kalman: An instance of the Kalman filter
- y: The current measurement
QuantEcon.qnwbeta — Method.
Computes nodes and weights for beta distribution
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension
- a::Union{Real, Vector{Real}}: First parameter of the beta distribution, along each dimension
- b::Union{Real, Vector{Real}}: Second parameter of the beta distribution, along each dimension
Returns
- nodes::Array{Float64}: An array of quadrature nodes
- weights::Array{Float64}: An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are Vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwcheb — Method.
Computes multivariate Gauss-Chebyshev quadrature nodes and weights.
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension
- a::Union{Real, Vector{Real}}: Lower endpoint along each dimension
- b::Union{Real, Vector{Real}}: Upper endpoint along each dimension
Returns
- nodes::Array{Float64}: An array of quadrature nodes
- weights::Array{Float64}: An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are Vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwequi — Function.
Generates equidistributed sequences with the property that the average value of an integrable function evaluated over the sequence converges to the integral as n goes to infinity.
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension
- a::Union{Real, Vector{Real}}: Lower endpoint along each dimension
- b::Union{Real, Vector{Real}}: Upper endpoint along each dimension
- kind::AbstractString("N"): One of the following:
  - N: Niederreiter (default)
  - W: Weyl
  - H: Haber
  - R: pseudo Random
Returns
- nodes::Array{Float64}: An array of quadrature nodes
- weights::Array{Float64}: An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are Vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwgamma — Function.
Computes nodes and weights for gamma distribution
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension
- a::Union{Real, Vector{Real}}: Shape parameter of the gamma distribution, along each dimension. Must be positive. Default is 1
- b::Union{Real, Vector{Real}}: Scale parameter of the gamma distribution, along each dimension. Must be positive. Default is 1
Returns
- nodes::Array{Float64}: An array of quadrature nodes
- weights::Array{Float64}: An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are Vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwlege — Method.
Computes multivariate Gauss-Legendre quadrature nodes and weights.
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension
- a::Union{Real, Vector{Real}}: Lower endpoint along each dimension
- b::Union{Real, Vector{Real}}: Upper endpoint along each dimension
Returns
- nodes::Array{Float64}: An array of quadrature nodes
- weights::Array{Float64}: An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are Vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwlogn — Method.
Computes quadrature nodes and weights for multivariate lognormal distribution
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension
- mu::Union{Real, Vector{Real}}: Mean along each dimension
- sig2::Union{Real, Vector{Real}, Matrix{Real}}(eye(length(n))): Covariance structure
Returns
- nodes::Array{Float64}: An array of quadrature nodes
- weights::Array{Float64}: An array of corresponding quadrature weights
Notes
See also the documentation for qnwnorm
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwnorm — Method.
Computes nodes and weights for multivariate normal distribution
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension
- mu::Union{Real, Vector{Real}}: Mean along each dimension
- sig2::Union{Real, Vector{Real}, Matrix{Real}}(eye(length(n))): Covariance structure
Returns
- nodes::Array{Float64}: An array of quadrature nodes
- weights::Array{Float64}: An array of corresponding quadrature weights
Notes
This function has many methods. I try to describe them here.
n or mu can be a vector or a scalar. If just one is a scalar, it is repeated to match the length of the other. If both are scalars, then the number of repeats is inferred from sig2.
sig2 can be a matrix, vector or scalar. If it is a matrix, it is treated as the covariance matrix. If it is a vector, it is considered the diagonal of a diagonal covariance matrix. If it is a scalar it is repeated along the diagonal as many times as necessary, where the number of repeats is determined by the length of either n and/or mu (whichever is a vector).
If all 3 are scalars, then 1d nodes are computed. mu and sig2 are treated as the mean and variance of a 1d normal distribution.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwsimp — Method.
Computes multivariate Simpson quadrature nodes and weights.
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension
- a::Union{Real, Vector{Real}}: Lower endpoint along each dimension
- b::Union{Real, Vector{Real}}: Upper endpoint along each dimension
Returns
- nodes::Array{Float64}: An array of quadrature nodes
- weights::Array{Float64}: An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are Vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwtrap — Method.
Computes multivariate trapezoid quadrature nodes and weights.
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension
- a::Union{Real, Vector{Real}}: Lower endpoint along each dimension
- b::Union{Real, Vector{Real}}: Upper endpoint along each dimension
Returns
- nodes::Array{Float64}: An array of quadrature nodes
- weights::Array{Float64}: An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are Vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.qnwunif — Method.
Computes quadrature nodes and weights for multivariate uniform distribution
Arguments
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension
- a::Union{Real, Vector{Real}}: Lower endpoint along each dimension
- b::Union{Real, Vector{Real}}: Upper endpoint along each dimension
Returns
- nodes::Array{Float64}: An array of quadrature nodes
- weights::Array{Float64}: An array of corresponding quadrature weights
Notes
If any of the parameters to this function are scalars while others are Vectors of length n, the scalar parameter is repeated n times.
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.quadrect — Function.
Integrate the d-dimensional function f on a rectangle with lower and upper bound for dimension i defined by a[i] and b[i], respectively; using n[i] points.
Arguments
- f::Function: The function to integrate over. This should be a function that accepts as its first argument a matrix representing points along each dimension (each dimension is a column). Other arguments that need to be passed to the function are caught by args... and kwargs...
- n::Union{Int, Vector{Int}}: Number of desired nodes along each dimension
- a::Union{Real, Vector{Real}}: Lower endpoint along each dimension
- b::Union{Real, Vector{Real}}: Upper endpoint along each dimension
- kind::AbstractString("lege"): Specifies which type of integration to perform. Valid values are:
  - "lege": Gauss-Legendre
  - "cheb": Gauss-Chebyshev
  - "trap": trapezoid rule
  - "simp": Simpson rule
  - "N": Niederreiter equidistributed sequence
  - "W": Weyl equidistributed sequence
  - "H": Haber equidistributed sequence
  - "R": Monte Carlo
- args...(Void): additional positional arguments to pass to f
- ;kwargs...(Void): additional keyword arguments to pass to f
Returns
- out::Float64: The scalar that approximates the integral of f on the hypercube formed by [a, b]
References
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
QuantEcon.random_discrete_dp — Function.
Generate a DiscreteDP randomly. The reward values are drawn from the normal distribution with mean 0 and standard deviation scale.
Arguments
- num_states::Integer: Number of states.
- num_actions::Integer: Number of actions.
- beta::Union{Float64, Void}(nothing): Discount factor. Randomly chosen from [0, 1) if not specified.
- ;k::Union{Integer, Void}(nothing): Number of possible next states for each state-action pair. Equal to num_states if not specified.
- scale::Real(1): Standard deviation of the normal distribution for the reward values.
Returns
ddp::DiscreteDP: An instance of DiscreteDP.
QuantEcon.random_markov_chain — Method.
Return a randomly sampled MarkovChain instance with n states, where each state has k states with positive transition probability.
Arguments
- n::Integer: Number of states.
- k::Integer: Number of states that can be reached with positive probability from each state.
Returns
mc::MarkovChain: MarkovChain instance.
Examples
julia> using QuantEcon
julia> mc = random_markov_chain(3, 2)
Discrete Markov Chain
stochastic matrix:
3x3 Array{Float64,2}:
0.369124 0.0 0.630876
0.519035 0.480965 0.0
0.0 0.744614 0.255386
QuantEcon.random_markov_chain — Method.
Return a randomly sampled MarkovChain instance with n states.
Arguments
n::Integer: Number of states.
Returns
mc::MarkovChain: MarkovChain instance.
Examples
julia> using QuantEcon
julia> mc = random_markov_chain(3)
Discrete Markov Chain
stochastic matrix:
3x3 Array{Float64,2}:
0.281188 0.61799 0.100822
0.144461 0.848179 0.0073594
0.360115 0.323973 0.315912
QuantEcon.random_stochastic_matrix — Function.
Return a randomly sampled n x n stochastic matrix with k nonzero entries for each row.
Arguments
- n::Integer: Number of states.
- ;k::Union{Integer, Void}(nothing): Number of nonzero entries in each row of the matrix. Set to n if not specified.
Returns
p::Array: Stochastic matrix.
QuantEcon.recurrent_classes — Method.
Find the recurrent classes of the Markov chain mc.
Arguments
mc::MarkovChain: MarkovChain instance.
Returns
::Vector{Vector{Int}}: Vector of vectors that describe the recurrent
classes of mc.
QuantEcon.replicate — Function.
Simulate num_reps observations of x_T and y_T given x_0 ~ N(mu_0, Sigma_0).
Arguments
- lss::LSS: An instance of the Gaussian linear state space model.
- t::Int = 10: The period that we want to replicate values for.
- num_reps::Int = 100: The number of replications we want
Returns
- x::Matrix: An n x num_reps matrix, where the j-th column is the j-th observation of x_T
- y::Matrix: A k x num_reps matrix, where the j-th column is the j-th observation of y_T
QuantEcon.ridder — Method.
Find a root of f on the bracketing interval [x1, x2] via Ridder's algorithm
Arguments
- f::Function: The function you want to bracket
- x1::T: Lower border for search interval
- x2::T: Upper border for search interval
- ;maxiter::Int(500): Maximum number of bisection iterations
- ;xtol::Float64(1e-12): The routine converges when a root is known to lie within xtol of the value returned. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
- ;rtol::Float64(2*eps()): The routine converges when a root is known to lie within rtol times the value returned. Should be ≥ 0
Returns
x::T: The found root
Exceptions
- Throws an ArgumentError if [x1, x2] does not form a bracketing interval
- Throws a ConvergenceError if the maximum number of iterations is exceeded
References
Matches ridder function from scipy/scipy/optimize/Zeros/ridder.c
QuantEcon.robust_rule — Method.
Solves the robust control problem.
The algorithm here tricks the problem into a stacked LQ problem, as described in chapter 2 of Hansen-Sargent's text "Robustness." The optimal control with observed state is

u_t = -F x_t

and the value function is -x'Px
Arguments
- rlq::RBLQ: Instance of RBLQ type
Returns
- F::Matrix{Float64}: The optimal control matrix from above
- P::Matrix{Float64}: The positive semi-definite matrix defining the value function
- K::Matrix{Float64}: the worst-case shock matrix K, where w_{t+1} = K x_t is the worst case shock
QuantEcon.robust_rule_simple — Function.
Solve the robust LQ problem
A simple algorithm for computing the robust policy F and the corresponding value function P, based around straightforward iteration with the robust Bellman operator. This function is easier to understand but one or two orders of magnitude slower than robust_rule. For more information see the docstring of that method.
Arguments
- rlq::RBLQ: Instance of RBLQ type
- P_init::Matrix{Float64}(zeros(rlq.n, rlq.n)): The initial guess for the value function matrix
- ;max_iter::Int(80): Maximum number of iterations that are allowed
- ;tol::Real(1e-8): The tolerance for convergence
Returns
- F::Matrix{Float64}: The optimal control matrix from above
- P::Matrix{Float64}: The positive semi-definite matrix defining the value function
- K::Matrix{Float64}: the worst-case shock matrix K, where w_{t+1} = K x_t is the worst case shock
QuantEcon.rouwenhorst — Function.
Rouwenhorst's method to approximate AR(1) processes.
The process follows

y_t = μ + ρ y_{t-1} + ε_t,

where ε_t ~ N(0, σ^2)
Arguments
- N::Integer: Number of points in Markov process
- ρ::Real: Persistence parameter in AR(1) process
- σ::Real: Standard deviation of random component of AR(1) process
- μ::Real(0.0): Mean of AR(1) process
Returns
mc::MarkovChain{Float64}: Markov chain holding the state values and
transition matrix
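Example
A small sketch (parameter values are illustrative):
using QuantEcon
mc = rouwenhorst(5, 0.9, 0.1)    # 5-state approximation of y' = 0.9 y + ε, ε ~ N(0, 0.1^2)
mc.p                             # 5 x 5 transition matrix
mc.state_values                  # grid of state values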
QuantEcon.simulate! — Method.
Fill X with sample paths of the Markov chain mc as columns. The resulting matrix has the state values of mc as elements.
Arguments
- X::Matrix: Preallocated matrix to be filled with sample paths of the Markov chain mc. The element types in X should be the same as the type of the state values of mc
- mc::MarkovChain: MarkovChain instance.
- ;init=rand(1:n_states(mc)): Can be one of the following:
  - blank: random initial condition for each chain
  - scalar: same initial condition for each chain
  - vector: cycle through the elements, applying each as an initial condition until all columns have an initial condition (allows for more columns than initial conditions)
QuantEcon.simulate — Method.
Simulate one sample path of the Markov chain mc. The resulting vector has the state values of mc as elements.
Arguments
- mc::MarkovChain: MarkovChain instance.
- ts_length::Int: Length of simulation
- ;init::Int=rand(1:n_states(mc)): Initial state
Returns
X::Vector: Vector containing the sample path, with length
ts_length
QuantEcon.simulate_indices! — Method.
Fill X with sample paths of the Markov chain mc as columns. The resulting matrix has the indices of the state values of mc as elements.
Arguments
- X::Matrix{Int}: Preallocated matrix to be filled with indices of the sample paths of the Markov chain mc.
- mc::MarkovChain: MarkovChain instance.
- ;init=rand(1:n_states(mc)): Can be one of the following:
  - blank: random initial condition for each chain
  - scalar: same initial condition for each chain
  - vector: cycle through the elements, applying each as an initial condition until all columns have an initial condition (allows for more columns than initial conditions)
QuantEcon.simulate_indices — Method.
Simulate one sample path of the Markov chain mc. The resulting vector has the indices of the state values of mc as elements.
Arguments
- mc::MarkovChain: MarkovChain instance.
- ts_length::Int: Length of simulation
- ;init::Int=rand(1:n_states(mc)): Initial state
Returns
X::Vector{Int}: Vector containing the sample path, with length
ts_length
QuantEcon.simulation — Method.
Compute a simulated sample path assuming Gaussian shocks.
Arguments
- arma::ARMA: Instance of ARMA type
- ;ts_length::Integer(90): Length of simulation
- ;impulse_length::Integer(30): Horizon for calculating impulse response (see also docstring for impulse_response)
Returns
- X::Vector{Float64}: Simulation of the ARMA model arma
QuantEcon.smooth — Function.
Smooth the data in x using convolution with a window of requested size and type.
Arguments
- x::Array: An array containing the data to smooth
- window_len::Int(7): An odd integer giving the length of the window
- window::AbstractString("hanning"): A string giving the window type. Possible values are flat, hanning, hamming, bartlett, or blackman
Returns
out::Array: The array of smoothed data
QuantEcon.smooth — Method.
Version of smooth where window_len and window are keyword arguments
QuantEcon.solve — Function.
Solve the dynamic programming problem.
Parameters
- ddp::DiscreteDP: Object that contains the model parameters
- method::Type{T<:Algo}(VFI): Type name specifying solution method. Acceptable arguments are VFI for value function iteration, PFI for policy function iteration, or MPFI for modified policy function iteration
- ;max_iter::Int(250): Maximum number of iterations
- ;epsilon::Float64(1e-3): Value for epsilon-optimality. Only used if method is VFI
- ;k::Int(20): Number of iterations for partial policy evaluation in modified policy iteration (irrelevant for other methods).
Returns
ddpr::DPSolveResult{Algo}: Optimization result represented as a
DPSolveResult. See DPSolveResult for details.
QuantEcon.solve_discrete_lyapunov — Function.
Solves the discrete Lyapunov equation.
The problem is given by

AXA' - X + B = 0

X is computed by using a doubling algorithm. In particular, we iterate to convergence on X_j with the following recursions for j = 1, 2, ... starting from X_0 = B, a_0 = A:

a_j = a_{j-1} a_{j-1}
X_j = X_{j-1} + a_{j-1} X_{j-1} a_{j-1}'

Arguments
- A::Matrix{Float64}: An n x n matrix as described above. We assume in order for convergence that the eigenvalues of A have moduli bounded by unity
- B::Matrix{Float64}: An n x n matrix as described above. We assume in order for convergence that the eigenvalues of B have moduli bounded by unity
- max_it::Int(50): Maximum number of iterations
Returns
- gamma1::Matrix{Float64}: Represents the value X
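Example
A minimal sketch (matrices are illustrative):
using QuantEcon
A = [0.5 0.0; 0.0 0.3]
B = eye(2)
X = solve_discrete_lyapunov(A, B)    # satisfies A*X*A' - X + B ≈ 0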
QuantEcon.solve_discrete_riccati — Function.
Solves the discrete-time algebraic Riccati equation
The problem is defined as

X = A'XA - (N + B'XA)'(B'XB + R)^{-1}(N + B'XA) + Q

via a modified structured doubling algorithm. An explanation of the algorithm can be found in the reference below.
Arguments
- A: k x k array.
- B: k x n array
- R: n x n, should be symmetric and positive definite
- Q: k x k, should be symmetric and non-negative definite
- N::Matrix{Float64}(zeros(size(R, 1), size(Q, 1))): n x k array
- tolerance::Float64(1e-10): Tolerance level for convergence
- max_iter::Int(50): The maximum number of iterations allowed
Note that A, B, R, Q can either be real (i.e. k, n = 1) or matrices.
Returns
- X::Matrix{Float64}: The fixed point of the Riccati equation; a k x k array representing the approximate solution
References
Chiang, Chun-Yueh, Hung-Yuan Fan, and Wen-Wei Lin. "STRUCTURED DOUBLING ALGORITHM FOR DISCRETE-TIME ALGEBRAIC RICCATI EQUATIONS WITH SINGULAR CONTROL WEIGHTING MATRICES." Taiwanese Journal of Mathematics 14, no. 3A (2010): pp-935.
QuantEcon.spectral_density — Method.
Compute the spectral density function.
The spectral density is the discrete time Fourier transform of the autocovariance function. In particular,

f(w) = sum_k gamma(k) exp(-ikw)

where gamma is the autocovariance function and the sum is over the set of all integers.
Arguments
- arma::ARMA: Instance of ARMA type
- ;two_pi::Bool(true): Compute the spectral density function over [0, pi] if false and [0, 2 pi] otherwise.
- ;res(1200): If res is a scalar then the spectral density is computed at res frequencies evenly spaced around the unit circle, but if res is an array then the function computes the response at the frequencies given by the array
Returns
- w::Vector{Float64}: The normalized frequencies at which h was computed, in radians/sample
- spect::Vector{Float64}: The frequency response
QuantEcon.stationary_distributions — Function.
Compute stationary distributions of the Markov chain mc, one for each recurrent class.
Arguments
- mc::MarkovChain{T}: MarkovChain instance.
Returns
- stationary_dists::Vector{Vector{T1}}: Vector of vectors that represent stationary distributions, where the element type T1 is Rational if T is Int (and equal to T otherwise).
QuantEcon.stationary_distributions — Method.
Compute the moments of the stationary distributions of x_t and y_t if possible. Computation is by iteration, starting from the initial conditions lss.mu_0 and lss.Sigma_0
Arguments
- lss::LSS: An instance of the Gaussian linear state space model
- ;max_iter::Int = 200: The maximum number of iterations allowed
- ;tol::Float64 = 1e-5: The tolerance level one wishes to achieve
Returns
- mu_x::Vector: Represents the stationary mean of x_t
- mu_y::Vector: Represents the stationary mean of y_t
- Sigma_x::Matrix: Represents the var-cov matrix
- Sigma_y::Matrix: Represents the var-cov matrix
QuantEcon.stationary_values! — Method.
Computes value and policy functions in infinite horizon model
Arguments
- lq::LQ: instance of LQ type
Returns
- P::ScalarOrArray: n x n matrix in value function representation V(x) = x'Px + d
- d::Real: Constant in value function representation
- F::ScalarOrArray: Policy rule that specifies optimal control in each period
Notes
This function updates the P, d, and F fields on the lq instance in addition to returning them
QuantEcon.stationary_values — Method.
Non-mutating routine for solving for P, d, and F in infinite horizon model
See docstring for stationary_values! for more explanation
QuantEcon.tauchen — Function.
Tauchen's (1986) method for approximating an AR(1) process with a finite Markov chain
The process follows

y_t = μ + ρ y_{t-1} + ε_t,

where ε_t ~ N(0, σ^2)
Arguments
- N::Integer: Number of points in Markov process
- ρ::Real: Persistence parameter in AR(1) process
- σ::Real: Standard deviation of random component of AR(1) process
- μ::Real(0.0): Mean of AR(1) process
- n_std::Integer(3): The number of standard deviations to each side the process should span
Returns
mc::MarkovChain{Float64}: Markov chain holding the state values and
transition matrix
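Example
A small sketch (parameter values are illustrative):
using QuantEcon
mc = tauchen(5, 0.9, 0.1)    # 5 states, ρ = 0.9, σ = 0.1, default μ = 0 and n_std = 3
mc.state_values              # evenly spaced grid spanning ±3 unconditional standard deviations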
QuantEcon.update! — Method.
Updates cur_x_hat and cur_sigma given array y of length k. The full update, from one period to the next
Arguments
- k::Kalman: An instance of the Kalman filter
- y: An array representing the current measurement
QuantEcon.update_values! — Method.
Update P and d from the value function representation in finite horizon case
Arguments
- lq::LQ: instance of LQ type
Returns
- P::ScalarOrArray: n x n matrix in value function representation V(x) = x'Px + d
- d::Real: Constant in value function representation
Notes
This function updates the P and d fields on the lq instance in addition to returning them
QuantEcon.var_quadratic_sum — Method.
Computes the expected discounted quadratic sum

q(x_0) = E sum_{t=0}^{infty} beta^t x_t' H x_t

Here {x_t} is the VAR process x_{t+1} = A x_t + C w_t with {w_t} standard normal and x_0 the initial condition.
Arguments
- A::Union{Float64, Matrix{Float64}}: The n x n matrix described above (scalar if n = 1)
- C::Union{Float64, Matrix{Float64}}: The n x n matrix described above (scalar if n = 1)
- H::Union{Float64, Matrix{Float64}}: The n x n matrix described above (scalar if n = 1)
- beta::Float64: Discount factor in (0, 1)
- x_0::Union{Float64, Vector{Float64}}: The initial condition. A conformable array (of length n) or a scalar if n = 1
Returns
q0::Float64: Represents the value q(x_0)
Notes
The formula for computing q(x_0) is q(x_0) = x_0' Q x_0 + v, where
Q is the solution to Q = H + beta A' Q A, and
v = trace(C' Q C) beta / (1 - beta)
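Example
A scalar sketch that can be checked against the closed form in the Notes above (values illustrative):
using QuantEcon
q0 = var_quadratic_sum(0.5, 0.2, 1.0, 0.95, 1.0)
# Here Q = 1/(1 - 0.95*0.5^2) and q0 = 1.0^2*Q + 0.95*0.2^2*Q/(1 - 0.95)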
Internal
Base.e — Method.
Evaluate the empirical cdf at one or more points
Arguments
x::Union{Real, Array}: The point(s) at which to evaluate the ECDF
QuantEcon.DPSolveResult — Type.
DPSolveResult is an object for retaining results and associated metadata after solving the model
Parameters
ddp::DiscreteDP: DiscreteDP object
Returns
ddpr::DPSolveResult: DiscreteDP Results object
Base.:* — Method.
Define matrix multiplication between a 3-dimensional matrix and a vector
Matrix multiplication over the last dimension of A
QuantEcon._compute_sequence — Method.
Private method implementing compute_sequence when state is a scalar

QuantEcon._compute_sequence — Method.
Private method implementing compute_sequence when state is a vector
QuantEcon._generate_a_indptr! — Method.
Generate a_indptr; stored in out. s_indices is assumed to be in sorted order.
Parameters
num_states : Int
s_indices : Vector{Int}
out : Vector{Int} with length = num_states+1
QuantEcon._has_sorted_sa_indices — Method.
Check whether s_indices and a_indices are sorted in lexicographic order.
Parameters
s_indices, a_indices : Vectors
Returns
bool: Whether s_indices and a_indices are sorted.
QuantEcon._random_stochastic_matrix — Method.
Generate a "non-square column stochastic matrix" of shape (n, m), which contains as columns m probability vectors of length n with k nonzero entries.
Arguments
- n::Integer: Number of states.
- m::Integer: Number of probability vectors.
- ;k::Union{Integer, Void}(nothing): Number of nonzero entries in each column of the matrix. Set to n if not specified.
Returns
p::Array: Array of shape (n, m) containing m probability vectors of length
n as columns.
QuantEcon._solve! — Method.
Modified Policy Function Iteration

QuantEcon._solve! — Method.
Policy Function Iteration
NOTE: The epsilon is ignored in this method. It is only here so dispatch can go from solve(::DiscreteDP, ::Type{Algo}) to any of the algorithms. See solve for further details
QuantEcon._solve! — Method.
Implements Value Iteration. NOTE: See solve for further details
QuantEcon.fix — Function.
fix(x)
Round x towards zero. For arrays there is a mutating version fix!
QuantEcon.getZ — Method.
Simple method to return an element Z in the Riccati equation solver whose type is Matrix (to be accepted by the cond() function)
Arguments
- BB::Matrix: result of B' * B
- gamma::Float64: parameter in the Riccati equation solver
- R::Matrix
Returns
::Matrix: element Z in the Riccati equation solver
QuantEcon.getZ — Method.
Simple method to return an element Z in the Riccati equation solver whose type is Float64 (to be accepted by the cond() function)
Arguments
- BB::Float64: result of B' * B
- gamma::Float64: parameter in the Riccati equation solver
- R::Float64
Returns
::Float64: element Z in the Riccati equation solver
QuantEcon.getZ — Method.
Simple method to return an element Z in the Riccati equation solver whose type is Float64 (to be accepted by the cond() function)
Arguments
- BB::Union{Vector, Matrix}: result of B' * B
- gamma::Float64: parameter in the Riccati equation solver
- R::Float64
Returns
::Float64: element Z in the Riccati equation solver
QuantEcon.gth_solve! — Method.
Same as gth_solve, but overwrite the input A, instead of creating a copy.
QuantEcon.random_probvec — Method.
Return m randomly sampled probability vectors of size k.
Arguments
- k::Integer: Size of each probability vector.
- m::Integer: Number of probability vectors.
Returns
- a::Array: Array of shape (k, m) containing probability vectors as columns.
QuantEcon.s_wise_max! — Method.
Populate out with max_a vals(s, a), where vals is represented as an AbstractMatrix of size (num_states, num_actions).
Also fills out_argmax with the column number associated with the indmax in each row
QuantEcon.s_wise_max! — Method.
Populate out with max_a vals(s, a), where vals is represented as an AbstractMatrix of size (num_states, num_actions).
QuantEcon.s_wise_max! — Method.
Populate out with max_a vals(s, a), where vals is represented as a Vector of size (num_sa_pairs,).
Also fills out_argmax with the Cartesian index associated with the indmax in each row
QuantEcon.s_wise_max! — Method.
Populate out with max_a vals(s, a), where vals is represented as a Vector of size (num_sa_pairs,).
QuantEcon.s_wise_max — Method.
Return the Vector max_a vals(s, a), where vals is represented as an AbstractMatrix of size (num_states, num_actions).
QuantEcon.todense — Method.
If A is already dense, return A as is

QuantEcon.todense — Method.
Custom version of full, which allows conversion to type T