QuantEcon

API documentation

Exported

QuantEcon.ARMA (Type)

Represents a scalar ARMA(p, q) process

If phi and theta are scalars, then the model is understood to be

X_t = phi X_{t-1} + epsilon_t + theta epsilon_{t-1}

where epsilon_t is a white noise process with standard deviation sigma.

If phi and theta are arrays or sequences, then the interpretation is the ARMA(p, q) model

X_t = phi_1 X_{t-1} + ... + phi_p X_{t-p} +
epsilon_t + theta_1 epsilon_{t-1} + ...  +
theta_q epsilon_{t-q}

where

  • phi = (phi_1, phi_2,..., phi_p)

  • theta = (theta_1, theta_2,..., theta_q)

  • sigma is a scalar, the standard deviation of the white noise

Fields

  • phi::Vector : AR parameters phi_1, ..., phi_p

  • theta::Vector : MA parameters theta_1, ..., theta_q

  • p::Integer : Number of AR coefficients

  • q::Integer : Number of MA coefficients

  • sigma::Real : Standard deviation of white noise

  • ma_poly::Vector : MA polynomial - filtering representation

  • ar_poly::Vector : AR polynomial - filtering representation

Examples

using QuantEcon
phi = 0.5
theta = [0.0, -0.8]
sigma = 1.0
lp = ARMA(phi, theta, sigma)
require(joinpath(dirname(@__FILE__),"..", "examples", "arma_plots.jl"))
quad_plot(lp)
source

DiscreteDP type for specifying parameters for discrete dynamic programming model

Parameters

  • R::Array{T,NR} : Reward Array

  • Q::Array{T,NQ} : Transition Probability Array

  • beta::Float64 : Discount Factor

  • a_indices::Nullable{Vector{Tind}}: Action Indices. Null unless using SA formulation

  • a_indptr::Nullable{Vector{Tind}}: Action Index Pointers. Null unless using SA formulation

Returns

  • ddp::DiscreteDP : DiscreteDP object

source

DiscreteDP type for specifying parameters for discrete dynamic programming model State-Action Pair Formulation

Parameters

  • R::Array{T,NR} : Reward Array

  • Q::Array{T,NQ} : Transition Probability Array

  • beta::Float64 : Discount Factor

  • s_indices::Nullable{Vector{Tind}}: State Indices. Null unless using SA formulation

  • a_indices::Nullable{Vector{Tind}}: Action Indices. Null unless using SA formulation

  • a_indptr::Nullable{Vector{Tind}}: Action Index Pointers. Null unless using SA formulation

Returns

  • ddp::DiscreteDP : Constructor for DiscreteDP object

source

DiscreteDP type for specifying parameters for discrete dynamic programming model Dense Matrix Formulation

Parameters

  • R::Array{T,NR} : Reward Array

  • Q::Array{T,NQ} : Transition Probability Array

  • beta::Float64 : Discount Factor

Returns

  • ddp::DiscreteDP : Constructor for DiscreteDP object

source
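
For instance, a minimal sketch of the dense formulation (the reward values are illustrative, and -Inf marks an infeasible state-action pair):

using QuantEcon
R = [5.0 10.0; -1.0 -Inf]   # R[s, a]: reward for action a in state s
Q = zeros(2, 2, 2)          # Q[s, a, s']: transition probabilities
Q[1, 1, :] = [0.5, 0.5]
Q[1, 2, :] = [0.0, 1.0]
Q[2, 1, :] = [0.0, 1.0]
Q[2, 2, :] = [0.5, 0.5]
beta = 0.95
ddp = DiscreteDP(R, Q, beta)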

Generates an array of draws from a discrete random variable with vector of probabilities given by q.

Fields

  • q::AbstractVector: A vector of non-negative probabilities that sum to 1

  • Q::AbstractVector: The cumulative sum of q

source
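
A short usage sketch (the probabilities are illustrative):

using QuantEcon
d = DiscreteRV([0.2, 0.3, 0.5])  # q must be non-negative and sum to 1
draw(d)                           # a single draw from {1, 2, 3}
draw(d, 100)                      # a vector of 100 draws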
QuantEcon.ECDF (Type)

One-dimensional empirical distribution function given a vector of observations.

Fields

  • observations::Vector: The vector of observations

source
QuantEcon.LAE (Type)

A look ahead estimator associated with a given stochastic kernel p and a vector of observations X.

Fields

  • p::Function: The stochastic kernel. Signature is p(x, y) and it should be

vectorized in both inputs

  • X::Matrix: A vector containing observations. Note that this can be passed as

any kind of AbstractArray and will be coerced into an n x 1 vector.

source
QuantEcon.LQ (Type)

Main constructor for LQ type

Specifies default arguments for all fields not part of the payoff function or transition equation.

Arguments

  • Q::ScalarOrArray : k x k payoff coefficient for control variable u. Must be

symmetric and nonnegative definite

  • R::ScalarOrArray : n x n payoff coefficient matrix for state variable x.

Must be symmetric and nonnegative definite

  • A::ScalarOrArray : n x n coefficient on state in state transition

  • B::ScalarOrArray : n x k coefficient on control in state transition

  • ;C::ScalarOrArray(zeros(size(R, 1))) : n x j coefficient on random shock in

state transition

  • ;N::ScalarOrArray(zeros(size(B,1), size(A, 2))) : k x n cross product in

payoff equation

  • ;bet::Real(1.0) : Discount factor in [0, 1]

  • capT::Union{Int, Void}(Void) : Terminal period in finite horizon

problem

  • rf::ScalarOrArray(fill(NaN, size(R)...)) : n x n terminal payoff in finite

horizon problem. Must be symmetric and nonnegative definite.

source
QuantEcon.LQ (Type)

Linear quadratic optimal control of either infinite or finite horizon

The infinite horizon problem can be written

min E sum_{t=0}^{infty} beta^t r(x_t, u_t)

with

r(x_t, u_t) := x_t' R x_t + u_t' Q u_t + 2 u_t' N x_t

The finite horizon form is

min E sum_{t=0}^{T-1} beta^t r(x_t, u_t) + beta^T x_T' R_f x_T

Both are minimized subject to the law of motion

x_{t+1} = A x_t + B u_t + C w_{t+1}

Here x is n x 1, u is k x 1, w is j x 1 and the matrices are conformable for these dimensions. The sequence {w_t} is assumed to be white noise, with zero mean and E w_t w_t' = I, the j x j identity.

For this model, the time t value (i.e., cost-to-go) function V_t takes the form

x_t' P_t x_t + d_t

and the optimal policy is of the form u_t = -F_t x_t. In the infinite horizon case, V, P, d and F are all stationary.

Fields

  • Q::ScalarOrArray : k x k payoff coefficient for control variable u. Must be

symmetric and nonnegative definite

  • R::ScalarOrArray : n x n payoff coefficient matrix for state variable x.

Must be symmetric and nonnegative definite

  • A::ScalarOrArray : n x n coefficient on state in state transition

  • B::ScalarOrArray : n x k coefficient on control in state transition

  • C::ScalarOrArray : n x j coefficient on random shock in state transition

  • N::ScalarOrArray : k x n cross product in payoff equation

  • bet::Real : Discount factor in [0, 1]

  • capT::Union{Int, Void} : Terminal period in finite horizon problem

  • rf::ScalarOrArray : n x n terminal payoff in finite horizon problem. Must be

symmetric and nonnegative definite

  • P::ScalarOrArray : n x n matrix in value function representation

V(x) = x'Px + d

  • d::Real : Constant in value function representation

  • F::ScalarOrArray : Policy rule that specifies optimal control in each period

source
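
As an illustration, a scalar infinite-horizon problem might be set up and solved as follows; the parameter values are made up, the keyword constructor is the one documented above, and stationary_values is documented later in this section:

using QuantEcon
q, r = 1.0, 1.0               # payoff coefficients for u and x
a, b = 0.95, -1.0             # law of motion x_{t+1} = a x_t + b u_t
lq = LQ(q, r, a, b; bet=0.95)
P, d, F = stationary_values(lq)  # stationary P, d and policy F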
QuantEcon.LSS (Type)

A type that describes the Gaussian Linear State Space Model of the form:

x_{t+1} = A x_t + C w_{t+1}

    y_t = G x_t

where {w_t} is an i.i.d. sequence of standard normal shocks of dimension m. The initial conditions are mu_0 and Sigma_0 for x_0 ~ N(mu_0, Sigma_0). When Sigma_0=0, the draw of x_0 is exactly mu_0.

Fields

  • A::Matrix Part of the state transition equation. It should be n x n

  • C::Matrix Part of the state transition equation. It should be n x m

  • G::Matrix Part of the observation equation. It should be k x n

  • k::Int Dimension of the observation y_t

  • n::Int Dimension of the state x_t

  • m::Int Dimension of the shock w_t

  • mu_0::Vector This is the mean of initial draw and is of length n

  • Sigma_0::Matrix This is the variance of the initial draw and is n x n and also should be positive definite and symmetric

source
QuantEcon.MPFI (Type)

This refers to the Modified Policy Iteration solution algorithm.

References

http://quant-econ.net/jl/ddp.html

source

Finite-state discrete-time Markov chain.

Methods are available that provide useful information such as the stationary distributions, and communication and recurrent classes, and allow simulation of state transitions.

Fields

  • p::AbstractMatrix : The transition matrix. Must be square, all elements

must be nonnegative, and all rows must sum to unity.

  • state_values::AbstractVector : Vector containing the values associated with

the states.

source
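
For example:

using QuantEcon
P = [0.9 0.1; 0.2 0.8]
mc = MarkovChain(P, ["low", "high"])  # state values are optional
simulate(mc, 5)                        # sample path of state values
stationary_distributions(mc)           # one distribution here: ≈ [2/3, 1/3]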

Returns the controlled Markov chain for a given policy sigma.

Parameters

  • ddp::DiscreteDP : Object that contains the model parameters

  • ddpr::DPSolveResult : Object that contains result variables

Returns

  • mc::MarkovChain : Controlled Markov chain.

source
QuantEcon.PFI (Type)

This refers to the Policy Iteration solution algorithm.

References

http://quant-econ.net/jl/ddp.html

source
QuantEcon.RBLQ (Type)

Represents infinite horizon robust LQ control problems of the form

min_{u_t}  sum_t beta^t {x_t' R x_t + u_t' Q u_t }

subject to

x_{t+1} = A x_t + B u_t + C w_{t+1}

and with model misspecification parameter theta.

Fields

  • Q::Matrix{Float64} : The cost(payoff) matrix for the controls. See above

for more. Q should be k x k and symmetric and positive definite

  • R::Matrix{Float64} : The cost(payoff) matrix for the state. See above for

more. R should be n x n and symmetric and non-negative definite

  • A::Matrix{Float64} : The matrix that corresponds with the state in the

state space system. A should be n x n

  • B::Matrix{Float64} : The matrix that corresponds with the control in the

state space system. B should be n x k

  • C::Matrix{Float64} : The matrix that corresponds with the random process in

the state space system. C should be n x j

  • beta::Real : The discount factor in the robust control problem

  • theta::Real : The robustness factor in the robust control problem

  • k, n, j::Int : Dimensions of input matrices

source
QuantEcon.VFI (Type)

This refers to the Value Iteration solution algorithm.

References

http://quant-econ.net/jl/ddp.html

source
LightGraphs.period (Method)

Return the period of the Markov chain mc.

Arguments

  • mc::MarkovChain : MarkovChain instance.

Returns

  • ::Int : Period of mc.

source
QuantEcon.F_to_K (Method)

Compute agent 2's best cost-minimizing response K, given F.

Arguments

  • rlq::RBLQ: Instance of RBLQ type

  • F::Matrix{Float64}: A k x n array representing agent 1's policy

Returns

  • K::Matrix{Float64} : Agent's best cost minimizing response corresponding to

F

  • P::Matrix{Float64} : The value function corresponding to F

source
QuantEcon.K_to_F (Method)

Compute agent 1's best cost-minimizing response F, given K.

Arguments

  • rlq::RBLQ: Instance of RBLQ type

  • K::Matrix{Float64}: A k x n array representing the worst case matrix

Returns

  • F::Matrix{Float64} : Agent's best cost minimizing response corresponding to

K

  • P::Matrix{Float64} : The value function corresponding to K

source
QuantEcon.RQ_sigma (Method)

Method of RQ_sigma that extracts sigma from a DPSolveResult

See other docstring for details

source
QuantEcon.RQ_sigma (Method)

Given a policy sigma, return the reward vector R_sigma and the transition probability matrix Q_sigma.

Parameters

  • ddp::DiscreteDP : Object that contains the model parameters

  • sigma::Vector{Int}: policy rule vector

Returns

  • R_sigma::Array{Float64}: Reward vector for sigma, of length n.

  • Q_sigma::Array{Float64}: Transition probability matrix for sigma, of shape (n, n).

source

Compute periodogram from data x, using prewhitening, smoothing and recoloring. The data is fitted to an AR(1) model for prewhitening, and the residuals are used to compute a first-pass periodogram with smoothing. The fitted coefficients are then used for recoloring.

Arguments

  • x::Array: An array containing the data to smooth

  • window_len::Int(7): An odd integer giving the length of the window

  • window::AbstractString("hanning"): A string giving the window type. Possible values

are flat, hanning, hamming, bartlett, or blackman

Returns

  • w::Array{Float64}: Fourier frequencies at which the periodogram is evaluated

  • I_w::Array{Float64}: The periodogram at frequencies w

source

Compute the autocovariance function from the ARMA parameters at lags 0, 1, ..., num_autocov - 1, using the spectral density and the inverse Fourier transform.

Arguments

  • arma::ARMA: Instance of ARMA type

  • ;num_autocov::Integer(16) : The number of autocovariances to calculate

source

The B operator, mapping P into

B(P) := R - beta^2 A'PB(Q + beta B'PB)^{-1}B'PA + beta A'PA

and also returning

F := (Q + beta B'PB)^{-1} beta B'PA

Arguments

  • rlq::RBLQ: Instance of RBLQ type

  • P::Matrix{Float64} : size is n x n

Returns

  • F::Matrix{Float64} : The F matrix as defined above

  • new_p::Matrix{Float64} : The matrix P after applying the B operator

source

The Bellman operator, which computes and returns the updated value function Tv for a value function v.

Parameters

  • ddp::DiscreteDP : Object that contains the model parameters

  • v::Vector{T<:AbstractFloat}: The current guess of the value function

  • Tv::Vector{T<:AbstractFloat}: A buffer array to hold the updated value function. Initial value not used and will be overwritten

  • sigma::Vector: A buffer array to hold the policy function. Initial values not used and will be overwritten

Returns

  • Tv::Vector : Updated value function vector

  • sigma::Vector : Updated policy function vector

source

The Bellman operator, which computes and returns the updated value function Tv for a given value function v.

This function will fill the input v with Tv and the input sigma with the corresponding policy rule

Parameters

  • ddp::DiscreteDP: The ddp model

  • v::Vector{T<:AbstractFloat}: The current guess of the value function. This array will be overwritten

  • sigma::Vector: A buffer array to hold the policy function. Initial values not used and will be overwritten

Returns

  • Tv::Vector: Updated value function vector

  • sigma::Vector{T<:Integer}: Policy rule

source

Apply the Bellman operator using v=ddpr.v, Tv=ddpr.Tv, and sigma=ddpr.sigma

Notes

Updates ddpr.Tv and ddpr.sigma inplace

source

The Bellman operator, which computes and returns the updated value function Tv for a given value function v.

Parameters

  • ddp::DiscreteDP: The ddp model

  • v::Vector: The current guess of the value function

Returns

  • Tv::Vector : Updated value function vector

source
QuantEcon.bisect (Method)

Find the root of f on the bracketing interval [x1, x2] via bisection

Arguments

  • f::Function: The function you want to bracket

  • x1::T: Lower border for search interval

  • x2::T: Upper border for search interval

  • ;maxiter::Int(500): Maximum number of bisection iterations

  • ;xtol::Float64(1e-12): The routine converges when a root is known to lie

within xtol of the returned value. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.

  • ;rtol::Float64(2*eps()): The routine converges when a root is known to lie

within rtol times the returned value. Should be ≥ 0

Returns

  • x::T: The found root

Exceptions

  • Throws an ArgumentError if [x1, x2] does not form a bracketing interval

  • Throws a ConvergenceError if the maximum number of iterations is exceeded

References

Matches bisect function from scipy/scipy/optimize/Zeros/bisect.c

source
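
For example:

using QuantEcon
f(x) = x^2 - 2
bisect(f, 0.0, 2.0)   # ≈ 1.41421..., the square root of 2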
QuantEcon.brent (Method)

Find the root of f on the bracketing interval [x1, x2] via Brent's algorithm

Arguments

  • f::Function: The function you want to bracket

  • x1::T: Lower border for search interval

  • x2::T: Upper border for search interval

  • ;maxiter::Int(500): Maximum number of bisection iterations

  • ;xtol::Float64(1e-12): The routine converges when a root is known to lie

within xtol of the returned value. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.

  • ;rtol::Float64(2*eps()): The routine converges when a root is known to lie

within rtol times the returned value. Should be ≥ 0

Returns

  • x::T: The found root

Exceptions

  • Throws an ArgumentError if [x1, x2] does not form a bracketing interval

  • Throws a ConvergenceError if the maximum number of iterations is exceeded

References

Matches brentq function from scipy/scipy/optimize/Zeros/brentq.c

source
QuantEcon.brenth (Method)

Find a root of f on the bracketing interval [x1, x2] via a modified Brent algorithm

This routine uses a hyperbolic extrapolation formula instead of the standard inverse quadratic formula. Otherwise it is the original Brent's algorithm, as implemented in the brent function.

Arguments

  • f::Function: The function you want to bracket

  • x1::T: Lower border for search interval

  • x2::T: Upper border for search interval

  • ;maxiter::Int(500): Maximum number of bisection iterations

  • ;xtol::Float64(1e-12): The routine converges when a root is known to lie

within xtol of the returned value. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.

  • ;rtol::Float64(2*eps()): The routine converges when a root is known to lie

within rtol times the returned value. Should be ≥ 0

Returns

  • x::T: The found root

Exceptions

  • Throws an ArgumentError if [x1, x2] does not form a bracketing interval

  • Throws a ConvergenceError if the maximum number of iterations is exceeded

References

Matches brenth function from scipy/scipy/optimize/Zeros/brenth.c

source
QuantEcon.ckron (Function)

ckron(arrays::AbstractArray...)

Repeatedly apply Kronecker products to the arrays. Equivalent to reduce(kron, arrays)

source

Find the communication classes of the Markov chain mc.

Arguments

  • mc::MarkovChain : MarkovChain instance.

Returns

  • ::Vector{Vector{Int}} : Vector of vectors that describe the communication

classes of mc.

source

Given K and F, compute the value of deterministic entropy, which is sum_t beta^t x_t' K'K x_t with x_{t+1} = (A - BF + CK) x_t.

Arguments

  • rlq::RBLQ: Instance of RBLQ type

  • F::Matrix{Float64} The policy function, a k x n array

  • K::Matrix{Float64} The worst case matrix, a j x n array

  • x0::Vector{Float64} : The initial condition for state

Returns

  • e::Float64 The deterministic entropy

source

Repeatedly apply a function to search for a fixed point

Approximates T^∞ v, where T is an operator (function) and v is an initial guess for the fixed point. Will terminate either when ||T^{k+1}(v) - T^k(v)|| < err_tol or max_iter iterations have been exceeded.

Provided that T is a contraction mapping or similar, the return value will be an approximation to the fixed point of T.

Arguments

  • T: A function representing the operator T

  • v::TV: The initial condition. An object of type TV

  • ;err_tol(1e-3): Stopping tolerance for iterations

  • ;max_iter(50): Maximum number of iterations

  • ;verbose(2): Level of feedback (0 for no output, 1 for warnings only, 2 for warning and convergence messages during iteration)

  • ;print_skip(10) : if verbose == 2, how many iterations to apply between print messages

Returns


  • ::TV: The fixed point of the operator T. Has type TV

Example

using QuantEcon
T(x, μ) = 4.0 * μ * x * (1.0 - x)
x_star = compute_fixed_point(x->T(x, 0.3), 0.4)  # (4μ - 1)/(4μ)
source

Compute the v-greedy policy

Parameters

  • ddp::DiscreteDP : Object that contains the model parameters

  • ddpr::DPSolveResult : Object that contains result variables

Returns

  • sigma::Vector{Int} : Array containing v-greedy policy rule

Notes

modifies ddpr.sigma and ddpr.Tv in place

source

Compute the v-greedy policy.

Arguments

  • v::Vector Value function vector of length n

  • ddp::DiscreteDP Object that contains the model parameters

Returns

  • sigma::Vector{Int} : v-greedy policy vector, of length n

source

Compute and return the optimal state and control sequences, assuming innovations are i.i.d. N(0, 1)

Arguments

  • lq::LQ : instance of LQ type

  • x0::ScalarOrArray: initial state

  • ts_length::Integer(100) : maximum number of periods for which to return

process. If lq instance is finite horizon type, the sequences are returned only for min(ts_length, lq.capT)

Returns

  • x_path::Matrix{Float64} : An n x T+1 matrix, where the t-th column

represents x_t

  • u_path::Matrix{Float64} : A k x T matrix, where the t-th column represents

u_t

  • w_path::Matrix{Float64} : An n x T+1 matrix, where the t-th column represents

lq.C*N(0,1)

source
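
A scalar sketch (the parameter values are illustrative):

using QuantEcon
lq = LQ(1.0, 1.0, 0.95, -1.0; bet=0.95)
x_path, u_path, w_path = compute_sequence(lq, 1.0, 50)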

The D operator, mapping P into

D(P) := P + PC(theta I - C'PC)^{-1} C'P.

Arguments

  • rlq::RBLQ: Instance of RBLQ type

  • P::Matrix{Float64} : size is n x n

Returns

  • dP::Matrix{Float64} : The matrix P after applying the D operator

source

Given a function f defined on the interval [x1, x2], subdivide the interval into n equally spaced segments and search for zero crossings of the function. The returned vectors x1b and x2b are filled sequentially with the lower and upper borders of any bracketing pairs that are found.

Arguments

  • f::Function: The function you want to bracket

  • x1::T: Lower border for search interval

  • x2::T: Upper border for search interval

  • n::Int(50): The number of sub-intervals to divide [x1, x2] into

Returns

  • x1b::Vector{T}: Vector of lower borders of bracketing intervals

  • x2b::Vector{T}: Vector of upper borders of bracketing intervals

References

This is zbrak from Numerical Recipes in C++

source
QuantEcon.do_quad (Method)

Approximate the integral of f, given quadrature nodes and weights

Arguments

  • f::Function: A callable function that is to be approximated over the domain

spanned by nodes.

  • nodes::Array: Quadrature nodes

  • weights::Array: Quadrature weights

  • args...(Void): additional positional arguments to pass to f

  • ;kwargs...(Void): additional keyword arguments to pass to f

Returns

  • out::Float64 : The scalar that approximates integral of f on the hypercube

formed by [a, b]

source
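
For example, integrating x^2 over [-1, 1] with Gauss-Legendre nodes (qnwlege is documented below); the true value is 2/3:

using QuantEcon
nodes, weights = qnwlege(11, -1.0, 1.0)
do_quad(x -> x.^2, nodes, weights)   # ≈ 0.6667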
QuantEcon.draw (Method)

Make multiple draws from the discrete distribution represented by a DiscreteRV instance

Arguments

  • d::DiscreteRV: The DiscreteRV type representing the distribution

  • k::Int: Number of draws to make

Returns

  • out::Vector{Int}: k draws from d

source
QuantEcon.draw (Method)

Make a single draw from the discrete distribution

Arguments

  • d::DiscreteRV: The DiscreteRV type representing the distribution

Returns

  • out::Int: One draw from the discrete distribution

source
QuantEcon.ecdf (Function)

Evaluate the empirical cdf at one or more points

Arguments

  • e::ECDF: The ECDF instance

  • x::Union{Real, Array}: The point(s) at which to evaluate the ECDF

source
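
A short sketch:

using QuantEcon
e = ECDF(randn(1000))
ecdf(e, 0.0)               # ≈ 0.5 for a standard normal sample
ecdf(e, [-1.0, 0.0, 1.0])  # evaluate at several points at once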

Given a fixed policy F, with the interpretation u = -F x, this function computes the matrix P_F and constant d_F associated with discounted cost J_F(x) = x' P_F x + d_F.

Arguments

  • rlq::RBLQ: Instance of RBLQ type

  • F::Matrix{Float64} : The policy function, a k x n array

Returns

  • P_F::Matrix{Float64} : Matrix for discounted cost

  • d_F::Float64 : Constant for discounted cost

  • K_F::Matrix{Float64} : Worst case policy

  • O_F::Matrix{Float64} : Matrix for discounted entropy

  • o_F::Float64 : Constant for discounted entropy

source

Compute the value of a policy.

Parameters

  • ddp::DiscreteDP : Object that contains the model parameters

  • sigma::Vector{T<:Integer} : Policy rule vector

Returns

  • v_sigma::Array{Float64} : Value vector of sigma, of length n.

source

Method of evaluate_policy that extracts sigma from a DPSolveResult

See other docstring for details

source

Given a function f and an initial guessed range x1 to x2, the routine expands the range geometrically until a root is bracketed by the returned values x1 and x2 (in which case zbrac returns true) or until the range becomes unacceptably large (in which case a ConvergenceError is thrown).

Arguments

  • f::Function: The function you want to bracket

  • x1::T: Initial guess for lower border of bracket

  • x2::T: Initial guess for upper border of bracket

  • ;ntry::Int(50): The maximum number of expansion iterations

  • ;fac::Float64(1.6): Expansion factor (higher ⟶ larger interval size jumps)

Returns

  • x1::T: The lower end of an actual bracketing interval

  • x2::T: The upper end of an actual bracketing interval

References

This method is zbrac from Numerical Recipes in C++

Exceptions

  • Throws a ConvergenceError if the maximum number of iterations is exceeded

source

Updates the moments of the time t filtering distribution to the moments of the predictive distribution, which becomes the time t+1 prior

Arguments

  • k::Kalman An instance of the Kalman filter

source
QuantEcon.gridmake (Function)

gridmake(arrays::AbstractVector...)

Expand one or more vectors into a matrix where rows span the Cartesian product of combinations of the input vectors. Each input array will correspond to one column of the output matrix. The first array varies the fastest (see example)

Example

julia> x = [1, 2, 3]; y = [10, 20]; z = [100, 200];

julia> gridmake(x, y, z)
12x3 Array{Int64,2}:
 1  10  100
 2  10  100
 3  10  100
 1  20  100
 2  20  100
 3  20  100
 1  10  200
 2  10  200
 3  10  200
 1  20  200
 2  20  200
 3  20  200
source

gridmake!(out::AbstractMatrix, arrays::AbstractVector...)

Like gridmake, but fills a pre-allocated array. out must have size (prod(map(length, arrays)), length(arrays))

source

This routine computes the stationary distribution of an irreducible Markov transition matrix (stochastic matrix) or transition rate matrix (generator matrix) A.

More generally, given a Metzler matrix (square matrix whose off-diagonal entries are all nonnegative) A, this routine solves for a nonzero solution x to x (A - D) = 0, where D is the diagonal matrix for which the rows of A - D sum to zero (i.e., D_{ii} = sum_j A_{ij} for all i). One (and only one, up to normalization) nonzero solution exists corresponding to each recurrent class of A, and in particular, if A is irreducible, there is a unique solution; when there is more than one solution, the routine returns the solution that contains in its support the first index i such that no path connects i to any index larger than i. The solution is normalized so that its 1-norm equals one.

This routine implements the Grassmann-Taksar-Heyman (GTH) algorithm (Grassmann, Taksar, and Heyman 1985), a numerically stable variant of Gaussian elimination, where only the off-diagonal entries of A are used as the input data. For a nice exposition of the algorithm, see Stewart (2009), Chapter 10.

Arguments

  • A::Matrix{T} : Stochastic matrix or generator matrix. Must be of shape n x n.

Returns

  • x::Vector{T} : Stationary distribution of A.

References

  • W. K. Grassmann, M. I. Taksar and D. P. Heyman, "Regenerative Analysis and

Steady State Distributions for Markov Chains," Operations Research (1985), 1107-1116.

  • W. J. Stewart, Probability, Markov Chains, Queues, and Simulation, Princeton

University Press, 2009.

source
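
For a quick check on a two-state chain:

using QuantEcon
A = [0.9 0.1; 0.2 0.8]   # irreducible stochastic matrix
gth_solve(A)              # ≈ [0.6667, 0.3333]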

Get the impulse response corresponding to our model.

Arguments

  • arma::ARMA: Instance of ARMA type

  • ;impulse_length::Integer(30): Length of horizon for calculating impulse

response. Must be at least as long as the p field of arma

Returns

  • psi::Vector{Float64}: psi[j] is the response at lag j of the impulse

response. We take psi[1] as unity.

source
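
For instance, for an AR(1) with phi = 0.9 the response decays geometrically:

using QuantEcon
arma = ARMA(0.9, 0.0, 1.0)
psi = impulse_response(arma; impulse_length=10)
# psi[1] = 1.0 and psi[j] ≈ 0.9^(j-1)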

Indicate whether the Markov chain mc is aperiodic.

Arguments

  • mc::MarkovChain : MarkovChain instance.

Returns

  • ::Bool

source

Indicate whether the Markov chain mc is irreducible.

Arguments

  • mc::MarkovChain : MarkovChain instance.

Returns

  • ::Bool

source
QuantEcon.lae_est (Method)

A vectorized function that returns the value of the look ahead estimate at the values in the array y.

Arguments

  • l::LAE: Instance of LAE type

  • y::Array: Array that becomes the y in l.p(l.x, y)

Returns

  • psi_vals::Vector: Density at (x, y)

source

Computes the quadratic sum

V = sum_{j=0}^{infty} A^j B (A^j)'

V is computed by solving the corresponding discrete Lyapunov equation using the doubling algorithm. See the documentation of solve_discrete_lyapunov for more information.

Arguments

  • A::Matrix{Float64} : An n x n matrix as described above. We assume in order

for convergence that the eigenvalues of A have moduli bounded by unity

  • B::Matrix{Float64} : An n x n matrix as described above. We assume in order

for convergence that the eigenvalues of B have moduli bounded by unity

  • max_it::Int(50) : Maximum number of iterations

Returns

  • gamma1::Matrix{Float64} : Represents the value V

source

Create a generator to calculate the population mean and variance-covariance matrix for both x_t and y_t, starting at the initial condition (lss.mu_0, lss.Sigma_0). Each iteration produces a 4-tuple of items (mu_x, mu_y, Sigma_x, Sigma_y) for the next period.

Arguments

  • lss::LSS An instance of the Gaussian linear state space model

source
QuantEcon.n_states (Method)

Number of states in the Markov chain mc

source
QuantEcon.nnash (Method)

Compute the limit of a Nash linear quadratic dynamic game.

Player i minimizes

sum_{t=1}^{inf} (x_t' r_i x_t + 2 x_t' w_i u_{it} + u_{it}' q_i u_{it} + u_{jt}' s_i u_{jt} + 2 u_{jt}' m_i u_{it})

subject to the law of motion

x_{t+1} = A x_t + b_1 u_{1t} + b_2 u_{2t}

and a perceived control law u_j(t) = - f_j x_t for the other player.

The solution computed in this routine is the f_i and p_i of the associated double optimal linear regulator problem.

Arguments

  • A : Corresponds to the above equation, should be of size (n, n)

  • B1 : As above, size (n, k_1)

  • B2 : As above, size (n, k_2)

  • R1 : As above, size (n, n)

  • R2 : As above, size (n, n)

  • Q1 : As above, size (k_1, k_1)

  • Q2 : As above, size (k_2, k_2)

  • S1 : As above, size (k_1, k_1)

  • S2 : As above, size (k_2, k_2)

  • W1 : As above, size (n, k_1)

  • W2 : As above, size (n, k_2)

  • M1 : As above, size (k_2, k_1)

  • M2 : As above, size (k_1, k_2)

  • ;beta::Float64(1.0) Discount rate

  • ;tol::Float64(1e-8) : Tolerance level for convergence

  • ;max_iter::Int(1000) : Maximum number of iterations allowed

Returns

  • F1::Matrix{Float64}: (k_1, n) matrix representing feedback law for agent 1

  • F2::Matrix{Float64}: (k_2, n) matrix representing feedback law for agent 2

  • P1::Matrix{Float64}: (n, n) matrix representing the steady-state solution to the associated discrete matrix Riccati equation for agent 1

  • P2::Matrix{Float64}: (n, n) matrix representing the steady-state solution to the associated discrete matrix Riccati equation for agent 2

source
QuantEcon.periodogram (Function)

Computes the periodogram

I(w) = (1 / n) | sum_{t=0}^{n-1} x_t e^{itw} |^2

at the Fourier frequencies w_j := 2 pi j / n, j = 0, ..., n - 1, using the fast Fourier transform. Only the frequencies w_j in [0, pi] and corresponding values I(w_j) are returned. If a window type is given then smoothing is performed.

Arguments

  • x::Array: An array containing the data to smooth

  • window_len::Int(7): An odd integer giving the length of the window

  • window::AbstractString("hanning"): A string giving the window type. Possible values

are flat, hanning, hamming, bartlett, or blackman

Returns

  • w::Array{Float64}: Fourier frequencies at which the periodogram is evaluated

  • I_w::Array{Float64}: The periodogram at frequencies w

source

Updates the moments (cur_x_hat, cur_sigma) of the time t prior to the time t filtering distribution, using current measurement y_t. The updates are according to

x_hat^F = x_hat + Sigma G' (G Sigma G' + R)^{-1} (y - G x_hat)

Sigma^F = Sigma - Sigma G' (G Sigma G' + R)^{-1} G Sigma

Arguments

  • k::Kalman An instance of the Kalman filter

  • y The current measurement

source
QuantEcon.qnwbeta (Method)

Computes nodes and weights for beta distribution

Arguments

  • n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension

  • a::Union{Real, Vector{Real}} : First parameter of the beta distribution,

along each dimension

  • b::Union{Real, Vector{Real}} : Second parameter of the beta distribution,

along each dimension

Returns

  • nodes::Array{Float64} : An array of quadrature nodes

  • weights::Array{Float64} : An array of corresponding quadrature weights

Notes

If any of the parameters to this function are scalars while others are Vectors of length n, then the scalar parameter is repeated n times.

References

Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.

source
QuantEcon.qnwcheb (Method)

Computes multivariate Gauss-Chebyshev quadrature nodes and weights.

Arguments

  • n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension

  • a::Union{Real, Vector{Real}} : Lower endpoint along each dimension

  • b::Union{Real, Vector{Real}} : Upper endpoint along each dimension

Returns

  • nodes::Array{Float64} : An array of quadrature nodes

  • weights::Array{Float64} : An array of corresponding quadrature weights

Notes

If any of the parameters to this function are scalars while others are Vectors of length n, then the scalar parameter is repeated n times.

References

Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.

source
QuantEcon.qnwequi (Function)

Generates equidistributed sequences with the property that the average value of an integrable function evaluated over the sequence converges to the integral as n goes to infinity.

Arguments

  • n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension

  • a::Union{Real, Vector{Real}} : Lower endpoint along each dimension

  • b::Union{Real, Vector{Real}} : Upper endpoint along each dimension

  • kind::AbstractString("N"): One of the following:

    • N - Niederreiter (default)

    • W - Weyl

    • H - Haber

    • R - pseudo Random

Returns

  • nodes::Array{Float64} : An array of quadrature nodes

  • weights::Array{Float64} : An array of corresponding quadrature weights

Notes

If any of the parameters to this function are scalars while others are Vectors of length n, then the scalar parameter is repeated n times.

References

Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.

source
QuantEcon.qnwgamma (Function)

Computes nodes and weights for gamma distribution

Arguments

  • n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension

  • a::Union{Real, Vector{Real}} : First parameter of the gamma distribution,

along each dimension

  • b::Union{Real, Vector{Real}} : Second parameter of the gamma distribution,

along each dimension

Returns

  • nodes::Array{Float64} : An array of quadrature nodes

  • weights::Array{Float64} : An array of corresponding quadrature weights

Notes

If any of the parameters to this function are scalars while others are Vectors of length n, then the scalar parameter is repeated n times.

References

Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.

source
QuantEcon.qnwlege (Method)

Computes multivariate Gauss-Legendre quadrature nodes and weights.

Arguments

  • n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension

  • a::Union{Real, Vector{Real}} : Lower endpoint along each dimension

  • b::Union{Real, Vector{Real}} : Upper endpoint along each dimension

Returns

  • nodes::Array{Float64} : An array of quadrature nodes

  • weights::Array{Float64} : An array of corresponding quadrature weights

Notes

If any of the parameters to this function are scalars while others are Vectors of length n, then the scalar parameter is repeated n times.

References

Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.

source
QuantEcon.qnwlogn (Method)

Computes quadrature nodes and weights for multivariate lognormal distribution

Arguments

  • n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension

  • mu::Union{Real, Vector{Real}} : Mean along each dimension

  • sig2::Union{Real, Vector{Real}, Matrix{Real}}(eye(length(n))) : Covariance

structure

Returns

  • nodes::Array{Float64} : An array of quadrature nodes

  • weights::Array{Float64} : An array of corresponding quadrature weights

Notes

See also the documentation for qnwnorm

References

Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.

source
QuantEcon.qnwnorm (Method)

Computes nodes and weights for multivariate normal distribution

Arguments

  • n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension

  • mu::Union{Real, Vector{Real}} : Mean along each dimension

  • sig2::Union{Real, Vector{Real}, Matrix{Real}}(eye(length(n))) : Covariance

structure

Returns

  • nodes::Array{Float64} : An array of quadrature nodes

  • weights::Array{Float64} : An array of corresponding quadrature weights

Notes

This function has many methods. I try to describe them here.

n or mu can be a vector or a scalar. If just one of them is a scalar, it is repeated to match the length of the other. If both are scalars, then the number of repeats is inferred from sig2.

sig2 can be a matrix, vector or scalar. If it is a matrix, it is treated as the covariance matrix. If it is a vector, it is considered the diagonal of a diagonal covariance matrix. If it is a scalar it is repeated along the diagonal as many times as necessary, where the number of repeats is determined by the length of n and/or mu (whichever is a vector).

If all 3 are scalars, then 1d nodes are computed. mu and sig2 are treated as the mean and variance of a 1d normal distribution

References

Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.

source
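
A one-dimensional sanity check: the weights should integrate the second moment of a standard normal to 1:

using QuantEcon
nodes, weights = qnwnorm(9, 0.0, 1.0)
sum(weights .* nodes.^2)   # ≈ 1.0, the variance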
QuantEcon.qnwsimp (Method)

Computes multivariate Simpson quadrature nodes and weights.

Arguments

  • n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension

  • a::Union{Real, Vector{Real}} : Lower endpoint along each dimension

  • b::Union{Real, Vector{Real}} : Upper endpoint along each dimension

Returns

  • nodes::Array{Float64} : An array of quadrature nodes

  • weights::Array{Float64} : An array of corresponding quadrature weights

Notes

If any of the parameters to this function are scalars while others are Vectors of length n, then the scalar parameter is repeated n times.

References

Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.

source
QuantEcon.qnwtrap (Method)

Computes multivariate trapezoid quadrature nodes and weights.

Arguments

  • n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension

  • a::Union{Real, Vector{Real}} : Lower endpoint along each dimension

  • b::Union{Real, Vector{Real}} : Upper endpoint along each dimension

Returns

  • nodes::Array{Float64} : An array of quadrature nodes

  • weights::Array{Float64} : An array of corresponding quadrature weights

Notes

If any of the parameters to this function are scalars while others are Vectors of length n, then the scalar parameter is repeated n times.

References

Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.

source
QuantEcon.qnwunif (Method)

Computes quadrature nodes and weights for multivariate uniform distribution

Arguments

  • n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension

  • a::Union{Real, Vector{Real}} : Lower endpoint along each dimension

  • b::Union{Real, Vector{Real}} : Upper endpoint along each dimension

Returns

  • nodes::Array{Float64} : An array of quadrature nodes

  • weights::Array{Float64} : An array of corresponding quadrature weights

Notes

If any of the parameters to this function are scalars while others are Vectors of length n, then the scalar parameter is repeated n times.

References

Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.

source
QuantEcon.quadrect (Function)

Integrate the d-dimensional function f on a rectangle with lower and upper bound for dimension i defined by a[i] and b[i], respectively; using n[i] points.

Arguments

  • f::Function The function to integrate over. This should be a function that

accepts as its first argument a matrix representing points along each dimension (each dimension is a column). Other arguments that need to be passed to the function are caught by args... and kwargs...

  • n::Union{Int, Vector{Int}} : Number of desired nodes along each dimension

  • a::Union{Real, Vector{Real}} : Lower endpoint along each dimension

  • b::Union{Real, Vector{Real}} : Upper endpoint along each dimension

  • kind::AbstractString("lege") Specifies which type of integration to perform. Valid

values are: - "lege" : Gauss-Legendre - "cheb" : Gauss-Chebyshev - "trap" : trapezoid rule - "simp" : Simpson rule - "N" : Neiderreiter equidistributed sequence - "W" : Weyl equidistributed sequence - "H" : Haber equidistributed sequence - "R" : Monte Carlo - args...(Void): additional positional arguments to pass to f - ;kwargs...(Void): additional keyword arguments to pass to f

Returns

  • out::Float64 : The scalar that approximates integral of f on the hypercube

formed by [a, b]

References

Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.

source
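
For example, integrating f(x1, x2) = x1^2 * x2 over the unit square (true value 1/6):

using QuantEcon
f(x) = x[:, 1].^2 .* x[:, 2]   # each dimension is a column of x
quadrect(f, [11, 11], [0.0, 0.0], [1.0, 1.0])   # ≈ 0.1667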

Generate a DiscreteDP randomly. The reward values are drawn from the normal distribution with mean 0 and standard deviation scale.

Arguments

  • num_states::Integer : Number of states.

  • num_actions::Integer : Number of actions.

  • beta::Union{Float64, Void}(nothing) : Discount factor. Randomly chosen from

[0, 1) if not specified.

  • ;k::Union{Integer, Void}(nothing) : Number of possible next states for each

state-action pair. Equal to num_states if not specified.

  • scale::Real(1) : Standard deviation of the normal distribution for the

reward values.

Returns

  • ddp::DiscreteDP : An instance of DiscreteDP.

source

Return a randomly sampled MarkovChain instance with n states, where each state has k states to which it transitions with positive probability.

Arguments

  • n::Integer : Number of states.

  • k::Integer : Number of states to which each state can transition with positive probability.

Returns

  • mc::MarkovChain : MarkovChain instance.

Examples

julia> using QuantEcon

julia> mc = random_markov_chain(3, 2)
Discrete Markov Chain
stochastic matrix:
3x3 Array{Float64,2}:
 0.369124  0.0       0.630876
 0.519035  0.480965  0.0
 0.0       0.744614  0.255386
source

Return a randomly sampled MarkovChain instance with n states.

Arguments

  • n::Integer : Number of states.

Returns

  • mc::MarkovChain : MarkovChain instance.

Examples

julia> using QuantEcon

julia> mc = random_markov_chain(3)
Discrete Markov Chain
stochastic matrix:
3x3 Array{Float64,2}:
 0.281188  0.61799   0.100822
 0.144461  0.848179  0.0073594
 0.360115  0.323973  0.315912
source

Return a randomly sampled n x n stochastic matrix with k nonzero entries for each row.

Arguments

  • n::Integer : Number of states.

  • k::Union{Integer, Void}(nothing) : Number of nonzero entries in each

row of the matrix. Set to n if not specified.

Returns

  • p::Array : Stochastic matrix.

source

Find the recurrent classes of the Markov chain mc.

Arguments

  • mc::MarkovChain : MarkovChain instance.

Returns

  • ::Vector{Vector{Int}} : Vector of vectors that describe the recurrent

classes of mc.

source
QuantEcon.replicate (Function)

Simulate num_reps observations of x_T and y_T given x_0 ~ N(mu_0, Sigma_0).

Arguments

  • lss::LSS An instance of the Gaussian linear state space model.

  • t::Int = 10 The period that we want to replicate values for.

  • num_reps::Int = 100 The number of replications we want

Returns

  • x::Matrix An n x num_reps matrix, where the j-th column is the j-th observation of x_T

  • y::Matrix A k x num_reps matrix, where the j-th column is the j-th observation of y_T

source
QuantEcon.ridder (Method)

Find a root of f on the bracketing interval [x1, x2] via Ridder's algorithm

Arguments

  • f::Function: The function you want to bracket

  • x1::T: Lower border for search interval

  • x2::T: Upper border for search interval

  • ;maxiter::Int(500): Maximum number of bisection iterations

  • ;xtol::Float64(1e-12): The routine converges when a root is known to lie

within xtol of the returned value. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.

  • ;rtol::Float64(2*eps()): The routine converges when a root is known to lie

within rtol times the returned value. Should be ≥ 0

Returns

  • x::T: The found root

Exceptions

  • Throws an ArgumentError if [x1, x2] does not form a bracketing interval

  • Throws a ConvergenceError if the maximum number of iterations is exceeded

References

Matches ridder function from scipy/scipy/optimize/Zeros/ridder.c

source

Solves the robust control problem.

The algorithm here tricks the problem into a stacked LQ problem, as described in chapter 2 of Hansen-Sargent's text "Robustness." The optimal control with observed state is

u_t = - F x_t

And the value function is -x'Px

Arguments

  • rlq::RBLQ: Instance of RBLQ type

Returns

  • F::Matrix{Float64} : The optimal control matrix from above

  • P::Matrix{Float64} : The positive semi-definite matrix defining the value

function

  • K::Matrix{Float64} : the worst-case shock matrix K, where

w_{t+1} = K x_t is the worst case shock

source
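
A minimal sketch; the matrix values are illustrative, and the constructor argument order Q, R, A, B, C, beta, theta is assumed from the field listing above:

using QuantEcon
Q = ones(1, 1); R = ones(1, 1)
A = fill(0.9, 1, 1); B = ones(1, 1); C = fill(0.5, 1, 1)
rlq = RBLQ(Q, R, A, B, C, 0.95, 2.0)   # beta = 0.95, theta = 2.0 (assumed order)
F, P, K = robust_rule(rlq)              # returns listed above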

Solve the robust LQ problem

A simple algorithm for computing the robust policy F and the corresponding value function P, based around straightforward iteration with the robust Bellman operator. This function is easier to understand but one or two orders of magnitude slower than robust_rule. For more information see the docstring of that method.

Arguments

  • rlq::RBLQ: Instance of RBLQ type

  • P_init::Matrix{Float64}(zeros(rlq.n, rlq.n)) : The initial guess for the

value function matrix

  • ;max_iter::Int(80): Maximum number of iterations that are allowed

  • ;tol::Real(1e-8) The tolerance for convergence

Returns

  • F::Matrix{Float64} : The optimal control matrix from above

  • P::Matrix{Float64} : The positive semi-definite matrix defining the value

function

  • K::Matrix{Float64} : the worst-case shock matrix K, where

w_{t+1} = K x_t is the worst case shock

source
QuantEcon.rouwenhorst (Function)

Rouwenhorst's method to approximate AR(1) processes.

The process follows

y_t = μ + ρ y_{t-1} + ε_t,

where ε_t ~ N (0, σ^2)

Arguments

  • N::Integer : Number of points in Markov process

  • ρ::Real : Persistence parameter in AR(1) process

  • σ::Real : Standard deviation of random component of AR(1) process

  • μ::Real(0.0) : Mean of AR(1) process

Returns

  • mc::MarkovChain{Float64} : Markov chain holding the state values and

transition matrix

source

Fill X with sample paths of the Markov chain mc as columns. The resulting matrix has the state values of mc as elements.

Arguments

  • X::Matrix : Preallocated matrix to be filled with sample paths

of the Markov chain mc. The element types in X should be the same as the type of the state values of mc

  • mc::MarkovChain : MarkovChain instance.

  • ;init=rand(1:n_states(mc)) : Can be one of the following

    • blank: random initial condition for each chain

    • scalar: same initial condition for each chain

    • vector: cycle through the elements, applying each as an initial condition until all columns have an initial condition (allows for more columns than initial conditions)

source
QuantEcon.simulate (Method)

Simulate one sample path of the Markov chain mc. The resulting vector has the state values of mc as elements.

Arguments

  • mc::MarkovChain : MarkovChain instance.

  • ts_length::Int : Length of simulation

  • ;init::Int=rand(1:n_states(mc)) : Initial state

Returns

  • X::Vector : Vector containing the sample path, with length

ts_length

source

Fill X with sample paths of the Markov chain mc as columns. The resulting matrix has the indices of the state values of mc as elements.

Arguments

  • X::Matrix{Int} : Preallocated matrix to be filled with indices

of the sample paths of the Markov chain mc.

  • mc::MarkovChain : MarkovChain instance.

  • ;init=rand(1:n_states(mc)) : Can be one of the following

    • blank: random initial condition for each chain

    • scalar: same initial condition for each chain

    • vector: cycle through the elements, applying each as an initial condition until all columns have an initial condition (allows for more columns than initial conditions)

source

Simulate one sample path of the Markov chain mc. The resulting vector has the indices of the state values of mc as elements.

Arguments

  • mc::MarkovChain : MarkovChain instance.

  • ts_length::Int : Length of simulation

  • ;init::Int=rand(1:n_states(mc)) : Initial state

Returns

  • X::Vector{Int} : Vector containing the sample path, with length

ts_length

source

Compute a simulated sample path assuming Gaussian shocks.

Arguments

  • arma::ARMA: Instance of ARMA type

  • ;ts_length::Integer(90): Length of simulation

  • ;impulse_length::Integer(30): Horizon for calculating impulse response

(see also docstring for impulse_response)

Returns

  • X::Vector{Float64}: Simulation of the ARMA model arma

source
QuantEcon.smooth (Function)

Smooth the data in x using convolution with a window of requested size and type.

Arguments

  • x::Array: An array containing the data to smooth

  • window_len::Int(7): An odd integer giving the length of the window

  • window::AbstractString("hanning"): A string giving the window type. Possible values

are flat, hanning, hamming, bartlett, or blackman

Returns

  • out::Array: The array of smoothed data

source
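
For instance:

using QuantEcon
x = randn(200)
y = smooth(x, 9, "hamming")   # window length 9, Hamming window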
QuantEcon.smooth (Method)

Version of smooth where window_len and window are keyword arguments

source
QuantEcon.solve (Function)

Solve the dynamic programming problem.

Parameters

  • ddp::DiscreteDP : Object that contains the Model Parameters

  • method::Type{T<:Algo}(VFI): Type name specifying solution method. Acceptable

arguments are VFI for value function iteration, PFI for policy function iteration, or MPFI for modified policy function iteration

  • ;max_iter::Int(250) : Maximum number of iterations

  • ;epsilon::Float64(1e-3) : Value for epsilon-optimality. Only used if

method is VFI

  • ;k::Int(20) : Number of iterations for partial policy evaluation in modified

policy iteration (irrelevant for other methods).

Returns

  • ddpr::DPSolveResult{Algo} : Optimization result represented as a

DPSolveResult. See DPSolveResult for details.

source
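
Continuing the small dense DiscreteDP example from earlier in this section, a sketch of solving it with policy iteration:

using QuantEcon
R = [5.0 10.0; -1.0 -Inf]
Q = zeros(2, 2, 2)
Q[1, 1, :] = [0.5, 0.5]; Q[1, 2, :] = [0.0, 1.0]
Q[2, 1, :] = [0.0, 1.0]; Q[2, 2, :] = [0.5, 0.5]
ddp = DiscreteDP(R, Q, 0.95)
ddpr = solve(ddp, PFI)   # or VFI, MPFI
ddpr.v                    # optimal value function
ddpr.sigma                # optimal policy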

Solves the discrete Lyapunov equation.

The problem is given by

AXA' - X + B = 0

X is computed by using a doubling algorithm. In particular, we iterate to convergence on X_j with the following recursions for j = 1, 2,... starting from X_0 = B, a_0 = A:

a_j = a_{j-1} a_{j-1}
X_j = X_{j-1} + a_{j-1} X_{j-1} a_{j-1}'

Arguments

  • A::Matrix{Float64} : An n x n matrix as described above. We assume in order

for convergence that the eigenvalues of A have moduli bounded by unity

  • B::Matrix{Float64} : An n x n matrix as described above. We assume in order

for convergence that the eigenvalues of B have moduli bounded by unity

  • max_it::Int(50) : Maximum number of iterations

Returns

  • gamma1::Matrix{Float64} Represents the value X

source
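
A small check:

using QuantEcon
A = [0.5 0.1; 0.0 0.3]
B = [1.0 0.0; 0.0 1.0]
X = solve_discrete_lyapunov(A, B)
# A * X * A' - X + B should be ≈ zero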

Solves the discrete-time algebraic Riccati equation

The problem is defined as

X = A'XA - (N + B'XA)'(B'XB + R)^{-1}(N + B'XA) + Q

via a modified structured doubling algorithm. An explanation of the algorithm can be found in the reference below.

Arguments

  • A : k x k array.

  • B : k x n array

  • R : n x n, should be symmetric and positive definite

  • Q : k x k, should be symmetric and non-negative definite

  • N::Matrix{Float64}(zeros(size(R, 1), size(Q, 1))) : n x k array

  • tolerance::Float64(1e-10) Tolerance level for convergence

  • max_iter::Int(50) : The maximum number of iterations allowed

Note that A, B, R, Q can either be real (i.e. k, n = 1) or matrices.

Returns

  • X::Matrix{Float64} The fixed point of the Riccati equation; a k x k array

representing the approximate solution

References

Chiang, Chun-Yueh, Hung-Yuan Fan, and Wen-Wei Lin. "Structured Doubling Algorithm for Discrete-Time Algebraic Riccati Equations with Singular Control Weighting Matrices." Taiwanese Journal of Mathematics 14, no. 3A (2010): 935-954.

source

Compute the spectral density function.

The spectral density is the discrete time Fourier transform of the autocovariance function. In particular,

f(w) = sum_k gamma(k) exp(-ikw)

where gamma is the autocovariance function and the sum is over the set of all integers.

Arguments

  • arma::ARMA: Instance of ARMA type

  • ;two_pi::Bool(true): Compute the spectral density function over [0, pi] if false and [0, 2 pi] otherwise.

  • ;res(1200) : If res is a scalar then the spectral density is computed at

res frequencies evenly spaced around the unit circle, but if res is an array then the function computes the response at the frequencies given by the array

Returns

  • w::Vector{Float64}: The normalized frequencies at which the spectral density was computed, in radians/sample

  • spect::Vector{Float64} : The frequency response

source

Compute stationary distributions of the Markov chain mc, one for each recurrent class.

Arguments

  • mc::MarkovChain{T} : MarkovChain instance.

Returns

  • stationary_dists::Vector{Vector{T1}} : Vector of vectors that represent stationary distributions, where the element type T1 is Rational if T is Int (and equal to T otherwise).

source

Compute the moments of the stationary distributions of x_t and y_t if possible. Computation is by iteration, starting from the initial conditions lss.mu_0 and lss.Sigma_0

Arguments

  • lss::LSS An instance of the Gaussian linear state space model

  • ;max_iter::Int = 200 The maximum number of iterations allowed

  • ;tol::Float64 = 1e-5 The tolerance level one wishes to achieve

Returns

  • mu_x::Vector Represents the stationary mean of x_t

  • mu_y::Vector Represents the stationary mean of y_t

  • Sigma_x::Matrix Represents the stationary var-cov matrix of x_t

  • Sigma_y::Matrix Represents the stationary var-cov matrix of y_t

source

Computes value and policy functions in infinite horizon model

Arguments

  • lq::LQ : instance of LQ type

Returns

  • P::ScalarOrArray : n x n matrix in value function representation

V(x) = x'Px + d

  • d::Real : Constant in value function representation

  • F::ScalarOrArray : Policy rule that specifies optimal control in each period

Notes

This function updates the P, d, and F fields on the lq instance in addition to returning them

source

Non-mutating routine for solving for P, d, and F in infinite horizon model

See docstring for stationary_values! for more explanation

source
QuantEcon.tauchen (Function)

Tauchen's (1986) method for approximating AR(1) process with finite Markov chain

The process follows

y_t = μ + ρ y_{t-1} + ε_t,

where ε_t ~ N (0, σ^2)

Arguments

  • N::Integer: Number of points in Markov process

  • ρ::Real : Persistence parameter in AR(1) process

  • σ::Real : Standard deviation of random component of AR(1) process

  • μ::Real(0.0) : Mean of AR(1) process

  • n_std::Integer(3) : The number of standard deviations to each side the process

should span

Returns

  • mc::MarkovChain{Float64} : Markov chain holding the state values and

transition matrix

source
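
For example:

using QuantEcon
mc = tauchen(5, 0.9, 0.1)      # 5 states, rho = 0.9, sigma = 0.1
mc.state_values                 # discretized grid for y
stationary_distributions(mc)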
QuantEcon.update! (Method)

Updates cur_x_hat and cur_sigma given array y of length k. The full update, from one period to the next

Arguments

  • k::Kalman An instance of the Kalman filter

  • y An array representing the current measurement

source

Update P and d from the value function representation in finite horizon case

Arguments

  • lq::LQ : instance of LQ type

Returns

  • P::ScalarOrArray : n x n matrix in value function representation

V(x) = x'Px + d

  • d::Real : Constant in value function representation

Notes

This function updates the P and d fields on the lq instance in addition to returning them

source

Computes the expected discounted quadratic sum

q(x_0) = E sum_{t=0}^{infty} beta^t x_t' H x_t

Here {x_t} is the VAR process x_{t+1} = A x_t + C w_t with {w_t} standard normal and x_0 the initial condition.

Arguments

  • A::Union{Float64, Matrix{Float64}} The n x n matrix described above (scalar)

if n = 1

  • C::Union{Float64, Matrix{Float64}} The n x n matrix described above (scalar)

if n = 1

  • H::Union{Float64, Matrix{Float64}} The n x n matrix described above (scalar)

if n = 1

  • beta::Float64: Discount factor in (0, 1)

  • x_0::Union{Float64, Vector{Float64}} The initial condition. A conformable

array (of length n) or a scalar if n=1

Returns

  • q0::Float64 : Represents the value q(x_0)

Notes

The formula for computing q(x_0) is q(x_0) = x_0' Q x_0 + v where

  • Q is the solution to Q = H + beta A' Q A and

  • v = trace(C' Q C) beta / (1 - beta)

source
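
A scalar sketch, with the argument order A, C, H, beta, x_0 as listed above:

using QuantEcon
q0 = var_quadratic_sum(0.9, 0.1, 1.0, 0.95, 1.0)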

Internal

DPSolveResult is an object for retaining results and associated metadata after solving the model

Parameters

  • ddp::DiscreteDP : DiscreteDP object

Returns

  • ddpr::DPSolveResult : DiscreteDP Results object

source
Base.:* (Method)

Define matrix multiplication between a 3-dimensional array and a vector

Matrix multiplication over the last dimension of A

source

Private method implementing compute_sequence when state is a scalar

source

Private method implementing compute_sequence when state is a vector

source

Generate a_indptr; stored in out. s_indices is assumed to be in sorted order.

Parameters

num_states : Int

s_indices : Vector{Int}

out : Vector{Int} with length = num_states+1

source

Check whether s_indices and a_indices are sorted in lexicographic order.

Parameters

s_indices, a_indices : Vectors

Returns

bool: Whether s_indices and a_indices are sorted.

source

Generate a "non-square column stochstic matrix" of shape (n, m), which contains as columns m probability vectors of length n with k nonzero entries.

Arguments

  • n::Integer : Number of states.

  • m::Integer : Number of probability vectors.

  • ;k::Union{Integer, Void}(nothing) : Number of nonzero entries in each

column of the matrix. Set to n if not specified.

Returns

  • p::Array : Array of shape (n, m) containing m probability vectors of length

n as columns.

source
QuantEcon._solve! (Method)

Modified Policy Function Iteration

source
QuantEcon._solve! (Method)

Policy Function Iteration

NOTE: The epsilon is ignored in this method. It is only here so dispatch can go from solve(::DiscreteDP, ::Type{Algo}) to any of the algorithms. See solve for further details

source
QuantEcon._solve! (Method)

Implements Value Iteration. NOTE: See solve for further details

source
QuantEcon.fix (Function)

fix(x)

Round x towards zero. For arrays there is a mutating version fix!

source
QuantEcon.getZ (Method)

Simple method to return an element Z in the Riccati equation solver whose type is Matrix (to be accepted by the cond() function)

Arguments

  • BB::Matrix : result of B' * B

  • gamma::Float64 : parameter in the Riccati equation solver

  • R::Matrix

Returns

  • ::Matrix : element Z in the Riccati equation solver

source
QuantEcon.getZ (Method)

Simple method to return an element Z in the Riccati equation solver whose type is Float64 (to be accepted by the cond() function)

Arguments

  • BB::Float64 : result of B' * B

  • gamma::Float64 : parameter in the Riccati equation solver

  • R::Float64

Returns

  • ::Float64 : element Z in the Riccati equation solver

source
QuantEcon.getZ (Method)

Simple method to return an element Z in the Riccati equation solver whose type is Float64 (to be accepted by the cond() function)

Arguments

  • BB::Union{Vector, Matrix} : result of B' * B

  • gamma::Float64 : parameter in the Riccati equation solver

  • R::Float64

Returns

  • ::Float64 : element Z in the Riccati equation solver

source

Same as gth_solve, but overwrite the input A, instead of creating a copy.

source

Return m randomly sampled probability vectors of size k.

Arguments

  • k::Integer : Size of each probability vector.

  • m::Integer : Number of probability vectors.

Returns

  • a::Array : Array of shape (k, m) containing probability vectors as columns.

source

Populate out with max_a vals(s, a), where vals is represented as an AbstractMatrix of size (num_states, num_actions).

Also fills out_argmax with the column number associated with the indmax in each row

source

Populate out with max_a vals(s, a), where vals is represented as an AbstractMatrix of size (num_states, num_actions).

source

Populate out with max_a vals(s, a), where vals is represented as a Vector of size (num_sa_pairs,).

Also fills out_argmax with the Cartesian index associated with the indmax in each row

source

Populate out with max_a vals(s, a), where vals is represented as a Vector of size (num_sa_pairs,).

source

Return the Vector max_a vals(s, a), where vals is represented as an AbstractMatrix of size (num_states, num_actions).

source
QuantEcon.todense (Method)

If A is already dense, return A as is

source
QuantEcon.todense (Method)

Custom version of full, which allows conversion to type T

source
