The Blanchard–Kahn Conditions and Sims’ Algorithm
“Gensys is not mysterious. It is Gaussian elimination applied to the eigenvalue problem of a generalized linear system.” — Christopher Sims
Cross-reference: Principles Ch. 16 (determinacy, Taylor principle, policy ineffectiveness); Ch. 23 (monetary policy and determinacy) [P:Ch.16, P:Ch.23]
28.1 The Solution Problem
Chapter 27 produced the log-linearized DSGE — a system of linear equations involving current endogenous variables $y_t$, lagged endogenous variables $y_{t-1}$, expected future endogenous variables $E_t y_{t+1}$, exogenous shocks $z_t$, and expectation errors $\eta_t$. We need to find a decision rule — a mapping from the predetermined state variables and current shocks to the current endogenous variables — that is consistent with all equilibrium conditions and the requirement that the solution be bounded.
This chapter develops the complete solution methodology: the Sims (2001) canonical form, the generalized eigenvalue (QZ) decomposition, the Blanchard–Kahn counting rule, and the resulting state-space solution. The Taylor principle is proved as a formal theorem.
28.2 The Sims Canonical Form
Definition 28.1 (Sims Canonical Form). The log-linearized DSGE system can be written:

$$\Gamma_0 y_t = \Gamma_1 y_{t-1} + \Psi z_t + \Pi \eta_t,$$

where:
$y_t \in \mathbb{R}^n$ — all endogenous variables (predetermined + jump).
$\Gamma_0, \Gamma_1$ ($n \times n$) — current-period and lagged coefficient matrices.
$z_t \in \mathbb{R}^k$ — exogenous shocks with $E_{t-1} z_t = 0$.
$\eta_t \in \mathbb{R}^m$ — expectation errors $\eta_t = y_t - E_{t-1} y_t$ (endogenous: determined in equilibrium, with $E_{t-1} \eta_t = 0$).
$\Psi$ ($n \times k$), $\Pi$ ($n \times m$) — shock and expectation-error loading matrices.
The key feature: $\Gamma_0$ may be singular (non-invertible). This occurs when the model has static equilibrium conditions (equations linking only current variables, with no dynamics of their own) or when forward-looking variables appear without a lagged counterpart. The Sims algorithm handles singular $\Gamma_0$ through the QZ decomposition.
28.3 Partitioning: Predetermined vs. Jump Variables
Before applying the QZ decomposition, it is useful to understand the economic classification of variables.
Definition 28.2 (Predetermined Variables). Variable $y_{i,t}$ is predetermined if its time-$t$ value is known at time $t-1$: $y_{i,t} \in \mathcal{I}_{t-1}$, i.e., $E_{t-1} y_{i,t} = y_{i,t}$. Examples: the capital stock $k_t$ (determined by last period’s investment), lagged inflation in a hybrid NKPC, any explicitly lagged variable.
Definition 28.3 (Jump Variables / Free Variables). Variable $y_{i,t}$ is a jump variable if it can change discontinuously in response to news: $E_{t-1} y_{i,t} \neq y_{i,t}$ in general. Examples: consumption $c_t$, inflation $\pi_t$, Tobin’s $q_t$, the nominal interest rate $i_t$.
The Blanchard–Kahn counting rule:
Theorem 28.1 (Blanchard–Kahn Conditions). The linear rational expectations system has a unique bounded solution if and only if the number of generalized eigenvalues of the pencil $(\Gamma_0, \Gamma_1)$ that lie outside the unit circle equals the number of jump variables $n_j$.
If there are too few unstable eigenvalues ($n_u < n_j$): indeterminate — multiple bounded solutions (sunspot equilibria).
If there are too many unstable eigenvalues ($n_u > n_j$): no bounded solution (the system explodes from any initial condition).
Proof sketch. The model has $n_k$ predetermined and $n_j$ jump variables. The QZ decomposition (below) block-diagonalizes the system into stable ($|\mu_i| \le 1$) and unstable ($|\mu_i| > 1$) modes. Predetermined variables must be associated with stable modes (they cannot jump to accommodate shocks). Jump variables must be associated with unstable modes — their initial conditions are chosen to put the economy on the stable manifold. Uniqueness requires an exact match between unstable modes and jump variables.
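The counting rule can be checked numerically. A minimal sketch (the 2×2 diagonal pencil below is illustrative, not a calibrated model), using scipy's generalized eigenvalue solver:

```python
import numpy as np
from scipy.linalg import eig

def bk_check(G0, G1, n_jump, tol=1e-8):
    """Blanchard-Kahn counting for G0 y_t = G1 y_{t-1} (+ shocks).
    scipy's eig(A, B) solves A v = mu B v, so eig(G1, G0) returns the
    system roots mu of the pencil directly."""
    mu = eig(G1, G0, right=False)
    n_unstable = int(np.sum(np.abs(mu) > 1 + tol))
    if n_unstable == n_jump:
        return "determinate"
    return "indeterminate" if n_unstable < n_jump else "explosive"

# Diagonal example: roots 0.9 (stable) and 1.5 (unstable)
G0, G1 = np.eye(2), np.diag([0.9, 1.5])
print(bk_check(G0, G1, n_jump=1))   # determinate: 1 unstable root, 1 jump
print(bk_check(G0, G1, n_jump=2))   # indeterminate: too few unstable roots
print(bk_check(G0, G1, n_jump=0))   # explosive: too many unstable roots
```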
28.4 The QZ Decomposition
The standard eigenvalue decomposition is not applicable when $\Gamma_0$ is singular (we cannot form $\Gamma_0^{-1} \Gamma_1$). The generalized Schur (QZ) decomposition handles this case.
Definition 28.4 (QZ Decomposition). For matrices $\Gamma_0, \Gamma_1 \in \mathbb{R}^{n \times n}$, the QZ decomposition finds unitary (orthogonal in the real case) matrices $Q, Z$ ($Q^H Q = Z^H Z = I$) and upper triangular matrices $S, T$ such that:

$$Q^H \Gamma_0 Z = S, \qquad Q^H \Gamma_1 Z = T.$$
The generalized eigenvalues are $\mu_i = T_{ii} / S_{ii}$ (ratios of diagonal elements). When $S_{ii} = 0$: $\mu_i = \infty$ (infinite generalized eigenvalue — automatically outside the unit circle).
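A minimal sketch of the infinite-eigenvalue case, with an illustrative pencil whose $\Gamma_0$ has a zero row (a static condition):

```python
import numpy as np
from scipy.linalg import ordqz

G0 = np.array([[1.0, 0.0],
               [0.0, 0.0]])   # second row zero -> Gamma_0 singular
G1 = np.array([[0.5, 0.0],
               [0.0, 1.0]])
S, T, alpha, beta, Q, Z = ordqz(G0, G1, sort='ouc', output='complex')
# mu_i = T_ii / S_ii; S_ii = 0 yields an infinite generalized eigenvalue
dS, dT = np.abs(np.diag(S)), np.abs(np.diag(T))
mu = np.full(2, np.inf)
finite = dS > 1e-12
mu[finite] = dT[finite] / dS[finite]
print(mu)   # one finite root at 0.5, one infinite root
```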
Algorithm 28.1 (QZ Decomposition — Overview).
1. Compute the generalized Schur form using LAPACK’s DGGES routine.
2. Reorder the Schur form so that stable eigenvalues ($|\mu_i| \le 1$) appear first and unstable eigenvalues ($|\mu_i| > 1$) appear last.
3. Partition the reordered matrices into stable ($n_s$) and unstable ($n_u$) blocks.
4. Extract the decision rules from the block structure.
The reordering step is the key: LAPACK’s DTGSEN routine reorders the QZ factors so eigenvalues appear in any specified order. Sims’ gensys uses a selection criterion to place all eigenvalues with $|\mu_i| > 1$ in the trailing block.
28.5 The gensys Algorithm
Sims (2001) derives the complete decision rule from the QZ decomposition. Here is the algorithm with mathematical detail.
Setup: After QZ with reordering, partition the system. Define $w_t = Z^H y_t$ (transformed variables) and write the system in the QZ form:

$$S w_t = T w_{t-1} + Q^H \left( \Psi z_t + \Pi \eta_t \right).$$
Partition conformably with the stable/unstable split ($n_s$ stable modes first):

$$\begin{bmatrix} S_{11} & S_{12} \\ 0 & S_{22} \end{bmatrix} \begin{bmatrix} w_{1,t} \\ w_{2,t} \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{bmatrix} \begin{bmatrix} w_{1,t-1} \\ w_{2,t-1} \end{bmatrix} + \begin{bmatrix} Q_1 \\ Q_2 \end{bmatrix} \left( \Psi z_t + \Pi \eta_t \right),$$

where $Q_1, Q_2$ are the corresponding row blocks of $Q^H$.
The unstable block (second row): $S_{22} w_{2,t} = T_{22} w_{2,t-1} + Q_2 \left( \Psi z_t + \Pi \eta_t \right)$.
For a unique bounded solution, the unstable modes must be eliminated. The condition: there must exist $\eta_t$ (the expectation errors for the jump variables) such that:

$$Q_2 \Psi z_t + Q_2 \Pi \eta_t = 0 \quad \text{for all } t.$$

Iterating the unstable block forward and imposing boundedness:

$$w_{2,t} = -\sum_{j=1}^{\infty} \left( T_{22}^{-1} S_{22} \right)^{j-1} T_{22}^{-1} Q_2 \, E_t \left( \Psi z_{t+j} + \Pi \eta_{t+j} \right) = 0,$$

since $E_t z_{t+j} = 0$ and $E_t \eta_{t+j} = 0$ for all $j \ge 1$.

For $w_{2,t} = 0$ to be consistent with the unstable block at every date, the expectation errors must absorb the shocks: $Q_2 \Pi \eta_t = -Q_2 \Psi z_t$. Existence requires the columns of $Q_2 \Psi$ to lie in the column space of $Q_2 \Pi$; uniqueness requires $Q_2 \Pi$ to have full column rank — the rank conditions behind the counting rule of Theorem 28.1.
The decision rule (re-transforming $y_t = Z w_t$ with $w_{2,t} = 0$):

$$y_t = C y_{t-1} + D z_t,$$

where $C = Z_1 S_{11}^{-1} T_{11} Z_1^H$ and $D = Z_1 S_{11}^{-1} \left( Q_1 - Q_1 \Pi (Q_2 \Pi)^{+} Q_2 \right) \Psi$ are $n \times n$ and $n \times k$ matrices computable from the QZ factors.
Definition 28.5 (State-Space Solution). The decision rule $y_t = C y_{t-1} + D z_t$ with $z_t \sim \text{iid}(0, \Sigma_z)$ is the state-space solution of the DSGE model. It is directly the state-space model of Chapter 20: set the state transition matrix to $C$ and the shock loading to $D$, and the Kalman filter evaluates its likelihood.
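As a sketch, the decision rule can be simulated directly as the transition equation of a state-space model (the `C`, `D` below are illustrative placeholders, not a calibrated model):

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[0.9, 0.0],
              [0.3, 0.5]])   # stable: both eigenvalues inside the unit circle
D = np.array([[1.0],
              [0.5]])
T_periods = 200
y = np.zeros((2, 1))
sim = np.empty((T_periods, 2))
for t in range(T_periods):
    y = C @ y + D @ rng.standard_normal((1, 1))   # y_t = C y_{t-1} + D z_t
    sim[t] = y.ravel()
print(np.round(sim[-1], 3))
```

Passing such a simulated (or observed) series to the Chapter 20 Kalman filter with transition matrix `C` and shock loading `D` evaluates the likelihood described above.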
28.6 Determinacy and the Taylor Principle: Formal Proof
Theorem 28.2 (Taylor Principle and Determinacy — NK Model). In the NK three-equation model with the two-variable system $y_t = (\pi_t, x_t)'$ and Taylor rule $i_t = \phi_\pi \pi_t + \phi_y x_t$, the Blanchard–Kahn conditions are satisfied — and the equilibrium is unique and bounded — if and only if:

$$\kappa (\phi_\pi - 1) + (1 - \beta) \phi_y > 0.$$
Proof. Both $\pi_t$ and $x_t$ are jump variables ($n_j = 2$, $n_k = 0$). The Blanchard–Kahn condition requires both eigenvalues of the forward transition matrix $A$ in $E_t y_{t+1} = A y_t + a z_t$ to lie outside the unit circle.
From Chapter 18: $\operatorname{tr} A = \dfrac{1 + \sigma \kappa + \beta (1 + \sigma \phi_y)}{\beta}$, $\det A = \dfrac{1 + \sigma \phi_y + \sigma \kappa \phi_\pi}{\beta}$.
Computing the characteristic polynomial: let $p(\lambda) = \lambda^2 - (\operatorname{tr} A) \lambda + \det A$:
For both eigenvalues outside the unit circle, the characteristic polynomial must satisfy (by Schur–Cohn conditions):
$p(1) > 0$: $1 - \operatorname{tr} A + \det A > 0$.
$p(-1) > 0$: $1 + \operatorname{tr} A + \det A > 0$.
$\det A > 1$.
Computing these directly: with all parameters positive and $\beta \in (0, 1)$, both $p(-1) > 0$ and $\det A = (1 + \sigma \phi_y + \sigma \kappa \phi_\pi)/\beta > 1$ hold automatically, so the binding condition is $p(1) > 0$. Substituting the trace and determinant:

$$p(1) = 1 - \operatorname{tr} A + \det A = \frac{\sigma}{\beta} \left[ \kappa (\phi_\pi - 1) + (1 - \beta) \phi_y \right],$$

so the condition reduces after simplification to:

$$\kappa (\phi_\pi - 1) + (1 - \beta) \phi_y > 0.$$

For $\phi_y = 0$: this is $\phi_\pi > 1$ — the Taylor principle. $\blacksquare$
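The theorem can be checked numerically. A minimal sketch (illustrative calibration) comparing the eigenvalue test against the closed-form condition:

```python
import numpy as np

beta, kappa, sigma, phi_y = 0.99, 0.15, 1.0, 0.5

def both_roots_outside(phi_pi):
    # Forward form E_t y_{t+1} = A y_t + ...: B loads expectations,
    # C0 loads current variables, A = B^{-1} C0
    B = np.array([[beta, 0.0], [sigma, 1.0]])
    C0 = np.array([[1.0, -kappa], [sigma * phi_pi, 1.0 + sigma * phi_y]])
    A = np.linalg.solve(B, C0)
    return bool(np.all(np.abs(np.linalg.eigvals(A)) > 1.0))

for phi_pi in (0.5, 0.9, 1.1, 2.0):
    taylor = kappa * (phi_pi - 1.0) + (1.0 - beta) * phi_y > 0.0
    assert both_roots_outside(phi_pi) == taylor
print("eigenvalue test matches the Taylor-principle condition")
```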
28.7 Worked Example: gensys Solution of the NK Model
Cross-reference: Principles Ch. 23.1 (determinacy) [P:Ch.23.1]
Python
import numpy as np
from scipy.linalg import ordqz
def gensys(G0, G1, Psi, Pi, tol=1e-6):
    """
    Sims (2001) gensys: solve G0*y_t = G1*y_{t-1} + Psi*z_t + Pi*eta_t.
    Returns (C, D, eu) where y_t = C*y_{t-1} + D*z_t and
    eu = [existence, uniqueness] flags.
    """
    n = G0.shape[0]
    # QZ decomposition with reordering. scipy sorts on lambda = alpha/beta,
    # which for the pencil (G0, G1) is the RECIPROCAL of the system root
    # mu; 'ouc' (|lambda| > 1) therefore places stable roots (|mu| < 1) first.
    S, T, alpha, beta_v, Q, Z = ordqz(G0, G1, sort='ouc',
                                      output='complex')
    # System roots: mu_i = |beta_i/alpha_i|; alpha_i ~ 0 -> infinite root
    roots = np.full(n, np.inf)
    ok = np.abs(alpha) > tol
    roots[ok] = np.abs(beta_v[ok] / alpha[ok])
    n_unstable = int(np.sum(roots > 1 + tol))
    n_eta = Pi.shape[1]              # number of expectation errors
    eu = [1, 1]                      # [existence, uniqueness]
    if n_unstable > n_eta:
        eu[0] = 0                    # no bounded solution
    elif n_unstable < n_eta:
        eu[1] = 0                    # multiple bounded solutions (sunspots)
    if eu != [1, 1]:
        return None, None, eu
    # Partition: stable block (first n_s rows/columns), unstable block last
    n_s = n - n_unstable
    S11 = S[:n_s, :n_s]
    T11 = T[:n_s, :n_s]
    Z1 = Z[:, :n_s]
    Qh = Q.conj().T                  # scipy convention: G0 = Q @ S @ Z^H
    Q1, Q2 = Qh[:n_s, :], Qh[n_s:, :]
    # Unstable block: w2_t = 0 requires Q2*(Psi z_t + Pi eta_t) = 0, so
    # eta_t = -(Q2 Pi)^{-1} Q2 Psi z_t (square when the counting condition
    # holds with equality)
    eta_load = -np.linalg.solve(Q2 @ Pi, Q2 @ Psi)
    # Stable block: S11 w1_t = T11 w1_{t-1} + Q1*(Psi + Pi*eta_load) z_t
    C = np.real(Z1 @ np.linalg.solve(S11, T11) @ Z1.conj().T)
    D = np.real(Z1 @ np.linalg.solve(S11, Q1 @ Psi + (Q1 @ Pi) @ eta_load))
    return C, D, eu
# NK model in Sims form (3 vars: pi, x, r_n; 1 shock: eps; 2 expectation
# errors). Shift the NKPC and DIS to date t-1 and substitute
# E_{t-1} pi_t = pi_t - eta_pi_t, E_{t-1} x_t = x_t - eta_x_t:
#   beta*pi_t        = pi_{t-1} - kappa*x_{t-1} + beta*eta_pi_t
#   sigma*pi_t + x_t = sigma*phi_pi*pi_{t-1} + (1+sigma*phi_y)*x_{t-1}
#                      - sigma*r_n_{t-1} + sigma*eta_pi_t + eta_x_t
#   r_n_t            = rho_r*r_n_{t-1} + eps_t
beta, kappa, sigma = 0.99, 0.15, 1.0
phi_pi, phi_y, rho_r = 1.5, 0.5, 0.8
G0 = np.array([[beta,  0.0, 0.0],
               [sigma, 1.0, 0.0],
               [0.0,   0.0, 1.0]])
G1 = np.array([[1.0,          -kappa,            0.0],
               [sigma*phi_pi, 1 + sigma*phi_y, -sigma],
               [0.0,          0.0,              rho_r]])
Psi = np.array([[0.0], [0.0], [1.0]])    # eps_t enters the r_n equation
Pi = np.array([[beta,  0.0],
               [sigma, 1.0],
               [0.0,   0.0]])            # loadings on (eta_pi, eta_x)
C, D, eu = gensys(G0, G1, Psi, Pi)
print(f"NK model solution: existence={eu[0]}, uniqueness={eu[1]}")
if eu == [1, 1]:
    print(f"Decision rule matrix C:\n{np.round(C, 4)}")
    print(f"Shock loading D:\n{np.round(D, 4)}")
# IRF to a unit eps shock: y_0 = D, y_h = C y_{h-1} for h >= 1
# (r_n follows its AR(1) inside the system, so no extra terms are needed)
H = 20
irf = np.zeros((H, 3))
state = D.copy()
for h in range(H):
    irf[h] = state.flatten()
    state = C @ state
print(f"\nInflation IRF (first 5): {np.round(irf[:5, 0], 4)}")
# Test the Taylor principle: phi_pi enters the LAGGED matrix G1 here
print("\nDeterminacy test for various phi_pi:")
for phi in [0.5, 0.9, 1.0, 1.1, 1.5, 2.0]:
    G1_t = G1.copy()
    G1_t[1, 0] = sigma * phi
    _, _, eu_t = gensys(G0, G1_t, Psi, Pi)
    label = ('DETERMINATE' if eu_t == [1, 1]
             else 'INDETERMINATE' if eu_t == [1, 0] else 'NO SOLUTION')
    print(f"  phi_pi={phi}: eu={eu_t} {label}")
Julia
using LinearAlgebra
function gensys_simple(G0, G1, Psi; tol=1e-6)
    # Convention here: G0*y_t = G1*E_t[y_{t+1}] + Psi*z_t, z_t AR(1)
    n = size(G0, 1)
    # Generalized Schur form of the pencil (G0, G1):
    # F.Q*F.S*F.Z' = G0, F.Q*F.T*F.Z' = G1; BK roots are F.alpha ./ F.beta
    F = schur(G0, G1)
    roots = ifelse.(abs.(F.beta) .> tol, abs.(F.alpha ./ F.beta), Inf)
    n_unstable = sum(roots .> 1 + tol)
    println("Generalized eigenvalues: ", round.(roots, digits=4))
    println("Unstable count: $n_unstable (need 2: both variables jump)")
    # Equivalent check via A = G0 \ G1: eigenvalues of A are reciprocals
    # of the BK roots, so determinacy needs all |eig(A)| < 1
    A = G0 \ G1
    eigs_A = eigvals(A)
    println("Eigenvalues of A (moduli): ", round.(abs.(eigs_A), digits=4))
    determinate = all(abs.(eigs_A) .< 1 - tol)
    println("Determinate: $determinate")
    if !determinate
        return nothing, nothing
    end
    # MSV solution: iterating y_t = A*E_t[y_{t+1}] + G0^{-1}*Psi*z_t
    # forward with z_t = rho_z*z_{t-1} + eps_t gives
    # y_t = (I - rho_z*A)^{-1} * G0^{-1} * Psi * z_t
    rho_z = 0.8                  # AR(1) persistence
    C_mat = zeros(n, n)          # no predetermined variables: C = 0
    D_mat = (I - rho_z * A) \ (G0 \ Psi)
    return C_mat, D_mat
end
beta, kappa, sigma, phi_pi, phi_y = 0.99, 0.15, 1.0, 1.5, 0.5
G0 = [1.0 -kappa; sigma*phi_pi 1+sigma*phi_y]
G1 = [beta 0.0; sigma 1.0]   # +sigma: E_t pi_{t+1} raises x_t in the DIS
Psi = [0.0; sigma]
C_sol, D_sol = gensys_simple(G0, G1, Psi)
D_sol !== nothing && println("\nMSV solution D: ", round.(D_sol, digits=4))
R
# R — determinacy check via the forward transition matrix
# (a full gensys implementation requires careful QZ reordering)
beta <- 0.99; kappa <- 0.15; sigma <- 1.0; phi_pi <- 1.5; phi_y <- 0.5
G0 <- matrix(c(1, sigma*phi_pi, -kappa, 1 + sigma*phi_y), 2, 2)  # on y_t
G1 <- matrix(c(beta, sigma, 0, 1), 2, 2)              # on E_t[y_{t+1}]
A <- solve(G1) %*% G0    # forward form: E_t[y_{t+1}] = A %*% y_t + ...
eigs <- eigen(A)$values
cat(sprintf("Eigenvalue moduli: %.4f, %.4f\n", Mod(eigs[1]), Mod(eigs[2])))
cat(sprintf("Determinate (both > 1): %s\n", all(Mod(eigs) > 1)))
# Taylor principle test
for (phi in c(0.5, 1.0, 1.5, 2.0)) {
  G0t <- matrix(c(1, sigma*phi, -kappa, 1 + sigma*phi_y), 2, 2)
  At <- solve(G1) %*% G0t
  et <- eigen(At)$values
  cat(sprintf("phi_pi=%.1f: eig moduli=(%.3f,%.3f) %s\n",
              phi, Mod(et[1]), Mod(et[2]),
              ifelse(all(Mod(et) > 1), "DETERMINATE", "INDETERMINATE")))
}
28.8 Programming Exercises
Exercise 28.1 (APL — IRF Computation)
Given the decision rule matrices $C$ and $D$ from gensys, compute the model’s IRF to each structural shock. In APL: irf ← {(C+.×)⍣⍵ ⊢ D+.×shock}¨⍳H (with ⎕IO←0, so horizons $0, \dots, H-1$). (a) Implement for the NK model. (b) Plot the IRF of $\pi_t$ and $x_t$ to a cost-push shock for $H$ quarters. (c) Verify the response satisfies the NKPC and DIS at each horizon.
Exercise 28.2 (Python — Theoretical Variance-Covariance)
From the decision rule $y_t = C y_{t-1} + D z_t$, the theoretical variance–covariance $\Sigma_y$ satisfies the discrete Lyapunov equation: $\Sigma_y = C \Sigma_y C' + D \Sigma_z D'$. Solve this using scipy.linalg.solve_discrete_lyapunov. Compare the implied $\operatorname{var}(\pi_t)$ and $\operatorname{var}(x_t)$ to the welfare loss function components from Chapter 24.
Exercise 28.3 (Julia — Blanchard–Kahn Boundary)
Map out the determinacy region in $(\phi_\pi, \phi_y)$ space for the standard NK calibration. (a) For each pair on a grid: compute the eigenvalues of $A$; flag as determinate if both have modulus $> 1$. (b) Plot the determinacy boundary and overlay the analytical condition $\kappa (\phi_\pi - 1) + (1 - \beta) \phi_y = 0$. (c) Verify they match exactly.
Exercise 28.4 — RBC gensys ($n_k = 1$)
The log-linearized RBC model from Chapter 27 has one predetermined variable ($\hat{k}_t$, the capital stock) and multiple jump variables ($\hat{c}_t$, $\hat{y}_t$, $\hat{i}_t$, ...). Write the full system in Sims canonical form and solve using gensys. Check: (a) the Blanchard–Kahn condition holds (one stable eigenvalue for one predetermined variable); (b) the decision rule has the correct block structure (the stable eigenvalue governs the capital stock’s dynamics).
28.9 Chapter Summary
Key results:
The Sims canonical form $\Gamma_0 y_t = \Gamma_1 y_{t-1} + \Psi z_t + \Pi \eta_t$ accommodates singular $\Gamma_0$ (static conditions, forward-looking variables) via the QZ decomposition.
The QZ decomposition $Q^H \Gamma_0 Z = S$, $Q^H \Gamma_1 Z = T$ gives generalized eigenvalues $\mu_i = T_{ii}/S_{ii}$; reordering places stable modes first.
The Blanchard–Kahn condition (Theorem 28.1): unique bounded solution iff the number of unstable generalized eigenvalues equals the number of jump variables $n_j$.
The state-space decision rule $y_t = C y_{t-1} + D z_t$ is the complete model solution; it feeds directly into the Kalman filter for likelihood evaluation.
The Taylor principle $\kappa (\phi_\pi - 1) + (1 - \beta) \phi_y > 0$ is proved (Theorem 28.2) as the necessary and sufficient condition for both eigenvalues of the forward transition matrix $A$ to lie outside the unit circle.
In APL: IRFs are {(C+.×)⍣⍵ ⊢ D+.×shock}¨⍳H; the Lyapunov equation for theoretical variances solves as $\operatorname{vec}(\Sigma_y) = (I - C \otimes C)^{-1} \operatorname{vec}(D \Sigma_z D')$, a matrix divide (⌹) in APL.
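The Lyapunov and vec formulas in the last bullet agree; a minimal Python check with illustrative `C`, `D` and $\Sigma_z = I$:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

C = np.array([[0.9, 0.0],
              [0.3, 0.5]])
D = np.array([[1.0],
              [0.5]])
# Sigma_y = C Sigma_y C' + D D'
Sigma_y = solve_discrete_lyapunov(C, D @ D.T)
# vec form: vec(Sigma_y) = (I - kron(C, C))^{-1} vec(D D')
vec_S = np.linalg.solve(np.eye(4) - np.kron(C, C), (D @ D.T).ravel())
assert np.allclose(Sigma_y.ravel(), vec_S)
print(np.round(Sigma_y, 4))
```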
Next: Chapter 29 — Perturbation Methods: Higher-Order Approximations