Business Cycle Dynamics
“In discrete time, every model is an iterated map. The question is whether that map has a fixed point, a cycle, or chaos.”
Cross-reference: Principles Ch. 6 (time-series properties, HP filter); Ch. 10 (NKPC as a forward difference equation); Ch. 16 (rational expectations, forward solutions); Ch. 27 (RBC model log-linearized as a discrete system) [P:Ch.6, P:Ch.10, P:Ch.16, P:Ch.27]
4.1 Why Discrete Time? Data, Policy, and Computation¶
The choice between continuous and discrete time is not merely aesthetic. Several compelling reasons push modern macroeconomics toward discrete-time formulations.
Data observability. GDP is measured quarterly, inflation monthly, the federal funds rate daily. The natural time unit for policy analysis — the period between Federal Open Market Committee meetings — is six weeks. A continuous-time model that maps cleanly to quarterly data requires a discretization step anyway; it is simpler to start in discrete time.
Computational tractability. Value function iteration, the Kalman filter, the Metropolis–Hastings sampler for Bayesian DSGE estimation — all are inherently discrete. The state space must be discretized, time must be measured in steps, and probability distributions are approximated by finite grids.
Forward-looking equations. The New Keynesian Phillips Curve [P:Ch.10] and the Dynamic IS curve are difference equations with expectations of future values. These arise naturally from discrete-time optimization but have no direct continuous-time analogue for policy analysis.
This chapter develops the theory of difference equations — the discrete-time counterpart to Chapter 3’s ODEs. Every result has a direct parallel in the continuous-time theory; the key substitution is $e^{\lambda t} \to \lambda^t$, and the stability condition changes from $\operatorname{Re}(\lambda) < 0$ to $|\lambda| < 1$.
4.2 First-Order Linear Difference Equations¶
4.2.1 The Homogeneous Case¶
The equation $x_{t+1} = a x_t$ has the solution:

$$x_t = a^t x_0.$$

Stability is determined by $|a|$:

$|a| < 1$: $x_t \to 0$ (stable).

$|a| = 1$: $|x_t| = |x_0|$ constant (neutrally stable).

$|a| > 1$: $|x_t| \to \infty$ (unstable).
Theorem 4.1 (Stability of First-Order Difference Equations). The equilibrium $x^* = 0$ of $x_{t+1} = a x_t$ is globally asymptotically stable if and only if $|a| < 1$.
4.2.2 The Nonhomogeneous Case¶
The equation $x_{t+1} = a x_t + b$ (with $a \neq 1$) has the general solution:

$$x_t = a^t (x_0 - x^*) + x^*, \qquad x^* = \frac{b}{1-a}.$$

For $|a| < 1$, $x_t \to x^*$ as $t \to \infty$.

Convergence speed: The gap $x_t - x^*$ shrinks by factor $|a|$ each period. After $t$ periods, the gap is $a^t (x_0 - x^*)$.
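The convergence claim is easy to check numerically. A minimal Python sketch (the values $a = 0.8$, $b = 1$, $x_0 = 10$ are illustrative, not from the text):

```python
# Iterate x_{t+1} = a*x_t + b and compare with the closed form
# x_t = a^t (x_0 - x*) + x*, where x* = b/(1-a).
a, b, x0 = 0.8, 1.0, 10.0
xstar = b / (1 - a)            # steady state = 5.0
x = x0
path = [x]
for _ in range(50):
    x = a * x + b
    path.append(x)
closed = [a**t * (x0 - xstar) + xstar for t in range(51)]
assert all(abs(p - c) < 1e-9 for p, c in zip(path, closed))
print(path[50] - xstar)        # remaining gap = a^50 * (x0 - x*), tiny
```

The remaining gap after 50 periods is $a^{50}(x_0 - x^*)$, illustrating the geometric convergence rate.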
4.2.3 Backward vs. Forward Solutions¶
A crucial distinction in macroeconomics is between backward-looking (predetermined) variables, whose current value is determined by past behavior, and forward-looking (free) variables, whose current value is determined by expected future behavior.
Backward solution for $|a| < 1$: start from the initial condition $x_0$ and iterate $x_{t+1} = a x_t + b_t$ forward. Requires $|a| < 1$ for stability.

Forward solution for $|a| > 1$: rearrange as $x_t = \frac{1}{a} x_{t+1} - \frac{1}{a} b_t$ and iterate forward:

$$x_t = a^{-T} x_{t+T} - \sum_{j=0}^{T-1} a^{-(j+1)} b_{t+j}.$$

For the forward solution to be bounded (a transversality condition), we need $a^{-T} x_{t+T} \to 0$ as $T \to \infty$, which requires $|1/a| < 1$ (so $|a| > 1$). If $\lim_{T \to \infty} a^{-T} x_{t+T} = 0$, the forward solution is:

$$x_t = -\sum_{j=0}^{\infty} a^{-(j+1)} b_{t+j}.$$

This is the present-value formula structure that appears throughout the New Keynesian model. The NKPC forward solution is exactly this form with $a = 1/\beta$ and $b_t = -(\kappa/\beta) x_t$, giving $\pi_t = \kappa \sum_{j=0}^{\infty} \beta^j E_t x_{t+j}$ [P:Ch.10].
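The present-value form can be verified numerically. A hedged Python sketch: assume the output gap is expected to decay geometrically, $E_t x_{t+j} = \rho^j x_t$, so the infinite sum has the closed form $\kappa x_t / (1 - \beta\rho)$ (the parameter values are illustrative):

```python
# Forward solution of pi_t = beta*E_t pi_{t+1} + kappa*x_t,
# assuming E_t x_{t+j} = rho^j * x_t (geometric decay).
beta, kappa, rho, x_t = 0.99, 0.15, 0.8, 1.0
# Closed form: pi_t = kappa/(1 - beta*rho) * x_t
pi_closed = kappa / (1 - beta * rho) * x_t
# Truncated present-value sum: kappa * sum_j beta^j * E_t x_{t+j}
pi_pv = kappa * sum(beta**j * rho**j * x_t for j in range(500))
print(pi_closed, pi_pv)   # the two agree once the tail is negligible
```

With 500 terms the truncation error is of order $(\beta\rho)^{500}$, i.e. numerically zero.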
4.3 Second-Order Difference Equations¶
Second-order equations appear in the multiplier–accelerator model of Samuelson (1939) [P:Ch.8] and in continuous-time models after discretization. The equation $x_t = a_1 x_{t-1} + a_2 x_{t-2} + b$ has the general solution:

$$x_t = c_1 \lambda_1^t + c_2 \lambda_2^t + x^*,$$

where $\lambda_1, \lambda_2$ are the roots of the characteristic equation:

$$\lambda^2 - a_1 \lambda - a_2 = 0,$$

and $x^* = \frac{b}{1 - a_1 - a_2}$ is the particular solution (assumed $a_1 + a_2 \neq 1$). The constants $c_1$, $c_2$ are determined by two initial conditions $x_0$ and $x_1$.
Classification by roots (discriminant $\Delta = a_1^2 + 4 a_2$):

| Discriminant | Roots | Path Type |
|---|---|---|
| $\Delta > 0$ | Two distinct real roots | Monotone (same sign) or S-shaped |
| $\Delta = 0$ | Repeated real root | Monotone or boundary case |
| $\Delta < 0$ | Complex conjugate pair | Oscillatory |
For complex roots, write $\lambda_{1,2} = r(\cos\theta \pm i \sin\theta)$, where $r = \sqrt{-a_2}$ and $\cos\theta = \frac{a_1}{2r}$:

$$x_t = C\, r^t \cos(\theta t + \phi) + x^*.$$

The system oscillates with period $2\pi/\theta$ and amplitude decaying like $r^t$. Stability requires $r = \sqrt{-a_2} < 1$.
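The modulus and period formulas can be checked against the roots directly. A minimal Python sketch with illustrative coefficients $a_1 = 1$, $a_2 = -0.5$ (discriminant $1 - 2 = -1 < 0$, so the roots are complex):

```python
import cmath, math
# Characteristic roots of lambda^2 - a1*lambda - a2 = 0, complex case.
a1, a2 = 1.0, -0.5
disc = a1**2 + 4 * a2            # = -1 < 0: complex conjugate pair
lam = (a1 + cmath.sqrt(disc)) / 2   # = 0.5 + 0.5i
r = abs(lam)                     # modulus: should equal sqrt(-a2)
theta = cmath.phase(lam)         # = pi/4
print(r, math.sqrt(-a2))         # both sqrt(0.5) ≈ 0.7071
print(2 * math.pi / theta)       # oscillation period = 8 periods
```

The root $\lambda = \tfrac{1}{2}(1 + i)$ has modulus $\sqrt{0.5} < 1$ and argument $\pi/4$, so the path is a damped cycle completing one full oscillation every 8 periods.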
4.3.1 The Multiplier–Accelerator Model¶
Samuelson’s (1939) model of endogenous business cycles [P:Ch.8] sets:

$$C_t = b Y_{t-1}, \qquad I_t = v (C_t - C_{t-1}), \qquad Y_t = C_t + I_t + G,$$

where $b \in (0,1)$ is the MPC and $v > 0$ is the accelerator. Substituting:

$$Y_t = b(1+v) Y_{t-1} - b v\, Y_{t-2} + G.$$

This is exactly a second-order difference equation with $a_1 = b(1+v)$ and $a_2 = -bv$. The steady state is $Y^* = \frac{G}{1-b}$.

Stability: Requires both roots $|\lambda_{1,2}| < 1$. Equivalently, the necessary and sufficient conditions are $1 - a_1 - a_2 > 0$ and $|a_2| < 1$, i.e., $1 - b > 0$ (always true) and $bv < 1$. Stability requires $bv < 1$.

Business cycles arise from complex roots: When $a_1^2 + 4 a_2 < 0$, i.e., $b(1+v)^2 < 4v$, the roots are complex and the system oscillates. The amplitude $r = \sqrt{bv}$ determines whether cycles decay ($bv < 1$), persist ($bv = 1$), or explode ($bv > 1$).
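These conditions can be checked for the calibration used in Exercise 4.1 ($b = 0.8$, $v = 1.2$). A hedged Python sketch:

```python
import math
# Samuelson roots for b=0.8, v=1.2: Y_t = a1*Y_{t-1} + a2*Y_{t-2} + G.
b, v = 0.8, 1.2
a1, a2 = b * (1 + v), -b * v
disc = a1**2 + 4 * a2
assert disc < 0                 # b(1+v)^2 = 3.872 < 4v = 4.8: complex roots
r = math.sqrt(b * v)            # amplitude factor sqrt(0.96) ≈ 0.98
theta = math.acos(a1 / (2 * r))
print(r)                        # slow decay: bv = 0.96 < 1, barely stable
print(2 * math.pi / theta)      # oscillation period, roughly 14 periods
```

With $bv = 0.96$ just below one, the model produces slowly damped cycles — exactly the configuration Samuelson used to generate realistic-looking business cycles.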
4.4 Systems of Linear Difference Equations¶
The discrete-time state-space form:

$$x_{t+1} = A x_t + B u_t,$$

where $x_t \in \mathbb{R}^n$ is the state vector and $u_t$ is an exogenous input (shock or policy variable). This is the linearized DSGE model after log-linearization — the starting point of Part VII.

General solution: Iterating the state equation:

$$x_t = A^t x_0 + \sum_{j=0}^{t-1} A^j B u_{t-1-j}.$$

Using the eigendecomposition $A = P \Lambda P^{-1}$ (when it exists), $A^t = P \Lambda^t P^{-1}$.

Stability: Requires all eigenvalues $\lambda_i$ of $A$ to satisfy $|\lambda_i| < 1$.

Impulse response: The response of the state at horizon $h$ to a unit impulse in the $i$-th element of $u_t$ (with $u_{t+j} = 0$ for $j > 0$) is the $i$-th column of $A^h B$. This is computed efficiently using repeated matrix multiplication or via the eigendecomposition.
⎕IO←0 ⋄ ⎕ML←1
A ← 2 2 ⍴ 0.85 0.10 ¯0.05 0.90
B ← 0.5 ¯0.3   ⍝ simplified to a vector to match the state dimension
shock ← 0.1
H ← 20
⍝ 1. Impulse response: apply the transition map h times to the impact vector
Step ← {A +.× ⍵}
impact ← B × shock
irf ← {(Step⍣⍵) impact}¨ ⍳H   ⍝ horizon-h response is A^h (B×shock)
⍝ Display IRF as an H×2 matrix
↑ irf
⍝ 2. AR(1) shock process, built by explicit recursion
⍝    (scan \ folds right-to-left in APL, so it cannot express this left fold)
T ← 100
e ← 0.1 × (¯0.5 + ?T⍴0)
AR1 ← {0=≢⍵: 1↓⍺ ⋄ (⍺,(⊃⍵)+0.9×⊃⌽⍺) ∇ 1↓⍵}
shocks ← (,0) AR1 e
⍝ 3. State simulation x_{t+1} = A x_t + B u_t, again by recursion
Sim ← {0=≢⍵: ⍺ ⋄ (⍺,⊂(A +.× ⊃⌽⍺)+B×⊃⍵) ∇ 1↓⍵}
sim_path ← (,⊂2⍴0) Sim shocks
⍝ Show first 5 states
5 ↑ sim_path
]plot ↑irf
]plot ↑sim_path
4.5 Forward-Looking Difference Equations and the Minimum State Variable Solution¶
The distinctive challenge of modern macroeconomics is that many key equations involve expected future values of endogenous variables. The NKPC is $\pi_t = \beta E_t \pi_{t+1} + \kappa x_t$; the DIS is $x_t = E_t x_{t+1} - \sigma (i_t - E_t \pi_{t+1} - r_t^n)$. These are expectational difference equations with no unique solution without additional restrictions.
Definition 4.1 (Minimum State Variable Solution). The minimum state variable (MSV) solution is the unique solution of a linear expectational difference equation that expresses each endogenous variable as a linear function of the minimal set of state variables — typically the exogenous forcing variables and any predetermined (backward-looking) variables.
McCallum’s (1983) algorithm for finding the MSV solution:
Algorithm 4.1 (McCallum’s Undetermined Coefficients).
Given the expectational system $y_t = A\, E_t y_{t+1} + B z_t$, where $z_t$ is a vector of exogenous state variables with law of motion $z_{t+1} = \Phi z_t + \varepsilon_{t+1}$:

Guess an MSV solution: $y_t = \Omega z_t$ (endogenous variables are linear functions of exogenous states).

Compute $E_t y_{t+1} = \Omega \Phi z_t$.

Substitute into the original equation: $\Omega z_t = A \Omega \Phi z_t + B z_t$.

Match coefficients: $\Omega = A \Omega \Phi + B$.

Solve this matrix equation for $\Omega$. This is a discrete Sylvester equation of the form $\Omega - A \Omega \Phi = B$.

The Sylvester equation can be vectorized: $(I - \Phi^\top \otimes A)\operatorname{vec}(\Omega) = \operatorname{vec}(B)$, which is a linear system in $\operatorname{vec}(\Omega)$. It has a unique solution when $I - \Phi^\top \otimes A$ is nonsingular — equivalently, when no eigenvalue of $A$ equals the reciprocal of any eigenvalue of $\Phi$.
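The vectorized system can be solved directly with a Kronecker product. A minimal Python sketch (the matrices $A$, $\Phi$, $B$ below are illustrative values, not a calibration from the text):

```python
import numpy as np
# Solve the discrete Sylvester equation Omega = A Omega Phi + B via
# vec(Omega) = (I - Phi' ⊗ A)^{-1} vec(B)  (column-stacking vec).
A   = np.array([[0.5, 0.1], [0.0, 0.6]])
Phi = np.array([[0.9, 0.0], [0.2, 0.7]])
B   = np.array([[1.0, 0.0], [0.0, 1.0]])
n, k = B.shape
lhs = np.eye(n * k) - np.kron(Phi.T, A)
vecB = B.flatten(order='F')              # column-major vec, matching vec(AXB)=(B'⊗A)vec(X)
Omega = np.linalg.solve(lhs, vecB).reshape((n, k), order='F')
# Verify the fixed point Omega = A Omega Phi + B
assert np.allclose(Omega, A @ Omega @ Phi + B)
```

Here all eigenvalues of $A$ are below 1 and those of $\Phi$ below 1, so no product of eigenvalues equals 1 and the solution is unique.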
4.5.1 Application: The NK Model Under a Taylor Rule¶
Cross-reference: Principles Ch. 10, Ch. 23 [P:Ch.10, P:Ch.23]
The three-equation NK model:

$$\pi_t = \beta E_t \pi_{t+1} + \kappa x_t + u_t \qquad \text{(NKPC)}$$
$$x_t = E_t x_{t+1} - \sigma (i_t - E_t \pi_{t+1}) \qquad \text{(DIS)}$$
$$i_t = \phi_\pi \pi_t + \phi_y x_t \qquad \text{(Taylor rule)}$$

Substituting the Taylor rule into the DIS eliminates the policy rate. The algebra is cleanest in compact matrix form: stack $y_t = (\pi_t, x_t)^\top$ and note that both are jump variables (free to adjust at each date). The system can be written:

$$y_t = A\, E_t y_{t+1} + B u_t.$$

This is the canonical form of Algorithm 4.1 with $z_t = u_t$ and $\Phi = \rho_u$. The MSV solution is $y_t = \Omega u_t$, where $\Omega$ solves the scalar Sylvester equation $(I - \rho_u A)\Omega = B$ (since $u_t$ follows an AR(1) with persistence $\rho_u$).
Determinacy: The system has a unique bounded solution (the MSV solution is the only equilibrium) iff the number of eigenvalues of $A^{-1}$ outside the unit circle equals the number of free variables — here, both $\pi_t$ and $x_t$ are free, so we need exactly 2 eigenvalues of $A^{-1}$ outside the unit circle. This is the Blanchard–Kahn condition for this model (developed fully in Chapter 28).
The Taylor principle ensures this: in the 2×2 NK model, $\phi_\pi + \frac{1-\beta}{\kappa}\phi_y > 1$ implies both eigenvalues of $A^{-1}$ have modulus greater than 1, satisfying the Blanchard–Kahn condition. This is the formal derivation of the result stated in Principles Ch. 23 [P:Ch.23.1] that the Taylor principle is necessary and sufficient for determinacy.
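This eigenvalue count can be checked numerically. A hedged Python sketch: writing the model forward as $E_t y_{t+1} = M y_t$ with $M = A^{-1}$ and substituting the Taylor rule into the standard DIS yields (under my derivation — verify against your own algebra) the matrix below; the calibration matches the worked example:

```python
import numpy as np
# Blanchard-Kahn check for the NK model under a Taylor rule.
# Forward form E_t y_{t+1} = M y_t, y = (pi, x)', derived from the
# standard NKPC and DIS with i_t = phi_pi*pi_t + phi_y*x_t:
#   E_t pi_{t+1} = (pi_t - kappa*x_t)/beta
#   E_t x_{t+1}  = sigma*(phi_pi - 1/beta)*pi_t + (1 + sigma*(phi_y + kappa/beta))*x_t
beta, kappa, sigma, phi_pi, phi_y = 0.99, 0.15, 1.0, 1.5, 0.5
M = np.array([[1/beta,                 -kappa/beta],
              [sigma*(phi_pi - 1/beta), 1 + sigma*(phi_y + kappa/beta)]])
moduli = np.abs(np.linalg.eigvals(M))
print(moduli)              # both > 1 when the Taylor principle holds
assert (moduli > 1).all()  # 2 explosive roots = 2 jump variables: determinacy
```

With $\phi_\pi = 1.5$ both moduli exceed one, so the Blanchard–Kahn count matches the two jump variables and the equilibrium is determinate.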
4.6 Stability Analysis and the Unit Circle¶
For a system , stability is determined by the eigenvalues of relative to the unit circle in the complex plane.
Definition 4.2 (Stable, Unstable, and Centre Subspaces). Decompose $\mathbb{R}^n$ into three subspaces based on the eigenvalues of $A$:

Stable subspace $E^s$: spanned by generalised eigenvectors corresponding to $|\lambda_i| < 1$.

Unstable subspace $E^u$: spanned by generalised eigenvectors corresponding to $|\lambda_i| > 1$.

Centre subspace $E^c$: spanned by generalised eigenvectors corresponding to $|\lambda_i| = 1$.

For the DSGE model with $n_1$ predetermined variables and $n_2$ free (jump) variables, the Blanchard–Kahn condition requires:

Number of eigenvalues of $A$ outside the unit circle = number of free variables $n_2$.
When this holds, the model has a unique bounded rational expectations equilibrium. When there are too few eigenvalues outside the unit circle, there are multiple equilibria (indeterminacy — sunspot equilibria exist). When there are too many, no bounded equilibrium exists (the model “explodes”).
Checking stability in APL:
⎕IO←0 ⋄ ⎕ML←1
A ← 2 2 ⍴ 1.2 0.3 ¯0.1 0.85
⍝ 1. Calculate Trace and Determinant
tr ← A[0;0] + A[1;1]
det ← (A[0;0]×A[1;1]) - (A[0;1]×A[1;0])
⍝ 2. Solve Quadratic: λ² - tr*λ + det = 0
⍝ We calculate the two roots separately to avoid vector confusion
disc ← (tr*2) - 4×det
root_part ← disc * 0.5
eig1 ← (tr + root_part) ÷ 2
eig2 ← (tr - root_part) ÷ 2
eig_vals ← eig1, eig2
⍝ 3. Moduli and BK Condition
moduli ← | eig_vals
n_outside ← +/ moduli > 1
⍝ Display
'Eigenvalues:' eig_vals
'Moduli: ' moduli
'BK Count: ' n_outside
4.7 Worked Example: Solving the Two-Equation NK System¶
Cross-reference: Principles Ch. 10, Ch. 23 [P:Ch.10, P:Ch.23]
Consider the simplified NK model with only a cost-push shock $u_t$:

$$\pi_t = \beta E_t \pi_{t+1} + \kappa x_t + u_t$$
$$x_t = E_t x_{t+1} - \sigma i_t$$
$$i_t = \phi_\pi \pi_t + \phi_y x_t$$
$$u_t = \rho_u u_{t-1} + \varepsilon_t$$

(In this simplified DIS the policy rate enters directly; the expected-inflation term in the real rate is suppressed.)

Calibration: $\beta = 0.99$, $\kappa = 0.15$, $\sigma = 1$, $\phi_\pi = 1.5$, $\phi_y = 0.5$, $\rho_u = 0.7$.

Step 1: Canonical form. Stack $y_t = (\pi_t, x_t)^\top$ (the shock $u_t$ is a state variable since it is predetermined). Write:

$$y_t = A\, E_t y_{t+1} + B u_t,$$

but since $u_t$ is predetermined, it belongs in the state vector rather than as a free variable.

Step 2: MSV guess. Since the only state is $u_t$, guess:

$$\pi_t = \omega_\pi u_t, \qquad x_t = \omega_x u_t.$$

Step 3: Solve for coefficients. Substituting $E_t \pi_{t+1} = \rho_u \omega_\pi u_t$, $E_t x_{t+1} = \rho_u \omega_x u_t$, and $i_t = (\phi_\pi \omega_\pi + \phi_y \omega_x) u_t$ into the NKPC:

$$\omega_\pi = \beta \rho_u \omega_\pi + \kappa \omega_x + 1.$$

And into the DIS (with Taylor rule substituted):

$$\omega_x = \rho_u \omega_x - \sigma (\phi_\pi \omega_\pi + \phi_y \omega_x).$$

Step 4: Solve the 2×2 system. Let $a = 1 - \beta \rho_u$, $b = 1 - \rho_u + \sigma \phi_y$, $c = \sigma \phi_\pi$:

$$a\,\omega_\pi - \kappa\,\omega_x = 1, \qquad b\,\omega_x = -c\,\omega_\pi.$$

Substituting the second into the first:

$$\omega_\pi = \frac{b}{ab + c\kappa}.$$

Then $\omega_x = -\frac{c}{b}\,\omega_\pi = -\frac{c}{ab + c\kappa}$.

Step 5: Numerical evaluation. $a = 1 - 0.99 \times 0.7 = 0.307$, $b = 1 - 0.7 + 1 \times 0.5 = 0.8$, $c = 1.5$; hence $ab + c\kappa = 0.2456 + 0.225 = 0.4706$, so $\omega_\pi \approx 1.700$ and $\omega_x \approx -3.188$.
Interpretation: A positive cost-push shock of 1 unit raises inflation by 1.700 and lowers the output gap by 3.188. The large output gap response reflects the central bank’s aggressive tightening (high ) in response to the inflation increase.
⍝ APL — MSV solution for 2-variable NK system
⎕IO←0 ⋄ ⎕ML←1
beta←0.99 ⋄ kappa←0.15 ⋄ sigma←1
phi_pi←1.5 ⋄ phi_y←0.5 ⋄ rho_u←0.7
a ← 1 - beta×rho_u
b ← 1 - rho_u + sigma×phi_y
c ← sigma×phi_pi
denom ← (a×b) + c×kappa
omega_pi ← b÷denom
omega_x ← (-c)÷denom
omega_pi ⍝ ≈ 1.700
omega_x ⍝ ≈ ¯3.188
⍝ Impulse response: response to unit shock at t=0
T ← 20
irf_pi ← omega_pi × rho_u * ⍳T ⍝ omega_pi × rho_u^t
irf_x ← omega_x × rho_u * ⍳T
irf_pi ⍝ inflation response path
irf_x ⍝ output gap response path
]plot irf_x
]plot irf_pi
4.8 The Cobweb Model and Expectational Stability¶
The classic cobweb model illustrates how adaptive versus rational expectations change the dynamics of a market with production lags.
Supply: $q_t^s = a + \gamma\, p_t^e$ (firms produce based on expected price). Demand: $q_t^d = b - \delta\, p_t$ (consumers respond to actual price).

Market clearing: $q_t^s = q_t^d$, so:

$$p_t = \frac{b - a}{\delta} - \frac{\gamma}{\delta}\, p_t^e.$$

Under adaptive expectations (naive: expect last period’s price, $p_t^e = p_{t-1}$):

$$p_t = \frac{b - a}{\delta} - \frac{\gamma}{\delta}\, p_{t-1}.$$

This is a first-order difference equation with coefficient $-\gamma/\delta$. Stability requires $|\gamma/\delta| < 1$, i.e., supply is less elastic than demand ($\gamma < \delta$). If supply is more elastic ($\gamma > \delta$), the cobweb explodes.
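The converging cobweb is easy to simulate. A minimal Python sketch with illustrative slopes and intercepts (the specific numbers are assumptions, not from the text):

```python
# Cobweb dynamics under naive expectations: p_t = (b-a)/delta - (gamma/delta)*p_{t-1}.
a, b = 2.0, 10.0           # supply/demand intercepts (hypothetical)
gamma, delta = 0.8, 1.0    # gamma < delta: stable, oscillating convergence
p = 1.0                    # initial price
path = [p]
for _ in range(40):
    p = (b - a) / delta - (gamma / delta) * p
    path.append(p)
p_star = (b - a) / (delta + gamma)   # fixed point of the price map
print(path[:4])                      # alternates above/below p_star
print(path[-1], p_star)              # converges to (b-a)/(delta+gamma)
```

Because the coefficient $-\gamma/\delta$ is negative, the price overshoots the equilibrium in alternating directions — the characteristic cobweb pattern — while $|\gamma/\delta| = 0.8 < 1$ guarantees the oscillations damp out.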
Under rational expectations (firms correctly anticipate the equilibrium price), the market always clears at the rational expectations equilibrium price — no cobweb dynamics at all. Rational expectations “flatten” the cobweb by eliminating the lag between expectation and realization.
This example, while simple, illustrates the fundamental point of Principles Chapter 16 [P:Ch.16]: rational expectations can eliminate systematic forecast errors and with them the price dynamics that drive business cycles. Whether prices and wages are actually set as rationally as this model implies is an empirical question — but it sets the theoretical benchmark.
4.9 Programming Exercises¶
Exercise 4.1 (APL — Multiplier–Accelerator)¶
Implement the Samuelson multiplier–accelerator dynamics by iterating a function that appends each new output value to the path.
⎕IO←0 ⋄ ⎕ML←1
b ← 0.8 ⋄ v ← 1.2 ⋄ G ← 100
Yss ← G ÷ 1 - b
initial ← Yss, (Yss × 1.05)
⍝ Samuelson logic: Y_t = b(1+v)Y_{t-1} - bvY_{t-2} + G
⍝ ⍵ is the growing vector of Y values; parenthesise so G enters with a + sign
f ← {⍵, G + (b×(1+v)×⊃⌽⍵) - b×v×⊃⌽¯1↓⍵}
⍝ Run for 15 steps
Y_seq ← f⍣15 ⊢ initial
⍝ Display as a column
⍪ Y_seq
]plot Y_seq
Exercise 4.2 (Python — Stability Diagram)¶
import numpy as np
import matplotlib.pyplot as plt
b_vals = np.linspace(0, 1, 100)
v_vals = np.linspace(0, 3, 100)
B, V = np.meshgrid(b_vals, v_vals)
# Characteristic roots
p = -B*(1+V)
q = B*V
disc = p**2 - 4*q
lambda_modulus = np.where(disc >= 0,
np.maximum(np.abs((-p + np.sqrt(np.abs(disc)))/2),
np.abs((-p - np.sqrt(np.abs(disc)))/2)),
np.sqrt(q)) # for complex roots, modulus = sqrt(q)
stable = lambda_modulus < 1
oscillatory = (disc < 0) & stable
fig, ax = plt.subplots()
ax.contourf(B, V, stable.astype(int), levels=[0.5,1.5], colors=['lightblue'], alpha=0.5)
ax.contour(B, V, oscillatory.astype(int), levels=[0.5], colors=['red'])
ax.set_xlabel('MPC (b)'); ax.set_ylabel('Accelerator (v)')
ax.set_title('Stability diagram: blue=stable, red boundary=oscillatory')
plt.show()
Exercise 4.3 (Julia — MSV Solution)¶
# MSV solution for arbitrary NK calibration
function msv_nk(; beta=0.99, kappa=0.15, sigma=1.0,
phi_pi=1.5, phi_y=0.5, rho_u=0.7)
a = 1 - beta*rho_u
b = 1 - rho_u + sigma*phi_y
c = sigma*phi_pi
denom = a*b + c*kappa
omega_pi = b / denom
omega_x = -c / denom
return (omega_pi=omega_pi, omega_x=omega_x)
end
sol = msv_nk()
println("ω_π = $(round(sol.omega_pi, digits=3))")
println("ω_x = $(round(sol.omega_x, digits=3))")
# Sweep over phi_pi to show determinacy
phi_pis = 0.5:0.1:3.0
for φ in phi_pis
s = msv_nk(phi_pi=φ)
println("φ_π = $φ: ω_π = $(round(s.omega_pi,digits=3)), ω_x = $(round(s.omega_x,digits=3))")
end
Exercise 4.4 (R — Forward Solution)¶
# Forward solution of NKPC: pi_t = kappa * sum_{j>=0} beta^j E_t[x_{t+j}]
# Given AR(1) output gap: x_t = rho_x * x_{t-1} + eps_t
kappa <- 0.15; beta <- 0.99; rho_x <- 0.8
# Closed-form: pi_t = kappa/(1-beta*rho_x) * x_t
omega_pi_forward <- kappa / (1 - beta*rho_x)
cat("π_t = ", round(omega_pi_forward, 4), "* x_t\n")
# Simulate and verify numerically
T <- 1000
set.seed(42)
x <- filter(rnorm(T, sd=0.1), rho_x, method="recursive")
pi_formula <- omega_pi_forward * x
# Compare to direct present-value calculation
pi_pv <- sapply(1:(T-50), function(t) {
kappa * sum(sapply(0:49, function(j) beta^j * x[t+j]))
})
cat("Max discrepancy:", max(abs(pi_formula[1:(T-50)] - pi_pv)), "\n")
Exercise 4.5 — Blanchard–Kahn¶
For the 2×2 NK system in Section 4.5.1, write the forward matrix $A^{-1}$ explicitly and compute its eigenvalues for the calibration in Worked Example 4.7. Verify that both eigenvalues lie outside the unit circle when $\phi_\pi = 1.5$ and $\phi_y = 0.5$, and find the critical value of $\phi_\pi$ below which one eigenvalue moves inside the unit circle (violating determinacy). Show this threshold is consistent with the Taylor principle $\phi_\pi + \frac{1-\beta}{\kappa}\phi_y > 1$.
Exercise 4.6 — Indeterminacy and Sunspots¶
When $\phi_\pi + \frac{1-\beta}{\kappa}\phi_y < 1$, the NK model has indeterminate equilibria. The general solution becomes $y_t = \Omega u_t + s_t$, where $s_t$ is driven by an arbitrary martingale difference sequence $\zeta_t$ (the “sunspot”). (a) Explain why this generates volatility unrelated to fundamentals. (b) Simulate the model under a passive rule ($\phi_\pi < 1$) with a sunspot and compare the resulting volatility of $\pi_t$ and $x_t$ to the fundamental-only case $\zeta_t = 0$. (c) Show that the Taylor principle eliminates the sunspot component by pushing both eigenvalues of $A^{-1}$ outside the unit circle.
4.10 Chapter Summary¶
Key results:
First-order linear difference equations $x_{t+1} = a x_t + b$ are stable iff $|a| < 1$, with convergence rate $|a|$ per period and steady state $x^* = b/(1-a)$.
Backward solutions require $|a| < 1$ (predetermined variables); forward solutions require $|a| > 1$ (free/jump variables). The present-value formula structure of the NKPC arises from the forward solution with $a = 1/\beta$.
Second-order equations generate oscillatory dynamics when the discriminant $a_1^2 + 4 a_2 < 0$; stability requires both characteristic roots to lie inside the unit circle.
The MSV solution of an expectational difference equation is found by guessing a linear function of the state variables and matching coefficients — the Sylvester equation $\Omega = A \Omega \Phi + B$.
The Blanchard–Kahn condition for a unique bounded rational expectations equilibrium: number of eigenvalues outside the unit circle equals number of free variables.
In APL: repeated application of the transition map with the power operator (`{A +.× ⍵}⍣h`) computes impulse responses efficiently; recursive dfns that catenate each new state collect path histories; `rho_u * ⍳T` generates geometric decay sequences directly.
Connections forward: Chapter 14 revisits difference equations in the multiplier–accelerator context. Chapter 18 develops the full rational expectations solution methodology (undetermined coefficients, the Sims algorithm). Chapter 28 provides the definitive treatment of the Blanchard–Kahn conditions and the QZ decomposition algorithm for DSGE models.
Next: Chapter 5 — Stochastic Processes for Aggregate Shocks