
Chapter 4: Difference Equations in Discrete-Time Macro Models

kapitaali.com

Business Cycle Dynamics

“In discrete time, every model is an iterated map. The question is whether that map has a fixed point, a cycle, or chaos.”

Cross-reference: Principles Ch. 6 (time-series properties, HP filter); Ch. 10 (NKPC as a forward difference equation); Ch. 16 (rational expectations, forward solutions); Ch. 27 (RBC model log-linearized as a discrete system) [P:Ch.6, P:Ch.10, P:Ch.16, P:Ch.27]


4.1 Why Discrete Time? Data, Policy, and Computation

The choice between continuous and discrete time is not merely aesthetic. Several compelling reasons push modern macroeconomics toward discrete-time formulations.

Data observability. GDP is measured quarterly, inflation monthly, the federal funds rate daily. The natural time unit for policy analysis — the period between Federal Open Market Committee meetings — is six weeks. A continuous-time model that maps cleanly to quarterly data requires a discretization step anyway; it is simpler to start in discrete time.

Computational tractability. Value function iteration, the Kalman filter, the Metropolis–Hastings sampler for Bayesian DSGE estimation — all are inherently discrete. The state space must be discretized, time must be measured in steps, and probability distributions are approximated by finite grids.

Forward-looking equations. The New Keynesian Phillips Curve $\hat{\pi}_t = \beta\mathbb{E}_t[\hat{\pi}_{t+1}] + \kappa\hat{x}_t$ [P:Ch.10] and the dynamic IS curve $\hat{x}_t = \mathbb{E}_t[\hat{x}_{t+1}] - \sigma(i_t - \mathbb{E}_t[\pi_{t+1}] - r^n_t)$ are difference equations involving expectations of future values. These arise naturally from discrete-time optimization but have no direct continuous-time analogue for policy analysis.

This chapter develops the theory of difference equations — the discrete-time counterpart to Chapter 3’s ODEs. Every result has a direct parallel in the continuous-time theory; the key substitution is $e^{at} \leftrightarrow \lambda^t$, and the stability condition changes from $a < 0$ to $|\lambda| < 1$.


4.2 First-Order Linear Difference Equations

4.2.1 The Homogeneous Case

The equation $x_{t+1} = \lambda x_t$ has the solution:

$$x_t = \lambda^t x_0.$$

Stability is determined by $|\lambda|$:

  • $|\lambda| < 1$: $x_t \to 0$ (stable).

  • $|\lambda| = 1$: $|x_t| = |x_0|$ constant (neutrally stable).

  • $|\lambda| > 1$: $|x_t| \to \infty$ (unstable).

Theorem 4.1 (Stability of First-Order Difference Equations). The equilibrium $x^* = 0$ of $x_{t+1} = \lambda x_t$ is globally asymptotically stable if and only if $|\lambda| < 1$.

4.2.2 The Nonhomogeneous Case

The equation $x_{t+1} = \lambda x_t + c$ (with $\lambda \neq 1$) has the general solution:

$$x_t = \lambda^t\left(x_0 - x^*\right) + x^*, \quad x^* = \frac{c}{1-\lambda}.$$

For $|\lambda| < 1$, $x_t \to x^*$ as $t \to \infty$.

Convergence speed: the gap $x_t - x^*$ shrinks by the factor $\lambda$ each period. After $T$ periods, the gap is $\lambda^T(x_0 - x^*)$.
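The recursion and the closed form can be checked against each other directly — a minimal sketch, with parameter values assumed purely for illustration:

```python
# A minimal sketch (parameter values assumed for illustration): iterate
# x_{t+1} = lam*x_t + c and compare with the closed form
# x_T = lam^T (x0 - x*) + x*, where x* = c/(1 - lam).
lam, c, x0, T = 0.8, 1.0, 10.0, 25
x_star = c / (1 - lam)            # steady state = 5.0

x = x0
for _ in range(T):
    x = lam * x + c               # one period of the recursion

closed = lam**T * (x0 - x_star) + x_star
print(x, closed)                  # the two coincide
```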

4.2.3 Backward vs. Forward Solutions

A crucial distinction in macroeconomics is between backward-looking (predetermined) variables, whose current value is determined by past behavior, and forward-looking (free, or jump) variables, whose current value is determined by expected future behavior.

Backward solution for $x_{t+1} = \lambda x_t + c$: start from the initial condition $x_0$ and iterate forward. Requires $|\lambda| < 1$ for stability.

Forward solution for $x_{t+1} = \lambda x_t + c$: rearrange as $x_t = \lambda^{-1}(x_{t+1} - c)$ and substitute repeatedly for the future value:

$$x_t = \lambda^{-T}x_{t+T} - c\lambda^{-1}\sum_{j=0}^{T-1}\lambda^{-j}.$$

For the forward solution to be well defined, the terminal term must vanish (a transversality condition): $|\lambda^{-T}x_{t+T}| \to 0$ as $T \to \infty$, which holds for bounded paths precisely when $|\lambda| > 1$ (so $|\lambda^{-T}| \to 0$). If $|\lambda| > 1$ and the forcing term is allowed to vary over time, $x_{t+1} = \lambda x_t + c_t$, the forward solution is:

$$x_t = -\sum_{j=0}^\infty \lambda^{-(j+1)}\mathbb{E}_t[c_{t+j}],$$

which for a constant forcing term $c_t = c$ collapses to $x_t = c/(1-\lambda) = x^*$.

This is the present-value formula structure that appears throughout the New Keynesian model. Writing the NKPC as $\mathbb{E}_t[\hat{\pi}_{t+1}] = \beta^{-1}\hat{\pi}_t - \beta^{-1}\kappa\hat{x}_t$, its forward solution $\hat{\pi}_t = \kappa\sum_{j=0}^\infty \beta^j\mathbb{E}_t[\hat{x}_{t+j}]$ is exactly this form with $\lambda = 1/\beta > 1$ and $c_t = -(\kappa/\beta)\hat{x}_t$ [P:Ch.10].
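A quick numerical check of the present-value structure (calibration assumed here for illustration): when the output gap is expected to decay geometrically, $\mathbb{E}_t[\hat{x}_{t+j}] = \rho^j \hat{x}_t$, the forward sum reproduces the closed form $\kappa/(1-\beta\rho)\,\hat{x}_t$.

```python
# A quick numerical check (beta, kappa, rho assumed for illustration):
# the truncated forward sum kappa * sum_j beta^j * rho^j * x_t should
# reproduce the closed form kappa/(1 - beta*rho) * x_t.
beta, kappa, rho = 0.99, 0.15, 0.8
x0 = 1.0

total = sum(kappa * beta**j * rho**j * x0 for j in range(2000))
closed = kappa / (1 - beta * rho) * x0
print(total, closed)              # both approximately 0.7212
```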


4.3 Second-Order Difference Equations

Second-order equations $x_{t+2} + px_{t+1} + qx_t = c$ appear in the multiplier–accelerator model of Samuelson (1939) [P:Ch.8] and in continuous-time models after discretization. The general solution (for distinct roots) is:

$$x_t = C_1\lambda_1^t + C_2\lambda_2^t + x^*,$$

where $\lambda_1, \lambda_2$ are the roots of the characteristic equation:

$$\lambda^2 + p\lambda + q = 0 \implies \lambda_{1,2} = \frac{-p \pm \sqrt{p^2 - 4q}}{2},$$

and $x^* = c/(1 + p + q)$ is the particular solution (assuming $1 + p + q \neq 0$). The constants $C_1$, $C_2$ are determined by two initial conditions $x_0$ and $x_1$.

Classification by roots:

| Discriminant $\Delta = p^2 - 4q$ | Roots | Path type |
|---|---|---|
| $\Delta > 0$ | Two real roots $\lambda_1 \neq \lambda_2$ | Monotone (same sign) or S-shaped |
| $\Delta = 0$ | Repeated real root $\lambda_1 = \lambda_2 = -p/2$ | Monotone or boundary case |
| $\Delta < 0$ | Complex conjugate pair $\lambda = re^{\pm i\theta}$ | Oscillatory |

For complex roots $\lambda = re^{\pm i\theta}$, where $r = \sqrt{q}$ and $\theta = \arctan\!\big(\sqrt{4q-p^2}/(-p)\big)$:

$$x_t = x^* + r^t\big(C_1\cos(\theta t) + C_2\sin(\theta t)\big).$$

The system oscillates with period $2\pi/\theta$ and amplitude decaying like $r^t$. Stability requires $r = \sqrt{q} < 1$.
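The root classification above is mechanical enough to script — a sketch, with the $(p, q)$ values below chosen only for illustration:

```python
# A sketch: classify x_{t+2} + p x_{t+1} + q x_t = c by its characteristic
# roots; for a complex pair report the modulus r = sqrt(q) and the
# oscillation period 2*pi/theta.
import cmath
import math

def roots_and_cycle(p, q):
    disc = p * p - 4 * q
    r1 = (-p + cmath.sqrt(complex(disc))) / 2
    r2 = (-p - cmath.sqrt(complex(disc))) / 2
    if disc < 0:
        r = math.sqrt(q)                            # modulus of the pair
        theta = math.atan2(math.sqrt(-disc), -p)    # argument of the root
        return r1, r2, r, 2 * math.pi / theta
    return r1, r2, max(abs(r1), abs(r2)), None

# p = -1.2, q = 0.81: disc = 1.44 - 3.24 < 0, so damped oscillations
r1, r2, modulus, period = roots_and_cycle(-1.2, 0.81)
print(modulus)    # sqrt(0.81) = 0.9 < 1: stable spiral
print(period)
```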

4.3.1 The Multiplier–Accelerator Model

Samuelson’s (1939) model of endogenous business cycles [P:Ch.8] sets:

$$Y_t = C_t + I_t + G, \quad C_t = bY_{t-1}, \quad I_t = v(C_t - C_{t-1}),$$

where $b$ is the MPC and $v$ is the accelerator. Substituting:

$$Y_t = b(1+v)Y_{t-1} - bvY_{t-2} + G.$$

This is exactly a second-order difference equation with $p = -b(1+v)$ and $q = bv$. The steady state is $Y^* = G/(1-b)$.

Stability: requires both roots to satisfy $|\lambda_{1,2}| < 1$. Equivalently, the necessary and sufficient conditions are $|p| < 1 + q$ and $q < 1$, i.e., $b(1+v) < 1 + bv$ (always true for $b < 1$) and $bv < 1$. Stability therefore reduces to $bv < 1$.

Business cycles arise from complex roots: when $p^2 < 4q$, i.e., $b^2(1+v)^2 < 4bv$, the roots are complex and the system oscillates. The modulus $r = \sqrt{bv}$ determines whether cycles decay ($bv < 1$), persist ($bv = 1$), or explode ($bv > 1$).
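The damped-cycle regime can be verified by direct simulation — a sketch, with the calibration below (complex roots, $bv < 1$) assumed for illustration:

```python
# A sketch: simulate Y_t = b(1+v) Y_{t-1} - b v Y_{t-2} + G for a
# calibration with complex roots and bv < 1 (values assumed for
# illustration), and check that oscillations die out toward Y* = G/(1-b).
b, v, G = 0.8, 0.6, 100.0
y_star = G / (1 - b)                         # steady state = 500

assert (b * (1 + v))**2 < 4 * b * v          # complex-root region

y2, y1 = y_star, 1.05 * y_star               # start 5% above steady state
for _ in range(200):
    y2, y1 = y1, b * (1 + v) * y1 - b * v * y2 + G

print(y1)    # back at (essentially) 500, since r = sqrt(bv) < 1
```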


4.4 Systems of Linear Difference Equations

The discrete-time state-space form:

$$\mathbf{x}_{t+1} = A\mathbf{x}_t + B\mathbf{u}_t,$$

where $\mathbf{x}_t \in \mathbb{R}^n$ is the state vector and $\mathbf{u}_t \in \mathbb{R}^m$ is an exogenous input (shock or policy variable). This is the form a linearized DSGE model takes after log-linearization — the starting point of Part VII.

General solution: iterating the state equation:

$$\mathbf{x}_t = A^t\mathbf{x}_0 + \sum_{j=0}^{t-1}A^j B\mathbf{u}_{t-1-j}.$$

Using the eigendecomposition $A = PDP^{-1}$ (when $A$ is diagonalizable):

$$\mathbf{x}_t = PD^tP^{-1}\mathbf{x}_0 + \sum_{j=0}^{t-1}PD^jP^{-1}B\mathbf{u}_{t-1-j}.$$

Stability: requires all eigenvalues of $A$ to satisfy $|\lambda_i| < 1$.

Impulse response: the response of $\mathbf{x}$ at horizon $h \geq 1$ to a unit shock $\mathbf{u}_0 = \mathbf{e}_j$ (with $\mathbf{x}_0 = \mathbf{0}$ and $\mathbf{u}_t = \mathbf{0}$ for $t > 0$) is $A^{h-1}B\mathbf{e}_j$ — the $j$-th column of $A^{h-1}B$. It is computed efficiently by repeated matrix–vector multiplication or via the eigendecomposition.

⎕IO←0 ⋄ ⎕ML←1

A ← 2 2 ⍴ 0.85 0.10 ¯0.05 0.90
B ← 0.5 ¯0.3        ⍝ impact vector (single shock, two states)
shock ← 0.1
H ← 20

⍝ Impulse response: x_h = A^h (B × shock), via the power operator ⍣
Step ← {A +.× ⍵}
impact ← B × shock
irf ← {(Step⍣⍵) impact}¨ ⍳H

⍝ Display IRF as an H×2 matrix
↑ irf

⍝ AR(1) shock sequence u_t = 0.9 u_{t-1} + e_t
T ← 100
e ← 0.1 × (¯0.5 + ?T⍴0)
w ← 0.9*⍳T
shocks ← w × +\ e ÷ w      ⍝ closed-form AR(1) accumulation

⍝ State simulation x_{t+1} = A x_t + B u_t, accumulating the path
next ← {⍵, ⊂(A +.× ⊃⌽⍵) + B × shocks[¯1+≢⍵]}
sim_path ← (next⍣T) ,⊂2⍴0

⍝ Show first 5 states
5 ↑ sim_path
]plot ↑irf
]plot ↑sim_path

4.5 Forward-Looking Difference Equations and the Minimum State Variable Solution

The distinctive challenge of modern macroeconomics is that many key equations involve expected future values of endogenous variables. The NKPC is $\hat{\pi}_t = \beta\mathbb{E}_t[\hat{\pi}_{t+1}] + \kappa\hat{x}_t$; the DIS is $\hat{x}_t = \mathbb{E}_t[\hat{x}_{t+1}] - \sigma(\hat{i}_t - \mathbb{E}_t[\hat{\pi}_{t+1}] - \hat{r}^n_t)$. These are expectational difference equations with no unique solution without additional restrictions.

Definition 4.1 (Minimum State Variable Solution). The minimum state variable (MSV) solution is the unique solution of a linear expectational difference equation that expresses each endogenous variable as a linear function of the minimal set of state variables — typically the exogenous forcing variables and any predetermined (backward-looking) variables.

McCallum’s (1983) algorithm for finding the MSV solution:

Algorithm 4.1 (McCallum’s Undetermined Coefficients).

Given the expectational system $\mathbf{y}_t = A\mathbb{E}_t[\mathbf{y}_{t+1}] + C\mathbf{z}_t$, where $\mathbf{z}_t$ is a vector of exogenous state variables with law of motion $\mathbf{z}_{t+1} = \Phi\mathbf{z}_t + \boldsymbol{\varepsilon}_{t+1}$:

  1. Guess an MSV solution: $\mathbf{y}_t = \Omega\mathbf{z}_t$ (endogenous variables are linear functions of the exogenous states).

  2. Compute $\mathbb{E}_t[\mathbf{y}_{t+1}] = \Omega\mathbb{E}_t[\mathbf{z}_{t+1}] = \Omega\Phi\mathbf{z}_t$.

  3. Substitute into the original equation: $\Omega\mathbf{z}_t = A\Omega\Phi\mathbf{z}_t + C\mathbf{z}_t$.

  4. Match coefficients: $\Omega = A\Omega\Phi + C$.

  5. Solve this matrix equation for $\Omega$. This is a discrete Sylvester equation of the form $\Omega - A\Omega\Phi = C$.

The Sylvester equation $\Omega - A\Omega\Phi = C$ can be vectorized: $(I - \Phi' \otimes A)\,\text{vec}(\Omega) = \text{vec}(C)$, a linear system in $\text{vec}(\Omega)$. It has a unique solution when $I - \Phi' \otimes A$ is nonsingular — equivalently, when no eigenvalue of $\Phi$ equals the reciprocal of any eigenvalue of $A$.
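Step 5 can be sketched with the vectorization identity $\text{vec}(A\Omega\Phi) = (\Phi' \otimes A)\text{vec}(\Omega)$ — the matrices below are small illustrative examples, not taken from the text:

```python
# A sketch with illustrative matrices (not from the text): solve the
# discrete Sylvester equation  Omega - A Omega Phi = C  by vectorization,
# (I - kron(Phi', A)) vec(Omega) = vec(C), using column-major vec.
import numpy as np

A   = np.array([[0.5, 0.1], [0.0, 0.4]])
Phi = np.array([[0.8, 0.0], [0.1, 0.7]])
C   = np.eye(2)

n = A.shape[0]
lhs = np.eye(n * n) - np.kron(Phi.T, A)
vec_omega = np.linalg.solve(lhs, C.flatten(order="F"))   # stack columns
Omega = vec_omega.reshape((n, n), order="F")

# verify the fixed point Omega = A Omega Phi + C
print(np.abs(Omega - A @ Omega @ Phi - C).max())
```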

4.5.1 Application: The NK Model Under a Taylor Rule

Cross-reference: Principles Ch. 10, Ch. 23 [P:Ch.10, P:Ch.23]

The three-equation NK model:

$$\hat{x}_t = \mathbb{E}_t[\hat{x}_{t+1}] - \sigma(\hat{i}_t - \mathbb{E}_t[\hat{\pi}_{t+1}] - \hat{r}^n_t)$$

$$\hat{\pi}_t = \beta\mathbb{E}_t[\hat{\pi}_{t+1}] + \kappa\hat{x}_t$$

$$\hat{i}_t = \phi_\pi\hat{\pi}_t + \phi_y\hat{x}_t$$

(If the rule also tracked $\hat{r}^n_t$ one-for-one, the natural-rate term would cancel out of the system entirely; omitting it keeps $\hat{r}^n_t$ as a driving force.)

Substituting the Taylor rule into the DIS (note that expected inflation enters with coefficient $+\sigma$):

$$\hat{x}_t = \mathbb{E}_t[\hat{x}_{t+1}] + \sigma\mathbb{E}_t[\hat{\pi}_{t+1}] - \sigma\phi_\pi\hat{\pi}_t - \sigma\phi_y\hat{x}_t + \sigma\hat{r}^n_t.$$

In compact matrix form: stack $\mathbf{y}_t = (\hat{\pi}_t, \hat{x}_t)'$ and note that both are jump variables (free to adjust at each date). Moving current-dated variables to the left-hand side, the system can be written:

$$\underbrace{\begin{pmatrix} 1 & -\kappa \\ \sigma\phi_\pi & 1+\sigma\phi_y \end{pmatrix}}_{\equiv\Gamma_0} \mathbf{y}_t = \underbrace{\begin{pmatrix} \beta & 0 \\ \sigma & 1 \end{pmatrix}}_{\equiv\Gamma_1} \mathbb{E}_t[\mathbf{y}_{t+1}] + \underbrace{\begin{pmatrix} 0 \\ \sigma \end{pmatrix}}_{\equiv\Psi}\hat{r}^n_t.$$

This is the canonical form $\Gamma_0\mathbf{y}_t = \Gamma_1\mathbb{E}_t[\mathbf{y}_{t+1}] + \Psi z_t$, which we solve via $\mathbf{y}_t = \Gamma_0^{-1}\Gamma_1\mathbb{E}_t[\mathbf{y}_{t+1}] + \Gamma_0^{-1}\Psi z_t$. Let $A = \Gamma_0^{-1}\Gamma_1$ and $C = \Gamma_0^{-1}\Psi$. The MSV solution is $\mathbf{y}_t = \Omega z_t$, where $\Omega$ solves the Sylvester equation $\Omega - \rho_r A\Omega = C$ (with scalar $\Phi = \rho_r$, since $z_t = \hat{r}^n_t$ follows an AR(1) with persistence $\rho_r$).
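The MSV computation for this block can be sketched numerically. Assumptions not fixed in the text: $\rho_r = 0.8$, and a policy rule $\hat{i}_t = \phi_\pi\hat{\pi}_t + \phi_y\hat{x}_t$ with no $\hat{r}^n_t$ term (so the natural-rate shock enters the IS equation with coefficient $\sigma$); the matrices are assembled directly from the NKPC ($\hat{\pi}_t - \kappa\hat{x}_t = \beta\mathbb{E}_t[\hat{\pi}_{t+1}]$) and the IS curve with the rule substituted in.

```python
# Sketch: MSV solution y_t = Omega * z_t for the 2-equation NK block,
# z_t = natural-rate shock following an AR(1). rho_r = 0.8 is assumed.
import numpy as np

beta, kappa, sigma = 0.99, 0.15, 1.0
phi_pi, phi_y, rho_r = 1.5, 0.5, 0.8

# NKPC: pi_t - kappa*x_t = beta*E[pi_{t+1}]
# IS:   sigma*phi_pi*pi_t + (1+sigma*phi_y)*x_t
#         = sigma*E[pi_{t+1}] + E[x_{t+1}] + sigma*z_t
Gamma0 = np.array([[1.0, -kappa], [sigma * phi_pi, 1 + sigma * phi_y]])
Gamma1 = np.array([[beta, 0.0], [sigma, 1.0]])
Psi = np.array([0.0, sigma])

A = np.linalg.solve(Gamma0, Gamma1)
C = np.linalg.solve(Gamma0, Psi)

# MSV fixed point: Omega = rho_r * A @ Omega + C
Omega = np.linalg.solve(np.eye(2) - rho_r * A, C)
print(Omega)    # (omega_pi, omega_x): inflation and gap responses
```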

Determinacy: inverting gives $\mathbb{E}_t[\mathbf{y}_{t+1}] = A^{-1}(\mathbf{y}_t - Cz_t)$, so $A^{-1}$ is the forward transition matrix. The system has a unique bounded solution (the MSV solution is the only equilibrium) iff the number of eigenvalues of $A^{-1}$ outside the unit circle equals the number of free variables — here both $\hat{\pi}_t$ and $\hat{x}_t$ are free, so we need both eigenvalues of $A^{-1}$ outside the unit circle, equivalently both eigenvalues of $A = \Gamma_0^{-1}\Gamma_1$ inside it. This is the Blanchard–Kahn condition for this model (developed fully in Chapter 28).

The Taylor principle $\phi_\pi > 1$ ensures this: in the 2×2 NK model, $\phi_\pi > 1$ implies both eigenvalues of $A$ have modulus less than 1, satisfying the Blanchard–Kahn condition. This is the formal derivation of the result stated in Principles Ch. 23 [P:Ch.23.1] that the Taylor principle is necessary and sufficient for determinacy.


4.6 Stability Analysis and the Unit Circle

For a system $\mathbf{x}_{t+1} = A\mathbf{x}_t$, stability is determined by the eigenvalues of $A$ relative to the unit circle in the complex plane.

Definition 4.2 (Stable, Unstable, and Centre Subspaces). Decompose $\mathbb{R}^n$ into three subspaces based on eigenvalues:

  • Stable subspace $E^s$: spanned by generalised eigenvectors corresponding to $|\lambda_i| < 1$.

  • Unstable subspace $E^u$: spanned by generalised eigenvectors corresponding to $|\lambda_i| > 1$.

  • Centre subspace $E^c$: spanned by generalised eigenvectors corresponding to $|\lambda_i| = 1$.

For a DSGE model with $n_s$ predetermined variables and $n_f$ free (jump) variables, the Blanchard–Kahn condition requires:

Number of eigenvalues of $A$ outside the unit circle $=$ number of free variables $n_f$.

When this holds, the model has a unique bounded rational expectations equilibrium. When there are too few eigenvalues outside the unit circle, there are multiple equilibria (indeterminacy — sunspot equilibria exist). When there are too many, no bounded equilibrium exists (the model “explodes”).

Checking stability in APL:

⎕IO←0 ⋄ ⎕ML←1

A ← 2 2 ⍴ 1.2 0.3 ¯0.1 0.85

⍝ Trace and determinant
tr ← A[0;0] + A[1;1]
det ← (A[0;0]×A[1;1]) - (A[0;1]×A[1;0])

⍝ Characteristic equation: λ² - tr×λ + det = 0
disc ← (tr*2) - 4×det
root_part ← disc * 0.5      ⍝ complex when disc < 0

eig1 ← (tr + root_part) ÷ 2
eig2 ← (tr - root_part) ÷ 2
eig_vals ← eig1, eig2

⍝ Moduli and Blanchard–Kahn count
moduli ← | eig_vals
n_outside ← +/ moduli > 1

'Eigenvalues:' eig_vals
'Moduli:     ' moduli
'BK Count:   ' n_outside

4.7 Worked Example: Solving the Two-Equation NK System

Cross-reference: Principles Ch. 10, Ch. 23 [P:Ch.10, P:Ch.23]

Consider the simplified NK model with only a cost-push shock $u_t$:

$$\hat{\pi}_t = \beta\mathbb{E}_t[\hat{\pi}_{t+1}] + \kappa\hat{x}_t + u_t, \quad u_t = \rho_u u_{t-1} + \varepsilon_t$$

$$\hat{x}_t = \mathbb{E}_t[\hat{x}_{t+1}] - \sigma\phi_\pi\hat{\pi}_t - \sigma\phi_y\hat{x}_t.$$

Calibration: $\beta = 0.99$, $\kappa = 0.15$, $\sigma = 1$, $\phi_\pi = 1.5$, $\phi_y = 0.5$, $\rho_u = 0.7$.

Step 1: Classify variables. The cost-push shock $u_t$ is predetermined, so it is the only state variable; $\hat{\pi}_t$ and $\hat{x}_t$ are both free. The MSV solution therefore expresses the two jump variables as linear functions of $u_t$ alone.

Step 2: MSV guess. Since the only state is $u_t$, guess:

$$\hat{\pi}_t = \omega_\pi u_t, \quad \hat{x}_t = \omega_x u_t.$$

Step 3: Solve for coefficients. Substituting $\hat{\pi}_t = \omega_\pi u_t$, $\hat{x}_t = \omega_x u_t$, $\mathbb{E}_t[\hat{\pi}_{t+1}] = \omega_\pi\rho_u u_t$, and $\mathbb{E}_t[\hat{x}_{t+1}] = \omega_x\rho_u u_t$ into the NKPC:

$$\omega_\pi u_t = \beta\omega_\pi\rho_u u_t + \kappa\omega_x u_t + u_t \implies \omega_\pi(1 - \beta\rho_u) = \kappa\omega_x + 1.$$

And into the DIS (with Taylor rule substituted):

$$\omega_x u_t = \omega_x\rho_u u_t - \sigma\phi_\pi\omega_\pi u_t - \sigma\phi_y\omega_x u_t \implies \omega_x(1 - \rho_u + \sigma\phi_y) = -\sigma\phi_\pi\omega_\pi.$$

Step 4: Solve the 2×2 system. Let $a = 1 - \beta\rho_u$, $b = 1 - \rho_u + \sigma\phi_y$, $c = \sigma\phi_\pi$:

$$\omega_\pi = \frac{\kappa\omega_x + 1}{a}, \quad \omega_x = \frac{-c\,\omega_\pi}{b}.$$

Substituting the second into the first:

$$\omega_\pi = \frac{-c\kappa\omega_\pi/b + 1}{a} \implies \omega_\pi\left(a + \frac{c\kappa}{b}\right) = 1 \implies \omega_\pi = \frac{b}{ab + c\kappa}.$$

Then $\omega_x = -c\,\omega_\pi/b = -c/(ab + c\kappa)$.

Step 5: Numerical evaluation.

$$a = 1 - 0.99 \times 0.7 = 0.307$$

$$b = 1 - 0.7 + 1 \times 0.5 = 0.8$$

$$c = 1 \times 1.5 = 1.5$$

$$ab + c\kappa = 0.307 \times 0.8 + 1.5 \times 0.15 = 0.2456 + 0.225 = 0.4706$$

$$\omega_\pi = 0.8/0.4706 = 1.700$$

$$\omega_x = -1.5/0.4706 = -3.188$$

Interpretation: a positive cost-push shock of one unit raises inflation by 1.700 and lowers the output gap by 3.188. The large output-gap response reflects the central bank’s aggressive tightening (high $\phi_\pi = 1.5$) in response to the inflation increase.

⍝ APL — MSV solution for 2-variable NK system
⎕IO←0 ⋄ ⎕ML←1

beta←0.99  ⋄  kappa←0.15  ⋄  sigma←1
phi_pi←1.5  ⋄  phi_y←0.5  ⋄  rho_u←0.7

a ← 1 - beta×rho_u
b ← 1 - rho_u + sigma×phi_y
c ← sigma×phi_pi

denom ← (a×b) + c×kappa
omega_pi ← b÷denom
omega_x  ← (-c)÷denom

omega_pi  ⍝ ≈ 1.700
omega_x   ⍝ ≈ ¯3.188

⍝ Impulse response: response to unit shock at t=0
T ← 20
irf_pi ← omega_pi × rho_u * ⍳T   ⍝ omega_pi × rho_u^t
irf_x  ← omega_x  × rho_u * ⍳T

irf_pi    ⍝ inflation response path
irf_x     ⍝ output gap response path
]plot irf_x
]plot irf_pi

4.8 The Cobweb Model and Expectational Stability

The classic cobweb model illustrates how adaptive versus rational expectations change the dynamics of a market with production lags.

Supply: $Q_t^s = a + b P_t^e$ (firms produce based on the expected price). Demand: $Q_t^d = c - d P_t$ (consumers respond to the actual price).

Market clearing: $Q_t^s = Q_t^d$, so:

$$P_t = \frac{c - a}{d} - \frac{b}{d}P_t^e.$$

Under adaptive expectations $P_t^e = P_{t-1}$ (naive: expect last period’s price):

$$P_t = \frac{c-a}{d} - \frac{b}{d}P_{t-1}.$$

This is a first-order difference equation with coefficient $-b/d$. Stability requires $|-b/d| < 1$, i.e., demand must be more price-responsive than supply ($b < d$). If instead supply is more price-responsive ($b > d$), the cobweb explodes.

Under rational expectations $P_t^e = \mathbb{E}[P_t]$ (firms correctly anticipate the equilibrium price), the market always clears at the rational expectations equilibrium price $P^* = (c-a)/(b+d)$ — no cobweb dynamics at all. Rational expectations “flatten” the cobweb by eliminating the lag between expectation and realization.

This example, while simple, illustrates the fundamental point of Principles Chapter 16 [P:Ch.16]: rational expectations can eliminate systematic forecast errors and with them the price dynamics that drive business cycles. Whether prices and wages are actually set as rationally as this model implies is an empirical question — but it sets the theoretical benchmark.
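The two cobweb regimes can be sketched in a few lines — the demand and supply parameters below are assumed purely for illustration:

```python
# A sketch of the adaptive-expectations cobweb: with b < d the price
# converges to P* = (c - a)/(b + d); with b > d it diverges.
def cobweb_price(a, b, c, d, p0, T=50):
    p = p0
    for _ in range(T):
        p = (c - a) / d - (b / d) * p     # P_t = (c-a)/d - (b/d) P_{t-1}
    return p

a, c = 1.0, 10.0

stable = cobweb_price(a, 0.5, c, 1.0, p0=2.0)   # b < d: converges
p_star = (c - a) / (0.5 + 1.0)                  # = 6.0
print(stable, p_star)

exploding = cobweb_price(a, 1.5, c, 1.0, p0=2.0)  # b > d: diverges
print(abs(exploding) > 1e6)
```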


4.9 Programming Exercises

Exercise 4.1 (APL — Multiplier–Accelerator)

Implement the Samuelson multiplier–accelerator dynamics as a scan operation.

⎕IO←0 ⋄ ⎕ML←1

b ← 0.8 ⋄ v ← 1.2 ⋄ G ← 100
Yss ← G ÷ 1 - b
initial ← Yss, (Yss × 1.05)

⍝ Samuelson recursion: Y_t = b(1+v)Y_{t-1} - bvY_{t-2} + G
⍝ ⍵ is the growing vector of Y values; APL's right-to-left evaluation
⍝ makes G + (...) - (...) group as G + ((...) - (...))
f ← {⍵, G + (b×(1+v)×⊃⌽⍵) - b×v×⊃⌽¯1↓⍵}

⍝ Run for 15 steps
Y_seq ← f⍣15 ⊢ initial

⍝ Display the sequence
↑ Y_seq
]plot ↑Y_seq

Exercise 4.2 (Python — Stability Diagram)

import numpy as np
import matplotlib.pyplot as plt

b_vals = np.linspace(0, 1, 100)
v_vals = np.linspace(0, 3, 100)
B, V = np.meshgrid(b_vals, v_vals)

# Characteristic roots
p = -B*(1+V)
q = B*V
disc = p**2 - 4*q
lambda_modulus = np.where(disc >= 0,
    np.maximum(np.abs((-p + np.sqrt(np.abs(disc)))/2),
               np.abs((-p - np.sqrt(np.abs(disc)))/2)),
    np.sqrt(q))  # for complex roots, modulus = sqrt(q)

stable = lambda_modulus < 1
oscillatory = (disc < 0) & stable

fig, ax = plt.subplots()
ax.contourf(B, V, stable.astype(int), levels=[0.5,1.5], colors=['lightblue'], alpha=0.5)
ax.contour(B, V, oscillatory.astype(int), levels=[0.5], colors=['red'])
ax.set_xlabel('MPC (b)'); ax.set_ylabel('Accelerator (v)')
ax.set_title('Stability diagram: blue=stable, red boundary=oscillatory')
plt.show()

Exercise 4.3 (Julia — MSV Solution)

# MSV solution for arbitrary NK calibration
function msv_nk(; beta=0.99, kappa=0.15, sigma=1.0,
                  phi_pi=1.5, phi_y=0.5, rho_u=0.7)
    a = 1 - beta*rho_u
    b = 1 - rho_u + sigma*phi_y
    c = sigma*phi_pi
    denom = a*b + c*kappa
    omega_pi = b / denom
    omega_x  = -c / denom
    return (omega_pi=omega_pi, omega_x=omega_x)
end

sol = msv_nk()
println("ω_π = $(round(sol.omega_pi, digits=3))")
println("ω_x = $(round(sol.omega_x, digits=3))")

# Sweep over phi_pi to show determinacy
phi_pis = 0.5:0.1:3.0
for φ in phi_pis
    s = msv_nk(phi_pi=φ)
    println("φ_π = $φ: ω_π = $(round(s.omega_pi,digits=3)), ω_x = $(round(s.omega_x,digits=3))")
end

Exercise 4.4 (R — Forward Solution)

# Forward solution of NKPC: pi_t = kappa * sum_{j>=0} beta^j E_t[x_{t+j}]
# Given AR(1) output gap: x_t = rho_x * x_{t-1} + eps_t
kappa <- 0.15; beta <- 0.99; rho_x <- 0.8

# Closed-form: pi_t = kappa/(1-beta*rho_x) * x_t
omega_pi_forward <- kappa / (1 - beta*rho_x)
cat("π_t = ", round(omega_pi_forward, 4), "* x_t\n")

# Simulate and verify numerically
T <- 1000
set.seed(42)
x <- filter(rnorm(T, sd=0.1), rho_x, method="recursive")
pi_formula <- omega_pi_forward * x

# Compare to a truncated present-value sum over *expected* future gaps,
# E_t[x_{t+j}] = rho_x^j * x_t (realized x_{t+j} would differ from the
# conditional expectation, so it is not the right comparison)
pi_pv <- sapply(1:T, function(t) {
  kappa * sum(sapply(0:49, function(j) (beta * rho_x)^j * x[t]))
})
cat("Max discrepancy:", max(abs(pi_formula - pi_pv)), "\n")

Exercise 4.5 — Blanchard–Kahn (★)

For the 2×2 NK system in Section 4.5.1, write the matrix $A = \Gamma_0^{-1}\Gamma_1$ explicitly and compute its eigenvalues for the calibration in Worked Example 4.7. Verify that both eigenvalues of $A$ lie inside the unit circle when $\phi_\pi = 1.5$ and $\phi_y = 0.5$ (equivalently, both eigenvalues of the forward transition matrix $A^{-1}$ lie outside it), and find the critical value of $\phi_\pi$ below which one eigenvalue of $A$ moves outside the unit circle (violating determinacy). Show this threshold is consistent with the Taylor principle $\phi_\pi > 1$.

Exercise 4.6 — Indeterminacy and Sunspots (★★)

When $\phi_\pi < 1$, the NK model has indeterminate equilibria. The general solution becomes $\mathbf{y}_t = \Omega z_t + \Pi\eta_t$, where $\eta_t$ is an arbitrary martingale difference sequence (the “sunspot”). (a) Explain why this generates volatility unrelated to fundamentals. (b) Simulate the model under $\phi_\pi = 0.8$ with a sunspot $\eta_t \sim \mathcal{N}(0,1)$ and compare the resulting volatility of $\hat{\pi}_t$ and $\hat{x}_t$ to the fundamentals-only case $\phi_\pi = 1.5$. (c) Show that the Taylor principle eliminates the sunspot component by forcing $\Pi = \mathbf{0}$.


4.10 Chapter Summary

Key results:

  • First-order linear difference equations $x_{t+1} = \lambda x_t + c$ are stable iff $|\lambda| < 1$, with convergence rate $|\lambda|$ and steady state $x^* = c/(1-\lambda)$.

  • Backward solutions require $|\lambda| < 1$ (predetermined variables); forward solutions require $|\lambda| > 1$ (free/jump variables). The present-value formula structure of the NKPC arises from the forward solution with $\lambda = 1/\beta > 1$.

  • Second-order equations $x_{t+2} + px_{t+1} + qx_t = c$ generate oscillatory dynamics when the discriminant $p^2 - 4q < 0$; stability in the oscillatory case requires $\sqrt{q} < 1$.

  • The MSV solution of an expectational difference equation is found by guessing a linear function of the state variables and matching coefficients — a discrete Sylvester equation.

  • The Blanchard–Kahn condition for a unique bounded rational expectations equilibrium: the number of eigenvalues outside the unit circle equals the number of free variables.

  • In APL: the power operator ⍣ iterates the state map to build impulse responses efficiently, and rho_u * ⍳T generates geometric decay sequences directly.

Connections forward: Chapter 14 revisits difference equations in the multiplier–accelerator context. Chapter 18 develops the full rational expectations solution methodology (undetermined coefficients, the Sims algorithm). Chapter 28 provides the definitive treatment of the Blanchard–Kahn conditions and the QZ decomposition algorithm for DSGE models.


Next: Chapter 5 — Stochastic Processes for Aggregate Shocks