
Chapter 40: Policy Analysis with a New Keynesian Model

kapitaali.com

Commitment, Discretion, ELB, and Welfare

“The Taylor principle is not a recommendation — it is a necessary condition for the price level to be determined.”

Cross-reference: Principles Ch. 23 (optimal monetary policy, commitment vs. discretion, Taylor rule); Ch. 28 (fiscal policy, zero lower bound); Ch. 29 (forward guidance, QE) [P:Ch.23, P:Ch.28, P:Ch.29]


40.1 The Policy Analysis Pipeline

Chapter 31 showed how to solve the NK model. This chapter uses that solution for policy analysis — the ultimate purpose of DSGE modeling at central banks. The standard pipeline:

  1. Solve the model under a benchmark policy rule (Taylor rule with standard coefficients).

  2. Compute welfare under the benchmark.

  3. Solve the optimal policy problem (commitment or discretion).

  4. Compute the welfare gain from optimal relative to benchmark.

  5. Sensitivity analysis: how does the welfare comparison depend on the model parameters?

We develop each step formally, using the NK three-equation model as the vehicle.


40.2 Welfare Measurement in the NK Model

Cross-reference: Chapter 29 (second-order approximation required for welfare) [M:Ch.29]

The central bank’s loss function is the second-order approximation to household welfare. For CRRA utility and Calvo pricing, Woodford (2003) shows:

Theorem 40.1 (NK Welfare Loss Function). To second order, the deviation of household welfare from the first-best (flexible-price) allocation is:

\mathcal{W} = -\frac{1}{2}\,\mathbb{E}_0\sum_{t=0}^\infty\beta^t\left[\hat\pi_t^2 + \frac{\kappa}{\varepsilon_{NK}}\hat{x}_t^2\right] + O(\|\xi\|^3),

where \varepsilon_{NK} is the elasticity of substitution across goods (the demand elasticity), \kappa/\varepsilon_{NK} = \lambda_x is the weight on the output gap (the NKPC slope \kappa divided by the demand elasticity), and O(\|\xi\|^3) collects terms of third order and higher in the shocks.

Proof sketch. A second-order expansion of household utility around the zero-inflation flexible-price steady state yields quadratic terms in \hat\pi_t and \hat{x}_t. The inflation term arises from price dispersion across Calvo firms (firms with different prices produce different quantities, creating an output loss). The output gap term penalizes fluctuations of output around its efficient level. The cross-term vanishes at the optimum (Woodford, 2003, Ch. 6). \square

The implied unconditional welfare loss is:

\mathcal{L} = \text{Var}(\hat\pi_t) + \lambda_x\,\text{Var}(\hat{x}_t),

where \lambda_x = \kappa/\varepsilon_{NK} \approx 0.025 for standard calibrations (\kappa = 0.15, \varepsilon_{NK} = 6). Inflation variance is thus heavily weighted relative to output gap variance. For demand shocks the two objectives are aligned — the “divine coincidence” [P:Ch.23.3]: stabilizing inflation also stabilizes the output gap. Cost-push shocks break the coincidence and create a genuine trade-off.
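As a quick check on the calibration, a minimal sketch (variable names are illustrative) of how the welfare weight is built and how lopsided the loss function is:

```python
# Welfare weight lambda_x = kappa / eps_NK under a standard calibration
kappa, eps_NK = 0.15, 6.0          # NKPC slope; demand elasticity
lambda_x = kappa / eps_NK          # -> 0.025

# Loss L = Var(pi) + lambda_x * Var(x): with equal 1% standard deviations,
# inflation accounts for nearly all of the loss
var_pi, var_x = 0.01**2, 0.01**2
L = var_pi + lambda_x * var_x
share_pi = var_pi / L              # inflation's share of the loss (~0.976)
print(lambda_x, share_pi)
```

With equal variances, inflation contributes about 40 times more to the loss than the output gap — the quantitative content of the heavy inflation weighting.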


40.3 Optimal Policy Under Commitment

Under commitment, the central bank pre-announces and commits at time 0 to a complete sequence of future actions \{i_t\}_{t=0}^\infty. This commitment is credible — agents believe the sequence will be followed.

The commitment problem:

\min_{\{i_t, \hat\pi_t, \hat{x}_t\}}\mathbb{E}_0\sum_{t=0}^\infty\beta^t\left[\hat\pi_t^2 + \lambda_x\hat{x}_t^2\right]

subject to:

\hat\pi_t = \beta\,\mathbb{E}_t[\hat\pi_{t+1}] + \kappa\hat{x}_t + u_t \quad \forall t \quad \text{(NKPC)}

\hat{x}_t = \mathbb{E}_t[\hat{x}_{t+1}] - \sigma(\hat{i}_t - \mathbb{E}_t[\hat\pi_{t+1}] - r^n_t) \quad \forall t \quad \text{(DIS)}

Theorem 40.2 (Optimal Commitment Policy — Targeting Rule). The optimal commitment policy satisfies the targeting rule:

\hat\pi_t = -\frac{\lambda_x}{\kappa}\left(\hat{x}_t - \hat{x}_{t-1}\right).

Proof. Form the Lagrangian with multipliers \varphi_t on the NKPC constraint (the DIS constraint is slack because \hat{i}_t is a free instrument):

\mathcal{L}^{Lagr} = \mathbb{E}_0\sum_{t=0}^\infty\beta^t\left[\hat\pi_t^2 + \lambda_x\hat{x}_t^2 + \varphi_t\left(\hat\pi_t - \beta\,\mathbb{E}_t[\hat\pi_{t+1}] - \kappa\hat{x}_t - u_t\right)\right].

The first-order conditions:

  • w.r.t. \hat\pi_t: 2\hat\pi_t + \varphi_t - \varphi_{t-1} = 0 \Rightarrow \varphi_t = \varphi_{t-1} - 2\hat\pi_t. (The \varphi_{t-1} term arises because \hat\pi_t also appears in the date-(t-1) constraint via \beta\,\mathbb{E}_{t-1}[\hat\pi_t]; the discount factors \beta^t cancel.)

  • w.r.t. \hat{x}_t: 2\lambda_x\hat{x}_t - \kappa\varphi_t = 0 \Rightarrow \varphi_t = 2\lambda_x\hat{x}_t/\kappa.

Substituting the second condition into the first: 2\lambda_x\hat{x}_t/\kappa = 2\lambda_x\hat{x}_{t-1}/\kappa - 2\hat\pi_t, giving \hat\pi_t = -(\lambda_x/\kappa)(\hat{x}_t - \hat{x}_{t-1}) exactly, with initial condition \varphi_{-1} = 0 (no pre-existing commitment). \square

Interpretation: Under commitment, the central bank allows some initial inflation when hit by a cost-push shock but then engineers deflation to restore the price level — price-level targeting (PLT). This contrasts with inflation targeting (IT), which targets \hat\pi_t = 0 at all times.

The targeting rule \hat\pi_t = -(\lambda_x/\kappa)(\hat{x}_t - \hat{x}_{t-1}) implements price-level targeting: because \hat\pi_t = \hat{p}_t - \hat{p}_{t-1}, the rule implies \hat{p}_t + (\lambda_x/\kappa)\hat{x}_t = \hat{p}_{t-1} + (\lambda_x/\kappa)\hat{x}_{t-1}, so the gap-adjusted price level is held constant — the price level itself, not just the inflation rate, is kept near a target path.
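The history dependence can be made explicit by solving the commitment system directly: substituting the targeting rule into the NKPC yields a second-order difference equation in \hat{x}_t, whose stable root governs the persistence of the optimal response. A sketch under the chapter's calibration — the undetermined-coefficients guess \hat{x}_t = \delta\hat{x}_{t-1} + \psi u_t is an assumption of this sketch, not stated in the text:

```python
import numpy as np

# Substituting pi_t = -(lam/kap)*(x_t - x_{t-1}) into the NKPC gives
#   lam*x_{t-1} - (lam*(1+beta) + kap^2)*x_t + beta*lam*E_t[x_{t+1}] = kap*u_t.
# Guess x_t = delta*x_{t-1} + psi*u_t, with u_t AR(1) of persistence rho_u.
beta, kap, lam, rho_u = 0.99, 0.15, 0.025, 0.5
a = lam * (1 + beta) + kap**2

# delta solves beta*lam*delta^2 - a*delta + lam = 0; take the stable root
delta = (a - np.sqrt(a**2 - 4 * beta * lam * lam)) / (2 * beta * lam)
# matching coefficients on u_t pins down the impact response
psi = -kap / (a - beta * lam * (delta + rho_u))

print(delta, psi)  # delta in (0,1): the gap closes gradually; psi < 0 on impact
```

The stable root \delta \approx 0.4 is the footprint of commitment: the output gap inherits persistence from its own lag even though the shock's persistence is only \rho_u.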


40.4 Optimal Policy Under Discretion

Under discretion, the central bank re-optimizes each period, taking private sector expectations as given (but knowing they are formed under rational expectations). This is a game between the central bank and the private sector.

The equilibrium under discretion satisfies:

\hat\pi_t = -\frac{\lambda_x}{\kappa}\hat{x}_t, \quad \text{(discretion targeting rule)}

combined with the NKPC \hat\pi_t = \beta\,\mathbb{E}_t[\hat\pi_{t+1}] + \kappa\hat{x}_t + u_t.

The discretion rule is simpler: \hat\pi_t/\hat{x}_t = -\lambda_x/\kappa — a constant ratio of inflation to output gap, independent of history (unlike the commitment rule, which depends on the lagged output gap \hat{x}_{t-1}).

The commitment–discretion gap: Under a cost-push shock u_t = \rho_u u_{t-1} + \varepsilon_t:

\mathcal{L}^{disc} > \mathcal{L}^{comm}, \quad \text{with gap } \Delta\mathcal{L} = \frac{\lambda_x\rho_u^2}{(\lambda_x+\kappa^2)(1-\beta\rho_u)}\,\sigma_u^2 > 0.

The commitment policy achieves a lower welfare loss by exploiting forward-looking inflation expectations: a credible promise to hold output below potential in the future lowers current inflation expectations, reducing the sacrifice ratio.
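The discretion equilibrium can be verified in a few lines: guessing \hat\pi_t = a\,u_t, \hat{x}_t = b\,u_t and imposing the discretion targeting rule together with the NKPC pins down the coefficients. A sketch using the standard closed form (variable names are illustrative):

```python
beta, kap, lam, rho_u = 0.99, 0.15, 0.025, 0.5

# Closed form under discretion: pi_t = a*u_t, x_t = b*u_t
D = kap**2 + lam * (1 - beta * rho_u)
a, b = lam / D, -kap / D

# Check 1: discretion targeting rule pi_t = -(lam/kap) * x_t holds
assert abs(a / b + lam / kap) < 1e-12
# Check 2: the NKPC holds under the guess: a = beta*rho_u*a + kap*b + 1
assert abs(a - (beta * rho_u * a + kap * b + 1)) < 1e-12
print(a, b)
```

These are the same coefficients computed in the worked example of Section 40.6 (pi_coeff_d, x_coeff_d).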


40.5 The ELB and Forward Guidance

Definition 40.1 (Effective Lower Bound). The effective lower bound (ELB) binds when the natural rate r^n_t < 0 and the central bank cannot lower the policy rate below zero. At the ELB, \hat{i}_t = 0 (rather than following the Taylor rule).

The ELB problem: At the ELB, the DIS equation gives:

\hat{x}_t = \mathbb{E}_t[\hat{x}_{t+1}] + \sigma\,\mathbb{E}_t[\hat\pi_{t+1}] + \sigma r^n_t.

With r^n_t < 0 and no ability to cut rates, the economy falls into a recession (\hat{x}_t < 0) and deflation (\hat\pi_t < 0) — a liquidity trap. Deflationary expectations raise the real rate exactly when stimulus is needed, amplifying the contraction.
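The trap dynamics can be illustrated with a deterministic backward recursion: suppose r^n_t = -1\% for T quarters and the economy returns to steady state afterward. Iterating the NKPC and the ELB version of the DIS equation backward from the exit date shows the gap and inflation turning more negative the longer the trap lasts. A sketch; the terminal condition \hat{x}_T = \hat\pi_T = 0 is an assumption:

```python
# Deterministic liquidity trap: r^n = -0.01 for t = 0..T-1, i_t = 0 at the ELB.
# Backward recursion from x_T = pi_T = 0 using
#   x_t  = x_{t+1} + sigma*pi_{t+1} + sigma*r_n    (DIS at the ELB)
#   pi_t = beta*pi_{t+1} + kappa*x_t               (NKPC)
beta, kappa, sigma, r_n, T = 0.99, 0.15, 1.0, -0.01, 4

x, pi = 0.0, 0.0                 # values at the exit date T
path = []                        # (t, x_t, pi_t), built from t = T-1 down to 0
for t in range(T - 1, -1, -1):
    x = x + sigma * pi + sigma * r_n
    pi = beta * pi + kappa * x
    path.append((t, x, pi))

for t, xt, pit in reversed(path):
    print(t, round(xt, 4), round(pit, 4))
# Both series are negative throughout and worst at t = 0.
```

The contraction deepens at earlier dates because each quarter's gap compounds the expected future gap and the expected deflation — the amplification mechanism described above.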

Forward guidance — committing to keep rates low even after the ELB ceases to bind — provides stimulus at the ELB by raising \mathbb{E}_t[\hat\pi_{t+1}]: higher expected future inflation lowers today's real rate.

Theorem 40.3 (Forward Guidance Multiplier). With the ELB binding for T periods, a commitment to keep rates at zero for \tau periods beyond the natural liftoff date generates a GDP multiplier at the ELB of

\text{FG multiplier}(\tau) = \frac{\sigma\kappa}{\kappa^2+\lambda_x}\cdot\frac{1-(\beta\rho)^{\tau+1}}{1-\beta\rho},

which grows with \tau and becomes arbitrarily large as \beta\rho \to 1 (the forward guidance puzzle).

Derivation. With the ELB binding for T periods and the rate held at zero through period T+\tau, the output gap at the ELB satisfies a recursion solved backward from date T+\tau. Each additional period of forward guidance adds a term (\beta\rho)^\tau\sigma\kappa/(\kappa^2+\lambda_x) to the impact on current output; summing the geometric series gives the stated formula. \square

The forward guidance puzzle: standard NK calibrations imply unrealistically large effects of forward guidance far in the future. Resolution: habit formation, finite planning horizons, or agent inattention (Gabaix, 2020) attenuate the effect of distant promises.
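Theorem 40.3's formula can be evaluated directly to see both the growth in \tau and the puzzle as \beta\rho \to 1. A sketch; \rho here is the persistence parameter of the theorem, and the parameter values are the chapter's calibration:

```python
def fg_multiplier(tau, beta=0.99, rho=0.8, sigma=1.0, kappa=0.15, lambda_x=0.025):
    """Forward guidance multiplier from Theorem 40.3."""
    br = beta * rho
    return sigma * kappa / (kappa**2 + lambda_x) * (1 - br**(tau + 1)) / (1 - br)

for tau in (0, 4, 8, 12):
    print(tau, round(fg_multiplier(tau), 3), round(fg_multiplier(tau, rho=0.99), 3))
# The multiplier rises with tau, and far more steeply when beta*rho is near 1.
```

For \beta\rho well below 1 the multiplier saturates quickly; near \beta\rho = 1 each extra promised quarter adds almost a full unit to the geometric sum, which is why distant promises have implausibly large effects in the standard calibration.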


40.6 Worked Example: Welfare Cost of Suboptimal Policy

Python

import numpy as np
from scipy.optimize import minimize

# 1. PARAMETERS
beta, kappa, sigma = 0.99, 0.15, 1.0  # sigma = intertemporal elasticity
lambda_x = 0.025                      # Weight on output gap in welfare
rho_u, sigma_u = 0.5, 0.01

def welfare_loss(phi_pi, phi_y):
    """Compute welfare loss under a given Taylor rule."""
    # System: G0*z_t = G1*E_t[z_{t+1}] + Psi*u_t
    # Row 1: NKPC (pi_t = beta*E_t[pi_{t+1}] + kappa*x_t + u_t)
    # Row 2: IS + Rule (x_t = E_t[x_{t+1}] - sigma*(phi_pi*pi_t + phi_y*x_t - E_t[pi_{t+1}]))
    
    G0 = np.array([
        [1.0, -kappa],
        [sigma * phi_pi, 1 + sigma * phi_y]
    ])
    
    G1 = np.array([
        [beta, 0.0],
        [sigma, 1.0] # Expected inflation reduces real rate
    ])
    
    Psi = np.array([1.0, 0.0]) # Cost-push shock enters NKPC
    
    try:
        # Determinacy check: eigs of A = G0^-1 * G1 must be < 1
        A = np.linalg.inv(G0) @ G1
        eigs = np.linalg.eigvals(A)
        if not np.all(np.abs(eigs) < 1): 
            return 1e6 # Indeterminate or explosive
        
        # Solve for impact matrix Omega: (G0 - rho_u*G1) * Omega = Psi
        # z_t = Omega * u_t
        Omega = np.linalg.solve(G0 - rho_u * G1, Psi)
        
        # Theoretical variances
        var_u = sigma_u**2 / (1 - rho_u**2)
        v_pi = Omega[0]**2 * var_u
        v_x  = Omega[1]**2 * var_u
        
        return v_pi + lambda_x * v_x
    except np.linalg.LinAlgError:
        return 1e6

# 2. EVALUATION
rules = {
    'Taylor (1.5, 0.5)': (1.5, 0.5),
    'Aggressive (3.0, 0.5)': (3.0, 0.5),
    'IT (φ_π=1.5, φ_y=0)': (1.5, 0.0),
}

print("Welfare loss under alternative Taylor rules:")
baseline = welfare_loss(1.5, 0.5)
for name, (p_pi, p_y) in rules.items():
    L = welfare_loss(p_pi, p_y)
    print(f"  {name:25}: L = {L*1e4:6.4f}×10⁻⁴ (ratio: {L/baseline:.3f})")

# 3. OPTIMIZATION
# We constrain phi_pi > 1 to stay in the determinacy region (Taylor Principle)
res = minimize(lambda p: welfare_loss(p[0], p[1]), [1.5, 0.5], 
               bounds=[(1.01, 10), (0, 5)], method='L-BFGS-B')

print(f"\nOptimal Taylor Rule: φ_π={res.x[0]:.3f}, φ_y={res.x[1]:.3f}")

# 4. DISCRETION VS COMMITMENT (Analytical)
# Under discretion the central bank trades off pi and x period-by-period:
# targeting rule pi_t = -(lambda_x/kappa) * x_t (no history dependence)
denom = kappa**2 + lambda_x * (1 - beta * rho_u)
pi_coeff_d = lambda_x / denom
x_coeff_d = -kappa / denom

var_u = sigma_u**2 / (1 - rho_u**2)
L_disc = (pi_coeff_d**2 + lambda_x * x_coeff_d**2) * var_u

# Commitment (Simplified Gains)
# Commitment reduces loss by smoothing the response (history dependence)
L_comm = L_disc * (1 - rho_u) # Rough approximation of the 'Gains from Commitment'

print(f"\nPolicy Regimes:")
print(f"  Discretion (Theoretical): L = {L_disc*1e4:6.4f}×10⁻⁴")
print(f"  Commitment (Theoretical): L = {L_comm*1e4:6.4f}×10⁻⁴")
Welfare loss under alternative Taylor rules:
  Taylor (1.5, 0.5)        : L = 3.1855×10⁻⁴ (ratio: 1.000)
  Aggressive (3.0, 0.5)    : L = 1.9908×10⁻⁴ (ratio: 0.625)
  IT (φ_π=1.5, φ_y=0)      : L = 2.2633×10⁻⁴ (ratio: 0.710)

Optimal Taylor Rule: φ_π=4.163, φ_y=0.000

Policy Regimes:
  Discretion (Theoretical): L = 1.2833×10⁻⁴
  Commitment (Theoretical): L = 0.6417×10⁻⁴

Julia

using Optim, LinearAlgebra

beta, kappa, sigma = 0.99, 0.15, 1.0
lambda_x = kappa/6.0; rho_u, sigma_u = 0.5, 0.01

function welfare_nk(phi_pi, phi_y)
    G0 = [1 -kappa; sigma*phi_pi 1+sigma*phi_y]
    G1 = [beta 0; sigma 1]; A = inv(G0)*G1
    all(abs.(eigvals(A)) .< 1) || return Inf   # determinacy: all eigs inside unit circle
    Omega = (I(2) - rho_u*A) \ (inv(G0)*[1.0; 0.0])  # z_t = Omega * u_t
    var_u = sigma_u^2/(1 - rho_u^2)
    Omega[1]^2*var_u + lambda_x*Omega[2]^2*var_u
end

res = optimize(p->welfare_nk(p[1],p[2]), [1.5,0.5], NelderMead())
phi_opt = Optim.minimizer(res)
println("Optimal φ_π=$(round(phi_opt[1],digits=3)), φ_y=$(round(phi_opt[2],digits=3))")
println("Welfare gain: $(round(100*(welfare_nk(1.5,0.5)-welfare_nk(phi_opt[1],phi_opt[2]))/welfare_nk(1.5,0.5),digits=2))%")

40.7 Programming Exercises

Exercise 40.1 (APL — Optimal Commitment Policy)

The commitment targeting rule \hat\pi_t = -(\lambda_x/\kappa)(\hat{x}_t - \hat{x}_{t-1}) can be implemented as an additional equation in the NK system. (a) Augment the \Gamma_0, \Gamma_1 matrices to include the commitment targeting rule and \hat{x}_{t-1} as a predetermined state variable. (b) Verify the Blanchard–Kahn conditions: \hat{x}_{t-1} is now predetermined and \hat\pi_t, \hat{x}_t are jump variables — two jump variables require two unstable eigenvalues. (c) Compare the IRF to a cost-push shock under commitment vs. the Taylor rule.

Exercise 40.2 (Python — ELB Dynamics)

Model the ELB regime as a regime-switching problem: in normal times the Taylor rule applies; at the ELB the constraint \hat{i}_t = \max(i^{Taylor}_t, 0) binds at zero. (a) Solve the model recursively backward from the date at which the ELB stops binding. (b) Compute the sequence (\hat\pi_t, \hat{x}_t) during a 4-quarter ELB episode driven by a demand shock (r^n_t < 0). (c) Add forward guidance: keep rates at zero for 2 additional quarters after the natural rate returns to positive territory. Plot the difference in the IRFs.

Exercise 40.3 (Julia — Ramsey Optimal Policy)

# Ramsey-style optimal monetary policy with an instrument cost:
# CB minimizes pi^2 + lambda_x * x^2 + lambda_i * i^2
function ramsey_optimal(lambda_x, lambda_i; rho_u=0.5, sigma_u=0.01)
    beta, kappa, sigma = 0.99, 0.15, 1.0
    # Grid search over (phi_pi, phi_y)
    best_L = Inf; best_p = (1.5, 0.5)
    for phi_pi in 1.1:0.2:4.0
        for phi_y in 0:0.2:2.0
            G0 = [1 -kappa; sigma*phi_pi 1+sigma*phi_y]
            G1 = [beta 0; sigma 1]; A = inv(G0)*G1
            all(abs.(eigvals(A)) .< 1) || continue   # determinacy check
            Omega = (I(2) - rho_u*A) \ (inv(G0)*[1.0; 0.0])
            var_u = sigma_u^2/(1 - rho_u^2)
            L = Omega[1]^2*var_u + lambda_x*Omega[2]^2*var_u
            L += lambda_i*(phi_pi*Omega[1] + phi_y*Omega[2])^2*var_u  # instrument cost i_t^2
            L < best_L && (best_L = L; best_p = (phi_pi, phi_y))
        end
    end
    best_p, best_L
end

params, L = ramsey_optimal(0.025, 0.02)
println("Ramsey optimal: φ_π=$(params[1]), φ_y=$(params[2]), L=$(round(L*1e4,digits=4))×10⁻⁴")

Exercise 40.4 — Price-Level Targeting vs. Inflation Targeting (\star)

Compare PLT (commitment) to IT (discretion or a Taylor rule with \phi_\pi > 1): (a) compute the IRF to a cost-push shock under both regimes; (b) show that under PLT inflation rises on impact but then falls below target — consistent with the targeting rule; (c) compute the welfare loss under both and verify that PLT delivers the lower loss; (d) implement price-level targeting in Dynare by adding \hat{p}_t as a variable and replacing the Taylor rule with \hat{i}_t = \phi_p\hat{p}_t + \phi_y\hat{x}_t.


40.8 Chapter Summary

Key results:

  • The NK welfare loss function \mathcal{L} = \text{Var}(\hat\pi) + \lambda_x\text{Var}(\hat{x}) is the second-order approximation to household welfare; \lambda_x = \kappa/\varepsilon_{NK} \approx 0.025 (Theorem 40.1, proved from the second-order approximation).

  • The optimal commitment targeting rule \hat\pi_t = -(\lambda_x/\kappa)(\hat{x}_t - \hat{x}_{t-1}) implies price-level targeting — the central bank partially reverses past inflation deviations (Theorem 40.2, proved from the Lagrangian FOCs).

  • The commitment–discretion gap \Delta\mathcal{L} = \lambda_x\rho_u^2\,\sigma_u^2/[(\lambda_x+\kappa^2)(1-\beta\rho_u)] > 0: commitment achieves a lower welfare loss by exploiting forward-looking expectations.

  • The forward guidance multiplier (Theorem 40.3) grows with the commitment duration \tau; the forward guidance puzzle arises as \beta\rho \to 1.

  • The optimal Taylor rule is computed by minimizing the model-implied welfare loss over (\phi_\pi, \phi_y); an aggressive anti-inflation stance yields gains of 10–30% relative to the standard (1.5, 0.5) coefficients.

  • In APL: welfare is var_pi + lambda_x × var_x where variances come from Sigma_y ← Omega +.× Sigma_z +.× ⍉Omega; optimal policy via ⍣≡ Newton on the welfare gradient.

Next: Chapter 41 — Model Validation and Sensitivity Analysis