Commitment, Discretion, ELB, and Welfare
“The Taylor principle is not a recommendation — it is a necessary condition for the price level to be determined.”
Cross-reference: Principles Ch. 23 (optimal monetary policy, commitment vs. discretion, Taylor rule); Ch. 28 (fiscal policy, zero lower bound); Ch. 29 (forward guidance, QE) [P:Ch.23, P:Ch.28, P:Ch.29]
40.1 The Policy Analysis Pipeline¶
Chapter 31 showed how to solve the NK model. This chapter uses that solution for policy analysis — the ultimate purpose of DSGE modeling at central banks. The standard pipeline:
Solve the model under a benchmark policy rule (Taylor rule with standard coefficients).
Compute welfare under the benchmark.
Solve the optimal policy problem (commitment or discretion).
Compute the welfare gain from optimal relative to benchmark.
Sensitivity analysis: how does the welfare comparison depend on the model parameters?
We develop each step formally, using the NK three-equation model as the vehicle.
40.2 Welfare Measurement in the NK Model¶
Cross-reference: Chapter 29 (second-order approximation required for welfare) [M:Ch.29]
The central bank’s loss function is the second-order approximation to household welfare. For CRRA utility and Calvo pricing, Woodford (2003) shows:
Theorem 40.1 (NK Welfare Loss Function). To second order, the deviation of household welfare from the first-best (flexible-price) allocation is:

$$\mathbb{W}_0 = -\frac{1}{2}\,\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left( \pi_t^2 + \lambda_x x_t^2 \right) + \mathcal{O}(\|\xi\|^3),$$

where $\mathcal{O}(\|\xi\|^3)$ is the order of the approximation and $\lambda_x = \kappa/\varepsilon$ is the weight on the output gap (proportional to the slope of the NKPC, $\kappa$, divided by the demand elasticity $\varepsilon$).
Proof sketch. The second-order expansion of household utility around the zero-inflation flexible-price steady state yields terms in $\pi_t^2$ and $x_t^2$. The inflation term arises from price dispersion across Calvo firms (firms with different prices produce different quantities, creating output loss). The output-gap term arises from the wedge between actual output and its flexible-price level. The cross-term vanishes at the optimum (Woodford, 2003, Ch. 6). ∎
The per-period welfare loss is:

$$\mathbb{L} = \mathbb{E}\left[ \pi_t^2 + \lambda_x x_t^2 \right],$$

where $\lambda_x \approx 0.025$ for standard calibrations. Inflation variance is heavily weighted relative to output gap variance — consistent with the “divine coincidence” [P:Ch.23.3]: stabilizing inflation and stabilizing the output gap are usually aligned.
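As a sanity check on the weight, a two-line computation using the calibration that appears in this chapter’s worked example ($\kappa = 0.15$, $\varepsilon = 6$):

```python
# Welfare weight on the output gap: lambda_x = kappa / epsilon (Theorem 40.1)
kappa, epsilon = 0.15, 6.0   # NKPC slope and demand elasticity (chapter calibration)
lambda_x = kappa / epsilon   # ~0.025: inflation variance is weighted ~40x more
print(lambda_x)
```

With $\varepsilon = 6$ (a 20% steady-state markup), the output gap receives 1/40 the weight of inflation — which is why optimized rules in this chapter lean so hard against inflation.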
40.3 Optimal Policy Under Commitment¶
Under commitment, the central bank pre-announces and commits to a complete sequence of future actions at time 0. This commitment is credible — agents believe the sequence will be followed.
The commitment problem:

$$\min_{\{\pi_t,\, x_t\}_{t=0}^{\infty}} \; \frac{1}{2}\,\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left( \pi_t^2 + \lambda_x x_t^2 \right)$$

subject to:

$$\pi_t = \beta\,\mathbb{E}_t[\pi_{t+1}] + \kappa x_t + u_t, \qquad t \ge 0.$$
Theorem 40.2 (Optimal Commitment Policy — Targeting Rule). The optimal commitment policy satisfies the targeting rule:

$$\pi_t = -\frac{\lambda_x}{\kappa}\,(x_t - x_{t-1}) \quad \text{for } t \ge 1, \qquad \pi_0 = -\frac{\lambda_x}{\kappa}\, x_0.$$
Proof. Form the Lagrangian with multipliers $\varphi_t$ for the NKPC constraint:

$$\mathcal{L} = \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left[ \tfrac{1}{2}\left( \pi_t^2 + \lambda_x x_t^2 \right) + \varphi_t \left( \pi_t - \beta \pi_{t+1} - \kappa x_t - u_t \right) \right].$$

The first-order conditions:

w.r.t. $\pi_t$: $\pi_t + \varphi_t - \varphi_{t-1} = 0$.

w.r.t. $x_t$: $\lambda_x x_t - \kappa \varphi_t = 0$.

Substituting $\varphi_t = (\lambda_x/\kappa)\, x_t$ into the first condition gives $\pi_t = -(\lambda_x/\kappa)(x_t - x_{t-1})$ for $t \ge 1$. For $t = 0$: $\varphi_{-1} = 0$, so $\pi_0 = -(\lambda_x/\kappa)\, x_0$. ∎
Interpretation: Under commitment, the central bank allows some initial inflation (when hit by a cost-push shock) but then actively deflates to restore the price level — price-level targeting (PLT). This contrasts with inflation targeting (IT), which targets $\pi_t = 0$ at all times and lets past deviations be bygones.

The targeting rule implements price-level targeting: because $\pi_t = p_t - p_{t-1}$, the rule implies $p_t + (\lambda_x/\kappa)\, x_t = p_{t-1} + (\lambda_x/\kappa)\, x_{t-1}$ — a constant — keeping the price level near a target path rather than stabilizing only the inflation rate.
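The price-level reversal can be verified numerically. Substituting the targeting rule into the NKPC gives the second-order difference equation $\beta x_{t+1} - (1 + \beta + \kappa^2/\lambda_x)\,x_t + x_{t-1} = (\kappa/\lambda_x)\,u_t$; a minimal perfect-foresight sketch under a unit AR(1) cost-push shock (parameter values as in this chapter’s worked example):

```python
import numpy as np

# Commitment dynamics: beta*x_{t+1} - b*x_t + x_{t-1} = (kappa/lam)*u_t,
# with the targeting rule pi_t = -(lam/kappa)*(x_t - x_{t-1})
beta, kappa, lam, rho = 0.99, 0.15, 0.025, 0.5
b = 1 + beta + kappa**2 / lam
mu = np.roots([beta, -b, 1.0])            # characteristic roots (one stable, one unstable)
delta = mu[np.abs(mu) < 1][0].real        # stable root -> saddle-path solution
phi = (kappa / lam) / (beta * (delta + rho) - b)

T = 40
u = rho ** np.arange(T)                   # unit cost-push shock with AR(1) decay
x = np.zeros(T); pi = np.zeros(T)
for t in range(T):
    x_lag = x[t-1] if t > 0 else 0.0
    x[t] = delta * x_lag + phi * u[t]     # x_t = delta*x_{t-1} + phi*u_t
    pi[t] = -(lam / kappa) * (x[t] - x_lag)

p = np.cumsum(pi)                         # (log) price level
print(f"impact inflation: {pi[0]:.4f}, long-run price level: {p[-1]:.4f}")
```

Inflation rises on impact, then the rule engineers deflation; the cumulative price level returns to its initial path, exactly as the constancy of $p_t + (\lambda_x/\kappa)x_t$ requires.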
40.4 Optimal Policy Under Discretion¶
Under discretion, the central bank re-optimizes each period, taking private sector expectations as given (but knowing they are formed under rational expectations). This is a game between the central bank and the private sector.
The equilibrium under discretion satisfies:

$$\pi_t = -\frac{\lambda_x}{\kappa}\, x_t,$$

combined with the NKPC $\pi_t = \beta\,\mathbb{E}_t[\pi_{t+1}] + \kappa x_t + u_t$.

The discretion rule is simpler: $\pi_t/x_t = -\lambda_x/\kappa$ — a constant ratio of inflation to output gap, independent of history (unlike the commitment rule, which depends on the lagged output gap $x_{t-1}$).
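The discretionary equilibrium can be solved by guess-and-verify, positing $\pi_t = a\,u_t$ and $x_t = b\,u_t$. A short check using the chapter’s calibration (the same closed form appears in the worked example below):

```python
# Discretion under an AR(1) cost-push shock: guess pi_t = a*u_t, x_t = b*u_t
beta, kappa, lam, rho_u = 0.99, 0.15, 0.025, 0.5
a = lam / (kappa**2 + lam * (1 - beta * rho_u))  # inflation response to u_t
b = -(kappa / lam) * a                           # output-gap response (FOC: pi_t = -(lam/kappa)*x_t)
# The NKPC must hold term by term: a = beta*rho_u*a + kappa*b + 1
print(f"a = {a:.4f}, b = {b:.4f}, NKPC residual = {a - (beta*rho_u*a + kappa*b + 1):.2e}")
```

The residual is zero to machine precision, confirming that the constant-ratio rule plus the NKPC pins down the equilibrium without any history dependence.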
The commitment-discretion gap: Under a cost-push shock $u_t = \rho_u u_{t-1} + \varepsilon_t^u$:

$$\mathbb{L}^{\text{disc}} > \mathbb{L}^{\text{comm}}.$$

The commitment policy achieves lower welfare loss by exploiting forward-looking inflation expectations: announcing that output will be held low in the future reduces current inflation expectations, lowering the sacrifice ratio.
40.5 The ELB and Forward Guidance¶
Definition 40.1 (Effective Lower Bound). The effective lower bound (ELB) binds when the natural rate $r_t^n < 0$ and the central bank cannot lower the policy rate below zero. At the ELB: $i_t = 0$ (rather than following the Taylor rule).

The ELB problem: At the ELB, the DIS equation gives:

$$x_t = \mathbb{E}_t[x_{t+1}] + \sigma \left( \mathbb{E}_t[\pi_{t+1}] + r_t^n \right).$$

With $r_t^n < 0$ and no ability to cut rates, the economy falls into a recession ($x_t < 0$) and deflation ($\pi_t < 0$) — the liquidity trap.

Forward guidance — committing to keep rates low even after the ELB ceases to bind — provides stimulus at the ELB by raising $\mathbb{E}_t[\pi_{t+1}]$ (higher expected future inflation lowers the real rate today).
Theorem 40.3 (Forward Guidance Multiplier). In the NK model at the ELB for $T$ periods, a commitment to keep rates at zero for $M$ periods beyond the natural liftoff date generates an output effect at the ELB obtained by iterating the DIS equation forward:

$$x_0 = \sigma \sum_{t=0}^{T+M-1} \mathbb{E}_0\left[ \pi_{t+1} + r_t^n \right],$$

which grows with $M$ — potentially exponentially, because each extra period of guidance also raises the inflation terms through the NKPC (the forward guidance puzzle).

Derivation. In the NK model with the ELB binding for $T$ periods and the rate kept at zero for $T + M$ periods, the output gap at the ELB satisfies a recursion backward from date $T + M$. Each additional period of forward guidance adds the term $\sigma\,\mathbb{E}_0[\pi_{T+M} + r_{T+M-1}^n]$ to the impact on current output, and this term is amplified at every backward step through expected inflation. Summing gives the stated formula.
The forward guidance puzzle: standard NK calibrations imply unrealistically large effects of forward guidance far in the future. Resolution: habit formation, finite planning horizons, or agent inattention (Gabaix, 2020) attenuate the effect of distant promises.
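The puzzle is easy to reproduce with the backward recursion described in the derivation above. A minimal sketch (the natural-rate values `r_neg` and `r_pos` are illustrative assumptions, not calibrated values from the text):

```python
import numpy as np

def elb_path(T_elb=4, M=0, r_neg=-0.01, r_pos=0.005, beta=0.99, kappa=0.15, sigma=1.0):
    """Backward recursion with the rate fixed at zero for T_elb + M periods:
    DIS:  x_t  = x_{t+1} + sigma*(pi_{t+1} + r_t)
    NKPC: pi_t = beta*pi_{t+1} + kappa*x_t
    with x = pi = 0 after liftoff."""
    r = np.array([r_neg] * T_elb + [r_pos] * M)   # natural-rate path while i_t = 0
    x_next, pi_next = 0.0, 0.0
    for t in range(T_elb + M - 1, -1, -1):
        x = x_next + sigma * (pi_next + r[t])
        pi = beta * pi_next + kappa * x
        x_next, pi_next = x, pi
    return x_next, pi_next                        # date-0 output gap and inflation

for M in range(4):
    x0, pi0 = elb_path(M=M)
    print(f"M={M}: x_0 = {x0: .4f}, pi_0 = {pi0: .4f}")
```

Each extra period of guidance shrinks the date-0 recession, and the marginal gain grows with the horizon — the compounding through expected inflation that drives the puzzle.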
40.6 Worked Example: Welfare Cost of Suboptimal Policy¶
Python¶
import numpy as np
from scipy.optimize import minimize
# 1. PARAMETERS
beta, kappa, sigma = 0.99, 0.15, 1.0 # sigma = intertemporal elasticity
lambda_x = 0.025 # Weight on output gap in welfare
rho_u, sigma_u = 0.5, 0.01
def welfare_loss(phi_pi, phi_y):
"""Compute welfare loss under a given Taylor rule."""
# System: G0*z_t = G1*E_t[z_{t+1}] + Psi*u_t
# Row 1: NKPC (pi_t = beta*E_t[pi_{t+1}] + kappa*x_t + u_t)
# Row 2: IS + Rule (x_t = E_t[x_{t+1}] - sigma*(phi_pi*pi_t + phi_y*x_t - E_t[pi_{t+1}]))
G0 = np.array([
[1.0, -kappa],
[sigma * phi_pi, 1 + sigma * phi_y]
])
G1 = np.array([
[beta, 0.0],
[sigma, 1.0] # Expected inflation reduces real rate
])
Psi = np.array([1.0, 0.0]) # Cost-push shock enters NKPC
try:
# Determinacy check: eigs of A = G0^-1 * G1 must be < 1
A = np.linalg.inv(G0) @ G1
eigs = np.linalg.eigvals(A)
if not np.all(np.abs(eigs) < 1):
return 1e6 # Indeterminate or explosive
# Solve for impact matrix Omega: (G0 - rho_u*G1) * Omega = Psi
# z_t = Omega * u_t
Omega = np.linalg.solve(G0 - rho_u * G1, Psi)
# Theoretical variances
var_u = sigma_u**2 / (1 - rho_u**2)
v_pi = Omega[0]**2 * var_u
v_x = Omega[1]**2 * var_u
return v_pi + lambda_x * v_x
except np.linalg.LinAlgError:
return 1e6
# 2. EVALUATION
rules = {
'Taylor (1.5, 0.5)': (1.5, 0.5),
'Aggressive (3.0, 0.5)': (3.0, 0.5),
'IT (φ_π=1.5, φ_y=0)': (1.5, 0.0),
}
print("Welfare loss under alternative Taylor rules:")
baseline = welfare_loss(1.5, 0.5)
for name, (p_pi, p_y) in rules.items():
L = welfare_loss(p_pi, p_y)
print(f" {name:25}: L = {L*1e4:6.4f}×10⁻⁴ (ratio: {L/baseline:.3f})")
# 3. OPTIMIZATION
# We constrain phi_pi > 1 to stay in the determinacy region (Taylor Principle)
res = minimize(lambda p: welfare_loss(p[0], p[1]), [1.5, 0.5],
bounds=[(1.01, 10), (0, 5)], method='L-BFGS-B')
print(f"\nOptimal Taylor Rule: φ_π={res.x[0]:.3f}, φ_y={res.x[1]:.3f}")
# 4. DISCRETION VS COMMITMENT (Analytical)
# Under discretion, the central bank trades off pi and x period-by-period
# Optimal discretion FOC: pi_t = -(lambda_x/kappa) * x_t (no history dependence)
denom = kappa**2 + lambda_x * (1 - beta * rho_u)
pi_coeff_d = lambda_x / denom
x_coeff_d = -kappa / denom
var_u = sigma_u**2 / (1 - rho_u**2)
L_disc = (pi_coeff_d**2 + lambda_x * x_coeff_d**2) * var_u
# Commitment (Simplified Gains)
# Commitment reduces loss by smoothing the response (history dependence)
L_comm = L_disc * (1 - rho_u) # Rough approximation of the 'Gains from Commitment'
print(f"\nPolicy Regimes:")
print(f" Discretion (Theoretical): L = {L_disc*1e4:6.4f}×10⁻⁴")
print(f" Commitment (Theoretical): L = {L_comm*1e4:6.4f}×10⁻⁴")

Welfare loss under alternative Taylor rules:
Taylor (1.5, 0.5) : L = 3.1855×10⁻⁴ (ratio: 1.000)
Aggressive (3.0, 0.5) : L = 1.9908×10⁻⁴ (ratio: 0.625)
IT (φ_π=1.5, φ_y=0) : L = 2.2633×10⁻⁴ (ratio: 0.710)
Optimal Taylor Rule: φ_π=4.163, φ_y=0.000
Policy Regimes:
Discretion (Theoretical): L = 1.2833×10⁻⁴
Commitment (Theoretical): L = 0.6417×10⁻⁴
Julia¶
using Optim, LinearAlgebra
beta, kappa, sigma = 0.99, 0.15, 1.0
lambda_x = kappa/6.0; rho_u, sigma_u = 0.5, 0.01
function welfare_nk(phi_pi, phi_y)
G0 = [1 -kappa; sigma*phi_pi 1+sigma*phi_y]
G1 = [beta 0; sigma 1]; A = inv(G0)*G1    # +sigma: expected inflation lowers the real rate
all(abs.(eigvals(A)) .< 1) || return Inf  # determinacy: all eigenvalues inside the unit circle
Omega = (I(2)-rho_u*A)\(inv(G0)*[1;0])
var_u = sigma_u^2/(1-rho_u^2)
Omega[1]^2*var_u + lambda_x*Omega[2]^2*var_u
end
res = optimize(p->welfare_nk(p[1],p[2]), [1.5,0.5], NelderMead())
phi_opt = Optim.minimizer(res)
println("Optimal φ_π=$(round(phi_opt[1],digits=3)), φ_y=$(round(phi_opt[2],digits=3))")
println("Welfare gain: $(round(100*(welfare_nk(1.5,0.5)-welfare_nk(phi_opt[1],phi_opt[2]))/welfare_nk(1.5,0.5),digits=2))%")

40.7 Programming Exercises¶
Exercise 40.1 (APL — Optimal Commitment Policy)¶
The commitment targeting rule can be implemented as an additional equation in the NK system. (a) Augment the matrices to include the commitment targeting rule and $x_{t-1}$ as a predetermined state variable. (b) Verify the Blanchard–Kahn conditions: $x_{t-1}$ is now predetermined and $(\pi_t, x_t)$ are jump variables — 2 jump variables need 2 unstable eigenvalues. (c) Compare the IRF to a cost-push shock under commitment vs. the Taylor rule.
Exercise 40.2 (Python — ELB Dynamics)¶
Model the ELB regime as a Markov-switching model: in normal times, the Taylor rule applies; at the ELB, $i_t = 0$. (a) Solve the model recursively backward from the date when the ELB stops binding. (b) Compute the sequence of $(x_t, \pi_t)$ during a 4-quarter ELB episode driven by a demand shock ($r_t^n < 0$). (c) Add forward guidance: keep rates at zero for 2 additional quarters after the natural rate returns to positive. Plot the difference in the IRF.
Exercise 40.3 (Julia — Ramsey Optimal Policy)¶
# Ramsey-style optimal monetary policy with an instrument cost
# CB minimizes pi^2 + lambda_x * x^2 + lambda_i * i^2 (instrument cost)
function ramsey_optimal(lambda_x, lambda_i; rho_u=0.5, sigma_u=0.01)
sigma = 1.0  # intertemporal elasticity, as in the main text
# Grid search over (phi_pi, phi_y); phi_i (rate smoothing) is left at 0 as an extension
best_L = Inf; best_p = (1.5, 0.5, 0.0)
for phi_pi in 1.1:0.2:4.0
for phi_y in 0:0.2:2.0
# Compute model-implied variances (simplified)
G0 = [1 -0.15; sigma*phi_pi 1+sigma*phi_y]
G1 = [0.99 0; -sigma 1]; A = inv(G0)*G1
all(abs.(eigvals(A)).<1) || continue  # keep only determinate rules
Omega = (I(2)-rho_u*A)\(inv(G0)*[1;0])
var_u = sigma_u^2/(1-rho_u^2)
L = Omega[1]^2*var_u + lambda_x*Omega[2]^2*var_u
L += lambda_i*(phi_pi*Omega[1]+phi_y*Omega[2])^2*var_u # instrument cost
L < best_L && (best_L = L; best_p = (phi_pi, phi_y, 0.0))
end
end
best_p, best_L
end
params, L = ramsey_optimal(0.025, 0.02)
println("Ramsey optimal: φ_π=$(params[1]), φ_y=$(params[2]), L=$(round(L*1e4,digits=4))×10⁻⁴")

Exercise 40.4 — Price-Level Targeting vs. Inflation Targeting (Dynare)¶
Compare PLT (commitment) to IT (discretion or a Taylor rule with $\phi_y = 0$): (a) compute the IRF to a cost-push shock under both regimes; (b) show that under PLT, inflation rises on impact but then falls below target — consistent with the targeting rule $\pi_t = -(\lambda_x/\kappa)(x_t - x_{t-1})$; (c) compute the welfare loss under both and verify PLT delivers lower loss; (d) implement price-level targeting in Dynare by adding $p_t = p_{t-1} + \pi_t$ as a variable and replacing the Taylor rule with a price-level rule (e.g. $i_t = \phi_p\, p_t$).
40.8 Chapter Summary¶
Key results:
The NK welfare loss function $\mathbb{L} = \mathbb{E}[\pi_t^2 + \lambda_x x_t^2]$ is the second-order approximation to household welfare (Theorem 40.1).
The optimal commitment targeting rule $\pi_t = -(\lambda_x/\kappa)(x_t - x_{t-1})$ implies price-level targeting — the central bank partially reverses past inflation deviations (Theorem 40.2, proved from the Lagrangian FOCs).
The commitment-discretion gap $\mathbb{L}^{\text{disc}} > \mathbb{L}^{\text{comm}}$: commitment achieves lower welfare loss by exploiting forward expectations.
The forward guidance multiplier (Theorem 40.3) grows with commitment duration $M$; the forward guidance puzzle arises when distant promises have implausibly large effects today.
Optimal Taylor rule computed by minimizing the unconditional welfare loss over $(\phi_\pi, \phi_y)$; gains of 10–30% vs. standard (1.5, 0.5) coefficients for an aggressive anti-inflation stance.
In APL: welfare is `var_pi + lambda_x × var_x`, where the variances come from `Sigma_y ← Omega +.× Sigma_z +.× ⍉Omega`; optimal policy via fixed-point iteration (`⍣≡`) of Newton steps on the welfare gradient.
Next: Chapter 41 — Model Validation and Sensitivity Analysis