AR(1) Processes and Macroeconomic Uncertainty
“The theory of stochastic processes is the theory of time-indexed families of random variables. In macroeconomics, every variable of interest is one.”
Cross-reference: Principles Ch. 6 (time-series properties, HP filter, unit roots, data vintages); Ch. 15 (rational expectations, information sets); Ch. 27 (RBC calibration — TFP as AR(1)) [P:Ch.6, P:Ch.15, P:Ch.27]
5.1 Why Stochastic Models? The Centrality of Uncertainty¶
The deterministic dynamic models of Chapters 3 and 4 assume that once initial conditions and parameters are specified, the entire future path of the economy is known. This is an obviously wrong description of the world. GDP fluctuates for reasons that are not fully predictable in advance. Monetary policy decisions are made under uncertainty about the current state of the economy. Firms’ investment decisions depend on uncertain future profitability. Consumer spending responds to news about future income.
Modern macroeconomics — from the rational expectations revolution onward — treats uncertainty not as an afterthought but as a fundamental feature of the economic environment [P:Ch.15]. The stochastic process is to modern macroeconomics what the differential equation is to classical physics: the natural language for describing how systems evolve over time when driven by unpredictable forces.
This chapter introduces the key concepts of probability and stochastic process theory that underlie the macroeconomic models of Parts V–IX. We develop: the formal probabilistic framework (probability spaces, filtrations, conditional expectations); stationary stochastic processes and their characterization by autocovariance functions and spectra; the AR(1) process in depth; and the Wold decomposition, which guarantees that every covariance-stationary process can be represented as a moving average of white noise — the fundamental result underlying all time-series econometrics.
5.2 Probability Spaces and Information Sets¶
Definition 5.1 (Probability Space). A probability space is a triple $(\Omega, \mathcal{F}, P)$ where:
$\Omega$ is the sample space — the set of all possible states of the world.
$\mathcal{F}$ is a sigma-algebra on $\Omega$ — a collection of subsets of $\Omega$ (events) closed under countable unions, countable intersections, and complementation.
$P : \mathcal{F} \to [0,1]$ is a probability measure — $P(\Omega) = 1$ and $P$ is countably additive.
A random variable is a measurable function from $\Omega$ to $\mathbb{R}$.
For macroeconomics, the key interpretation is: $\Omega$ is the space of all possible histories of the economy (sequences of TFP realizations, policy shocks, preference shocks); an event $A \in \mathcal{F}$ is a statement about the economy that is either true or false in any realized history; and $P(A)$ is the prior probability of that event.
5.2.1 Filtrations and Information¶
In dynamic models, information accumulates over time. At date $t$, agents have observed the history of the economy up to $t$ but not the future.
Definition 5.2 (Filtration). A filtration is a non-decreasing sequence of sigma-algebras: $\mathcal{F}_t \subseteq \mathcal{F}_{t+1}$ for all $t$. The sigma-algebra $\mathcal{F}_t$ represents the information available at date $t$.
A stochastic process $\{x_t\}$ is adapted to the filtration if $x_t$ is $\mathcal{F}_t$-measurable for each $t$ — that is, the value of $x_t$ is known at date $t$. In macroeconomics, every variable we observe (GDP, inflation, interest rates) is adapted to the information filtration of agents.
Definition 5.3 (Conditional Expectation). The conditional expectation $\mathbb{E}[X \mid \mathcal{F}_t]$ is the $\mathcal{F}_t$-measurable random variable that best predicts $X$ given information $\mathcal{F}_t$ in the mean-squared sense. It satisfies:
$$\mathbb{E}\big[\,\mathbb{E}[X \mid \mathcal{F}_{t+1}]\,\big|\,\mathcal{F}_t\big] = \mathbb{E}[X \mid \mathcal{F}_t].$$
This is the law of iterated expectations — a key tool in rational expectations models. It says: the best forecast of the future best forecast of $X$ is the current best forecast of $X$.
The rational expectations hypothesis [P:Ch.15] states that agents' subjective expectations equal the conditional expectation given their information set: $\mathbb{E}^{\text{subjective}}_t[X] = \mathbb{E}[X \mid \mathcal{F}_t]$. Agents use the correct probability model; they are not systematically wrong.
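The law of iterated expectations can be verified exactly on a toy finite probability space. The four-state sample space, the coarse/fine partitions, and the helper `cond_exp` below are illustrative constructions, not objects from the text:

```python
import numpy as np

# Toy probability space: 4 equally likely states of the world.
p = np.array([0.25, 0.25, 0.25, 0.25])
X = np.array([1.0, 3.0, 2.0, 6.0])          # a random variable on Omega

def cond_exp(X, p, partition):
    """E[X | partition]: replace X on each cell by its cell average."""
    out = np.empty_like(X)
    for cell in partition:
        idx = list(cell)
        out[idx] = np.sum(p[idx] * X[idx]) / np.sum(p[idx])
    return out

F_t  = [{0, 1}, {2, 3}]                     # coarse information at date t
F_t1 = [{0}, {1}, {2}, {3}]                 # finer information at date t+1

lhs = cond_exp(cond_exp(X, p, F_t1), p, F_t)   # E[ E[X | F_{t+1}] | F_t ]
rhs = cond_exp(X, p, F_t)                      # E[ X | F_t ]
print(np.allclose(lhs, rhs))                   # True
```

Averaging first over the fine partition and then over the coarse one gives exactly the coarse conditional expectation — the content of the theorem, with no approximation involved.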
5.3 Stationarity, Autocovariance, and the Autocorrelation Function¶
5.3.1 Stationarity¶
Definition 5.4 (Covariance Stationarity). A stochastic process $\{x_t\}$ is covariance-stationary (or weakly stationary) if:
$\mathbb{E}[x_t] = \mu$ for all $t$ (constant mean).
$\operatorname{Var}(x_t) = \sigma_x^2 < \infty$ for all $t$ (finite, constant variance).
$\operatorname{Cov}(x_t, x_{t-k}) = \gamma_k$ depends only on the lag $k$, not on $t$ (time-invariant autocovariances).
Definition 5.5 (Strict Stationarity). A process is strictly stationary if the joint distribution of $(x_{t_1}, \dots, x_{t_n})$ equals the joint distribution of $(x_{t_1+h}, \dots, x_{t_n+h})$ for any $h$ — the distribution is invariant to time translation. Strict stationarity plus finite second moments implies covariance stationarity.
Most empirical time-series methods require only covariance stationarity, which is weaker and easier to verify.
5.3.2 The Autocovariance and Autocorrelation Functions¶
Definition 5.6 (Autocovariance Function). For a covariance-stationary process with mean $\mu$:
$$\gamma_k = \mathbb{E}\big[(x_t - \mu)(x_{t-k} - \mu)\big], \qquad k = 0, \pm 1, \pm 2, \dots$$
Note $\gamma_0 = \operatorname{Var}(x_t)$ and $\gamma_k = \gamma_{-k}$ (symmetry).
Definition 5.7 (Autocorrelation Function, ACF). The autocorrelation function is:
$$\rho_k = \frac{\gamma_k}{\gamma_0}, \qquad \rho_0 = 1, \quad |\rho_k| \leq 1.$$
The ACF characterizes the temporal persistence of the process. For macroeconomic variables:
U.S. log real GDP (HP-filtered) has a large first-order autocorrelation that dies out only slowly — high persistence.
U.S. TFP growth (quarterly) has a modest positive first-order autocorrelation, falling off gradually.
White noise has $\rho_k = 0$ for all $k \neq 0$.
Bochner's theorem: A sequence $\{\gamma_k\}$ is a valid autocovariance function if and only if it is positive semi-definite. This means the Fourier transform of the ACF (the spectral density, defined below) must be non-negative everywhere.
5.4 The AR(1) Process: The Workhorse of Macroeconomic Dynamics¶
The autoregressive process of order 1, or AR(1), is the single most important stochastic process in macroeconomics. It appears as:
The technology shock in every RBC model [P:Ch.27]: $a_t = \rho_A a_{t-1} + \varepsilon_{A,t}$, with $a_t = \ln A_t$.
The monetary policy shock in NK models [P:Ch.23]: $v_t = \rho_v v_{t-1} + \varepsilon_{v,t}$.
The approximation to any persistent but stationary shock process.
Definition 5.8 (AR(1) Process). An AR(1) process satisfies:
$$x_t = (1 - \rho)\mu + \rho x_{t-1} + \varepsilon_t,$$
where $\rho$ is the persistence parameter, $\mu$ is the unconditional mean, and $\varepsilon_t$ is white noise — serially uncorrelated innovations with zero mean and constant variance.
Definition 5.9 (White Noise). $\{\varepsilon_t\}$ is white noise, written $\varepsilon_t \sim WN(0, \sigma^2)$, if:
$\mathbb{E}[\varepsilon_t] = 0$ for all $t$.
$\mathbb{E}[\varepsilon_t^2] = \sigma^2$ for all $t$.
$\mathbb{E}[\varepsilon_t \varepsilon_s] = 0$ for all $t \neq s$.
If additionally $\varepsilon_t \sim N(0, \sigma^2)$ i.i.d., we have Gaussian white noise — the standard assumption in DSGE models.
5.4.1 Moments of the AR(1)¶
Taking expectations of the AR(1) definition (assuming stationarity):
$$\mathbb{E}[x_t] = (1 - \rho)\mu + \rho\,\mathbb{E}[x_{t-1}] \;\Longrightarrow\; \mathbb{E}[x_t] = \mu.$$
For the variance, use the WLOG assumption $\mu = 0$: $x_t = \rho x_{t-1} + \varepsilon_t$. Then:
$$\operatorname{Var}(x_t) = \rho^2 \operatorname{Var}(x_{t-1}) + \sigma^2 \;\Longrightarrow\; \gamma_0 = \frac{\sigma^2}{1 - \rho^2}.$$
This requires $|\rho| < 1$ for the variance to be finite. The autocovariances:
Theorem 5.1 (AR(1) Autocovariance Function). For the zero-mean AR(1) with $|\rho| < 1$:
$$\gamma_k = \rho^k \frac{\sigma^2}{1 - \rho^2}, \qquad \rho_k = \rho^k.$$
Proof. Multiply both sides of $x_t = \rho x_{t-1} + \varepsilon_t$ by $x_{t-k}$ and take expectations:
$$\gamma_k = \rho\,\gamma_{k-1} + \mathbb{E}[\varepsilon_t x_{t-k}].$$
For $k \geq 1$, $\varepsilon_t$ is uncorrelated with $x_{t-k}$ (since $x_{t-k}$ depends only on $\varepsilon_{t-k}, \varepsilon_{t-k-1}, \dots$), so $\mathbb{E}[\varepsilon_t x_{t-k}] = 0$. Therefore $\gamma_k = \rho\,\gamma_{k-1}$ for $k \geq 1$, with $\gamma_0 = \sigma^2/(1-\rho^2)$. Solving the recursion: $\gamma_k = \rho^k \gamma_0$. ∎
The ACF decays geometrically. High $\rho$ (near 1) means slow decay — the process has long memory and is highly persistent. Low $\rho$ (near 0) means rapid decay — shocks dissipate quickly.
5.4.2 Moving Average Representation¶
The AR(1) can be written as an infinite moving average (MA($\infty$)):
$$x_t = \sum_{j=0}^{\infty} \rho^j \varepsilon_{t-j}.$$
Proof. Iterate backward:
$$x_t = \rho^k x_{t-k} + \sum_{j=0}^{k-1} \rho^j \varepsilon_{t-j} \;\longrightarrow\; \sum_{j=0}^{\infty} \rho^j \varepsilon_{t-j},$$
since $\rho^k x_{t-k} \to 0$ in mean square when $|\rho| < 1$.
The MA representation is the impulse response function of the AR(1): the coefficient on $\varepsilon_{t-j}$ gives the effect of a shock $j$ periods ago on today. For the AR(1), the IRF is $\psi_j = \rho^j$ — a geometrically declining function of the lag $j$.
In DSGE models, the IRF of any variable to any shock is a sum of such geometrically declining terms, weighted by the eigenvalues of the transition matrix. Chapter 17 computes these IRFs for the RBC model; Chapter 28 does so for the full NK model.
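The equivalence of the recursive and MA($\infty$) forms can be checked numerically: reconstruct a simulated AR(1) path from its innovations using truncated MA weights $\psi_j = \rho^j$. The values $\rho = 0.9$ and truncation lag $J = 200$ below are illustrative choices ($\rho^{200}$ is negligible):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, T, J = 0.9, 500, 200
eps = rng.standard_normal(T + J)

# AR(1) recursion x_t = rho*x_{t-1} + eps_t, started at zero
x = np.zeros(T + J)
for t in range(1, T + J):
    x[t] = rho * x[t-1] + eps[t]

# Truncated MA representation: x_t ≈ sum_{j=0}^{J-1} rho^j eps_{t-j}
psi = rho ** np.arange(J)
x_ma = np.array([psi @ eps[t - np.arange(J)] for t in range(J, T + J)])

err = np.max(np.abs(x[J:] - x_ma))   # remainder is rho^J * x_{t-J}, tiny
print(err < 1e-6)                    # True
```

The discrepancy is exactly the dropped term $\rho^J x_{t-J}$, which vanishes geometrically — the mean-square convergence claimed in the proof above.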
5.5 ARMA Processes and the Wold Decomposition¶
5.5.1 ARMA(p,q) Processes¶
Definition 5.10 (ARMA Process). An ARMA($p$,$q$) process satisfies:
$$\phi(L)\,x_t = \theta(L)\,\varepsilon_t,$$
where the AR polynomial $\phi(L) = 1 - \phi_1 L - \cdots - \phi_p L^p$ and MA polynomial $\theta(L) = 1 + \theta_1 L + \cdots + \theta_q L^q$ are defined in terms of the lag operator $L$: $L x_t = x_{t-1}$.
The process is stationary iff all roots of $\phi(z) = 0$ lie outside the unit circle ($|z| > 1$). It is invertible iff all roots of $\theta(z) = 0$ lie outside the unit circle.
5.5.2 The Wold Decomposition¶
The Wold theorem is arguably the most important theorem in time-series analysis. It says that any covariance-stationary process can be decomposed into a predictable component and an unpredictable MA($\infty$) component.
Theorem 5.2 (Wold Decomposition). Let $\{x_t\}$ be a zero-mean covariance-stationary process. Then:
$$x_t = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j} + \kappa_t,$$
where:
$\varepsilon_t = x_t - \mathbb{E}[x_t \mid x_{t-1}, x_{t-2}, \dots]$ is the one-step-ahead forecast error (innovation), which is white noise with variance $\sigma^2$.
$\psi_0 = 1$ and $\sum_{j=0}^{\infty} \psi_j^2 < \infty$.
$\kappa_t$ is the deterministic component — perfectly predictable from its own past.
$\mathbb{E}[\varepsilon_t \kappa_s] = 0$ for all $t, s$ (the components are uncorrelated).
For purely non-deterministic processes (those without a deterministic component), $\kappa_t = 0$ and the entire process is its MA($\infty$) representation. The impulse response coefficients $\psi_j$ are the dynamic multipliers: a unit innovation at $t$ affects $x_{t+j}$ by $\psi_j$.
Economic significance. The Wold decomposition underpins the structural VAR methodology [P:Appendix B]. By identifying which innovations correspond to which structural shocks (monetary policy, technology, demand), empirical macroeconomists can trace the dynamic effects of each shock through the economy. Chapter 19 develops this methodology in full.
5.6 VAR Processes and the Companion Form¶
A vector autoregression (VAR) of order $p$ generalizes the scalar AR($p$) to a vector of $n$ variables:
$$y_t = A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_p y_{t-p} + u_t,$$
where $y_t \in \mathbb{R}^n$, each $A_j$ is an $n \times n$ coefficient matrix, and $u_t$ is vector white noise with $\mathbb{E}[u_t u_t'] = \Sigma$.
The VAR($p$) can be rewritten as a VAR(1) in the companion form by stacking $Y_t = (y_t', y_{t-1}', \dots, y_{t-p+1}')'$:
$$Y_t = F Y_{t-1} + U_t, \qquad F = \begin{pmatrix} A_1 & A_2 & \cdots & A_{p-1} & A_p \\ I & 0 & \cdots & 0 & 0 \\ 0 & I & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & I & 0 \end{pmatrix}, \qquad U_t = (u_t', 0, \dots, 0)'.$$
Stationarity of the VAR($p$): The VAR($p$) is covariance-stationary iff all eigenvalues of the companion matrix $F$ lie inside the unit circle.
The companion form shows that any DSGE model's solution — which takes the form $s_t = F s_{t-1} + G \varepsilon_t$ after solving (Chapter 28) — is a first-order VAR with companion matrix $F$. The stationarity condition for the DSGE model solution is exactly the Blanchard–Kahn condition applied to $F$.
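As a minimal sketch of the companion-form stationarity check, take a scalar AR(2) (the coefficients below are illustrative, not from the text) and examine the eigenvalues of its companion matrix:

```python
import numpy as np

# AR(2): x_t = 1.2 x_{t-1} - 0.3 x_{t-2} + eps_t   (illustrative coefficients)
phi1, phi2 = 1.2, -0.3
F = np.array([[phi1, phi2],
              [1.0,  0.0]])        # companion matrix of the stacked VAR(1)
eigs = np.linalg.eigvals(F)
print(np.max(np.abs(eigs)) < 1)    # True: all eigenvalues inside the unit circle
```

Both eigenvalues (about 0.84 and 0.36) lie inside the unit circle, so the process is stationary; equivalently, the roots of $1 - 1.2z + 0.3z^2$ — their reciprocals — lie outside it.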
5.7 The Spectral Density¶
The spectral density decomposes the variance of a process by frequency, revealing which periodicities are most important.
Definition 5.11 (Spectral Density). For a covariance-stationary process with absolutely summable autocovariances ($\sum_k |\gamma_k| < \infty$), the spectral density is:
$$S(\omega) = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} \gamma_k e^{-i\omega k}, \qquad \omega \in [-\pi, \pi].$$
$S(\omega)\,d\omega$ is the contribution to variance from cyclical components with frequencies in $[\omega, \omega + d\omega]$. The spectral density is the Fourier transform of the autocovariance function; by the inverse transform:
$$\gamma_k = \int_{-\pi}^{\pi} S(\omega)\, e^{i\omega k}\, d\omega, \qquad \text{in particular} \quad \gamma_0 = \int_{-\pi}^{\pi} S(\omega)\, d\omega.$$
Theorem 5.3 (Spectral Density of an AR(1)). For $x_t = \rho x_{t-1} + \varepsilon_t$ with $|\rho| < 1$:
$$S(\omega) = \frac{\sigma^2}{2\pi}\,\frac{1}{1 + \rho^2 - 2\rho\cos\omega}.$$
Proof. Using the MA($\infty$) representation and the spectral density of white noise, $\sigma^2/2\pi$:
$$S(\omega) = \frac{\sigma^2}{2\pi}\left|\frac{1}{1 - \rho e^{-i\omega}}\right|^2 = \frac{\sigma^2}{2\pi\,(1 + \rho^2 - 2\rho\cos\omega)}. \qquad \blacksquare$$
For $\rho > 0$, $S(\omega)$ is largest at $\omega = 0$ (low frequency, long cycles) — confirming that high persistence concentrates spectral power at low frequencies. The HP filter exploits this by attenuating the low-frequency components; its frequency response function is closely related to the spectral density of a smooth trend.
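Theorem 5.3 can be sanity-checked numerically: integrating $S(\omega)$ over $[-\pi, \pi]$ must recover the AR(1) variance $\sigma^2/(1-\rho^2)$. A simple Riemann sum on a dense grid (the grid size is arbitrary) suffices:

```python
import numpy as np

rho, sigma = 0.95, 0.0072          # the TFP calibration used later in the chapter
omega = np.linspace(-np.pi, np.pi, 200001)
S = sigma**2 / (2*np.pi * (1 + rho**2 - 2*rho*np.cos(omega)))

d = omega[1] - omega[0]
gamma0_spec = np.sum(S) * d        # ≈ ∫ S(ω) dω over [-π, π]
gamma0_theory = sigma**2 / (1 - rho**2)
print(abs(gamma0_spec - gamma0_theory) / gamma0_theory < 1e-4)   # True
```

This is the $k = 0$ case of the inverse-transform identity: the spectrum really does decompose the variance by frequency.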
5.8 Martingales and the Innovation Representation¶
Definition 5.12 (Martingale). A stochastic process $\{x_t\}$ adapted to $\{\mathcal{F}_t\}$ is a martingale if:
$$\mathbb{E}[x_{t+1} \mid \mathcal{F}_t] = x_t.$$
Equivalently, $\mathbb{E}[x_{t+k} \mid \mathcal{F}_t] = x_t$ for all $k \geq 0$: the best forecast of future values is the current value.
A martingale difference sequence (MDS) satisfies $\mathbb{E}[\varepsilon_{t+1} \mid \mathcal{F}_t] = 0$ — innovations are unpredictable from past information. The Wold innovations are an MDS under Gaussianity (in general they are guaranteed only to be serially uncorrelated): $\mathbb{E}[\varepsilon_t \mid \mathcal{F}_{t-1}] = 0$.
Hall's (1978) random walk hypothesis for consumption [P:Ch.11.3] states that consumption is a martingale: $\mathbb{E}_t[c_{t+1}] = c_t$. This follows directly from the Euler equation under quadratic utility (marginal utility linear in $c$, implying $\mathbb{E}_t[u'(c_{t+1})] = u'(\mathbb{E}_t[c_{t+1}])$) and $\beta(1+r) = 1$: the condition $u'(c_t) = \beta(1+r)\,\mathbb{E}_t[u'(c_{t+1})]$ collapses to $c_t = \mathbb{E}_t[c_{t+1}]$. Therefore $\Delta c_{t+1} = c_{t+1} - c_t$ is an MDS — changes in consumption are unforecastable.
This is an empirically testable restriction: no variable in $\mathcal{F}_t$ should help predict $\Delta c_{t+1}$. The widespread violation of this restriction (excess sensitivity of consumption to predictable income changes [P:Ch.11.4]) is one of the major puzzles in consumption economics, motivating the buffer-stock and liquidity-constraint models.
5.9 Worked Example: Fitting an AR(1) to U.S. TFP¶
Cross-reference: Principles Ch. 27 (RBC calibration) [P:Ch.27]
The standard RBC calibration [P:Ch.27] sets the TFP shock as $a_t = \rho_A a_{t-1} + \varepsilon_t$, $\varepsilon_t \sim N(0, \sigma_A^2)$. We estimate $\rho_A$ and $\sigma_A$ from the Solow residual.
Data: Quarterly U.S. TFP index, 1953Q1–2019Q4, obtained from the San Francisco Fed's TFP database. Define $a_t = \ln A_t - \overline{\ln A}$ (log-deviation from mean).
OLS estimation: Regress $a_t$ on $a_{t-1}$:
$$a_t = \rho a_{t-1} + \varepsilon_t.$$
The OLS estimator:
$$\hat\rho = \frac{\sum_{t=2}^{T} a_t a_{t-1}}{\sum_{t=2}^{T} a_{t-1}^2}.$$
The OLS estimator of the AR(1) coefficient is the first sample autocorrelation. Typical estimates: $\hat\rho_A \approx 0.95$, $\hat\sigma_A \approx 0.0072$ (quarterly).
Interpretation: A TFP shock decays at rate $\rho_A$ per quarter. After 5 years (20 quarters): $0.95^{20} \approx 0.36$ — 36% of the original shock remains. The half-life is $\ln(0.5)/\ln(0.95) \approx 13.5$ quarters, or about 3.4 years.
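The decay arithmetic above can be reproduced in a couple of lines of Python before turning to the chapter's code listings:

```python
import numpy as np

rho = 0.95
print(round(rho**20, 3))            # 0.358 — about 36% survives after 20 quarters
half_life = np.log(0.5) / np.log(rho)
print(round(half_life, 1))          # 13.5 quarters, roughly 3.4 years
```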
⎕IO←0 ⋄ ⎕ML←1
rho_A ← 0.95 ⋄ sigma_A ← 0.0072 ⋄ T ← 200
⍝ Standard normal approximation: sum of 12 independent U(0,1) draws, minus 6
rnorm ← { ( +/ ? 12 ⍴ 0 ) - 6 }
eps ← sigma_A × rnorm¨ ⍳ T
⍝ AR(1) simulation via scan: previous state (⍺), current shock (⍵)
ar1_path ← { (rho_A × ⍺) + ⍵ } \ eps
⍝ Sample autocorrelation (the OLS estimator)
demean ← {⍵ - (+/⍵) ÷ ≢ ⍵}
a ← demean ar1_path
⍝ rho_hat = (Y'X) ÷ (X'X) where Y is the lead and X is the lag
Y ← 1 ↓ a
X ← ¯1 ↓ a
rho_hat ← (Y +.× X) ÷ (X +.× X)
rho_hat
⍝ Autocovariance at lags k = 0..9: cross-products of a with its k-lag
gamma ← { (⍵ ↓ a) +.× ((-⍵) ↓ a) } ¨ ⍳10
acf ← gamma ÷ ⊃gamma ⍝ normalize by the lag-0 term to get correlations
10 ↑ acf

# Python — AR(1) estimation and diagnostics
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
import matplotlib.pyplot as plt
np.random.seed(42)
rho_true, sigma_true, T = 0.95, 0.0072, 200
# Simulate AR(1)
eps = np.random.normal(0, sigma_true, T)
a = np.zeros(T)
for t in range(1, T):
    a[t] = rho_true * a[t-1] + eps[t]
# OLS estimation: first autocorrelation
rho_hat = np.corrcoef(a[1:], a[:-1])[0,1]
sigma_hat = np.std(a[1:] - rho_hat * a[:-1])
print(f"True: ρ={rho_true}, σ={sigma_true}")
print(f"OLS: ρ̂={rho_hat:.4f}, σ̂={sigma_hat:.6f}")
# Impulse response
h = np.arange(40)
irf = rho_hat ** h
plt.plot(h, irf); plt.axhline(0, c='k', lw=0.5)
plt.title('AR(1) Impulse Response Function'); plt.xlabel('Quarters'); plt.show()

# Julia — AR(1) moments and simulation
using Statistics, Random
rho_true, sigma_true, T = 0.95, 0.0072, 200
Random.seed!(42)
eps = randn(T) .* sigma_true
a = zeros(T)
for t in 2:T
    a[t] = rho_true * a[t-1] + eps[t]
end
rho_hat = cor(a[2:end], a[1:end-1])
sigma_hat = std(a[2:end] .- rho_hat .* a[1:end-1])
println("OLS ρ̂ = $(round(rho_hat, digits=4)), σ̂ = $(round(sigma_hat, digits=6))")
# Theoretical vs. simulated ACF
lags = 0:20
acf_theory = rho_hat .^ lags
acf_sample = [cor(a[1+k:end], a[1:end-k]) for k in lags]
println("ACF comparison (lags 0-5):")
display([acf_theory[1:6] acf_sample[1:6]])

# R — AR(1) estimation
set.seed(42)
rho_true <- 0.95; sigma_true <- 0.0072; T <- 200
eps <- rnorm(T, 0, sigma_true)
a <- numeric(T)
for(t in 2:T) a[t] <- rho_true * a[t-1] + eps[t]
# Yule-Walker / OLS estimate
rho_hat <- cor(a[-1], a[-T])
sigma_hat <- sd(a[-1] - rho_hat * a[-T])
cat(sprintf("OLS: rho_hat = %.4f, sigma_hat = %.6f\n", rho_hat, sigma_hat))
# Theoretical spectral density
omega <- seq(-pi, pi, length=500)
spec_theory <- sigma_hat^2 / (2*pi * (1 + rho_hat^2 - 2*rho_hat*cos(omega)))
plot(omega, spec_theory, type='l', main='AR(1) Spectral Density',
     xlab='Frequency ω', ylab='S(ω)')

5.10 The Tauchen (1986) Discretization¶
Continuous-state AR(1) processes must be discretized for numerical dynamic programming (Chapters 15–17). The Tauchen (1986) method constructs a discrete Markov chain that approximates the AR(1).
Algorithm 5.1 (Tauchen Discretization).
Given the AR(1): $z_t = \rho z_{t-1} + \varepsilon_t$, $\varepsilon_t \sim N(0, \sigma_\varepsilon^2)$:
Choose the number of grid points $N$ and the number of standard deviations $m$ to span (typically $m = 3$).
Set grid endpoints: $z_1 = -m\sigma_z$, $z_N = m\sigma_z$, where $\sigma_z = \sigma_\varepsilon / \sqrt{1 - \rho^2}$ is the unconditional standard deviation.
Construct evenly spaced grid: $\{z_1, \dots, z_N\}$ with spacing $d = (z_N - z_1)/(N-1)$.
Compute transition probabilities: for grid points $z_i$ and $z_j$:
$$P_{ij} = \Phi\!\left(\frac{z_j + d/2 - \rho z_i}{\sigma_\varepsilon}\right) - \Phi\!\left(\frac{z_j - d/2 - \rho z_i}{\sigma_\varepsilon}\right),$$
with boundary corrections for $j = 1$ (lower tail: $P_{i1} = \Phi\big((z_1 + d/2 - \rho z_i)/\sigma_\varepsilon\big)$) and $j = N$ (upper tail: $P_{iN} = 1 - \Phi\big((z_N - d/2 - \rho z_i)/\sigma_\varepsilon\big)$). Here $\Phi$ is the standard normal CDF.
The transition matrix $P$ is an $N \times N$ row-stochastic matrix: each row sums to 1.
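Algorithm 5.1 can be cross-checked in Python before turning to the APL version below. This is a sketch — the function name and the $N = 5$, $m = 3$ defaults are our choices — using only numpy and the standard-library `erf` for $\Phi$:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def tauchen(rho, sigma_eps, N=5, m=3):
    """Tauchen (1986) discretization of z' = rho*z + eps, eps ~ N(0, sigma_eps^2)."""
    sigma_z = sigma_eps / np.sqrt(1 - rho**2)      # unconditional std dev
    z = np.linspace(-m * sigma_z, m * sigma_z, N)
    d = z[1] - z[0]
    P = np.empty((N, N))
    for i in range(N):
        mu = rho * z[i]                            # conditional mean from state i
        P[i, 0]  = norm_cdf((z[0] + d/2 - mu) / sigma_eps)        # lower tail
        P[i, -1] = 1 - norm_cdf((z[-1] - d/2 - mu) / sigma_eps)   # upper tail
        for j in range(1, N - 1):
            P[i, j] = (norm_cdf((z[j] + d/2 - mu) / sigma_eps)
                       - norm_cdf((z[j] - d/2 - mu) / sigma_eps))
    return z, P

z, P = tauchen(0.95, 0.007)
print(np.allclose(P.sum(axis=1), 1.0))   # True: each row is a distribution
```

Because adjacent cell edges coincide ($z_j - d/2 = z_{j-1} + d/2$), the interior terms telescope and each row sums to 1 exactly, with no normalization step needed.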
In APL, the Tauchen discretization exploits the outer product ∘.- to compute all differences simultaneously:
⎕IO←0 ⋄ ⎕ML←1
⍝ Parameters
rho ← 0.95 ⋄ sig_e ← 0.007 ⋄ N ← 5 ⋄ m ← 3
sig_z ← sig_e ÷ (1 - rho*2)*0.5       ⍝ unconditional std dev: sig_e ÷ sqrt(1-rho^2)
⍝ Evenly spaced grid on [¯m×sig_z, m×sig_z]
z ← sig_z × (¯1 × m) + (⍳N) × (2 × m) ÷ (N - 1)
dz ← z[1] - z[0]
⍝ Logistic approximation to the standard normal CDF (coefficient 1.702)
Phi ← { 1 ÷ 1 + * ¯1.702 × ⍵ }
⍝ Transition matrix: all edge-minus-mean differences at once via outer product ∘.-
E ← rho × z
U ← ⍉ (z + dz ÷ 2) ∘.- E              ⍝ U[i;j] = upper edge of cell j minus rho×z[i]
L ← ⍉ (z - dz ÷ 2) ∘.- E
P ← (Phi U ÷ sig_e) - (Phi L ÷ sig_e)
⍝ Boundary columns: cumulative tails
P[;0] ← Phi ((z[0] + dz ÷ 2) - E) ÷ sig_e
P[;N-1] ← 1 - Phi ((z[N-1] - dz ÷ 2) - E) ÷ sig_e
⍝ Clip tiny negatives from the CDF approximation, then normalize each row
P ← 0 ⌈ P
P ← P ÷[0] +/ P                       ⍝ divide each row by its sum (⎕IO←0: axis 0)
ret ← 2 2 ⍴ 'Grid' z 'Transition Matrix' P
ret

Grid:
¯0.0673 ¯0.0336 0 0.0336 0.0673

Transition matrix:
0.9887 0.0111 0.0000 0.0000 0.0000
0.0113 0.9723 0.0162 0.0000 0.0000
0.0000 0.0248 0.9515 0.0242 0.0000
0.0000 0.0000 0.0162 0.9478 0.0378
0.0000 0.0000 0.0000 0.0108 0.9886

The Rouwenhorst (1995) method is an alternative that better approximates highly persistent processes ($\rho$ near 1) and is preferred for calibrated DSGE models with very persistent shocks.
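A sketch of the Rouwenhorst construction in Python (the recursion below is the standard symmetric case $p = q = (1+\rho)/2$ appropriate for a pure AR(1); the function name and the $N = 7$, $\rho = 0.99$ test values are our choices). Its key property — the chain's first-order autocorrelation equals $\rho$ exactly, even for $\rho$ near 1 — is checked at the end:

```python
import numpy as np

def rouwenhorst(rho, sigma_eps, N):
    """Rouwenhorst (1995) discretization of z' = rho*z + eps, eps ~ N(0, sigma_eps^2)."""
    p = (1 + rho) / 2
    P = np.array([[p, 1 - p],
                  [1 - p, p]])
    for n in range(3, N + 1):            # grow the chain one state at a time
        Z = np.zeros((n, n))
        Z[:n-1, :n-1] += p * P
        Z[:n-1, 1:]   += (1 - p) * P
        Z[1:,  :n-1]  += (1 - p) * P
        Z[1:,  1:]    += p * P
        Z[1:-1, :] /= 2                  # middle rows were double-counted
        P = Z
    psi = np.sqrt(N - 1) * sigma_eps / np.sqrt(1 - rho**2)
    return np.linspace(-psi, psi, N), P

z, P = rouwenhorst(0.99, 0.0072, 7)
print(np.allclose(P.sum(axis=1), 1.0))   # True: row-stochastic by construction

# Stationary distribution by iterating the chain, then the implied persistence
pi = np.linalg.matrix_power(P.T, 4000) @ (np.ones(7) / 7)
cov = sum(pi[i] * z[i] * (P[i] @ z) for i in range(7))   # E[z_t z_{t+1}], mean is 0
var = pi @ z**2
print(round(cov / var, 4))               # 0.99
```

Unlike Tauchen's method, no probability mass leaks into truncated tails here, which is why the persistence match is exact.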
5.11 Programming Exercises¶
Exercise 5.1 (APL — ACF)¶
Write a dfn acf ← {n ← ⍺ ⋄ x ← ⍵-+/⍵÷≢⍵ ⋄ ...} that computes the sample autocorrelation function at lags $0, 1, \dots, n-1$ for a time series passed as right argument. The computation should use +.× (inner product) for each lag. Test on the simulated AR(1) from Section 5.9 and verify the ACF matches $\rho^k$.
⎕IO←0 ⋄ ⎕ML←1
acf ← {
    n ← ⍺
    x ← ⍵ - (+/⍵)÷≢⍵                 ⍝ demean
    v ← x+.×x                         ⍝ sum of squares (unnormalized variance)
    ({(⍵↓x)+.×(-⍵)↓x}¨⍳n) ÷ v        ⍝ autocovariance at each lag, normalized
}
⍝ Test on AR(1) simulation
T ← 500 ⋄ rho ← 0.8
rnorm ← { (+/?12⍴0) - 6 }             ⍝ approximate N(0,1) draw
eps ← 0.1 × rnorm¨ ⍳T
ar1 ← {(rho×⍺)+⍵}\ eps                ⍝ AR(1) via scan
20 acf ar1                            ⍝ first 20 autocorrelations

Exercise 5.2 (Python — Spectral Density)¶
Plot the theoretical and estimated spectral density of the AR(1) TFP process calibrated with $\rho = 0.95$ and $\sigma = 0.0072$. Use numpy's FFT to estimate the empirical spectrum from a long simulation, and overlay the theoretical formula from Theorem 5.3. Shade the business-cycle frequency band (cycles of 6–32 quarters).
import numpy as np; import matplotlib.pyplot as plt
rho, sigma, T = 0.95, 0.0072, 10000
eps = np.random.normal(0, sigma, T)
a = np.zeros(T)
for t in range(1,T): a[t] = rho*a[t-1] + eps[t]
omega = np.linspace(-np.pi, np.pi, 1000)
spec_theory = sigma**2 / (2*np.pi * (1 + rho**2 - 2*rho*np.cos(omega)))
from scipy.signal import periodogram
f, Pxx = periodogram(a, window='hann')
omega_data = 2*np.pi*f
fig, ax = plt.subplots()
ax.semilogy(omega_data, Pxx, alpha=0.5, label='Empirical (periodogram)')
ax.semilogy(omega, spec_theory, 'r-', linewidth=2, label='Theoretical')
ax.axvspan(2*np.pi/32, 2*np.pi/6, alpha=0.1, color='green', label='BC frequencies')
ax.legend(); ax.set_xlabel('Frequency ω'); plt.show()

Exercise 5.3 (Julia — Tauchen vs. Rouwenhorst)¶
Implement Tauchen's method in Julia, check that each row of the transition matrix sums to 1, and compare against a Rouwenhorst discretization for $\rho$ close to 1.
using Distributions, LinearAlgebra
function tauchen(rho, sigma_eps, N, m=3)
    sigma_z = sigma_eps / sqrt(1 - rho^2)
    z = range(-m*sigma_z, m*sigma_z, length=N)
    dz = step(z)
    P = zeros(N, N)
    d = Normal(0, sigma_eps)
    for i in 1:N
        mu = rho * z[i]
        P[i, 1] = cdf(d, z[1] + dz/2 - mu)
        P[i, N] = 1 - cdf(d, z[N] - dz/2 - mu)
        for j in 2:N-1
            P[i, j] = cdf(d, z[j] + dz/2 - mu) - cdf(d, z[j] - dz/2 - mu)
        end
    end
    return collect(z), P
end
z_grid, P = tauchen(0.95, 0.0072, 7)
println("Grid: ", round.(z_grid, digits=4))
println("Row sums (should be 1): ", round.(sum(P, dims=2), digits=6))

Exercise 5.4 (R — Wold Representation)¶
# Verify Wold MA coefficients for AR(1) match IRF
rho <- 0.8; sigma <- 0.1; T <- 500
set.seed(1)
a <- arima.sim(list(ar=rho), T, sd=sigma)
# Fit AR(1)
fit <- arima(a, order=c(1,0,0))
rho_hat <- as.numeric(coef(fit)[1])
# Wold MA coefficients: psi_j = rho^j
lags <- 0:20
psi <- rho_hat^lags
# Verify via impulse response from ARMAtoMA
psi_check <- ARMAtoMA(ar=rho_hat, lag.max=20)
cat("Max discrepancy:", max(abs(psi[-1] - psi_check)), "\n")

Exercise 5.5 — Persistence and the HP Filter¶
The Hodrick–Prescott (HP) filter with parameter $\lambda = 1600$ (quarterly) isolates components with period 6–32 quarters. For an AR(1) with persistence $\rho$, approximately what fraction of the total variance falls in the business-cycle band $2\pi/32 \leq |\omega| \leq 2\pi/6$? Compute this as $\int_{2\pi/32 \leq |\omega| \leq 2\pi/6} S(\omega)\, d\omega \,\big/\, \gamma_0$ using numerical integration (Gaussian quadrature). How does this fraction change as $\rho$ varies from 0 to 0.99?
Exercise 5.6 — Hall's Test¶
Using quarterly U.S. non-durable consumption growth data from FRED (series DNDGRD3Q086SBEA):
(a) Test whether lagged consumption growth helps predict current consumption growth (Hall’s test for the martingale property).
(b) Test whether lagged income growth has predictive power (excess sensitivity test).
(c) Use HAC (Newey–West) standard errors for both tests. What do your results imply about the extent of liquidity constraints in the data?
5.12 Chapter Summary¶
Key results:
A probability space $(\Omega, \mathcal{F}, P)$ with filtration $\{\mathcal{F}_t\}$ formalizes the structure of information in dynamic economies. Rational expectations = conditional expectation given $\mathcal{F}_t$.
A process is covariance-stationary if its mean, variance, and autocovariances are all time-invariant. The autocovariance function (ACF) and spectral density are its complete second-moment characterizations.
The AR(1) $x_t = \rho x_{t-1} + \varepsilon_t$ has ACF $\rho_k = \rho^k$, variance $\sigma^2/(1-\rho^2)$, and spectral density $S(\omega) = \sigma^2 / \big(2\pi(1 + \rho^2 - 2\rho\cos\omega)\big)$.
The Wold decomposition guarantees that any covariance-stationary process has an MA($\infty$) representation in terms of its own innovations — the foundation of structural VAR identification.
The Tauchen (1986) algorithm discretizes an AR(1) into a finite Markov chain — implemented in APL with the ∘.- outer product — enabling numerical dynamic programming.
Hall's random walk — consumption is a martingale under rational expectations and quadratic utility — is the testable implication of the consumption Euler equation; its widespread empirical failure points to liquidity constraints.
Connections forward: Chapter 18 develops the full rational expectations solution methodology using these stochastic concepts. Chapter 19 uses ARMA and VAR processes for structural identification of macroeconomic shocks. Chapter 20 applies the Kalman filter to estimate unobserved state variables (potential output, the natural rate) from noisy data. Chapter 26 uses Monte Carlo simulation of AR(1) shock processes to generate the empirical moments of calibrated DSGE models.
Next: Part II — Static Macroeconomic Models and Comparative Statics