Input-Output Matrices, Eigenvalues, and Economic Structure
“The beauty of mathematics is that the same abstract structure — a system of linear equations — describes both the flow of goods between industries and the stability of a business cycle model.”
Cross-reference: Principles Ch. 4 (circular flow, input-output analysis); Ch. 6 (data and econometric methods); Ch. 9 (IS–LM as a simultaneous system); Ch. 27 (RBC model, matrix solution) [P:Ch.4, P:Ch.6, P:Ch.9, P:Ch.27]
2.1 Why Linear Algebra? Economic Systems as Matrix Problems¶
A recurring structure in macroeconomics is the simultaneous equation system: a set of conditions that must hold jointly, involving multiple endogenous variables that are all determined together. The IS–LM model [P:Ch.9] is a 2×2 system determining output $Y$ and the interest rate $i$ from the intersection of two curves. The input-output model [P:Ch.4] is an $n \times n$ system determining production levels across all sectors simultaneously. The log-linearized DSGE model of Part VII is a system of expectational difference equations. In each case, the right language is linear algebra.
Beyond solving systems, linear algebra provides the tools for:
Stability analysis: whether a dynamic system returns to equilibrium after a shock depends on the eigenvalues of its transition matrix — a central concept in Chapters 4, 14, and 28.
Dimensionality reduction: eigendecomposition reveals which directions in the state space matter most, which underlies the spectral analysis of time series and the solution algorithms for DSGE models.
Efficient computation: the matrix operations
+.× (multiply) and ⌹ (divide/solve) in APL, and their analogues in NumPy and Julia’s LinearAlgebra, are the workhorses of every numerical algorithm in this book.
This chapter develops the linear algebra toolkit with these applications in mind. Every definition is illustrated with an economic example; every theorem is used somewhere in the book.
2.2 Vectors and Matrices as Economic Objects¶
2.2.1 Economic Vectors¶
A vector is an ordered list of real numbers. In macroeconomics, vectors appear as:
State vectors in dynamic models: $s_t = (k_t, z_t)$ in the RBC model, where $k_t$ is the capital stock and $z_t$ is the technology level.
Output vectors in input-output analysis: $x = (x_1, \dots, x_n)^\top$, where $x_i$ is gross output of industry $i$.
Impulse vectors in VAR models: $e_i$ is the $i$-th standard basis vector, representing a unit shock to variable $i$.
Definition 2.1 (Inner Product). The inner product of two vectors $x, y \in \mathbb{R}^n$ is:
$$x \cdot y = x^\top y = \sum_{i=1}^{n} x_i y_i$$
The inner product $p \cdot q$, where $p$ is a price vector and $q$ is a quantity vector, gives the value of the basket at those prices — the fundamental operation underlying both GDP measurement and the computation of price indices [P:Ch.3.1].
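A quick numerical sketch of the value-of-basket computation (NumPy, with made-up prices and quantities for a hypothetical three-good economy):

```python
import numpy as np

# Hypothetical three-good economy: prices and quantities
p = np.array([2.0, 5.0, 10.0])    # price vector
q = np.array([100.0, 40.0, 7.0])  # quantity vector

nominal_value = p @ q             # inner product p . q
print(nominal_value)              # 200 + 200 + 70 = 470.0
```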
2.2.2 Matrices as Linear Transformations¶
A matrix $A \in \mathbb{R}^{m \times n}$ represents a linear map $x \mapsto Ax$. The action of $A$ on a vector transforms it — rotating, scaling, projecting — without any nonlinear distortion. This linearity is exactly why linearized DSGE models are tractable: every period’s state vector is a linear function of the previous state and the current shocks.
Definition 2.2 (Matrix Multiplication). For $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times p}$, the product $C = AB \in \mathbb{R}^{m \times p}$ has elements:
$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$
where $a_{i\cdot}$ is the $i$-th row of $A$ and $b_{\cdot j}$ is the $j$-th column of $B$. Matrix multiplication is associative ($(AB)C = A(BC)$) and distributive ($A(B+C) = AB + AC$), but not commutative in general: $AB \neq BA$.
In APL, matrix multiplication is the inner product A +.× B — read as “sum of products of corresponding elements, by column.” This primitive is the single most-used operation in the entire book.
⍝ APL — matrix multiplication
⎕IO←0 ⋄ ⎕ML←1
A ← 2 2 ⍴ 1 2 3 4 ⍝ 2×2 matrix: [[1 2][3 4]]
B ← 2 2 ⍴ 5 6 7 8 ⍝ 2×2 matrix: [[5 6][7 8]]
C ← A +.× B ⍝ matrix product: [[19 22][43 50]]
C

2.3 Matrix Inversion and the Solution of Linear Systems¶
2.3.1 The Inverse Matrix¶
Definition 2.3 (Matrix Inverse). A square matrix $A \in \mathbb{R}^{n \times n}$ is invertible (or nonsingular) if there exists a matrix $A^{-1}$ such that $A A^{-1} = A^{-1} A = I$, where $I$ is the $n \times n$ identity matrix. $A$ is invertible if and only if $\det(A) \neq 0$; equivalently, if and only if $A$ has full rank $n$.
The system $Ax = b$ has a unique solution $x = A^{-1} b$ when $A$ is invertible.
In APL, $A^{-1} b$ is written b ⌹ A. The monadic form ⌹A computes $A^{-1}$ directly. The ⌹ primitive (called “domino” or “matrix divide”) calls LAPACK’s LU solver internally, so it is both convenient and numerically stable.
⍝ APL — solving Ax = b using ⌹
A ← 2 2 ⍴ 2 1 5 3 ⍝ coefficient matrix
b ← 8 5 ⍝ right-hand side (as a vector)
x ← b ⌹ A ⍝ solution: x = A⁻¹b
x ⍝ should give 19 ¯30 (check: 2×19 + 1×(¯30) = 8 ✓)
⍝ Explicit inverse (use with care — solve directly when possible)
Ainv ← ⌹A
Ainv +.× b ⍝ same result via explicit inverse

Note on numerical practice: Computing $A^{-1}$ explicitly and then multiplying is less numerically stable than solving directly. Always prefer b ⌹ A over (⌹A) +.× b in production code.
2.3.2 Cramer’s Rule¶
For small systems (2×2, 3×3), Cramer’s rule gives explicit closed-form solutions that are useful for comparative statics.
Theorem 2.1 (Cramer’s Rule). For the system $Ax = b$ with $A$ invertible, the $i$-th component of the solution is:
$$x_i = \frac{\det(A_i)}{\det(A)}$$
where $A_i$ is the matrix obtained by replacing the $i$-th column of $A$ with $b$.
Proof sketch for 2×2. With $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ and $\det(A) = ad - bc \neq 0$:
Direct calculation gives $x_1 = \dfrac{b_1 d - b\, b_2}{ad - bc}$ and $x_2 = \dfrac{a\, b_2 - c\, b_1}{ad - bc}$, which one can verify satisfies $Ax = b$.
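As a numerical cross-check, Cramer’s rule can be compared against a direct solver on the same 2×2 system used in the APL listing above (a NumPy sketch, illustrative rather than part of the book’s APL toolkit):

```python
import numpy as np

# Verify Cramer's rule against a direct solve on the 2x2 system from the text
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
b = np.array([8.0, 5.0])

detA = np.linalg.det(A)
A1 = A.copy(); A1[:, 0] = b        # replace column 1 with b
A2 = A.copy(); A2[:, 1] = b        # replace column 2 with b
x_cramer = np.array([np.linalg.det(A1), np.linalg.det(A2)]) / detA

x_solve = np.linalg.solve(A, b)
print(x_cramer)   # [ 19. -30.]
print(x_solve)    # same
```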
2.4 The Leontief Input-Output Model¶
The input-output model of Wassily Leontief (1941) is both an important economic framework and an elegant illustration of linear algebra in macroeconomics. It is the foundation of the production-side national accounts [P:Ch.4.5].
Definition 2.4 (Technical Coefficient Matrix). For an economy with $n$ industries, the technical coefficient matrix $A \in \mathbb{R}^{n \times n}$ has element $a_{ij}$ equal to the dollar value of input from industry $i$ required to produce one dollar of gross output in industry $j$. Each column $j$ represents the production recipe of industry $j$.
The accounting identity for gross output: each industry’s gross output $x$ equals its deliveries to other industries, $Ax$, plus its deliveries to final demand $d$:
$$x = Ax + d \quad\Longleftrightarrow\quad (I - A)x = d$$
Definition 2.5 (Leontief Inverse). The matrix $L = (I - A)^{-1}$ is the Leontief inverse or total requirements matrix. Its element $\ell_{ij}$ gives the total output of industry $i$ — direct plus all indirect upstream requirements — needed to deliver one dollar of final demand for industry $j$’s product.
Theorem 2.2 (Existence of the Leontief Inverse). $(I - A)^{-1}$ exists and has all non-negative elements if and only if all eigenvalues of $A$ have modulus strictly less than 1. When this holds:
$$(I - A)^{-1} = I + A + A^2 + A^3 + \cdots = \sum_{k=0}^{\infty} A^k$$
Proof of the series representation. For any matrix $A$ with all eigenvalues inside the unit circle, $A^k \to 0$ as $k \to \infty$. The partial sums $S_K = \sum_{k=0}^{K} A^k$ satisfy $(I - A) S_K = I - A^{K+1} \to I$ as $K \to \infty$. The limit is therefore $S_\infty = (I - A)^{-1}$.
The economic interpretation of the series is the multiplier chain: the first term $I$ represents direct demand; $A$ captures first-round intermediate input requirements; $A^2$ captures second-round requirements; and so on. This is precisely the input-output analogue of the Keynesian spending multiplier [P:Ch.8].
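The convergence of the multiplier chain is easy to check numerically. A NumPy sketch, using the same 3-sector coefficient matrix as the APL example below:

```python
import numpy as np

# Partial sums I + A + A^2 + ... converge to the Leontief inverse (I - A)^(-1)
# when the spectral radius of A is below 1.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.0, 0.2, 0.1]])
assert max(abs(np.linalg.eigvals(A))) < 1   # productivity check

L = np.linalg.inv(np.eye(3) - A)
S, term = np.eye(3), np.eye(3)
for k in range(1, 50):
    term = term @ A            # A^k
    S += term                  # partial sum up to A^k
print(np.max(np.abs(S - L)))   # tiny: the multiplier chain has converged
```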
In APL, the Leontief inverse of a 3-sector economy:
⍝ APL — Leontief inverse
⎕IO←0 ⋄ ⎕ML←1
⍝ Technical coefficient matrix (3 sectors)
A ← 3 3 ⍴ 0.1 0.2 0.0 0.3 0.1 0.2 0.0 0.2 0.1
⍝ Identity matrix
I ← ∘.=⍨ ⍳ ≢A ⍝ ∘.=⍨ generates an n×n identity matrix via outer product
⍝ Leontief inverse: (I-A)⁻¹ = ⌹(I-A)
L ← ⌹ I - A
⍝ Final demand vector
d ← 100 80 60
⍝ Gross output required
x ← L +.× d
x ⍝ total output by sector
⍝ Verify: (I-A)x = d
(I - A) +.× x ⍝ should equal d

The APL expression ∘.=⍨⍳n deserves explanation: ⍳n generates the vector 0 1 2 ... n-1 (with ⎕IO←0); ∘.=⍨ applies the outer product of equality to this vector with itself, yielding the n×n identity matrix. This is a characteristic APL idiom — generating structured matrices from primitive operations.
2.5 Determinants, Rank, and Singularity¶
2.5.1 The Determinant¶
Definition 2.6 (Determinant). The determinant of a square matrix $A$, written $\det(A)$, is a scalar that measures the signed volume scaling factor of the linear transformation represented by $A$. For a 2×2 matrix:
$$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$$
The determinant has several key properties:
$\det(AB) = \det(A)\,\det(B)$.
$\det(A^{-1}) = 1/\det(A)$ when $A$ is invertible.
$\det(A^\top) = \det(A)$.
$A$ is invertible iff $\det(A) \neq 0$.
In macroeconomics, $\det(A)$ appears in Cramer’s rule (Section 2.3.2) and in the IS–LM multiplier formulas of Chapter 6. The sign of $\det(I - A)$ in the Leontief model determines whether the economy is productive.
2.5.2 Rank and Linear Independence¶
Definition 2.7 (Rank). The rank of a matrix $A$, denoted $\operatorname{rank}(A)$, is the dimension of the column space of $A$ — the number of linearly independent columns. Equivalently, it is the number of nonzero singular values of $A$.
For the DSGE model identification problem (Chapter 41), rank conditions are central: a model is identified only if the Jacobian matrix mapping structural parameters to model predictions has full column rank.
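The identification point can be illustrated with a toy rank computation; the matrix here is hypothetical, not an actual DSGE Jacobian:

```python
import numpy as np

# A rank-deficient "Jacobian": the third column is a linear combination of the
# first two, so full column rank fails and only 2 of the 3 directions are pinned down.
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [1.0, 1.0, 3.0]])    # col3 = col1 + 2*col2
print(np.linalg.matrix_rank(J))    # 2, not 3
```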
2.6 Eigenvalues and Eigenvectors¶
Eigenvalues and eigenvectors are the most important concepts in linear algebra for dynamic macroeconomics. The stability of any linear dynamic system — whether it converges to a steady state, oscillates, or explodes — is determined entirely by the eigenvalues of its transition matrix.
Definition 2.8 (Eigenvalue and Eigenvector). Let $A \in \mathbb{R}^{n \times n}$. A scalar $\lambda$ is an eigenvalue of $A$ with corresponding eigenvector $v \neq 0$ if:
$$Av = \lambda v$$
The set of all eigenvalues is the spectrum of $A$, denoted $\sigma(A)$. The eigenvalues are the roots of the characteristic polynomial:
$$\det(A - \lambda I) = 0$$
For a 2×2 matrix, this gives the quadratic $\lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A) = 0$, with roots:
$$\lambda_{1,2} = \frac{\operatorname{tr}(A) \pm \sqrt{\operatorname{tr}(A)^2 - 4\det(A)}}{2}$$
Note two useful identities: $\operatorname{tr}(A) = \sum_i \lambda_i$ and $\det(A) = \prod_i \lambda_i$.
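Both identities are easy to verify numerically; a NumPy sketch with an arbitrary 2×2 matrix:

```python
import numpy as np

# Check tr(A) = sum of eigenvalues and det(A) = product of eigenvalues
A = np.array([[0.9, 0.3],
              [0.2, 0.5]])
lam = np.linalg.eigvals(A)
print(lam.sum().real, np.trace(A))          # both 1.4
print(np.prod(lam).real, np.linalg.det(A))  # both 0.39
```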
2.6.1 Eigenvalues and Stability¶
Theorem 2.3 (Stability of Linear Discrete-Time Systems). The system $x_{t+1} = A x_t$ converges to $x = 0$ from any initial condition if and only if all eigenvalues of $A$ satisfy $|\lambda_i| < 1$. The system diverges for almost all initial conditions if any $|\lambda_i| > 1$.
Theorem 2.4 (Stability of Linear Continuous-Time Systems). The system $\dot{x} = A x$ converges to $x = 0$ from any initial condition if and only if all eigenvalues of $A$ satisfy $\operatorname{Re}(\lambda_i) < 0$.
These two theorems underlie all stability analysis in macroeconomics:
The Solow model’s convergence to the steady state (Chapter 10) corresponds to the continuous-time condition $\lambda < 0$ for the linearized ODE.
The Blanchard–Kahn condition for a unique stable solution to a DSGE model (Chapter 28) requires counting eigenvalues inside vs. outside the unit circle.
The determinacy condition for the Taylor rule (Chapter 28, [P:Ch.23.1]) translates into a requirement on the eigenvalues of the NK system’s transition matrix.
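Theorem 2.3 can be illustrated by brute-force simulation; a NumPy sketch with one stable and one unstable transition matrix (both hypothetical):

```python
import numpy as np

# Simulate x_{t+1} = A x_t and compare a stable vs. an unstable transition matrix
def simulate(A, x0, T=100):
    x = np.array(x0, dtype=float)
    for _ in range(T):
        x = A @ x
    return x

stable   = np.array([[0.9, 0.1], [0.0, 0.8]])   # eigenvalues 0.9, 0.8 (inside unit circle)
unstable = np.array([[1.1, 0.1], [0.0, 0.8]])   # eigenvalue 1.1 outside the unit circle
x0 = [1.0, 1.0]

print(np.linalg.norm(simulate(stable, x0)))     # near zero after 100 periods
print(np.linalg.norm(simulate(unstable, x0)))   # has exploded
```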
2.6.2 Diagonalization and Matrix Powers¶
If $A$ ($n \times n$) has $n$ linearly independent eigenvectors $v_1, \dots, v_n$ with eigenvalues $\lambda_1, \dots, \lambda_n$, form the matrix $P = [\,v_1 \;\cdots\; v_n\,]$. Then:
$$A = P \Lambda P^{-1}, \qquad \Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$$
Powers are trivial: $A^k = P \Lambda^k P^{-1}$, where $\Lambda^k = \operatorname{diag}(\lambda_1^k, \dots, \lambda_n^k)$.
This is the basis for computing impulse response functions in VAR and DSGE models: the IRF at horizon $h$ is $A^h e_i$, which reduces to $P \Lambda^h P^{-1} e_i$ — the $i$-th column of $A^h$.
⍝ APL — impulse response via repeated multiplication
⎕IO ← 0 ⋄ ⎕ML ← 1
A ← 2 2 ⍴ 0.9 0.1 0.0 0.8
shock ← 1 0
⍝ Power operator ⍣ applies the matrix multiply h times for each horizon h
irf ← {({A +.× ⍵} ⍣ ⍵) shock} ¨ ⍳20
↑ irf ⍝ mix the nested vectors into a 20×2 matrix

The APL idiom {A+.×⍵}⍣h applied to shock computes $A^h e_i$: the power operator ⍣h repeats the matrix multiply h times. Applying it with ¨ (each) over ⍳20 generates the full IRF sequence in one expression.
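The same IRF can be computed via the eigendecomposition and cross-checked against direct matrix powers; a NumPy sketch using the same transition matrix and shock as the APL listing above:

```python
import numpy as np

# IRF by diagonalization: A^h = P Lambda^h P^(-1)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
shock = np.array([1.0, 0.0])          # unit shock to variable 1

lam, P = np.linalg.eig(A)
Pinv = np.linalg.inv(P)
irf = np.array([(P @ np.diag(lam**h) @ Pinv @ shock).real for h in range(20)])

# Cross-check against brute-force matrix powers
brute = np.array([np.linalg.matrix_power(A, h) @ shock for h in range(20)])
print(np.max(np.abs(irf - brute)))    # ~ 0: both routes agree
```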
2.7 The Spectral Decomposition and Symmetric Matrices¶
Many matrices in macroeconomics are symmetric: covariance matrices, Hessians, and the matrices arising from certain DSGE structures. Symmetric matrices have a particularly clean spectral structure.
Theorem 2.5 (Spectral Decomposition of Symmetric Matrices). If $A \in \mathbb{R}^{n \times n}$ is symmetric ($A = A^\top$), then:
All eigenvalues are real.
Eigenvectors corresponding to distinct eigenvalues are orthogonal.
$A$ has an orthogonal eigendecomposition $A = Q \Lambda Q^\top$, where $Q$ is orthogonal ($Q^\top Q = I$) and $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$.
Definition 2.9 (Quadratic Form). For a symmetric matrix $A$ and vector $x$, the expression $x^\top A x$ is a quadratic form. The sign of this form for all nonzero $x$ is determined entirely by the signs of the eigenvalues of $A$ — which is why positive/negative definiteness of the Hessian determines whether a critical point is a minimum/maximum (Theorem 1.4).
In APL, the quadratic form $x^\top A x$:
⍝ APL — quadratic form x'Ax
quadratic ← {⍺ +.× ⍵ +.× ⍺} ⍝ ⍺ is x, ⍵ is A: x +.× (A +.× x)
x ← 1 2
A ← 2 2 ⍴ 3 1 1 2 ⍝ positive definite (eigenvalues both positive)
x quadratic A ⍝ should be 15 > 0

2.8 The IS–LM Model as a Linear System¶
Cross-reference: Principles Ch. 9 (IS–LM model) [P:Ch.9]
The IS–LM model determines equilibrium output $Y^*$ and the nominal interest rate $i^*$. Using the linear specifications from Principles Ch. 9.3:
IS curve: $Y = \bar{A} - b_r i$ (where $\bar{A}$ captures autonomous spending and $b_r$ is investment interest sensitivity)
LM curve: $kY - h i = M/P$ (where $k$ is income elasticity of money demand and $h$ is interest elasticity)
Writing as a linear system $Az = b$:
$$\begin{pmatrix} 1 & b_r \\ k & -h \end{pmatrix} \begin{pmatrix} Y \\ i \end{pmatrix} = \begin{pmatrix} \bar{A} \\ M/P \end{pmatrix}$$
The solution by Cramer’s rule:
$$Y^* = \frac{h \bar{A} + b_r (M/P)}{h + b_r k}, \qquad i^* = \frac{k \bar{A} - M/P}{h + b_r k}$$
The fiscal multiplier (noting that $\bar{A}$ increases one-for-one with $G$):
$$\frac{\partial Y^*}{\partial G} = \frac{h}{h + b_r k}$$
This is the IS–LM fiscal multiplier derived in Principles Ch. 9.3 [P:Ch.9.3] — now from the matrix inverse rather than from graphical reasoning. Chapter 6 of this book develops this analysis fully, adding Cramer’s rule derivations for all policy multipliers and extending to the open economy (Mundell–Fleming).
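As a cross-check on the Cramer’s-rule formulas, a NumPy sketch that solves the same system with the parameter values used in the APL listing below:

```python
import numpy as np

# IS-LM solved as a 2x2 linear system (parameter values from the APL listing)
b_r, k, h = 2.0, 0.5, 4.0      # interest sensitivity, money-demand elasticities
Abar, MP = 200.0, 500.0        # autonomous spending, real money supply

A = np.array([[1.0, b_r],
              [k,  -h ]])
b = np.array([Abar, MP])

Y, i = np.linalg.solve(A, b)
fiscal_mult = h / (h + b_r * k)
print(Y, i, fiscal_mult)       # 360.0 -80.0 0.8
```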
⍝ Dyalog APL — IS-LM solution via ⌹ (Domino)
⎕IO←0 ⋄ ⎕ML←1
⍝ Parameters
br ← 2 ⍝ investment-interest sensitivity
k ← 0.5 ⍝ income elasticity of money demand
h ← 4 ⍝ interest elasticity of money demand
⍝ Coefficient matrix: [ 1   br ]
⍝                     [ k   -h ]
⍝ (-h_e) negates the variable; APL has no leading minus sign on names
islm_matrix ← { (b_r k_e h_e) ← ⍵ ⋄ 2 2 ⍴ 1 b_r k_e (-h_e) }
A ← islm_matrix br k h
⍝ Exogenous variables: Abar = 200, M/P = 500
Abar ← 200
MP ← 500
b ← Abar MP
⍝ Solve for equilibrium: (Y_star i_star) ← b ⌹ A
⍝ Domino solves the system: A × [Y, i] = b
(Y_star i_star) ← b ⌹ A
Y_star ⍝ Result: 360
i_star ⍝ Result: ¯80
⍝ Fiscal multiplier: ∂Y*/∂G = h / (h + br×k)
⍝ Evaluates R-to-L: h ÷ (h + (br × k))
dY_dG ← h ÷ h + br × k
Y_star i_star dY_dG ⍝ → 360 ¯80 0.8

2.9 The Jordan Normal Form and Defective Matrices¶
Not every matrix is diagonalisable. When a matrix has repeated eigenvalues and insufficient eigenvectors, we need the Jordan normal form.
Definition 2.10 (Jordan Block). A Jordan block of size $k$ with eigenvalue $\lambda$ is the $k \times k$ matrix:
$$J_k(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}$$
Theorem 2.6 (Jordan Normal Form). Every square matrix (over $\mathbb{C}$) is similar to a block-diagonal matrix of Jordan blocks: $A = P J P^{-1}$, where $J = \operatorname{diag}\big(J_{k_1}(\lambda_1), \dots, J_{k_m}(\lambda_m)\big)$.
For the stability analysis of dynamic systems, Jordan blocks with eigenvalue $|\lambda| < 1$ still converge to zero, but more slowly than the diagonal case — the presence of the superdiagonal 1s means the $t$-th power of $J_k(\lambda)$ contains terms like $t \lambda^{t-1}$, which still go to zero as $t \to \infty$ when $|\lambda| < 1$.
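The slower convergence of a defective matrix can be seen numerically; a NumPy sketch with a single 2×2 Jordan block:

```python
import numpy as np

# A defective matrix: eigenvalue 0.9 repeated, only one eigenvector.
# Powers contain t * 0.9^(t-1) terms, so ||J^t|| rises before decaying to zero.
J = np.array([[0.9, 1.0],
              [0.0, 0.9]])
norms = [np.linalg.norm(np.linalg.matrix_power(J, t), 2) for t in (1, 10, 200)]
print(norms)   # hump shape: grows at first, then tends to zero
```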
In practice, most matrices arising in macroeconomic models are either diagonalisable or can be handled with the generalized Schur (QZ) decomposition (Chapter 28), so the Jordan form is primarily of theoretical importance.
2.10 Worked Example: Three-Sector Leontief Economy¶
Cross-reference: Principles Ch. 4.5 (input-output analysis) [P:Ch.4.5]
Consider a three-sector economy (manufacturing, services, agriculture) with the following technical coefficient matrix:
$$A = \begin{pmatrix} 0.20 & 0.15 & 0.05 \\ 0.25 & 0.10 & 0.20 \\ 0.05 & 0.08 & 0.15 \end{pmatrix}$$
and final demand vector $d = (120,\ 80,\ 50)^\top$.
Step 1: Verify $A$ is productive. Compute $\rho(A)$, the spectral radius of $A$. If $\rho(A) < 1$, the Leontief inverse exists.
The eigenvalues of $A$ can be found from its characteristic polynomial. For a rough check, note that the maximum column sum of $A$ is $0.20 + 0.25 + 0.05 = 0.50$. Since this column-sum norm bounds the spectral radius, $\rho(A) \leq 0.50 < 1$, confirming productivity.
Step 2: Compute the Leontief inverse $L = (I - A)^{-1}$. Computing numerically (shown below):
$$L \approx \begin{pmatrix} 1.331 & 0.234 & 0.133 \\ 0.396 & 1.204 & 0.307 \\ 0.116 & 0.127 & 1.213 \end{pmatrix}$$
Step 3: Compute gross output $x = L d \approx (185.1,\ 159.1,\ 84.7)^\top$.
Step 4: Interpretation. To deliver $120 of manufacturing to final demand, the economy must produce about $185.1 of manufacturing gross output in total — the additional $65.1 supplies intermediate inputs to all three sectors through the full chain of upstream requirements.
Step 5: Multiplier. The total output multiplier for manufacturing final demand is $\sum_i \ell_{i1} \approx 1.84$: one dollar of final demand for manufacturing generates about $1.84 of total gross output across all sectors.
⍝ APL — Three-sector Leontief model
⎕IO ← 0 ⋄ ⎕ML ← 1
⍝ Technical coefficient matrix (row-major)
A ← 3 3 ⍴ 0.20 0.15 0.05 0.25 0.10 0.20 0.05 0.08 0.15
d ← 120 80 50 ⍝ final demand
⍝ Identity matrix via outer product of equality
I3 ← ∘.=⍨ ⍳ 3 ⍝ 3×3 identity matrix
L ← ⌹ I3 - A ⍝ Leontief inverse
x ← L +.× d ⍝ gross output
x ⍝ ≈ 185.1 159.1 84.7
⍝ Output multipliers (column sums of L)
+⌿ L ⍝ total output multiplier per sector
⍝ Verify: (I-A)x = d
(I3 - A) +.× x ⍝ recovers d (120 80 50)

2.11 Programming Exercises¶
Exercise 2.1 (APL)¶
Implement a function leontief ← {⌹ (∘.=⍨⍳≢⍵) - ⍵} that takes a technical coefficient matrix and returns the Leontief inverse in one line. Test it on the 3-sector example above. Then implement the full output multiplier calculation multipliers ← {+⌿ leontief ⍵} and verify that the multiplier for sector $j$ equals the $j$-th column sum of $L$.
⎕IO ← 0 ⋄ ⎕ML ← 1
⍝ Identity via outer product of equality
leontief ← {⌹ (∘.=⍨ ⍳ ≢ ⍵) - ⍵}
⍝ Output multipliers: column sums of the Leontief inverse
multipliers ← {+⌿ leontief ⍵}
⍝ Define the Technical Coefficients Matrix A
A ← 3 3 ⍴ 0.20 0.15 0.05 0.25 0.10 0.20 0.05 0.08 0.15
⍝ Calculate
multipliers A ⍝ ≈ 1.84 1.57 1.65

Exercise 2.2 (Python)¶
import numpy as np
A = np.array([[0.20, 0.15, 0.05],
[0.25, 0.10, 0.20],
[0.05, 0.08, 0.15]])
d = np.array([120, 80, 50])
n = A.shape[0]
L = np.linalg.inv(np.eye(n) - A) # Leontief inverse
x = L @ d # gross output
print("Gross output:", x.round(2))
print("Output multipliers:", L.sum(axis=0).round(4))
print("Spectral radius:", max(abs(np.linalg.eigvals(A))).round(4))

Exercise 2.3 (Julia)¶
using LinearAlgebra
A = [0.20 0.15 0.05;
0.25 0.10 0.20;
0.05 0.08 0.15]
d = [120.0, 80.0, 50.0]
L = inv(I - A)
x = L * d
println("Gross output: ", round.(x, digits=2))
println("Multipliers: ", round.(sum(L, dims=1), digits=4))
println("Spectral radius: ", maximum(abs.(eigvals(A))) |> x -> round(x, digits=4))

Exercise 2.4 (R)¶
A <- matrix(c(0.20,0.25,0.05, 0.15,0.10,0.08, 0.05,0.20,0.15), 3, 3)
d <- c(120, 80, 50)
n <- nrow(A)
L <- solve(diag(n) - A)
x <- L %*% d
cat("Gross output:", round(x, 2), "\n")
cat("Multipliers:", round(colSums(L), 4), "\n")
cat("Spectral radius:", round(max(abs(eigen(A)$values)), 4), "\n")

Exercise 2.5 — IS–LM Parameter Sweep ()¶
Using the IS–LM matrix system from Section 2.8, write an APL dfn islm_multipliers ← {(br k h) ← ⍵ ⋄ ...} that returns the fiscal and monetary multipliers for given parameters. Generate a 10×10 grid of fiscal multipliers over a range of $(b_r, h)$ values using ∘.f outer product syntax and plot as a heat map.
Exercise 2.6 — Eigenvalue Stability ()¶
For the 2×2 upper-triangular transition matrix $A = \begin{pmatrix} a & b \\ 0 & c \end{pmatrix}$, find the range of diagonal values for which the system $x_{t+1} = A x_t$ is stable (all eigenvalues inside the unit circle). Note: the eigenvalues of a triangular matrix are its diagonal entries, so stability holds for any $|a| < 1$ and $|c| < 1$. Now perturb the zero entry to a small nonzero value and repeat numerically. What does this tell you about how off-diagonal elements affect stability?
Exercise 2.7 — Perron–Frobenius ()¶
The Perron–Frobenius theorem states that a non-negative irreducible matrix has a unique largest real eigenvalue $\lambda_{\max}$ (the Perron root) with a corresponding non-negative eigenvector. In the Leontief context, if $\lambda_{\max}(A) < 1$, the economy is productive. (a) Verify the Perron root numerically for the 3-sector matrix above. (b) Find by bisection the largest value of a scalar multiplier $s$ such that $sA$ remains productive (i.e., $\rho(sA) < 1$). (c) Interpret economically.
2.12 Chapter Summary¶
This chapter developed the linear algebra toolkit for macroeconomic modeling.
Key results:
The Leontief inverse $(I - A)^{-1}$ exists when the spectral radius $\rho(A) < 1$ and gives total (direct plus indirect) output requirements per unit of final demand.
Cramer’s rule provides explicit closed-form solutions for small linear systems — the foundation for IS–LM multiplier derivations in Chapter 6.
Eigenvalues determine stability: a discrete system is stable iff all $|\lambda_i| < 1$; a continuous system is stable iff all $\operatorname{Re}(\lambda_i) < 0$.
Diagonalization makes powers trivial, enabling analytical and numerical computation of impulse response functions.
In APL:
⌹ solves linear systems and computes matrix inverses; +.× is matrix multiplication; ∘.=⍨⍳n generates the identity matrix; +⌿ computes column sums (output multipliers).
Connections forward: Chapter 3 uses eigenvalue analysis to classify the equilibria of differential equation systems. Chapter 4 applies it to difference equations and previews the Blanchard–Kahn condition. Chapter 6 uses Cramer’s rule to derive all IS–LM multipliers. Chapter 28 uses the generalized Schur (QZ) decomposition — an extension of eigendecomposition — to solve linear DSGE models.
Next: Chapter 3 — Differential Equations in Continuous-Time Macro Models