Appendix A: Mathematical Formulas and Tables
April 14, 2026
A.1 Differentiation Rules

Basic rules. For differentiable functions $f, g$ and constant $c$:

| Rule | Formula |
|---|---|
| Constant | $(c)' = 0$ |
| Power | $(x^n)' = nx^{n-1}$ |
| Sum | $(f+g)' = f' + g'$ |
| Product | $(fg)' = f'g + fg'$ |
| Quotient | $(f/g)' = (f'g - fg')/g^2$ |
| Chain | $(f(g(x)))' = f'(g(x))\cdot g'(x)$ |
| Inverse | If $y = f^{-1}(x)$: $dy/dx = 1/f'(y)$ |

Common functions:

| Function | Derivative |
|---|---|
| $e^x$ | $e^x$ |
| $a^x$ | $a^x \ln a$ |
| $\ln x$ | $1/x$ |
| $\log_a x$ | $1/(x\ln a)$ |
| $\sin x$ | $\cos x$ |
| $\cos x$ | $-\sin x$ |
| $\arctan x$ | $1/(1+x^2)$ |

Implicit differentiation. If $F(x, y) = 0$ defines $y$ as a function of $x$: $dy/dx = -F_x/F_y$.

Logarithmic differentiation. For $y = f(x)^{g(x)}$: take logs to get $\ln y = g(x)\ln f(x)$, then differentiate both sides.
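The rules above are easy to sanity-check numerically. A minimal sketch using a central finite difference (the helper `d` and the chosen test points are illustrative, not part of the text):

```python
import math

def d(f, x, h=1e-6):
    # Central finite difference: O(h^2) approximation to f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# Product rule: (fg)' = f'g + fg' at x = 1.3 for f = sin, g = exp.
x = 1.3
lhs = d(lambda t: math.sin(t) * math.exp(t), x)
rhs = math.cos(x) * math.exp(x) + math.sin(x) * math.exp(x)
assert abs(lhs - rhs) < 1e-6

# Implicit differentiation: F(x, y) = x^2 + y^2 - 1 = 0 (unit circle),
# so dy/dx = -F_x/F_y = -x/y; at (0.6, 0.8) this gives -0.75.
slope = -0.6 / 0.8
assert abs(slope - (-0.75)) < 1e-12
```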
A.2 Integration Rules

| Integrand | Antiderivative |
|---|---|
| $x^n$ ($n\neq-1$) | $x^{n+1}/(n+1)$ |
| $1/x$ | $\ln\lvert x\rvert$ |
| $e^{ax}$ | $e^{ax}/a$ |
| $\ln x$ | $x\ln x - x$ |
| $\sin ax$ | $-\cos(ax)/a$ |
| $\cos ax$ | $\sin(ax)/a$ |
| $1/(1+x^2)$ | $\arctan x$ |
Integration by parts: $\int u\,dv = uv - \int v\,du$.

Gaussian integral: $\int_{-\infty}^\infty e^{-ax^2}\,dx = \sqrt{\pi/a}$ for $a > 0$.

Present value integral: $\int_0^\infty e^{-\rho t}f(t)\,dt = \mathcal{L}\{f\}(\rho)$, the Laplace transform of $f$ evaluated at $\rho$.
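The Gaussian integral can be verified numerically; a sketch using the trapezoidal rule on a truncated interval (the function name and the truncation at $\pm 10$ are ad hoc choices, harmless here because the tails are negligible):

```python
import math

def gauss_integral(a, lo=-10.0, hi=10.0, n=100_000):
    # Trapezoidal approximation of the integral of e^{-a x^2} over [lo, hi];
    # for a >= 1 the mass beyond |x| = 10 is on the order of e^{-100}.
    h = (hi - lo) / n
    s = 0.5 * (math.exp(-a * lo**2) + math.exp(-a * hi**2))
    s += sum(math.exp(-a * (lo + i * h) ** 2) for i in range(1, n))
    return s * h

for a in (1.0, 2.0):
    assert abs(gauss_integral(a) - math.sqrt(math.pi / a)) < 1e-7
```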
A.3 Taylor Series Expansions

$$f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n \quad \text{(Taylor series around } a\text{)}$$

Standard expansions around 0:

| Function | Expansion |
|---|---|
| $e^x$ | $1 + x + x^2/2! + x^3/3! + \cdots$ |
| $\ln(1+x)$ | $x - x^2/2 + x^3/3 - \cdots$ |
| $(1+x)^\alpha$ | $1 + \alpha x + \frac{\alpha(\alpha-1)}{2}x^2 + \cdots$ |
| $1/(1-x)$ | $1 + x + x^2 + x^3 + \cdots$ |
| $\sin x$ | $x - x^3/6 + x^5/120 - \cdots$ |
| $\cos x$ | $1 - x^2/2 + x^4/24 - \cdots$ |

Key approximations (small $x$): $\ln(1+x) \approx x$, $(1+x)^\alpha \approx 1 + \alpha x$, $e^x \approx 1 + x + x^2/2$.
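A quick check of how tight these small-$x$ approximations are (the test point $x = 0.05$ and the error bounds, which follow from the next neglected Taylor term, are illustrative):

```python
import math

# Accuracy of the small-x approximations at x = 0.05.
x = 0.05
# ln(1+x) ~ x: the neglected term is -x^2/2.
assert abs(math.log(1 + x) - x) < x**2
# (1+x)^a ~ 1 + a*x for a = 0.3: the neglected term is a(a-1)x^2/2.
assert abs((1 + x) ** 0.3 - (1 + 0.3 * x)) < x**2
# e^x ~ 1 + x + x^2/2: the neglected term is x^3/6.
assert abs(math.exp(x) - (1 + x + x**2 / 2)) < x**3
```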
A.4 Matrix Identities

Woodbury identity: $(A + UCV)^{-1} = A^{-1} - A^{-1}U(C^{-1}+VA^{-1}U)^{-1}VA^{-1}$.

Sherman–Morrison: $(A + \mathbf{u}\mathbf{v}')^{-1} = A^{-1} - \dfrac{A^{-1}\mathbf{u}\mathbf{v}'A^{-1}}{1+\mathbf{v}'A^{-1}\mathbf{u}}$.

Matrix determinant lemma: $\det(A + \mathbf{u}\mathbf{v}') = (1+\mathbf{v}'A^{-1}\mathbf{u})\det(A)$.

Trace and determinant: $\text{tr}(AB) = \text{tr}(BA)$; $\det(AB) = \det(A)\det(B)$; $\det(A^{-1}) = 1/\det(A)$.

Kronecker product: $(A\otimes B)(C\otimes D) = (AC)\otimes(BD)$; $\text{vec}(AXB) = (B'\otimes A)\text{vec}(X)$.

Matrix differentiation: $\partial(A\mathbf{x})/\partial\mathbf{x} = A$; $\partial(\mathbf{x}'A\mathbf{x})/\partial\mathbf{x} = (A+A')\mathbf{x}$; $\partial\ln\det(A)/\partial A = (A^{-1})'$.
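The Sherman–Morrison identity can be verified on a small example. A plain-Python sketch with a $2\times2$ matrix (the helpers `inv2` and `matvec` are ad hoc, written out only so the block is self-contained):

```python
def inv2(M):
    # Inverse of a 2x2 matrix [[a, b], [c, d]].
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A = [[4.0, 1.0], [2.0, 3.0]]
u, v = [1.0, 2.0], [0.5, -1.0]

# Left side: invert A + u v' directly.
Apuv = [[A[i][j] + u[i] * v[j] for j in range(2)] for i in range(2)]
direct = inv2(Apuv)

# Right side: rank-one update of A^{-1} per Sherman–Morrison.
Ainv = inv2(A)
Ainv_u = matvec(Ainv, u)                                   # A^{-1} u
vt_Ainv = [v[0] * Ainv[0][j] + v[1] * Ainv[1][j] for j in range(2)]  # v' A^{-1}
denom = 1 + vt_Ainv[0] * u[0] + vt_Ainv[1] * u[1]          # 1 + v' A^{-1} u
update = [[Ainv[i][j] - Ainv_u[i] * vt_Ainv[j] / denom for j in range(2)]
          for i in range(2)]

assert all(abs(direct[i][j] - update[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```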
A.5 Special Functions in Macroeconomics

Log-normal: If $\ln X \sim \mathcal{N}(\mu,\sigma^2)$, then $X \sim \text{LogNormal}(\mu,\sigma^2)$ with $\mathbb{E}[X] = e^{\mu+\sigma^2/2}$ and $\text{Var}[X] = (e^{\sigma^2}-1)e^{2\mu+\sigma^2}$.
Gamma function: $\Gamma(n) = (n-1)!$ for positive integer $n$; $\Gamma(1/2) = \sqrt{\pi}$; $\Gamma(n+1) = n\Gamma(n)$.
Normal CDF: $\Phi(z) = P(Z\leq z)$ for $Z\sim\mathcal{N}(0,1)$; symmetry gives $\Phi(-z) = 1-\Phi(z)$.
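Both special-function facts are easy to confirm with the standard library; a short sketch (the identity $\Phi(z) = \tfrac{1}{2}(1+\mathrm{erf}(z/\sqrt{2}))$ is a standard formula, not from the text):

```python
import math

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Symmetry: Phi(-z) = 1 - Phi(z).
for z in (0.5, 1.0, 1.96):
    assert abs(Phi(-z) - (1 - Phi(z))) < 1e-12

# Gamma function facts: Gamma(1/2) = sqrt(pi), Gamma(5) = 4! = 24.
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12
assert math.gamma(5) == 24.0
```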
Appendix B: Linear Algebra Review

B.1 Vector Spaces

Definition. A vector space over $\mathbb{R}$ is a set $V$ equipped with addition and scalar multiplication satisfying eight axioms (closure, associativity, commutativity, identity, inverses, and distributivity).

Basis and dimension. A set $\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}$ is a basis if it is linearly independent and spans $V$. The dimension $\dim V$ is the cardinality of any basis.

Standard basis of $\mathbb{R}^n$: $\mathbf{e}_i$ has a 1 in position $i$ and 0 elsewhere.
B.2 Matrix Operations

Rank. $\text{rank}(A)$ is the number of linearly independent rows (equivalently, columns). For $A\in\mathbb{R}^{m\times n}$, $\text{rank}(A) \leq \min(m,n)$. Full column rank: $\text{rank}(A) = n$ (columns independent). Full row rank: $\text{rank}(A) = m$.

Null space. $\mathcal{N}(A) = \{\mathbf{x}: A\mathbf{x}=\mathbf{0}\}$. Its dimension is $n - \text{rank}(A)$ (rank–nullity theorem).

Norms. $\|\mathbf{x}\|_1 = \sum_i|x_i|$; $\|\mathbf{x}\|_2 = \sqrt{\sum_i x_i^2}$; $\|\mathbf{x}\|_\infty = \max_i|x_i|$. Matrix norms: $\|A\|_2 = \sigma_{\max}(A)$ (largest singular value); $\|A\|_F = \sqrt{\text{tr}(A'A)}$ (Frobenius).
B.3 Determinants and Cramer's Rule

For $A\in\mathbb{R}^{2\times2}$: $\det(A) = a_{11}a_{22} - a_{12}a_{21}$.

Cramer's rule. For $A\mathbf{x} = \mathbf{b}$ with $\det(A)\neq0$: $x_i = \det(A_i)/\det(A)$, where $A_i$ is $A$ with column $i$ replaced by $\mathbf{b}$.

Properties. $\det(AB) = \det(A)\det(B)$; $\det(A') = \det(A)$; $\det(\alpha A) = \alpha^n\det(A)$ for $n\times n$ $A$; $\det(A) = \prod_i\lambda_i$ (product of eigenvalues).
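Cramer's rule for a $2\times2$ system is short enough to write out directly; a sketch (the helper name and the example system are illustrative):

```python
def cramer2(A, b):
    # Solve a 2x2 system A x = b by Cramer's rule:
    # x_i = det(A_i) / det(A), with column i of A replaced by b.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x1 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x2 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x1, x2]

A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
x = cramer2(A, b)

# Verify A x = b.
assert abs(A[0][0] * x[0] + A[0][1] * x[1] - b[0]) < 1e-12
assert abs(A[1][0] * x[0] + A[1][1] * x[1] - b[1]) < 1e-12
```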
B.4 Eigenvalues and Eigenvectors

$A\mathbf{v} = \lambda\mathbf{v}$ with $\mathbf{v}\neq\mathbf{0}$. Eigenvalues are the roots of the characteristic polynomial $\det(A-\lambda I) = 0$.
Spectral decomposition (symmetric $A=A'$): $A = Q\Lambda Q'$, where $Q$ is orthogonal (orthonormal columns) and $\Lambda = \text{diag}(\lambda_1,\ldots,\lambda_n)$.
Stability. Discrete time: $A^t \to 0$ iff $|\lambda_i| < 1$ for all $i$. Continuous time: $e^{At} \to 0$ iff $\text{Re}(\lambda_i) < 0$ for all $i$.
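The discrete-time stability condition can be seen by iterating a small example; a sketch with a triangular matrix whose eigenvalues (0.9 and 0.5, read off the diagonal) lie inside the unit circle:

```python
def matmul2(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Triangular, so the eigenvalues are the diagonal entries 0.9 and 0.5:
# both have modulus < 1, hence A^t should shrink toward the zero matrix.
A = [[0.9, 0.3], [0.0, 0.5]]
P = [[1.0, 0.0], [0.0, 1.0]]  # running power A^t, starts at I
for _ in range(200):
    P = matmul2(P, A)

assert all(abs(P[i][j]) < 1e-8 for i in range(2) for j in range(2))
```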
Generalized eigenvalues. $A\mathbf{v} = \lambda B\mathbf{v}$ has generalized eigenvalues $\lambda_i = S_{ii}/T_{ii}$ from the QZ (generalized Schur) decomposition $Q'AZ = S$, $Q'BZ = T$, with $Q, Z$ orthogonal and $S, T$ upper triangular.
B.5 The Perron–Frobenius Theorem

Theorem (Perron–Frobenius). Let $A$ be a square matrix with all positive entries. Then:

- $A$ has a unique largest real eigenvalue $\lambda_1 > 0$ (the Perron root).
- The corresponding eigenvector $\mathbf{v}_1$ has all positive entries (the Perron vector).
- $|\lambda_i| < \lambda_1$ for all other eigenvalues $\lambda_i$.
Application in input–output analysis (Chapter 2). For the Leontief technical coefficient matrix $A$ (with $A_{ij} \geq 0$), the Perron root $\lambda_1(A) < 1$ guarantees that $(I-A)^{-1}$ exists and is non-negative: the Leontief inverse has all non-negative entries, so backward linkage multipliers are non-negative.
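When $\lambda_1(A) < 1$, the Leontief inverse is the Neumann series $(I-A)^{-1} = I + A + A^2 + \cdots$, which makes its non-negativity transparent (every term is non-negative). A sketch with a hypothetical two-sector coefficient matrix whose spectral radius is 0.5:

```python
# Two-sector technical coefficient matrix; its eigenvalues are 0.5 and -0.2,
# so the spectral radius is 0.5 < 1 and the Neumann series converges.
A = [[0.2, 0.3], [0.4, 0.1]]

L = [[1.0, 0.0], [0.0, 1.0]]   # running sum I + A + A^2 + ...
P = [[1.0, 0.0], [0.0, 1.0]]   # running power A^k
for _ in range(200):
    P = [[sum(P[i][k] * A[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    L = [[L[i][j] + P[i][j] for j in range(2)] for i in range(2)]

# The Leontief inverse is non-negative and satisfies (I - A) L = I.
assert all(L[i][j] >= 0 for i in range(2) for j in range(2))
IminusA = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(2)]
           for i in range(2)]
prod = [[sum(IminusA[i][k] * L[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-10
```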
Application in Markov chains. For an irreducible stochastic matrix $P$ (row sums equal 1, all entries $\geq 0$), Perron–Frobenius guarantees a unique stationary distribution $\boldsymbol{\pi}$ with $\boldsymbol{\pi}'P = \boldsymbol{\pi}'$ and $\pi_i > 0$ for all $i$ (ergodic chains).
B.6 QR and LU Decompositions

LU decomposition. $PA = LU$: $P$ a permutation matrix, $L$ unit lower triangular, $U$ upper triangular. Solves $A\mathbf{x}=\mathbf{b}$ in $O(n^3/3)$ flops (Chapter 25).

QR decomposition. $A = QR$: $Q$ has orthonormal columns ($Q'Q=I$), $R$ upper triangular. Used for numerically stable OLS (Chapter 25): $\kappa(R) = \kappa(A)$, versus $\kappa(A'A) = \kappa(A)^2$ for the normal equations.

Cholesky. For positive definite $A$: $A = LL'$ with $L$ lower triangular. The fastest solver for symmetric positive definite systems. Used for sampling from the multivariate normal (Chapter 26).

SVD. $A = U\Sigma V'$: $U, V$ orthogonal, $\Sigma = \text{diag}(\sigma_1,\ldots,\sigma_r,0,\ldots)$. The rank equals the number of positive singular values. Used for numerical rank determination (Chapter 41).
Appendix C: Calculus Review

C.1 Limits and Continuity

Limit. $\lim_{x\to a}f(x) = L$: for every $\varepsilon>0$ there exists $\delta>0$ such that $0 < |x-a|<\delta \Rightarrow |f(x)-L|<\varepsilon$.
L'Hôpital's rule. If $\lim f = \lim g = 0$ (or $\pm\infty$): $\lim f/g = \lim f'/g'$, provided the right-hand limit exists.

Useful limits: $\lim_{x\to0}(1+x)^{1/x} = e$; $\lim_{n\to\infty}(1+r/n)^n = e^r$; $\lim_{x\to0}\frac{\sin x}{x} = 1$.
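These limits are easy to probe numerically; a sketch (the particular $r$, $n$, and $x$ values are illustrative):

```python
import math

# (1 + r/n)^n -> e^r as n grows (continuous compounding).
r, n = 0.05, 1_000_000
compound = (1 + r / n) ** n
assert abs(compound - math.exp(r)) < 1e-6

# sin(x)/x -> 1 as x -> 0 (the error is about x^2/6).
ratio = math.sin(1e-6) / 1e-6
assert abs(ratio - 1.0) < 1e-10
```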
C.2 Multivariate Differentiation

Partial derivative. $\partial f/\partial x_i$ is the derivative of $f$ treating all $x_j$ ($j\neq i$) as constants.

Gradient. $\nabla f = (\partial f/\partial x_1, \ldots, \partial f/\partial x_n)' \in \mathbb{R}^n$.
Hessian. $H_f = [\partial^2 f/\partial x_i\partial x_j]$, the symmetric $n\times n$ matrix of second partial derivatives. If $H_f \succ 0$ (positive definite) everywhere, then $f$ is strictly convex.
Jacobian. For $F:\mathbb{R}^n\to\mathbb{R}^m$: $J_F = [\partial F_i/\partial x_j]$, the $m\times n$ matrix of first partial derivatives.
Chain rule (vector form). For $h = f\circ g$ with $g:\mathbb{R}^n\to\mathbb{R}^m$ and $f:\mathbb{R}^m\to\mathbb{R}^p$ (so $h:\mathbb{R}^n\to\mathbb{R}^p$): $J_h(\mathbf{x}) = J_f(g(\mathbf{x}))\, J_g(\mathbf{x})$.
C.3 Optimization Conditions

Unconstrained. FOC: $\nabla f(\mathbf{x}^*) = \mathbf{0}$. SOC (local minimum): $H_f(\mathbf{x}^*) \succ 0$.

Equality constraints (Lagrange). Maximize $f(\mathbf{x})$ subject to $g(\mathbf{x}) = 0$: Lagrangian $\mathcal{L} = f - \lambda g$; FOCs: $\nabla f = \lambda\nabla g$.

Inequality constraints (KKT). Maximize $f$ subject to $g_j(\mathbf{x}) \leq 0$: KKT conditions are $\nabla f = \sum_j\mu_j\nabla g_j$, $\mu_j \geq 0$, and $\mu_j g_j = 0$ (complementary slackness).

Envelope theorem. For $V(\alpha) = \max_x f(x,\alpha)$ subject to $g(x,\alpha)=0$: $dV/d\alpha = \partial\mathcal{L}/\partial\alpha\big|_{x=x^*(\alpha)}$.
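A worked unconstrained instance of the envelope theorem (the objective $f(x,\alpha) = \alpha x - x^2$ is a made-up example, not from the text): the maximizer is $x^*(\alpha) = \alpha/2$, the value function is $V(\alpha) = \alpha^2/4$, and $dV/d\alpha = \alpha/2$, which equals $\partial f/\partial\alpha = x$ evaluated at $x^*$:

```python
# V(a) = max_x (a*x - x^2) = a^2/4, with maximizer x*(a) = a/2.
def V(a):
    return a * a / 4.0

a, h = 1.2, 1e-6
dV = (V(a + h) - V(a - h)) / (2 * h)   # numerical dV/da
x_star = a / 2.0                        # df/da = x, evaluated at x*(a)
assert abs(dV - x_star) < 1e-8
```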
C.4 Integration

Fundamental theorem of calculus. $\frac{d}{dx}\int_a^x f(t)\,dt = f(x)$ and $\int_a^b f'(x)\,dx = f(b) - f(a)$.

Fubini's theorem. For integrable $f$: $\iint f(x,y)\,dx\,dy = \int\!\left[\int f(x,y)\,dx\right]dy$.

Change of variables. $\int_{\phi(a)}^{\phi(b)}f(x)\,dx = \int_a^b f(\phi(t))\,\phi'(t)\,dt$.

Leibniz rule. $\frac{d}{d\alpha}\int_{a(\alpha)}^{b(\alpha)}f(x,\alpha)\,dx = f(b(\alpha),\alpha)\,b'(\alpha) - f(a(\alpha),\alpha)\,a'(\alpha) + \int_{a(\alpha)}^{b(\alpha)}\frac{\partial f}{\partial\alpha}\,dx$.

Dominated convergence theorem. If $f_n \to f$ pointwise and $|f_n| \leq g$ for an integrable $g$, then $\int f_n \to \int f$.
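A worked check of the Leibniz rule on a hand-computable example (the integrand $f(x,\alpha) = \alpha x^2$ with limits $a(\alpha)=0$, $b(\alpha)=\alpha$ is made up for illustration): here $I(\alpha) = \int_0^\alpha \alpha x^2\,dx = \alpha^4/3$, and the rule gives $\alpha\cdot\alpha^2\cdot 1 - 0 + \int_0^\alpha x^2\,dx = \alpha^3 + \alpha^3/3 = 4\alpha^3/3$:

```python
# I(a) = integral_0^a of a*x^2 dx = a^4/3.
a = 0.7

# Leibniz rule: f(b,a)*b'(a) - f(a_low,a)*a_low'(a) + integral of df/da
#             = a*a^2*1 - 0 + a^3/3 = 4a^3/3.
rule = a * a**2 + a**3 / 3.0

# Compare with a central difference of I(a) = a^4/3.
h = 1e-5
numeric = ((a + h) ** 4 / 3.0 - (a - h) ** 4 / 3.0) / (2 * h)
assert abs(numeric - rule) < 1e-8
```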
Appendix D: Probability Distributions and Statistical Tables

D.1 Core Distributions

Normal Distribution: $\mathcal{N}(\mu, \sigma^2)$

PDF: $f(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$.

Mean: $\mu$. Variance: $\sigma^2$. MGF: $M(t) = e^{\mu t + \sigma^2 t^2/2}$.

Standard normal $\mathcal{N}(0,1)$: CDF $\Phi(z)$. Key quantiles: $\Phi^{-1}(0.025) = -1.960$, $\Phi^{-1}(0.05) = -1.645$, $\Phi^{-1}(0.10) = -1.282$.
Log-Normal Distribution: $\text{LogN}(\mu, \sigma^2)$

$X \sim \text{LogN}(\mu,\sigma^2)$ iff $\ln X \sim \mathcal{N}(\mu,\sigma^2)$.

Mean: $e^{\mu+\sigma^2/2}$. Variance: $(e^{\sigma^2}-1)e^{2\mu+\sigma^2}$. Median: $e^\mu$.

Role in macroeconomics: income and productivity shocks ($A_t = e^{z_t}$ with $z_t \sim \mathcal{N}$); asset prices; firm sizes (Chapter 33).
Gamma Distribution: $\text{Gamma}(\alpha, \beta)$

PDF: $f(x) = \frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x}$ for $x > 0$.

Mean: $\alpha/\beta$. Variance: $\alpha/\beta^2$.

Role: prior for positive parameters (CRRA $\sigma$, shock persistence); the chi-squared distribution is the special case $\chi^2(k) = \text{Gamma}(k/2, 1/2)$.
Beta Distribution: $\text{Beta}(\alpha, \beta)$

PDF: $f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}$ for $x\in(0,1)$.

Mean: $\alpha/(\alpha+\beta)$. Variance: $\alpha\beta/[(\alpha+\beta)^2(\alpha+\beta+1)]$.

Role: prior for parameters in $[0,1]$: the Calvo probability $\theta$, AR persistence $\rho$, capital share $\alpha$.
Inverse-Gamma Distribution: $\text{IG}(\alpha, \beta)$

$X \sim \text{IG}(\alpha,\beta)$ iff $1/X \sim \text{Gamma}(\alpha,\beta)$.

Mean: $\beta/(\alpha-1)$ for $\alpha>1$. Variance: $\beta^2/[(\alpha-1)^2(\alpha-2)]$ for $\alpha>2$.

Role: prior for variance parameters ($\sigma^2_\varepsilon$); the conjugate prior for the normal variance.
D.2 Key Statistical Tables

Normal Distribution Quantiles

| $p$ | $\Phi^{-1}(p)$ |
|---|---|
| 0.90 | 1.282 |
| 0.95 | 1.645 |
| 0.975 | 1.960 |
| 0.99 | 2.326 |
| 0.995 | 2.576 |
ADF Critical Values (asymptotic)

| Significance | Constant, no trend | Constant + trend |
|---|---|---|
| 1% | −3.43 | −3.96 |
| 5% | −2.86 | −3.41 |
| 10% | −2.57 | −3.12 |
Johansen Trace Statistic Critical Values (5%)

| Null: rank $\leq r$ | $n=2$ | $n=3$ | $n=4$ |
|---|---|---|---|
| $r=0$ | 15.5 | 29.7 | 47.2 |
| $r=1$ | 3.8 | 15.4 | 29.7 |
| $r=2$ | — | 3.8 | 15.4 |
D.3 Moment-Generating Functions

| Distribution | MGF $M(t) = \mathbb{E}[e^{tX}]$ |
|---|---|
| $\mathcal{N}(\mu,\sigma^2)$ | $e^{\mu t + \sigma^2 t^2/2}$ |
| $\text{Bernoulli}(p)$ | $1-p+pe^t$ |
| $\text{Poisson}(\lambda)$ | $e^{\lambda(e^t-1)}$ |
| $\text{Gamma}(\alpha,\beta)$ | $(1-t/\beta)^{-\alpha}$ for $t<\beta$ |
| $\text{Exponential}(\lambda)$ | $\lambda/(\lambda-t)$ for $t<\lambda$ |
Key property: $\mathbb{E}[X^k] = M^{(k)}(0)$, the $k$-th derivative of the MGF at 0.
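The moment property can be checked by differentiating an MGF numerically at 0; a sketch using the Poisson MGF from the table (the rate $\lambda = 2.5$ and the step size are illustrative; recall $\mathbb{E}[X] = \lambda$ and $\mathbb{E}[X^2] = \lambda + \lambda^2$ for a Poisson):

```python
import math

lam = 2.5

def M(t):
    # Poisson MGF: exp(lam * (e^t - 1)).
    return math.exp(lam * (math.exp(t) - 1.0))

h = 1e-5
first = (M(h) - M(-h)) / (2 * h)              # ~ M'(0)  = E[X]   = lam
second = (M(h) - 2 * M(0.0) + M(-h)) / h**2   # ~ M''(0) = E[X^2] = lam + lam^2
assert abs(first - lam) < 1e-6
assert abs(second - (lam + lam**2)) < 1e-4
```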