# Contents

- IEEE Floating Point Standard
- Arrays
- Numerics
- Random Numbers
  - LCM (Linear Congruential Method)
  - Testing RNG
  - Nonuniform Distributions
  - Non-Analytic Nonuniform Distributions
- Interpolation
  - Linear Interpolation
- Fitting
  - Basis Coefficients
  - Polynomial Fits
  - Fourier Basis
- Integration
  - Trapezoid Rule
  - Simpson's Rule
  - Rescaling Integrals
- Differentiation
  - Scaling
- Linear Algebra
  - LU
  - SVD
  - Eigensystems
    - Uses
    - Principal Component Analysis (PCA)
    - Dimensional Reduction
- Root-Finding
- Minimization
  - Quadratic Fit
  - Golden Section Search
  - Multi-Dimensional
- FFTs
- ODEs
- PDEs
  - Initial Condition Problems
  - Boundary Condition Problems
- MCMC (Markov Chain Monte Carlo)
  - Markov Chain
  - Metropolis-Hastings
  - Burn-in and Proposal Distribution

# IEEE Floating Point Standard

A single-precision (32-bit) float encodes the value

$$\pm \left(1 + \sum_{i=0}^{22} f_{i}\, 2^{i-23}\right) \times 2^{e-127}$$

- $e$ is the exponent number (8 bits)
- $f_{i}$ represents the floating (fraction) bits

Some edge cases:

- $e = 0$, $f \neq 0$: subnormal numbers, where the "1" above is replaced with 0 and the exponent $e - 127$ is replaced by $-126$
- $e = 0$, $f = 0$: a signed zero
- $e = 255$, $f = 0$: a signed $\infty$
- $e = 255$, $f \neq 0$: NaN

(A bit-level decoding sketch is given at the end of these notes.)

# Arrays

TL;DR: if you want to go fast, use numpy arrays. They are contiguous in memory (and hence cache friendly), unlike Python lists, whose elements are pointers to objects scattered around the heap. You also avoid type-checking overhead: Python needs to check the type of every element before each operation on a list, while a numpy array has a single dtype. Numpy also provides built-in vectorized functions that are faster than the corresponding Python built-ins. (See the timing sketch at the end of these notes.)

# Numerics

TL;DR: keep numbers from being too small or too large and you're good.

To first order, an error $\delta a$ in an input $a$ propagates through $f$ as

$$E\left[(\delta f(a))^{2}\right] \approx E\left[\left(\frac{\partial f}{\partial a}\right)^{2} (\delta a)^{2}\right]$$

Assuming similar errors for each operation, the individual round-off errors add like a random walk, so the accumulated round-off error scales as $\sqrt{N}$ over $N$ operations. Making finer time steps yields a better approximation (smaller truncation error), but round-off error degrades this, so there is an optimal step size. (Both effects are demonstrated in the sketches at the end of these notes.)
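# Example Sketches

The sketches below illustrate the sections above. They are minimal illustrations under stated assumptions, not canonical implementations; function names like `decode_float32` are made up for these notes.

First, the IEEE bit layout: unpack a float32 into its sign, exponent, and fraction fields (using only the standard library's `struct` module) and rebuild its value from the formula above, including the edge cases.

```python
import struct

def decode_float32(x):
    """Rebuild a float32's value from its sign, exponent, and fraction bits.
    (Illustrative helper, not a standard API.)"""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))  # raw 32 bits
    sign = bits >> 31            # 1 sign bit
    e = (bits >> 23) & 0xFF      # 8-bit exponent field
    f = bits & 0x7FFFFF          # 23 fraction bits; f / 2**23 = sum of f_i * 2**(i-23)
    if e == 255:                 # inf / NaN edge cases
        return float("nan") if f else (-1) ** sign * float("inf")
    if e == 0:                   # subnormal (and signed zero): leading 1 -> 0, exponent -> -126
        return (-1) ** sign * (f / 2**23) * 2.0**-126
    return (-1) ** sign * (1 + f / 2**23) * 2.0 ** (e - 127)

print(decode_float32(0.15625))   # 0.15625
print(decode_float32(-1.0))      # -1.0
```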
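Next, the Arrays TL;DR: time a pure-Python `sum` over a list against numpy's sum over a contiguous, typed array. The exact numbers are machine-dependent; the point is the gap between the two.

```python
import timeit
import numpy as np

n = 1_000_000
py_list = list(range(n))
np_arr = np.arange(n)

# The pure-Python sum type-checks and unboxes every element;
# np.sum runs one typed loop over contiguous memory.
t_list = timeit.timeit(lambda: sum(py_list), number=10)
t_arr = timeit.timeit(lambda: np_arr.sum(), number=10)
print(f"list sum:  {t_list:.4f} s")
print(f"array sum: {t_arr:.4f} s")
```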
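For the $\sqrt{N}$ round-off claim, one way to see it (an assumption-laden sketch, not a proof): accumulate $N$ random numbers with a naive left-to-right float32 sum and compare against a float64 reference. Each addition contributes a similar relative rounding error, so the relative error of the total should grow roughly like $\sqrt{N}$, i.e. the last column should stay roughly constant.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in [10**k for k in range(2, 7)]:
    a = rng.random(n).astype(np.float32)
    s = np.float32(0.0)
    for v in a:                      # naive running sum, rounded to float32 each step
        s = np.float32(s + v)
    ref = a.sum(dtype=np.float64)    # higher-precision reference
    rel = abs(float(s) - ref) / ref
    print(f"N = {n:>7}   rel. error = {rel:.2e}   rel/sqrt(N) = {rel / np.sqrt(n):.2e}")
```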
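Finally, the trade-off between step size and round-off: a forward-difference estimate of $\frac{d}{dx}\sin x$ at $x = 1$. Truncation error shrinks like $h$ while round-off error grows like $\varepsilon / h$, so the total error is smallest near $h \sim \sqrt{\varepsilon} \approx 10^{-8}$ in double precision; shrinking $h$ further makes things worse.

```python
import numpy as np

x = 1.0
exact = np.cos(x)
for h in [10.0**-k for k in range(1, 16)]:
    approx = (np.sin(x + h) - np.sin(x)) / h   # forward difference
    print(f"h = {h:.0e}   error = {abs(approx - exact):.2e}")
```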