Thermodynamic equilibrium means that objects in thermal contact with each other have the same temperature
Ideal gases, as you cool them down, all converge to a single point, which defines absolute zero. For the Kelvin scale, the triple point of water defines the size of each step on the scale (273.16 K)
The change in energy can come either from work or from heat
$dE = E_{f}-E_{i}$
Energy is a function of state (path independent)
$\delta W$ and $\delta Q$ are path dependent
Can expand out $\delta W$ into more parts:
$dE = -p dV + \delta Q$, where the negative sign comes from the fact that the system does work on the outside environment
You don’t need to be restricted to pressure and volume. In general, you can have some “displacements” $\chi_{i}$ (ie. extensive quantities) and their conjugate forces $J_{i}$ (ie. intensive quantities) to get that $\delta W = \Sigma_{i} J_{i} d\chi_{i}$
Efficiency of an ideal heat engine is defined by $\eta = \frac{W}{Q_{H}} = \frac{Q_{H}-Q_{C}}{Q_{H}} \leq 1$
The figure of merit for an ideal refrigerator is $\omega = \frac{Q_{C}}{W} = \frac{Q_{C}}{Q_{H}-Q_{C}}$
Kelvin formulation of 2nd Law: No process is possible whose sole result is the complete conversion of heat into work
Clausius’s Statement: No process is possible whose sole result is the transfer of heat from a colder to a hotter body
The above two are equivalent: a violation of one implies a violation of the other, shown by hooking up the outputs of an ideal engine to an ideal refrigerator
Carnot’s Theorem: You can’t beat the Carnot engine in terms of efficiency
A Carnot cycle is defined by two isotherms at $T_{H}$ and $T_{C}$, and two adiabatic curves linking the isotherms
The efficiency of a Carnot engine is $\eta = \frac{T_{H}-T_{C}}{T_{H}}$. This can be derived by hooking up two Carnot engines across 3 temperatures, with the heat rejected by one feeding the next. The overall efficiency is the product of the two, which implies that $\frac{Q_{C}}{Q_{H}}$ must be a ratio of functions of the two temperatures
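As a sketch of that composition argument (the function f below is introduced just for this derivation):

```latex
% Two Carnot engines in series: engine 1 runs between T_H and T_M,
% engine 2 between T_M and T_C, with the heat Q_M rejected by 1 feeding 2.
% Since Carnot efficiency can only depend on the two temperatures, write
% the heat ratio as Q_out/Q_in = f(T_in, T_out). Composition gives
\frac{Q_C}{Q_H} = \frac{Q_M}{Q_H}\cdot\frac{Q_C}{Q_M}
               = f(T_H, T_M)\, f(T_M, T_C) \equiv f(T_H, T_C)
% which forces f(T_1, T_2) = g(T_2)/g(T_1). The Kelvin scale is the
% choice g(T) = T, so
\eta = 1 - \frac{Q_C}{Q_H} = 1 - \frac{T_C}{T_H} = \frac{T_H - T_C}{T_H}
```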
NOTE: When dealing with engines, try to keep all quantities positive (ie. a positive amount of heat leaves the hot reservoir, a positive amount of work leaves the engine, and a positive amount of heat enters the cold reservoir)
Clausius’s Theorem: For any cyclic transformation (reversible or not) $\oint \frac{dQ}{T} \leq 0$, where $dQ$ is the heat increment supplied to the system at temperature T
Imagine dividing the cycle into a series of infinitesimal portions, in which the system receives heat dQ and work dW. Imagine that dQ is supplied by a Carnot engine attached to a reservoir at a fixed temperature $T_{0}$
Since the sign of dQ is unspecified, the Carnot engine must be able to run in both directions. In order to do this, the engine must extract some heat $dQ_{R}$ from the fixed reservoir
This implies that $dQ_{R} = \frac{T_{0}}{T} dQ$
The net effect of this process is that some heat $Q_{R} = \oint dQ_{R}$ is extracted from the reservoir and converted to external work W. By the Kelvin formulation of the 2nd law, $Q_{R} = W \leq 0$ which implies $\oint \frac{dQ}{T} \leq 0$
For a reversible cycle, we know $\oint \frac{dQ_{rev}}{T} = 0$. This implies that for a reversible process, this integral is independent of the path. We can use this to define the entropy S: $S(B)-S(A) = \int_{A}^{B} \frac{dQ_{rev}}{T}$
Imagine that you make an irreversible change from A to B, but a reversible return from B to A: $\int_{A}^{B} \frac{dQ}{T}+ \int_{B}^{A} \frac{dQ_{rev}}{T} \leq 0$, which implies $\int_{A}^{B} \frac{dQ}{T} \leq S(B)-S(A)$
For an adiabatically isolated system ($dQ = 0$), this is the statement that entropy always increases or stays the same
For a reversible (and/or quasi-static) process, we can write $dQ = TdS$, where T and S are conjugate variables
The energy gets extremized in equilibrium for an adiabatically isolated system
If you have an out of equilibrium system that is not adiabatically isolated and is subject to external work, then you can define other thermodynamic potentials which are extremized in equilibrium
If there is no heat exchange, and the system comes to mechanical equilibrium subject to a constant external force, then enthalpy is the appropriate potential
Can be thought of as minimizing the energy, plus the work from the external agent
$H = E- \vec{J}\cdot \vec{x}$
$dH = dE -d(\vec{J}\cdot \vec{x})$ (just the Legendre transform w.r.t. J and x)
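Writing that transform out explicitly (a short worked step, using $dE = TdS + \vec{J}\cdot d\vec{x}$ for quasi-static changes):

```latex
dH = dE - d(\vec{J}\cdot\vec{x})
   = T\,dS + \vec{J}\cdot d\vec{x} - \vec{J}\cdot d\vec{x} - \vec{x}\cdot d\vec{J}
   = T\,dS - \vec{x}\cdot d\vec{J}
% H is naturally a function of S and J: the right potential when the
% force J (e.g. pressure) is held fixed rather than the displacement x
```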
What if the number of particles in the system changes? We can define the chemical work as $\delta W = \vec{\mu} \cdot d\vec{N}$, where each species has its own chemical potential
We can account for this by simply adding the chemical work term to the relevant potential
Random variable x has a set of outcomes S (can be continuous or discrete)
An event is a subset of outcomes from S, and is assigned a probability
Probabilities must obey the following rules
They must be greater than or equal to 0
They must be additive if the event subsets are disjoint
They must be normalizable (ie. probability of S should be 1)
A cumulative probability function (CPF), denoted P(x), is the probability of an outcome with any value less than x
P(x) must be a monotonically increasing function of x
$P(-\infty) = 0$ and $P(\infty) = 1$
A probability density function (PDF) is defined by $p(x) = \frac{dP(x)}{dx}$
This must satisfy $\int p(x) dx = 1$
The expectation value of any function F(x) of the random variable is $<F(x)> = \int_{-\infty}^{\infty} dx p(x) F(x)$
Easily extendable to multiple dimensions
The moments of a PDF are the expectation values of powers of the random variable
The nth moment is $m_{n} = <x^{n}> = \int dx p(x) x^{n}$
The characteristic function is the generator of moments of the distribution. It’s just the Fourier transform of the PDF
$\tilde{p}(k) = \int dx p(x) exp(-ikx)$
The PDF can be recovered from the characteristic function via the inverse Fourier transform $p(x) = \frac{1}{2\pi} \int dk \tilde{p}(k) exp(ikx)$
The moments can be calculated by expanding $\tilde{p}(k)$ in powers of k: $\tilde{p}(k) = \Sigma_{n=0}^{\infty} \frac{(-ik)^{n}}{n!}<x^{n}>$
Moments of the PDF around any $x_{0}$ can also be generated by replacing x with $x-x_{0}$ in the moment generation
The cumulant generating function is the logarithm of the characteristic function
You can generate cumulants of the distribution by expanding in powers of k: $\ln \tilde{p}(k) = \Sigma_{n=1}^{\infty} \frac{(-ik)^{n}}{n!}<x^{n}>_{c}$
Moments can be related to cumulants by using $\ln(1+\epsilon) = \Sigma_{n=1}^{\infty} (-1)^{n+1} \frac{\epsilon^{n}}{n}$ to expand out $\ln\tilde{p}(k)$
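A minimal numerical check of the first few moment-to-cumulant relations (a sketch: every cumulant of a Poisson distribution equals its rate, so all three values below should land near the same number; the rate 2.5 is arbitrary):

```python
import numpy as np

# Sample a Poisson distribution and rebuild its cumulants from raw moments.
rng = np.random.default_rng(0)
x = rng.poisson(lam=2.5, size=1_000_000).astype(float)

m1, m2, m3 = (np.mean(x**n) for n in (1, 2, 3))

# First three cumulants in terms of moments (from expanding ln p~(k)):
c1 = m1                            # mean
c2 = m2 - m1**2                    # variance
c3 = m3 - 3*m1*m2 + 2*m1**3        # third cumulant

print(c1, c2, c3)                  # all close to lambda = 2.5
```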
A joint PDF is the probability density of many variables. If all variables are independent, it is just the product of the individual PDFs
Stirling’s approximation for N! holds for very large N. This states that
$ln(N!) \approx N ln N - N + \frac{1}{2}ln(2\pi N)$
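A quick sanity check of the approximation, using `math.lgamma` for the exact $\ln N!$:

```python
import math

# Stirling's approximation vs. exact ln(N!); the error shrinks like 1/(12N).
for N in (10, 100, 1000):
    exact = math.lgamma(N + 1)   # ln(N!)
    stirling = N*math.log(N) - N + 0.5*math.log(2*math.pi*N)
    print(N, exact, stirling, exact - stirling)
```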
For the binomial distribution of N trials with outcome probabilities $p_{A}$ and $p_{B}$, the characteristic function is $\tilde{p}(k) = (p_{A}e^{-ik}+p_{B})^{N}$
Poisson Distribution
This is a limit of the binomial distribution. We want to find the probability of observing M decays in a time interval T. Subdivide the interval into $N = \frac{T}{dt} \gg 1$ subintervals
In each subinterval, the chance of an event occurring is $p_{A} = \alpha dt$ and the chance of no event is $p_{B} = 1-\alpha dt$
This implies the characteristic function is $\tilde{p}(k) = (p_{A}e^{-ik}+p_{B})^{N} = \lim_{dt\rightarrow 0}(1+\alpha dt (e^{-ik}-1))^{\frac{T}{dt}} = \exp(\alpha(e^{-ik}-1)T)$
Taking the inverse Fourier transform yields the Poisson PDF: $p(x) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} e^{ikx} \Sigma_{M=0}^{\infty} \frac{(\alpha T)^{M}}{M!}e^{-ikM}$ (using power series for the exponential)
Using the identity $\int_{-\infty}^{\infty} \frac{dk}{2\pi} \exp(ik(x-M)) = \delta(x-M)$, we get $p_{\alpha,T}(M) = e^{-\alpha T}\frac{(\alpha T)^{M}}{M!}$
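A small sketch of the limit, comparing the binomial PMF at $p_{A} = \frac{\alpha T}{N}$ with the Poisson PMF as N grows (the value $\alpha T = 3$ is arbitrary):

```python
import math

# Max pointwise gap between binomial(N, aT/N) and Poisson(aT) over small M.
alpha_T = 3.0
for N in (10, 100, 10_000):
    p = alpha_T / N
    binom = [math.comb(N, M) * p**M * (1 - p)**(N - M) for M in range(6)]
    poisson = [math.exp(-alpha_T) * alpha_T**M / math.factorial(M)
               for M in range(6)]
    print(N, max(abs(b - q) for b, q in zip(binom, poisson)))  # gap -> 0
```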
Concisely, Liouville’s Theorem states that $\rho$ behaves like an incompressible fluid
The proof is as follows:
Imagine that you Taylor expand all the q's and p's as a function of time
The volume of phase space before is simply $\Pi_{i} dq_{i}dp_{i}$
The volume of phase space after is the product of the Taylor-expanded momenta and positions. You can use Hamilton's equations to show that the first-order terms cancel, which implies that the phase space volume is unchanged
A consequence of this is that $\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t}+\Sigma_{\alpha=1}^{3N} (\frac{\partial \rho}{\partial p_{\alpha}}\frac{dp_{\alpha}}{dt}+\frac{\partial \rho}{\partial q_{\alpha}}\frac{dq_{\alpha}}{dt}) = 0$
Alternatively: $\frac{\partial \rho }{\partial t} = -\{\rho, H\}$, where $\{\}$ is the Poisson bracket
Another consequence of this is that $\frac{d<O>}{dt} = <\{O,H\}>$
Straightforward to prove (integration by parts plus Hamilton's equations)
If a density has reached equilibrium, then it is independent of time. This implies that $\{\rho_{eq},H\}= 0$
If $\rho_{eq}$ is a function of the Hamiltonian, then this equation holds
This implies that $\rho$ is constant on surfaces of constant energy in phase space
This is the basic assumption of statistical mechanics! You have equal probabilities at constant energies
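A numerical illustration of the incompressibility statement, in an assumed toy setting: the 1D harmonic oscillator, whose Hamiltonian flow is known in closed form, shears a patch of phase space without changing its area:

```python
import numpy as np

# Evolve the boundary of a small disk in (q, p) under the exact flow of
# H = p^2/(2m) + m*omega^2*q^2/2 and compare enclosed areas before/after.
m, omega, t = 1.0, 2.0, 0.7

theta = np.linspace(0.0, 2.0*np.pi, 400)
q0 = 1.0 + 0.1*np.cos(theta)     # boundary of a disk of radius 0.1
p0 = 0.5 + 0.1*np.sin(theta)     # centered at (q, p) = (1.0, 0.5)

# Exact harmonic-oscillator flow:
q = q0*np.cos(omega*t) + (p0/(m*omega))*np.sin(omega*t)
p = p0*np.cos(omega*t) - m*omega*q0*np.sin(omega*t)

def shoelace(x, y):
    """Area enclosed by the polygon with vertices (x, y)."""
    return 0.5*abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

print(shoelace(q0, p0), shoelace(q, p))  # both ~ pi*0.1^2: area conserved
```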
For multiple particles, you can generate the s-particle density via $f_{s}(\vec{p}_{1},\ldots,\vec{q}_{s},t) = \frac{N!}{(N-s)!}\int \Pi_{i=s+1}^{N} dV_{i}\, \rho(\vec{p},\vec{q},t) = \frac{N!}{(N-s)!}\rho_{s}(\vec{p}_{1},\ldots,\vec{q}_{s},t)$
For the one particle case, this can be interpreted as the probability that any one of the N particles has the specified p and q values (easily generalized to s particles)
The normalization $\frac{N!}{(N-s)!}$ comes from permutations of s particles amongst N indistinguishable particles, and $\rho_{s}$ is the unconditional PDF for the coordinates of s particles
Can imagine a Hamiltonian which is $H = \Sigma_{i=1}^{N} (\frac{p_{i}^{2}}{2m}+U(q_{i}))+\frac{1}{2}\Sigma_{i,j=1}^{N} V(q_{i}-q_{j})$ where U is some external potential and V is a two-body interaction
You can write the time evolution of $\rho_{s}$ as $\frac{\partial \rho_{s}}{\partial t} = -\int \Pi_{i=s+1}^{N} dV_{i}\, \{\rho, H_{s}+H_{N-s}+H'\}$
If you take the Poisson bracket of each term, you find that $\frac{\partial f_{s}}{\partial t} - \{H_{s}, f_{s}\} = \Sigma_{n=1}^{s} \int dV_{s+1}\, \frac{\partial V(q_{n}-q_{s+1})}{\partial q_{n}}\cdot\frac{\partial f_{s+1}}{\partial p_{n}}$
Note the hierarchy here: the evolution of $f_{s}$ depends on $f_{s+1}$
To terminate the hierarchy, we need to be able to neglect some terms. Since all terms have units of inverse time, we can separate terms according to time scale.
$\frac{1}{\tau_{U}}\approx \frac{\partial U}{\partial q}\frac{\partial}{\partial p}$ is some extrinsic time scale, which can be made arbitrarily long by increasing the system size. On the order of $10^{-5}$ s for length scales of $10^{-3}$ m ($\tau \approx \frac{d}{v}$)
$\frac{1}{\tau_{c}}\approx \frac{\partial V}{\partial q}\frac{\partial }{\partial p}$ is a collision rate, with $\tau_{c}$ on the order of $10^{-12}$ s
Long-range interactions make it harder to define a collision time
There are also collision terms which depend on $f_{s+1}$
$\frac{1}{\tau_{x}} \approx \int dV \frac{\partial V}{\partial q}\frac{\partial}{\partial p}\frac{f_{s+1}}{f_{s}}$ where $\tau_{x}$ is the mean free time
With the dilute gas approximation, you can truncate the BBGKY hierarchy by setting the RHS of the 2nd equation to 0
It’s also reasonable to expect that at very large distances, the particles become independent (ie. $f_{2}(p_{1},q_{1},p_{2},q_{2},t) \rightarrow f_{1}(p_{1},q_{1},t)f_{1}(p_{2},q_{2},t)$)
The final closed form of Boltzmann's equation is $\frac{\partial f_{1}}{\partial t}-\{H,f_{1}\} = \int d \Gamma_{2}\, d\Omega\, \frac{|p_{2}-p_{1}|}{m} \frac{d\sigma}{d\Omega}\left[f_{1}(\Gamma_{1}',t)f_{1}(\Gamma_{2}',t)-f_{1}(\Gamma_{1},t)f_{1}(\Gamma_{2},t)\right]$, where primes denote post-collision coordinates
The Theorem: If $f_{1}(\vec{p},\vec{q},t)$ satisfies the Boltzmann equation, then $\frac{dH}{dt} \leq 0$, where $H(t) = \int d^{3}\vec{p}d^{3}\vec{q} f_{1} ln f_{1}$
H is very closely related to the information content of a one particle PDF
The proof is straightforward:
Take the time derivative of H: $\frac{dH}{dt} = \int d^{3}\vec{p}\,d^{3}\vec{q}\, \frac{\partial f_{1}}{\partial t}(\ln f_{1}+1)$. The +1 piece integrates to zero, since $\int \frac{\partial f_{1}}{\partial t} = \frac{d}{dt}\int f_{1}$ is the rate of change of the (fixed) particle number
Use Boltzmann equation
Use some integration by parts to eliminate the streaming terms (ie. $\ln f_{1}(\frac{\partial U}{\partial \vec{q_{1}}}\cdot\frac{\partial f_{1}}{\partial p_{1}}-\frac{\vec{p_{1}}}{m}\cdot \frac{\partial f_{1}}{\partial \vec{q_{1}}})$)
Observe that the equation must hold if you swap particle 1 for particle 2. Do the swap, then average the two expressions
Make the change of integration variables from the initiators of the collision $(p_{1},p_{2},b)$ to the products of the collision $(p_{1}',p_{2}',b')$
We know that the Jacobian is unity from time-reversal symmetry
We realize that we can swap the initiators and the products. Do this swap, then average the two equations again
The final result is of the form $\frac{dH}{dt} = -\frac{1}{4}\int d\Gamma\, (\text{non-negative integrand})$. Hence $\frac{dH}{dt}\leq 0$
This is a big deal, since it gives a microscopic explanation of how entropy arises. Make the definition $S = -k_{b} H(t)$, or equivalently, on a microscopic scale: $\sigma(\Gamma,t) = -k_{b} \ln f_{1}(\Gamma,t)$
The microstates are defined by points in phase space, and their time evolution is governed by the Hamiltonian. Since the Hamiltonian is energy conserving, all micro-states are confined to a constant energy surface in phase space
This implies that all points on the surface are mutually accessible, which gives the central postulate of stat mech: $p_{(E,x)}(\mu) = \frac{1}{\Omega(E,x)}\, \delta(H(\mu)-E)$, where $\mu$ enumerates the microstates
ie. on the surface, all microstates are equally likely. If you are off the surface, you can’t access it
The normalization $\Omega$ is the area of the surface of constant energy E in phase space. Typically, we define $E-\Delta \leq H(\mu) \leq E+\Delta$, where $\Delta$ is the uncertainty in the energy. This allows the normalization to be over a spherical shell
The entropy of this uniform probability distribution is $S = k_{b} \ln \Omega$
The overall allowed phase space is the product of individual ones (assuming independent systems)
Another consequence of this is that in the microcanonical ensemble, $dE=0$ since we are prescribing a fixed energy
$\Omega$ is the number of ways of choosing $N_{1}$ excited levels among the available N atoms. This is just $\binom{N}{N_{1}}$
Entropy is just $S = k_{b} \ln \frac{N!}{(N-N_{1})!N_{1}!}$
You can use Stirling’s formula to approximate $\ln N! \approx N \ln N -N$
You can then get the temperature from $\frac{\partial S}{\partial E} = \frac{1}{T}$, and invert this to get the energy as a function of temperature
You can figure out the occupancy of a particular level by imagining a small subsystem whose energy is decreased by the energy of that level and whose particle number is one less (ie. $p_{n_{1}} = \frac{\Omega(E-n_{1}\epsilon, N-1)}{\Omega(E,N)}$)
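A numeric sketch of this two-level example (units with $k_{b} = \epsilon = 1$ and the values of N, $N_{1}$ are assumptions here), checking that the temperature from $\partial S/\partial E$ reproduces the occupancy $\frac{N_{1}}{N} = \frac{1}{e^{\beta\epsilon}+1}$:

```python
import math

# Entropy S = ln C(N, N1) of the two-level system (kB = 1), via lgamma.
eps, N = 1.0, 10_000

def S(N1):
    return math.lgamma(N + 1) - math.lgamma(N - N1 + 1) - math.lgamma(N1 + 1)

N1 = 2_000
# 1/T = dS/dE = (1/eps) * dS/dN1, by a centered finite difference:
beta = (S(N1 + 1) - S(N1 - 1)) / (2.0*eps)

# Occupancy check: N1/N should match the inverted relation 1/(e^(beta*eps)+1).
print(N1 / N, 1.0 / (math.exp(beta*eps) + 1.0))
```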
The Hamiltonian of this system is $H = \Sigma_{i=1}^{N} \frac{p_{i}^{2}}{2m}+U(q_{i})$ where U is some external potential. For now, assume that U=0
Assuming some fixed E, $\Omega$ is just
the spatial components ($V^{N}$, since each particle can be anywhere in the volume)
times the momentum components (this is just the surface of a hypersphere of dimension 3N and radius $R = \sqrt{2mE}$, since $\Sigma_{i=1}^{N} p_{i}^{2} = 2mE$)
The area of a hypersphere is given by $A_{d} = S_{d}R^{d-1}$
Can calculate $S_{d}$ via $I_{d} =( \int_{-\infty}^{\infty} dx\, e^{-x^{2}})^{d}$
This equals $ \pi^{\frac{d}{2}}$
This integral is the product of d one-dimensional Gaussians and is spherically symmetric. Change variables via $dV_{d} = S_{d}R^{d-1}dR$, then substitute $y=R^{2}$, and use the integral form of the Gamma function: $\int_{0}^{\infty} dy\, y^{\frac{d}{2}-1}e^{-y} = (\frac{d}{2}-1)!$
This implies that $S_{d} = \frac{2\pi^{\frac{d}{2}}}{(\frac{d}{2}-1)!}$
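A Monte Carlo spot check of this formula (a sketch: the volume of the unit d-ball is $V_{d} = \frac{S_{d}}{d}$, which can be estimated by sampling the hypercube $[-1,1]^{d}$):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
for d in (2, 3, 5):
    S_d = 2 * math.pi**(d/2) / math.gamma(d/2)   # (d/2 - 1)! = Gamma(d/2)
    pts = rng.uniform(-1.0, 1.0, size=(1_000_000, d))
    inside = np.mean(np.sum(pts**2, axis=1) < 1.0)
    V_mc = inside * 2**d                          # Monte Carlo unit-ball volume
    print(d, S_d / d, V_mc)                       # the two should agree
```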
Hence $\Omega = V^{N} \frac{2\pi^{\frac{3N}{2}}}{(\frac{3N}{2}-1)!} (2mE)^{\frac{3N-1}{2}}\Delta R$, where $\Delta R$ is the thickness of the hypersphere shell
The entropy (after Stirling's approximation) is then $S = Nk_{b} \ln (V (\frac{4\pi e m E}{3N})^{\frac{3}{2}})$
$\frac{1}{T} = \frac{3}{2}\frac{Nk_{b}}{E}$
Can recover the ideal gas law from $\frac{P}{T} = \frac{\partial S}{\partial V}$
Can calculate the probability of finding a particle with momentum $\vec{p}_{1}$ via $p(\vec{p}_{1}) = \frac{V\, \Omega(E-\frac{p_{1}^{2}}{2m},V,N-1)}{\Omega(E,V,N)}$
After some algebra and Stirling's approximation, you get the normalized Maxwell-Boltzmann distribution
$p(p_{1}) = (\frac{3N}{4\pi m E})^{\frac{3}{2}}exp(\frac{-3N}{2}\frac{p_{1}^{2}}{2mE})$, where $E = \frac{3Nk_{b}T}{2}$
After mixing two gases ($N_{1}$ particles in $V_{1}$ and $N_{2}$ in $V_{2}$): $S_{f} = N_{1}k_{b} \ln(V_{1}+V_{2})+N_{2}k_{b} \ln(V_{1}+V_{2})$
Which means the change in entropy is $\Delta S = -N k_{b} (\frac{N_{1}}{N}\ln\frac{V_{1}}{V}+\frac{N_{2}}{N}\ln\frac{V_{2}}{V})$
One problem arises if you are “mixing” two samples of the same gas. Imagine that you have a partition separating the gas. Removing the partition shouldn't change the entropy, but the above argument gives a nonzero entropy change
The resolution to this is that with indistinguishable particles, you are overcounting the number of states by a factor of N!. The above entropy works for distinguishable particles
Instead of fixing the energy E (with the temperature T derived from it), you fix the temperature T and let the energy E fluctuate
Consider two systems: one which is the system you care about (S) and another which is sufficiently large such that its temperature is not changed by interactions with S (call this system R)
From a microcanonical ensemble POV, the joint probability of micro-states ($\mu_{S}\otimes \mu_{R}$) satisfies $p(\mu_{S}\otimes \mu_{R}) = \frac{1}{\Omega} \delta_{E,E_{tot}}$ where $E=H_{S}(\mu_{S})+H_{R}(\mu_{R})$
The unconditional probability for S just marginalizes over all of the energies of R
If $\mu_{s}$ is specified, we can view R as a microcanonical ensemble with energy $E_{tot}-H_{S}(\mu_{s})$
All of the above implies $p(\mu_{s}) = \frac{\Omega_{R}(E_{tot}-H_{S}(\mu_{s}))}{\Omega_{S\otimes R}(E_{tot})}$
Since we assume that the energy of S is much smaller than the energy of R: $S_{R}(E_{tot} -H_{s}) \approx S_{R}(E_{tot}) -H_{s}\frac{\partial S_{R}}{\partial E_{R}} = S_{R}(E_{tot})-\frac{H_{S}}{T}$
The above yields that $P(\mu,T) = \frac{e^{-\beta H(\mu)}}{Z}$ where Z is the normalization $Z = \Sigma_{\mu} e^{-\beta H(\mu)}$
This Z is called the partition function
We can make the association $F = -k_{b}T \ln Z(T,x)$
Suppose that Z is a function of $\beta$ (ie. T) and position (x)
We now allow chemical work on the system. In an analogous manner to deriving the canonical ensemble, we can give the probabilities of being in a particular microstate
$p(\mu_{s}) = \frac{\exp(\beta \mu N(\mu_{s}) - \beta H(\mu_{s}))}{Q}$ where $Q=\Sigma_{\mu_{s}}\exp(\beta \mu N(\mu_{s})-\beta H(\mu_{s}))$
You can get the average number of particles via $\frac{\partial}{\partial (\beta \mu)}\ln Q$ and the variance ($\frac{\partial^{2}}{\partial (\beta\mu)^{2}}\ln Q$)
You can define a grand canonical potential $\mathbb{G }(T,\mu, x) = E-TS-\mu N = -k_{b}T \ln Q$ in an analogous way to the canonical ensemble
For each thermodynamical potential, there is an associated ensemble where that potential is constant
For microcanonical, dE = 0
For canonical, dF = 0
There are similar ensembles for H and G. In any case, you can define $X = -k_{b}T \ln Z$ where X is the conserved potential and Z is the corresponding partition sum. Simply add $\beta\mu N$ to the exponent to recover the associated grand potentials
Imagine a general Hamiltonian $H = \Sigma_{i=1}^{N} \frac{p_{i}^{2}}{2m}+U(q_1,… q_N)$
We can write the total partition function as $Z(T,V,N) = \frac{1}{N!} \int \Pi_{i=1}^{N} (\frac{d^{3}p_{i}d^{3}q_{i}}{h^{3}})\, \exp(-\beta \Sigma_{i} \frac{p_i^{2}}{2m})\, \exp(-\beta U(q_{1},\ldots, q_{N})) = Z_{0}(T,V,N)<\exp(-\beta U)>^{0}$
where $Z_{0} = (\frac{V}{\lambda^{3}})^{N}\frac{1}{N!}$ and $\lambda = \sqrt{\frac{h^{2}}{2\pi m k_{b} T}}$
and $<O>^{0}$ is the expectation value of O computed with the probability distribution of the non-interacting system
We can write the above in terms of the cumulants
$\ln Z = \ln Z_{0} + \Sigma_{i=1}^{\infty} \frac{(-\beta)^{i}}{i!}<U^{i}>_{c}^{0}$
Considering that the $q_{i}$ are uniformly and independently distributed within the box V, we have the moments $<U^i>^{0} = \int \Pi_{j}^{N} \frac{d^{3}q_{j}}{V} U(q_{1},…q_{N})^{i}$
Phase transitions are characterized by discontinuities in various state functions, and correspond to singularities in the partition functions
Start with the same integral for the partition function of a dilute gas $Z = \int \frac{\Pi_{i=1}^{N} d^{3}p_{i}d^{3}q_{i}}{N! h^{3N}} \exp(-\beta\Sigma_{i}^{N} \frac{p_{i}^{2}}{2m} -\beta \Sigma_{i<j} V(q_{i}-q_{j}))$
For a non-ideal gas
particles take up some space
There is some potential between each particle
We can approximate the potential via an average attraction energy, where we assume some uniform density $n = \frac{N}{V}$
Can take the log to turn the product into a sum, $\Sigma_{i=0}^{N-1} \ln(V-i\Omega)$, then use the (very handwavy) argument that on average $i\approx\frac{N}{2}$, and there are N terms, so $\Sigma_{i=0}^{N-1} \ln(V-i\Omega)\approx N \ln(V-\frac{N\Omega}{2})$. Exponentiate to get the final result
Hence, the total partition function is $Z \approx \frac{(V-\frac{N\Omega}{2})^{N}}{N! \lambda^{3N}} exp(\frac{\beta u N^{2}}{2V})$
You can rederive the van der Waals equation of state by finding the free energy, then using $P = -\frac{\partial F}{\partial V}$
In a P-T plot, there is a line of coexistence along which two phases of matter can coexist. This line terminates at the so-called “critical point”
The critical point can be found by taking the equation of state ($P = P(N,V,T)$) and setting its first and second derivatives with respect to V to 0
The first derivative constraint comes from the fact that the critical point is the limit of the flat coexistence portion of the isotherms
The second derivative constraint comes from stability reasons
Solve this system of equations for $P_{c},T_{c},V_{c}$
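A symbolic sketch of that calculation for the van der Waals form $P = \frac{Nk_{b}T}{V-Nb}-\frac{aN^{2}}{V^{2}}$ (this specific form, and sympy being available, are assumptions here):

```python
import sympy as sp

P, V, T, N, a, b, kB = sp.symbols('P V T N a b k_B', positive=True)
P_vdw = N*kB*T/(V - N*b) - a*N**2/V**2

# Critical point: dP/dV = 0 and d2P/dV2 = 0, solved for (V, T).
sol = sp.solve([sp.diff(P_vdw, V), sp.diff(P_vdw, V, 2)], [V, T], dict=True)[0]
V_c, T_c = sol[V], sol[T]
P_c = sp.simplify(P_vdw.subs({V: V_c, T: T_c}))

print(V_c, sp.simplify(T_c), P_c)
# Expected: V_c = 3*N*b, k_B*T_c = 8*a/(27*b), P_c = a/(27*b**2)
```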
One can use these critical values to rescale the equation of state into a material independent equation of state (ie $P_{r} = \frac{P}{P_{c}}$ and similar variables)
The Hamiltonian for each molecule of n atoms is $H = \Sigma_{i=1}^{n} \frac{p_{i}^{2}}{2m}+V(q_{1},\ldots,q_{n})$
If the atoms have different masses, you can rescale the coordinates by scaling $\vec{q_{i}}$ by $\sqrt{\frac{m_{i}}{m}}$ and scaling $\vec{p_{i}}$ by $\sqrt{\frac{m}{m_{i}}}$
Ignoring the interactions between molecules, you can define the partition function as $Z(N) = \frac{1}{N!} \left(\int \Pi_{i=1}^{n} \frac{d^{3}p_{i} d^{3}q_{i}}{h^{3}}\, \exp(-\beta \Sigma_{i=1}^{n} \frac{p_{i}^{2}}{2m}-\beta V)\right)^{N}$
If the temperatures are smaller than the dissociation energies ($\approx 10^{4}$ K), then there are only small deformations. The procedure to find the contributions of the deformations to the single particle partition function goes like:
Find the equilibrium positions by minimizing V
Do a small perturbation $\vec{q}_{i} = \vec{q}_{i*} +\vec{u}_{i}$ around these equilibrium positions. Then expand V around the minimum: $V \approx V_{*}+\frac{1}{2}\Sigma_{i,j=1}^{n}\Sigma_{\alpha,\beta = 1}^{3} \frac{\partial ^{2} V}{\partial q_{i,\alpha} \partial q_{j,\beta}} u_{i,\alpha} u_{j,\beta}$
The second derivative matrix is $3n\times 3n$ and positive semi-definite, which means you can diagonalize it and change basis from $u_{i}$ to normal modes $u_{s}$. This allows you to write the deformation Hamiltonian as $H_{1} = V_{*} + \Sigma_{s=1}^{3n} (\frac{p_{s}^{2}}{2m} + \frac{K_{s}}{2} u_{s}^{2})$
What is the average energy of each molecule? This is the expectation value of the deformation Hamiltonian. From equipartition, there are 3n quadratic momentum degrees of freedom and $m\leq 3n$ modes with non-zero $K_{s}$
Some of these eigenmodes are forced to be 0 from symmetries
Translational symmetry: the original potential is invariant under translation by c. This means that no energy is stored in the center of mass coordinate $\vec{Q} = \Sigma_{\alpha} \frac{\vec{q_{\alpha}}}{n}$, which means the 3 $K_{trans}$ associated with this coordinate are 0
Rotational symmetry: there is no potential energy associated with rotation. There can be at most 3 rotational degrees of freedom; some molecule shapes have fewer (a rod has 2, since rotation about its own axis changes nothing)
This implies that $m = 3n-3-r$, where r is the number of rotational zero modes. These are the vibrational modes
If you calculate $\gamma = \frac{C_{p}}{C_{v}}$ for various gases and compare to the experimental values, the classical (equipartition) prediction overestimates the heat capacity. This gets resolved by quantizing the vibrational modes
The vibrational partition function becomes $Z_{vib} = \Sigma_{n=0}^{\infty} exp(-\beta \hbar \omega (n+\frac{1}{2})) = \frac{e^{\frac{-\beta \hbar \omega}{2}}}{1-e^{-\beta \hbar \omega}}$
The expectation value of the energy is then $E_{vib} = -\frac{\partial \ln Z}{\partial \beta} = \frac{\hbar \omega}{2} + \hbar \omega \frac{e^{-\beta \hbar \omega}}{1-e^{-\beta \hbar \omega}}$
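A quick sketch of the resulting heat capacity of a single vibrational mode (units with $\hbar\omega = k_{b} = 1$ assumed), showing the low-T freeze-out and the recovery of the classical equipartition value at high T:

```python
import numpy as np

# C = dE_vib/dT for one quantized mode, in units of kB, with x = hbar*omega/(kB*T).
T = np.array([0.1, 0.5, 1.0, 5.0, 50.0])
x = 1.0 / T
C = x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

print(C)   # -> 0 as T -> 0 (frozen out), -> 1 (i.e. kB) as T -> infinity
```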
The classical Hamiltonian is $H_{rot} = \frac{\vec{L}^{2}}{2I}$. The quantized version of angular momentum is $L^{2} = \hbar^{2} l(l+1)$
Hence, the partition function is $Z = \Sigma_{l=0}^{\infty} exp(-\frac{\beta \hbar^{2} l(l+1)}{2I}) (2l+1)$, where the 2l+1 comes from the degeneracy of the m quantum number
Defining the characteristic temperature $\theta_{rot} = \frac{\hbar^{2}}{2I k_{b}}$, the partition function can be rewritten as $Z = \Sigma_{l=0}^{\infty} exp(-\frac{\theta_{rot} l(l+1)}{T}) (2l+1)$
Doing this sum is hard, but in the high and low temperature limits you can, respectively, approximate the sum as an integral or keep only the first few terms
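A minimal numerical comparison of the exact sum against both limits (temperature in units of $\theta_{rot}$; the truncation `lmax` is an assumption that is safe for the temperatures used):

```python
import numpy as np

def Z_rot(t, lmax=2000):   # t = T / theta_rot
    l = np.arange(lmax)
    return np.sum((2*l + 1) * np.exp(-l*(l + 1)/t))

for t in (0.1, 1.0, 10.0, 100.0):
    # High T: the integral approximation gives Z ~ T/theta_rot.
    # Low T: the first two terms give Z ~ 1 + 3*exp(-2*theta_rot/T).
    print(f"T/theta={t}: exact={Z_rot(t):.4f}, "
          f"high-T={t:.4f}, low-T={1 + 3*np.exp(-2/t):.4f}")
```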
Imagine EM waves with wave-number $\vec{k}$ and two polarizations $\alpha$. The Hamiltonian for the EM field can then be written as a sum of harmonic oscillators
There is no limit on the size of the Brillouin zone for k, so you can have an arbitrarily large number in the sum. This causes the ultraviolet catastrophe
In more detail: you can have an arbitrary number of high frequency modes, and each of those modes classically has an energy of $k_{b}T$ associated with it. This implies there is an infinite amount of energy
This is solved by quantizing the allowed value of the EM energy
$H = \Sigma_{\vec{k},\alpha} \hbar c k (n_{\alpha}(\vec{k}) + \frac{1}{2})$ where $n_{\alpha}(\vec{k}) = 0,1,2,\ldots$
The internal energy can be calculated as per usual
$E = <H> = \Sigma_{k,\alpha} \hbar c k (\frac{1}{2}+\frac{e^{-\beta \hbar c k}}{1-e^{-\beta \hbar c k}}) = VE_{0} + \frac{2V}{(2\pi)^3}\int d^{3}\vec{k}\frac{\hbar c k}{e^{\beta \hbar c k}-1}$
$VE_{0}$ is the infinite zero point energy, but only energy differences can be measured, so this cancels out
The integral can be calculated by making the change of variable $x = \beta \hbar c k$ and using the identity $\int_{0}^{\infty} \frac{dx\, x^{3}}{e^{x}-1} = \frac{\pi^{4}}{15}$
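A one-line numerical check of that identity (scipy assumed available; the integrand is rewritten with $e^{-x}$ to stay finite at large x):

```python
import math
from scipy.integrate import quad

# x^3/(e^x - 1) = x^3 e^{-x} / (1 - e^{-x}); integrate from 0 to infinity.
val, _ = quad(lambda x: x**3 * math.exp(-x) / -math.expm1(-x), 0.0, math.inf)
print(val, math.pi**4 / 15)   # both ~ 6.4939
```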
$\rho = \Sigma_{\alpha} p_{\alpha} \Pi_{i=1}^{N} \delta^{3}(\vec{q}_{i}-\vec{q}_{i,\alpha}(t))\, \delta^{3}(\vec{p}_{i}-\vec{p}_{i,\alpha}(t))$ is the ensemble density, where $\alpha$ labels ensemble members with probabilities $p_{\alpha}$
We can then write the expectation value of O as $\bar{O} = \Sigma_{m,n}<n|\rho|m> <m|O|n> = tr(\rho O)$
We can define the density matrix as $<n|\rho(t)|m> = \Sigma_{\alpha} p_{\alpha} <n|\Phi_{\alpha}(t)><\Phi_{\alpha}(t)|m>$ or alternatively: $\rho(t) = \Sigma_{\alpha} p_{\alpha} | \Phi_{\alpha}(t)>< \Phi_{\alpha}(t)|$
The density matrix has some nice properties
$tr(\rho) = 1$ due to normalization of states
$\rho$ is Hermitian
$\rho$ is positive definite
The density matrix evolves via the quantum Liouville equation $i\hbar \frac{\partial \rho}{\partial t} = [H,\rho]$. Being in equilibrium demands that $\frac{\partial \rho}{\partial t} = 0$, which implies $[H,\rho] = 0$
Given the above, we can define some ensembles:
Micro: $\rho(E) = \frac{\delta(H-E)}{\Omega(E)}$. This allows the density matrix to obey the constraint of a fixed energy E
Canonical: A fixed temperature $T = \frac{1}{k_{b}\beta}$ can be assigned by being in contact with a reservoir
$\rho(\beta) = \frac{exp(-\beta H)}{Z(\beta)}$, and from the normalization condition on $\rho$, we have that $Z = tr(e^{-\beta H})$ (remember, Z is a scalar, not an operator!)
Grand Canonical: similarly, if we allow the particle numbers to vary, the grand canonical partition function is $Q = tr(\exp(-\beta H + \beta \Sigma_{i}\mu_{i}N_{i}))$
If we have N particles, we have N! permutations P which form a group $S_{N}$.
We can label the identity permutation by the ascending positive integers (ie. if N=3, the identity is (1,2,3))
We can define the parity of a permutation as $(-1)^{P} = \pm 1$, with plus associated with an even number of exchanges from ascending order and minus with an odd number of exchanges
Aside: Draw lines connecting the initial and final locations of each integer. The parity is (-1) raised to the number of intersections of these lines
Two types of particles:
Bosons, which are even under permutation: $P|\Psi(1,…N)> = |\Psi(1,…N)>$
Fermions, which are odd under permutation: $P|\Psi(1,…N)> = (-1)^{P}|\Psi(1,…N)>$
Suppose that we have a Hamiltonian of N non-interacting particles in a box of volume V. This gives $H = \Sigma_{\alpha=1}^{N} (-\frac{\hbar^{2}}{2m}\nabla^{2}_{\alpha})$
If we make the particles indistinguishable, then depending on their nature, only a subspace of the distinguishable Fock space is physically allowed
We can bundle both cases like $|\vec{k}\rangle_{\eta} = \frac{1}{\sqrt{N_{\eta}}} \Sigma_{P} \eta^{P} P |\vec{k}_{1},\ldots,\vec{k}_{N}\rangle$, where $\eta$ is +1 for bosons and -1 for fermions
Each state is uniquely specified by a set of occupation numbers $\{n_{\vec{k}}\}$ such that $\Sigma_{\vec{k}} n_{\vec{k}} = N$
For fermions, $|\vec{k}\rangle_{-} = 0$ unless each $n_{\vec{k}}$ is 0 or 1
For bosons, any k may be repeated $n_{k}$ times
The normalization is $N_{\eta} = N!\, \Pi_{\vec{k}} n_{\vec{k}}!$, which reduces to N! for fermions
For fermions, $n_{\vec{k}}$ can be 0 or 1, while bosons can range from 0 to $\infty$. In either case, we have
$\ln Q_{\eta} = -\eta \Sigma_{\vec{k}} \ln (1-\eta\, \exp(\beta\mu-\beta\mathcal{E}(\vec{k})))$ with $\eta=-1$ for fermions and $\eta=+1$ for bosons
Just do the sum of two terms for fermions, and for bosons, do the infinite geometric series
The average occupation number of a state of energy $\mathcal{E}(\vec{k})$ is given by $<n_{\vec{k}}> = -\frac{\partial \ln Q_{\eta}}{\partial (\beta \mathcal{E}(\vec{k}))} = \frac{1}{z^{-1}\exp(\beta \mathcal{E}(\vec{k}))-\eta}$
The average particle number and internal energy are then just $N_{\eta} = \Sigma_{\vec{k}} <n_{\vec{k}}>_{\eta}$ and $E_{\eta} = \Sigma_{\vec{k}} \mathcal{E}(\vec{k}) <n_{\vec{k}}>_{\eta}$
Both of these quantities can be extended to a continuum spectrum via the substitution $\Sigma_{\vec{k}} \rightarrow V\int \frac{d^{3}\vec{k}}{(2\pi)^{3}}$
Fermi occupation is $<n_{\vec{k}}> = \frac{1}{\exp(\beta(\mathcal{E}(\vec{k})-\mu))+1}$
At T=0, $\mu=\epsilon_{F}$, the Fermi energy
At T=0, all one-particle states of energy less than $\epsilon_{F}$ are occupied. For an ideal gas, $\epsilon_{F} = \frac{\hbar^{2}k_{F}^{2}}{2m}$, where $k_{F}$ is the Fermi wavenumber
Using the above definition for average N, you can define the number density $n = \frac{N}{V}$. For an ideal gas, this gives $\epsilon_{F} = \frac{\hbar^{2}}{2m}(\frac{6\pi^{2}n}{g})^{\frac{2}{3}}$, where g is the spin degeneracy
The function $f_{m}^{\eta}(z) = \frac{1}{(m-1)!}\int_{0}^{\infty} \frac{dx x^{m-1}}{z^{-1}e^{x}-\eta}$ shows up a lot when dealing with degenerate gases.
Namely, the number density is proportional to this function.
$\eta = \pm 1$ for bosons and fermions respectively, and $z = \exp(\beta \mu)$ is the fugacity
$f_{m}^{\eta}(z) = \Sigma_{\alpha=1}^{\infty} \eta^{\alpha+1} \frac{z^{\alpha}}{\alpha^{m}}$ in the high temperature (small z), low density limit
There is a recursion relationship for this function: $\frac{d}{dz}f_{m}^{\eta} = \frac{1}{z} f_{m-1}^{\eta}(z)$
In the low T limit, z approaches unity. Hence, we need to find $f_{m}^{+}(1)$ in order to calculate the maximum number density of excited particles $n_{max}$
For m<1, the integral diverges, while for m>1, the integral is finite
Hence, this sets an upper bound on the density of excited states.
For instance, in a bosonic, non-relativistic gas, we have $n \leq \frac{g}{\lambda^{3}}f_{\frac{3}{2}}^{+}(1)$ where $\lambda = \frac{h}{\sqrt{2\pi m k_{b}T}}$
The critical temperature $T_{c}$ is where the above holds with equality. For $T<T_{c}$, z gets pinned to 1, and the remaining density $n_{0} = n-n_{max}$ occupies the lowest energy state
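A closing numeric sketch: evaluate $f_{\frac{3}{2}}^{+}(1) = \zeta(\frac{3}{2})$ from the series expansion above, then solve $n = \frac{g}{\lambda^{3}}\zeta(\frac{3}{2})$ for $T_{c}$. The mass and density below are assumed example values (roughly a dilute trapped atomic gas), not from the notes:

```python
import math

# zeta(3/2) from the small-z series at z = 1, plus an integral tail estimate.
K = 100_000
zeta_32 = sum(k**-1.5 for k in range(1, K)) + 2.0/math.sqrt(K)   # ~ 2.612

hbar = 1.0545718e-34   # J s
kB = 1.380649e-23      # J/K
m = 1.44e-25           # kg, assumed atomic mass (~87 u)
n = 1.0e20             # m^-3, assumed number density
g = 1                  # spin degeneracy

# n = g*zeta(3/2)/lambda^3 with lambda = h/sqrt(2*pi*m*kB*T); solve for T:
h = 2.0*math.pi*hbar
T_c = h**2 / (2.0*math.pi*m*kB) * (n/(g*zeta_32))**(2.0/3.0)
print(zeta_32, T_c)    # T_c comes out in the hundreds of nanokelvin
```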