Logistics

  • Book(s)
    • Landau and Lifshitz
    • Kardar
    • Pathria

Thermodynamics

0th Law

  • Thermodynamic equilibrium means that objects in thermal contact with each other have the same temperature
    • Ideal gases, as you cool them down, all converge to a single point, which defines absolute zero. For the Kelvin scale, the triple point of water defines the size of each step (273.16 K)

1st Law

  • $dE = \delta W + \delta Q$
    • The change in energy can come either from work or from heat
    • $dE = E_{f}-E_{i}$
      • Energy is a function of state (path independent)
    • $\delta W$ and $\delta Q$ are path dependent
    • Can expand out $\delta W$ into more parts:
      • $dE = -p dV + \delta Q$, where the negative sign reflects that the system loses energy by doing work on the outside environment
        • You don’t need to be restricted to pressure and volume. In general, you can have some “displacements” $\chi_{i}$ (i.e. extensive quantities) and their conjugate forces $J_{i}$ (i.e. intensive quantities) to get $\delta W = \Sigma_{i} J_{i} d\chi_{i}$ (a few standard pairs are listed below)
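  • Some standard (force, displacement) pairs, for reference (standard conventions, with signs following the $dE = -pdV+\delta Q$ example above):

```latex
% Common conjugate pairs entering \delta W = \sum_i J_i \, d\chi_i
\delta W = -P\,dV   % gas:    J = -P (pressure), \chi = V (volume)
\delta W = B\,dM    % magnet: J = B  (field),    \chi = M (magnetization)
\delta W = F\,dL    % wire:   J = F  (tension),  \chi = L (length)
```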

Response Functions

  • Response functions characterize the macroscopic behavior of a system
  • Heat Capacity:
    • $C_{V} = (\frac{dQ}{dT})_{V}$
    • $C_{P} = (\frac{dQ}{dT})_{P}$
      • This is always larger than $C_{V}$, since some of the heat is used up in the work done by changes in volume
  • Force constants
    • the isothermal compressibility of a gas: $\kappa_{T} = -(\frac{\partial V}{\partial P})_{T}/V$
    • the susceptibility of a magnet $\chi_{T} = (\frac{\partial M}{\partial B})_{T}/V$
  • Thermal Responses:
    • Things like the expansivity of a gas, given by $\alpha_{P} = (\frac{\partial V}{\partial T})_{P}/V$ (see the numerical check below)
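  • As a sanity check on these definitions, a minimal numerical sketch for an ideal gas ($PV = Nk_{B}T$, so $\kappa_{T} = 1/P$ and $\alpha_{P} = 1/T$), using the standard identity $C_{P}-C_{V} = TV\alpha_{P}^{2}/\kappa_{T}$, which reduces to $Nk_{B}$ here; the parameter values are arbitrary:

```python
import numpy as np

# Check the response-function definitions for an ideal gas, PV = N kB T,
# by finite differences. For this equation of state kappa_T = 1/P and
# alpha_P = 1/T, so C_P - C_V = T V alpha_P^2 / kappa_T reduces to N kB.
kB = 1.380649e-23               # J/K
N, T, P = 1e23, 300.0, 1e5      # particle number, K, Pa (arbitrary values)

V = lambda P, T: N * kB * T / P  # equation of state
hP, hT = P * 1e-6, T * 1e-6      # relative finite-difference steps

kappa_T = -(V(P + hP, T) - V(P - hP, T)) / (2 * hP) / V(P, T)
alpha_P = (V(P, T + hT) - V(P, T - hT)) / (2 * hT) / V(P, T)

print(kappa_T * P)   # ~1, i.e. kappa_T = 1/P
print(alpha_P * T)   # ~1, i.e. alpha_P = 1/T
print(T * V(P, T) * alpha_P**2 / kappa_T / (N * kB))  # ~1: C_P - C_V = N kB
```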

2nd Law

  • Efficiency of an ideal heat engine is defined by $\eta = \frac{W}{Q_{H}} = \frac{Q_{H}-Q_{C}}{Q_{H}} \leq 1$
  • The figure of merit for an ideal refrigerator is $\omega = \frac{Q_{C}}{W} = \frac{Q_{C}}{Q_{H}-Q_{C}}$
  • Kelvin formulation of 2nd Law: No process is possible whose sole result is the complete conversion of heat into work
  • Clausius’s Statement: No process is possible whose sole result is the transfer of heat from a colder to a hotter body
    • The above two are equivalent: can show that violation of one implies a violation of the other by hooking up the output of an ideal engine to an ideal refrigerator
  • Carnot’s Theorem: You can’t beat the Carnot engine in terms of efficiency
    • A Carnot cycle is defined by two isotherms at $T_{H}$ and $T_{C}$, and two adiabatic curves linking the isotherms
    • The efficiency of a Carnot engine is $\eta = \frac{T_{H}-T_{C}}{T_{H}}$. This can be derived by hooking up two different Carnot engines at 3 temperatures (sketched below). The heat from one gets dumped into the next. The overall efficiency is then the product of the two, which implies that the efficiency is some ratio of temperatures
      • $1-\eta = \frac{Q_{2}}{Q_{1}} = \frac{T_{2}}{T_{1}}$
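  • A sketch of that composition argument (the standard derivation, with $T_{1} > T_{2} > T_{3}$ and the second engine ingesting the heat $Q_{2}$ rejected by the first):

```latex
% Heat ratios can depend only on the temperatures: Q_2/Q_1 = f(T_1, T_2).
% Composing the two engines gives
\frac{Q_3}{Q_1} = \frac{Q_2}{Q_1}\cdot\frac{Q_3}{Q_2}
\;\Rightarrow\; f(T_1, T_3) = f(T_1, T_2)\, f(T_2, T_3)
% which forces f(T_1, T_2) = g(T_2)/g(T_1). Choosing g(T) = T defines the
% thermodynamic temperature scale, so
\frac{Q_2}{Q_1} = \frac{T_2}{T_1}
\;\Rightarrow\; \eta = 1 - \frac{T_2}{T_1}
```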

Entropy

  • Clausius’s Theorem: For any cyclic transformation (reversible or not) $\oint \frac{dQ}{T} \leq 0$, where $dQ$ is the heat increment supplied to the system at temperature T
    • Imagine dividing the cycle into a series of infinitesimal portions, where the system receives energy from dQ and dW. Imagine that dQ gets funneled into some Carnot engine which is attached to some reservoir at a fixed temperature $T_{0}$
    • Since the sign of dQ is unspecified, the Carnot engine must be able to run in both directions. In order to do this, the engine must extract some heat $dQ_{R}$ from the fixed reservoir
    • This implies that $dQ_{R} = \frac{T_{0}}{T} dQ$
    • The net effect of this process is that some heat $Q_{R} = \oint dQ_{R}$ is extracted from the reservoir and converted to external work W. By the Kelvin formulation of the 2nd law, $Q_{R} = W \leq 0$ which implies $\oint \frac{dQ}{T} \leq 0$
  • For a reversible cycle, we know $\oint \frac{dQ_{rev}}{T} = 0$. This implies that the integral between two points is independent of the path. We can use this to define the entropy S: $S(B)-S(A) = \int_{A}^{B} \frac{dQ_{rev}}{T}$
  • Imagine that you make an irreversible change from A to B, but a reversible return from B to A: $\int_{A}^{B} \frac{dQ}{T}+ \int_{B}^{A} \frac{dQ_{rev}}{T} \leq 0$, which implies $\int_{A}^{B} \frac{dQ}{T} \leq S(B)-S(A)$
    • This is the statement that entropy always increases or stays the same
  • For a reversible (and/or quasi-static) process, we can write $dQ = TdS$, where T and S are conjugate variables
  • Equilibrium exists when entropy is maximized!

3rd Law

  • The entropy of a closed system in thermodynamic equilibrium approaches a constant value (typically 0) as the temperature approaches absolute zero
  • This also implies that heat capacities and thermal expansivities must go to 0 as T approaches 0

Thermodynamic Potentials

  • The energy gets extremized in equilibrium for an adiabatically isolated system
  • If a system is out of equilibrium, not adiabatically isolated, and subject to external work, then you can define other thermodynamic potentials which are extremized in equilibrium

Enthalpy

  • If there is no heat exchange, and the system comes to mechanical equilibrium subject to a constant external force, then enthalpy is the appropriate potential
    • Can be thought of as minimizing the energy plus the work done by the external agent
  • $H = E- \vec{J}\cdot \vec{x}$
  • $dH = dE - d(\vec{J}\cdot \vec{x})$ (just the Legendre transform w.r.t. $\vec{J}$ and $\vec{x}$)

Helmholtz Free Energy

  • For isothermal transformations in the absence of mechanical work ($\delta W = 0$)
  • $F = E-TS$ (Legendre transform w.r.t. T and S)

Gibbs Free Energy

  • For isothermal transformations involving mechanical work at constant external force
  • $G = E-TS-J\cdot x$
    • Legendre transform in both T,S and J,x (the differentials of all four potentials are collected below)
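  • Collecting the differentials (standard results, starting from $dE = TdS + \vec{J}\cdot d\vec{x}$ and applying the Legendre transforms above), so the natural variables of each potential can be read off:

```latex
dE = T\,dS + \vec{J}\cdot d\vec{x}     % E = E(S, x)
dH = T\,dS - \vec{x}\cdot d\vec{J}     % H = E - \vec{J}\cdot\vec{x} = H(S, J)
dF = -S\,dT + \vec{J}\cdot d\vec{x}    % F = E - TS = F(T, x)
dG = -S\,dT - \vec{x}\cdot d\vec{J}    % G = E - TS - \vec{J}\cdot\vec{x} = G(T, J)
```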

Grand Potential

  • What if the number of particles in the system changes? We can define the chemical work as $\delta W = \vec{\mu} \cdot d\vec{N}$, where each species has its own chemical potential
  • We can account for this by adding the corresponding term to a potential; e.g. the grand potential is $\mathcal{G} = E - TS - \vec{\mu}\cdot\vec{N}$

Probability

Definitions

  • A random variable x has a set of outcomes S (can be continuous or discrete)
  • An event is a subset of outcomes from S, and is assigned a probability
  • Probabilities must obey the following rules
    • They must be greater than or equal to 0
    • They must be additive over disjoint events
    • They must be normalizable (i.e. the probability of S should be 1)
  • A cumulative probability function (CPF), denoted P(x), is the probability of an outcome with any value less than x
    • P(x) must be a monotonically increasing function of x
    • $P(-\infty) = 0$ and $P(\infty) = 1$
  • A probability density function (PDF) is defined by $p(x) = \frac{dP(x)}{dx}$
    • This must satisfy $\int p(x) dx = 1$
  • The expectation value of any function F(x) of the random variable is $<F(x)> = \int_{-\infty}^{\infty} dx p(x) F(x)$
    • Easily extendable to multiple dimensions
  • The moments of a PDF are expectation values of powers of the random variable
    • The nth moment is $m_{n} = <x^{n}> = \int dx p(x) x^{n}$
  • The characteristic function is the generator of moments of the distribution. It’s just the Fourier transform of the PDF
    • $\tilde{p}(k) = \int dx p(x) exp(-ikx)$
    • The PDF can be recovered from the characteristic function via the inverse Fourier transform $p(x) = \frac{1}{2\pi} \int dk \tilde{p}(k) exp(ikx)$
    • The moments can be calculated by expanding $\tilde{p}(k)$ in powers of k: $\tilde{p}(k) = \Sigma_{n=0}^{\infty} \frac{(-ik)^{n}}{n!}<x^{n}>$
    • Moments of the PDF around any $x_{0}$ can also be generated by substituting $x-x_{0}$ for x in the moment generation
  • The cumulant generating function is the logarithm of the characteristic function
    • You can generate cumulants of the distribution by expanding in powers of k: $\ln \tilde{p}(k) = \Sigma_{n=1}^{\infty} \frac{(-ik)^{n}}{n!}<x^{n}>_{c}$
    • Moments can be related to cumulants by using $\ln(1+\epsilon) = \Sigma_{n=1}^{\infty} (-1)^{n+1} \frac{\epsilon^{n}}{n}$ to expand out $\ln\tilde{p}(k)$ (a numerical check of the first few relations appears after this list)
  • A joint PDF is the probability density of many variables. If all variables are independent, this is just the product of the individual PDFs
  • Stirling’s approximation for N! holds for very large N (checked numerically below). This states that
    • $ln(N!) \approx N ln N - N + \frac{1}{2}ln(2\pi N)$
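  • A quick numerical check of the moment/cumulant relations above, for a Gaussian sample (assumes numpy; the expressions for $<x^{2}>_{c}$ and $<x^{3}>_{c}$ in terms of moments follow from the $\ln(1+\epsilon)$ expansion):

```python
import numpy as np

# Check the first few moment <-> cumulant relations on a Gaussian,
# whose cumulants beyond the second should vanish.
rng = np.random.default_rng(0)
lam, sigma = 2.0, 1.5
x = rng.normal(lam, sigma, size=1_000_000)

m1, m2, m3 = np.mean(x), np.mean(x**2), np.mean(x**3)

c1 = m1                              # <x>_c   = <x>
c2 = m2 - m1**2                      # <x^2>_c = <x^2> - <x>^2
c3 = m3 - 3 * m2 * m1 + 2 * m1**3    # <x^3>_c, zero for a Gaussian

print(c1, lam)        # ~2.0
print(c2, sigma**2)   # ~2.25
print(c3)             # ~0, up to sampling noise
```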
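  • And a check of Stirling’s approximation (math.lgamma gives $\ln N!$ exactly, via $\ln\Gamma(N+1)$):

```python
import math

# Compare ln N! with Stirling's approximation; the error decays like 1/(12 N).
for N in (10, 100, 1000):
    exact = math.lgamma(N + 1)  # ln N!
    approx = N * math.log(N) - N + 0.5 * math.log(2 * math.pi * N)
    print(N, exact, approx, exact - approx)
```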

Important Probability Distributions

  • Gaussian: $p(x) = \frac{1}{\sqrt{2\pi \sigma^{2}}} exp(- \frac{(x-\lambda)^{2}}{2\sigma^{2}})$
    • The characteristic function is $\tilde{p}(k) = exp(-ik\lambda-\frac{k^{2}\sigma^{2}}{2})$
    • Only the first two cumulants are non-zero
    • Multidimensional version is $p(x) = \frac{1}{\sqrt{(2\pi)^{N}det(C)}}exp(\frac{-1}{2}\Sigma_{mn}C_{mn}^{-1}(x_m-\lambda_{m})(x_{n}-\lambda_{n}))$
      • C is a positive-definite symmetric matrix (the covariance matrix)
  • Binomial: $p_{N}(N_{A}) = \binom{N}{N_{A}} p_{A}^{N_{A}}p_{B}^{N-N_{A}}$
    • Characteristic function is $\tilde{p}(k) = (p_{A}e^{-ik}+p_{B})^{N}$
  • Poisson Distribution
    • This is the limit of the binomial distribution (checked numerically below). We want the probability of observing M decays in a time interval T. Subdivide the interval into $N = \frac{T}{dt} \gg 1$ steps
      • In each interval, the chance of an event occurring is $p_{A} = \alpha dt$, and the chance of no event in the interval is $p_{B} = 1-\alpha dt$
      • This implies the characteristic function is $\tilde{p}(k) = (p_{A}exp(-ik)+p_{B})^{N} = lim_{dt\rightarrow 0}(1+\alpha dt (e^{-ik}-1))^{\frac{T}{dt}} = exp(\alpha(e^{-ik}-1)T)$
      • Taking the inverse Fourier transform yields the Poisson PDF: $p(x) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} e^{ikx} e^{-\alpha T}\Sigma_{M=0}^{\infty} \frac{(\alpha T)^{M}}{M!}e^{-ikM}$ (using the power series for the exponential)
      • Using the identity $\int_{-\infty}^{\infty} \frac{dk}{2\pi} exp(ik(x-M)) = \delta(x-M)$, this gives $p_{\alpha,T}(M) = e^{-\alpha T}\frac{(\alpha T)^{M}}{M!}$
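  • A small numerical version of this limit (assumes scipy; the values of $\alpha$ and T are arbitrary):

```python
import numpy as np
from scipy.stats import binom, poisson

# Poisson as the dt -> 0 limit of the binomial: subdivide T into N = T/dt
# steps, each with event probability p_A = alpha * dt.
alpha, T = 3.0, 2.0
M = np.arange(20)

for N in (10, 100, 10_000):
    pA = alpha * (T / N)
    err = np.max(np.abs(binom.pmf(M, N, pA) - poisson.pmf(M, alpha * T)))
    print(N, err)   # error shrinks as the subdivision gets finer
```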

Kinetic Theory of Gases

  • We want to be able to connect macroscopic properties of systems to the microscopic picture of particles
  • Imagine a 6N dimensional phase space, where time evolution of each particle is governed by Hamilton’s equations
    • $\dot{q_{i}} = \frac{\partial H}{\partial p_{i}}$
    • $-\dot{p_{i}} = \frac{\partial H}{\partial q_{i}}$
  • We can consider a probability density $\rho(p,q,t)$, which describes the density of representative points of the ensemble in that region of phase space
  • We can compute macroscopic values for functions $O(p,q)$ via $<O> = \int d\Gamma \rho(p,q,t) O(p,q)$, where $d\Gamma = \Pi_{i} dq_{i}dp_{i}$

Liouville’s Theorem

  • Concisely, Liouville’s Theorem states that $\rho$ behaves like an incompressible fluid (illustrated numerically at the end of this section)
  • The proof is as follows:
    • Imagine that you Taylor expand out all the q and p’s as a function of time
    • The volume of phase space before is simply $\Pi_{i} dq_{i}dp_{i}$
    • The volume of phase space afterwards is the product of the Taylor-expanded momentum and position increments. You can use Hamilton’s equations to show that the first-order terms cancel, which implies that the phase space volume is unchanged
  • A consequence of this is that $\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t}+\Sigma_{\alpha=1}^{3N} (\frac{\partial \rho}{\partial p_{\alpha}}\frac{dp_{\alpha}}{dt}+\frac{\partial \rho}{\partial q_{\alpha}}\frac{dq_{\alpha}}{dt}) = 0$
    • Alternatively: $\frac{\partial \rho }{\partial t} = -\{\rho, H\}$, where $\{\}$ is the Poisson bracket
  • Another consequence of this is that $\frac{d<O>}{dt} = <\{O,H\}>$
    • Straightforward to prove: integration by parts plus Hamilton’s equations gives it
  • If a density has reached equilibrium, it is independent of time. This implies that $\{\rho_{eq},H\}= 0$
    • If $\rho_{eq}$ is a function of the Hamiltonian, then this equation holds
      • This implies that $\rho$ is constant on surfaces of constant energy in phase space
      • This is the basic assumption of statistical mechanics! You have equal probabilities at constant energies
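  • A tiny numerical illustration of incompressibility, assuming the simplest case of a 1D harmonic oscillator with $m=\omega=1$ (its exact Hamiltonian flow is a rotation of phase space, so any patch of initial conditions keeps its area):

```python
import numpy as np

# Liouville in action for H = p^2/2 + q^2/2: the exact Hamiltonian flow
# rotates phase space, so the area of any patch of initial conditions
# is preserved in time.
def flow(q, p, t):
    # Exact solution of Hamilton's equations for the harmonic oscillator
    return q * np.cos(t) + p * np.sin(t), p * np.cos(t) - q * np.sin(t)

def area(q, p):
    # Shoelace formula for the polygonal patch with vertices (q_i, p_i)
    return 0.5 * abs(np.dot(q, np.roll(p, -1)) - np.dot(p, np.roll(q, -1)))

# Triangle of initial conditions in (q, p)
q0 = np.array([1.0, 1.2, 1.0])
p0 = np.array([0.0, 0.0, 0.3])

for t in (0.0, 1.0, 5.0):
    qt, pt = flow(q0, p0, t)
    print(t, area(qt, pt))   # area stays 0.03 for all t
```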

Bogoliubov-Born-Green-Kirkwood-Yvon Hierarchy (BBGKY)

  • For multiple particles, you can generate the s-particle density via $f_{s}(p_{1}…q_{s},t) = \frac{N!}{(N-s)!}\int \Pi_{i=s+1}^{N} dV_{i} \rho(p,q,t) = \frac{N!}{(N-s)!}\rho_{s}(p_{1},…,q_{s},t)$
    • The normalization is $\frac{N!}{(N-s)!}$ and $\rho_{s}$ is the unconditional PDF for the coordinates of s particles
  • Can imagine a Hamiltonian which is $H = \Sigma_{i=1}^{N} (\frac{p_{i}^{2}}{2m}+U(q_{i}))+\frac{1}{2}\Sigma_{i,j=1}^{N} V(q_{i}-q_{j})$ where U is some external potential and V is a two-body interaction
  • Imagine splitting this Hamiltonian into 3 parts
    • $H_{s} = \Sigma_{n=1}^{s}(\frac{p_{n}^{2}}{2m}+U(q_{n}))+\frac{1}{2}\Sigma_{n,m=1}^{s} V(q_{n}-q_{m})$
    • $H_{N-s} = \Sigma_{i=s+1}^{N}(\frac{p_{i}^{2}}{2m}+U(q_{i}))+\frac{1}{2}\Sigma_{i,j=s+1}^{N} V(q_{i}-q_{j})$
    • $H' = \Sigma_{n=1}^{s}\Sigma_{i=s+1}^{N} V(q_{n}-q_{i})$
  • You can write the time evolution of $\rho_{s}$ as $\frac{\partial \rho_{s}}{\partial t} = -\int \Pi_{i=s+1}^{N} dV_{i} \{\rho, H_{s}+H_{N-s}+H'\}$
      • If you take the Poisson bracket of each term, you find that $\frac{\partial f_{s}}{\partial t} - \{H_{s}, f_{s}\} = \Sigma_{n=1}^{s} \int dV_{s+1} \frac{\partial V(q_{n}-q_{s+1})}{\partial q_{n}}\frac{\partial f_{s+1}}{\partial p_{n}}$
        • Note the hierarchy here: the evolution of $f_{s}$ depends on $f_{s+1}$
  • To terminate the hierarchy, we need to be able to neglect some terms. Since all terms have units of inverse time, we can separate terms according to time scale.
    • $\frac{1}{\tau_{U}}\approx \frac{\partial U}{\partial q}\frac{\partial}{\partial p}$ sets an extrinsic time scale $\tau_{U}$, which can be made arbitrarily long by increasing the system size; on the order of $10^{-5}$ s for length scales of $10^{-3}$ m ($\tau \approx \frac{d}{v}$)
    • $\frac{1}{\tau_{c}}\approx \frac{\partial V}{\partial q}\frac{\partial }{\partial p}$ sets the collision time $\tau_{c}$, on the order of $10^{-12}$ s
      • Long-range interactions make it harder to define a collision time
    • There are also collision terms which depend on $f_{s+1}$
      • $\frac{1}{\tau_{x}} \approx \int dV \frac{\partial V}{\partial q}\frac{\partial}{\partial p}\frac{f_{s+1}}{f_{s}}$ where $\tau_{x}$ is the mean free time
      • $\tau_{x} \approx \frac{\tau_{c}}{nd^{3}}$

Boltzmann Equation

  • The Boltzmann equation assumes a dilute gas ($nd^{3} \ll 1$, with n the density and d the interaction range). This allows us to drop the mean-free-time terms from the equation.
  • The first two equations in the BBGKY hierarchy are
    • $(\frac{\partial}{\partial t} - \frac{\partial U}{\partial q} \frac{\partial}{\partial p }+\frac{p_{1}}{m}\frac{\partial}{\partial q}) f_{1} = \int dV_{2} \frac{\partial V}{\partial q_{1}}\frac{\partial f_{2}}{\partial p_{1}}$
    • $(\frac{\partial}{\partial t}-\frac{\partial U}{\partial q_{1}}\frac{\partial }{\partial p_{1}}-\frac{\partial U}{\partial q_{2}}\frac{\partial}{\partial p_{2}}+\frac{p_{1}}{m}\frac{\partial}{\partial q_{1}}+\frac{p_{2}}{m}\frac{\partial}{\partial q_{2}}-\frac{\partial V(q_{1}-q_{2})}{\partial q_{1}}(\frac{\partial}{\partial p_{1}}-\frac{\partial}{\partial p_{2}}))f_{2} = \int dV_{3} (\frac{\partial V(q_{1}-q_{3})}{\partial q_{1}}\frac{\partial}{\partial p_{1}}+\frac{\partial V(q_{2}-q_{3})}{\partial q_{2}}\frac{\partial}{\partial p_{2}}) f_{3}$
  • With the dilute gas approximation, you can truncate the BBGKY hierarchy by setting the RHS of the 2nd equation to 0
  • It’s also reasonable to expect that at very large distances, the particles become independent (ie. $f_{2}(p_{1},q_{1},p_{2},q_{2},t) \rightarrow f_{1}(p_{1},q_{1},t)f_{1}(p_{2},q_{2},t)$)
  • The final closed form of Boltzmann’s equation is $(\frac{\partial}{\partial t} - \frac{\partial U}{\partial q} \frac{\partial}{\partial p }+\frac{p_{1}}{m}\frac{\partial}{\partial q}) f_{1} = -\int d^{3}\vec{p}_{2}d^{2}\Omega |\frac{\partial \sigma}{\partial \Omega}||v_{1}-v_{2}| (f_{1}(p_{1},q_{1},t)f_{1}(p_{2},q_{1},t)-f_{1}(p_{1}',q_{1},t)f_{1}(p_{2}',q_{1},t))$, where primes denote post-collision momenta

H-Theorem

  • The Theorem: If $f_{1}(\vec{p},\vec{q},t)$ satisfies the Boltzmann equation, then $\frac{dH}{dt} \leq 0$, where $H(t) = \int d^{3}\vec{p}d^{3}\vec{q} f_{1} \ln f_{1}$
    • H is very closely related to the information content of a one particle PDF
  • The proof is straightforward
    • Take the time derivative of H to get $\frac{dH}{dt} = \int d^{3}\vec{p}d^{3}\vec{q} \frac{\partial f_{1}}{\partial t}(\ln f_{1}+1)$; the +1 term drops since $\int f_{1}$ is a conserved normalization
    • Use Boltzmann equation
    • Use some integration by parts to eliminate some streaming terms
    • Observe that the equation must be unchanged if you swap particle 1 for particle 2. Do the swap, then average the two expressions
    • Make the change of integration variables from the initiators of the collision $(p_{1},p_{2},b)$ to the products of the collision $(p_{1}',p_{2}',b')$
      • We know that the Jacobian is unity from time-reversal symmetry
    • We realize that we can swap the initiators and the products. Do this swap, then average the two equations again
  • The final result is $\frac{dH}{dt} = -\frac{1}{4}\int d\Gamma\, x$, where x is a strictly non-negative integrand (written out below); hence $\frac{dH}{dt}\leq 0$
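  • For reference, the standard final form of that integrand (schematically, with shorthand $f_{i} = f_{1}(p_{i},q,t)$ and primes for post-collision momenta):

```latex
\frac{dH}{dt} = -\frac{1}{4}\int d^{3}\vec{q}\, d^{3}\vec{p}_{1}\, d^{3}\vec{p}_{2}\, d^{2}\Omega
\left|\frac{d\sigma}{d\Omega}\right| |\vec{v}_{1}-\vec{v}_{2}|
\,\big(f_{1}'f_{2}' - f_{1}f_{2}\big)
\,\ln\!\frac{f_{1}'f_{2}'}{f_{1}f_{2}}
% non-negative term by term, since (a - b)\ln(a/b) >= 0 for any a, b > 0
```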

Classical Statistical Mechanics

Microcanonical Ensemble

  • The microstates are defined by points in phase space, and their time evolution is governed by the Hamiltonian. Since the Hamiltonian is energy conserving, all micro-states are confined to a constant energy surface in phase space
    • This implies that all points on the surface are mutually accessible, which gives the central postulate of stat mech: $p_{(E,x)}(\mu) = \frac{1}{\Omega(E,x)}\delta(H(\mu)-E)$, where $\mu$ enumerates the microstates
      • i.e. on the surface, all microstates are equally likely; off the surface, the probability is zero
      • The normalization $\Omega$ is the area of the surface of the constant energy E in phase space. Typically, we define $E-\Delta \leq H(\mu) \leq E+\Delta$, where $\Delta$ is uncertainty in the energy. This allows us to have the normalization be a spherical shell
      • The entropy of this uniform probability distribution is $S = k_{b} \ln \Omega$
        • The overall allowed phase space is the product of individual ones (assuming independent systems)

Two State System Example

  • Consider N impurity atoms trapped in a solid matrix. Each impurity can either have energy 0 or $\epsilon$
  • $H = \epsilon\Sigma_{i=1}^{N} n_{i} = \epsilon N_{1}$ where $N_{1}$ is the total number of excited impurities
  • $p(\{n_{i}\}) = \frac{1}{\Omega}\,\delta_{E,\,\epsilon\Sigma_{i}n_{i}}$
  • $\Omega$ is the number of ways of choosing $N_{1}$ excited levels among the available N atoms. This is just $\binom{N}{N_{1}}$
  • Entropy is just $S = k_{b} \ln \binom{N}{N_{1}}$
  • You can use Stirling’s formula to approximate $\ln N! \approx N \ln N -N$
  • You can then get the temperature from $\frac{\partial S}{\partial E} = \frac{1}{T}$, and invert this to get the energy as a function of temperature (done numerically below)
  • You can figure out the occupancy of a particular level by imagining a small subsystem whose energy is decreased by the energy of that state, with particle number one less (i.e. $p(n_{1}) = \frac{\Omega(E-n_{1}\epsilon, N-1)}{\Omega(E,N)}$)
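  • A numerical version of this procedure (assumes scipy; units with $k_{b} = \epsilon = 1$):

```python
import numpy as np
from scipy.special import gammaln

# Microcanonical two-level system: S(E) = ln C(N, N1) with E = N1 * eps,
# in units k_b = eps = 1. Temperature follows from 1/T = dS/dE.
N = 10_000

def S(N1):
    # ln binomial(N, N1), computed stably via log-gamma
    return gammaln(N + 1) - gammaln(N1 + 1) - gammaln(N - N1 + 1)

N1 = np.arange(1, N // 2)        # low-energy half (E < N/2), so that T > 0
E = N1.astype(float)             # E = N1 * eps with eps = 1
invT = np.gradient(S(N1), E)     # 1/T = dS/dE by finite differences

# Inverting 1/T = ln((N - N1)/N1) gives E/N = 1/(exp(1/T) + 1)
T = 1.0 / invT
print(np.max(np.abs(E / N - 1.0 / (np.exp(1.0 / T) + 1))))  # ~1e-4: matches
```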

0th Law

  • $\frac{\partial S}{\partial E} = \frac{1}{T}$
    • Think of this as a statement of thermal equilibrium

1st Law

  • $dE = T dS + \vec{J}\cdot d\vec{x}$
    • Alternatively, you can write $\frac{\partial S}{\partial x_{i}} = - \frac{J_{i}}{T}$

2nd Law

  • $\delta S = (\frac{1}{T_{1}}-\frac{1}{T_{2}})\delta E_{1} \geq 0$
  • This is only stable if $\frac{\partial^{2}S_{1}}{\partial E_{1}^{2}}+\frac{\partial^{2}S_{2}}{\partial E_{2}^{2}} \leq 0$

Ideal Gas

  • The Hamiltonian of this system is $H = \Sigma_{i=1}^{N} \frac{p_{i}^{2}}{2m}+U(q_{i})$ where U is some external potential. For now, assume that U=0
  • Assuming some fixed E, $\Omega$ is just
    • the spatial components ($V^{N}$, since each particle can be anywhere in the volume)
    • times momentum components (this is just the surface of a hypersphere of dimension 3N and radius $\sqrt{2mE}$, since $\Sigma_{i=1}^{N} \vec{p}_{i}^{2} = 2mE$)
      • The area of a hypersphere is given by $A_{d} = S_{d}R^{d-1}$
      • Can calculate $S_{d}$ via $I_{d} = (\int_{-\infty}^{\infty} dx e^{-x^{2}})^{d}$
        • This equals $ \pi^{\frac{d}{2}}$
        • This integral is the product of d one-dimensional Gaussians, which is spherically symmetric. Make the change of variables ($dV_{d} = S_{d}R^{d-1}dR$), then another change of variable $y=R^{2}$, then use the integral form of the Gamma function $\int_{0}^{\infty} dy y^{\frac{d}{2}-1}e^{-y} = (\frac{d}{2}-1)!$
        • This implies that $S_{d} = \frac{2\pi^{\frac{d}{2}}}{(\frac{d}{2}-1)!}$ (checked numerically via Monte Carlo at the end of this section)
      • Hence (with d = 3N) $\Omega = V^{N} \frac{2\pi^{\frac{3N}{2}}}{(\frac{3N}{2}-1)!} (2mE)^{\frac{3N-1}{2}}\Delta R$, where $\Delta R$ is the thickness of the hypersphere shell
  • The entropy (after Stirling approximation) is then $S = Nk_{b} \ln (V (\frac{4\pi e m E}{3N})^{\frac{3}{2}})$
    • $\frac{1}{T} = \frac{3}{2}\frac{Nk_{b}}{E}$
    • Can recover the ideal gas law from $\frac{P}{T} = (\frac{\partial S}{\partial V})_{E}$
  • Can calculate the probability of finding a particle with momentum $\vec{p_{1}}$ via $p(\vec{p_{1}}) = \frac{V \Omega(E-\frac{p_{1}^{2}}{2m},V,N-1)}{\Omega(E,V,N)}$
    • After some algebra and Stirling’s approximation, you get the normalized Maxwell-Boltzmann distribution
      • $p(p_{1}) = (\frac{3N}{4\pi m E})^{\frac{3}{2}}exp(\frac{-3N}{2}\frac{p_{1}^{2}}{2mE})$, where $E = \frac{3Nk_{b}T}{2}$
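  • A Monte Carlo check of the hypersphere formula $S_{d} = \frac{2\pi^{d/2}}{(\frac{d}{2}-1)!}$ used above (assumes numpy; uses $S_{d} = d V_{d}$, with $V_{d}$ the volume of the unit d-ball, and $(\frac{d}{2}-1)! = \Gamma(\frac{d}{2})$):

```python
import numpy as np
from math import gamma, pi

# Estimate the unit d-ball volume V_d from uniform points in [-1, 1]^d,
# then compare S_d = d * V_d against S_d = 2 pi^(d/2) / Gamma(d/2).
rng = np.random.default_rng(1)
for d in (2, 3, 4, 5):
    pts = rng.uniform(-1.0, 1.0, size=(1_000_000, d))
    V_d = 2.0**d * np.mean(np.sum(pts**2, axis=1) < 1.0)  # hit fraction * cube volume
    print(d, d * V_d, 2 * pi**(d / 2) / gamma(d / 2))     # Monte Carlo vs formula
```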

Mixing Entropy

  • The problem with the above ideal gas entropy is that it’s not extensive! (i.e. it fails $S(\lambda E, \lambda V, \lambda N) = \lambda S$)
  • What happens if you mix two gases together?
    • Initially, $S_{i} = N_{1}k_{b} \ln(V_{1})+N_{2}k_{b} \ln(V_{2})$
    • After mixing $S_{f} = N_{1}k_{b} \ln(V_{1}+V_{2})+N_{2}k_{b} \ln(V_{1}+V_{2})$
    • Which means the change in entropy is $\Delta S = -N k_{b} (\frac{N_{1}}{N}\ln\frac{V_{1}}{V}+\frac{N_{2}}{N}\ln\frac{V_{2}}{V})$, where $V = V_{1}+V_{2}$ and $N = N_{1}+N_{2}$
  • One problem arises if you are “mixing” two identical gases. Imagine that you have a partition separating the gas. Removing the partition shouldn’t change the entropy, but the above argument gives a nonzero entropy change (the Gibbs paradox)
    • The resolution is that with indistinguishable particles, you are over-counting the number of states by a factor of N!. The above entropy works for distinguishable particles (a numerical demo is below)
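  • A minimal numerical sketch of that resolution (only the V-dependent part of S is kept, since the E-dependent pieces cancel for gases at the same temperature; units with $k_{b}=1$):

```python
import numpy as np

# Entropy change on removing a partition between two equal volumes of the
# SAME gas, with and without the 1/N! indistinguishability correction.
N1 = N2 = 1e23
V1 = V2 = 1.0

def S_dist(N, V):
    # Distinguishable particles: S/k_b = N ln V + (E-dependent terms)
    return N * np.log(V)

def S_indist(N, V):
    # Divide Omega by N!; Stirling gives S/k_b = N ln(V/N) + N + ...
    return N * (np.log(V / N) + 1.0)

for S in (S_dist, S_indist):
    dS = S(N1 + N2, V1 + V2) - S(N1, V1) - S(N2, V2)
    print(S.__name__, dS)   # 2 N ln 2 (spurious) vs 0 (paradox resolved)
```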