Introduction and Review

1-1 Introduction

"Statistical mechanics is the branch of physics which studies macroscopic systems from a microscopic or molecular point of view"

1-2 Classical Mechanics

Newtonian Approach

<center>

<br>

<tex>\frac{d \vec p}{dt} = \dot{\vec p}</tex>

<br>

<tex>\frac{d \vec p}{dt} = \vec F</tex>

<br>

</center>

Example 1

Equation of motion of body in a gravitational field

Example 2

Simple Harmonic Oscillator

Example 3

Two-dimensional motion of a body under coulombic attraction
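Example 2, the simple harmonic oscillator, can be checked numerically. Below is a minimal sketch (not from the notes) that integrates Newton's equation <tex>m \ddot x = -kx</tex> with the velocity Verlet scheme, assuming unit mass and spring constant; after one period <tex>2 \pi \sqrt{m/k}</tex> the trajectory should return to its starting point.

```python
import math

def velocity_verlet(x, v, force, m, dt, steps):
    """Integrate Newton's equation m x'' = F(x) with velocity Verlet."""
    a = force(x) / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / m
        v += 0.5 * (a + a_new) * dt       # velocity update with averaged force
        a = a_new
    return x, v

# Simple harmonic oscillator: F = -k x, with m = k = 1 so the period is 2*pi
m, k = 1.0, 1.0
dt = 1e-3
steps = round(2 * math.pi / dt)
x, v = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, dt, steps)
print(x, v)  # close to the initial conditions (1.0, 0.0)
```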

Lagrangian Approach

This formulation of classical mechanics is independent of the coordinate system employed. Introduce a function, the Lagrangian <tex>L = K - U</tex>, equal to the difference between the kinetic energy and the potential energy.

<center>

<br>

<tex> \frac{d}{dt} \left ( \frac{\partial L}{\partial \dot x} \right ) = \frac{\partial L}{\partial x} </tex>

<br>

</center>

It is often easier to write down an expression for the potential energy than to identify all the forces acting on a system. To completely specify the solution for a single particle, six initial conditions are needed (three positions and three velocities), which together with Lagrange's equations completely determine the future and past trajectory of the system.
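As a worked sketch of Lagrange's equation applied to Example 2, the simple harmonic oscillator:

```latex
% Lagrangian of the one-dimensional harmonic oscillator
L = K - U = \tfrac{1}{2} m \dot x^2 - \tfrac{1}{2} k x^2
% Lagrange's equation
\frac{d}{dt} \left( \frac{\partial L}{\partial \dot x} \right)
  = \frac{\partial L}{\partial x}
\quad\Rightarrow\quad
\frac{d}{dt} \left( m \dot x \right) = -kx
\quad\Rightarrow\quad
m \ddot x = -kx
```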

Example 3'

Equations obtained in a much more straightforward way.

Hamiltonian Approach

This approach is more convenient from a theoretical point of view, particularly in quantum mechanics and statistical mechanics. Define generalized momentum by the equation below.

<center>

<br>

<tex>p_j = \frac{\partial L}{\partial \dot q_j}</tex>

<br>

<tex>j = 1, 2, 3, ..., 3N</tex>

<br>

</center>

Define the Hamiltonian function for a system of one particle.

<center>

<br>

<tex>H \left ( p_1, p_2, p_3, q_1, q_2, q_3 \right ) = \sum_{j = 1}^3 p_j \dot q_j - L \left ( \dot q_1, \dot q_2, \dot q_3, q_1, q_2, q_3 \right )</tex>

<br>

<tex>H = K + U</tex>

</center>

Below are Hamilton's equations of motion

<center>

<br>

<tex>\frac{\partial H}{\partial p_j} = \dot q_j</tex>

<br>

<tex>\frac{\partial H}{\partial q_j} = - \dot p_j</tex>

<br>

<tex>j = 1, 2, ..., 3N</tex>

<br>

</center>

The Hamiltonian is the total energy of the system which is usually the prime quantity in quantum and statistical mechanics.
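For a single particle in one dimension with <tex>L = \tfrac{1}{2} m \dot x^2 - U(x)</tex>, the definitions above can be checked in a short worked sketch:

```latex
p = \frac{\partial L}{\partial \dot x} = m \dot x, \qquad
H = p \dot x - L = \frac{p^2}{2m} + U(x) = K + U
% Hamilton's equations reproduce Newton's equation:
\dot x = \frac{\partial H}{\partial p} = \frac{p}{m}, \qquad
\dot p = -\frac{\partial H}{\partial x} = -\frac{dU}{dx} = F
```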

1-3 Quantum Mechanics

The prescription given by classical mechanics had to be modified to include the principle of uncertainty. The modification resulted in the development of quantum mechanics. The uncertainty principle dictates that the wave function <tex>\Psi \left ( \vec q, t \right )</tex> is the most complete description of the system that is possible. A central problem of quantum mechanics is the calculation of <tex>\Psi \left ( \vec q, t \right )</tex>, which is given as the solution of the Schrodinger equation. The Hamiltonian operator is below

<center>

<br>

<tex>H = - \frac{\hbar^2}{2m} \nabla^2 + U(x, y, z)</tex>

<br>

</center>

There will be many <tex>\Psi</tex>'s and <tex>E</tex>'s that satisfy the Schrodinger equation. The application of boundary conditions often limits the values of <tex>E_j</tex> to only discrete values. Examples are below

<center>

<br>

<tex>\mbox{particle in one-dimensional well}</tex>

<br>

<tex>H = - \frac{\hbar^2}{2m} \frac{d^2}{dx^2} </tex>

<br>

<tex>\epsilon_n = \frac{h^2 n^2}{8 m a^2}</tex>

<br>

<tex>\mbox{simple harmonic oscillator}</tex>

<br>

<tex>H = - \frac{\hbar^2}{2m} \frac{d^2}{dx^2} + \frac{1}{2} kx^2 </tex>

<br>

<tex>\epsilon_n = \left ( n + \frac{1}{2} \right ) \hbar \omega</tex>

<br>

<tex>\mbox{rigid rotor}</tex>

<br>

<tex>H = - \frac{\hbar^2}{2I} \left [ \frac{1}{\sin \theta} \frac{\partial}{\partial \theta} \left ( \sin \theta \frac{\partial}{\partial \theta} \right ) + \frac{1}{\sin^2 \theta} \frac{\partial^2}{\partial \phi^2} \right ]</tex>

<br>

<tex>\epsilon_J = \frac{J ( J + 1 ) \hbar^2}{2I}</tex>

<br>

<tex>J = 0, 1, 2, ...</tex>

<br>

</center>

The number of eigenfunctions having the same energy is called the degeneracy of the system. Derive an expression for the degeneracy of energy states in a three-dimensional infinite well. Refer to class notes. The degeneracy is very large at energies typical of room temperature. Extend to an <tex>N</tex>-particle system.
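The degeneracy count for the three-dimensional infinite well can be sketched by brute force: with <tex>\epsilon \propto n_x^2 + n_y^2 + n_z^2</tex>, tally how many quantum-number triples share the same value (the function name below is illustrative, not from the notes).

```python
from collections import Counter

def well_degeneracies(n_max):
    """Count states of a cubic 3-D infinite well sharing nx^2 + ny^2 + nz^2."""
    counts = Counter()
    for nx in range(1, n_max + 1):
        for ny in range(1, n_max + 1):
            for nz in range(1, n_max + 1):
                counts[nx**2 + ny**2 + nz**2] += 1
    return counts

deg = well_degeneracies(20)
print(deg[3], deg[14])  # (1,1,1) is unique; (1,2,3) has 3! = 6 orderings
```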

<p>
</p>

Often the Hamiltonian of a many-body system can be written either exactly or approximately as a summation of one-particle or few-particle Hamiltonians. The energy of the entire system is the sum of the energies of the individual particles if they do not interact. This allows a many-body problem to be reduced to a one-body problem when the interactions are weak enough to ignore. In some cases the interactions are too strong to ignore, but it is still possible to formally or mathematically write the Hamiltonian in the form below. This leads to the definition of quasi-particles such as phonons and photons.

<center>

<br>

<tex>H \Psi = \left ( \epsilon_{\alpha} + \epsilon_{\beta} + ... \right ) \Psi_{\alpha} \Psi_{\beta} \Psi_{\gamma} ...</tex>

<br>

</center>

Let <tex>P_{12}</tex> be an operator that exchanges two identical particles.

<center>

<br>

<tex>P_{12} \Psi (1, 2, 3, ..., N) = \Psi (2, 1, 3, ..., N)</tex>

<br>

<tex>\mbox{bosons}</tex>

<br>

<tex>P_{12} \Psi (1, 2, 3, ..., N) = \Psi (1, 2, 3, ..., N)</tex>

<br>

<tex>\mbox{fermions}</tex>

<br>

<tex>P_{12} \Psi (1, 2, 3, ..., N) = - \Psi (1, 2, 3, ..., N)</tex>

<br>

</center>

The wavefunction of bosons is symmetric, and the wavefunction of fermions is antisymmetric.
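The symmetry requirement can be illustrated numerically. Below is a minimal sketch for two particles in one dimension, assuming two arbitrary (hypothetical) one-particle orbitals: the antisymmetrized product changes sign under exchange and vanishes when both particles share the same coordinate, a glimpse of the Pauli exclusion principle.

```python
import math

# Two hypothetical one-particle orbitals; any two distinct functions work here
phi_a = lambda x: math.exp(-x**2)
phi_b = lambda x: x * math.exp(-x**2)

def psi_fermion(x1, x2):
    """Antisymmetrized two-particle wavefunction (unnormalized)."""
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)

def psi_boson(x1, x2):
    """Symmetrized two-particle wavefunction (unnormalized)."""
    return phi_a(x1) * phi_b(x2) + phi_b(x1) * phi_a(x2)

x1, x2 = 0.3, 1.1
print(psi_fermion(x2, x1) == -psi_fermion(x1, x2))  # True: exchange flips the sign
print(psi_boson(x2, x1) == psi_boson(x1, x2))       # True: exchange leaves it unchanged
print(psi_fermion(x1, x1))                          # 0.0: no double occupancy
```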

1-4 Thermodynamics

Pressure-volume work

<center>

<br>

<tex>w = \int_A^B p dV</tex>

<br>

</center>

Heat absorbed by the system from the surroundings during the change of the system from state <tex>A</tex> to state <tex>B</tex>.

<center>

<br>

<tex>q = \int_A^B \delta q</tex>

<br>

</center>

The first law of thermodynamics states that even though these two quantities depend on the path taken from state <tex>A</tex> to state <tex>B</tex>, the difference <tex>q - w</tex> does not. That difference is the change in the internal energy, a state function that depends only on the two states <tex>A</tex> and <tex>B</tex>.

<center>

<br>

<tex>\Delta E = E_B - E_A</tex>

<br>

</center>

The first law of thermodynamics is nothing but a statement of the law of conservation of energy.

<p>
</p>

A reversible change is one in which the driving force is infinitesimal.

<p>
</p>

The second law can be stated a number of different ways. There is a quantity <tex>S</tex>, called entropy, which is a state function. In an irreversible process, the entropy of the system and its surroundings increases. In a reversible process, the entropy of the system and its surroundings remains constant.

<center>

<br>

<tex>\Delta S = \int_A^B \frac{dq_{rev}}{T}</tex>

<br>

<tex>\Delta S > \int_A^B \frac{dq}{T}</tex>

<br>

</center>

The third law of thermodynamics states that if the entropy of each element in some crystalline state is taken as zero at the absolute zero of temperature, every substance has a finite positive entropy; at the absolute zero of temperature the entropy may become zero, and does become zero in the case of perfect crystalline substances.

<center>

<br>

<tex>S - S_o = \int_0^T \frac{dq_{rev}}{T}</tex>

<br>

<tex>S_o = 0</tex>

<br>

</center>

The energy is a "natural" function of entropy and volume. A more useful pair of variables may be temperature and volume, or temperature and pressure. Consider Legendre transforms. A curve can be specified by the intercepts of its tangent lines with the <tex>y</tex>-axis. The function <tex>\psi (p)</tex> is the Legendre transform of <tex>y</tex>. It is completely equivalent to <tex>y</tex>, but takes the slope <tex>p</tex> as the independent variable instead of <tex>x</tex>.

<center>

<br>

<tex>\psi (p ) = y - px</tex>

<br>

<tex>y = y(x)</tex>

<br>

<tex>x = x(p)</tex>

<br>

</center>
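The Legendre transform can be sketched numerically. For a convex curve, <tex>\psi(p) = y - px</tex> evaluated at the point of tangency equals the minimum of <tex>y(x) - px</tex> over <tex>x</tex>; for the illustrative choice <tex>y = x^2</tex> the analytic answer is <tex>\psi(p) = -p^2/4</tex>.

```python
def legendre_transform(y, p, xs):
    """psi(p) = min over x of [ y(x) - p*x ] for a convex function y on grid xs."""
    return min(y(x) - p * x for x in xs)

xs = [-5.0 + 0.001 * i for i in range(10001)]  # grid covering [-5, 5]
p = 3.0
psi = legendre_transform(lambda x: x * x, p, xs)
print(psi)  # close to -p**2 / 4 = -2.25
```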

Seek a function of temperature and volume that is completely equivalent to the energy. This is the Helmholtz free energy. The condition for equilibrium at constant temperature and volume is that the Helmholtz free energy assume its minimum value.

<p>
</p>

The enthalpy is the thermodynamic state function whose natural variables are entropy and pressure.

<p>
</p>

Temperature and pressure are the natural variables of the Gibbs free energy. Generalize the expression for <tex>\phi (p)</tex> to several variables.

<center>

<br>

<tex>\phi (p) = y - \sum_j p_j x_j</tex>

<br>

<tex>p_j = \frac{\partial y}{\partial x_j}</tex>

<br>

</center>

Consider the chemical potential

<center>

<br>

<tex>\mu_j = \left ( \frac{\partial E}{\partial N_j} \right )_{S, V, ...}</tex>

<br>

<tex>\mu_j = \left ( \frac{\partial H}{\partial N_j} \right )_{S, p, ...}</tex>

<br>

<tex>\mu_j = \left ( \frac{\partial A}{\partial N_j} \right )_{V, T, ...}</tex>

<br>

<tex>\mu_j = \left ( \frac{\partial G}{\partial N_j} \right )_{p, T, ...}</tex>

<br>

</center>

A function is homogeneous of order <tex>n</tex> if multiplying each variable by <tex>\lambda</tex> multiplies the function by <tex>\lambda^n</tex>. The Gibbs free energy is homogeneous of order one in the <tex>N_j</tex>, which leads to <tex>G = \sum_j \mu_j N_j</tex>.

<p>
</p>

Gibbs-Duhem equation. The relation below is true at constant temperature and pressure.

<center>

<br>

<tex>\sum_j N_j d \mu_j = 0</tex>

<br>

</center>

Consider the reaction below, where the <tex>\nu_j</tex> are stoichiometric coefficients

<center>

<br>

<tex>\nu_A A + \nu_B B + ... \leftrightarrows \nu_D D + \nu_E E + ...</tex>

<br>

</center>

At equilibrium, <tex>G</tex> must be a minimum with respect to <tex>\lambda</tex>, the extent of the reaction.

<center>

<br>

<tex>\sum_j \nu_j \mu_j = \nu_D \mu_D + \nu_E \mu_E + ... - \nu_A \mu_A - \nu_B \mu_B - ...</tex>

<br>

<tex>\sum_j \nu_j \mu_j = 0 </tex>

<br>

</center>

Consider the reaction <tex>\nu_A A + \nu_B B \leftrightarrows \nu_C C + \nu_D D</tex>.

<center>

<br>

<tex>G - G^0 = \int_{p_0}^p V dp</tex>

<br>

<tex>G - G^0 = \int_{p_0}^p \frac{NkT}{p} dp</tex>

<br>

<tex>G - G^0 = NkT \ln \frac{p}{p_0}</tex>

<br>

<tex>\mbox{N = 1 mole}</tex>

<br>

<tex>\mu_j(T, p) = \mu_j^0 (T) + RT \ln \frac{p_j}{p_{0j}}</tex>

<br>

<tex>\Delta \mu = \Delta \mu^0 + RT \ln \frac{ \left (p_C' \right )^{\nu_C} \left (p_D' \right )^{\nu_D}}{\left (p_A' \right )^{\nu_A} \left (p_B' \right )^{\nu_B}}</tex>

</center>

If <tex>\Delta \mu^0</tex> is less than zero and the term inside the natural logarithm is greater than or equal to one, the conversion of reactants in their standard states to products in their standard states proceeds spontaneously. In general, <tex>\Delta \mu</tex> and the relation above determine the extent of a chemical reaction.

1-5 Mathematics

Probability Distributions

Let <tex>u</tex> be a variable that can assume <tex>M</tex> discrete values <tex>u_1</tex>, <tex>u_2</tex>, ..., <tex>u_M</tex> with probabilities <tex>p(u_1)</tex>, <tex>p(u_2)</tex>, ..., <tex>p(u_M)</tex>. The average value of <tex>u</tex> is below.

<center>

<br>

<tex>\bar u = \frac{\sum_{j=1}^M u_j p(u_j)}{\sum_{j=1}^M p(u_j)}</tex>

<br>

</center>

The summation in the denominator equals one for a normalized distribution. The mean of any function of <tex>u</tex>, <tex>f(u)</tex>, is given by the expression below.

<center>

<br>

<tex>\overline{f(u)} = \sum_{j=1}^M f(u_j) p(u_j) </tex>

<br>

</center>

If <tex>f(u) = u^m</tex>, <tex>\overline{f(u)}</tex> is called the <tex>m</tex>th moment of the distribution <tex>p(u)</tex>. If <tex>f(u) = (u-\bar u)^m</tex>, <tex>\overline{f(u)}</tex> is called the <tex>m</tex>th central moment of the distribution, the <tex>m</tex>th moment about the mean. The mean of <tex>(u-\bar u)^2</tex> is called the variance, and is a measure of the spread of the distribution. The square root of the variance is the standard deviation.

<p>
</p>

The Poisson distribution is a useful discrete distribution.

<center>

<br>

<tex>P(m) = \frac{a^m e^{-a}}{m!}</tex>

<br>

<tex>m = 0, 1, 2, ...</tex>

<br>

</center>
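A quick numerical sketch: the mean and the variance of the Poisson distribution both equal <tex>a</tex>.

```python
import math

def poisson(m, a):
    """Poisson distribution P(m) = a^m e^{-a} / m!."""
    return a**m * math.exp(-a) / math.factorial(m)

a = 3.0
ms = range(60)  # the tail beyond m ~ 60 is utterly negligible for a = 3
total = sum(poisson(m, a) for m in ms)
mean = sum(m * poisson(m, a) for m in ms)
var = sum((m - mean) ** 2 * poisson(m, a) for m in ms)
print(total, mean, var)  # 1.0, 3.0, 3.0 to numerical precision
```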

The mean of any function when <tex>u</tex> is continuous rather than discrete is given by the integral below.

<center>

<br>

<tex>\overline{f(u)} = \int f(u) p(u) du</tex>

<br>

</center>

The most important continuous probability distribution is the Gaussian distribution.

<center>

<br>

<tex>p(x) = \frac{1}{(2 \pi \sigma^2)^{\frac{1}{2}}} \exp \left \{ - \frac{(x - \bar x)^2}{2 \sigma^2} \right \}</tex>

<br>

<tex>- \infty \le x \le \infty</tex>

<br>

</center>
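A numerical sketch confirming the normalization and moments of the Gaussian by simple Riemann summation (the parameter values are illustrative):

```python
import math

def gaussian(x, x_bar, sigma):
    """Gaussian probability density with mean x_bar and standard deviation sigma."""
    return math.exp(-(x - x_bar) ** 2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

x_bar, sigma = 1.0, 2.0
dx = 1e-3
n = round(20 * sigma / dx)  # grid spanning +/- 10 sigma around the mean
xs = [x_bar - 10 * sigma + dx * i for i in range(n + 1)]
total = sum(gaussian(x, x_bar, sigma) * dx for x in xs)
mean = sum(x * gaussian(x, x_bar, sigma) * dx for x in xs)
var = sum((x - mean) ** 2 * gaussian(x, x_bar, sigma) * dx for x in xs)
print(total, mean, var)  # close to 1, x_bar, and sigma**2
```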

Stirling's Approximation

The asymptotic approximation to <tex>\ln N!</tex> is called Stirling's approximation.

<center>

<br>

<tex>\ln N! = \sum_{m=1}^N \ln m</tex>

<br>

<tex>\ln N! \approx \int_1^N \ln x dx</tex>

<br>

<tex>\int_1^N \ln x dx = N \ln N - N + 1 \approx N \ln N - N</tex>

<br>

</center>
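The quality of Stirling's approximation is easy to probe; the sketch below uses the standard-library identity <tex>\ln \Gamma (N+1) = \ln N!</tex> to avoid overflow.

```python
import math

def stirling(n):
    """Leading Stirling approximation to ln n!."""
    return n * math.log(n) - n

for n in (10, 100, 10_000, 1_000_000):
    exact = math.lgamma(n + 1)  # ln n! computed without forming n! itself
    approx = stirling(n)
    print(n, exact, approx, (exact - approx) / exact)  # relative error shrinks with n
```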

Binomial and Multinomial Distribution

Consider the problem of determining how many ways it is possible to divide <tex>N</tex> distinguishable systems into groups such that there are <tex>n_1</tex> systems in the first group, <tex>n_2</tex> systems in the second group, and so on, with <tex>n_1 + n_2 + ... = N</tex>. To solve this, first calculate the number of permutations of <tex>N</tex> distinguishable objects, that is, the number of possible ways to order <tex>N</tex> distinguishable objects. Next calculate the number of ways of dividing <tex>N</tex> distinguishable objects into two groups, one group containing <tex>N_1</tex> objects and the other containing the remaining <tex>N-N_1</tex>. The expression for the total number of ordered selections is below.

<center>

<br>

<tex>N(N-1) \cdots (N-N_1+1) \times (N-N_1)! = \frac{N!}{\left ( N-N_1 \right )!} \times \left ( N - N_1 \right )!</tex>

<br>

<tex> N(N-1) \cdots (N-N_1+1) \times (N-N_1)! = N! </tex>

<br>

</center>

But this overcounts the result drastically, since the order within each group is irrelevant. Dividing by the number of orderings within each group gives the desired result below.

<center>

<br>

<tex>\frac{N!}{N_1! \left (N - N_1 \right )! } = \frac{N!}{N_1! N_2!}</tex>

<br>

</center>

Generalization to the division of <tex>N</tex> into <tex>r</tex> groups.

<center>

<br>

<tex>\frac{N!}{N_1! N_2! \cdots N_r!} = \frac{N!}{\prod_{j=1}^r N_j!}</tex>

<br>

</center>
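The counting formula can be verified directly with exact integer arithmetic (a sketch; the helper name is illustrative):

```python
import math

def multinomial(groups):
    """N! / (N_1! N_2! ... N_r!) for a division of N = sum(groups) objects."""
    n = sum(groups)
    result = math.factorial(n)
    for g in groups:
        result //= math.factorial(g)  # exact integer division at every step
    return result

print(multinomial([4, 6]))     # 10! / (4! 6!) = 210, the same as math.comb(10, 4)
print(multinomial([2, 3, 5]))  # 10! / (2! 3! 5!) = 2520
```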

Method of Lagrange Multipliers

It is often necessary to maximize a function of several (or many) variables when the variables are connected by other equations (constraints). This is handled by the method of Lagrange undetermined multipliers. The resulting condition for each variable <tex>x_j</tex> is below.

<center>

<br>

<tex>\left ( \frac{\partial f}{\partial x_j} \right )_0 - \lambda \left ( \frac{\partial g}{\partial x_j} \right )_0 = 0</tex>

<br>

</center>
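As a small sketch of the condition above, take the illustrative problem of maximizing <tex>f = x_1 x_2</tex> subject to <tex>g = x_1 + x_2 - 1 = 0</tex>. The Lagrange conditions give <tex>x_2 = \lambda</tex> and <tex>x_1 = \lambda</tex>, so <tex>x_1 = x_2 = 1/2</tex> with <tex>\lambda = 1/2</tex>; the code confirms stationarity with finite differences.

```python
f = lambda x1, x2: x1 * x2        # function to maximize (illustrative)
g = lambda x1, x2: x1 + x2 - 1    # constraint g = 0

x1 = x2 = lam = 0.5               # solution of the Lagrange conditions
h = 1e-6

# Central finite-difference partial derivatives at the stationary point
df_dx1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
dg_dx1 = (g(x1 + h, x2) - g(x1 - h, x2)) / (2 * h)
df_dx2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)
dg_dx2 = (g(x1, x2 + h) - g(x1, x2 - h)) / (2 * h)

print(df_dx1 - lam * dg_dx1, df_dx2 - lam * dg_dx2, g(x1, x2))  # all ~ 0
```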

Binomial Distribution of Large Numbers

This observation concerns the shape of the multinomial coefficient as a function of the <tex>N_j</tex>'s as the <tex>N_j</tex>'s become very large. Find the value of <tex>N_1</tex> for which the function below reaches its maximum value.

<center>

<br>

<tex>f(N_1) = \frac{N!}{N_1! \left (N - N_1 \right )!}</tex>

<br>

</center>

The function can be written in the form of a Gaussian curve.

<center>

<br>

<tex>f \left (N_1 \right ) = f \left (N_1^* \right ) \exp \left \{ - \frac{2 \left (N_1 - N_1^* \right )^2}{N} \right \}</tex>

<br>

</center>

The binomial coefficient peaks very strongly at the point <tex>N_1 = N_2 = N/2</tex>.
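The Gaussian form can be compared against the exact binomial coefficient (a numerical sketch, with <tex>N = 1000</tex> chosen for illustration):

```python
import math

N = 1000
N1_star = N // 2                    # the peak of the binomial coefficient
f_star = math.comb(N, N1_star)

for N1 in (510, 530, 550):
    exact = math.comb(N, N1)
    gauss = f_star * math.exp(-2 * (N1 - N1_star) ** 2 / N)
    print(N1, exact / gauss)        # ratios stay close to 1 near the peak
```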

Maximum Term Method

Under appropriate conditions the logarithm of a summation is essentially equal to the logarithm of the maximum term in the summation.
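A sketch of the maximum term method using <tex>\sum_{N_1} \frac{N!}{N_1! (N - N_1)!} = 2^N</tex>: the logarithm of the full sum and the logarithm of its single largest term already agree to a fraction of a percent at <tex>N = 1000</tex>, and the agreement improves as <tex>N</tex> grows.

```python
import math

N = 1000
terms = [math.comb(N, n1) for n1 in range(N + 1)]

log_sum = math.log(sum(terms))  # ln 2^N = N ln 2, the log of the whole sum
log_max = math.log(max(terms))  # log of the single largest term, at N1 = N/2
print(log_sum, log_max, (log_sum - log_max) / log_sum)  # relative gap well under 1%
```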
