The State of a System

The need for statistical physics

We're going to be concerned with real systems involving astronomically large numbers of constituent degrees of freedom (e.g., $10^{23}$). Clearly the phase space has a dimensionality of the order of $N$ (the number of particles) itself, and any attempt to describe the detailed dynamics of the system is hopeless. Statistical methods are therefore needed. One might think that this is because of a human limitation ("can't solve $10^{23}$ equations of motion — classical or quantum mechanical — in any reasonable time even with the largest computer", etc.) — but actually it's more than that. The point is that even if one could solve $10^{23}$ equations of motion, one would not have in readily accessible form the quantities of physical interest in such systems. These quantities are macroscopic, 'averaged' objects, such as the pressure of a gas, its internal energy, or the amount of energy one can extract from it to do useful work under prescribed conditions such as the temperature, volume, etc. These quantities of physical interest are in any case mean values, and one HAS to use statistical methods to extract them. How convenient, then, if one could do so without going through the step of solving the equations of motion for the constituent degrees of freedom in detail! This is the point of statistical physics (at least of equilibrium statistical mechanics).

The price one has to pay in order to bypass the detailed dynamics is the introduction of an additional postulate over and above the dynamical equations (whether the latter are classical, i.e., Newton's equations, or quantum mechanical, i.e., Schroedinger's equation, is immaterial). It turns out that practically a single additional postulate will do, and even this is quite a 'natural' one to make.

This fundamental postulate is called "the postulate of equal a priori probabilities for an isolated system in thermal equilibrium", and it is at the root of equilibrium statistical mechanics, from which thermodynamics, for instance, follows. We have a long way to go before we get to this, so let's take it step by step, with a number of examples along the way.

The strategy

In a nutshell, here's the basic point. "When a system with an extremely large number of degrees of freedom is in thermal (or thermodynamic) equilibrium, and the system is isolated from its surroundings — no energy or matter exchanged between system and environment — all the accessible microstates of the system are attained with equal probability." We realize that we have yet to define what is meant by "thermal equilibrium", "accessible microstate", etc. This will be done in due course. But first let's observe that if $10$ possibilities are equally probable, and these are independent possibilities, then the absolute probability of any one of the possibilities is just $1/10$. Thus the above postulate gives us a rule to calculate probabilities. And why do we need such a rule? Because one can find average values either by taking an arithmetic mean of $n$ observations under identical conditions, and letting $n\rightarrow\infty$, or by computing a weighted sum over the possible values, using the corresponding probabilities as the weight factors. The latter is the sensible way to proceed, provided we can find the probabilities concerned — which is precisely what statistical mechanics does.

Thus, if an isolated system in thermal equilibrium has a certain number $X$ of accessible microstates (these terms are yet to be defined), the a priori probability of the system being in any one of those states is simply $1/X$. The task thus reduces in principle to finding the number $X$. This is the motivation behind our laying so much emphasis on the counting of possible states in what follows.
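To make the two ways of averaging concrete, here is a minimal numerical sketch (written for this page; the 'system' below, with $X = 6$ equally probable microstates carrying arbitrary values of some observable, is purely hypothetical).

```python
import random

# Hypothetical example: X = 6 equally probable microstates, each with
# some value of an observable (the numbers below are arbitrary).
values = [1.0, 3.0, 4.0, 4.0, 7.0, 11.0]
X = len(values)

# Method 1: probability-weighted sum, with equal a priori weights 1/X.
weighted_mean = sum(v / X for v in values)

# Method 2: arithmetic mean of n observations made "under identical
# conditions", i.e. sampling the microstates uniformly at random.
n = 100_000
samples = [random.choice(values) for _ in range(n)]
arithmetic_mean = sum(samples) / n

print(weighted_mean)    # exactly 5.0
print(arithmetic_mean)  # close to 5.0, approaching it as n grows
```

The weighted sum requires only the probabilities, not a long run of observations; this is why a rule for computing probabilities is so valuable.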

Of course, a system may not be isolated, even if it is in thermal equilibrium. For example — and this is in fact the most common situation — it may be maintained at a constant temperature by being placed in thermal contact with a heat bath or reservoir or 'atmosphere'. What are the rules for calculating probabilities then? We shall work this out later on — the answer follows as a corollary of the fundamental postulate above.

For the present, therefore, we need to learn to count the number of possible states of a system. So we begin with the simplest example: a single particle moving freely in a box of volume $V$.

Particle in a Box

The state of a classical particle moving freely inside a box of volume $V$, with perfectly reflecting rigid walls, is specified by the position coordinates (say $x,y,z$) and momentum components (say $p_x,p_y,p_z$) of the particle at any given instant of time. It is thus a point in the six-dimensional phase space. However, the number of points in a continuous space is infinite, in fact uncountably infinite, and so the number of possible states appears to be infinite. Its reciprocal, the a priori probability we desire, would be zero. The resolution of this impasse comes, actually, from quantum mechanics — which we believe is the 'correct' mechanics of elementary particles, classical mechanics being an approximation to it under certain circumstances. We don't want to treat the particle quantum mechanically as yet; but there is a simple way of incorporating the effects of QM in our counting of possible states of the particle. We know from the uncertainty principle of Heisenberg that the position component $x$ and the conjugate momentum component $p_x$ of the particle cannot be simultaneously determined with arbitrarily high precision: the product of uncertainties $(\Delta x)(\Delta p_x)$ must be at least of the order of Planck's constant $h$ in any state of the particle. The question of specifying a precise point in the $(x, p_x)$ plane to describe the state of the particle therefore does not arise. Similarly, the uncertainty products $(\Delta y)(\Delta p_y)$ and $(\Delta z)(\Delta p_z)$ must each be at least of the order of $h$. Thus, owing to this fundamental property of Nature called the uncertainty principle, the state of the particle cannot be specified by a mathematical point in phase space as suggested by classical mechanics. Rather, the best we can do is to represent it by a 'volume element' in phase space of size $(h)(h)(h) = h^3$. No finer resolution within this basic cell is possible. Therefore: if the system has a volume $Q$ of phase space accessible to it under prescribed conditions, the number of states available to it is not infinite (the number of points in $Q$), but rather $Q/h^3$.

The generalization to an arbitrary system with $f$ degrees of freedom is immediate. The phase space is $2f$-dimensional, with $f$ generalized coordinates and $f$ generalized momenta. The 'fuzziness' in each conjugate pair $(q_k,p_k)$ is given by $(\Delta q_k)(\Delta p_k)\sim h$, where $k = 1,2,\ldots,f$. Hence, if $Q$ is the total volume in phase space available to the system under the prescribed circumstances, the number of states available to the system is $Q/h^f$. (Observe that this ratio is a dimensionless quantity, as it should be!) In general, $Q$ is so large compared to $h^f$ that the above ratio is very large, and so we needn't lose sleep over whether the ratio is exactly equal to an integer or not.
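As a quick illustration of this counting rule (a one-dimensional example added here for concreteness, not part of the original argument), consider a single particle confined to a segment of length $L$, with the magnitude of its momentum bounded by some $p_0$. Here $f = 1$, the accessible region of phase space is a rectangle of area $Q = L \times 2p_0$ (the momentum runs from $-p_0$ to $+p_0$), and the number of states is

\begin{align} \frac{Q}{h} = \frac{2Lp_0}{h}\,. \end{align}

For a macroscopic $L$ and a thermal value of $p_0$, this ratio is enormous, which is the typical situation referred to above.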

Of course, there may arise situations in which the number of states available to a system is a rather small number, in which case the above ‘semi-classical’ arguments won't hold good. For example, a system at the absolute zero of temperature is definitely in its ground state or lowest energy state; and if this is unique, the number of states available to the system is just unity.

Now, back to our particle in a box of volume $V$. We must specify the total energy of the particle. Suppose this is $\varepsilon$. In practice, we may be able to determine the energy only to within an accuracy $\delta \varepsilon$. The volume in phase space available to the particle is then

(1)
\begin{align} \int\!\!\int\!\!\int_{\text{inside box}} \int\!\!\int\!\!\int_{\varepsilon\leq \text{Energy}\leq \varepsilon + \delta \varepsilon}\ dx\ dy\ dz\ dp_x\ dp_y\ dp_z \end{align}

But the particle is free inside the box, so its energy is entirely kinetic, given by $(p_x^2+p_y^2+p_z^2)/(2m) =p^2/(2m)$, where $p$ is the magnitude of its momentum. The integrations over the coordinates may therefore be done at once. It is also natural to use spherical polar coordinates in momentum space, because the region of integration depends only on the magnitude of the momentum (and the integrand is unity, a constant). Doing the angular integrations over the direction of the momentum, we are left with

(2)
\begin{align} (V)(4\pi)\int_{\sqrt{2m\varepsilon}}^{\sqrt{2m(\varepsilon+\delta \varepsilon)}}\!\!p^2dp = \frac{4\pi V}{3}\ \Big[ \{2m(\varepsilon+\delta \varepsilon)\}^{3/2} - \{2m\varepsilon\}^{3/2}\Big] \simeq 2\pi V (2m)^{3/2}\varepsilon^{1/2}\delta\varepsilon \end{align}

where the last step follows on expanding $\{2m(\varepsilon+\delta\varepsilon)\}^{3/2}$ to first order in the small quantity $\delta\varepsilon/\varepsilon$. The number of states of the particle in the energy range $(\varepsilon,\varepsilon+\delta \varepsilon)$ is therefore given by

(3)
\begin{align} \omega(\varepsilon)= 2\pi V (2m)^{3/2}\varepsilon^{1/2}\delta\varepsilon/h^3 \ . \end{align}
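To get a feeling for the size of $\omega(\varepsilon)$, here is a short numerical sketch (the specific inputs — a helium-like atom in a 1 cm$^3$ box at a room-temperature thermal energy, with a 0.1% energy resolution — are illustrative choices made for this page, not taken from the notes).

```python
import math

h = 6.626e-34    # Planck's constant (J s)
kB = 1.381e-23   # Boltzmann's constant (J/K)

m = 6.64e-27     # mass of a helium atom (kg) -- illustrative choice
V = 1.0e-6       # volume of the box: 1 cm^3, in m^3
eps = 1.5 * kB * 300.0   # a typical thermal energy at T = 300 K
d_eps = 1.0e-3 * eps     # energy resolution: 0.1% of eps

# Equation (3): omega = 2 pi V (2m)^(3/2) eps^(1/2) d_eps / h^3
omega = 2 * math.pi * V * (2 * m) ** 1.5 * math.sqrt(eps) * d_eps / h ** 3
print(f"omega ~ {omega:.2e}")   # of order 1e22 for these inputs
```

Even for a single particle and a tiny energy window, the number of states is of order $10^{22}$, bearing out the remark above that the ratio $Q/h^f$ is generally very large.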

The semi-classical density of states

The factor multiplying $\delta\varepsilon$ in the above gives the number of states per unit energy interval about the energy value $\varepsilon$. It is called the density of states at $\varepsilon$, and is denoted by $\rho(\varepsilon)$. There's an alternative (and of course equivalent) way of looking at the above. Suppose $\phi(\varepsilon)$ is the number of states of the system with energies less than or equal to some given value $\varepsilon$. Then $\omega(\varepsilon)$ is, by definition, equal to

(4)
\begin{align} \omega(\varepsilon)=\phi(\varepsilon+\delta\varepsilon) - \phi(\varepsilon) \equiv \delta \phi(\varepsilon)\ . \end{align}

Hence $\delta \phi(\varepsilon)=\rho(\varepsilon)\,\delta \varepsilon$, so that in the limit $\delta\varepsilon \rightarrow 0$,

(5)
\begin{align} \rho(\varepsilon)= \frac{d\phi(\varepsilon)}{d\varepsilon}\,. \end{align}

This is a very useful formula for computing the density of states. The crucial point is that the density of states of a free particle in three dimensions grows with its energy like $\varepsilon^{1/2}$. This $\varepsilon^{1/2}$ dependence will play a crucial role in the behaviour of the ideal gas, as we shall see later on in the course. For completeness, we provide the semi-classical expression for $\phi(\varepsilon)$ for a particle of mass $m$ moving in a cubical box of side $L$:

(6)
\begin{align} \phi_{\rm s.c.}(\varepsilon) = \frac{4\pi}{3} \left(\frac{2mL^2\varepsilon}{h^2}\right)^{3/2} = \frac{\pi}{6} (\varepsilon/\varepsilon_0)^{3/2}\,, \end{align}

where $\varepsilon_0 \equiv h^2/8mL^2$, and the subscript s.c. denotes the fact that this is a semi-classical computation, as opposed to the exact quantum computation that we will do next.
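As a small consistency check (a sketch written for this page, working in the dimensionless variable $x = \varepsilon/\varepsilon_0$), one can verify Eq. (5) for the expression (6) by comparing a finite-difference derivative of $\phi_{\rm s.c.}$ against the analytic result $d\phi_{\rm s.c.}/dx = (\pi/4)\sqrt{x}$.

```python
import math

def phi_sc(x):
    """Semi-classical count of states, Eq. (6), with x = eps/eps0."""
    return (math.pi / 6.0) * x ** 1.5

def rho_sc(x):
    """Analytic derivative d(phi_sc)/dx = (pi/4) * sqrt(x), per Eq. (5)."""
    return (math.pi / 4.0) * math.sqrt(x)

# Compare the finite difference (phi(x+dx) - phi(x))/dx with the
# analytic density of states at a representative point.
x, dx = 1000.0, 1.0e-3
finite_difference = (phi_sc(x + dx) - phi_sc(x)) / dx
print(finite_difference, rho_sc(x))   # the two should agree closely
```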

The quantum density of states

This $\varepsilon^{1/2}$ behaviour remains valid in a fully quantum mechanical treatment. In QM, the state of the particle is not specifiable by the values of its coordinates and momentum components at any given instant of time. Rather, it is specified by the wavefunction corresponding to each definite energy level, labelled by a set of quantum numbers. The set of quantum numbers plays the role of the degrees of freedom in the classical counterpart. For the particle in a box (in three dimensions), the energy levels are characterised by three quantum numbers $n_1$, $n_2$ and $n_3$, each of which can take on the values $1,2,\ldots$ ad inf. The wavefunction corresponding to the state $(n_1,n_2,n_3)$ is given (for a cubical box of side $L$) by

(7)
\begin{align} \psi_{n_1,n_2,n_3}(x,y,z) = (2/L)^{3/2} \sin (n_1\pi x/L) \sin (n_2\pi y/L) \sin (n_3\pi z/L)\,. \end{align}

The energy corresponding to this state turns out to be

(8)
\begin{align} E(n_1,n_2,n_3) = \Big(h^2/8mL^2\Big)\ \Big(n_1^2+n_2^2+n_3^2\Big) = \varepsilon_0 \ \Big(n_1^2+n_2^2+n_3^2\Big)\,. \end{align}

To find $\phi_{\rm q}(\varepsilon)$ (the subscript q here denotes quantum), the number of states with energies $\leq \varepsilon$, we therefore have to count the number of positive integral lattice points in the $(n_1,n_2,n_3)$ space contained within a sphere of $(\textrm{radius})^2 = \varepsilon/\varepsilon_0$. Since the radius is thus proportional to $\varepsilon^{1/2}$, the volume of the relevant region (the positive octant of the sphere, since each $n_i$ is a positive integer) is proportional to $\varepsilon^{3/2}$. But, since successive values of $n_1$ (or $n_2$, or $n_3$) differ by unity, this volume is essentially the number of lattice points desired. Indeed, the octant volume is $\frac{1}{8}\cdot\frac{4\pi}{3}(\varepsilon/\varepsilon_0)^{3/2} = \frac{\pi}{6}(\varepsilon/\varepsilon_0)^{3/2}$, in agreement with the semi-classical expression (6). Hence $\phi_{\rm q}(\varepsilon)\sim \varepsilon^{3/2}$, so that the density of states $\rho(\varepsilon)\sim \varepsilon^{1/2}$, as before. We note that the number of states available actually increases in a discrete manner with the energy; the concept of a smooth density of states (a continuous function like $\varepsilon^{1/2}$) is actually meaningful only for sufficiently large values of $\varepsilon$, when the number of levels increases so rapidly that one can speak of an essentially continuous distribution of states with the energy. In the two plots below, we plot $\phi_{\rm q}(\varepsilon)$ and $\phi_{\rm s.c.}(\varepsilon)$ versus $\varepsilon/\varepsilon_0$, as well as the percentage error in the semi-classical expression with respect to the exact quantum expression.

fullplot200.jpg

Figure 1: The plot focuses on large values of $\varepsilon/\varepsilon_0$ (up to $40000$) where one can observe the good agreement between the two expressions. The error is of the order of $1.1\%$ at $40000$.

fullplot5.jpg

Figure 2: The second plot focuses on small values of $\varepsilon/\varepsilon_0$ (up to $30$). In this plot, one can see the non-smooth nature of $\phi_{\rm q}$ and the clear mismatch with the semi-classical expression. The error is as high as $485\%$ and falls to about $43\%$ at $30$. This mismatch is generic and hence the semi-classical expression is valid only at large quantum numbers.
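The comparison shown in the two figures can be reproduced by a brute-force count of the lattice points; here is a minimal sketch (written for this page, not part of the original notes), again in the dimensionless variable $x = \varepsilon/\varepsilon_0$.

```python
import math

def phi_q(x):
    """Exact quantum count: number of positive-integer triples (n1, n2, n3)
    with n1^2 + n2^2 + n3^2 <= x, where x = eps/eps0 (an integer here)."""
    count = 0
    n_max = math.isqrt(x)
    for n1 in range(1, n_max + 1):
        for n2 in range(1, n_max + 1):
            r = x - n1 * n1 - n2 * n2
            if r >= 1:
                count += math.isqrt(r)  # n3 runs from 1 to floor(sqrt(r))
    return count

def phi_sc(x):
    """Semi-classical estimate, Eq. (6): (pi/6) x^(3/2)."""
    return (math.pi / 6.0) * x ** 1.5

# Small x shows the large mismatch of Figure 2; large x shows the
# percent-level agreement of Figure 1.
for x in (30, 1000, 40000):
    q, sc = phi_q(x), phi_sc(x)
    print(x, q, round(sc), f"{100 * (sc - q) / q:+.1f}%")
```

The double loop costs only $O(x)$ operations, so even $x = 40000$ (the range of Figure 1) runs in a fraction of a second.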
