































The Boltzmann Equation and The Quantum Boltzmann Equation, Transport with Collisions.
The purpose of this section is to lay down the foundations of kinetic theory, starting from the Hamiltonian description of 10^{23} particles, and ending with the Navier-Stokes equation of fluid dynamics. Our main tool in this task will be the Boltzmann equation. This will allow us to provide derivations of the transport properties that we sketched in the previous section, but without the more egregious inconsistencies that crept into our previous attempt. But, perhaps more importantly, the Boltzmann equation will also shed light on the deep issue of how irreversibility arises from time-reversible classical mechanics.
2.1 From Liouville to BBGKY
Our starting point is simply the Hamiltonian dynamics for N identical point particles. Of course, as usual in statistical mechanics, here N is ridiculously large: N ∼ O(10^{23}) or something similar. We will take the Hamiltonian to be of the form
H = \sum_{i=1}^N \frac{\vec{p}_i^{\,2}}{2m} + \sum_{i=1}^N V(\vec{r}_i) + \sum_{i<j} U(\vec{r}_i - \vec{r}_j)      (2.1)
The Hamiltonian contains an external force \vec{F} = -\nabla V that acts equally on all particles. There are also two-body interactions between particles, captured by the potential energy U(\vec{r}_i - \vec{r}_j). At some point in our analysis (around Section 2.2.3) we will need to assume that this potential is short-ranged, meaning that U(r) ≈ 0 for r ≫ d where, as in the last Section, d is the atomic distance scale.
Hamilton's equations are

\frac{\partial \vec{p}_i}{\partial t} = -\frac{\partial H}{\partial \vec{r}_i} \quad\text{and}\quad \frac{\partial \vec{r}_i}{\partial t} = \frac{\partial H}{\partial \vec{p}_i}      (2.2)
Our interest in this section will be in the evolution of a probability distribution, f(\vec{r}_i, \vec{p}_i; t), over the 6N-dimensional phase space. This function tells us the probability that the system will be found in the vicinity of the point (\vec{r}_i, \vec{p}_i). As with all probabilities, the function is normalized as
\int dV\, f(\vec{r}_i, \vec{p}_i; t) = 1 \quad\text{with}\quad dV = \prod_{i=1}^N d^3r_i\, d^3p_i
Furthermore, because probability is locally conserved, it must obey a continuity equation: any change of probability in one part of phase space must be compensated by a flow into neighbouring regions. But since we're now thinking in terms of phase space, the "∇" term in the continuity equation includes both ∂/∂\vec{r}_i and ∂/∂\vec{p}_i and, correspondingly, the velocity vector in phase space is (\dot{\vec{r}}_i, \dot{\vec{p}}_i). The continuity equation of the probability distribution is then
\frac{\partial f}{\partial t} + \frac{\partial}{\partial \vec{r}_i}\cdot\left(\dot{\vec{r}}_i f\right) + \frac{\partial}{\partial \vec{p}_i}\cdot\left(\dot{\vec{p}}_i f\right) = 0
where we’re using the convention that we sum over the repeated index i = 1,... , N. But, using Hamilton’s equations (2.2), this becomes
\frac{\partial f}{\partial t} + \frac{\partial}{\partial \vec{r}_i}\cdot\left(\frac{\partial H}{\partial \vec{p}_i} f\right) - \frac{\partial}{\partial \vec{p}_i}\cdot\left(\frac{\partial H}{\partial \vec{r}_i} f\right) = 0

\Rightarrow\quad \frac{\partial f}{\partial t} + \frac{\partial f}{\partial \vec{r}_i}\cdot\frac{\partial H}{\partial \vec{p}_i} - \frac{\partial f}{\partial \vec{p}_i}\cdot\frac{\partial H}{\partial \vec{r}_i} = 0

where the second line follows because the terms with mixed second derivatives of H cancel between the two brackets.
This final equation is Liouville's equation. It is the statement that probability doesn't change as you follow it along any trajectory in phase space, as can be seen by writing the Liouville equation as a total derivative,
\frac{df}{dt} = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial \vec{r}_i}\cdot\dot{\vec{r}}_i + \frac{\partial f}{\partial \vec{p}_i}\cdot\dot{\vec{p}}_i = 0
To get a feel for how probability distributions evolve, one often invokes the closely related Liouville's theorem^2. This is the statement that if you follow some region of phase space under Hamiltonian evolution, then its shape can change but its volume remains the same. This means that the probability distribution on phase space acts like an incompressible fluid. Suppose, for example, that it's a constant, f, over some region of phase space and zero everywhere else. Then the distribution can't spread out over a larger volume, lowering its value. Instead, it must always be f over some region of phase space. The shape and position of this region can change, but not its volume.
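As a quick numerical aside (not part of the derivation itself), Liouville's theorem is easy to see in a toy example. The Python sketch below, with an assumed one-dimensional harmonic oscillator and assumed parameter values, evolves the corners of a small phase-space patch with a symplectic integrator: the patch shears, but its area stays fixed to machine precision.

import numpy as np

m, k, dt = 1.0, 1.0, 0.01   # assumed toy values: mass, spring constant, time step

def step(q, p):
    # symplectic Euler: kick the momentum with the force at q, then drift q
    p = p - dt * k * q
    q = q + dt * p / m
    return q, p

# corners of a small patch of phase space (q, p); the map is linear, so the
# evolved corners still bound the evolved patch exactly
corners = np.array([[1.0, 0.0], [1.1, 0.0], [1.1, 0.1], [1.0, 0.1]])

def area(c):
    # shoelace formula for the area of the quadrilateral with vertices c
    x, y = c[:, 0], c[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

print("initial area:", area(corners))
for _ in range(5000):
    corners = np.array([step(q, p) for q, p in corners])
print("final area:  ", area(corners))   # unchanged: the flow is incompressible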
The Liouville equation is often written using the Poisson bracket,

\{A, B\} \equiv \frac{\partial A}{\partial \vec{r}_i}\cdot\frac{\partial B}{\partial \vec{p}_i} - \frac{\partial A}{\partial \vec{p}_i}\cdot\frac{\partial B}{\partial \vec{r}_i}
With this notation, Liouville's equation becomes simply

\frac{\partial f}{\partial t} = \{H, f\}
^2 A fuller discussion of Hamiltonian mechanics and Liouville's theorem can be found in Section 4 of the classical dynamics notes: http://www.damtp.cam.ac.uk/user/tong/dynamics.html.
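It is worth checking explicitly a fact we will use again later: any distribution that depends on phase space only through H Poisson-commutes with H, and so is untouched by the streaming. A small symbolic sketch (an illustrative aside for a single particle in one dimension with an arbitrary potential, not part of the notes) verifies this:

import sympy as sp

r, p, m = sp.symbols('r p m', positive=True)
V = sp.Function('V')          # arbitrary external potential
g = sp.Function('g')          # arbitrary function of one variable
H = p**2 / (2 * m) + V(r)

def poisson(A, B):
    # one-particle version of the Poisson bracket defined above
    return sp.diff(A, r) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, r)

f = g(H)                      # a distribution that depends only on the energy
print(sp.simplify(poisson(H, f)))   # 0: such an f is stationary under streaming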
to limit our ambition. We'll focus not on the probability distribution for all N particles but instead on the one-particle distribution function. This captures the expected number of particles lying at some point (\vec{r}, \vec{p}). It is defined by
f_1(\vec{r}, \vec{p}; t) = N \int \prod_{i=2}^N d^3r_i\, d^3p_i\, f(\vec{r}, \vec{r}_2, \ldots, \vec{r}_N, \vec{p}, \vec{p}_2, \ldots, \vec{p}_N; t)
Although we seem to have singled out the first particle for special treatment in the above expression, this isn't really the case since all N of our particles are identical. This is also reflected in the factor of N which sits out front and ensures that f_1 is normalized as

\int d^3r\, d^3p\, f_1(\vec{r}, \vec{p}; t) = N      (2.6)
For many purposes, the function f_1 is all we really need to know about a system. In particular, it captures many of the properties that we met in the previous chapter. For example, the average density of particles in real space is simply

n(\vec{r}; t) = \int d^3p\, f_1(\vec{r}, \vec{p}; t)      (2.7)
The average velocity of particles is

\vec{u}(\vec{r}; t) = \int d^3p\, \frac{\vec{p}}{m}\, f_1(\vec{r}, \vec{p}; t)      (2.8)
and the energy flux is

\vec{\mathcal{E}}(\vec{r}; t) = \int d^3p\, \frac{\vec{p}}{m}\, E(\vec{p})\, f_1(\vec{r}, \vec{p}; t)      (2.9)
where we usually take E(\vec{p}) = p^2/2m. All of these quantities (or at least close relations) will be discussed in some detail in Section 2.4.
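As an illustrative aside (not from the notes), the moments (2.7)-(2.9) are easy to evaluate numerically for a given f_1. The sketch below assumes a drifting Maxwell-Boltzmann form for f_1 with toy values of the density, drift and temperature, and extracts the density, the particle flux and the energy flux on a momentum grid; dividing the particle flux by n recovers the drift velocity.

import numpy as np

m, beta, n0 = 1.0, 1.0, 1.0                      # assumed toy values
u0 = np.array([0.5, 0.0, 0.0])                   # assumed drift velocity

p = np.linspace(-8, 8, 81)
dp3 = (p[1] - p[0])**3
PX, PY, PZ = np.meshgrid(p, p, p, indexing='ij')
P = np.stack([PX, PY, PZ])                       # momentum grid, shape (3, N, N, N)

# a drifting Maxwell-Boltzmann f_1, normalised so its momentum integral is n0
f1 = n0 * (beta / (2 * np.pi * m))**1.5 * np.exp(
        -beta * np.sum((P - m * u0[:, None, None, None])**2, axis=0) / (2 * m))

n = np.sum(f1) * dp3                                                   # (2.7)
flux = np.array([np.sum(P[i] / m * f1) for i in range(3)]) * dp3       # (2.8)
E = np.sum(P**2, axis=0) / (2 * m)                                     # E(p) = p^2/2m
eflux = np.array([np.sum(P[i] / m * E * f1) for i in range(3)]) * dp3  # (2.9)

print(n)           # ~ n0
print(flux / n)    # ~ u0
print(eflux)       # energy carried along the drift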
Ideally we'd like to derive an equation governing f_1. To see how it changes with time, we can simply calculate:

\frac{\partial f_1}{\partial t} = N \int \prod_{i=2}^N d^3r_i\, d^3p_i\, \frac{\partial f}{\partial t} = N \int \prod_{i=2}^N d^3r_i\, d^3p_i\, \{H, f\}
Using the Hamiltonian given in (2.1), this becomes

\frac{\partial f_1}{\partial t} = N \int \prod_{i=2}^N d^3r_i\, d^3p_i \left[ -\sum_{j=1}^N \frac{\vec{p}_j}{m}\cdot\frac{\partial f}{\partial \vec{r}_j} + \sum_{j=1}^N \frac{\partial V(\vec{r}_j)}{\partial \vec{r}_j}\cdot\frac{\partial f}{\partial \vec{p}_j} + \sum_{j=1}^N \sum_{k<l} \frac{\partial U(\vec{r}_k - \vec{r}_l)}{\partial \vec{r}_j}\cdot\frac{\partial f}{\partial \vec{p}_j} \right]
Now, whenever j = 2, \ldots, N, we can always integrate by parts to move the derivatives away from f and onto the other terms. And, in each case, the result is simply zero because when the derivative is with respect to \vec{r}_j, the other terms depend only on the momenta, and vice-versa. We're left only with the terms that involve derivatives with respect to \vec{r}_1 and \vec{p}_1 because we can't integrate these by parts. Let's revert to our previous notation and call \vec{r}_1 \equiv \vec{r} and \vec{p}_1 \equiv \vec{p}. We have
\frac{\partial f_1}{\partial t} = N \int \prod_{i=2}^N d^3r_i\, d^3p_i \left[ -\frac{\vec{p}}{m}\cdot\frac{\partial f}{\partial \vec{r}} + \frac{\partial V(\vec{r})}{\partial \vec{r}}\cdot\frac{\partial f}{\partial \vec{p}} + \sum_{k=2}^N \frac{\partial U(\vec{r} - \vec{r}_k)}{\partial \vec{r}}\cdot\frac{\partial f}{\partial \vec{p}} \right]

= \{H_1, f_1\} + N \int \prod_{i=2}^N d^3r_i\, d^3p_i\, \sum_{k=2}^N \frac{\partial U(\vec{r} - \vec{r}_k)}{\partial \vec{r}}\cdot\frac{\partial f}{\partial \vec{p}}      (2.10)
where we have defined the one-particle Hamiltonian

H_1 = \frac{p^2}{2m} + V(\vec{r})      (2.11)
Notice that H_1 includes the external force V acting on the particle, but it knows nothing about the interaction with the other particles. All of that information is included in the last term with U(\vec{r} - \vec{r}_k). We see that the evolution of the one-particle distribution function is described by a Liouville-like equation, together with an extra term. We write

\frac{\partial f_1}{\partial t} = \{H_1, f_1\} + \left(\frac{\partial f_1}{\partial t}\right)_{\rm coll}      (2.12)
The first term is sometimes referred to as the streaming term. It tells you how the particles move in the absence of collisions. The second term, known as the collision integral, is given by the second term in (2.10). In fact, because all particles are the same, each of the (N - 1) terms in \sum_{k=2}^N in (2.10) is identical and we can write

\left(\frac{\partial f_1}{\partial t}\right)_{\rm coll} = N(N-1) \int d^3r_2\, d^3p_2\, \frac{\partial U(\vec{r} - \vec{r}_2)}{\partial \vec{r}} \cdot \frac{\partial}{\partial \vec{p}} \int \prod_{i=3}^N d^3r_i\, d^3p_i\, f(\vec{r}, \vec{r}_2, \ldots, \vec{p}, \vec{p}_2, \ldots; t)      (2.13)
But now we've got something of a problem. The collision integral can't be expressed in terms of the one-particle distribution function. And that's not really surprising. As the name suggests, the collision integral captures the interactions – or collisions – of one particle with another. Yet f_1 contains no information about where any of the other particles are in relation to the first. However, some of that information is contained in the two-particle distribution function,
f_2(\vec{r}_1, \vec{r}_2, \vec{p}_1, \vec{p}_2; t) \equiv N(N-1) \int \prod_{i=3}^N d^3r_i\, d^3p_i\, f(\vec{r}_1, \vec{r}_2, \ldots, \vec{p}_1, \vec{p}_2, \ldots; t)
However, there is an advantage in working with the hierarchy of equations (2.14) because they isolate the interesting, simple variables, namely f_1 and the other lower f_n. This means that the equations are in a form that is ripe to start implementing various approximations. Given a particular problem, we can decide which terms are important and, ideally, which terms are so small that they can be ignored, truncating the hierarchy to something manageable. Exactly how you do this depends on the problem at hand. Here we explain the simplest, and most useful, of these truncations: the Boltzmann equation.
2.2 The Boltzmann Equation
“Elegance is for tailors” Ludwig Boltzmann
In this section, we explain how to write down a closed equation for f_1 alone. This will be the famous Boltzmann equation. The main idea that we will use is that there are two time scales in the problem. One is the time between collisions, τ, known as the scattering time or relaxation time. The second is the collision time, τ_coll, which is roughly the time it takes for the process of collision between particles to occur. In situations where
\tau \gg \tau_{\rm coll}      (2.15)
we should expect that, for much of the time, f_1 simply follows its Hamiltonian evolution with occasional perturbations by the collisions. This, for example, is what happens for the dilute gas. And this is the regime we will work in from now on.
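To get a feel for the separation of scales (2.15), here is a rough order-of-magnitude estimate in Python; every number is an assumed, typical dilute-gas value (roughly room temperature and atmospheric pressure), not something taken from the notes.

import math

d = 3e-10      # atomic scale / range of U, in metres (assumed)
n = 2.5e25     # number density in m^-3 (assumed, roughly 1 atm at 300 K)
v = 500.0      # typical thermal speed in m/s (assumed)

sigma = math.pi * d**2        # geometric cross-section
mfp = 1.0 / (n * sigma)       # mean free path ~ 1/(n sigma)
tau = mfp / v                 # time between collisions
tau_coll = d / v              # duration of a single collision

print(f"tau      ~ {tau:.1e} s")
print(f"tau_coll ~ {tau_coll:.1e} s")
print(f"ratio    ~ {tau / tau_coll:.0f}")   # >> 1, so (2.15) comfortably holds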
At this stage, there is a right way and a less-right way to proceed. The right way is to derive the Boltzmann equation starting from the BBGKY hierarchy. And we will do this in Section 2.2.3. However, as we shall see, it’s a little fiddly. So instead we’ll start by taking the less-right option which has the advantage of getting the same answer but in a much easier fashion. This option is to simply guess what form the Boltzmann equation has to take.
2.2.1 Motivating the Boltzmann Equation
We’ve already caught our first glimpse of the Boltzmann equation in (2.12),
\frac{\partial f_1}{\partial t} = \{H_1, f_1\} + \left(\frac{\partial f_1}{\partial t}\right)_{\rm coll}      (2.16)
But, of course, we don't yet have an expression for the collision integral in terms of f_1. It's clear from the definition (2.13) that the second term represents the change in momenta due to two-particle scattering. When τ ≫ τ_coll, the collisions occur occasionally, but abruptly. The collision integral should reflect the rate at which these collisions occur.
Suppose that our particle sits at (\vec{r}, \vec{p}) in phase space and collides with another particle at (\vec{r}, \vec{p}_2). Note that we're assuming here that collisions are local in space so that the two particles sit at the same point. These particles can collide and emerge with momenta \vec{p}_1' and \vec{p}_2'. We'll define the rate for this process to occur to be
{\rm Rate} = \omega(\vec{p}, \vec{p}_2\,|\,\vec{p}_1', \vec{p}_2')\, f_2(\vec{r}, \vec{r}, \vec{p}, \vec{p}_2)\, d^3p_2\, d^3p_1'\, d^3p_2'      (2.17)
(Here we've dropped the explicit t dependence of f_2 only to keep the notation down.) The scattering function ω contains the information about the dynamics of the process. It looks as if this is a new quantity which we've introduced into the game. But, using standard classical mechanics techniques, one can compute ω for a given inter-atomic potential U(\vec{r}). (It is related to the differential cross-section; we will explain how to do this when we do things better in Section 2.2.3.) For now, note that the rate is proportional to the two-body distribution function f_2 since this tells us the chance that two particles originally sit in (\vec{r}, \vec{p}) and (\vec{r}, \vec{p}_2).
We'd like to focus on the distribution of particles with some specified momentum \vec{p}. Two particles with momenta \vec{p} and \vec{p}_2 can be transformed into two particles with momenta \vec{p}_1' and \vec{p}_2'. Since both momentum and energy are conserved in the collision, we have
\vec{p} + \vec{p}_2 = \vec{p}_1' + \vec{p}_2'      (2.18)
p^2 + p_2^2 = p_1'^2 + p_2'^2      (2.19)
There is actually an assumption that is hiding in these equations. In general, we’re considering particles in an external potential V. This provides a force on the particles which, in principle, could mean that the momentum and kinetic energy of the particles is not the same before and after the collision. To eliminate this possibility, we will assume that the potential only varies appreciably over macroscopic distance scales, so that it can be neglected on the scale of atomic collisions. This, of course, is entirely reasonable for most external potentials such as gravity or electric fields. Then (2.18) and (2.19) continue to hold.
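For concreteness, outgoing momenta consistent with (2.18) and (2.19) are easy to generate: conserve the total momentum and rotate the relative momentum to a new direction of the same magnitude. The Python sketch below (an aside; it assumes equal masses and ignores the external potential during the collision, as discussed above) does exactly this and checks the two conservation laws.

import numpy as np

rng = np.random.default_rng(0)

def collide(p1, p2):
    # elastic collision of two equal-mass particles
    P = p1 + p2                       # total momentum, conserved
    q = 0.5 * (p1 - p2)               # relative momentum in the centre-of-mass frame
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)            # random unit vector: the new relative direction
    q_out = np.linalg.norm(q) * d     # same magnitude, new direction
    return 0.5 * P + q_out, 0.5 * P - q_out

p1, p2 = np.array([1.0, 0.0, 0.0]), np.array([-0.2, 0.5, 0.0])
p1p, p2p = collide(p1, p2)

print(np.allclose(p1 + p2, p1p + p2p))                        # (2.18) holds
print(np.isclose(p1 @ p1 + p2 @ p2, p1p @ p1p + p2p @ p2p))   # (2.19) holds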
While collisions can deflect particles out of a state with momentum \vec{p} and into a different momentum, they can also deflect particles into a state with momentum \vec{p}.
at what we've actually assumed. Looking at (2.21), we can see that we have taken the rate of collisions to be proportional to f_2(\vec{r}, \vec{r}, \vec{p}_1, \vec{p}_2) where p_1 and p_2 are the momenta of the particles before the collision. That means that if we substitute (2.22) into (2.21), we are really assuming that the velocities are uncorrelated before the collision. And that sounds quite reasonable: you could imagine that during the collision process the velocities of the two particles become correlated. But there is then a long time, τ, before one of these particles undergoes another collision. Moreover, this next collision is typically with a completely different particle and it seems entirely plausible that the velocity of this new particle has nothing to do with the velocity of the first. Nonetheless, the fact that we've assumed that velocities are uncorrelated before the collision rather than after has, rather slyly, introduced an arrow of time into the game. And this has dramatic implications which we will see in Section 2.3 where we derive the H-theorem.
Finally, we may write down a closed expression for the evolution of the one-particle distribution function given by
\frac{\partial f_1}{\partial t} = \{H_1, f_1\} + \left(\frac{\partial f_1}{\partial t}\right)_{\rm coll}      (2.23)

with the collision integral

\left(\frac{\partial f_1}{\partial t}\right)_{\rm coll} = \int d^3p_2\, d^3p_1'\, d^3p_2'\, \omega(\vec{p}_1', \vec{p}_2'\,|\,\vec{p}, \vec{p}_2) \left[ f_1(\vec{r}, \vec{p}_1')\, f_1(\vec{r}, \vec{p}_2') - f_1(\vec{r}, \vec{p})\, f_1(\vec{r}, \vec{p}_2) \right]      (2.24)
This is the Boltzmann equation. It's not an easy equation to solve! It's a differential equation on the left, an integral on the right, and non-linear. You may not be surprised to hear that exact solutions are not that easy to come by. We'll see what we can do.
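To see the qualitative behaviour the collision integral encodes, relaxation towards equilibrium, it can help to play with a much cruder stand-in. The Python sketch below does not implement (2.24); it replaces the collision integral by the relaxation-time (BGK) caricature -(f_1 - f_1^eq)/τ for a spatially homogeneous gas in one velocity dimension, with all parameter values assumed. An arbitrary initial distribution relaxes to the Maxwellian sharing its density, mean velocity and temperature.

import numpy as np

m, tau, dt = 1.0, 1.0, 0.01            # assumed mass, relaxation time, time step
v = np.linspace(-8, 8, 400)
dv = v[1] - v[0]

# an arbitrary (bimodal) initial velocity distribution
f = np.exp(-(v - 2.0)**2) + 0.5 * np.exp(-(v + 3.0)**2 / 0.5)

def maxwellian(f):
    # the Maxwellian with the same density, mean velocity and temperature as f
    n = np.sum(f) * dv
    u = np.sum(v * f) * dv / n
    T = m * np.sum((v - u)**2 * f) * dv / n        # k_B T in these units
    return n * np.sqrt(m / (2 * np.pi * T)) * np.exp(-m * (v - u)**2 / (2 * T))

for _ in range(1000):
    f = f + dt * (maxwellian(f) - f) / tau         # df/dt = -(f - f_eq)/tau

print(np.max(np.abs(f - maxwellian(f))))           # tiny: f has relaxed to a Maxwellian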
2.2.2 Equilibrium and Detailed Balance
Let's start our exploration of the Boltzmann equation by revisiting the question of the equilibrium distribution obeying ∂f^eq/∂t = 0. We already know that {f, H_1} = 0 if f is given by any function of the energy or, indeed, any function that Poisson commutes with H. For clarity, let's restrict to the case with vanishing external force, so V(\vec{r}) = 0. Then, if we look at the Liouville equation alone, any function of momentum is an equilibrium distribution. But what about the contribution from the collision integral?
One obvious way to make the collision integral vanish is to find a distribution which obeys the detailed balance condition,
f_1^{\rm eq}(\vec{r}, \vec{p}_1')\, f_1^{\rm eq}(\vec{r}, \vec{p}_2') = f_1^{\rm eq}(\vec{r}, \vec{p})\, f_1^{\rm eq}(\vec{r}, \vec{p}_2)      (2.25)
In fact, it’s more useful to write this as
\log f_1^{\rm eq}(\vec{r}, \vec{p}_1') + \log f_1^{\rm eq}(\vec{r}, \vec{p}_2') = \log f_1^{\rm eq}(\vec{r}, \vec{p}) + \log f_1^{\rm eq}(\vec{r}, \vec{p}_2)      (2.26)
How can we ensure that this is true for all momenta? The momenta on the right are those before the collision; on the left they are those after the collision. From the form of (2.26), it's clear that the sum of \log f_1^{\rm eq} must be the same before and after the collision: in other words, this sum must be conserved during the collision. But we know what things are conserved during collisions: momentum and energy as shown in (2.18) and (2.19) respectively. This means that we should take
\log f_1^{\rm eq}(\vec{r}, \vec{p}) = \beta\left(\mu - E(\vec{p}) + \vec{u}\cdot\vec{p}\,\right)      (2.27)
where E(\vec{p}) = p^2/2m for non-relativistic particles and μ, β and \vec{u} are all constants. We'll adjust the constant μ to ensure that the overall normalization of f_1 obeys (2.6). Then, writing \vec{p} = m\vec{v}, we have
f_1^{\rm eq}(\vec{r}, \vec{p}) = n\left(\frac{\beta}{2\pi m}\right)^{3/2} e^{-\beta m(\vec{v} - \vec{u})^2/2}      (2.28)
which reproduces the Maxwell-Boltzmann distribution if we identify β with the inverse temperature. Here \vec{u} allows for the possibility of an overall drift velocity. We learn that the addition of the collision term to the Liouville equation forces us to sit in the Boltzmann distribution at equilibrium.
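A quick numerical spot-check (an aside, with assumed toy values for the mass, β, density and drift) confirms that the form (2.28) does satisfy detailed balance (2.25): for any pair of outgoing momenta consistent with (2.18) and (2.19), the product f_1 f_1 is unchanged.

import numpy as np

rng = np.random.default_rng(1)
m, beta, n = 1.0, 2.0, 1.0                 # assumed toy values
u = np.array([0.3, -0.1, 0.2])             # assumed drift velocity

def f1(p):
    # the Maxwell-Boltzmann form (2.28)
    v = p / m
    return n * (beta / (2 * np.pi * m))**1.5 * np.exp(-beta * m * np.sum((v - u)**2) / 2)

def collide(p1, p2):
    # elastic, equal-mass collision: keep the total momentum, redirect the relative momentum
    P, q = p1 + p2, 0.5 * (p1 - p2)
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    return 0.5 * P + np.linalg.norm(q) * d, 0.5 * P - np.linalg.norm(q) * d

for _ in range(5):
    p1, p2 = rng.normal(size=3), rng.normal(size=3)
    p1p, p2p = collide(p1, p2)
    print(np.isclose(f1(p1) * f1(p2), f1(p1p) * f1(p2p)))   # True every time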
There is a comment to make here that will play an important role in Section 2.4. If we forget about the streaming term {H_1, f_1} then there is a much larger class of solutions to the requirement of detailed balance (2.25). These solutions are again of the form (2.27), but now with the constants μ, β and \vec{u} promoted to functions of space and time. In other words, we can have
f_1^{\rm local}(\vec{r}, \vec{p}; t) = n(\vec{r}, t)\left(\frac{\beta(\vec{r}, t)}{2\pi m}\right)^{3/2} e^{-\beta(\vec{r}, t)\, m\left(\vec{v} - \vec{u}(\vec{r}, t)\right)^2/2}