Notes: Particle in a Box By Random Walks — Phys690 HW5 (Spring ’07)
Shiwei Zhang
The goal is to solve for the ground state of a particle in a box. We set $\hbar = m = 1$.
The Hamiltonian is

$$H = -\frac{1}{2}\,\frac{d^2}{dx^2} + V(x),$$

where $V(x) = 0$ for $x \in (-1, 1)$ and $V(x) = \infty$ otherwise.
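For reference, this box has a standard closed-form ground state, which the simulation below should reproduce:

$$\psi_0(x) = \cos\!\left(\frac{\pi x}{2}\right), \qquad E_0 = \frac{1}{2}\left(\frac{\pi}{2}\right)^2 = \frac{\pi^2}{8} \approx 1.234.$$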
Note that this is the drunkard problem, which we studied earlier, in disguise. As we have shown, the solution to

$$-\frac{\partial}{\partial t}\,\psi(x, t) = H\,\psi(x, t) \qquad (1)$$

leads to the ground state of $H$ at large $t$: $\psi(x, t \to \infty) \to \psi_0(x)$. (Expanding the initial state in eigenfunctions of $H$ gives $\psi(x, t) = \sum_n c_n e^{-E_n t}\,\psi_n(x)$; every excited component decays faster than the $n = 0$ term, so $\psi_0$ dominates at large $t$, provided $c_0 \neq 0$.) In the drunkard problem, the probability distribution of the drunkard at large time is a smooth function which vanishes at the two bars. According to the argument above, this distribution is the ground-state wave function of the quantum-mechanical problem of a particle in a box!

Our strategy for solving the quantum-mechanical problem is therefore to simulate the motion of many drunkards in order to obtain their distribution at large time. Of course, because of trapping at the bars, there is a finite probability of losing a drunkard. We have to multiply $\psi(x, t)$ by a constant to make up for that loss, so as to ensure that the normalization of $\psi(x, t)$ stays constant.

Now we need to figure out how to simulate the drunkard motion. The zeroth-order description is that it is just diffusion (Gaussian moves). But the boundary effect must be properly accounted for. In the lattice version the drunkard either "lands" exactly at a bar or does not. In the continuum version, however, the situation is less straightforward: there is always a finite probability that the drunkard reaches a bar (and hence is absorbed). We therefore need to modify the Gaussian to take this effect into consideration.

Below is a more formal description. We need to repeatedly operate $\exp(-\tau H)$ on an initial wave function, where $\tau$ is small. Recall that $g(x, x') \equiv \langle x | \exp(-\tau H) | x' \rangle$ is called the short-time Green's function. It can be approximated by:
$$g(x, x') = \begin{cases} g_0(x, x') - g_0(x, Ix'), & \text{if } x \text{ and } x' \text{ are inside the box}, \\ 0, & \text{otherwise}, \end{cases}$$
where $g_0$ is the free-particle Green's function,

$$g_0(x, x') = \frac{1}{\sqrt{2\pi\tau}}\,\exp\!\left[-\frac{(x - x')^2}{2\tau}\right],$$

and $Ix'$ denotes the mirror image of $x'$ with respect to the closer side of the box. It is easily verified that $g(x, x')$ indeed (i) satisfies $-\partial g/\partial \tau = Hg$, (ii) reduces to $\delta(x - x')$ at $\tau = 0$, and (iii) (approximately) satisfies the boundary condition at the sides. Note that, except near the two sides, $g(x, x')$ is essentially the free Green's function $g_0(x, x')$.
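For concreteness, here is a minimal sketch (in Python; the function names and the value of tau are illustrative choices, not taken from the assignment code) of the image-corrected kernel:

    import numpy as np

    def g0(x, xp, tau):
        """Free-particle Green's function g_0(x, x')."""
        return np.exp(-(x - xp) ** 2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)

    def mirror_image(xp):
        """Mirror image Ix' of x' about the closer side of the box (-1, 1)."""
        return -2.0 - xp if abs(xp + 1.0) < abs(xp - 1.0) else 2.0 - xp

    def g(x, xp, tau):
        """Short-time Green's function g_0(x, x') - g_0(x, Ix') inside the box."""
        if not (-1.0 < x < 1.0 and -1.0 < xp < 1.0):
            return 0.0
        return g0(x, xp, tau) - g0(x, mirror_image(xp), tau)

    # Property (iii): near the wall closer to x', the image term cancels g_0.
    print(g(0.999, 0.9, 0.01))  # small compared with g0(0.999, 0.9, 0.01)
    print(g(0.0, 0.1, 0.01))    # essentially g0(0.0, 0.1, 0.01) away from the walls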
We want to propagate

$$\psi(x) = e^{\tau E_0} \int g(x, x')\,\psi(x')\, dx'. \qquad (2)$$
The constant $e^{\tau E_0}$ ensures normalization when $\psi$ is the ground-state wave function, i.e., when the propagation has reached equilibrium. We can rewrite Eq. (2) as
$$\psi(x) = e^{\tau E_0} \int K(x, x')\, g_0(x, x')\, \psi(x')\, dx', \qquad (3)$$
where

$$K(x, x') = \begin{cases} 1 - \dfrac{g_0(x, Ix')}{g_0(x, x')} \equiv 1 - P(x, x'), & \text{if } x' \text{ and } x \text{ are inside the box}, \\[1ex] 0, & \text{otherwise}. \end{cases} \qquad (4)$$
Inside the integral in Eq. (3), read from right to left, the three factors can be interpreted as follows: (a) $\psi(x')$ can be viewed as a probability density for $x'$; (b) $g_0(x, x')$ can be viewed as a probability density for $x$ conditional on $x'$; and (c) $K(x, x')$ is between 0 and 1 and can be used as a probability.
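One trial move, implementing factors (b) and (c), might look like the following sketch (continuing the code above; try_move is an illustrative name, not from the assignment code):

    def try_move(xp, rng, tau=0.01):
        """Attempt one move of a walker at x'; return the new x, or None on rejection."""
        x = xp + np.sqrt(tau) * rng.standard_normal()  # (b) sample x from g_0(x, x')
        if not (-1.0 < x < 1.0):
            return None                                # landed outside the box: K = 0
        # (c) accept with probability K = 1 - P, where
        #     P(x, x') = g_0(x, Ix') / g_0(x, x') = exp(-imgdist).
        imgdist = ((x - mirror_image(xp)) ** 2 - (x - xp) ** 2) / (2.0 * tau)
        return x if rng.random() < 1.0 - np.exp(-imgdist) else None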
As a variant of the method we discussed in class, we introduce another way to treat branching. We use a fixed number of walkers to represent $\psi$. The three steps in the help page on 'step' correspond to (a), (b), and (c) in the paragraph above. In the last step, we accept $x$ with probability $K$ (note that exp(-imgdist) in the code is $P$ in Eq. (4)). If we reject, we must start all over again from (a). We do not normalize exp(-imgdist), because $g(x, x')$ (and hence $K$) is not normalized with respect to $x$. The key is to keep randomly picking a walker out of the current pool until we have obtained exactly the desired number of new walkers. The advantage is that no $E_T$ is needed; the disadvantage is that this scheme introduces a systematic bias in the final result, which becomes more pronounced as the number of walkers n_wlks decreases.
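A minimal sketch of this fixed-population scheme (continuing the code above; the name step echoes the routine mentioned in the notes, but the implementation details, equilibration counts, and initial distribution here are assumptions):

    def step(walkers, rng, tau=0.01):
        """One propagation step with a fixed population of n_wlks walkers."""
        n_wlks = len(walkers)
        new_walkers = []
        # Keep picking a random walker from the current pool until exactly
        # n_wlks moves have been accepted; no trial energy E_T is needed.
        while len(new_walkers) < n_wlks:
            xp = walkers[rng.integers(n_wlks)]   # (a) pick x' from the pool
            x = try_move(xp, rng, tau)           # (b) diffuse, (c) accept w.p. K
            if x is not None:
                new_walkers.append(x)
        return new_walkers

    # Usage: equilibrate, then histogram the walkers; the histogram should
    # approach the exact density (pi/4) cos(pi x / 2) on (-1, 1).
    rng = np.random.default_rng(1)
    walkers = list(rng.uniform(-0.5, 0.5, size=1000))
    for _ in range(1000):
        walkers = step(walkers, rng)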