6. Quantum Mechanics: Part II#

6.1. The Schrödinger Wave Equation#

In November 1925, Schrödinger was presenting on de Broglie’s wave theory for particles when Peter Debye suggested that there should be a wave equation. Within a few weeks, Schrödinger found a suitable equation based on what he knew about geometrical and wave optics.

Note

Schrödinger’s equation permits wave-like solutions even though it is not strictly a wave equation in a mathematical sense (see Eqn. (5.8)). It is more analogous to a diffusion equation with a diffusion coefficient \(\beta^2 = h/2m\) (Mita 2021).

Newton’s laws govern the motion of particles; we need a similar equation to describe the wave motion of particles. The wave equation must depend on the potential field that the particle experiences. The wave function \(\Psi\) that solves it allows us to calculate the physical observables (e.g., position, energy, and momentum). Note that we will not be able to determine the exact position and momentum simultaneously due to the uncertainty principle.

The Schrödinger wave equation in its time-dependent form for a particle moving in a potential \(V\) in one dimension is

(6.1)#\[i\hbar \frac{\partial \Psi(x,t)}{\partial t} = \left[-\frac{\hbar^2}{2m} \frac{\partial^2 }{\partial x^2} + V\right]\Psi(x,t),\]

which is a complex partial differential equation (PDE). Both the potential \(V\) and the wave function \(\Psi\) may be functions of space and time (i.e., \(V(x,t)\) and \(\Psi(x,t)\)). The extension of Eqn. (6.1) into three dimensions is straightforward,

\[\begin{align*} i\hbar \frac{\partial \Psi(x,y,z,t)}{\partial t} &= \left[-\frac{\hbar^2}{2m} \left( \frac{\partial^2 }{\partial x^2} + \frac{\partial^2 }{\partial y^2}+ \frac{\partial^2 }{\partial z^2}\right) + V(x,y,z,t)\right]\Psi(x,y,z,t), \\ &= \left[-\frac{\hbar^2}{2m} \nabla^2 + V(x,y,z,t)\right]\Psi(x,y,z,t), \end{align*}\]

but this introduction is restricted to the one-dimensional form until Section 6.5.

Exercise 6.1

The wave equation must be linear so that we can use the superposition principle to form wave packets comprised of two or more waves. Prove that Eqn. (6.1) is linear by considering the sum of two waves (\(\Psi_1\) and \(\Psi_2\)) with constant coefficients (\(a\) and \(b\)),

\[ \Psi(x,t) = a\Psi_1(x,t) + b\Psi_2(x,t), \]

and showing that \(\Psi\) also satisfies the Schrödinger equation.

To apply the Schrödinger equation, we need to know the partial derivatives of \(\Psi\) with respect to time \(t\) and position \(x\). These derivatives are

\[\begin{align*} \frac{\partial \Psi}{\partial t} &= a\frac{\partial \Psi_1}{\partial t} + b\frac{\partial \Psi_2}{\partial t}, \\ \frac{\partial \Psi}{\partial x} &= a\frac{\partial \Psi_1}{\partial x} + b\frac{\partial \Psi_2}{\partial x}, \\ \frac{\partial^2 \Psi}{\partial x^2} &= a\frac{\partial^2 \Psi_1}{\partial x^2} + b\frac{\partial^2 \Psi_2}{\partial x^2}. \end{align*}\]

Substituting these derivatives into Eqn. (6.1) produces

\[ i\hbar \left(a\frac{\partial \Psi_1}{\partial t} + b\frac{\partial \Psi_2}{\partial t} \right) = -\frac{\hbar^2}{2m}\left(a\frac{\partial^2 \Psi_1}{\partial x^2} + b\frac{\partial^2 \Psi_2}{\partial x^2}\right) + V\left(a\Psi_1 +b\Psi_2 \right). \]

Collecting all the \(\Psi_1\) terms to the left-hand side and the \(\Psi_2\) terms to the right-hand side gives

\[ a \left(i\hbar\frac{\partial \Psi_1}{\partial t} + \frac{\hbar^2}{2m}\frac{\partial^2 \Psi_1}{\partial x^2} - V\Psi_1 \right) = -b \left(i\hbar\frac{\partial \Psi_2}{\partial t} + \frac{\hbar^2}{2m}\frac{\partial^2 \Psi_2}{\partial x^2} - V\Psi_2 \right). \]

If \(\Psi_1\) and \(\Psi_2\) satisfy Eqn. (6.1) individually, then the quantities in parentheses are identically zero. Therefore \(\Psi\) is also a solution.

A wave traveling in the \(+x\) direction can be characterized by a wave number \(k\) and angular frequency \(\omega\) as

(6.2)#\[\Psi(x,t) = A \sin\left(kx-\omega t + \phi\right).\]

However, Eqn. (6.2) could be generalized to be a complex function with both sines and cosines. A more general form of a wave function is

\[ \Psi(x,t) = Ae^{i(kx-\omega t)} = A\left[\cos(kx-\omega t) + i \sin(kx-\omega t)\right], \]

where the coefficient \(A\) can be complex. The complex exponential \(e^{i(kx-\omega t)}\) is an acceptable solution to the time-dependent Schrödinger wave equation, but not all functions of \(\sin(kx-\omega t)\) and \(\cos(kx-\omega t)\) are solutions.

Exercise 6.2

(a) Show that \(\Psi_A(x,t) = Ae^{i(kx-\omega t)}\) satisfies the time-dependent Schrödinger wave equation.

(b) Determine whether \(\Psi_B(x,t) = A\sin(kx-\omega t)\) is an acceptable solution to the time-dependent Schrödinger wave equation.

(a) Similar to the previous example, we need to take the appropriate partial derivatives of the wave function \(\Psi_A\) with respect to time as

\[\begin{align*} \frac{\partial \Psi_A}{\partial t} &= \frac{\partial }{\partial t}\left(Ae^{i(kx-\omega t)}\right), \\ &= -i\omega Ae^{i(kx-\omega t)} = -i\omega \Psi_A, \end{align*}\]

and with respect to space as

\[\begin{align*} \frac{\partial \Psi_A}{\partial x} &= \frac{\partial }{\partial x}\left(Ae^{i(kx-\omega t)}\right), \\ &= ik\left(Ae^{i(kx-\omega t)}\right) = ik \Psi_A, \\ \frac{\partial^2 \Psi_A}{\partial x^2} &= ik\frac{\partial }{\partial x}\left(Ae^{i(kx-\omega t)}\right),\\ &= i^2k^2 \left(Ae^{i(kx-\omega t)}\right) = -k^2 \Psi_A. \end{align*}\]

Now we insert the results into Eqn. (6.1) to get

\[\begin{align*} i\hbar \left(-i\omega \Psi_A \right) &= -\frac{\hbar^2}{2m} \left(-k^2\Psi_A\right) + V\Psi_A, \\ 0 &= \left(\hbar\omega -\frac{\hbar^2k^2}{2m} - V \right)\Psi_A. \end{align*}\]

If we use \(E=hf=\hbar \omega\) and \(p = \hbar k\), we obtain

\[ \left(E - \frac{p^2}{2m} - V\right)\Psi_A = 0, \]

which is zero in our nonrelativistic formulation, because \(E = K + V = p^2/(2m) + V\). Thus \(\Psi_A\) appears to be an acceptable solution.

(b) Similar to part a, we need to take the appropriate partial derivatives of the wave function \(\Psi_B\) with respect to time as

\[\begin{align*} \frac{\partial \Psi_B}{\partial t} &= \frac{\partial }{\partial t}\left(A\sin(kx-\omega t) \right), \\ &= -\omega A\cos(kx-\omega t), \end{align*}\]

and with respect to space as

\[\begin{align*} \frac{\partial \Psi_B}{\partial x} &= \frac{\partial }{\partial x}\left(A\sin(kx-\omega t)\right), \\ &= kA\cos(kx-\omega t), \\ \frac{\partial^2 \Psi_B}{\partial x^2} &= \frac{\partial }{\partial x}\left(kA\cos(kx-\omega t)\right),\\ &= -k^2 A\sin(kx-\omega t) = -k^2 \Psi_B. \end{align*}\]

Now we insert the results into Eqn. (6.1) to get

\[\begin{align*} i\hbar \left(-\omega A\cos(kx-\omega t) \right) &= -\frac{\hbar^2}{2m} \left(-k^2\Psi_B\right) + V\Psi_B, \\ &= \left(\frac{\hbar^2k^2}{2m} + V\right) \Psi_B. \end{align*}\]

The left-hand side of the above result is imaginary, while the right-hand side is real. Therefore, \(\Psi_B\) is not an acceptable wave function because Eqn. (6.1) is not satisfied for all \(x\) and \(t\). The function \(\Psi_B\) is a solution to the classical wave equation.
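We can check both conclusions symbolically. The sketch below (not from the text) uses sympy and assumes the nonrelativistic relation \(\hbar\omega = \hbar^2k^2/(2m) + V\) for a constant potential \(V\):

```python
from sympy import symbols, exp, sin, I, diff, simplify

x, t, k, m, hbar, V, A = symbols('x t k m hbar V A', positive=True)
omega = hbar*k**2/(2*m) + V/hbar      # assumes E = hbar*omega = hbar^2 k^2/(2m) + V

def residual(Psi):
    # left-hand side minus right-hand side of Eqn. (6.1) for constant V
    return simplify(I*hbar*diff(Psi, t) + (hbar**2/(2*m))*diff(Psi, x, 2) - V*Psi)

Psi_A = A*exp(I*(k*x - omega*t))
Psi_B = A*sin(k*x - omega*t)

print(residual(Psi_A))   # 0: Psi_A satisfies Eqn. (6.1)
print(residual(Psi_B))   # a nonzero expression: Psi_B does not
```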

6.1.1. Normalization and Probability#

The probability \(P(x)\ dx\) of a particle being between \(x\) and \(x+dx\) is given as

\[ P(x)\ dx = |\Psi(x,t)|^2\ dx = \Psi^*(x,t)\Psi(x,t)\ dx, \]

where the complex conjugate operation \(^*\) changes the sign of the imaginary part of a function (i.e., \(i\rightarrow -i\)) and leaves the real part unchanged. The probability of a particle existing between two points (\(x_1\) and \(x_2\)) is given by

(6.3)#\[\begin{align} P = \int_{x_1}^{x_2} \Psi^*\Psi dx. \end{align}\]

If the wave function represents the probability of a particle existing somewhere, then the sum of all probability intervals must equal unity, or

(6.4)#\[P(\text{all space}) = \int_{-\infty}^\infty \Psi^*\Psi dx = 1,\]

which is called normalization.

Hide code cell content
import numpy as np
import matplotlib.pyplot as plt
from myst_nb import glue

def Psi_alpha(x,A):
    #calculate the wave function Ae^{-|x|/alpha}; alpha is a global variable
    #A = normalization constant
    return A*np.exp(-np.abs(x)/alpha)

fs = 'medium'
lw = 2
col = (218/256., 26/256., 50/256.)

alpha = 1. #scale of the plot 
norm_A = 1./alpha
x_rng = np.arange(-5*alpha,5*alpha,0.01)
x_a = np.arange(0,1,0.01)
x_b = np.arange(1,2,0.01)

fig = plt.figure(figsize=(6,3),dpi=150)
ax = fig.add_subplot(111)

Psi_all = Psi_alpha(x_rng,norm_A)
Psi_a = Psi_alpha(x_a,norm_A)
Psi_b = Psi_alpha(x_b,norm_A)
ax.plot(x_rng,Psi_all,'-',color=col,lw=lw,label='$\Psi(x) = Ae^{-\\frac{|x|}{\\alpha}}$')
ax.fill_between(x_a,Psi_a,color='gray',alpha=0.5)
ax.fill_between(x_b,Psi_b,color='b',alpha=0.75)
ax.axvline(0,color='k',linestyle='-',lw=lw)
ax.legend(loc='best',fontsize=fs)

ax.set_ylim(0,1.1*norm_A)
ax.set_yticks([0,1])
ax.set_yticklabels(['0','A'])
ax.set_xticks(range(-5,6))
ax.set_xlim(-5,5)
ax.set_xlabel("Position ($\\alpha$)",fontsize=fs)

glue("psi_fig", fig, display=False);

Fig. 6.1 The wave function \(\Psi(x) = Ae^{-\frac{|x|}{\alpha}}\) is plotted as a function of \(x\). The area under the curve represents the probability to find the particle between \(x_1\) and \(x_2\). Note that the wave function is symmetric about \(x=0\).#

Exercise 6.3

Consider a wave packet formed by using the wave function \(\Psi(x) = Ae^{-\frac{|x|}{\alpha}}\), where \(A\) is a constant to be determined by normalization (Fig. 6.1). Normalize this wave function and find the probabilities of the particle being between (a) \(0\) and \(\alpha\) (gray area in Fig. 6.1) and (b) \(\alpha\) and \(2\alpha\) (blue area in Fig. 6.1).

To normalize the wave function \(\Psi\), we use Eqn. (6.4) by

\[\begin{align*} \int_{-\infty}^\infty \left(Ae^{-\frac{|x|}{\alpha}} \right) \left(Ae^{-\frac{|x|}{\alpha}}\right)\ dx &= 1, \\ \int_{-\infty}^\infty A^2e^{-\frac{2|x|}{\alpha}} dx &= 1. \end{align*}\]

Because the wave function is symmetric about \(x=0\), we can evaluate only the positive interval \((0\leq x \leq \infty\)), multiply by \(2\), and drop the absolute value signs on \(|x|\) to get

\[\begin{align*} 2\int_0^\infty A^2e^{-\frac{2x}{\alpha}} dx &= -\frac{2A^2\alpha}{2}e^{-\frac{2x}{\alpha}}\biggr\rvert_0^\infty, \\ &= -\alpha A^2 \left(e^{-\infty} - e^0 \right) = \alpha A^2 = 1. \end{align*}\]

The coefficient \(A = \alpha^{-1/2}\), and the normalized wave function \(\Psi\) is

\[ \Psi(x) = \frac{1}{\sqrt{\alpha}}e^{-\frac{|x|}{\alpha}}. \]

We use the above result to determine the probability of the particle existing within each interval. Since the intervals are on the positive \(x\)-axis, we drop the absolute value signs on \(|x|\) to get

\[\begin{align*} P_a &= \frac{1}{\alpha} \int_0^\alpha e^{-\frac{2x}{\alpha}} dx, \\ &= -\frac{1}{2}e^{-\frac{2x}{\alpha}}\biggr\rvert_0^\alpha, \\ &= -\frac{1}{2}\left(e^{-2}-e^0\right) \approx 0.432. \end{align*}\]

The probability in the other interval is

\[\begin{align*} P_b &= \frac{1}{\alpha} \int_\alpha^{2\alpha} e^{-\frac{2x}{\alpha}} dx, \\ &= -\frac{1}{2}e^{-\frac{2x}{\alpha}}\biggr\rvert_\alpha^{2\alpha}, \\ &= -\frac{1}{2}\left(e^{-4}-e^{-2}\right) \approx 0.059. \end{align*}\]

The particle is much more likely to exist in interval (a) than in interval (b). This is expected given the shape of the wave function.
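As a quick check of these numbers, here is a minimal sympy sketch (not from the text) that repeats the normalization and the two interval probabilities, using the same symmetry argument as above:

```python
from sympy import symbols, exp, integrate, oo, sqrt

x, alpha = symbols('x alpha', positive=True)
psi = exp(-x/alpha)/sqrt(alpha)     # the normalized wave function, written for x >= 0

# normalization: double the x >= 0 contribution (symmetry about x = 0)
print(2*integrate(psi**2, (x, 0, oo)))            # 1

P_a = integrate(psi**2, (x, 0, alpha))            # probability for 0 <= x <= alpha
P_b = integrate(psi**2, (x, alpha, 2*alpha))      # probability for alpha <= x <= 2*alpha
print(P_a.evalf(3), P_b.evalf(3))                 # about 0.432 and 0.059
```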

The wave function \(e^{i(kx-\omega t)}\) represents a “free” particle under zero net force (constant \(V\)) moving along the \(x\) axis. There is a problem with this wave function if we try to normalize it. The normalization integral diverges to infinity!

This occurs because there is a finite probability of the particle existing anywhere along the \(x\) axis, so the sum adds up an infinite number of small intervals, each containing a finite probability. The only other possibility is a zero probability everywhere, but that is not an interesting physical result.

Because this wave function has a precise \(k\) and \(\omega\), it represents a particle with a definite energy and momentum. From the uncertainty principle, \(\Delta E = 0\) and \(\Delta p = 0\), which implies that \(\Delta t = \infty\) and \(\Delta x = \infty\). We cannot know where the particle is at any time. We can still use such wave functions, if we restrict the particle to certain positions in space, such as in a box or in an atom. We can also form wave packets from such functions to localize the particle.

6.1.2. Properties of Valid Wave Functions#

There are certain properties (i.e., boundary conditions) that an acceptable wave function \(\Psi\) must satisfy. It must:

  1. be finite everywhere to avoid infinite probabilities.

  2. be single valued to avoid multiple values of the probability.

  3. be continuous for finite potentials. Its spatial derivative \(\partial \Psi/\partial x\) must also be continuous. This is required due to the second-order derivative term in the Schrödinger equation. (There are exceptions to this rule for infinite potentials).

  4. approach zero as \(x \rightarrow \pm \infty\) so that the wave function can be normalized.

Solutions for \(\Psi\) that do not satisfy these properties do not generally correspond to physically realizable circumstances.

6.1.3. Time-Independent Schrödinger Wave Equation#

In many cases, the potential will not depend explicitly on time (i.e., it is time-independent). The dependence on time and position can then be separated in the Schrödinger equation. Let \(\Psi(x,t) = \psi(x)f(t)\), and insert into Eqn. (6.1) to find

\[ i\hbar \psi(x)\frac{\partial f(t)}{\partial t} = -\frac{\hbar^2 f(t)}{2m}\frac{\partial^2 \psi(x)}{\partial x^2} + V(x)\psi(x)f(t). \]

Dividing by \(\psi(x) f(t)\), we get

(6.5)#\[\begin{align} \frac{i\hbar}{f(t)} \frac{d f(t)}{dt} = -\frac{\hbar^2}{2m\,\psi(x)}\frac{d^2 \psi(x)}{d x^2} + V(x). \end{align}\]

Notice that the left-hand side depends only on time, while the right-hand side depends only on spatial coordinates. This allows us to change the partial derivatives to ordinary derivatives because each side depends only on one variable. Each side must be equal to a constant because one variable may change independently of the other. Let’s call this constant \(B\) and set it equal to the left-hand side to get

\[\begin{align*} \frac{i\hbar}{f}\frac{df}{dt} &= B, \\ i\hbar \int \frac{df}{f} &= \int B\ dt. \end{align*}\]

We can integrate both sides and find

\[ i\hbar \ln f = Bt + C, \]

which includes the integration constant \(C\) that we may choose to be 0. Then,

\[ \ln f = \frac{Bt}{i\hbar}, \]

and we determine \(f\) to be

(6.6)#\[\begin{align} f(t) = e^{\frac{Bt}{i\hbar}} = e^{-\frac{iBt}{\hbar}}. \end{align}\]

If we let \(B = \hbar \omega = E\), then \(f(t) = e^{-i\omega t} = e^{-\frac{iEt}{\hbar}}\). We can now write the time-independent Schrödinger equation as

(6.7)#\[\left[-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x)\right] \psi(x) = E\psi(x).\]

The wave function \(\Psi(x,t)\) becomes

(6.8)#\[\Psi(x,t) = \psi(x)e^{-i\omega t}.\]

Many important results can be obtained when only the spatial part of the wave function \(\psi(x)\) is needed.

The probability density uses the complex conjugate, which now becomes useful. Using Eqn. (6.8), we have

(6.9)#\[\begin{align} \Psi^*\Psi &= \psi^2(x)\left(e^{i\omega t}e^{-i\omega t}\right), \\ &= \psi^2(x), \end{align}\]

where the probability distributions are constant in time. If the probability distribution represents a wave, then we have the phenomenon of standing waves. In quantum mechanics, we say the system is in a stationary state.
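As a quick illustration of a stationary state, the sketch below (using an assumed real spatial part \(\psi(x) = \sin kx\), chosen only for illustration) shows that the time dependence cancels in the probability density:

```python
from sympy import symbols, sin, exp, I, conjugate, simplify

x, t, k, omega = symbols('x t k omega', real=True)
Psi = sin(k*x)*exp(-I*omega*t)        # stationary-state form of Eqn. (6.8)

print(simplify(conjugate(Psi)*Psi))   # sin(k*x)**2 -- no time dependence
```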

Exercise 6.4

Consider a metal in which there are free electrons, and the potential is zero (e.g., a conductor). What mathematical form does the wave function \(\psi(x)\) take?

Using Eqn. (6.7), we let \(V(x)=0\) to get

\[ -\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} = E\psi(x), \]

and using Eqn. (5.35), we find

\[ \frac{d^2\psi}{dx^2} = -\frac{2mE}{\hbar^2}\psi = -k^2\psi.\]

The differential equation \(d^2\psi/dx^2 = -k^2\psi\) has appeared several times in calculus and introductory physics (e.g., pendulum and simple harmonic motion).

If the energy \(E>0\), then \(k^2\) is positive, \(k\) is real, and the wave function solution is oscillatory,

\[ \psi(x) = A\sin kx + B\cos kx.\]

Otherwise (\(E<0\)), \(k^2\) is negative and \(k\) is imaginary (\(k = i\kappa\) with \(\kappa\) real), so the wave function solution is a decaying exponential,

\[ \psi(x) = Ce^{ikx} = Ce^{-\kappa x}.\]

6.1.4. Comparison of Classical and Quantum Mechanics#

Newton’s second law (\(\vec{F} = m\vec{a}\)) and Schrödinger’s equation are both differential equations that are postulated to explain observed behavior, and experiments show that they are successful. Newton’s second law can be derived from the Schrödinger equation. Schrödinger’s equation is more fundamental. Classical mechanics only appears to be more precise because it deals with macroscopic phenomena, where the underlying uncertainties are just too small to be significant.

An interesting parallel between classical and quantum mechanics lies in considering ray and wave optics. Throughout the 18th century, scientists argued whether wave or ray optics was more fundamental (Newton favored ray optics). In the early 19th century, it was shown that wave optics was needed to explain diffraction and interference. Ray optics is a good approximation as long as the wavelength \(\lambda\) is much smaller than the dimensions of the apertures, because rays of light then display particle-like behavior. To describe interference phenomena, wave optics is required.

For macroscopic objects, the de Broglie wavelength is so small that wave behavior is not apparent. Eventually the wave descriptions and quantum mechanics were required to understand all the data. Classical mechanics is a good macroscopic approximation, but there is only one correct theory, quantum mechanics, as far as we know.

6.2. Expectation Values#

The wave equation formalism must connect to measurements of physical quantities (e.g., position, momentum, and energy). Consider a measurement of the position \(x\) of a particular system. If we make three measurements of the position, we are likely to obtain three different results due to uncertainty. If our method of measurement is inherently accurate, then there is some physical significance to the average of our measured values of \(x\).

The precision of our result improves with additional measurements. In quantum mechanics, we use wave functions to calculate the expected result of the average of many measurements of a given quantity. The average is computed through integration (recall the mean value theorem from calculus). We call this result the expectation value, where the expectation value of \(x\) is \(\langle x \rangle\). Physical observables are found through their expectation values. An expectation value must be real (i.e., not complex) because the results of measurements are real.

Consider a particle that is constrained to move along the \(x\)-axis. If we make many measurements of the particle, we may observe the particle at the position \(x_1\) for a number \(N_1\) times, the position \(x_2\) for \(N_2\) times, and so forth. The average value of \(x\) (i.e., \(\bar{x}\) or \(x_{\rm av}\)) is then

\[ \bar{x} = \frac{N_1 x_1 + N_2 x_2 + \cdots}{N_1 + N_2 + \cdots} = \frac{\sum_i N_i x_i}{\sum_i N_i}, \]

from a series of discrete measurements. For continuous variables, we use the probability \(P(x,t)\) of observing the particle at a particular \(x\). Then the average value of \(x\) is

(6.10)#\[\begin{align} \bar{x} = \frac{\int_{-\infty}^\infty xP(x) dx}{\int_{-\infty}^\infty P(x) dx}. \end{align}\]

In quantum mechanics, we use the probability distribution (i.e., \(P(x)dx = \Psi^*(x,t)\Psi(x,t) dx\)) to determine the average or expectation value. The expectation value \(\langle x \rangle\) is given by

(6.11)#\[\begin{align} \langle x \rangle = \frac{\int_{-\infty}^\infty x\Psi^*(x,t)\Psi(x,t) dx}{\int_{-\infty}^\infty \Psi^*(x,t)\Psi(x,t) dx}. \end{align}\]

The denominator is the normalization equation. If the wave function is normalized, then the denominator is equal to unity. For a normalized wave function \(\Psi\), the expectation value is then given by

(6.12)#\[\begin{align} \langle x \rangle = \int_{-\infty}^\infty x\Psi^*(x,t)\Psi(x,t) dx. \end{align}\]

The same general procedure is applicable to determining the expectation value of any function \(g(x)\) for a normalized wave function \(\Psi(x,t)\) by

(6.13)#\[\begin{align} \langle g(x) \rangle = \int_{-\infty}^\infty \Psi^*(x,t)g(x)\Psi(x,t) dx. \end{align}\]
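For example, we can apply Eqn. (6.13) with \(g(x) = x\) and \(g(x) = x^2\) to the normalized wave packet from Exercise 6.3; the sketch below (an illustration, not from the text) uses sympy and exploits the symmetry about \(x=0\):

```python
from sympy import symbols, exp, integrate, oo, sqrt

x, alpha = symbols('x alpha', positive=True)
psi = exp(-x/alpha)/sqrt(alpha)       # normalized wave function of Exercise 6.3 for x >= 0

# <x> = 0 by symmetry (odd integrand); <x^2> doubles the x >= 0 contribution
exp_x2 = 2*integrate(x**2*psi**2, (x, 0, oo))
print(exp_x2)                         # alpha**2/2
```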

Any knowledge we might have of the simultaneous value of the position \(x\) and the momentum \(p\) must be consistent with the uncertainty principle. To find the expectation value of \(p\), we need to represent \(p\) in terms of \(x\) and \(t\). Consider the wave function of the free particle, \(\Psi(x,t) = e^{i(kx-\omega t)}\). The spatial derivative is

\[ \frac{\partial \Psi}{\partial x} = \frac{\partial}{\partial x}\left[ e^{i(kx-\omega t)}\right] = ike^{i(kx-\omega t)} = ik\Psi. \]

Since \(k = p/\hbar\), this becomes

\[ \frac{\partial \Psi}{\partial x} = \frac{i}{\hbar}p\Psi, \]

or

\[ p\left[\Psi(x,t)\right] = -i\hbar \frac{\partial \Psi}{\partial x}. \]

An operator is a mathematical operation that transforms one function into another. Coffee is the operator that transforms ideas into knowledge! An operator, denoted by \(\hat{Q}\), transforms the function \(f(x)\) by \(\hat{Q}f(x) = g(x)\). The momentum operator \(\hat{p}\) is then represented by

(6.14)#\[\begin{align} \hat{p} = -i\hbar \frac{\partial}{\partial x}, \end{align}\]

where the \(\hat{\ }\) symbol (i.e., “hat”) over the letter indicates that it is an operator.

The momentum operator is not unique; each physical observable has an associated operator that is used to find that observable’s expectation value. To compute the expectation value of some physical observable \(Q\), the operator \(\hat{Q}\) must operate on \(\Psi\) before calculating the probability to get

(6.15)#\[\begin{align} \langle Q \rangle = \int_{-\infty}^\infty \Psi^*(x,t)\hat{Q}\Psi(x,t) dx. \end{align}\]

The expectation value of the momentum \(p\) becomes

(6.16)#\[\begin{align} \langle p \rangle &= \int_{-\infty}^\infty \Psi^*(x,t)\hat{p}\Psi(x,t) dx, \\ &=-i\hbar \int_{-\infty}^\infty \Psi^*(x,t)\frac{\partial \Psi(x,t)}{\partial x} dx. \end{align}\]

The position \(x\) is its own operator (i.e., \(\hat{x} = x\)). Operators for observables of both \(x\) and \(p\) can be constructed from \(\hat{x}\) and \(\hat{p}\).

The energy operator \(\hat{E}\) can be constructed from the time derivative of the free-particle wave function,

\[ \frac{\partial \Psi}{\partial t} = \frac{\partial}{\partial t}\left[ e^{i(kx-\omega t)}\right] = -i\omega \Psi. \]

Substituting \(\omega = E/\hbar\) and rearranging, we find

(6.17)#\[\begin{split}E\left[\Psi(x,t)\right] &= i\hbar \frac{\partial \Psi}{\partial t}, \\ \hat{E}\Psi(x,t) &= i\hbar \frac{\partial }{\partial t}\Psi(x,t).\end{split}\]

The energy operator \(\hat{E}\) is used to find the expectation value \(\langle E \rangle\) of the energy by

(6.18)#\[\begin{align} \langle E \rangle &= \int_{-\infty}^\infty \Psi^*(x,t)\hat{E}\Psi(x,t) dx, \\ &=i\hbar \int_{-\infty}^\infty \Psi^*(x,t)\frac{\partial \Psi(x,t)}{\partial t} dx. \end{align}\]

The application of operators in this way is a general result and is not limited to free-particle wave functions.

Exercise 6.5

Use the momentum and energy operators with the conservation of energy to produce the Schrödinger wave equation.

The conservation of energy tells us how to calculate the total energy

\[ E = K + V = \frac{p^2}{2m} + V. \]

We replace \(p\) with the operator and apply to the wave function \(\Psi\), which gives

\[\begin{align*} \left[ \frac{1}{2m}\hat{p}^2 + V \right]\Psi &= \frac{-i\hbar}{2m}\frac{\partial}{\partial x}\left[-i\hbar\frac{\partial \Psi}{\partial x} \right] + V\Psi, \\ &= -\frac{\hbar^2}{2m} \frac{\partial^2 \Psi}{\partial x^2} + V\Psi. \end{align*}\]

Notice that \(\hat{p}^2\) results in applying \(\hat{p}\) twice. Using the application of the energy operator from Eqn. (6.17), we obtain

\[\begin{align*} \hat{E}\Psi &= \left[ \frac{1}{2m}\hat{p}^2 + V \right]\Psi, \\ i\hbar \frac{\partial }{\partial t}\Psi(x,t) &= -\frac{\hbar^2}{2m} \frac{\partial^2 \Psi}{\partial x^2} + V\Psi, \end{align*}\]

which is the time-dependent Schrödinger wave equation. This is not a derivation, but only a verification of the consistency of the definition.

6.3. Infinite Square-Well Potential#

The simplest system consists of a particle trapped in a well with infinitely hard walls that the particle cannot penetrate (i.e., particle-in-a-box). This scenario was explored in Sect. 5.8, but now we present the full quantum mechanical solution.

The potential that gives rise to an infinite square well with a width \(L\) is given by

(6.19)#\[\begin{align} V(x) = \begin{cases} \infty &\qquad x\leq 0, x\geq L \\ 0 &\qquad 0<x<L. \end{cases} \end{align}\]

The particle is constrained to move only between \(x=0\) and \(x=L\), where the particle experiences no forces. The infinite square-well potential can approximate many physical situations, such as the energy levels of simple atomic and nuclear systems.

If we insert the infinite square-well potential into the time-independent Schrödinger equation (Eqn. (6.7)),

  • the only possible solution for the regions \(x\leq 0\) or \(x\geq L\) is the wave function \(\psi(x) = 0\) because \(V = \infty\). There is zero probability for the particle to exist in these regions.

  • the region \(0<x<L\) permits a nonzero solution because \(V=0\). The probability for the particle to exist in this region is 1.

Using \(V=0\) in Eqn. (6.7) produces

\[ \frac{d^2 \psi}{dx^2} = -\frac{2mE}{\hbar^2}\psi = -k^2 \psi, \]

where the wave number \(k = \sqrt{2mE/\hbar^2}\). A suitable solution to this equation is

(6.20)#\[\begin{align} \psi(x) = A\sin kx + B \cos kx, \end{align}\]

where \(A\) and \(B\) are constants used to normalize the wave function. The wave function must be continuous at the boundaries, which means that \(\psi(0)=\psi(L) = 0\). Let’s apply these boundary conditions as

(6.21)#\[\begin{align} \psi(0) &= A \sin (0) + B \cos(0) = B, \\ \psi(L) &= A \sin (kL) + B \cos(kL). \end{align}\]

For \(\psi(0) = 0\) to hold, we must have \(B=0\). Applying \(\psi(L) = 0\) gives \(A\sin(kL) = 0\); since \(A=0\) is only the trivial solution, we must have

(6.22)#\[\begin{align} kL = n\pi \end{align}\]

for \(\sin(kL) = 0\) and \(n\) is a positive integer. The wave function is now

(6.23)#\[\begin{align} \psi_n(x) = A \sin \left(\frac{n\pi}{L} x \right) \quad (n=1,\ 2,\ 3,\ \ldots). \end{align}\]

The remaining constant \(A\) is determined by normalizing the wave function. The normalization condition is

\[ P = \int_{-\infty}^\infty \psi_n^*(x)\psi_n(x) dx = 1, \]

where substitution of the wave function yields

\[ A^2 \int_0^L \sin^2 \left(\frac{n\pi}{L} x \right) dx = 1. \]

This is a straightforward integral (with the help of an integral table) and gives \(L/2\), so that \(A^2(L/2) = 1\) and \(A = \sqrt{2/L}\). The normalized wave function becomes

(6.24)#\[\psi_n(x) = \sqrt{\frac{2}{L}} \sin \left(\frac{n\pi}{L} x \right) \quad (n=1,\ 2,\ 3,\ \ldots).\]

These wave functions are identical to those of a vibrating string with its ends fixed. The application of the boundary conditions corresponds to fitting standing waves into the box. It is not a surprise to find standing waves as the solution because we are considering time-independent solutions. Since \(k_n = n\pi/L\), we have

\[ k_n = \frac{n\pi}{L} = \sqrt{\frac{2mE_n}{\hbar^2}}. \]

Notice the subscript \(n\) that denotes quantities that depend on the integer \(n\) and have multiple values. This equation is solved for \(E_n\) to give

(6.25)#\[E_n = n^2 \frac{\pi^2 \hbar^2}{2mL^2} \quad (n=1,\ 2,\ 3,\ \ldots).\]

The possible energies \(E_n\) of the particle are quantized and the integer \(n\) is a quantum number. The quantization of energy occurs from the application of the boundary conditions (standing waves) to possible solutions of the wave equation. Each wave function \(\psi_n(x)\) has associated with it a unique energy \(E_n\). The lowest energy level given by \(n=1\) is called the ground state and has an energy \(E_1 = \pi^2\hbar^2/(2mL^2)\).
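For a sense of scale, the sketch below evaluates Eqn. (6.25) for an electron confined to an assumed (purely illustrative) well width of \(L = 0.5\) nm, roughly an atomic size:

```python
from scipy.constants import hbar, m_e, eV, pi

L = 0.5e-9                               # assumed (illustrative) well width of 0.5 nm
E_1 = pi**2*hbar**2/(2*m_e*L**2)         # ground-state energy from Eqn. (6.25)
print("E_1 = %.2f eV" % (E_1/eV))        # roughly 1.5 eV
```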

Note that the lowest energy level cannot be zero: setting \(n=0\) gives \(\psi_0 = 0\) everywhere, which means zero probability of finding the particle. Classically, the particle has equal probability of being anywhere inside the box, so the classical probability density is \(P(x) = 1/L\). According to Bohr’s correspondence principle, we should obtain the same probability for large \(n\), where the classical and quantum results should agree. The quantum probability density is \((2/L)\sin^2(k_n x)\).

For large values of \(n\), there will be many oscillations within the box and the average value of \(\sin^2 \theta\) over many cycles is \(1/2\). Therefore the quantum probability for large \(n\) is equal to \(1/L\), in agreement with the classical result.
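A short numerical check (using an arbitrarily chosen interval, assumed only for illustration) shows the same correspondence: the probability of finding the particle in a fixed interval approaches the classical value \((b-a)/L\) as \(n\) grows:

```python
import numpy as np
from scipy.integrate import quad

L = 1.0
a, b = 0.10*L, 0.32*L                    # arbitrary interval; classically P = (b - a)/L = 0.22
for n in (1, 2, 5, 50):
    P, _ = quad(lambda x: (2/L)*np.sin(n*np.pi*x/L)**2, a, b)
    print(n, round(P, 3))                # approaches the classical value 0.22 as n grows
```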

Exercise 6.6

Determine the expectation values for (a) \(x\), (b) \(x^2\), (c) \(p\), and (d) \(p^2\) of a particle in an infinite square well for the first excited state.

The first excited state corresponds to \(n=2\), and we know the quantized (normalized) wave function for a particle in an infinite square well from Eqn. (6.24). This gives

\[ \psi_2(x) = \sqrt{\frac{2}{L}} \sin\left(\frac{2\pi}{L}x \right). \]

(a) Using \(\psi_2(x)\), we can find the expectation value of \(x\) through the integral (using an integral table)

\[\begin{align*} \langle x \rangle &= \int_0^L \psi_2^* x \psi_2 dx, \\ &= \frac{2}{L}\int_0^L x\sin^2\left(\frac{2\pi}{L}x \right) dx, \\ &= \frac{L}{2}. \end{align*}\]

The average position of the particle is in the middle of the box \((x=L/2)\), although this is not the most probable location.

(b) The expectation value of \(x^2\) is found through

\[\begin{align*} \langle x^2 \rangle &= \frac{2}{L}\int_0^L \left[x\sin\left(\frac{2\pi}{L}x \right)\right]^2 dx, \\ &= 0.32L^2, \end{align*}\]

where you can use integration by parts, an integral table, or the Python module sympy. The root-mean-square value \(\sqrt{\langle x^2\rangle}\) for the first excited state is \(0.57L\), which is larger than \(\langle x \rangle\).

(c) The expectation value of \(p\) requires the first derivative \(d\psi_2/dx\), which is,

\[ \frac{d\psi_2}{dx} = \sqrt{\frac{2}{L}} \frac{d}{dx}\left[\sin \left(\frac{2\pi}{L}x \right)\right] = \frac{2\pi}{L}\sqrt{\frac{2}{L}} \cos \left(\frac{2\pi}{L}x \right). \]

Applying this result to the integral for \(\langle p \rangle\) gives,

\[\begin{align*} \langle p\rangle &= -i\hbar \int_0^L \psi_2^* \frac{d\psi_2}{dx} dx, \\ &= -\frac{4\pi i\hbar}{L^2}\int_0^L \sin\left(\frac{2\pi}{L}x \right)\cos \left(\frac{2\pi}{L}x \right) dx,\\ &= 0. \end{align*}\]

The integral of \(\sin x \cos x\) vanishes over complete half-cycles; physically, the particle moves to the left as often as to the right in the box.

(d) The expectation value for \(p^2\) requires us to operate a second time on \(\psi_2\) to get

\[ \frac{d^2\psi_2}{dx^2} = \frac{2\pi}{L}\sqrt{\frac{2}{L}} \frac{d}{dx}\left[\cos \left(\frac{2\pi}{L}x \right)\right] = -\frac{4\pi^2}{L^2}\sqrt{\frac{2}{L}} \sin \left(\frac{2\pi}{L}x \right). \]

Applying this result to the integral for \(\langle p^2 \rangle\) gives,

\[\begin{align*} \langle p^2\rangle &= -\hbar^2 \int_0^L \psi_2^* \frac{d^2\psi_2}{dx^2} dx, \\ &= \frac{8\pi^2 \hbar^2}{L^3}\int_0^L \sin^2 \left(\frac{2\pi}{L}x \right) dx,\\ &= \frac{4\pi^2 \hbar^2}{L^2}. \end{align*}\]

Using \(\langle p^2\rangle\), we can calculate the energy of the first excited state as

\[ E_2 = \frac{\langle p^2\rangle}{2m} = \frac{4\pi^2 \hbar^2}{2mL^2}, \]

which is correct because \(V=0\).

from sympy import symbols, pi, simplify, integrate, diff, sin, var, I

var('x')
hbar, L = symbols('hbar L')
psi = sin(2*pi*x/L)

#find integral for <x>
exp_x = simplify(integrate((2/L)*x*psi**2,(x,0,L)))
print("The expectation value for x is",exp_x)

#find integral for <x^2>
exp_x2 = simplify(integrate((2/L)*(x*psi)**2,(x,0,L)))
print("The expectation value for x**2 is", exp_x2, '= [1/3 - 1/(8*pi**2)]*L**2 = %1.2f L**2' % (1/3 - 1/(8*pi**2)))

#find integral for <p>; the momentum operator is -i*hbar*d/dx (Eqn. (6.14))
exp_p = simplify(integrate((2/L)*psi*diff(-I*hbar*psi,x),(x,0,L)))
print("The expectation value for p is", exp_p)

#find integral for <p^2>
exp_p2 = simplify(integrate((2/L)*psi*diff(diff(-hbar**2*psi,x),x),(x,0,L)))
print("The expectation value for p**2 is", exp_p2.args[0][0])
The expectation value for x is L/2
The expectation value for x**2 is -L**2/(8*pi**2) + L**2/3 = [1/3 - 1/(8*pi**2)]*L**2 = 0.32 L**2
The expectation value for p is 0
The expectation value for p**2 is 4*pi**2*hbar**2/L**2

6.4. Finite Square-Well Potential#

An infinite potential well is not very realistic. The finite square-well potential is more realistic and similar to the infinite one. Consider a potential that is zero between \(x=-L\) and \(x=+L\), but equal to a constant \(V_o\) everywhere else. This is written mathematically as

(6.26)#\[\begin{align} V(x) = \begin{cases} V_o \quad & x\leq -L, \quad &\text{region I} \\ 0 \quad & -L < x < L, \quad &\text{region II} \\ V_o \quad & x\geq L. \quad &\text{region III} \\ \end{cases} \end{align}\]

First, let’s examine a particle of energy \(E < V_o\) that is classically bound inside the well. Using Eqn. (6.7), we have

(6.27)#\[\begin{align} -\frac{\hbar^2}{2m} \frac{d^2\psi}{dx^2} = \left[E-V_o \right]\psi. \quad \text{regions I, III} \end{align}\]

Let \(\alpha^2 = 2m(V_o-E)/\hbar^2\) (a positive constant), and we find

\[ \frac{d^2\psi}{dx^2} = \alpha^2 \psi. \]

The solution to this differential equation is known and can be written as a combination of exponential functions (\(e^{\alpha x}\) and \(e^{-\alpha x}\)), or

\[ \psi = Ae^{\alpha x} + Be^{-\alpha x}. \]

A valid wave function needs to be finite everywhere, where we can use this constraint to determine the form of the wave function \(\psi\) within region I and region III. Region I includes all values of \(x\leq -L\), where we can try to evaluate what conditions are necessary for \(\psi\) to be finite using a limit,

\[ \lim_{x\rightarrow -\infty} \psi = Ae^{-\infty} + Be^{\infty}, \]

which requires that \(B = 0\). Similarly, we can use a limit to find that \(A=0\) for \(x\geq L\). This allows us to define the wave functions in each region as

(6.28)#\[\begin{align} \psi_I(x) &= Ae^{\alpha x}, \quad \text{region I},\ x\leq -L \\ \psi_{III}(x) &= Be^{-\alpha x}. \quad \text{region III},\ x\geq L \end{align}\]

Inside the square well, the potential is zero and the Schrödinger equation becomes

\[ \frac{d^2\psi}{dx^2} = -k^2\psi, \]

where \(k=\sqrt{(2mE)/\hbar^2}\). This differential equation has a solution, which is a combination of sinusoidal functions:

(6.29)#\[\begin{align} \psi_{II}(x) = C\cos(kx) + D\sin(kx). \quad \text{region II},\ -L<x<L \end{align}\]

To determine a valid wave function, it must be continuous at the boundaries (e.g., \(\psi_I = \psi_{II}\) and \(d\psi_I/dx = d\psi_{II}/dx\) at \(x=-L\)). By choosing symmetric boundaries and noting that the potential is an even function (i.e., \(V(-x)=V(x)\)), we can simplify \(\psi_{II}\) to include only the even portion (i.e., \(D=0\)). As a result, we find the following conditions

(6.30)#\[\begin{align} C\cos(kL) &= A e^{-\alpha L}, \\ kC\sin(kL) &= \alpha A e^{-\alpha L}, \\ C\cos(kL) &= Be^{-\alpha L}, \\ -kC\sin(kL) &= -\alpha B e^{-\alpha L}. \end{align}\]

We immediately find that \(B=A\), and we can divide the last two equations to eliminate \(C\) to get,

(6.31)#\[\begin{align} k\tan(kL) = \alpha. \end{align}\]

Recall that \(k\) and \(\alpha\) are both functions of energy \(E\). To solve for \(E\), we perform a variable transformation \(z \equiv kL\) and \(z_o \equiv \frac{L}{\hbar}\sqrt{2mV_o}\). Using the definitions of \(k\) and \(\alpha\), we find that

\[ k^2 + \alpha^2 = \frac{2mE}{\hbar^2} + \frac{2m(V_o-E)}{\hbar^2} = \frac{2mV_o}{\hbar^2}. \]

Combining the factors of \(z\) produces,

\[ \sqrt{z_o^2 - z^2} = \sqrt{\frac{2mV_o}{\hbar^2}L^2 - \frac{2mE}{\hbar^2}L^2} = \sqrt{\frac{2m(V_o-E)}{\hbar^2}}L = \alpha L. \]

Completing the transformation, we get

(6.32)#\[\begin{split}kL\tan(kL) &= \alpha L, \\ z\tan(z) &= \sqrt{z_o^2 - z^2},\\ \tan(z) &= \sqrt{\left(\frac{z_o}{z}\right)^2 -1}.\end{split}\]

We are left with a transcendental equation that must be solved numerically, where the equation depends on \(z_o\) (i.e., the size and depth of the well) and \(z\) (i.e., a measure of the energy because \(k\propto \sqrt{E}\)). The constants in the wave functions can then be determined through normalization.

Hide code cell content
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import newton
from myst_nb import glue

def fsw(z):
    #Calculate the root for the finite square well (symmetric case)
    return np.tan(z) - np.sqrt((z_o/z)**2-1)
def fsw_prime(z):
    #Calculate the derivative of fsw: sec(z)**2 + z_o**2/(z**3*sqrt((z_o/z)**2-1))
    return 1./np.cos(z)**2 + z_o**2/np.sqrt((z_o/z)**2-1)/z**3

fs = 'medium'
lw = 1.5
col = (218/256., 26/256., 50/256.)

z = np.arange(0.01,3*np.pi,0.01)
z_o = 8 #Example from Griffiths (Fig. 2.13)
z1 = np.arange(0,np.pi/2,0.01)

fig = plt.figure(figsize=(4,2),dpi=300)
ax = fig.add_subplot(111)

for i in range(0,3):
    z1 = np.arange(i*np.pi,(2*i+1)*np.pi/2,0.01)
    if i == 0:
        ax.plot(z1,np.tan(z1),'-',color=col,lw=lw,label='$\\tan(z)$')
    else:
        ax.plot(z1,np.tan(z1),'-',color=col,lw=lw)
ax.plot(z[z<z_o],np.sqrt((z_o/z[z<z_o])**2-1),'k-',lw=lw,label='$\sqrt{(z_o/z)^2-1}$')
ax.legend(bbox_to_anchor=(-0.05, 1.05, 1., .102),ncol=2,fontsize=fs,frameon=False)

roots = []
for i in range(0,3):
    z_guess = 1.4 + i*np.pi
    z_root = newton(fsw,z_guess,fsw_prime)
    ax.plot(z_root,np.tan(z_root),'b.',ms=8)
    roots.append(z_root)

#print("The first three roots are approximately %1.2f pi, %1.2f pi, and %1.2f pi." % (roots[0]/np.pi,roots[1]/np.pi,roots[2]/np.pi))

ax.set_xticks([0,np.pi/2,np.pi,1.5*np.pi,2*np.pi,2.5*np.pi])
ax.set_xticklabels(['0','$\pi/2$','$\pi$','$3\pi/2$','$2\pi$','$5\pi/2$'])
ax.set_xlim(0,3*np.pi)

ax.set_ylim(0,10)
ax.set_yticks([])
ax.set_xlabel("$z$",fontsize=fs)

glue("fsw_fig", fig, display=False);

Fig. 6.2 Graphical solution for the finite square well potential using Eqn. (6.32), for \(z_o = 8\) (even states). The three roots (blue dots) are \(0.44\pi\), \(1.33\pi\), and \(2.17\pi\).#

Hide code cell content
import numpy as np
import matplotlib.pyplot as plt
from scipy.constants import m_e, hbar, eV
from scipy.optimize import minimize
from myst_nb import glue

def fsw_wave_func(x,A,C,D,E,V_o,m):
    #Calculate the wave function for the finite square well
    k = calc_k(E,m)
    alpha = calc_alpha(E,V_o,m)
    region_I = np.where(x<=-L)[0]
    region_II = np.where(np.logical_and(-L<x,x<L))[0]
    region_III = np.where(x>=L)

    psi = np.zeros(len(x))
    psi[region_I] = A*np.exp(alpha*x[region_I])
    psi[region_II] = C*np.cos(k*x[region_II])+D*np.sin(k*x[region_II])
    psi[region_III] = A*np.exp(-alpha*x[region_III])

    return psi

def calc_alpha(E,V_o,m):
    #calculate alpha from E and V_o
    return np.sqrt(2*m*np.abs(E-V_o)/hbar**2)
def calc_k(E,m):
    #calculate k from E
    return np.sqrt(2*m*E/hbar**2)

def calc_En(n,m,V_o):
    #calculate E_n for a wide deep well 
    #return n**2*np.pi**2*hbar**2/(2*m*(2*L)**2) - V_o 
    z_n = [0.44*np.pi,1.33*np.pi,2.17*np.pi]
    return z_n[n]**2*hbar**2/(2*m*L**2)
def calc_coeff(E,V_o,m):
    #calculate the coefficient A_n
    k = calc_k(E,m)
    alpha = calc_alpha(E,V_o,m)
    term_1 = (np.exp(2*alpha*L)-np.exp(-2*alpha*L))/(2*alpha)
    term_2 = (L + np.sin(2*k*L)/(2*L))*np.exp(-2*alpha*L)/np.cos(k*L)**2
    A_n = np.sqrt(1./(term_1+term_2))
    C_n = A_n*np.exp(-alpha*L)/np.cos(k*L)
    return A_n, C_n

fs='medium'
lw = 2
col = (218/256., 26/256., 50/256.)

L = 5.29e-11 #half-width of well (in m)
V_pot = (8*hbar/L)**2/(2*m_e) #z_o = 8 converted into V_o
E_n = []
for i in range(1,4):
    E_n.append(calc_En(i-1,m_e,V_pot))
fig = plt.figure(figsize=(4,4),dpi=150)
ax = fig.add_subplot(111)

x_rng = np.arange(-2*L,2*L,L/40.)
for i in range(0,3):
    A_n,C_n = calc_coeff(E_n[i],V_pot,m_e)
    psi = fsw_wave_func(x_rng,A_n,C_n,0,E_n[i],V_pot,m_e)/1e-5
    ax.fill_between(x_rng[x_rng<=-L]/L,psi[x_rng<=-L]+3.5*i,3.5*i,color='gray')
    ax.fill_between(x_rng[x_rng>=L]/L,psi[x_rng>=L]+3.5*i,3.5*i,color='gray')
    ax.axhline(3.5*i,linestyle='--',color='k')
    ax.plot(x_rng/L,psi+3.5*i,'-',color=col,lw=lw)

ax.set_yticks([])
ax.set_ylim(0,9)
ax.axvline(-1,linestyle='-',color='k')
ax.axvline(1,linestyle='-',color='k')
ax.set_xlim(-2,2)
ax.set_xlabel("$x$ (L)")

glue("fsw_func_fig", fig, display=False);

Fig. 6.3 Wave functions (red curves) for the finite square well using \(z_o = 8\). The gray regions mark where the wave functions penetrate into the walls of the potential well.#

The application of the boundary conditions leads to quantized energy values \(E_n\) and associated wave functions \(\psi_n\). Remarkably, the particle has a finite probability of being outside the square well (see Fig. 6.3). Notice that the wave functions join smoothly at the edges of the well and approach zero exponentially outside the well.

The particle existing outside the well is prohibited classically, but occurs in quantum mechanics. Because of the exponential decrease of the wave functions (\(\psi_I\) and \(\psi_{III}\)), the probability of the particle penetrating a distance greater than \(\delta x \approx 1/\alpha\) decreases quickly, where

(6.33)#\[\begin{align} \delta x \approx \frac{1}{\alpha} = \frac{\hbar}{\sqrt{2m(V_o-E)}} \end{align}\]

is called the penetration depth. The fraction of particles that penetrates into the outer walls of the potential well is exceedingly small, but the result has important applications (e.g., tunneling).
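As an illustration (with assumed numbers, not taken from the text), the penetration depth for an electron bound 1 eV below the top of the well is a fraction of a nanometer:

```python
import numpy as np
from scipy.constants import hbar, m_e, eV

V0_minus_E = 1.0*eV                      # assumed: an electron bound 1 eV below the well top
dx = hbar/np.sqrt(2*m_e*V0_minus_E)      # penetration depth from Eqn. (6.33)
print("penetration depth = %.2f nm" % (dx/1e-9))   # about 0.2 nm
```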

6.5. Three-Dimensional Infinite-Potential Well#

Like the one-dimensional infinite-potential well, the three-dimensional potential well is expected to have time-independent solutions that can be determined from the time-independent Schrödinger equation. The wave function must depend on all three spatial coordinates, \(\psi =\psi(x,y,z)\). Beginning with the conservation of energy, we multiply by the wave function to get

(6.34)#\[\begin{align} \frac{p^2}{2m}\psi + V\psi = E\psi. \end{align}\]

Then, we use the momentum operator for each dimension because \(\hat{p}^2 = \hat{p}_x^2 + \hat{p}_y^2 + \hat{p}_z^2\). Individually we have

\[\begin{align*} \hat{p}_x \psi &= -i\hbar \frac{\partial \psi}{\partial x}, \\ \hat{p}_y \psi &= -i\hbar \frac{\partial \psi}{\partial y}, \\ \hat{p}_z \psi &= -i\hbar \frac{\partial \psi}{\partial z}. \\ \end{align*}\]

The application of the \(\hat{p}^2\) operator gives

(6.35)#\[-\frac{\hbar^2}{2m}\left(\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2}+ \frac{\partial^2 \psi}{\partial z^2} \right) + V\psi = E\psi,\]

which defines the time-independent Schrödinger equation in three dimensions. The expression in parentheses is the Laplacian operator, which is usually written using the shorthand notation as

(6.36)#\[\begin{align} \nabla^2 = \frac{\partial^2 }{\partial x^2} + \frac{\partial^2 }{\partial y^2} + \frac{\partial^2 }{\partial z^2}, \end{align}\]

and we can write Eqn. (6.35) as

(6.37)#\[\begin{align} -\frac{\hbar^2}{2m}\nabla^2 \psi + V\psi = E\psi. \end{align}\]

Exercise 6.7

Consider a free particle inside a box with lengths \(L_1\), \(L_2\), and \(L_3\) along the \(x\), \(y\), and \(z\) axes, respectively. The particle is constrained to be inside the box. (a) Find the wave functions and energies. (b) Find the ground-state wave function for a cube. (c) Find the energies of the ground and first excited state for a cube of sides \(L\).

(a) Similar to the one-dimensional infinite square well, it is reasonable to try a sinusoidal wave function for each of the dimensions. As a result, we obtain

(6.38)#\[\begin{align} \psi(x,y,z) = A\sin(k_1x)\sin(k_2y)\sin(k_3z), \end{align}\]

which has a normalization constant \(A\) and the quantities \(k_i\) (\(k_1\), \(k_2\), and \(k_3\)) are determined by applying the appropriate boundary conditions.

The condition that \(\psi = 0\) at \(x=L_1\) requires that \(k_1L_1 = n_1\pi\). Generalizing this constraint and solving for \(k_i\), we get

(6.39)#\[\begin{align} k_1 = \frac{n_1\pi}{L_1}, \quad k_2 = \frac{n_2\pi}{L_2}, \quad k_3 = \frac{n_3\pi}{L_3}, \end{align}\]

using the integers \(n_1\), \(n_2\), and \(n_3\) for quantum numbers in each of the dimensions.

Now, we need to take the appropriate derivatives of \(\psi\) with respect to \(x\) (the derivatives in \(y\) and \(z\) will be similar). We have

\[\begin{align*} \frac{\partial \psi}{\partial x} &= \frac{\partial}{\partial x} \left[A\sin(k_1x)\sin(k_2y)\sin(k_3z) \right], \\ &= k_1A\cos(k_1x)\sin(k_2y)\sin(k_3z), \\ \frac{\partial^2 \psi}{\partial x^2} &= \frac{\partial}{\partial x} \left[ k_1A\cos(k_1x)\sin(k_2y)\sin(k_3z) \right], \\ &= -k_1^2A \sin(k_1x)\sin(k_2y)\sin(k_3z) = -k_1^2 \psi. \end{align*}\]

Equation (6.35) becomes

\[ \frac{\hbar^2}{2m}\left(k_1^2 + k_2^2 + k_3^2\right)\psi = E\psi. \]

The \(k_i\) represent a wave number for each dimension, where we can generalize the energies of the one-dimensional case to

(6.40)#\[\begin{align} E_n = \frac{\pi^2\hbar^2}{2m}\left(\frac{n_1^2}{L_1^2} + \frac{n_2^2}{L_2^2} + \frac{n_3^2}{L_3^2} \right), \end{align}\]

where the allowed energies depend on the three quantum numbers. The wave function can also be written in terms of the quantum numbers as

(6.41)#\[\begin{align} \psi(x,y,z) = A\sin\left(\frac{n_1\pi x}{L_1}\right) \sin\left(\frac{n_2\pi y}{L_2}\right) \sin\left(\frac{n_3\pi z}{L_3}\right) \end{align}\]

(b) For a cube \(L_1 = L_2 = L_3 = L\) and the ground state is given by \(n_1 = n_2 = n_3 = 1\). The ground-state wave function is simply

(6.42)#\[\begin{align} \psi(x,y,z) = A\sin\left(\frac{\pi x}{L}\right) \sin\left(\frac{\pi y}{L}\right) \sin\left(\frac{\pi z}{L}\right). \end{align}\]

(c) For a cube the energies are given as

(6.43)#\[\begin{align} E_n = \frac{\pi^2\hbar^2}{2mL^2}\left(n_1^2 + n_2^2 + n_3^2\right), \end{align}\]

and the ground state energy is \(E_{111} = \frac{3\pi^2\hbar^2}{2mL^2}\).

The first excited state is when one quantum number is increased by one so that

\[\begin{align*} E_{211} &= \frac{\pi^2\hbar^2}{2mL^2}\left(2^2 + 1^2 + 1^2\right) = \frac{6\pi^2\hbar^2}{2mL^2} = \frac{3\pi^2\hbar^2}{mL^2}, \\ &= E_{121} = E_{112}. \end{align*}\]

There are three possible combinations for the first excited state depending on which dimension of the cube is excited.

A given state is degenerate when there is more than one wave function for a given energy (see Exercise 6.7). Degeneracies are often the result of symmetries. For a cube, there is a degeneracy, but it is removed if the sides have different lengths because there would be different energies. Degeneracy also occurs in classical physics where orbits with different eccentricities may have the same energy. A perturbation of the potential energy can remove a degeneracy. Energy levels can split (thereby removing a degeneracy) by applying an external magnetic (Zeeman effect) or electric (Stark effect) field.
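A short enumeration (an illustrative sketch, not from the text) makes this counting explicit for the cubic box of Exercise 6.7 by grouping states \((n_1, n_2, n_3)\) according to \(n_1^2+n_2^2+n_3^2\):

```python
from collections import defaultdict

# group states (n1, n2, n3) of a cubic box by n1^2 + n2^2 + n3^2,
# which is the energy in units of pi^2*hbar^2/(2*m*L^2) from Eqn. (6.43)
levels = defaultdict(list)
for n1 in range(1, 5):
    for n2 in range(1, 5):
        for n3 in range(1, 5):
            levels[n1**2 + n2**2 + n3**2].append((n1, n2, n3))

for E in sorted(levels)[:4]:
    print(E, levels[E])
# 3  [(1, 1, 1)]                          ground state (nondegenerate)
# 6  [(1, 1, 2), (1, 2, 1), (2, 1, 1)]    first excited state (threefold degenerate)
# 9  [(1, 2, 2), (2, 1, 2), (2, 2, 1)]
# 11 [(1, 1, 3), (1, 3, 1), (3, 1, 1)]
```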

6.6. Simple Harmonic Oscillator#

Simple harmonic oscillators (SHOs) commonly occur in nature and are one of the first physical systems studied within introductory physics. Consider a spring with a spring constant \(\kappa\), which is in equilibrium at \(x=x_o\). There is a restoring force described through Hooke’s law and a potential energy, which is given as

\[\begin{align*} F &= -\kappa(x-x_o), \\ V &= \frac{1}{2}\kappa (x-x_o)^2. \end{align*}\]

The application of the restoring force introduces simple harmonic motion (SHM). Diatomic molecules or a system of atoms in a solid lattice can be approximated by SHM in a general way. Within a lattice, the force on the atoms depends on the distance \(x\) from some equilibrium position and the potential \(V(x)\) can be represented by a Taylor series as

(6.44)#\[\begin{align} V(x) = V_o + V_1(x-x_o) + \frac{V_2}{2}\left(x-x_o\right)^2 + \cdots, \end{align}\]

where each term of the series has a constant \(V_i\). For \(x-x_o \approx 0\) (i.e., a small perturbation), the higher terms are negligible. The minimum of the potential occurs at the equilibrium position, which means that \(dV/dx = 0\) and the lowest surviving term of the potential is

\[ V(x) = \frac{V_2}{2}\left(x-x_o\right)^2. \]

To study the quantum description of SHM, we insert the potential \(V = \kappa x^2/2\) (taking \(x_o = 0\)) into Eqn. (6.7) to get

(6.45)#\[\begin{align} \frac{d^2 \psi}{dx^2} = -\frac{2m}{\hbar^2}\left(E - \frac{1}{2}\kappa x^2 \right)\psi = \left(\frac{m\kappa}{\hbar^2}x^2 - \frac{2mE}{\hbar^2}\right) \psi. \end{align}\]

Through a pair of variable transformations,

(6.46)#\[\begin{align} \alpha^2 &= \frac{m\kappa}{\hbar^2}, \\ \beta &= \frac{2mE}{\hbar^2}, \end{align}\]

we get

(6.47)#\[\begin{align} \frac{d^2 \psi}{dx^2} = (\alpha^2 x^2 - \beta)\psi. \end{align}\]

The particle is confined to the potential well, and thus, has zero probability of being at \(x= \pm \infty\), which means

\[ \lim_{x\rightarrow \pm \infty} \psi(x) = 0. \]

Additionally, the lowest energy state cannot be \(E=0\), because that would require both \(x=0\) (zero potential energy) and \(p=0\) (zero kinetic energy) simultaneously. The uncertainty principle prevents us from knowing \(x\) and \(p\) exactly at the same time, which means that at \(x=0\) we must have \(p>0\) (i.e., \(E>0\)).

The lowest energy state therefore sits at some nonzero energy \(E_o\). If we represent this energy as a height above the bottom of the well, then the classical turning points lie a symmetric distance \(\pm a\) from the equilibrium position, with \(E_o = \kappa a^2/2\). The walls of the potential are similar to those of the finite potential well: there is some small probability of the wave function existing beyond the classical turning points, but the wave function decreases rapidly on the far side of the barrier. The minimum energy \(E_o\) is also called the zero-point energy (see Fig. 6.4).

Hide code cell content
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import hermite
from scipy.constants import m_e, hbar
from myst_nb import glue
from matplotlib import rcParams

rcParams.update({'font.size': 16})
rcParams.update({'mathtext.fontset': 'cm'})

def SHO_potential(x):
    #calculate the SHO potential
    return 0.5*kappa*x**2

def SHO_wave_func(n,x):
    #calculate the SHO wave function
    A = (alpha/np.pi)**0.25
    if n == 0:
        return A*np.exp(-alpha*x**2/2)

alpha  = 1. #Assume alpha = 1 --> kappa = hbar^2/m
kappa = hbar**2*alpha**2/m_e 
omega = np.sqrt(kappa/m_e)
a = np.sqrt(hbar*omega/kappa) #width of the potential at E_o

fs = 'medium'
lw = 2 
col = (218/256., 26/256., 50/256.)

fig = plt.figure(figsize=(8,4),dpi=150)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)

x = np.arange(-3*a,3*a,a/40.)
ax1.plot(x/a,SHO_potential(x)/(hbar*omega),'-',color=col,lw=lw,label='$V(x)$')
ax1.axhline(0.5,0.25,0.75,linestyle='--',lw=1.5)
ax1.axvline(-1,0,0.25,linestyle='--',lw=1.5)
ax1.axvline(1,0,0.25,linestyle='--',lw=1.5)
ax1.axvline(0,0,0.8,linestyle='-',color='k',lw=1.5)
ax1.legend(loc='best',fontsize=fs)

ax2.plot(x/a,SHO_wave_func(0,x),'-',color=col,lw=lw,label='$\psi_o$')
ax2.axvline(-1,0,0.45,linestyle='--',lw=1.5)
ax2.axvline(1,0,0.45,linestyle='--',lw=1.5)
ax2.legend(loc='best',fontsize=fs)

ax1.set_ylim(0,2)
ax1.set_xlim(-2,2)
ax1.set_xlabel("Position $x$ ($a$)",fontsize=fs)
ax1.set_yticks([])

ax2.set_ylim(0,1.)
ax2.set_xlim(-3,3)
ax2.set_xlabel("Position $x$ ($a$)",fontsize=fs)
ax2.set_xticks(range(-2,3))
ax2.set_yticks([])

glue("SHO_fig", fig, display=False);

Fig. 6.4 Simple harmonic oscillator potential (left) and ground state wave function (right). The horizontal (dashed) line represents the zero-point energy.#

Exercise 6.8

Estimate the minimum energy of the simple harmonic oscillator (SHO) allowed by the uncertainty principle.

From classical mechanics, the average kinetic energy is equal to the average potential energy (i.e., \(K_{\rm av}=V_{\rm av}\); the virial theorem). As a result, the average potential and kinetic energies are each equal to half of the total energy. We can write this mathematically as

\[\begin{align*} E_{\rm tot} &= K_{\rm av} + V_{\rm av} = 2 K_{\rm av},\\ K_{\rm av} &= \frac{1}{2}E_{\rm tot}. \end{align*}\]

Applying this to the SHO gives

\[ K_{\rm av} = \frac{(p^2)_{\rm av}}{2m} = \frac{\kappa}{2}(x^2)_{\rm av} = \frac{E_{\rm tot}}{2}. \]

The mean value of \(x\) is zero, but the mean value of \(x^2\), \((x^2)_{\rm av}\), is the mean square deviation \((\Delta x)^2\) (see the uncertainty principle in general). A similar reasoning can be applied to get \((p^2)_{\rm av} = (\Delta p)^2\). Then, we must have

\[\begin{align*} E = \kappa (\Delta x)^2 &= \frac{(\Delta p)^2}{m}, \\ \Delta x &= \frac{\Delta p}{\sqrt{m\kappa}}. \end{align*}\]

From Heisenberg’s uncertainty principle, we have \(\Delta p \Delta x \geq \hbar/2\). Then the minimum value of \(\Delta x = \hbar/(2\Delta p)\). Combining with the previous result, we have

\[ (\Delta x)^2 = \Delta x \cdot \Delta x = \frac{\Delta p}{\sqrt{m\kappa}} \frac{\hbar}{2\Delta p} = \frac{\hbar}{2\sqrt{m\kappa}}.\]

Then, the lowest energy is

\[ E_o = \kappa\frac{\hbar}{2\sqrt{m\kappa}} = \frac{\hbar}{2}\sqrt{\frac{\kappa}{m}} = \frac{1}{2}\hbar \omega. \]

The zero point energy for the SHO is \(\frac{1}{2}\hbar \omega\).
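This algebra is simple enough to verify symbolically; the minimal sympy sketch below substitutes the limiting \((\Delta x)^2\) back into \(E_o = \kappa(\Delta x)^2\):

```python
from sympy import symbols, sqrt, simplify, Rational

hbar, m, kappa = symbols('hbar m kappa', positive=True)
dx2 = hbar/(2*sqrt(m*kappa))             # (Delta x)^2 at the uncertainty limit, from above
E_o = kappa*dx2                          # E_o = kappa*(Delta x)^2
print(simplify(E_o - Rational(1, 2)*hbar*sqrt(kappa/m)))   # 0
```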

Deriving the wave function solutions for the SHO is beyond our scope. They are written in terms of the Hermite polynomials \(H_n(x)\), where the order \(n\) is a non-negative integer. The general form of the wave function and the first four \(\psi_n\) are

(6.48)#\[\begin{split}\psi_n &= H_n(x)e^{-\alpha x^2/2}, \\ \psi_0 &= \left(\frac{\alpha}{\pi}\right)^{1/4}e^{-\alpha x^2/2}, \\ \psi_1 &= \sqrt{2\alpha}x \psi_o, \\ \psi_2 &= \frac{1}{\sqrt{2}}\left(2\alpha x^2 - 1\right)\psi_o, \\ \psi_3 &= \sqrt{\frac{\alpha}{3}}\left(2\alpha x^3 - 3x\right)\psi_o.\end{split}\]

Fig. 6.5 Simple harmonic oscillator wave functions (left) and probability density functions (right). The vertical (dashed) lines represent the width of the potential well at a given energy \(E_n\).#

Hide code cell content
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import hermite
from scipy.constants import m_e, hbar
from myst_nb import glue
from matplotlib import rcParams

rcParams.update({'font.size': 16})
rcParams.update({'mathtext.fontset': 'cm'})

def SHO_potential(x):
    #calculate the SHO potential
    return 0.5*kappa*x**2

def SHO_wave_func(n,x):
    #calculate the SHO wave function
    psi_o = (alpha/np.pi)**0.25*np.exp(-alpha*x**2/2)
    if n == 0:
        return psi_o
    elif n == 1:
        return np.sqrt(2*alpha)*x*psi_o
    elif n == 2:
        return (2*alpha*x**2-1)/np.sqrt(2)*psi_o 
    elif n == 3:
        return np.sqrt(alpha)*x*(2*alpha*x**2-3)/np.sqrt(3)*psi_o 

alpha  = 1. #Assume alpha = 1 --> kappa = hbar^2/m
kappa = hbar**2*alpha**2/m_e 
omega = np.sqrt(kappa/m_e)
a = np.sqrt(hbar*omega/kappa) #width of the potential at E_o

fs = 'medium'
lw = 2 
col = (218/256., 26/256., 50/256.)

fig = plt.figure(figsize=(8,8),dpi=150)
ax11 = fig.add_subplot(421)
ax12 = fig.add_subplot(422)
ax21 = fig.add_subplot(423)
ax22 = fig.add_subplot(424)
ax31 = fig.add_subplot(425)
ax32 = fig.add_subplot(426)
ax41 = fig.add_subplot(427)
ax42 = fig.add_subplot(428)
ax_psi = [ax41,ax31,ax21,ax11]
ax_prob = [ax42,ax32,ax22,ax12]

x = np.arange(-5*a,5*a,a/40.)

for i in range(0,4):
    ax_psi[i].plot(x/a,SHO_wave_func(i,x),'-',color=col,lw=lw,label='$\psi_%i$' % i)
    ax_prob[i].plot(x/a,SHO_wave_func(i,x)**2,'-',color=col,lw=lw,label='$|\psi_%i|^2$' % i)

    x_a = np.sqrt(2*i+1)
    ax_psi[i].axvline(-x_a,linestyle='--',lw=1.5)
    ax_prob[i].axvline(-x_a,linestyle='--',lw=1.5)
    ax_psi[i].axvline(x_a,linestyle='--',lw=1.5)
    ax_prob[i].axvline(x_a,linestyle='--',lw=1.5)

    ax_psi[i].set_yticks([])
    ax_prob[i].set_yticks([])
    ax_psi[i].set_ylim(-1,1)
    ax_prob[i].set_ylim(0,0.45)
    if i == 0:
        ax_psi[i].set_ylim(0,1)
        ax_prob[i].set_ylim(0,0.7)

    ax_psi[i].set_xlim(-5,5)
    ax_prob[i].set_xlim(-5,5)
    ax_psi[i].set_xticks(range(-5,6))
    ax_prob[i].set_xticks(range(-5,6))
    if i > 0:
        ax_psi[i].set_xticklabels([])
        ax_prob[i].set_xticklabels([])
    else:
        ax_psi[i].set_xticklabels(['','-4','','-2','','0','','2','','4',''])
        ax_prob[i].set_xticklabels(['','-4','','-2','','0','','2','','4',''])

ax41.set_xlabel("Position $x$ ($a$)",fontsize=fs)
ax42.set_xlabel("Position $x$ ($a$)",fontsize=fs)

glue("SHO_waves", fig, display=False);
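
The hidden code cell above hard-codes the first four states. For arbitrary \(n\), the wave functions can be generated from the physicists' Hermite polynomials in scipy.special; a minimal sketch, assuming the standard normalization \(C_n = (\alpha/\pi)^{1/4}/\sqrt{2^n\, n!}\) (the function name SHO_wave_func_n is ours):

import numpy as np
from scipy.special import hermite, factorial

def SHO_wave_func_n(n, x, alpha=1.0):
    #normalized SHO state: psi_n(x) = (alpha/pi)^(1/4)/sqrt(2^n n!) H_n(sqrt(alpha) x) exp(-alpha x^2/2)
    C_n = (alpha/np.pi)**0.25/np.sqrt(2.0**n*factorial(n))
    return C_n*hermite(n)(np.sqrt(alpha)*x)*np.exp(-alpha*x**2/2)

#quick check against the closed forms in Eqn. (6.48), using alpha = 1
x = np.linspace(-5, 5, 201)
psi_o = (1/np.pi)**0.25*np.exp(-x**2/2)
print(np.allclose(SHO_wave_func_n(0, x), psi_o))               #True
print(np.allclose(SHO_wave_func_n(1, x), np.sqrt(2.)*x*psi_o)) #True

This reproduces the hard-coded states and extends to any \(n\) without writing each polynomial by hand.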

In contrast to the particle-in-a-box, the oscillatory behavior is due to the polynomial, which dominates at small \(x\), while the damping at large \(x\) is due to the Gaussian factor. The energy levels are given by

(6.49)#\[\begin{align} E_n = \left(n+\frac{1}{2}\right)\hbar \omega = \left(n+\frac{1}{2}\right)\hbar \sqrt{\frac{\kappa}{m}}. \end{align}\]

Fig. 6.6 Simple harmonic oscillator potential (red curve) with first four energy levels (horizontal dashed lines).#

The zero-point energy \(E_o\) is \(\hbar\omega/2\), which is precisely the value found in Exercise 6.8 using the uncertainty principle. The uncertainty principle is responsible for the minimum energy of the SHO, and the minimum value of the uncertainty principle can be determined using Gaussian wave packets (see Sect. 5.6). The minimum energy \(E_o\) allowed by the uncertainty principle for the ground state of the SHO is sometimes called the Heisenberg limit.

Hide code cell content
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import hermite
from scipy.constants import hbar, m_e
from myst_nb import glue
from matplotlib import rcParams

rcParams.update({'font.size': 16})
rcParams.update({'mathtext.fontset': 'cm'})

def SHO_potential(x):
    #calculate the SHO potential
    return 0.5*kappa*x**2

alpha  = 1. #Assume alpha = 1 --> kappa = hbar^2/m_e
kappa = hbar**2*alpha**2/m_e 
omega = np.sqrt(kappa/m_e)
a = np.sqrt(hbar*omega/kappa) #width of the potential at E_o

fs = 'medium'
lw = 2 
col = (218/256., 26/256., 50/256.)
E_col = ['k','b','c','violet']

fig = plt.figure(figsize=(4,4),dpi=150)
ax = fig.add_subplot(111)
x = np.arange(-5*a,5*a,a/40.)
ax.plot(x/a,SHO_potential(x)/(hbar*omega),'-',color=col,lw=lw)
for i in range(0,4):
    E_n = (i + 0.5)*hbar*omega
    x_n = 0.5 - np.sqrt(2*i+1)/10.
    x_p = 0.5 + np.sqrt(2*i+1)/10.
    ax.axhline(E_n/(hbar*omega),x_n,x_p,color=E_col[i],linestyle='--',lw=1.5,label='$E_%i$' % i)

ax.legend(loc='best',fontsize=fs)
ax.set_xticks(range(-5,6))
ax.set_xticklabels(['','-4','','-2','','0','','2','','4',''])
ax.set_xlabel("Position $x$ ($a$)",fontsize=fs)
ax.set_ylim(0,8.5)
ax.set_xlim(-5,5)
ax.set_yticks([])

glue("SHO_energy", fig, display=False);

Classically, the probability of finding a mass on a spring is greatest at the ends of the motion and smallest at the center. The quantum theory probability density for the lowest energy state \((\psi_o^2)\) is contrary to the classical one, where the largest probability is for the particle to be at the center. From the correspondence principle, this distinction disappears for large \(n\) and the average value of the quantum probability approaches the classical result.
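
A short numerical sketch of this comparison (the choice \(n = 20\) and the numerical normalization are ours): the classical probability density for an oscillator of amplitude \(A\) is \(P(x) = 1/(\pi\sqrt{A^2-x^2})\), and \(|\psi_n|^2\), once averaged over its rapid oscillations, approaches it.

import numpy as np
from scipy.special import hermite
from scipy.integrate import trapezoid

n = 20        #a moderately large quantum number (our choice)
alpha = 1.0   #dimensionless units, as in the plotting cells above
x = np.linspace(-8, 8, 4001)

#psi_n from the Hermite polynomial, normalized numerically to avoid large factorials
psi = hermite(n)(np.sqrt(alpha)*x)*np.exp(-alpha*x**2/2)
psi /= np.sqrt(trapezoid(psi**2, x))

#classical amplitude A from E_n = kappa*A^2/2 = (n + 1/2)*hbar*omega --> A = sqrt((2n+1)/alpha)
A = np.sqrt((2*n + 1)/alpha)
inside = np.abs(x) < A
P_classical = 1/(np.pi*np.sqrt(A**2 - x[inside]**2))

#|psi_n|^2 oscillates rapidly about P_classical; the averages over the allowed region nearly agree
print("average quantum density:   %1.4f" % (trapezoid(psi[inside]**2, x[inside])/(2*A)))
print("average classical density: %1.4f" % (trapezoid(P_classical, x[inside])/(2*A)))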

Exercise 6.9

Normalize the ground state wave function \(\psi_o\) for the simple harmonic oscillator (SHO). Find the expectation values \(\langle x \rangle\) and \(\langle x^2 \rangle\).

Starting from the general solution for the SHO, we have \(H_o = 1\) for the ground state and

\[ \psi_o(x) = Ae^{-\alpha x^2/2}. \]

The constant \(A\) is determined through normalization, where

\[\begin{align*} \int_{-\infty}^\infty \psi^*_o \psi_o\ dx &= 1, \\ A^2 \int_{-\infty}^\infty e^{-\alpha x^2}\ dx &= 1, \\ 2A^2 \int_0^\infty e^{-\alpha x^2}\ dx &= 1. \end{align*}\]

The above integral is called a Gaussian integral, which can be solved with the help of an integral table. Substituting the result from the integral table, we find

\[\begin{align*} 1 &= 2A^2\left( \frac{1}{2}\sqrt{\frac{\pi}{\alpha}}\right), \\ A^2 &= \sqrt{\frac{\alpha}{\pi}}, \\ A &= \left(\frac{\alpha}{\pi} \right)^{1/4}. \end{align*}\]

For the ground state wave function of the SHO, we have

\[ \psi_o(x) = \left(\frac{\alpha}{\pi} \right)^{1/4} e^{-\alpha x^2/2}. \]

The expectation value of \(x\) is found using

\[\begin{align*} \langle x \rangle &= \int_{-\infty}^\infty \psi^*_o x \psi_o\ dx, \\ &= \sqrt{\frac{\alpha}{\pi}} \int_{-\infty}^\infty x e^{-\alpha x^2} dx, \\ &= \sqrt{\frac{\alpha}{\pi}} \left(\int_0^\infty x e^{-\alpha x^2} dx + \int_{-\infty}^0 x e^{-\alpha x^2} dx \right), \\ &= \sqrt{\frac{\alpha}{\pi}} \left(\int_0^\infty x e^{-\alpha x^2} dx - \int_0^{\infty} x e^{-\alpha x^2} dx \right) = 0. \end{align*}\]

The value of \(\langle x \rangle\) must be zero because we are integrating an odd function of \(x\) over symmetric limits.

The expectation value of \(x^2\) is found using

\[\begin{align*} \langle x^2 \rangle &= \int_{-\infty}^\infty \psi^*_o x^2 \psi_o\ dx, \\ &= \sqrt{\frac{\alpha}{\pi}} \int_{-\infty}^\infty x^2 e^{-\alpha x^2}\ dx, \\ &= 2\sqrt{\frac{\alpha}{\pi}} \int_0^\infty x^2 e^{-\alpha x^2}\ dx, \\ &= 2\sqrt{\frac{\alpha}{\pi}} \left(\frac{1}{4\alpha} \sqrt{\frac{\pi}{\alpha}} \right), \\ &= \frac{1}{2\alpha}. \end{align*}\]

Inserting the value of \(\alpha\), we find

\[ \langle x^2 \rangle = \frac{\hbar}{2\sqrt{m\kappa}} = \frac{\hbar}{2m\omega}, \]

which is what we argued in Exercise 6.8 using the uncertainty principle.
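
These integrals are easy to check numerically; a minimal sketch using scipy.integrate.quad, with an arbitrary illustrative value of \(\alpha\):

import numpy as np
from scipy.integrate import quad

alpha = 2.5   #illustrative value; any positive alpha works
psi_o = lambda x: (alpha/np.pi)**0.25*np.exp(-alpha*x**2/2)

norm, _ = quad(lambda x: psi_o(x)**2, -np.inf, np.inf)
x_avg, _ = quad(lambda x: x*psi_o(x)**2, -np.inf, np.inf)
x2_avg, _ = quad(lambda x: x**2*psi_o(x)**2, -np.inf, np.inf)

print("normalization = %1.6f (expect 1)" % norm)
print("<x> = %1.2e (expect 0)" % x_avg)
print("<x^2> = %1.6f (expect 1/(2*alpha) = %1.6f)" % (x2_avg, 1/(2*alpha)))

The numbers reproduce the closed-form results above for any choice of \(\alpha\).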

6.7. Barriers and Tunneling#

For the finite square-well potential (see Sect. 6.3), there was a finite probability for the wave function to enter the walls of the potential, but it decays quickly inside them. A potential barrier describes a similar situation with the potential inverted. The potential is instead

(6.50)#\[\begin{align} V(x) = \begin{cases} 0 \quad & x\leq -L, \quad &\text{region I} \\ V_o \quad & -L < x < L, \quad &\text{region II} \\ 0 \quad & x\geq L. \quad &\text{region III} \end{cases} \end{align}\]

A particle is affected by the barrier potential \(V_o\) depending on its energy \(E\).

6.7.1. Potential Barrier with \(E>V_o\)#

Classically we expect that a particle with \(E>V_o\) will pass the barrier, but move with a reduced velocity in the region of \(V_o\). This can be shown through conservation of energy as

\[\begin{align*} E &= K + V_o, \\ K &= E - V_o, \\ v &= \sqrt{\frac{2}{m}(E-V_o)}. \end{align*}\]

The particle velocity must decrease for a positive \(V_o\) and increase for a negative \(V_o\). A classical analogy is a ball thrown horizontally over a mountain, where it experiences a pull due to gravity (ignoring air resistance). The pull increases when the ball is directly over the mountain because there is slightly more mass directly below it, and this extra force reduces the ball’s speed. Conversely, a ball traveling over a valley experiences slightly less gravity. This is not noticeable for most objects, but NASA’s GRACE mission has measured these perturbations of Earth’s gravity field using two satellites.

According to quantum mechanics, the particle will behave differently because of its wavelike character, where there will be differences in the wave number \(k\) in the three regions. The wave numbers are

(6.51)#\[\begin{align} k_I &= k_{III} = \frac{\sqrt{2mE}}{\hbar}, \quad &\text{where }V = 0 \\ k_{II} &= \frac{\sqrt{2m(E-V_o)}}{\hbar}. \quad &\text{where }V = V_o \end{align}\]
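
For concreteness, a quick evaluation of these wave numbers, using the particle energy and barrier height from Fig. 6.7 (\(E = 20\ {\rm eV}\), \(V_o = 10\ {\rm eV}\)):

import numpy as np
from scipy.constants import hbar, m_e, eV

E = 20*eV      #particle energy (values used for Fig. 6.7)
V_o = 10*eV    #barrier height

k_I = np.sqrt(2*m_e*E)/hbar            #wave number in regions I and III
k_II = np.sqrt(2*m_e*(E - V_o))/hbar   #wave number in region II

print("k_I  = %1.2e 1/m, wavelength = %1.3f nm" % (k_I, 2*np.pi/k_I/1e-9))
print("k_II = %1.2e 1/m, wavelength = %1.3f nm" % (k_II, 2*np.pi/k_II/1e-9))

The de Broglie wavelength is longer over the barrier, reflecting the particle's reduced momentum there.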

Another analogy comes from optics, where light bends (i.e., changes speed) after it penetrates another medium. In practice, some of the light is reflected and the rest is transmitted into the medium. The wave function will exhibit similar behavior, where there will be (1) an incident wave, (2) a reflected wave, and (3) a transmitted wave.

Classical mechanics allows no reflection if \(E>V_o\) and total reflection for \(E<V_o\). In contrast, quantum mechanics predicts some reflection for \(E\gg V_o\) and some transmission for \(E\ll V_o\).

The time-independent Schrödinger equations for the three regions are as follows:

(6.52)#\[\begin{align} \frac{d^2\psi_I}{dx^2} + \frac{2m}{\hbar^2}E\psi_I &= 0, \quad & \text{Region I } \\ \frac{d^2\psi_{II}}{dx^2} + \frac{2m}{\hbar^2}(E-V_o)\psi_{II} &= 0, \quad & \text{Region II } \\ \frac{d^2\psi_{III}}{dx^2} + \frac{2m}{\hbar^2}E\psi_{III} &= 0. \quad & \text{Region III } \end{align}\]

The corresponding wave functions for these equations are:

(6.53)#\[\begin{align} \psi_I &= Ae^{ik_Ix} + Be^{-ik_Ix}, \quad & \text{Region I } \\ \psi_{II} &= Ce^{ik_{II}x} + De^{-ik_{II}x}, \quad & \text{Region II } \\ \psi_{III} &= Fe^{ik_Ix} + Ge^{-ik_Ix}. \quad & \text{Region III } \end{align}\]

We assume that we have an incident particle coming from the left moving along the \(+x\) direction. In this case, the term \(Ae^{ik_Ix}\) represents the incident particle and the term \(Be^{-ik_Ix}\) represents a reflected particle moving in the \(-x\) direction. In region III, there are no reflected particles and thus, \(G=0\).

Similar to the finite square-well potential, we try to determine the coefficients using the conditions for a valid wave function:

  • the wave function must be continuous across a boundary \((\psi_I(-L) = \psi_{II}(-L))\),

  • its spatial derivative must be continuous across the boundary \((d\psi_I(-L)/dx = d\psi_{II}(-L)/dx)\).

Applying these conditions at both boundaries (\(x=-L\) and \(x=L\)) gives the following system of equations:

(6.54)#\[\begin{split}Ae^{-ik_IL} + Be^{ik_IL} &= Ce^{-ik_{II}L} + De^{ik_{II}L}, \\ k_IAe^{-ik_IL} - k_IBe^{ik_IL} &= k_{II}Ce^{-ik_{II}L} -k_{II}De^{ik_{II}L}, \\ Ce^{ik_{II}L} + De^{-ik_{II}L} &= Fe^{ik_IL}, \\ k_{II}Ce^{ik_{II}L} - k_{II}De^{-ik_{II}L} &= k_IFe^{ik_IL}.\end{split}\]

The probability of particles being reflected \(R\) or transmitted \(T\) is determined by the ratio of the appropriate \(\psi^*\psi\), or

(6.55)#\[\begin{align} R &= \frac{|\psi_I(\text{reflected})|^2}{|\psi_I(\text{incident})|^2} = \frac{B^*B}{A^*A}, \\[5pt] T &= \frac{|\psi_{III}(\text{transmitted})|^2}{|\psi_I(\text{incident})|^2} = \frac{F^*F}{A^*A}. \end{align}\]

Because the particles must be either reflected or transmitted, we must have \(R+T = 1\) (i.e., the total probability must equal unity).

The values of \(R\) and \(T\) are found by solving for \(C\) and \(D\) in terms of \(F\). Then those results are substituted to find \(A\) and \(B\) in terms of \(F\). This is a long process of tedious algebra, where the transmission probability is

(6.56)#\[\begin{align} T = \left[ 1 + \frac{V_o^2\sin^2(k_{II}L)}{4E(E-V_o)}\right]^{-1}. \end{align}\]

Notice that when \(k_{II}L = n\pi\), the second term goes to zero, resulting in a transmission probability equal to 1. Also, it is possible for particles to be reflected at both \(x=-L\) and \(x=L\). Their path difference back toward the \(-x\) direction is \(2L\). For an integral number of wavelengths inside the potential barrier, the two reflected wave functions are precisely out of phase and cancel completely.
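
A short sketch evaluating Eqn. (6.56) directly shows \(T\) reaching unity whenever \(k_{II}L = n\pi\); the barrier height matches Fig. 6.7, while the length \(L\) below is an illustrative choice:

import numpy as np
from scipy.constants import hbar, m_e, eV

V_o = 10*eV   #barrier height (as in Fig. 6.7)
L = 1e-9      #length appearing in Eqn. (6.56) (illustrative)

def transmission(E):
    #transmission probability from Eqn. (6.56) for E > V_o
    k_II = np.sqrt(2*m_e*(E - V_o))/hbar
    return 1./(1. + V_o**2*np.sin(k_II*L)**2/(4*E*(E - V_o)))

#energies where k_II*L = n*pi give T = 1 exactly
for n in range(1, 4):
    E_res = V_o + (n*np.pi*hbar/L)**2/(2*m_e)
    print("n = %i: E = %1.2f eV, T = %1.4f" % (n, E_res/eV, transmission(E_res)))
print("off resonance, e.g. E = 11 eV: T = %1.3f" % transmission(11*eV))

Between these resonant energies the transmission dips below unity, with the strongest reflection occurring just above the barrier top.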


Fig. 6.7 Wave function for a free particle (\(E=20\ {\rm eV}\)) traveling over a potential barrier of width \(2L\) and height \(V_o = 10\ {\rm eV}\).#

Hide code cell content
import numpy as np
import matplotlib.pyplot as plt
from scipy.constants import hbar, m_e, eV
from scipy.integrate import odeint, simpson
from scipy.optimize import minimize,fsolve
from myst_nb import glue


def ksqr(x):
    k_squared = 0
    if x<=-L:
        V = 0
    elif -L<x<L:
        V = V_o
    elif x>= L:
        V = 0
    return 2*m_e*np.abs(E_o-V)/hbar**2

def psi_func(x,coeff):
    #calculate the wave function
    psi_x = 0
    k_x = np.sqrt(ksqr(x))
    if x<=-L:
        psi_x = coeff[0]*np.cos(k_x*x) + coeff[1]*np.sin(k_x*x)
    elif -L<x<L:
        if E_o>V_o:
            psi_x = coeff[2]*np.cos(k_x*x) + coeff[3]*np.sin(k_x*x)
        else:
            psi_x = coeff[2]*np.exp(k_x*x) + coeff[3]*np.exp(-k_x*x)
    elif x>=L:
        psi_x = coeff[4]*np.cos(k_x*x) + coeff[5]*np.sin(k_x*x)
    return psi_x

def boundary(x):
    k_I = np.sqrt(ksqr(-1.5*L))
    k_II = np.sqrt(ksqr(0))
    x_coeff = [1,1,x[0],x[1],1,1]
    psi_I =  psi_func(-L,psi_coeff)
    psi_IIa = psi_func(-L+0.01*L,x_coeff)
    psi_IIb = psi_func(L-0.01*L,x_coeff)
    psi_III = psi_func(L,psi_coeff)
    return [psi_IIa - psi_I, psi_IIb-psi_III]



def advance_psi(x,xm,xp,dx,psi_coeff):
    #advance the wave function using Numerov's method
    k_x = np.sqrt(ksqr(x))
    k_xm = np.sqrt(ksqr(xm))
    k_xp = np.sqrt(ksqr(xp))
    psi_x = psi_func(x,psi_coeff)
    psi_xm = psi_func(xm,psi_coeff)
    psi_next = (psi_x*(2+5*dx**2*k_x**2/6.) - psi_xm*(1-dx**2*k_xm**2))/(1-dx**2*k_xp**2/12.)
    return psi_next

L = 1e-9

E_o = 20*eV
V_o = 10*eV
ymax = (E_o+V_o)/eV+5
k_I = np.sqrt(2*m_e*E_o)/hbar
x_o = -4*L
delta_x = L/1e2

x_rng = np.arange(-4*L,4*L,delta_x)
V_rng = np.zeros(len(x_rng))
x_bound = [300,500]

V_rng[x_bound[0]:x_bound[1]] = V_o/eV*np.ones(x_bound[1]-x_bound[0])

psi = np.zeros(len(x_rng))
psi_coeff = np.ones(6)

rng = (E_o + V_o)/eV
bounds = [(-rng,rng),(-rng,rng)]
opt = fsolve(boundary,[1,1])

psi_coeff[2:4] = opt
#psi_coeff_left[4:] = -psi_coeff_left[:2]

psi[0] = psi_func(x_o,psi_coeff) 
psi[-1] = psi_func(-x_o,psi_coeff)

full_idx = len(x_rng)
for i in range(1,full_idx-1):
    psi[i] = advance_psi(x_rng[i],x_rng[i-1],x_rng[i+1],delta_x,psi_coeff)

fig = plt.figure(figsize=(8,4),dpi=150)
ax = fig.add_subplot(111)

ax.plot(x_rng/L,psi + E_o/eV,'-',color=col,lw=2)
ax.plot(x_rng/L,V_rng,'k-',lw=2)
ax.fill_between(x_rng[300:500]/L,V_rng[300:500],color='gray',alpha=0.5)

ax.set_xlim(-4,4);
ax.set_xlabel("Position $x$ ($L$)",fontsize=fs)
ax.set_ylim(0,ymax)
ax.set_yticks([])

glue("barrier_potential", fig, display=False);

6.7.2. Potential Barrier with \(E<V_o\)#

When the energy \(E\) is less than a potential barrier \(V_o\), the particle cannot penetrate the barrier classically because its kinetic energy inside would be negative. The particle is reflected at \(x=-L\) and returns. However, the quantum mechanical result describes a small (but finite) probability that the particle can penetrate the barrier and even emerge on the other side.

barrier potential

Fig. 6.8 Classically, a particle of energy \(E<V_o\) (\(U_o = V_o\) in the figure labels) could not penetrate the region inside the barrier. But the wave function associated with a free particle must be continuous at the barrier and will show an exponential decay inside the barrier. The wave function must also be continuous on the far side of the barrier, so there is a finite probability that the particle will tunnel through the barrier. Image credit: Hyperphysics#

There are only a few changes to the equations already presented in Sect. 6.7.1. In region II, the wave function becomes

(6.57)#\[\begin{align} \psi_{II}(x) = Ce^{\kappa x} + De^{-\kappa x},\quad \text{where } \kappa = \sqrt{\frac{2m(V_o-E)}{\hbar^2}}. \end{align}\]

The parameter \(\kappa\) is a positive, real number because \(V_o > E\). The application of boundary conditions and a lot of tedious algebra relate the coefficients of the wave functions.

The equations for the reflection and transmission probabilities are unchanged, but the results will be modified by replacing \(ik_{II}\rightarrow \kappa\). Tunneling is the effect where the particle penetrates the barrier and emerges on the other side. The transmission probability is

(6.58)#\[\begin{align} T = \left[ 1 + \frac{V_o^2 \sinh^2(\kappa L)}{4E(V_o-E)}\right]^{-1}. \end{align}\]

The only differences from the previous transmission coefficient are the replacement of the sine with the hyperbolic sine (\(\sinh\)) and of \(E-V_o\) with \(V_o-E\). When \(\kappa L \gg 1\), the transmission probability reduces to

(6.59)#\[\begin{align} T = \frac{16E}{V_o}\left(1 -\frac{E}{V_o}\right)e^{-2\kappa L}. \end{align}\]
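
A quick numerical comparison of the exact and approximate forms (the particle energy, barrier height, and length below are illustrative values chosen so that \(\kappa L \gg 1\)):

import numpy as np
from scipy.constants import hbar, m_e, eV

E = 5*eV      #particle energy (illustrative)
V_o = 10*eV   #barrier height (illustrative)
L = 1e-9      #length appearing in Eqns. (6.58) and (6.59) (illustrative)

kappa = np.sqrt(2*m_e*(V_o - E))/hbar
T_exact = 1./(1. + V_o**2*np.sinh(kappa*L)**2/(4*E*(V_o - E)))
T_approx = 16*(E/V_o)*(1 - E/V_o)*np.exp(-2*kappa*L)

print("kappa*L = %1.1f" % (kappa*L))
print("T (exact, Eqn. 6.58)   = %1.3e" % T_exact)
print("T (approx., Eqn. 6.59) = %1.3e" % T_approx)

For \(\kappa L\) this large, the two expressions agree closely, and the exponential factor \(e^{-2\kappa L}\) controls the transmission.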

Tunneling can be understood using the uncertainty principle, where inside the barrier the wave function \(\psi_{II}\) is dominated by the \(e^{-\kappa x}\) term, and the probability density \(|\psi_{II}|^2 \approx e^{-2\kappa x}\). Over a small interval \(\Delta x = 1/\kappa\), the probability density of observing the particle has decreased substantially (\(e^{-2} \approx 0.14\)). The uncertainty in the momentum is \(\Delta p \geq \hbar/\Delta x = \hbar \kappa\). The minimum kinetic energy must be

(6.60)#\[\begin{align} K_{\rm min} = \frac{(\Delta p)^2}{2m} = \frac{\hbar^2 \kappa^2}{2m} = V_o - E. \end{align}\]

The energy violation allowed by the uncertainty principle is exactly equal to the negative kinetic energy required inside the barrier. Thus, quantum mechanics and the uncertainty principle allow the particle to penetrate into a classically forbidden region.
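
A quick numeric check of this statement, with illustrative values of \(E\) and \(V_o\):

import numpy as np
from scipy.constants import hbar, m_e, eV

E = 5*eV      #particle energy (illustrative)
V_o = 10*eV   #barrier height (illustrative)

kappa = np.sqrt(2*m_e*(V_o - E))/hbar   #decay constant inside the barrier
K_min = hbar**2*kappa**2/(2*m_e)        #minimum kinetic energy from Delta p = hbar*kappa

print("penetration depth 1/kappa = %1.3f nm" % (1/kappa/1e-9))
print("K_min = %1.2f eV and V_o - E = %1.2f eV" % (K_min/eV, (V_o - E)/eV))

The equality \(K_{\rm min} = V_o - E\) holds by construction, since \(\kappa\) is defined through the same combination.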

Exercise 6.10

In a particular semiconductor device, electrons that are accelerated through a potential of \(5\ {\rm V}\) attempt to tunnel through a barrier (\(0.8\ {\rm nm}\) wide and \(10\ {\rm V}\) tall). What fraction of the electrons are able to tunnel through the barrier if the potential is zero outside the barrier?

The kinetic energy of the electrons is \(5\ {\rm eV}\), which is less than the barrier potential height (\(V_o = 10\ {\rm eV}\)). Thus, we use the equation for the transmission probability with the \(\sinh\) term and \(\kappa\). The parameter \(\kappa\) is given as

\[\begin{align*} \kappa &= \sqrt{\frac{2m_e(V_o-E)}{\hbar^2}}, \\ & = \frac{\sqrt{2(9.109 \times 10^{-31}\ {\rm kg})(5\ {\rm eV})(1.602 \times 10^{-19}\ {\rm J/eV})}}{1.055 \times 10^{-34}\ {\rm J\cdot s}},\\ & = 1.15 \times 10^{10}\ {\rm m^{-1}}, \end{align*}\]

and the value of \(\kappa L \approx 9.2\).

The equation for the transmission probability gives

\[\begin{align*} T = \left[1 + \frac{(1.602 \times 10^{-18}\ {\rm J})^2\sinh^2(9.2)}{4(8.01 \times 10^{-19}\ {\rm J})(8.01 \times 10^{-19}\ {\rm J})}\right]^{-1} = 4.1 \times 10^{-8}. \end{align*}\]

The fraction of electrons that can tunnel through the barrier is equal to the transmission probability. (The code below, which does not round \(\kappa L\) to 9.2, gives \(T \approx 4.4 \times 10^{-8}\).)

import numpy as np
from scipy.constants import hbar, m_e, eV

def get_kappa(m,V_o,E):
    #calculate the parameter kappa
    #m = mass of particle
    #V_o = height of potential barrier
    #E = kinetic energy of particle
    return np.sqrt(2*m*(V_o-E))/hbar

def Transmission_coeff(kappa,L,V_o,E):
    T_inv = 1. + V_o**2*np.sinh(kappa*L)**2/(4*E*(V_o-E))
    return 1./T_inv

L_bar = 8e-10 #width of potential barrier
E_el = 5*eV #kinetic energy of the electrons
V_bar = 10*eV #potential barrier height

kappa = get_kappa(m_e,V_bar,E_el)
print("The parameter kappa is %1.2e m^-1 and kappa*L is %1.2f.\n" % (kappa,kappa*L_bar))
T = Transmission_coeff(kappa,L_bar,V_bar,E_el)
print("The fraction of electrons (transmission coefficient) that can penetrate the barrier is %1.1e." % T)
The parameter kappa is 1.15e+10 m^-1 and kappa*L is 9.16.

The fraction of electrons (transmission coefficient) that can penetrate the barrier is 4.4e-08.

6.7.3. Potential Well#

A particle of energy \(E>0\) could instead pass over a potential well rather than encountering a potential barrier. This is an example of the above cases with \(V=-V_o\) in the region \(-L< x <L\) and zero elsewhere. Classically, the particle would gain kinetic energy and accelerate while passing over the well region (and slow back down afterward). In quantum mechanics, reflection and transmission may occur, but the wavelength inside the potential well is smaller than outside.

When the potential well width is precisely equal to a half-integral or integral number of wavelengths, the reflected waves may be out of phase or in phase with the original wave, and cancellations or resonances may occur, as illustrated in the sketch below.
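
A brief numerical sketch of these resonances, assuming Eqn. (6.56) carries over to a well by substituting \(V_o \rightarrow -V_o\); the well depth, length, and the search over resonant energies are illustrative choices:

import numpy as np
from scipy.constants import hbar, m_e, eV

V_o = 10*eV   #well depth (illustrative)
L = 1e-9      #length appearing in Eqn. (6.56) (illustrative)

def transmission_well(E):
    #Eqn. (6.56) with V_o -> -V_o for a well of depth V_o
    k_II = np.sqrt(2*m_e*(E + V_o))/hbar
    return 1./(1. + V_o**2*np.sin(k_II*L)**2/(4*E*(E + V_o)))

#resonances (T = 1) occur where k_II*L = n*pi with E > 0
n_min = int(np.ceil(np.sqrt(2*m_e*V_o)*L/(np.pi*hbar)))
for n in range(n_min, n_min + 3):
    E_res = (n*np.pi*hbar/L)**2/(2*m_e) - V_o
    print("n = %i: E = %1.2f eV, T = %1.4f" % (n, E_res/eV, transmission_well(E_res)))
print("off resonance, e.g. E = 1 eV: T = %1.3f" % transmission_well(1*eV))

The resonance energies correspond to an integral number of half-wavelengths fitting across the well, so the reflected waves cancel and the well becomes effectively transparent.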

6.7.4. Alpha-Particle Decay#

The phenomenon of tunneling explains the alpha-particle decay of radioactive nuclei. Many nuclei heavier than lead are natural emitters of alpha particles, but their emission rates vary (by a factor of \(10^{13}\)) and their energies are limited to only \(4-8\ {\rm MeV}\). Inside the nucleus, the alpha particle feels the strong, short-range attractive nuclear force (i.e., a potential well) and the repulsive Coulomb force (i.e., a potential barrier).

alpha particle tunneling

Fig. 6.9 The nuclear strong force and the electromagnetic force influence radioactivity in the form of alpha decay. Image credit: Hyperphysics#

The alpha particle is trapped inside the nucleus and classically, it does not have enough energy to penetrate the Coulomb barrier. Through quantum mechanics, the alpha particle can “tunnel” through the Coulomb potential to escape. The widely varying alpha emission rates can be explained by small changes in the potential barrier (both height and width).

6.8. Homework Problems#

Problem 1

Show directly that the trial wave function \(\Psi (x,t) = e^{i(kx-\omega t)}\) satisfies the time-dependent Schrödinger wave equation.

Problem 2

Normalize the wave function \(Are^{-r/a}\) from \(r=0\) to \(\infty\), where \(a\) and \(A\) are constants. See Wolfram Alpha for help with difficult integrals and/or mathematical functions.

Problem 3

A wave function has the value \(A \sin{x}\) for \(0\leq x \leq \pi\), but zero elsewhere. Normalize the wave function and find the probability for the particle’s position: (a) \(0\leq x \leq \pi/4\) and (b) \(0\leq x \leq \pi/2\).

Problem 4

Determine the average value of \(\psi^2_n(x)\) inside an infinite square-well potential for \(n = 1,\ 5,\ 10,\ \text{and}\ 100\). Compare these averages with the classical probability of detecting the particle inside the box.

Problem 5

Compare the results of the infinite and finite square-well potentials.

(a) Are the wavelengths longer or shorter for the finite square well compared with the infinite well?

(b) Use physical arguments to decide whether the energies (for a given quantum number \(n\)) are larger or smaller for the finite square well compared to the infinite square well.

Problem 6

Write the possible (unnormalized) wave functions for each of the first four excited energy levels for the cubical box.

Problem 7

Check to see whether the simple linear combination of sine and cosine functions (\(\psi = A\sin(kx) + B\cos(kx)\)) satisfies the time-independent Schrödinger equation for a free particle (\(V=0\)).