2. Solutions to the Schrödinger equation#

2.1. The time-independent Schrödinger equation#

The full Schrödinger equation (1.12) is a partial differential equation, as it involves derivatives with respect to both time and space. Finding general solutions to partial differential equations is often difficult. However, when the potential \(\hat{V}\) is independent of time, we can use separation of variables to reduce the equation to two ordinary differential equations, one in time and one in space; from the solutions of these ordinary differential equations we can then construct the general solution to the partial differential equation.

The first step in separating variables is to look for a solution that can be written as a product of two functions, one of time and one of space. We’ll use a lowercase psi, \(\psi(x)\), for the space-dependent function, and an uppercase phi, \(\Phi(t)\), for the time-dependent one. Substituting \(\Psi(x, t) = \Phi(t) \psi(x)\) into the Schrödinger equation (1.12) we get

(2.1)#\[ i \hbar \psi(x) \frac{\partial \Phi(t)}{\partial t} = - \frac{\hbar^2}{2m} \Phi(t) \frac{\partial^2 \psi(x)}{\partial x^2} + V(x) \Phi(t) \psi(x). \]

We can now divide both sides of equation (2.1) by the product \(\Phi(t) \psi(x)\), which gives

(2.2)#\[ \frac{i \hbar}{\Phi(t)} \frac{\partial \Phi(t)}{\partial t} = - \frac{\hbar^2}{2m} \frac{1}{\psi(x)} \frac{\partial^2 \psi(x)}{\partial x^2} + V(x). \]

Note that in equation (2.2), the left-hand side only depends on time, while the right-hand side only depends on space. That means that if we change the time but stay at the same place, the right-hand side doesn’t change, so neither can the left-hand side. Likewise, if we compare two positions at the same time, the left-hand side doesn’t change, so neither can the right-hand side. Therefore, both sides must be equal to some constant. For later purposes, we’ll call this constant \(E\). Consequently, we can write equation (2.2) as two separate ordinary differential equations:

(2.3)#\[\begin{split}\begin{align*} i \hbar \frac{\mathrm{d}\Phi}{\mathrm{d}t} &= E \Phi(t) \\ - \frac{\hbar^2}{2m} \frac{\mathrm{d}^2 \psi(x)}{\mathrm{d}x^2} + V(x) \psi(x) &= E \psi(x). \end{align*}\end{split}\]

We multiplied equations (2.3)a and (2.3)b by \(\Phi(t)\) and \(\psi(x)\) again to make them more readable. Equation (2.3)a is a first-order differential equation that can be solved easily (see Section 7.4.1); the solution is given by

(2.4)#\[ \Phi(t) = A \exp\left(-\frac{i E t}{\hbar} \right), \]

where \(A\) is an integration constant. Because the final solution, \(\Psi(x, t) = \Phi(t) \psi(x)\), must be normalized, and we'll get an integration constant in \(\psi(x)\) as well, we can set \(A\) to \(1\) here.
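
For completeness, here is the one-line calculation behind equation (2.4): separating variables in equation (2.3)a gives

\[ \frac{\mathrm{d}\Phi}{\Phi} = -\frac{i E}{\hbar} \mathrm{d}t \quad\Longrightarrow\quad \ln \Phi(t) = -\frac{i E t}{\hbar} + \text{const}, \]

and exponentiating both sides yields equation (2.4).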

The solution of equation (2.3)b will depend on the choice of the potential function \(V(x)\). We will work out some examples in Section 2.2. You may have noticed that the left-hand side of equation (2.3)b is simply the Hamiltonian operator \(\hat{H}\), equation (1.19), applied to the space-dependent part of the wavefunction, \(\psi(x)\), which allows us to write the equation as

(2.5)#\[ \hat{H} \psi(x) = - \frac{\hbar^2}{2m} \frac{\mathrm{d}^2 \psi(x)}{\mathrm{d}x^2} + V(x) \psi(x) = E \psi(x). \]

Equation (2.5) is known as the time-independent Schrödinger equation. The solutions to this equation are pairs of eigenfunctions[1] \(\psi(x)\) and eigenvalues \(E\) of the Hamiltonian operator \(\hat{H}\). As the Hamiltonian represents the energy of the system, you’ll understand why we chose to label the constant in equation (2.3) as \(E\).

In general, we will find a collection of solutions of equation (2.5). The spectrum (collection of eigenvalues) could be discrete, continuous, or a combination of both. If the spectrum is discrete, we can label the solutions and their eigenvalues by integers; in one dimension, that will be a single integer, for which we usually will use the letter \(n\). The solutions are then given as combinations (\(\psi_n(x)\), \(E_n\)). For each of the solutions \(\psi_n(x)\), we have a corresponding time-dependent part \(\Phi_n(t) = \exp(-i E_n t / \hbar)\). By Axiom 1.4, the general solution to the time-dependent Schrödinger equation can then be written as a sum over the eigenfunctions[2] of the Hamiltonian:

(2.6)#\[ \Psi(x, t) = \sum_{n=1}^\infty c_n e^{-i E_n t / \hbar} \psi_n(x). \]

Given an initial condition, i.e. a wavefunction \(\Psi(x, 0)\) at \(t=0\), we can determine each of the \(c_n\) values using the orthogonality of the eigenfunctions (Lemma 1.2):

(2.7)#\[ \Braket{\psi_n | \Psi(x, 0)} = \sum_{k=1}^\infty c_k \Braket{\psi_n | \psi_k(x)} = \sum_{k=1}^\infty c_k \delta_{kn} = c_n. \]

We will get examples of discrete spectra for the infinite well and the harmonic potential, as well as for the bound states of hydrogen. If the spectrum of the Hamiltonian is continuous, we can also write the general solution in terms of the eigenfunctions, but now the sum becomes an integral, and the coefficients become functions of space; we will work out the details when considering the free particle. In both cases, the eigenstates of the Hamiltonian have two more properties. First, they are stationary states. Although the full wavefunction \(\Psi(x, t)\) is time-dependent, the probability density \(|\Psi(x, t)|^2\) is not, because the time-dependent part is a complex exponential:

\[ |\Psi(x, t)|^2 = \Psi^*(x, t) \Psi(x, t) = \psi^*(x) e^{i E t / \hbar} e^{-i E t / \hbar} \psi(x) = |\psi(x)|^2. \]

Consequently, all probabilities and all expectation values of particles in an eigenstate are time-independent. Second, the eigenstates have definite energy, which is simply an application of the fact that the energies are the eigenvalues of the Hamiltonian (see Section 1.4.3). We can easily verify this statement, as we have

\[ \Braket{\hat{H}} = \Braket{\Psi | \hat{H} \Psi} = \Braket{\Psi | E \Psi} = E \Braket{\Psi | \Psi} = E \]

and

\[ \Braket{\hat{H}^2} = \Braket{\Psi | \hat{H} \hat{H} \Psi} = \Braket{\Psi | \hat{H} E \Psi} = E^2 \Braket{\Psi | \Psi} = E^2 \]

so

\[ \sigma_{H}^2 = \Braket{\Psi | \left(\hat{H} - E\right)^2 \Psi} = \Braket{\hat{H}^2} - \Braket{\hat{H}}^2 = E^2 - E^2 = 0. \]

2.2. Some one-dimensional examples#

2.2.1. The infinite square well#

As our first example, we’ll consider a potential that is utterly unrealistic, but nicely illustrates some of the quantum effects while keeping the maths (relatively) simple. This example is known as the (one-dimensional) infinite square well potential, defined as

(2.8)#\[\begin{split} V(x) = \begin{cases} 0 & \text{if}\; 0 < x < a, \\ \infty & \text{otherwise}. \end{cases} \end{split}\]

As long as the particle is inside the well (\(0 < x < a\)), its potential energy is zero; outside the well its potential energy is infinite. Therefore, the particle will be confined to the well. The Hamiltonian inside the well is simply the kinetic energy; finding the eigenfunctions of that energy is a straightforward exercise. We have

\[ - \frac{\hbar^2}{2m} \frac{\mathrm{d}^2 \psi}{\mathrm{d}x^2} = E \psi, \]

or, defining \(k = \sqrt{2 m E / \hbar^2}\) and rearranging terms,

(2.9)#\[ \frac{\mathrm{d}^2 \psi}{\mathrm{d}x^2} + k^2 \psi = 0. \]

Mathematically, equation (2.9) is identical to the equation for a harmonic oscillator in classical mechanics. Its solutions can be written either as sines and cosines, or as complex exponentials. Taking the first approach, we have for the general solution to (2.9):

(2.10)#\[ \psi(x) = A \sin(kx) + B \cos(kx), \]

where \(A\) and \(B\) are integration constants, to be determined by the boundary conditions. Because the potential is infinite outside the well, we must have \(\psi(x) = 0\) for \(x<0\) or \(x>a\). We demand that \(\psi(x)\) is continuous; if not, we’d also get a discontinuity in our probability density. Therefore

(2.11)#\[ \psi(0) = A \sin(k \cdot 0) + B \cos(k \cdot 0) = B = 0 \]

and

(2.12)#\[ \psi(a) = A \sin(k a) = 0. \]

While equation (2.11) simply gives \(B=0\), equation (2.12) gives us a collection of possible values for \(k\), which in turn determine the eigenvalues \(E\). Equation (2.12) is satisfied for any value \(k = n \pi / a\), where \(n\) is an integer. For physical reasons, we cannot have \(n=0\), as then the solution would be \(\psi(x) = 0\), which is not normalizable (the particle has to be somewhere). As \(\sin(x) = -\sin(-x)\), we can also exclude the negative values of \(n\): the probability density goes with the square of \(\psi\), so we cannot distinguish between the solutions for \(n\) and \(-n\). Therefore, the solutions to the time-independent Schrödinger equation with the infinite well potential are given by

(2.13)#\[ \psi_n(x) = A_n \sin(\frac{n\pi}{a} x), \qquad E_n = \frac{\pi^2 \hbar^2}{2m a^2} n^2. \]

Note that the solutions are pairs of eigenfunctions and corresponding eigenvalues. We still need to determine the value of the integration constants \(A_n\) (which in principle can depend on \(n\), though in this case they won’t); these are set by the normalization condition (1.10):

\[ 1 = \int_{-\infty}^\infty \left|\Psi(x,t)\right|^2 \mathrm{d}x = \int_0^a \psi_n(x)^* \psi_n(x) \mathrm{d}x = A_n^* A_n \int_0^a \sin^2\left(\frac{n\pi}{a} x\right) \mathrm{d}x = \left|A_n\right|^2 \frac{a}{2}. \]

In principle, \(A_n\) could contain a complex factor, which we intrinsically cannot determine, as we cannot measure \(\psi(x)\) directly. This complex factor is known as the phase of the wavefunction; we can write any complex number \(A\) as a magnitude times a phase factor:

\[ A = |A| e^{i \phi}. \]

As we cannot determine the phase, we’ll usually take the magnitude of the integration constant for the normalization factor; in this case we find that \(A_n = \sqrt{2/a}\) for all values of \(n\).

In the calculation above, we found the possible energy eigenvalues of the Hamiltonian for a particle in an infinite square well. As we've asserted in Axiom 1.3, a measurement of the energy of such a particle will always yield an eigenvalue of the Hamiltonian. Therefore, the possible energies of the particle are quantized: only specific values are allowed, and all values in between are explicitly excluded. This quantization is what gives quantum mechanics its name, and is a clear distinction from classical mechanics, where there is no reason the particle inside the well could not have any energy. In particular, the quantum particle will always have a nonzero energy, because the lowest eigenvalue is nonzero. As the energy inside the well is kinetic energy only, this implies that a quantum particle in such a well cannot be standing still, in correspondence with the Heisenberg uncertainty principle; after all, a particle standing still would have a definite position and a definite (namely zero) momentum.
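
As a quick numerical illustration (the choice of particle and well width below are our own, not taken from the text), we can evaluate the ground state energy \(E_1 = \pi^2 \hbar^2 / 2 m a^2\) for an electron confined to a well of width \(a = 1\;\mathrm{nm}\), using the physical constants from scipy.constants:

import numpy as np
from scipy.constants import hbar, m_e, e

# Ground state energy of an electron in an infinite square well of
# width a = 1 nm (illustrative values, assumed for this example).
a = 1e-9  # well width in meters
E1 = np.pi**2 * hbar**2 / (2 * m_e * a**2)
print(f"E_1 = {E1:.3e} J = {E1 / e:.3f} eV")  # roughly 0.38 eV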

The eigenfunction corresponding to the lowest energy eigenvalue is known as the ground state. The other eigenfunctions are known as excited states. The ground state and excited states closely correspond to the fundamental and higher-order modes of waves in a string; for the current example, they are mathematically identical. The first few eigenstates of the square well are plotted in Fig. 2.1.

%config InlineBackend.figure_formats = ['svg']
import numpy as np
import matplotlib.pyplot as plt
from myst_nb import glue

# Define the normalized eigenfunctions of the infinite square well.
def psi(x, n, a):
    return np.sqrt(2/a) * np.sin(n * np.pi * x / a)


a = 1
x = np.linspace(0.0, a, 500)

fig, ax = plt.subplots(figsize=(6.5, 5))

# Draw the bottom and walls of the well as thick grey spines.
for side in ["bottom", "left", "right"]:
    ax.spines[side].set_linewidth(30)
    ax.spines[side].set_color("grey")
ax.spines["top"].set_visible(False)

ax.set_xlim(-0.05, a + 0.05)
ax.set_ylim(-2, 19)
ax.set_xticks([])
ax.set_yticks([])

# Plot the first four eigenstates, each shifted up by 5 units for clarity;
# the dashed lines mark psi = 0 for each state.
colors = ["slateblue", "orange", "olive", "firebrick"]
for n, color in enumerate(colors, start=1):
    offset = 5 * (n - 1)
    ax.hlines(offset, 0.0, a, linewidth=1, linestyle=(5, (10, 3)), color="black")
    ax.plot(x, psi(x, n, a) + offset, color=color, linewidth=2.8)
    ax.text(0.05, offset + 2, rf"$\psi_{n}$", c=color, fontsize=20)

# Save graph to load in figure later (special Jupyter Book feature)
glue("infinitewelleigenstates", fig, display=False)

Fig. 2.1 The first few eigenstates of the one-dimensional Hamiltonian with an infinite square well potential. The eigenstates are simply the sines which are zero at both sides of the well; while the ground state \(\psi_1(x)\) has no nodes inside the well, the excited states have increasing numbers of nodes, where the probability of finding the particle equals zero. Eigenstates have been shifted vertically to show them more clearly; for each, the corresponding dashed line represents \(\psi = 0\).#

It is a straightforward exercise to verify that the eigenstates \(\psi_n(x)\) of the Hamiltonian as given by equation (2.13) are orthogonal (as they have to be by Lemma 1.2). Once the eigenfunctions are normalized, they are also orthonormal, i.e.

\[ \Braket{\psi_m(x) | \psi_n(x)} = \delta_{mn}. \]
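
A quick numerical check of this orthonormality, in the same spirit as the plotting code above (a sketch; the grid resolution is an arbitrary choice):

import numpy as np

# Numerically verify <psi_m | psi_n> = delta_mn for the infinite-well
# eigenfunctions, using a simple Riemann sum on a fine grid.
a = 1.0
x = np.linspace(0.0, a, 10001)
dx = x[1] - x[0]

def psi(n):
    return np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

for m in range(1, 4):
    overlaps = [np.sum(psi(m) * psi(n)) * dx for n in range(1, 4)]
    print([round(o, 4) for o in overlaps])  # prints the rows of a 3x3 identity matrix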

In this case, you can also prove that the eigenfunctions form a complete set; the fact that the sines form a complete set is known as Dirichlet’s theorem and is why you can write any function \(f(x)\) as a Fourier sine series. As we’ve seen in Section 2.1, the general solution to the Schrödinger equation will be a linear combination of eigenstates \(\psi_n(x)\) with coefficients \(c_n\). If we then measure the energy of a particle in such an infinite well, by Axiom 1.3 we force it to choose one of the eigenvalues of the Hamiltonian, resulting in a ‘collapse’ of the wavefunction to the corresponding eigenstate. A concrete example may help illustrate these ideas.

Example 2.1 (particle in an infinite well)

Suppose a particle in an infinite well has an initial wave function given by

\[\Psi(x, 0) = A x (a-x) \quad\text{for}\quad 0 \leq x \leq a,\]

and of course \(\Psi(x, 0) = 0\) outside the well. Find (a) the normalization constant \(A\), (b) \(\Psi(x, t)\), (c) the probability that a measurement of the energy of this particle yields the value \(E_2\), and (d) the probability that a measurement of the energy of this particle yields the value \(E_1\).


Solution

  1. We can find the normalization constant \(A\) by imposing the normalization condition (1.10), which gives

    \[1 = \int_{-\infty}^\infty \left|\Psi(x,t)\right|^2 \mathrm{d}x = |A|^2 \int_0^a x^2 (a-x)^2 \mathrm{d}x = |A|^2 \frac{a^5}{30},\]

    so \(A = \sqrt{30/a^5}\).

  2. To find \(\Psi(x, t)\), we need to expand \(\Psi(x, 0)\) in the eigenstates of the Hamiltonian. By equation (2.7), we get the coefficient of the \(n\)th eigenstate by taking the inner product between \(\Psi(x, 0)\) and \(\psi_n(x)\):

    \[\begin{split}\begin{align*} c_n &= \Braket{\psi_n(x) | \Psi(x, 0)} = \int_0^a \sqrt{\frac{2}{a}} \sin(\frac{n\pi}{a} x) \sqrt{\frac{30}{a^5}} x (a-x) \mathrm{d}x \\ &= \frac{2\sqrt{15}}{a^2} \int_0^a x \sin(\frac{n\pi}{a} x) \mathrm{d}x - \frac{2\sqrt{15}}{a^3} \int_0^a x^2 \sin(\frac{n\pi}{a} x) \mathrm{d}x \\ &= \frac{4 \sqrt{15}}{n^3 \pi^3} \left[ \cos(0) - \cos(n\pi) \right]\\ &= \begin{cases} 0 & n \;\text{even} \\ 8 \sqrt{15}/(n\pi)^3 & n \;\text{odd} \end{cases} \end{align*}\end{split}\]

    Given the coefficients, the time-dependent solution is given by equation (2.6), which in this case reads

    \[\Psi(x, t) = \sqrt{\frac{30}{a}} \left( \frac{2}{\pi} \right)^3 \sum_{n\;\text{odd}} \frac{1}{n^3} \sin(\frac{n\pi}{a} x) \exp\left(- \frac{i \pi^2 \hbar}{2 m a^2} n^2 t \right).\]
  3. As \(c_2 = 0\), the probability of measuring \(E_2\) equals zero.

  4. The probability of measuring \(E_1\) is given by

    \[|c_1|^2 = \left|\frac{8 \sqrt{15}}{\pi^3} \right|^2 = \frac{960}{\pi^6} \approx 0.998.\]

    The probability of measuring \(E_1\) is thus very high, which makes sense if you consider the shape of the function we started with: a parabola with a maximum halfway across the well, strongly resembling the ground state. (We verify these numbers numerically below.)
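
The sketch below (with the arbitrary choice \(a = 1\)) computes the coefficients \(c_n\) by direct integration and checks that \(|c_1|^2 \approx 0.998\) and that the \(|c_n|^2\) sum to \(1\):

import numpy as np

# Numerical check of Example 2.1: expand Psi(x,0) = sqrt(30/a^5) x (a-x)
# in the eigenstates of the infinite well and inspect the coefficients.
a = 1.0
x = np.linspace(0.0, a, 10001)
dx = x[1] - x[0]

Psi0 = np.sqrt(30 / a**5) * x * (a - x)

def c(n):
    psi_n = np.sqrt(2 / a) * np.sin(n * np.pi * x / a)
    return np.sum(psi_n * Psi0) * dx

print(f"|c_1|^2 = {c(1)**2:.4f}")  # ~0.9986
print(f"c_2 = {c(2):.1e}")  # ~0: coefficients for even n vanish
print(f"sum of |c_n|^2 = {sum(c(n)**2 for n in range(1, 50)):.4f}")  # ~1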

2.2.2. Free particles#

You might wonder why we bothered restricting the particle to the infinite well in Section 2.2.1. Inside the well, the potential is zero, so wouldn't it be easier to start with a potential that is zero everywhere? Such a particle is known as a free particle. As we'll see, there's a subtlety involved with the absence of boundary conditions that makes the analysis of these particles a bit trickier; also, the spectrum of energy eigenvalues of these particles will be continuous rather than discrete (and thus not 'quantized').

Of course, the differential equation for the free particle is identical to that of the particle inside the well, as in both cases the potential energy is zero. We thus retrieve equation (2.9), but this time we’ll write the solutions as complex exponentials:

\[ \psi(x) = A e^{ikx} + B e^{-ikx}. \]

Writing the eigenstates in this form, you might recognize that they are the same as the plane wave eigenfunctions of the momentum operator \(\hat{p}\) we found in Section 1.4.3 (see equation (1.39)). This is hardly surprising, as for the free particle, the Hamiltonian is simply the kinetic energy, and the kinetic energy operator is proportional to the momentum squared, \(\hat{K} = \hat{p}^2 / 2m\), so eigenfunctions of the momentum operator become eigenfunctions of the kinetic energy as well. If we have a momentum eigenfunction \(f_p(x)\) with eigenvalue \(p\), we can find its (kinetic) energy eigenvalue by applying the kinetic energy operator to it:

\[\hat{K} f_p(x) = \frac{1}{2m} \hat{p}^2 f_p(x) = \frac{p^2}{2m} f_p(x),\]

so we unsurprisingly find that \(E = p^2 / 2m\), and \(k = \sqrt{2 m E / \hbar^2} = p/\hbar\). We already encountered this exact relation in equation (1.2), relating the momentum \(p\) of a particle to its wave number \(k\), as postulated by de Broglie; we now find that it is a direct consequence of the Schrödinger equation. Moreover, we can explicitly write the time-dependent wave function of a free particle as a traveling wave (where those of the particle in a well were stationary, standing waves), as we get

(2.14)#\[ \Psi(x, t) = \psi(x) e^{-i E t / \hbar} = A e^{i k \left(x - \frac{\hbar k}{2m} t\right)} + B e^{-i k \left(x + \frac{\hbar k}{2m} t\right)}. \]

Our plane wave solutions thus consist of a wave traveling to the right (first part) and a wave traveling to the left (second part), both at speed

(2.15)#\[ v = \frac{\hbar |k|}{2m}. \]

The shape of the wave doesn't change: the wave is a function of the combination \(x \pm v t\), so after a time \(t\), the whole wave has shifted a distance \(vt\) to the right or left. To simplify the notation, we can allow \(k\) to assume negative values, and write the plane wave for wavevector \(k\) as

(2.16)#\[ \Psi_k(x, t) = A e^{i k \left(x - \frac{\hbar k}{2m} t\right)}. \]

Unfortunately, our plane wave solution suffers from the same problem we had with the eigenfunctions of the momentum operator: they are orthogonal but not normalizable, as we get

\[ \Braket{\Psi_k (x, t)| \Psi_k (x, t)} = |A|^2 \int_{-\infty}^\infty e^{-i k \left(x - \frac{\hbar k}{2m} t\right)} e^{i k \left(x - \frac{\hbar k}{2m} t\right)} \mathrm{d}x = |A|^2 \int_{-\infty}^\infty 1 \mathrm{d}x, \]

so the integral becomes infinite. Therefore, just like there are no free particles with a definite momentum (as the eigenfunctions are outside the quantum mechanical Hilbert space), there are no free particles with a definite energy. However, the eigenfunctions of the Hamiltonian still form a basis for all functions that are in the Hilbert space, and the general solution to the time-dependent Schrödinger equation can therefore be written as a linear combination of the eigenstates. Because the spectrum is now continuous, this linear combination comes in the form of an integral rather than a sum. The coefficients of the various eigenstates in this integral are usually denoted as functions \(\phi(k)\) rather than \(c(k)\) (which would be more consistent with our earlier notation), which gives the general solution as

(2.17)#\[ \Psi(x, t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \phi(k) e^{i k \left(x - \frac{\hbar k}{2m} t\right)} \mathrm{d}{k}. \]

Note the similarity with equation (1.42); the only difference is that we now integrate over the wavenumber instead of the momentum. At \(t=0\) equation (2.17) simplifies to the inverse Fourier transform of \(\phi(k)\). Finding the function \(\phi(k)\) from the initial condition \(\Psi(x, 0)\) then boils down to taking the Fourier transform of that function, as we have

(2.18)#\[ \phi(k) = \Braket{\frac{1}{\sqrt{2\pi}} e^{ikx} | \Psi(x, 0)} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \Psi(x, 0) e^{-ikx} \mathrm{d}x. \]

The solution given in equation (2.17) is known as a wave packet, a collection of plane waves that together (through mutual interference) gives the particle a finite probability density for a limited range in space. The price is that the particle no longer has a well-defined momentum, but will carry a range of possible momenta, and the more precisely we determine the momentum (meaning a narrower function \(\phi(k)\)), the more spread out the position probability density will be, and vice versa. The plane wave solution itself is the (mathematically valid but unphysical) case where the momentum is precisely defined but the probability of finding the particle anywhere becomes zero.


Fig. 2.2 Wave packets. (a) Interference between two waves with close but not identical wavelengths results in alternating constructive and destructive interference, and the emergence of beats. (b) Wave packets with a carrier wave (blue) ‘supporting’ a modulation (purple). The speed of the carrier wave is known as the phase velocity, \(\omega / k\), while the envelope travels at the group velocity, \(\mathrm{d}\omega / \mathrm{d}k\).#

Wave packets are familiar concepts from classical mechanics. There too, they emerge from wave superposition, for instance by combining two traveling waves with slightly different wavelengths, see Fig. 2.2. The resulting modulation wave can be found as the envelope of the generating waves: it describes how the maximum amplitude of the waves changes. In both classical and quantum mechanics, it is the envelope that carries information, not the generating waves (known in classical mechanics as the carrier waves). Moreover, the envelope and carrier waves travel at different velocities, known as the group and phase velocities respectively. As we'll see, the velocity we found in equation (2.15) is the phase velocity. To find these velocities, we need the relation between the frequency \(\omega\) and the wave number \(k\) of the wave; this relation is known as the wave's dispersion relation. Writing the wave in the form \(\exp\left(i (k x - \omega t) \right)\), we can read off from equation (2.16) that the dispersion relation for a plane wave solution is given by

(2.19)#\[ \omega(k) = \frac{\hbar k^2}{2m}. \]

The phase velocity is simply the ratio of \(\omega\) and \(k\), for which we find equation (2.15). However, the group velocity measures how quickly the envelope changes over time, which is the derivative of the dispersion relation, given by

(2.20)#\[ v_\mathrm{group} = \frac{\mathrm{d}\omega}{\mathrm{d}k} = \frac{\hbar k}{m}. \]

The group velocity is a better measure for the actual speed of the quantum particle. Note that it is double the phase velocity of the underlying plane wave; this behavior differs from the typical classical case, where the carrier wave travels faster than the envelope. However, the group velocity does match the classical prediction of the particle’s velocity, which would be \(p / m = \hbar k / m\).
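
The sketch below illustrates this numerically: it builds a wave packet from plane waves via equation (2.17), with a Gaussian \(\phi(k)\) centred at \(k_0\), and tracks the peak of \(|\Psi|^2\) over time; the peak moves at the group velocity \(\hbar k_0 / m\), not at the phase velocity \(\hbar k_0 / 2m\). The units \(\hbar = m = 1\) and the packet parameters are arbitrary choices.

import numpy as np

# Build a wave packet from plane-wave solutions (equation (2.17)) with a
# Gaussian phi(k), and track the peak of |Psi|^2 as time progresses.
hbar = m = 1.0
k0, sigma_k = 5.0, 0.5  # center and width of phi(k)

k = np.linspace(k0 - 5 * sigma_k, k0 + 5 * sigma_k, 601)
dk = k[1] - k[0]
phi = np.exp(-(k - k0)**2 / (2 * sigma_k**2))  # unnormalized Gaussian phi(k)

x = np.linspace(-5.0, 15.0, 2001)

def Psi(t):
    # evaluate the k-integral as a Riemann sum, for all x at once
    phase = np.exp(1j * (np.outer(x, k) - (hbar * k**2 / (2 * m)) * t))
    return (phase * phi).sum(axis=1) * dk

for t in (0.0, 1.0, 2.0):
    peak = x[np.argmax(np.abs(Psi(t))**2)]
    print(f"t = {t:.0f}: peak at x = {peak:.2f}")  # advances by hbar*k0/m = 5 per unit time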

2.2.3. Tunneling#

In classical mechanics, in the absence of dissipative forces like friction or drag, mechanical energy is conserved. Energy can be converted from kinetic to potential energy, for instance when throwing up a stone, or the other way around, when the stone falls. However, without external action, the internal energy of the system cannot increase, and therefore a potential energy barrier higher than the total energy of the system cannot be overcome. In quantum mechanics, this is not the case. One of the best-known quantum effects, tunneling, tells us that a quantum particle has a finite chance of passing through a potential energy barrier, even if its total energy is less than the height of that barrier.

To see how tunneling emerges from the basic laws of quantum mechanics, we’re going to use another totally unrealistic potential: a single Dirac delta function peak or well (we’ll cover both cases in one go). We encountered the delta function as the eigenfunction of the position operator in Section 1.4.3; here we’ll use it for our potential, with the caveat that it cannot really stand by itself, as it is defined under an integral only (see equation (1.98)). In a sense, the potential represents the limit of an infinitely high but also infinitely narrow barrier, constructed such that the area under the barrier is normalized to \(1\). For our potential energy we then write

(2.21)#\[ V(x) = \varepsilon \delta(x). \]

For positive values of \(\varepsilon\), we have a barrier, for negative values a well. In both cases, the potential energy is zero outside the region of interest. As we’ll see, that will also be the case for many real-life examples, in particular atoms and molecules. Consequently, we can distinguish between two options: if the total energy is positive, the particle can travel infinitely far away from the region of interest, but if it comes close to it, it will interact. In experiments, this interaction will often be of the form of a collision resulting in a scattering of our particle; states with positive energy are therefore known as scattering states. States with negative energy are trapped; they are known as bound states[3]. Of the two examples we’ve worked out, all states of the infinite well are bound states (as the potential at infinity is infinite), while the free particle is (by definition) in a scattering state. In general, bound states will have a discrete energy spectrum, while scattering states will have a continuous one.

For a Dirac-delta well, we get both bound and scattering states. We'll get the same for the more realistic potential of the hydrogen atom in Section 2.3. For atoms and molecules, we'll mostly be interested in the bound states, as they are involved in their chemistry. For the Dirac delta potential however, we're interested in the scattering states, as those are the ones that exhibit tunneling. Solving for the bound state is a straightforward exercise (see Exercise 2.6); we'll focus on the scattering states here. Outside the region of interest, we'll have a free particle, which we already solved for in Section 2.2.2. The solutions there are linear combinations of plane waves; which linear combination depends on the initial conditions. Therefore, if we know what happens with an arbitrary plane wave, we can write down the total wave function, and it suffices to study the plane wave solution at the delta potential barrier. We do have to distinguish between the solutions left and right of the barrier though, as they may (and will) not be identical. We therefore write the general solution of the time-independent Schrödinger equation for \(x<0\) as

(2.22)#\[ \psi(x) = A e^{ikx} + B e^{-ikx}, \]

and for \(x>0\) as

(2.23)#\[ \psi(x) = F e^{ikx} + G e^{-ikx}, \]

where as before \(k = \sqrt{2 m E / \hbar^2}\), and we have \(E>0\). To tie these solutions together, we need to know what happens at the well. We established earlier that the wave function itself needs to be continuous, which gives

\[\begin{split}\begin{align*} \lim_{x \uparrow 0} \psi(x) &= \lim_{x \downarrow 0} \psi(x) \\ A+B &= F+G. \end{align*}\end{split}\]

Usually we’d also demand that the derivative of the wave function is continuous, but we won’t get that here, as the potential is infinite at \(x=0\). To find out what we get (and to prove in one go that the wave function has a continuous derivative for finite \(V(x)\)), we go back to the Schrödinger equation, which is a second-order differential equation. We integrate this equation over a small interval of width[4] \(2\epsilon\) around \(x=0\), then take the limit that \(\epsilon \to 0\) to determine the value of the derivative of \(\psi(x)\) at \(x=0\). First taking the integrals, we have:

(2.24)#\[ -\frac{\hbar^2}{2m} \int_{-\epsilon}^\epsilon \frac{\mathrm{d}^2 \psi}{\mathrm{d}x^2} \mathrm{d}x + \int_{-\epsilon}^\epsilon V(x) \psi(x) \mathrm{d}x = E \int_{-\epsilon}^\epsilon \psi(x) \mathrm{d}x. \]

If \(\psi(x)\) is continuous, the integral on the right-hand side of (2.24) evaluates to \(0\) in the limit that \(\epsilon \to 0\). The first integral on the left-hand side equals the derivative of \(\psi(x)\) evaluated at \(\epsilon\) and \(-\epsilon\), or the difference in the derivative between the points \(x = \epsilon\) and \(x = -\epsilon\). If the potential \(V(x)\) is continuous, the second term on the left-hand side of (2.24) also vanishes, making the difference between the derivative at the two close points disappear in the limit that \(\epsilon \to 0\), i.e., the derivative becomes continuous. However, if \(V(x)\) is not continuous, we get a jump in the derivative at \(x=0\):

(2.25)#\[ \Delta \left( \frac{\mathrm{d}\psi}{\mathrm{d}x} \right) = \lim_{\epsilon \to 0} \left[ \left.\frac{\mathrm{d}\psi}{\mathrm{d}x} \right|_{x=\epsilon} - \left.\frac{\mathrm{d}\psi}{\mathrm{d}x}\right|_{x=-\epsilon} \right] = \frac{2m}{\hbar^2} \lim_{\epsilon \to 0} \int_{-\epsilon}^\epsilon V(x) \psi(x) \mathrm{d}x. \]

Now for our choice of potential, \(V(x) = \varepsilon \delta(x)\), the integral on the right-hand side of (2.25) evaluates to \((2m\varepsilon/\hbar^2) \psi(0)\), so we get a jump in the derivative. Going back to our solutions, we can calculate the derivatives left and right of the barrier from equations (2.22) and (2.23):

(2.26)#\[\begin{split}\begin{align*} \lim_{x \uparrow 0} \frac{\mathrm{d}\psi}{\mathrm{d}x} &= \lim_{x \uparrow 0} ik \left(A e^{ikx} - B e^{-ikx}\right) = ik(A-B), \\ \lim_{x \downarrow 0} \frac{\mathrm{d}\psi}{\mathrm{d}x} &= \lim_{x \downarrow 0} ik \left(F e^{ikx} - G e^{-ikx}\right) = ik(F-G), \end{align*}\end{split}\]

and from equation (2.25) we then get

(2.27)#\[ \Delta \left( \frac{\mathrm{d}\psi}{\mathrm{d}x} \right) = ik\left(F-G-A+B\right) = \frac{2m\varepsilon}{\hbar^2} \psi(0) = \frac{2m\varepsilon}{\hbar^2} (A+B). \]

Using equations (2.26) and (2.27), we can now relate the wave functions left and right of the barrier to each other. We still have two unknowns (typically the amplitudes \(A\) and \(G\) of the incoming waves), but given those, we can calculate the amplitudes \(B\) and \(F\) of the outgoing waves. In particular, we'll consider what happens to a plane wave coming in from the left, i.e., \(A e^{ikx}\), with \(G=0\). Substituting \(F = A + B\) (from continuity) into equation (2.27) gives \(2ikB = (2m\varepsilon/\hbar^2)(A+B)\), so we can solve for \(B\) and \(F\) in terms of \(A\) as

(2.28)#\[\begin{split}\begin{align*} B &= -\frac{i \beta}{1+i \beta} A \\ F &= \frac{1}{1+i\beta} A, \end{align*}\end{split}\]

where \(\beta = m \varepsilon / \hbar^2 k\) is the (rescaled) inverse of the square root of the energy \(E\) of the particle. \(B\) and \(F\) are the amplitudes of the reflected and the transmitted wave, respectively. As the probability density of the particle goes with the square of the wave function, we can calculate the reflection and transmission coefficients (\(R\) and \(T\)) by squaring the ratio of the reflected or transmitted amplitude to the incoming amplitude:

(2.29)#\[\begin{split}\begin{align*} R &= \frac{|B|^2}{|A|^2} = \frac{\beta^2}{1+\beta^2} = \frac{1}{1+2 \hbar^2 E / m \varepsilon^2},\\ T &= \frac{|F|^2}{|A|^2} = \frac{1}{1+\beta^2} = \frac{1}{1+ m \varepsilon^2 / 2 \hbar^2 E}. \end{align*}\end{split}\]

Note that we get \(R + T = 1\), as we should. As we might expect, for a stronger potential (i.e., a larger value of \(|\varepsilon|\)), the reflection coefficient gets larger and the transmission coefficient smaller; conversely, for a more energetic particle (i.e., a larger value of \(E\)), the transmission coefficient increases and the reflection coefficient decreases. However, as both \(R\) and \(T\) depend on the square of \(\varepsilon\), the sign of the potential doesn't matter. Therefore, we always get both reflection and transmission. A quantum particle thus has a finite chance to scatter off a cliff (where a classical particle would always fall in), and a finite chance to tunnel through a barrier (where a classical particle would always reflect).
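
A minimal numerical sketch of equation (2.29) (units \(\hbar = m = 1\) and the value of \(\varepsilon\) are arbitrary choices):

import numpy as np

# Reflection and transmission coefficients for the delta potential,
# equation (2.29); note that R + T = 1 and that T -> 1 for large E.
hbar = m = 1.0
eps = 1.0  # strength of the delta potential
E = np.array([0.1, 0.5, 1.0, 5.0, 20.0])
beta2 = m * eps**2 / (2 * hbar**2 * E)  # beta^2, written in terms of E
R = beta2 / (1 + beta2)
T = 1 / (1 + beta2)
print(np.round(T, 3))  # transmission grows with energy
print(np.round(R + T, 3))  # all ones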


Fig. 2.3 Two applications of tunneling. (a/b) In a scanning tunneling microscope, surfaces can be resolved at atomic length scales by bringing a sharp tip close to the surface, applying a potential difference between the surface and tip, and measuring the resulting tunneling current. (a) Illustration of the method [5]. (b) An atomically resolved image of a carbon nanotube [6]. (c) Biological enzymes catalyze many reactions in the cell. They do so highly specifically by only binding to the exact reagents. The binding typically induces a conformational change in the enzyme, bringing the reagents closer together. In many enzymatic reactions, the reagents exchange either an electron (redox reaction) or a proton (acid-base reaction). The exchange of the electron or proton can be highly accelerated through quantum tunneling, facilitated by a metal ion (which reduces the barrier) in the enzyme. The tunneling rate is measured to be a factor 1000 higher than the reaction rate would be in the absence of quantum effects, making tunneling crucial to many biological processes. The illustration shows how the hexokinase enzyme facilitates the reaction between xylose and adenosine triphosphate (ATP); the yellow spot is a magnesium ion (\(\mathrm{Mg}^{2+}\)) [7].#

Quantum tunneling has many applications in research, technology, and biology. Two examples are illustrated in Fig. 2.3. Tunneling is the basis of the scanning tunneling microscope (STM). In an STM, a (very!) sharp tip is brought in close proximity to a conducting surface. By applying an electric potential across the surface and the tip, a charge difference builds up between them, as in a classical capacitor. Electrons can then tunnel from the tip to the surface, generating a measurable current. Because the distance between the tip and the surface depends on the structure of the surface (it is smaller when the tip is directly above an atom, and larger when it is in between atoms), the current will change as we move the tip, allowing us to reconstruct the shape of the surface. An example of such a reconstruction of the surface of a carbon nanotube is shown in Fig. 2.3(b). Tunneling also plays a crucial role in enzymatic reactions in biology, as illustrated in Fig. 2.3(c). Reagents binding to an enzyme commonly induce a conformational change in the enzyme, which not only brings the reagents themselves closer together, but also puts them close to a catalyst in the enzyme, usually one or two metal ions. These metal ions significantly lower the tunneling barrier for the exchange of electrons [Devault, 1980, Marcus and Sutin, 1985] or (to a lesser degree, because they are heavier) protons [Klinman and Kohen, 2013, Sutcliffe and Scrutton, 2002] between the reagents. Without the tunneling effect, enzymatic reactions would proceed much more slowly, if at all.

2.2.4. Harmonic potential#

The examples we discussed so far are extremes that couple (relative) mathematical simplicity to a lack of physical realism. There are no infinitely deep wells or infinitely thin but infinitely high barriers. Even a completely free particle does not exist, and if it did, it would be infinitely boring (as it would be the only thing in the universe). Of course, in many cases you can approximate the actual physics with the crude models of the previous sections, but as soon as you start dealing with actual forces, they fail utterly.

The simplest nontrivial force law in classical mechanics is Hooke’s law, which states that the force exerted by a spring is proportional to the extension of the spring. Consequently, a particle with mass \(m\) suspended on a spring with spring constant \(k\) will oscillate with frequency \(\omega = \sqrt{k/m}\). Spring forces are conservative (i.e., don’t dissipate mechanical energy) and can thus be written as (minus) the derivative of a potential; for a Hookean spring, the potential is given by

(2.30)#\[ V_\mathrm{harmonic}(x) = \frac12 k x^2 = \frac12 m \omega^2 x^2. \]

In everyday life, many things oscillate, often annoyingly; take, for example, the lid of a pan that you leave upside-down on the kitchen counter. There is no spring attached to the lid; its oscillation can however be understood through the same physics: close to the minimum of the lid's potential energy, the potential can be expanded in a Taylor series, of which the first nontrivial term has the same mathematical form as the harmonic potential in equation (2.30) (as made explicit below). The same concept holds for quantum potentials: close to a minimum, the Taylor expansion locally closely matches a harmonic potential, and therefore local solutions to the Schrödinger equation will also closely match those of the harmonic potential, at least for the lowest eigenvalues. Note that these solutions will not be oscillations in the classical sense, but rather in the same sense as those we found in the infinite square well: we get standing-wave solutions for definite energies (though with very different shapes than the simple sines in the infinite well) and oscillating solutions for states that are linear combinations of eigenstates.
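
Explicitly, for a potential \(V(x)\) with a minimum at \(x_0\) (so that \(V'(x_0) = 0\)), the Taylor series reads

\[ V(x) = V(x_0) + \frac12 V''(x_0) (x - x_0)^2 + \mathcal{O}\left((x-x_0)^3\right), \]

which, apart from an irrelevant constant offset, is the harmonic potential (2.30) with effective spring constant \(k = V''(x_0)\).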

There are two ways to solve the Schrödinger equation for the harmonic potential (2.30). One is substituting a series solution (see Section 7.4.5), a technique we will also use in Section 2.3 and you can try for yourself in Exercise 2.11. The other technique is algebraic, which is at the same time simpler (the mathematical operations are easier) and harder (as it involves new concepts). We will also use this technique when discussing angular momentum, and if you continue studying the quantum world, you will run into it again for the creation and annihilation of particles.

To set the stage, we write the time-independent Schrödinger equation with a harmonic potential as

(2.31)#\[ \hat{H} \psi = \frac{1}{2m} \left[ \hat{p}^2 + (m \omega \hat{x})^2 \right] \psi = E \psi. \]

If \(\hat{p}\) and \(\hat{x}\) were numbers (or their actions both were to multiply with a number), we could factor the Hamiltonian, writing the term in square brackets as \((p + i m \omega x)(p - i m \omega x)\). Unfortunately, life isn’t that easy with operators, because, as we’ve seen in Section 1.5, the order in which you apply them generally matters, in particular for the position and momentum operators. However, nobody stops us from defining new operators that are combinations of the momentum and position operator, so, inspired by the possibility of factorization, we define[8]

(2.32)#\[ \hat{a}_\pm \equiv \frac{1}{\sqrt{2m\hbar\omega}} \left(\mp i \hat{p} + m \omega \hat{x}\right). \]

Our two new operators don’t commute; using \([\hat{x}, \hat{p}] = i \hbar\) from equation (1.62), we find

(2.33)#\[ [\hat{a}_+, \hat{a}_-] = \frac{1}{2m\hbar\omega} [-i\hat{p} + m \omega \hat{x}, i\hat{p} + m \omega \hat{x}] = \frac{1}{2\hbar} \left( i [\hat{x}, \hat{p}] - i [\hat{p}, \hat{x}] \right) = -1, \]

where we used that any operator commutes with itself. By equation (2.33), the operators don’t commute, but the price for switching them is relatively mild, a numerical factor, which comes out at \(-1\) by our choice of the prefactor in the definition of \(\hat{a}_\pm\). Therefore, we can hope to express the Hamiltonian in terms of our new operators, which indeed we can do. To see how, simply apply both operators to the wave function \(\psi(x)\), which gives:

(2.34)#\[\begin{split}\begin{align*} \hat{a}_+ \hat{a}_- \psi(x) &= \frac{1}{2m\hbar\omega} \left(-i\hat{p} + m \omega \hat{x} \right) \left(i\hat{p} + m \omega \hat{x}\right) \psi(x) \\ &= \frac{1}{2m\hbar\omega} \left( \hat{p}^2 - i m \omega \hat{p} \hat{x} + i m \omega \hat{x} \hat{p} + m^2 \omega^2 \hat{x}^2 \right) \psi(x) \\ &= \frac{1}{2m\hbar\omega} \left(\hat{p}^2 + (m \omega \hat{x})^2 \right) \psi(x) + \frac{i }{2\hbar} [\hat{x}, \hat{p}] \psi(x) \\ &= \frac{1}{\hbar\omega}\hat{H} \psi(x) - \frac12 \psi(x), \\ \hat{a}_- \hat{a}_+ \psi(x) &= \frac{1}{\hbar\omega}\hat{H} \psi(x) + \frac12 \psi(x). \end{align*}\end{split}\]

For the Hamiltonian, we can thus write

(2.35)#\[ \hat{H} = \hbar \omega \left(\hat{a}_+ \hat{a}_- + \frac12\right) = \hbar \omega\left(\hat{a}_- \hat{a}_+ - \frac12 \right). \]

As it turns out, the ‘almost-factorization’ of equation (2.35) is good enough, because we can use it to construct all the solutions of the Schrödinger equation (2.31). To see how this works, first assume that we have a solution in the form of an eigenfunction \(\psi(x)\) with eigenvalue \(E\). Then the function \(\hat{a}_+ \psi(x)\) is also a solution, with eigenvalue \(E + \hbar \omega\):

\[\begin{split}\begin{align*} \hat{H} (\hat{a}_+ \psi) &= \hbar \omega \left(\hat{a}_+ \hat{a}_- + \frac12\right) (\hat{a}_+ \psi) \\ &= \hbar\omega \hat{a}_+ \hat{a}_- \hat{a}_+ \psi + \frac{\hbar\omega}{2} (\hat{a}_+ \psi) \\ &= \hbar \omega \hat{a}_+ \left( \hat{a}_- \hat{a}_+ + \frac12 \right) \psi \\ &= \hat{a}_+ \left( \hat{H} + \frac12 \hbar\omega + \frac12 \hbar\omega \right) \psi \\ &= \hat{a}_+ ( E + \hbar \omega ) \psi = (E + \hbar \omega) (\hat{a}_+ \psi). \end{align*}\end{split}\]

Similarly, we find that \(\hat{a}_- \psi(x)\) is a solution with eigenvalue \(E - \hbar \omega\). From a solution \(\psi(x)\) we can thus construct a whole family of solutions by repeatedly applying the operators \(\hat{a}_\pm\). Because these operators effectively increase and decrease the eigenvalue of the solution, they are known as the raising and lowering operators; the family of solutions is sometimes called a ‘ladder’ because they are evenly spaced by \(\hbar \omega\).

We’re not done yet; notwithstanding our nice creation of a whole family of solutions, we do not yet know if any solutions exist, nor if all possible solutions can be constructed from any given one (there could be multiple independent starting points). As it turns out, there is only one family. To see why, consider what happens when we repeatedly apply the lowering operator \(\hat{a}_-\): the energy of the new state is \(\hbar \omega\) lower than that of the previous state. Mathematically that’s fine, but physically we run into trouble at some point: the energy of the particle cannot be less than the minimum of the potential energy. Our potential nicely has its minimum at zero, so the energies always need to be non-negative. Let \(\psi_0(x)\) be the solution with the lowest non-negative energy. Applying \(\hat{a}_-\) to it would give us a solution with negative energy, unless it returns zero, which is a solution of the Schrödinger equation but not normalizable (and thus not a physical solution). We can thus determine \(\psi_0(x)\) from the condition that \(\hat{a}_- \psi_0(x) = 0\), which gives

(2.36)#\[ 0 = \hat{a}_- \psi_0(x) = \frac{1}{\sqrt{2m\hbar\omega}} \left( i\hat{p} + m \omega \hat{x}\right) \psi_0(x) = \frac{1}{\sqrt{2m\hbar\omega}} \left[ \hbar \frac{\mathrm{d} \psi_0}{\mathrm{d} x} + m \omega x \psi_0(x) \right]. \]

Equation (2.36) is a first-order differential equation, which we can solve by separating variables and integrating; the (normalized) solution (see Exercise 2.8) is

(2.37)#\[ \psi_0(x) = \left( \frac{m \omega}{\pi \hbar} \right)^{1/4} \exp\left( - \frac{m \omega x^2}{2\hbar} \right) \qquad \text{with eigenvalue} \quad E_0 = \frac12 \hbar \omega. \]

Note that, like for the infinite square well, the lowest-energy (ground state) solution has an energy larger than the minimum of the potential: any quantum particle in a harmonic potential will always be moving. Now that we have one solution (which we know to be the unique lowest-energy solution), we can simply construct the whole family of solutions by repeatedly applying the raising operator \(\hat{a}_+\), which gives

(2.38)#\[ \psi_n(x) = A_n \left( \hat{a}_+ \right)^n \psi_0(x) \qquad \text{with eigenvalue} \quad E_n = \left( n + \frac12\right) \hbar \omega. \]

Unlike in the case of the infinite well (or the hydrogen atom in Section 2.3), we label the ground state with \(n=0\) for the harmonic oscillator (an unfortunate historical convention). The excited states have positive integer values of \(n\). Expressions for these states can in principle be found using equation (2.38), though going up to \(\psi_{10}\) this way would be rather tedious. Moreover, we would still need to normalize each state (though, as we’ll see, we can get the normalization constant algebraically). In contrast, we get all eigenvalues ‘for free’ without the need of additional calculation. What’s more, we can find the expectation value of many observables without having to obtain explicit expressions for the states \(\psi_n(x)\).
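
As a numerical illustration of equation (2.38) (a sketch, with the assumed units \(\hbar = m = \omega = 1\)), we can apply \(\hat{a}_+\) once to the ground state (2.37) using a finite-difference derivative and check the result:

import numpy as np

# Apply the raising operator a_+ = (-i p + m w x)/sqrt(2 m hbar w) to the
# ground state psi_0; since -i p = -hbar d/dx, this is a first derivative.
hbar = m = w = 1.0
x = np.linspace(-8.0, 8.0, 8001)
dx = x[1] - x[0]

psi0 = (m * w / (np.pi * hbar))**0.25 * np.exp(-m * w * x**2 / (2 * hbar))
psi1 = (-hbar * np.gradient(psi0, dx) + m * w * x * psi0) / np.sqrt(2 * m * hbar * w)

print(np.sum(psi1**2) * dx)  # ~1: consistent with A_1 = 1/sqrt(1!) found below
print(np.sum(psi0 * psi1) * dx)  # ~0: orthogonal to the ground state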

The first step to finding the normalization constants and expectation values is to observe that we can find the value of \(n\) by applying first the lowering and then the raising operator to any state \(\psi_n(x)\):

(2.39)#\[ \hat{a}_+ \hat{a}_- \psi_n(x) = \frac{1}{\hbar\omega} \hat{H} \psi_n(x) - \frac12 \psi_n(x) = \left(\frac{E_n}{\hbar \omega} - \frac12 \right) \psi_n(x) = n \psi_n(x), \]

where we used equation (2.34)a to re-write \(\hat{a}_+ \hat{a}_-\) in terms of the Hamiltonian, and equation (2.38) to evaluate \(\hat{H} \psi_n(x)\). We can thus define a new operator, the number operator \(\hat{n} = \hat{a}_+ \hat{a}_-\), whose action on \(\psi_n(x)\) returns the value of \(n\): \(\hat{n} \psi_n(x) = n \psi_n(x)\). Likewise, we can show that \(\hat{a}_- \hat{a}_+ \psi_n(x) = (n+1) \psi_n(x)\). Step two is to observe that \(\hat{a}_+\) and \(\hat{a}_-\) are each other’s Hermitian conjugate (see Section 1.4.2 and Exercise 2.9), i.e.

(2.40)#\[ \Braket{f | \hat{a}_+ g} = \Braket{\hat{a}_- f | g}, \]

for any two functions \(f(x)\) and \(g(x)\) in our Hilbert space. Combining equations (2.39) and (2.40), we get

\[ \Braket{\hat{a}_+ \psi_n(x) | \hat{a}_+ \psi_n(x)} = \Braket{\hat{a}_- \left(\hat{a}_+ \psi_n(x)\right) | \psi_n(x) } = (n+1) \Braket{\psi_n(x) | \psi_n(x)}. \]

Therefore, we get \(\psi_{n+1}(x) = \frac{1}{\sqrt{(n+1)}} \hat{a}_+ \psi_n(x)\). The normalization constant \(A_n\) of the \(n\)-th wave function is thus \(A_n = 1/\sqrt{n!}\). Third, we have that the eigenfunctions of the harmonic oscillator Hamiltonian are orthonormal:

\[ n \Braket{\psi_m | \psi_n} = \Braket{ \psi_m | \hat{n} \psi_n } = \Braket{\psi_m | \hat{a}_+ \hat{a}_- \psi_n} = \Braket{ \hat{a}_- \psi_m | \hat{a}_- \psi_n} = \Braket{ \hat{a}_+ \hat{a}_- \psi_m | \psi_n} = m \Braket{\psi_m | \psi_n}, \]

so either \(m=n\) or \(\Braket{\psi_m | \psi_n} = 0\).

To find the expectation value of an observable, remember that all observables are functions of \(\hat{x}\) and \(\hat{p}\). We can express both in terms of our operators \(\hat{a}_\pm\) by inverting equation (2.32):

(2.41)#\[ \hat{x} = \sqrt{\frac{\hbar}{2 m \omega}} \left( \hat{a}_+ + \hat{a}_- \right) \quad \text{and} \quad \hat{p} = i\sqrt{\frac{\hbar m \omega}{2}}\left( \hat{a}_+ - \hat{a}_- \right). \]

As an example of how to use these expressions to efficiently calculate the expectation value of an observable for a particle in a state \(\psi_n(x)\), let us consider the potential energy \(V(x)\), which we can express in terms of \(\hat{a}_\pm\) as

\[ \hat{V} = \frac12 m \omega^2 \hat{x}^2 = \frac{\hbar \omega}{4} \left( \hat{a}_+ + \hat{a}_- \right)^2. \]

For the expectation value of \(V\) we then get

\[\begin{split}\begin{align*} \Braket{V} &= \Braket{\psi_n | \hat{V} \psi_n} = \frac{\hbar \omega}{4} \Braket{\psi_n | \left( \hat{a}_+ + \hat{a}_- \right)^2 \psi_n} \\ &= \frac{\hbar \omega}{4} \Braket{\psi_n | \hat{a}_+^2 \psi_n + \hat{a}_+\hat{a}_-\psi_n + \hat{a}_-\hat{a}_+\psi_n + \hat{a}_-^2 \psi_n} \\ &= \frac{\hbar \omega}{4} \left(0 + n + (n+1) + 0\right) = \frac12 \left(n+\frac12\right) \hbar \omega. \end{align*}\end{split}\]

In the third line, we used that \(\hat{a}_+^2 \psi_n \propto \psi_{n+2}\), \(\hat{a}_-^2 \psi_n \propto \psi_{n-2}\), and the fact that the eigenfunctions are orthogonal; we also used equation (2.39) to evaluate the remaining two terms. We find that the expectation value of the potential energy in the \(n\)th state is exactly half the total energy in that state; that’s a specific result for the harmonic oscillator potential that does not generalize to other cases.
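
The sketch below checks this result numerically for the first few states, using the explicit eigenfunctions \(\psi_n(x) \propto H_n(\sqrt{m\omega/\hbar}\, x)\, e^{-m\omega x^2/2\hbar}\) in terms of the Hermite polynomials \(H_n\) (this standard explicit form is not derived in this section; units \(\hbar = m = \omega = 1\) are assumed):

import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial

# Check <V> = (1/2)(n + 1/2) hbar w for the first few oscillator states.
hbar = m = w = 1.0
x = np.linspace(-10.0, 10.0, 8001)
dx = x[1] - x[0]

def psi_n(n):
    c = np.zeros(n + 1)
    c[n] = 1.0  # coefficient vector that selects the Hermite polynomial H_n
    norm = (m * w / (np.pi * hbar))**0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * hermval(np.sqrt(m * w / hbar) * x, c) * np.exp(-m * w * x**2 / (2 * hbar))

for n in range(4):
    V_avg = np.sum(0.5 * m * w**2 * x**2 * psi_n(n)**2) * dx
    print(n, round(V_avg, 4), 0.5 * (n + 0.5) * hbar * w)  # the two numbers agree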

2.3. The hydrogen atom#

2.3.1. The Schrödinger equation in three dimensions#

In Section 2.2, we solved the Schrödinger equation for some one-dimensional examples. These examples have the advantage of mathematical simplicity, with solutions of the form of simple sines or Gaussians, but the distinct downside of being physically unrealistic. In classical mechanics, we may, on occasion, be able to restrict a system to one or two spatial dimensions, but in quantum mechanics this turns out to be very complicated. The examples in Section 2.2 thus mostly serve to develop some physical intuition about the quantum world, rather than provide testable predictions. In this section, we’ll study the simplest nontrivial three-dimensional case: that of the wave function and energies of an electron in a hydrogen atom (we’ll take the nucleus, a single proton, which is about 2000 times heavier than the electron, as stationary). This case is realistic and leads to directly testable predictions, but at the same time is at the very limit of our analytical capabilities. Even the next-simplest cases, either the helium atom or molecular hydrogen, have not been solved analytically in closed form, though of course we have techniques of approximating the solutions (we’ll study those in Section 5), and we can find them numerically.

To study hydrogen, the first thing we need to do is to generalize the Schrödinger equation to three dimensions. Fortunately, doing so is straightforward: the second derivative in the kinetic energy term becomes a Laplacian, and the potential simply becomes a (still scalar) function of three coordinates:

(2.42)#\[ i \hbar \frac{\partial \Psi}{\partial t} = \hat{H} \Psi(\bm{r}, t) = -\frac{\hbar^2}{2m} \nabla^2 \Psi(\bm{r}, t) + V(\bm{r}) \Psi(\bm{r}, t). \]

As long as the potential remains independent of time, the time dependence in the three-dimensional equation is exactly the same as in the one-dimensional case, so we also retrieve the same time-independent form of the Schrödinger equation, \(\hat{H} \psi(\bm{r}) = E \psi(\bm{r})\).

2.3.2. Spherical harmonics#

In our example of interest, the hydrogen atom, the potential is a function of the radial coordinate only[9], so \(V(\bm{r}) = V(r)\). We can then solve equation (2.42) by separation of variables again, splitting it into a radial and an angular part. In spherical coordinates, we write \(\psi(\bm{r}) = R(r) Y(\theta, \phi)\), and (looking up the correct expression for the Laplacian in spherical coordinates), we have

(2.43)#\[ \hat{H} R(r) Y(\theta, \phi) = -\frac{\hbar^2}{2m} \left[ \frac{1}{r^2} \frac{\partial }{\partial r} \left( r^2 \frac{\partial R}{\partial r} \right) Y + \frac{1}{r^2 \sin\theta} \frac{\partial }{\partial \theta} \left( \sin\theta \frac{\partial Y}{\partial \theta} \right) R + \frac{1}{r^2 \sin^2\theta} \frac{\partial^2 Y}{\partial \phi^2} R \right] + V(r) R Y = E R Y. \]

If we now multiply by \(-2mr^2/\hbar^2\), divide through by \(RY\), and rearrange, we find

(2.44)#\[ \frac{1}{R} \frac{\partial }{\partial r} \left( r^2 \frac{\partial R}{\partial r}\right) - \frac{2mr^2}{\hbar^2} \left[ V(r) - E \right] = -\frac{1}{Y} \left[ \frac{1}{\sin\theta} \frac{\partial }{\partial \theta} \left( \sin\theta \frac{\partial Y}{\partial \theta} \right) + \frac{1}{\sin^2\theta} \frac{\partial^2 Y}{\partial \phi^2} \right]. \]

By the usual argument in separation of variables, as the left-hand side of equation (2.44) depends only on \(r\) (and is thus the same for any value of \(\theta\) or \(\phi\)), and the right-hand side is a function of \(\theta\) and \(\phi\) alone (and thus invariant when changing \(r\)), the only option is that both sides are equal to a (presently arbitrary) constant. For later purposes, we write that constant as \(l(l+1)\).

The angular part of the equation is now independent of the potential, and equal to the angular part of the Poisson equation from electrostatics, \(\nabla^2 \psi = -\rho_\mathrm{e}/\varepsilon_0\) (where \(\psi\) is the electric potential, \(\rho_\mathrm{e}\) the electric charge density, and \(\varepsilon_0\) the permittivity of vacuum). Unsurprisingly, the solutions are also the same. They are known as the spherical harmonics, in essence the Fourier components of an expansion in spherical modes, similar to the sines and cosines for a linear expansion. To find the functional form of these spherical harmonics, we split the angular equation again, writing \(Y(\theta, \phi) = \Theta(\theta) \Phi(\phi)\), and substitute into the condition that the right-hand side of equation (2.44) equals \(l(l+1)\). Separating \(\Theta\) and \(\Phi\) then gives:

(2.45)#\[ \frac{1}{\Theta} \left[ \sin\theta \frac{\partial }{\partial \theta} \left( \sin\theta \frac{\partial \Theta}{\partial \theta} \right) + l(l+1) \sin^2 \theta \right] = - \frac{1}{\Phi} \frac{\partial^2 \Phi}{\partial \phi^2} = m_l^2. \]

In equation (2.45), we multiplied both sides with \(\sin^2\theta\) and divided through by \(\Theta \Phi\) to separate the variables. Repeating the same argument as above, both sides have to be equal to a constant, which (again for future purposes) we write as[10] \(m_l^2\). The equation for \(\Phi\) is now easy:

(2.46)#\[ \frac{\mathrm{d}^2 \Phi}{\mathrm{d}\phi^2} = - m_l^2 \Phi, \]

with solutions

(2.47)#\[ \Phi(\phi) = e^{i m_l \phi}. \]

Here \(m_l\) can be either positive or negative. As the variable \(\phi\) ‘wraps around the sphere’, \(\Phi(\phi)\) has to be periodic: \(\Phi(\phi) = \Phi(\phi + 2\pi)\), which restricts the possible values of \(m_l\) to integers, and we have \(m_l \in \mathbb{Z}\). These solutions will perhaps not surprise you: we stated above that the spherical harmonics are like the Fourier modes of the sphere, and in the azimuthal (i.e., \(\phi\)) direction, they turn out to be the familiar sines and cosines.

In the polar (i.e., \(\theta\)) direction, things are less easy, as the polar angle runs only from \(0\) to \(\pi\). Our equation for \(\Theta\) reads:

(2.48)#\[ \sin\theta \frac{\mathrm{d}}{\mathrm{d}\theta} \left( \sin\theta \frac{\mathrm{d}\Theta}{\mathrm{d}\theta}\right) + \left[ l(l+1)\sin^2\theta - m_l^2 \right] \Theta(\theta) = 0. \]

For \(m_l = 0\), the solutions to equation (2.48) can be found through series substitution, which can be done most easily if you first make a coordinate transform from \(\theta\) to \(x = \cos\theta\). The series substitution then gives solutions in the form of the Legendre polynomials \(P_l(x)\), defined as

(2.49)#\[ P_l(x) = \frac{1}{2^l l!} \left( \frac{\mathrm{d}}{\mathrm{d}x}\right)^l (x^2-1)^l, \]

for any non-negative integer value of \(l\) (i.e., \(l= 0, 1, 2, 3, \ldots\)). Solutions for nonzero \(m_l\) are then derivatives of the Legendre polynomials, known as the associated Legendre polynomials:

(2.50)#\[ P_l^m(x) = (1-x^2)^{|m|/2} \left( \frac{\mathrm{d}}{\mathrm{d}x}\right)^{|m|} P_l(x). \]

Although \(P_l^m(x)\) is defined for any integer value of \(m\), it is identically zero if \(|m|>l\), restricting the number of allowed azimuthal modes. For the polar functions we thus find \(\Theta(\theta) = A P_l^m(\cos\theta)\), and for the spherical harmonics, after normalization[11],

(2.51)#\[ Y_l^m(\theta, \phi) = \varepsilon \sqrt{\frac{2l+1}{4\pi} \frac{(l-|m|)!}{(l+|m|)!}} P_l^m(\cos\theta) e^{im\phi}, \]

where \(\varepsilon = 1\) if \(m \leq 0\) and \(\varepsilon = (-1)^m\) if \(m>0\). We can rewrite the complex spherical harmonics given here as real functions by making linear combinations, giving sine and cosine dependencies on \(\phi\) rather than complex exponentials. The resulting functions for the first four values of \(l\) are plotted in Fig. 2.4.

../_images/Spherical_Harmonics.png

Fig. 2.4 Visual representation of the (real) spherical harmonics for \(l=0, 1, 2, 3\) (top to bottom) [12]. The real harmonics are obtained from the complex ones by linear combinations of \(P_l^m\) and \(P_l^{-m}\), which give sine and cosine functions of \(\phi\), instead of complex exponentials. Blue and yellow shaded regions have opposite sign. The single \(l=0\) harmonic corresponds to uniform expansion / contraction, the three \(l=1\) harmonics to translations in the \(x\), \(y\) and \(z\) directions. Note that the \(m_l = 0\) harmonics (central axis) are always axisymmetric (i.e., invariant under rotations about the \(z\)-axis), and that you can always go from a solution with a positive value of \(m_l\) to one with a negative value by a rotation about the \(z\)-axis over an angle \(\pi / 2m_l\).#
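
As a sanity check on equation (2.51), we can verify numerically that the spherical harmonics are orthonormal on the sphere, \(\int \overline{Y_{l'}^{m'}} Y_l^m \sin\theta \,\mathrm{d}\theta\, \mathrm{d}\phi = \delta_{ll'} \delta_{mm'}\). The sketch below assumes scipy is available; note that scipy's lpmv function includes the Condon-Shortley phase \((-1)^m\), which our definition (2.50) omits, so we divide it out again.

```python
import numpy as np
from scipy.special import lpmv, factorial
from scipy.integrate import trapezoid

def Y_lm(l, m, theta, phi):
    # Spherical harmonic of eq. (2.51); the factor (-1)^|m| undoes the
    # Condon-Shortley phase that scipy's lpmv includes.
    am = abs(m)
    eps = (-1)**m if m > 0 else 1
    norm = np.sqrt((2*l + 1) / (4*np.pi) * factorial(l - am) / factorial(l + am))
    return eps * norm * (-1)**am * lpmv(am, l, np.cos(theta)) * np.exp(1j * m * phi)

# Numerical check of orthonormality on a (theta, phi) grid
theta = np.linspace(0, np.pi, 201)
phi = np.linspace(0, 2 * np.pi, 401)
T, P = np.meshgrid(theta, phi, indexing='ij')
for (l1, m1), (l2, m2) in [((1, 0), (1, 0)), ((2, 1), (2, 1)), ((2, 1), (1, 0))]:
    f = np.conj(Y_lm(l1, m1, T, P)) * Y_lm(l2, m2, T, P) * np.sin(T)
    overlap = trapezoid(trapezoid(f, phi, axis=1), theta)
    print((l1, m1), (l2, m2), np.round(overlap, 4))  # 1, 1, 0
```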

2.3.3. The radial equation#

The spherical harmonics are the solutions of the angular half of equation (2.44). The other half is a function of the radial coordinate \(r\) alone:

(2.52)#\[ \frac{\mathrm{d}}{\mathrm{d}r} \left(r^2 \frac{\mathrm{d}R}{\mathrm{d}r} \right) - \frac{2 m r^2}{\hbar^2} \left[ V(r) - E \right] R(r) = l(l+1) R(r). \]

In this form, the radial equation is a rather tough nut to crack, as its coefficients are functions of \(r\). Moreover, it contains both a first and a second derivative of \(R(r)\). The term with the first derivative can be removed by introducing a new function, \(u(r) = r R(r)\), which is simply a re-scaling of \(R(r)\). In terms of \(u(r)\), the equation reads

(2.53)#\[ -\frac{\hbar^2}{2m} \frac{\mathrm{d}^2u}{\mathrm{d}r^2} + \left[ V(r) + \frac{\hbar^2}{2m} \frac{l(l+1)}{r^2} \right] u(r) = E u(r), \]

which is similar in form to the one-dimensional Schrödinger equation. The price of the rescaling is that we pick up an extra term in the potential, of the form \(1/r^2\). Something similar happens in classical mechanics when you make a transformation from a fixed to a co-rotating coordinate system: you pick up terms that act like forces on your system. These additional forces are known as ‘fictitious forces’ (as they are not due to a potential but to your choice of coordinates), but their effects are very real [Idema, 2018]. Examples include the centrifugal force and the Coriolis force. The \(1/r^2\) term we have here is a centrifugal term. In a classical picture, the corresponding outward force would compensate the inward Coulomb force on the electron, keeping it in a closed orbit.
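
To build some intuition for this centrifugal term, we can plot the effective potential between the square brackets of equation (2.53) for the attractive Coulomb potential of the hydrogen atom (equation (2.55) below). The sketch below uses convenient dimensionless units, setting \(\hbar^2/2m = 1\) and \(e^2/4\pi\varepsilon_0 = 2\); the centrifugal barrier pushes states with higher \(l\) away from the origin.

```python
import numpy as np
import matplotlib.pyplot as plt

r = np.linspace(0.05, 15, 500)

fig, ax = plt.subplots(figsize=(6, 4))
for l in range(4):
    # Effective potential: Coulomb term plus centrifugal barrier, eq. (2.53)
    ax.plot(r, -2/r + l*(l + 1)/r**2, label=f'$l={l}$')
ax.axhline(y=0, color='k', linestyle=':')
ax.set_xlim(0, 15)
ax.set_ylim(-3, 2)
ax.set_xlabel('$r$')
ax.set_ylabel(r'$V_\mathrm{eff}(r)$')
ax.legend()
```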

Equation (2.53) may look somewhat nicer than (2.52), but it is still a nontrivial eigenvalue problem. We can solve it in two ways: by series substitution and by the use of raising and lowering operators. We’ll work out both, but first we need to ‘rephrase’ the normalization condition in terms of \(u(r)\), as our final solutions will have to be normalized. We have

(2.54)#\[\begin{split}\begin{align*} 1 &= \Braket{\psi(\bm{r}) | \psi(\bm{r})} = \Braket{R(r) Y(\theta, \phi)| R(r) Y(\theta, \phi)} \\ &= \int |R(r)|^2 |Y(\theta, \phi)|^2 \mathrm{d}^3\bm{r} = \int_0^\infty \int_0^\pi \int_0^{2\pi} |R(r)|^2 |Y(\theta, \phi)|^2 r^2 \sin\theta \,\mathrm{d}r \,\mathrm{d}\theta \,\mathrm{d}\phi \\ &= \int_0^\infty |r R(r)|^2 \mathrm{d}r = \int_0^\infty |u(r)|^2 \mathrm{d}r, \end{align*}\end{split}\]

where we used in the third line that the spherical harmonics are normalized independently. The normalization condition on \(u(r)\) is thus that its square integrates to \(1\) over the range of positive real numbers \(r\).
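
For example (anticipating the ground-state radial function \(R_{10}(r) = (2/\sqrt{a_0^3})\, e^{-r/a_0}\) found in equation (2.68) below), the square of \(u(r) = r R_{10}(r)\) indeed integrates to one; a quick numerical check, assuming scipy is available:

```python
import numpy as np
from scipy.integrate import quad

# u(r) = r R_10(r) for the hydrogen ground state, with r in units of a_0
u = lambda r: 2 * r * np.exp(-r)
norm, _ = quad(lambda r: u(r)**2, 0, np.inf)
print(norm)  # 1.0 (up to numerical error)
```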

For the hydrogen atom, we have

(2.55)#\[ V(r) = - \frac{e^2}{4 \pi \varepsilon_0} \frac{1}{r}. \]

We have a continuum of scattering states with \(E>0\), and a discrete spectrum of bound states with \(E<0\). We will solve here for the latter. To simplify the notation, we rescale the energy by dividing equation (2.53) through by \(\hbar^2/2m_\mathrm{e}\), like we did before for the particle in the infinite well, and define:

(2.56)#\[ \kappa \equiv \frac{\sqrt{-2 m_\mathrm{e} E}}{\hbar}. \]

Note that for bound states, \(\kappa\) is real, as \(E\) is negative. I’ve added a subscript ‘e’ to the mass to indicate that we are dealing with the electron mass, and to distinguish it from the number \(m_l\). Note also that \(\kappa\) has dimensions of inverse length, so \(1/\kappa\) sets a length scale. If we now also divide equation (2.53) by \(-\kappa^2\), we get terms that are all dimensionless:

(2.57)#\[ \frac{1}{\kappa^2} \frac{\mathrm{d}^2 u}{\mathrm{d}r^2} - \frac{1}{\kappa^2} \left[ -\frac{2m_\mathrm{e}}{\hbar^2} \frac{e^2}{4 \pi \varepsilon_0} \frac{1}{r} + \frac{l(l+1)}{r^2} \right] u(r) = u(r). \]

The combination of factors in front of the \(1/r\) term in the square brackets must have dimensions of inverse length, so we can define a characteristic length scale of the hydrogen atom, known as the Bohr radius:

(2.58)#\[ a_0 \equiv \frac{4 \pi \varepsilon_0 \hbar^2}{m_\mathrm{e} e^2}, \]

where the choice of including the numerical factor \(4\pi\) is historical (see equation (1.5)), but will make this length close to (the best definition of) the actual radius of the atom, as we will see. The numerical value of the Bohr radius is \(5.29 \cdot 10^{-11}\;\mathrm{m}\), or about half an Ångström. We’ll see that \(\kappa\) and \(a_0\) are closely related.
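
As a quick check of equation (2.58), we can compute \(a_0\) from the CODATA values collected in scipy.constants (again assuming scipy is available):

```python
import numpy as np
from scipy import constants as const

# Bohr radius, eq. (2.58)
a0 = 4 * np.pi * const.epsilon_0 * const.hbar**2 / (const.m_e * const.e**2)
print(f'a_0 = {a0:.4g} m')  # a_0 = 5.292e-11 m
```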

Finally, we can express the radial coordinate in terms of the inverse length scale \(\kappa\), defining the dimensionless coordinate \(\rho = \kappa r\) and the dimensionless constant \(\rho_0 = 2 / \kappa a_0\). In terms of these rescaled variables, equation (2.57) becomes

(2.59)#\[ \frac{\mathrm{d}^2u}{\mathrm{d}\rho^2} = \left[ 1 - \frac{\rho_0}{\rho} + \frac{l(l+1)}{\rho^2} \right] u(\rho). \]

We could try a series solution for equation (2.59), but the smarter approach is to check the asymptotic behavior first. In the limit of large \(\rho\), only the first term in the square brackets on the right-hand side of equation (2.59) remains, and we are left with an equation that we can easily solve exactly, giving \(u(\rho) = A e^{-\rho} + B e^\rho\). The second term of this solution diverges, and thus cannot be normalized, so we must set \(B=0\); unsurprisingly, the radial wave function will drop off like \(e^{-\rho}\) at large values. In the limit that \(\rho \to 0\), we find that the \(1/\rho^2\) term dominates equation (2.59), and again we can solve the equation exactly if we ignore the other two terms, which gives \(u(\rho) = C \rho^{l+1} + D \rho^{-l}\). As negative powers diverge when we approach zero, the second term again does not give a normalizable solution (for \(l=0\) the argument is more subtle, as \(\rho^{-l}\) is then constant, but the conclusion that we must set \(D=0\) still holds), and we find that \(u(\rho)\) scales as \(\rho^{l+1}\) close to zero. An educated guess for the solution of equation (2.59) would therefore be a function that interpolates between the two limiting solutions. Introducing a new function \(v(\rho)\), we write our trial solution as \(u(\rho) = e^{-\rho} \rho^{l+1} v(\rho)\). We then find

(2.60)#\[\begin{split}\begin{align*} \frac{\mathrm{d}^2u}{\mathrm{d}\rho^2} &= e^{-\rho} \rho^l \left\lbrace \left[ \rho - 2(l+1) + \frac{l(l+1)}{\rho} \right] v(\rho) + 2 (l+1-\rho) \frac{\mathrm{d}v}{\mathrm{d}\rho} + \rho \frac{\mathrm{d}^2v}{\mathrm{d}\rho^2} \right\rbrace \\ &= \left[ 1 - \frac{\rho_0}{\rho} + \frac{l(l+1)}{\rho^2} \right] e^{-\rho} \rho^{l+1} v(\rho). \end{align*}\end{split}\]
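
The derivatives in equation (2.60) are straightforward but error-prone to do by hand; a short symbolic check (a sketch using sympy, which we assume to be available) confirms that the two lines are indeed equal:

```python
import sympy as sp

rho, l = sp.symbols('rho l', positive=True)
v = sp.Function('v')

# Trial solution u(rho) = exp(-rho) * rho^(l+1) * v(rho)
u = sp.exp(-rho) * rho**(l + 1) * v(rho)
# Right-hand side of the first line of eq. (2.60)
rhs = sp.exp(-rho) * rho**l * (
    (rho - 2*(l + 1) + l*(l + 1)/rho) * v(rho)
    + 2*(l + 1 - rho) * sp.diff(v(rho), rho)
    + rho * sp.diff(v(rho), rho, 2)
)
print(sp.simplify(sp.diff(u, rho, 2) - rhs))  # 0
```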

At first glance, it seems that we made matters significantly worse: while in equation (2.59) we had gotten rid of the first derivative term through re-scaling, we got it back in (2.60). On the other hand, the asymptotic factors \(e^{-\rho}\) and \(\rho^l\) appear on both sides of the equation and thus drop out, and we can therefore expect \(v(\rho)\) to be well-behaved enough to allow for a series solution. Rearranging terms, our equation now reads:

(2.61)#\[ \rho \frac{\mathrm{d}^2v}{\mathrm{d}\rho^2} + 2 (l+1-\rho) \frac{\mathrm{d}v}{\mathrm{d}\rho} + \left[ \rho_0 - 2(l+1) \right] v(\rho) = 0, \]

for which we try a series expansion

(2.62)#\[\begin{split}\begin{align*} v(\rho) &= \sum_{j=0}^\infty c_j \rho^j, \\ \frac{\mathrm{d}v}{\mathrm{d}\rho} &= \sum_{j=0}^\infty j c_j \rho^{j-1} = \sum_{j=0}^\infty (j+1) c_{j+1} \rho^j, \\ \frac{\mathrm{d}^2 v}{\mathrm{d}\rho^2} &= \sum_{j=0}^\infty j (j+1) c_{j+1} \rho^{j-1}, \end{align*}\end{split}\]

where in the second equality of (2.62)b we shifted the summation by one (which we can do as the \(j=0\) term was identically zero). We do not need such a shift in (2.62)c as the second derivative of \(v(\rho)\) is multiplied by \(\rho\) in equation (2.61). Substituting back and collecting like terms, we get

(2.63)#\[\begin{split}\begin{align*} 0 &= \sum_{j=0}^\infty \left\lbrace j(j+1) c_{j+1} \rho^j + 2(l+1) (j+1) c_{j+1} \rho^j - 2 j c_j \rho^j + \left[\rho_0 - 2(l+1)\right] c_j \rho^j \right\rbrace \\ &= \sum_{j=0}^\infty \left\lbrace (j+1)(j + 2l + 2) c_{j+1} + \left[\rho_0 - 2(j+l+1)\right] c_j \right\rbrace \rho^j. \end{align*}\end{split}\]

Now for the series to vanish for all values of \(\rho\), the coefficient of each power \(\rho^j\) must vanish individually, which gives us a recursion relation between the coefficients:

(2.64)#\[ c_{j+1} = \frac{2(j+l+1) - \rho_0}{(j+1)(j + 2l + 2)} c_j. \]

Given a starting value \(c_0\), we can then find all the \(c_j\)’s. Typically, series solutions come in two flavors: those for which the series terminates at some point (with polynomials as solutions) and those which go on with nonzero terms indefinitely. In this case, the non-terminating series cannot be normalized. To see why, consider the limit in which \(j\) gets large, then (2.64) goes to

\[ c_{j+1} \approx \frac{2}{j+1} c_j, \qquad \text{so} \qquad c_j \approx \frac{2^j}{j!} c_0. \]

The sum of the series with these coefficients is well known: it’s \(e^{2\rho}\). Even with the \(e^{-\rho}\) factor that we will multiply \(v(\rho)\) with to get the full solution, this series diverges at large \(\rho\), and thus will not give us normalizable solutions. The only allowable solutions are thus the ones for which the series terminates at some point, i.e., \(c_{j+1} = 0\) for some value of \(j\). If \(j_\mathrm{max}\) is the largest value of \(j\) for which \(c_j\) is nonzero, we have

(2.65)#\[ \rho_0 = 2(j_\mathrm{max}+l+1) \equiv 2n, \]

so we find that the values of \(\rho_0\) are quantized, characterized by the positive integer \(n = j_\mathrm{max} + l + 1\). The smallest possible value of \(n\) is \(1\) (for \(j_\mathrm{max} = l = 0\)), and \(n\) can take any integer value greater than zero. However, for a given value of \(n\), \(l\) is now restricted from above to \(n-1\), as for a larger value equation (2.65) cannot be satisfied for any nonnegative \(j_\mathrm{max}\).
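
The recursion (2.64), with the termination condition \(\rho_0 = 2n\), is easy to implement; here is a minimal sketch (the helper name radial_coefficients is ours, purely for illustration) that builds the coefficients of \(v(\rho)\) for given \(n\) and \(l\):

```python
def radial_coefficients(n, l, c0=1.0):
    # Coefficients c_j of v(rho) from the recursion (2.64), with rho_0 = 2n.
    # The series terminates at j_max = n - l - 1, as required by eq. (2.65).
    rho0 = 2 * n
    coeffs = [c0]
    for j in range(n - l - 1):
        coeffs.append((2*(j + l + 1) - rho0) / ((j + 1) * (j + 2*l + 2)) * coeffs[-1])
    return coeffs

print(radial_coefficients(1, 0))  # [1.0]: v is constant for the ground state
print(radial_coefficients(3, 0))  # [1.0, -2.0, 0.666...]: a quadratic polynomial
```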

To get the actual energies, we simply re-substitute the definitions of \(\rho_0\) and \(\kappa\), which gives:

(2.66)#\[ E_n = - \frac{\hbar^2}{2 m_\mathrm{e} a_0^2} \frac{1}{n^2} = -\frac{R_\mathrm{E}}{n^2}. \]

We have retrieved Bohr’s result from Section 1.2.2 (equation (1.6)). That means that we also retrieve the predictions for the absorption and emission spectra of the hydrogen atom. However, the classical orbits Bohr assumed are replaced with wave functions (sometimes referred to as ‘orbitals’), the square of which represents a probability distribution, not a trajectory through space. A measurement of an electron in a state with \(n=2\) might give a position closer to the nucleus than a measurement of an electron in the \(n=1\) state. Moreover, the positions are not restricted to a plane, but are distributed in three dimensions around the nucleus, making these states very different from the Keplerian orbits of planets around the sun.
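
Numerically, with the constants from scipy.constants (again an assumption of convenience), equation (2.66) indeed reproduces the familiar hydrogen energy levels:

```python
import numpy as np
from scipy import constants as const

a0 = 4 * np.pi * const.epsilon_0 * const.hbar**2 / (const.m_e * const.e**2)
RE = const.hbar**2 / (2 * const.m_e * a0**2)  # Rydberg energy, eq. (2.66)
for n in (1, 2, 3):
    print(f'E_{n} = {-RE / n**2 / const.e:.2f} eV')
# E_1 = -13.61 eV, E_2 = -3.40 eV, E_3 = -1.51 eV
```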

The actual polynomial functions \(v(\rho)\) that are the solutions of equation (2.61) are given in terms of the (associated) Laguerre polynomials \(L_q^p(x)\), defined by

(2.67)#\[\begin{split}\begin{align*} L_q^p(x) &= (-1)^p \left(\frac{\mathrm{d}}{\mathrm{d}x}\right)^p L_{p+q}(x) = \frac{x^{-p} e^x}{q!} \left(\frac{\mathrm{d}}{\mathrm{d}x}\right)^q \left( e^{-x} x^{p+q} \right), \\ L_q(x) &= \frac{e^x}{q!} \left(\frac{\mathrm{d}}{\mathrm{d}x}\right)^q \left( e^{-x} x^q \right). \end{align*}\end{split}\]

Given values of \(n\) and \(l\), the function \(v(\rho)\) is given by

\[ v(\rho) = L_{n-l-1}^{2l+1}(2\rho). \]
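
As a consistency check, the polynomial generated by the recursion (2.64) should agree with \(L_{n-l-1}^{2l+1}(2\rho)\) up to an overall normalization constant. A short numerical sketch, reusing the illustrative radial_coefficients helper from above together with scipy's genlaguerre:

```python
import numpy as np
from scipy.special import genlaguerre

n, l = 3, 0
rho = np.linspace(0.1, 5, 7)
coeffs = radial_coefficients(n, l)  # from the sketch below eq. (2.65)
v_series = sum(c * rho**j for j, c in enumerate(coeffs))
v_laguerre = genlaguerre(n - l - 1, 2*l + 1)(2 * rho)
print(v_laguerre / v_series)  # constant ratio: same polynomial up to normalization
```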

Substituting back, we can eventually work out the radial function \(R(r)\), which we characterize by \(n\) and \(l\). The (normalized) first six are given by

(2.68)#\[\begin{split}\begin{align*} R_{10}(r) &= \frac{2}{\sqrt{a_0^3}} e^{-r/a_0}, \\ R_{20}(r) &= \frac{1}{\sqrt{2 a_0^3}} \left(1-\frac{r}{2 a_0} \right) e^{-r/2a_0}, \\ R_{21}(r) &= \frac{1}{\sqrt{6 a_0^3}} \left(\frac{r}{2 a_0} \right) e^{-r/2a_0}, \\ R_{30}(r) &= \frac{2}{3\sqrt{3 a_0^3}} \left(1 - \frac{2r}{3 a_0} + \frac23 \left(\frac{r}{3 a_0}\right)^2 \right) e^{-r/3a_0}, \\ R_{31}(r) &= \frac{8}{9\sqrt{6 a_0^3}} \left(1 - \frac12 \frac{r}{3 a_0} \right) \left( \frac{r}{3 a_0} \right) e^{-r/3a_0}, \\ R_{32}(r) &= \frac{4}{9\sqrt{30 a_0^3}} \left( \frac{r}{3 a_0} \right)^2 e^{-r/3a_0}. \end{align*}\end{split}\]

The radial wave functions are plotted in Fig. 2.5. Note the patterns: the \(l=0\) functions decay exponentially and cross the horizontal axis \(n-1\) times (in general, \(R_{nl}\) has \(n-l-1\) of these radial nodes), while the \(l>0\) functions start at \(0\) and have a well-defined local maximum. The product of these radial functions with the spherical harmonic functions \(Y_l^m(\theta, \phi)\) for the angular part gives the full solution \(\psi_{nlm}(r, \theta, \phi)\). The full wave functions are thus characterized by the three quantum numbers \(n\), \(l\) and \(m_l\), known as the principal, orbital, and magnetic quantum number, respectively[13]. It is an easy exercise to show that for a given value of the principal quantum number \(n\), there are \(n^2\) states. As the energy depends on \(n\) alone, these states are \(n^2\)-fold degenerate.

%config InlineBackend.figure_formats = ['svg']
import numpy as np
import matplotlib.pyplot as plt
from myst_nb import glue

def R10(r):
    # Returns R_{10}(r), with r in units of the Bohr radius.
    return 2 * np.exp(-r)

def R20(r):
    # Returns R_{20}(r), with r in units of the Bohr radius.
    return (1 / np.sqrt(2)) * (1 - r/2) * np.exp(-r/2)

def R21(r):
    # Returns R_{21}(r), with r in units of the Bohr radius.
    return (1/np.sqrt(6)) * (r/2) * np.exp(-r/2)

def R30(r):
    # Returns R_{30}(r), with r in units of the Bohr radius.
    return (2 / np.sqrt(27)) * (1 - (2*r/3) + (2/3) * (r/3)**2) * np.exp(-r/3)

def R31(r):
    # Returns R_{31}(r), with r in units of the Bohr radius.
    return (8 / np.sqrt(486)) * (1 - r/6) * (r/3) * np.exp(-r/3)

def R32(r):
    # Returns R_{32}(r), with r in units of the Bohr radius.
    return (4 / np.sqrt(2430)) * (r/3)**2 * np.exp(-r/3)

r = np.linspace(0, 10, 1000)

fig, ax = plt.subplots(figsize=(6,4))

line10 = ax.plot(r, R10(r), label='$R_{10}(r)$')
line20 = ax.plot(r, R20(r), label='$R_{20}(r)$')
line21 = ax.plot(r, R21(r), label='$R_{21}(r)$')

line30 = ax.plot(r, R30(r), label='$R_{30}(r)$')
line31 = ax.plot(r, R31(r), label='$R_{31}(r)$')
line32 = ax.plot(r, R32(r), label='$R_{32}(r)$')

ax.axhline(y = 0, color = 'k', linestyle = ':')

ax.set_xlim(0,10)
ax.set_ylim(-0.2,1.0)
ax.set_xlabel('$r/a_0$')
ax.set_ylabel('$R_{nl}$')
ax.legend()
# Save graph to load in figure later (special Jupyter Book feature)
glue("hydrogenradialfunctions", fig, display=False)

Fig. 2.5 Radial part of the first six eigenfunctions of the hydrogen Hamiltonian.#
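
As a final check, the radial functions defined in the code cell above should satisfy the normalization condition \(\int_0^\infty r^2 R_{nl}(r)^2 \,\mathrm{d}r = 1\) (with \(r\) in units of \(a_0\)); a short verification, assuming the cell above has been run and scipy is available:

```python
from scipy.integrate import quad

# Each integral should come out as 1 (r in units of the Bohr radius)
for name, R in [('R10', R10), ('R20', R20), ('R21', R21),
                ('R30', R30), ('R31', R31), ('R32', R32)]:
    norm, _ = quad(lambda r: r**2 * R(r)**2, 0, np.inf)
    print(f'{name}: {norm:.6f}')
```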

2.4. Problems#