8.1. Symmetric matrices#
8.1.1. Introduction#
Definition 8.1.1
A matrix \(A\) is called a symmetric matrix if \(A^T = A\).
Note that this definition implies that a symmetric matrix must be a square matrix.
Example 8.1.1
The matrices
are symmetric. The matrices
are not symmetric.
Symmetric matrices appear in many different contexts. In statistics the covariance matrix is an example of a symmetric matrix. In engineering the so-called elastic strain matrix and the moment of inertia tensor provide examples.
The crucial thing about symmetric matrices is stated in the main theorem of this section.
Theorem 8.1.1
Every symmetric matrix \(A\) is orthogonally diagonalisable.
By this we mean: there exist an orthogonal matrix \(Q\) and a diagonal matrix \(D\) for which \(A = QDQ^{-1} = QDQ^T\).
Conversely, every orthogonally diagonalisable matrix is symmetric.
This theorem is known as the Spectral Theorem for Symmetric Matrices. The word spectrum refers to the set of eigenvalues of a matrix (or, more generally, of a linear transformation).
So, for a symmetric matrix an orthonormal basis of eigenvectors always exists. For the inertia tensor of a 3D body such a basis corresponds to the (perpendicular) principal axes.
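For readers who like to experiment, the theorem can also be checked numerically. The following is a minimal sketch assuming NumPy is available; `np.linalg.eigh` is NumPy's eigensolver for symmetric matrices and returns an orthonormal set of eigenvectors.

```python
import numpy as np

# Numerical illustration of Theorem 8.1.1: for a (random) symmetric matrix,
# eigh returns real eigenvalues and an orthogonal matrix Q of eigenvectors,
# so that A = Q D Q^T.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                        # symmetrise: A^T = A

eigenvalues, Q = np.linalg.eigh(A)       # eigh is tailored to symmetric matrices
D = np.diag(eigenvalues)

print(np.allclose(Q @ Q.T, np.eye(4)))   # Q is orthogonal
print(np.allclose(A, Q @ D @ Q.T))       # A = Q D Q^T
```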
Proof of the converse in Theorem 8.1.1
Recall that an orthogonal matrix is a matrix \(Q\) for which \(Q^{-1} = Q^T\).
With this reminder, the proof takes just one line.
If \(A = QDQ^{-1} = QDQ^T\),
then \(A^T = (QDQ^{-1} )^T = (Q^{-1} )^TD^TQ^T = (Q^T)^TD^TQ^T = QDQ^T = A\).
We postpone the proof of the other implication until Subsection 8.1.3.
We end this introductory section with one representative example.
Example 8.1.2
Let \(A\) be given by \(A = \begin{pmatrix} 1&2\\2&-2 \end{pmatrix}\).
The eigenvalues are found via the characteristic equation \(\det(A - \lambda I) = (1-\lambda)(-2-\lambda) - 4 = \lambda^2 + \lambda - 6 = (\lambda - 2)(\lambda + 3) = 0\).
They are \(\lambda_1 = 2\) and \(\lambda_2 = -3\).
Corresponding eigenvectors are \(\mathbf{v}_1 = \begin{pmatrix} 2\\1 \end{pmatrix}\) for \(\lambda_1\), and \(\mathbf{v}_2 = \begin{pmatrix} -1\\2 \end{pmatrix}\) for \(\lambda_2\).
The eigenvectors are orthogonal,
and \(A\) can be diagonalised as
In Figure 8.1.1 the image of the unit circle under the transformation \(\vect{x} \mapsto A\vect{x}\) is shown. In the picture on the right,
are two orthogonal unit eigenvectors.
Fig. 8.1.1 The transformation \(T(\vect{x}) = \begin{pmatrix} 1&2\\2&-2 \end{pmatrix}\vect{x}\). The vectors \(\vect{q}_1\) and \(\vect{q}_2\) are two orthogonal vectors on the unit circle that are mapped onto multiples of themselves.#
Furthermore, if we normalise the eigenvectors, i.e., the columns of \(P\), we find the following diagonalisation of \(A\) with an orthogonal matrix \(Q\):
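As a numerical cross-check of this example, here is a sketch assuming NumPy; the orthogonal matrix returned by `eigh` may differ from \(Q\) above by the order and signs of its columns.

```python
import numpy as np

# Numerical check of Example 8.1.2.
A = np.array([[1.0, 2.0],
              [2.0, -2.0]])

eigenvalues, Q = np.linalg.eigh(A)
print(eigenvalues)                         # approximately [-3.  2.]
print(np.allclose(Q.T @ Q, np.eye(2)))     # columns of Q are orthonormal
print(np.allclose(A, Q @ np.diag(eigenvalues) @ Q.T))
```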
8.1.2. The essential properties of symmetric matrices#
Proposition 8.1.1
Suppose \(A\) is a symmetric matrix.
If \(\mathbf{v}_1\) and \(\mathbf{v}_2\) are eigenvectors of \(A\) for different eigenvalues, then \(\mathbf{v}_1\perp \mathbf{v}_2\).
Proof of Proposition 8.1.1
Suppose \(\mathbf{v}_1\) and \(\mathbf{v}_2\) are eigenvectors of the symmetric matrix \(A\) for the different eigenvalues \(\lambda_1,\lambda_2\). We want to show that \(\mathbf{v}_1 \ip \mathbf{v}_2 = 0\).
The trick is to consider the expression
On the one hand
On the other hand
Since we assumed that \(A^T = A\) we can extend the chain of identities:
So we have shown that
Since
it follows that indeed
as was to be shown.
Exercise 8.1.1
Prove the following slight generalisation of Proposition 8.1.1.
If \(\vect{u}\) is an eigenvector of \(A\) for the eigenvalue \(\lambda\), and \(\vect{v}\) is an eigenvector of \(A^T\) for a different eigenvalue \(\mu\), then \(\vect{u} \perp \vect{v}\).
Solution to Exercise 8.1.1
The proof is completely analogous to the proof of Proposition 8.1.1. Suppose
We consider the expression \(\mathbf{u} \ip A \mathbf{v} = \mathbf{u}^T A \mathbf{v}\).
On the one hand
On the other hand
Comparing Equation (8.1.2) and Equation (8.1.3) we can conclude that \(\mathbf{u}\ip\mathbf{v} = 0\), i.e., \(\mathbf{u}\) and \(\mathbf{v}\) are indeed orthogonal.
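A small numerical illustration of this exercise, using a deliberately non-symmetric matrix (a sketch assuming NumPy; the matrix is our own choice, not from the text):

```python
import numpy as np

# u is an eigenvector of A for λ = 2, v is an eigenvector of A^T for μ = 3,
# and indeed u ⊥ v.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

u = np.array([1.0, 0.0])        # A u = 2 u
v = np.array([0.0, 1.0])        # A^T v = 3 v
print(np.allclose(A @ u, 2 * u), np.allclose(A.T @ v, 3 * v))
print(u @ v)                    # 0.0 -- u and v are orthogonal
```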
Proposition 8.1.2
All eigenvalues of symmetric matrices are real.
The easiest proof uses complex numbers. Feel free to skip it, especially if you do not feel comfortable with complex numbers.
Proof of Proposition 8.1.2
For two vectors \(\mathbf{u},\mathbf{v}\) in \(\C^n\) we consider the expression
If we take \(\mathbf{v}\) equal to \(\mathbf{u}\) we get
where \(|u_i|\) denotes the modulus of the complex number \(u_i\). This sum of squares (of real numbers) is a non-negative real number. We also see that \(\overline{\mathbf{u}}^{T}\mathbf{u} = 0\) only holds if \(\mathbf{u} = \mathbf{0}\).
It can also be verified that
Now suppose that \(\lambda\) is an eigenvalue of the symmetric matrix \(A\), and \(\mathbf{v}\) is a non-zero (possibly complex) eigenvector of \(A\) for the eigenvalue \(\lambda\). Note that, since \(A\) is real and symmetric, \(\overline{{A}^T} = \overline{A} = A\). To prove that \(\lambda\) is real, we will show that \(\overline{\lambda} = \lambda\).
We use essentially the same ‘trick’ as in Equation (8.1.1) in the proof of Proposition 8.1.1.
On the one hand
On the other hand,
So we have that
Since we assumed that \(\mathbf{v}\) is not the zero vector, we have \(\overline{\mathbf{v}}^T \mathbf{v} \neq 0\), and so it follows that \(\overline{\lambda} = \lambda\), which is equivalent to \(\lambda\) being real.
Example 8.1.3
Let \(A = \begin{pmatrix} a&b\\b&d \end{pmatrix} \).
Then the characteristic polynomial is computed as \(\det(A - \lambda I) = (a-\lambda)(d-\lambda) - b^2 = \lambda^2 - (a+d)\lambda + (ad - b^2)\).
The discriminant of this quadratic polynomial is given by \((a+d)^2 - 4(ad - b^2) = (a-d)^2 + 4b^2\).
The discriminant is non-negative, so the characteristic polynomial has only real roots, and consequently the eigenvalues of the matrix are real.
Obviously, an elementary approach like this will soon get very complicated for larger \(n \times n\)-matrices.
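For the \(2\times 2\) case the discriminant computation can also be verified symbolically. The following sketch assumes SymPy is available; the symbol names are our own.

```python
import sympy as sp

# Symbolic check of Example 8.1.3.
a, b, d, lam = sp.symbols('a b d lambda', real=True)
A = sp.Matrix([[a, b],
               [b, d]])

char_poly = (A - lam * sp.eye(2)).det()           # (a-λ)(d-λ) - b²
disc = sp.discriminant(char_poly, lam)            # discriminant in λ

# The discriminant equals (a-d)² + 4b², which is non-negative,
# so both roots (the eigenvalues) are real.
print(sp.simplify(disc - ((a - d)**2 + 4*b**2)))  # 0
```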
Lastly we come to the third of three essential properties of symmetric matrices.
Proposition 8.1.3
For each eigenvalue of a symmetric matrix the geometric multiplicity is equal to the algebraic multiplicity.
We will incorporate the proof of this proposition into the proof of the main theorem in Subsection 8.1.3. For now, we will look at a few examples.
Example 8.1.4
We will verify that the symmetric matrix \(A = \begin{pmatrix} 1 & 0 & 1\\0 & 1 & 2 \\ 1 & 2 & 5 \end{pmatrix}\) is diagonalisable and has mutually orthogonal eigenvectors.
We first compute the characteristic polynomial.
Expansion along the first column gives
So \(A\) has the real eigenvalues \(\lambda_{1} = 1\), \(\lambda_2 = 6\) and \(\lambda_3 = 0\). Since all eigenvalues have algebraic multiplicity \(1\), the corresponding eigenvectors will give a basis of eigenvectors, and we can immediately conclude that \(A\) is diagonalisable.
The eigenvectors are found to be
We see that the three eigenvectors form an orthogonal set, in accordance with Proposition 8.1.1.
Example 8.1.5
Consider the matrix \(A = \begin{pmatrix} 2&2&4\\2 & -1 & 2 \\ 4&2&2 \end{pmatrix}\).
A (rather involved) computation yields the eigenvalues \(\lambda_{1,2} = -2\) and \(\lambda_3 = 7\). Indeed all eigenvalues are real, conforming to Proposition 8.1.2.
Next we find the eigenvectors and the geometric multiplicities of the eigenvalues.
For \(\lambda = -2\) we find via row reduction
the two linearly independent eigenvectors \(\mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \\ -1\end{pmatrix}\) and \(\mathbf{v}_2 = \begin{pmatrix} 1 \\ -2 \\ 0\end{pmatrix}\). The geometric multiplicity of \(\lambda_{1,2}\) is equal to \(2\). The other eigenvalue has algebraic multiplicity \(1\), so its geometric multiplicity has to be \(1\) as well. This verifies Proposition 8.1.3.
Lastly, we leave it to you to check that an eigenvector for \(\lambda_3 = 7\) is given by \(\mathbf{v}_3 = \begin{pmatrix} 2 \\ 1 \\ 2\end{pmatrix}\), and that both \(\mathbf{v}_3 \perp \mathbf{v}_1\) and \(\mathbf{v}_3 \perp \mathbf{v}_2\), so that Proposition 8.1.1 is satisfied as well.
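These claims are easily cross-checked numerically (a sketch assuming NumPy; `eigh` returns the eigenvalues in increasing order):

```python
import numpy as np

# Numerical check of Example 8.1.5.
A = np.array([[2.0, 2.0, 4.0],
              [2.0, -1.0, 2.0],
              [4.0, 2.0, 2.0]])

eigenvalues, Q = np.linalg.eigh(A)
print(np.round(eigenvalues, 6))          # [-2. -2.  7.]

v3 = np.array([2.0, 1.0, 2.0])           # claimed eigenvector for λ = 7
print(np.allclose(A @ v3, 7 * v3))       # True
```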
8.1.3. Orthogonal diagonalisability of symmetric matrices#
Let us restate the main theorem (Theorem 8.1.1) about symmetric matrices:
A matrix \(A\) is symmetric if and only if it is orthogonally diagonalisable.
Note that this also establishes the property that for each eigenvalue of a symmetric matrix the geometric multiplicity equals the algebraic multiplicity (Proposition 8.1.3).
We will put the intricate proof at the end of the subsection, and first consider two examples.
The first example is a continuation of the earlier Example 8.1.5.
Example 8.1.6
The matrix \(A = \begin{pmatrix} 2&2&4\\2 & -1 & 2 \\ 4&2&2 \end{pmatrix}\) was shown to have the eigenvalues/eigenvectors
The pairs \(\mathbf{v}_1, \mathbf{v}_3\) and \(\mathbf{v}_2, \mathbf{v}_3\) are ‘automatically’ orthogonal, since they belong to different eigenvalues (Proposition 8.1.1).
For the eigenspace \(E_{-2} = \Span{\mathbf{v}_1, \mathbf{v}_2}\) we can use Gram-Schmidt to get an orthogonal basis:
Normalising the orthogonal basis \(\{\mathbf{u}_1, \mathbf{u}_2, \mathbf{v}_3\}\) and putting them side by side in a matrix yields the orthogonal matrix
The conclusion becomes that
The procedure followed in Example 8.1.6 leads to a general algorithm for constructing an orthogonal diagonalisation (a sketch in code is given after the algorithm).
Algorithm 8.1.1
Compute the eigenvalues of the matrix.
Find a basis for each eigenspace.
Use the Gram-Schmidt procedure to turn these bases into orthonormal bases for the eigenspaces.
Put everything together in the matrices \(D\) and \(Q\).
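The steps above can be mirrored in code. This is a minimal NumPy sketch, not part of the original text: the helper name `orthogonal_diagonalisation` and the tolerance used to group eigenvalues are our own choices, and a QR factorisation plays the role of the Gram-Schmidt procedure on each eigenspace.

```python
import numpy as np

# A minimal sketch of Algorithm 8.1.1. For real work one would simply call
# np.linalg.eigh; this sketch only mirrors the steps of the algorithm.
def orthogonal_diagonalisation(A, tol=1e-8):
    eigenvalues, V = np.linalg.eig(A)          # step 1: compute the eigenvalues
    eigenvalues, V = eigenvalues.real, V.real  # they are real, since A^T = A
    columns, diagonal = [], []
    for lam in np.unique(np.round(eigenvalues, 8)):
        basis = V[:, np.abs(eigenvalues - lam) < tol]  # step 2: basis of the eigenspace
        Q_lam, _ = np.linalg.qr(basis)                 # step 3: Gram-Schmidt (via QR)
        columns.append(Q_lam)
        diagonal.extend([lam] * Q_lam.shape[1])
    return np.hstack(columns), np.diag(diagonal)       # step 4: assemble Q and D

A = np.array([[2.0, 2.0, 4.0],
              [2.0, -1.0, 2.0],
              [4.0, 2.0, 2.0]])
Q, D = orthogonal_diagonalisation(A)
print(np.allclose(Q.T @ Q, np.eye(3)))   # Q is orthogonal
print(np.allclose(A, Q @ D @ Q.T))       # A = Q D Q^T
```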
One more example to illustrate matters, before we get to the proof (which you may also skip, jumping ahead to Subsection 8.1.5).
Example 8.1.7
Let the symmetric matrix \(A\) be given by \( A = \begin{pmatrix} 1 & 2 & 2 & 0 \\ 2 & -1 & 0 & 2 \\ 2 & 0 & -1 & -2 \\ 0 & 2 & -2 & 1 \end{pmatrix}\).
The hard part is to find the eigenvalues (after all, how does one solve an equation of degree four?). Once we know the eigenvalues, the other steps are ‘routine’.
It turns out that \(A\) has the double eigenvalues \(\lambda_{1,2} = 3\) and \(\lambda_{3,4} = -3\).
To find the eigenvectors for the eigenvalue \(3\) we row reduce the matrix \((A - 3I)\).
We can read off two linearly independent eigenvectors
As in Example 8.1.6 we can construct an orthogonal basis for the eigenspace \(E_{3}\):
Likewise we can first find a ‘natural’ basis for the eigenspace \(E_{-3}\) by row reducing \((A + 3I)\):
This gives the two linearly independent eigenvectors \(\vect{v}_3 = \left(\begin{array}{c} -1 \\ 1 \\ 1 \\ 0 \end{array} \right)\) and \(\vect{v}_4 = \left(\begin{array}{c} 1 \\ -2 \\ 0 \\ 1 \end{array} \right)\).
Again these can be orthogonalised, and then we find the following complete set of eigenvectors, i.e., a basis for \(\R^4\):
We conclude that \(A = QDQ^{-1}\), where
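Whatever the explicit form of \(Q\) and \(D\), the eigenvalues and the factorisation can be cross-checked numerically (a sketch assuming NumPy; the \(Q\) returned by `eigh` may differ from the one above by the order and signs of its columns):

```python
import numpy as np

# Numerical check of Example 8.1.7; eigh sorts the eigenvalues,
# so we expect approximately [-3, -3, 3, 3].
A = np.array([[1.0, 2.0, 2.0, 0.0],
              [2.0, -1.0, 0.0, 2.0],
              [2.0, 0.0, -1.0, -2.0],
              [0.0, 2.0, -2.0, 1.0]])

eigenvalues, Q = np.linalg.eigh(A)
print(np.round(eigenvalues, 6))                        # [-3. -3.  3.  3.]
print(np.allclose(A, Q @ np.diag(eigenvalues) @ Q.T))  # True
```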
And now it’s time for the proof of the main theorem. The proof is technical and intricate; skip it if you like.
Proof of Theorem 8.1.1
Suppose that \(A\) is a symmetric \(n \times n\)-matrix. We know there are \(n\) real, possibly multiple, eigenvalues \(\lambda_1, \lambda_2, \ldots, \lambda_n\). Suppose \(\vect{q}_1\) is an eigenvector for \(\lambda_1\) with unit length. We can extend \(\{\vect{q}_1\}\) to an orthonormal basis \(\{\vect{q}_1,\vect{q}_2,\ldots,\vect{q}_n\}\). Let \(Q_1\) be the matrix with the columns \(\vect{q}_1,\vect{q}_2,\ldots,\vect{q}_n\).
It can be shown that \(A_1 = Q_1^{-1}AQ_1 = Q_1^TAQ_1\) is of the form
where \(B_1\) is an \((n-1)\times(n-1)\)-matrix that is also symmetric.
Namely, the first column of \(A_1\) can be computed as
and \(Q_1^{-1}\vect{q}_1\) is the first column of \(Q_1^{-1}Q_1\), which is \(\vect{e}_1\).
This shows that the first column of \(A_1\) must indeed be \(\lambda_1\vect{e}_1 = \left(\begin{array}{c} \lambda_1 \\ 0 \\ \vdots \\ 0 \end{array}\right)\).
Since \(A\) is symmetric and \(Q_1\) is by construction an orthogonal matrix,
So \(A_1\) is also symmetric. Thus, since the first column of \(A_1\) ends in \(n-1\) zeros, its first row does too.
Since \(A\) and \(A_1\) are similar, they have the same eigenvalues. It follows that \(B_1\) has the eigenvalues \(\lambda_2, \ldots, \lambda_n\).
We can apply the same construction to \(B_1\), yielding
Note that in this formula the matrices have size \((n-1)\) by \((n-1)\).
If we then define
it follows that
Continuing like this we find
This proves that \(A\) is diagonalisable, with \(Q = Q_1Q_2 \cdots Q_{n-1}\) as a diagonalising matrix.
Moreover, since the product of orthogonal matrices is orthogonal, \(A\) is in fact orthogonally diagonalisable.
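One step of this construction can be imitated numerically. The sketch below (assuming NumPy; the helper name `deflation_step` is our own) extends a unit eigenvector to an orthonormal basis via a QR factorisation and conjugates \(A\) by the resulting orthogonal matrix, using the matrix of Example 8.1.5.

```python
import numpy as np

# One deflation step from the proof of Theorem 8.1.1: given a unit eigenvector
# q1 of A, extend it to an orthonormal basis of R^n and conjugate A with the
# resulting orthogonal matrix Q1.
def deflation_step(A, q1):
    n = len(q1)
    M = np.column_stack([q1, np.eye(n)])   # q1 followed by the standard basis
    Q1, _ = np.linalg.qr(M)                # orthonormalise; first column is ±q1
    return Q1, Q1.T @ A @ Q1

A = np.array([[2.0, 2.0, 4.0],
              [2.0, -1.0, 2.0],
              [4.0, 2.0, 2.0]])
q1 = np.array([2.0, 1.0, 2.0]) / 3.0       # unit eigenvector for the eigenvalue 7
Q1, A1 = deflation_step(A, q1)

# A1 has first row and column (7, 0, 0); the trailing 2x2 block is the
# symmetric matrix B1 (here even equal to -2*I, since -2 is a double eigenvalue).
print(np.round(A1, 6))
```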
Example 8.1.8
We will illustrate the proof for the matrix
Since
we have as a starter the eigenvalue and corresponding eigenvector
An orthogonal basis for \(\mathbb{R}^4\), starting with this first eigenvector, is for instance
Rescaling and putting them into a matrix yields
Next we compute
This is indeed of the form stated in the proof.
We continue with the matrix \(B_1 = \left(\begin{array}{ccc} 2 & \sqrt{3} & \sqrt{2} \\ \sqrt{3} & 0 & -\sqrt{6} \\ \sqrt{2} & -\sqrt{6} & 1 \end{array} \right)\).
\(B_1\) has eigenvalue \(-3\) with eigenvector \(\vect{u}_1 = \left(\begin{array}{c} 1 \\ -\sqrt{3} \\ -\sqrt{2} \end{array} \right)\).
Again we extend to an orthogonal basis for \(\mathbb{R}^3\). For instance,
If we normalise and use them as the columns of \(\tilde{Q}_2\) as in the proof of Theorem 8.1.1, we find as second matrix in that construction
And then
indeed a diagonal matrix.
For this example the matrix has a second double eigenvalue, \(\lambda_{3,4} = 3\). Because of that, the construction takes one step less than in the general case.
Defining \(Q = Q_1Q_2\), we can rewrite the last identity as \(A = QDQ^{T}\). Written out, \(Q\) is the matrix
So we see that \(A\) has the ‘simpler’ eigenvectors
Note: given the eigenvalues, these eigenvectors could have been found more efficiently by solving the systems \((A - \lambda_iI)\vect{x} = \vect{0}\) and then orthogonalising with the Gram-Schmidt procedure, as is done in Example 8.1.6.
The importance of the step-by-step reduction is that it shows that from the ‘minimal’ assumptions of symmetry and the existence of real eigenvalues it is possible to create an orthogonal diagonalisation.
8.1.4. Maximising \(||A\vect{x}||\) for a symmetric matrix \(A\)#
How much can a vector \(\vect{x}\) in \(\R^{n}\) ‘blow up’ when multiplied by an \(m \times n\)-matrix \(A\)? To answer this question we have to consider how to maximise the ratio
for non-zero vectors \(\vect{x}\). Since
we may restrict ourselves to vectors of norm \(1\). Then the denominator in Equation (8.1.4) becomes \(1\), so we just have to maximise \(\norm{A\vect{x}}\).
The general case, for non-square matrices, will be handled in Subsection 8.3.4. For symmetric matrices the question is answered by the next proposition.
Proposition 8.1.4
Suppose \(A\) is a symmetric matrix. Then the maximum value that \(\norm{A\mathbf{x}}\) attains on the set of unit vectors is equal to \(|\lambda_{\operatorname{max}}|\), where \(\lambda_{\operatorname{max}}\) is the eigenvalue of largest absolute value. In formula form:
We will give a proof that makes good use of the existence of an orthogonal basis of eigenvectors. But first we give an example that captures the main idea.
Example 8.1.9
The (symmetric) matrix \(A = \begin{pmatrix} -1 & 4 \\ 4 & -1 \end{pmatrix}\) has the eigenvalues \(\lambda_1 = -5\) and \(\lambda_2 = 3\) with corresponding unit eigenvectors \(\mathbf{u}_1 = \dfrac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}\) and \(\mathbf{u}_2 = \dfrac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}\) respectively. So according to Proposition 8.1.4 the maximum value of \(\norm{A\vect{x}}\) on the set of vectors with norm \(1\) must be \(5\).
First of all, for \(\vect{x} = \vect{u}_1\) it holds that \(\norm{A\vect{u}_1} = \norm{-5\vect{u}_1} = 5\).
Second, suppose \(\vect{x} \) is an arbitrary unit vector. We will in fact show that \(\norm{A\vect{x}}^2 \leq 5^2\). Since \(\{\vect{u}_1,\vect{u}_2\}\) is a basis, \(\vect{x} = c_1\vect{u}_1 + c_2\vect{u}_2 \), for some parameters \(c_1, c_2\). Then, since \(\vect{u}_1\) and \(\vect{u}_2\) are orthogonal unit vectors,
so \(c_1^2 + c_2^2 = \norm{\vect{x}}^2 = 1\).
Likewise,
So we have
which implies that indeed \(\norm{A\vect{x}} \leq 5\) for all vectors \(\vect{x}\) with \(\norm{\vect{x}} = 1\).
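The same conclusion can be illustrated numerically by sampling unit vectors on the circle (a sketch assuming NumPy):

```python
import numpy as np

# Numerical illustration of Example 8.1.9: compare the largest value of ||Ax||
# over unit vectors x with |λ_max| = 5.
A = np.array([[-1.0, 4.0],
              [4.0, -1.0]])

theta = np.linspace(0, 2 * np.pi, 2001)
X = np.vstack([np.cos(theta), np.sin(theta)])     # unit vectors as columns
norms = np.linalg.norm(A @ X, axis=0)

print(np.round(norms.max(), 4))                   # approximately 5.0
print(np.max(np.abs(np.linalg.eigvalsh(A))))      # 5.0
```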
The second example shows that the property may fail when the matrix is not symmetric.
Example 8.1.10
The matrix \(B = \begin{pmatrix} 3 & 4 \\ 0 & 3\end{pmatrix}\) has the double eigenvalue \(\lambda_1 = \lambda_2 = 3\), and for the unit vector \(\mathbf{x} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\) it holds that \( \norm{B\vect{x}} = \norm{\begin{pmatrix} 4\\3 \end{pmatrix}} = 5 > 3 = |\lambda_1|\).
As mentioned, Example 8.1.9 contains the main idea, but for a proof of the general situation you can open the following exposition.
Proof of Proposition 8.1.4
Suppose \(A\) is a symmetric \(n \times n\)-matrix. Then \(A\) has an orthonormal basis \(\vect{u}_1, \vect{u}_2,\ldots,\vect{u}_n\) of eigenvectors for the eigenvalues \(\lambda_1, \ldots, \lambda_n\), where we may suppose that they are ordered by decreasing absolute value, i.e., \(|\lambda_1| \geq |\lambda_2| \geq \cdots \geq |\lambda_n|\).
First of all
so there is at least one unit vector whose image under \(A\) has norm \(|\lambda_1|\), as required by the proposition.
It remains to show that for an arbitrary unit vector \(\vect{x}\) always
We will in fact show that \(\norm{A\mathbf{x}}^2 \leq |\lambda_1|^2\).
Since \(\{\vect{u}_1, \ldots, \vect{u}_n \}\) is a basis of \(\R^n\) it follows that
From the orthonormality of the \(\vect{u}_i\) it follows that
thus \(c_1^2 + \cdots + c_n^2=1\).
Next, invoking that each \(\vect{u}_i\) is an eigenvector for \(\lambda_i\) and again that the \(\vect{u}_i\) form an orthonormal set, we get
At the \(\leq\) step we used that \(\lambda_i^2 \leq \lambda_1^2\), for \(i = 2, \ldots, n\).
In the last subsection we will show how the orthogonal diagonalisation can be rewritten in an interesting and meaningful way.
8.1.5. The spectral decomposition of a symmetric matrix#
Let’s take up an earlier example (Example 8.1.2) to illustrate what the spectral decomposition is about.
Example 8.1.11
For the matrix \(A = \begin{pmatrix} 1&2\\2&-2 \end{pmatrix}\) we found the orthogonal diagonalisation
This is of the form
Recall the column-row expansion of the matrix product. For two \(2\times 2\)-matrices this reads
Applying this to the last expression for \(A = QDQ^T\) we find
The matrices
represent the orthogonal projections onto the one-dimensional subspaces \(\Span{\mathbf{q}_1}\) and \(\Span{\mathbf{q}_2}\).
Furthermore these one-dimensional subspaces are orthogonal to each other.
So we have that this symmetric matrix can be written as a linear combination of matrices that represent orthogonal projections.
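Written out numerically, this spectral decomposition looks as follows (a sketch assuming NumPy; `np.outer` builds the projection matrices \(\mathbf{q}_i\mathbf{q}_i^T\)):

```python
import numpy as np

# The spectral decomposition of Example 8.1.11.
A = np.array([[1.0, 2.0],
              [2.0, -2.0]])
q1 = np.array([2.0, 1.0]) / np.sqrt(5)     # unit eigenvector for λ1 = 2
q2 = np.array([-1.0, 2.0]) / np.sqrt(5)    # unit eigenvector for λ2 = -3

P1 = np.outer(q1, q1)                      # orthogonal projection onto Span{q1}
P2 = np.outer(q2, q2)                      # orthogonal projection onto Span{q2}

print(np.allclose(A, 2 * P1 + (-3) * P2))  # True:  A = λ1 P1 + λ2 P2
print(np.allclose(P1 @ P1, P1), np.allclose(P1 @ P2, np.zeros((2, 2))))
```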
The construction we performed in the last example can be generalised; this is the content of the last theorem in this section.
Theorem 8.1.2 (Spectral Decomposition of Symmetric Matrices)
Every \(n \times n\) symmetric matrix \(A\) is the linear combination
of \(n\) matrices \(P_i\) that represent orthogonal projections onto mutually orthogonal one-dimensional subspaces.
Equation (8.1.5) is referred to as being a spectral decomposition of the matrix \(A\).
Proof of Theorem 8.1.2
For a general \(n\times n\) symmetric matrix \(A\), there exists an orthogonal diagonalisation
Exactly as in Example 8.1.11 we can use the column-row expansion of the matrix product to derive
where the vectors \(\mathbf{q}_i\) of course are the (orthonormal) columns of the diagonalising matrix \(Q\). This is indeed a linear combination of orthogonal projections, as was to be shown.
Exercise 8.1.2
The eigenvalues of the matrix \(A=\begin{pmatrix} 2 & 1 & 0 \\ 1 & 3 & 1\\ 0 & 1& 2 \end{pmatrix}\) are \(1\), \(2\) and \(4\).
Find the spectral decomposition of \(A\).
Solution to Exercise 8.1.2
We first find an orthogonal diagonalisation \(QDQ^T\) of \(A\), which results in
Using the column-row expansion of the matrix product then results in:
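Whatever the precise form of the resulting expression, the decomposition can be cross-checked numerically. In the sketch below (assuming NumPy) the unit eigenvectors were obtained by solving \((A - \lambda I)\mathbf{x} = \mathbf{0}\) for \(\lambda = 1, 2, 4\); up to signs they should match the columns of \(Q\).

```python
import numpy as np

# Verifying the spectral decomposition of Exercise 8.1.2.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
q1 = np.array([1.0, -1.0, 1.0]) / np.sqrt(3)   # eigenvalue 1
q2 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)   # eigenvalue 2
q3 = np.array([1.0, 2.0, 1.0]) / np.sqrt(6)    # eigenvalue 4

A_rebuilt = 1 * np.outer(q1, q1) + 2 * np.outer(q2, q2) + 4 * np.outer(q3, q3)
print(np.allclose(A, A_rebuilt))               # True
```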
If in Theorem 8.1.2 the projections onto eigenvectors for the same eigenvalue are grouped together, then the following alternative form of the spectral decomposition results.
Corollary 8.1.1 (Spectral Theorem, alternative version)
Every symmetric \(n \times n\)-matrix \(A\) can be written as a linear combination of the orthogonal projections onto its (orthogonal) eigenspaces.
where \(P_i\) denotes the orthogonal projection onto the eigenspace \(E_{\lambda_i}\).
Proof of Corollary 8.1.1
We know that
If all eigenvalues \(\lambda_1, \ldots, \lambda_n\) are different, this is already a decomposition of the required form.
If \(\lambda_i\) is an eigenvalue of multiplicity \(m\) with \(m\) orthonormal eigenvectors \(\vect{q}_1, \ldots, \vect{q}_m\), then the corresponding terms can be grouped as \(\lambda_i\left(\vect{q}_1\vect{q}_1^T + \cdots + \vect{q}_m\vect{q}_m^T\right) = \lambda_i Q_iQ_i^T\), where \(Q_i = \begin{pmatrix} \vect{q}_1 & \cdots & \vect{q}_m\end{pmatrix}\).
The matrix \(P_i = Q_iQ_i^T\) is precisely the orthogonal projection onto the eigenspace \(E_{\lambda_i}\).
The following example provides an illustration.
Example 8.1.12
For the matrix \(A = \begin{pmatrix} 1 & 2 & 2 & 0 \\ 2 & -1 & 0 & 2 \\ 2 & 0 & -1 & -2 \\ 0 & 2 & -2 & 1 \end{pmatrix}\) we had already found the orthogonal decomposition \(A = QDQ^{-1}= QDQ^T\) with
and
The spectral decomposition according to Corollary 8.1.1 then becomes
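The grouped decomposition can again be checked numerically. In the sketch below (assuming NumPy) the projections onto the two eigenspaces are built from the orthonormal eigenvectors that `eigh` returns, grouped by eigenvalue.

```python
import numpy as np

# Grouped spectral decomposition (Corollary 8.1.1) for the matrix of
# Example 8.1.12: A = 3 P_3 + (-3) P_{-3}.
A = np.array([[1.0, 2.0, 2.0, 0.0],
              [2.0, -1.0, 0.0, 2.0],
              [2.0, 0.0, -1.0, -2.0],
              [0.0, 2.0, -2.0, 1.0]])

eigenvalues, Q = np.linalg.eigh(A)          # eigenvalues approx [-3, -3, 3, 3]
P_minus3 = Q[:, :2] @ Q[:, :2].T            # projection onto the eigenspace E_{-3}
P_plus3 = Q[:, 2:] @ Q[:, 2:].T             # projection onto the eigenspace E_{3}

print(np.allclose(A, 3 * P_plus3 - 3 * P_minus3))   # True
print(np.allclose(P_plus3 @ P_plus3, P_plus3))      # P_3 is a projection
```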
8.1.6. Grasple exercises#
Grasple Exercise 8.1.1
To check whether a matrix \(A\) is symmetric.
Grasple Exercise 8.1.2
Recognising orthogonal matrices.
Grasple Exercise 8.1.3
To check whether a matrix \(A\) is orthogonal and, if it is, to give its inverse.
Grasple Exercise 8.1.4
To check whether a matrix \(A\) is orthogonal and, if it is, to give its inverse.
Grasple Exercise 8.1.5
To give an orthogonal diagonalisation of a \(2\times2\)-matrix.
Grasple Exercise 8.1.6
To give an orthogonal diagonalisation of a \(2\times2\)-matrix.
Grasple Exercise 8.1.7
To give an orthogonal diagonalisation of a \(3\times3\)-matrix.
Grasple Exercise 8.1.8
To give an orthogonal diagonalisation of a \(3\times3\)-matrix.
Grasple Exercise 8.1.9
To give an orthogonal diagonalisation of a \(3\times3\)-matrix.
Grasple Exercise 8.1.10
To give an orthogonal diagonalisation of a \(3\times3\)-matrix.
Grasple Exercise 8.1.11
To give an orthogonal diagonalisation of a \(4\times4\)-matrix.
Grasple Exercise 8.1.12
One step in an orthogonal diagonalisation (as in the proof of the existence of an orthogonal diagonalisation).
Grasple Exercise 8.1.13
Sequel to previous question, now for a \(4\times4\)-matrix.
Grasple Exercise 8.1.14
To give an example of a symmetric \(2\times2\)-matrix with one eigenvalue and one eigenvector given.
Grasple Exercise 8.1.15
To give an example of a symmetric \(3\times3\)-matrix with given eigenvalues and eigenspace.
Grasple Exercise 8.1.16
To find a third eigenvector of a symmetric matrix.
Grasple Exercise 8.1.17
Deciding about the spectral decomposition of a \(3\times3\)-matrix (with a lot of the prerequisites laid out).
The following exercises have a more theoretical flavour.
Grasple Exercise 8.1.18
True/False question to think about symmetric versus orthogonally diagonalisable.
Grasple Exercise 8.1.19
About the (non-)symmetry of \(A + A^T\) and \(A - A^T\).
Grasple Exercise 8.1.20
About the (non-)symmetry of products.
Grasple Exercise 8.1.21
If \(A\) and \(B\) are symmetric, what about \(A^2\), \(A^{-1}\) and \(AB\)?
Grasple Exercise 8.1.22
True or false. If \(A\) is symmetric, then \(A^2\) has non-negative eigenvalues. (And what if \(A\) is not symmetric?)
A kind of counterpart of symmetric matrices is the class of skew-symmetric matrices. We give the definition, and if you are interested you can explore this class of matrices by working through the exercises that follow the definition.
Definition 8.1.2
A matrix \(A\) is called skew-symmetric if \(A^T = -A\).
So two examples of skew-symmetric matrices are
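A small numerical aside on this definition (a sketch assuming NumPy; the random matrix is our own construction): subtracting the transpose always produces a skew-symmetric matrix, and the defining property forces the diagonal entries to be zero.

```python
import numpy as np

# Build a skew-symmetric matrix and check the defining property A^T = -A.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M - M.T                          # skew-symmetrise

print(np.allclose(A.T, -A))          # True
print(np.allclose(np.diag(A), 0))    # the diagonal of a skew-symmetric matrix is zero
```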
Grasple Exercise 8.1.23
Basic properties of skew-symmetric matrices.
Grasple Exercise 8.1.24
Slightly less basic properties of skew-symmetric matrices.
Grasple Exercise 8.1.25
About the eigenvalues of skew-symmetric matrices.
Grasple Exercise 8.1.26
Eigenvalues and eigenvectors of skew-symmetric matrices (sequel to previous exercise).
Grasple Exercise 8.1.27
Geometric interpretation of \(3\times3\) skew-symmetric matrices.