5.2. Determinants via Cofactor Expansion#
5.2.1. Definition of an \(n \times n\) determinant#
In Section 5.1 we defined determinants of 2 by 2 and 3 by 3 matrices in a geometric way. Here we start with the general definition straightaway.
Let \(A\) be an \(n\times n\) matrix, with \(n \geq 2\). The submatrix \(A_{ij}\) is the \((n-1) \times (n-1)\) matrix that remains when the \(i\)th row and the \(j\)th column of \(A\) are deleted.
For the matrix \(A = \left[\begin{array}{cccc} 2 & 0 & 0 & 4 \\ 1 & 2 & 3 & 4 \\ 2 & 1 & 0 & 3 \\ 6 & 4 & 3 & 5 \end{array}\right] \) we have, for instance,

\[
A_{11} = \left[\begin{array}{ccc} 2 & 3 & 4 \\ 1 & 0 & 3 \\ 4 & 3 & 5 \end{array}\right]
\quad \text{and} \quad
A_{23} = \left[\begin{array}{ccc} 2 & 0 & 4 \\ 2 & 1 & 3 \\ 6 & 4 & 5 \end{array}\right].
\]
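For readers who like to experiment, such submatrices can be generated with NumPy. This is a minimal sketch, not part of the text; the helper name `submatrix` is our own choice, and it takes 1-based indices to match the notation \(A_{ij}\).

```python
import numpy as np

A = np.array([[2, 0, 0, 4],
              [1, 2, 3, 4],
              [2, 1, 0, 3],
              [6, 4, 3, 5]])

def submatrix(A, i, j):
    """Return A_ij: A with its ith row and jth column deleted (1-based, as in the text)."""
    return np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)

print(submatrix(A, 1, 1))   # the 3x3 submatrix A_11
print(submatrix(A, 2, 3))   # the 3x3 submatrix A_23
```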
Let \(A\) be an \(n\times n\) matrix, with \(n \geq 3\). The determinant of \(A\), which we denote by either \(\det{A}\) or \(|A|\), is defined by

\[
\det{A} = a_{11}C_{11} + a_{21}C_{21} + a_{31}C_{31} + \cdots + a_{n1}C_{n1}.
\]

Here

\[
C_{ij} = (-1)^{i+j}\det{A_{ij}}
\]

is called the \((i,j)\)th cofactor of \(A\).

This is an example of a so-called recursive definition. The evaluation of an \(n\) by \(n\) determinant is reduced to the evaluation of \(n\) determinants ‘one size smaller’. By repeating this reduction we get smaller and smaller determinants and end up with 2 by 2 determinants as defined in Definition 1.3.2. And the formula also works for \(2 \times 2\) matrices (where the determinant of a \(1 \times 1\) matrix is simply its single entry):

\[
a_{11}C_{11} + a_{21}C_{21} = a_{11}a_{22} - a_{21}a_{12} = \det{A}.
\]
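The recursive definition translates almost literally into code. The sketch below is our own illustration (the name `det_by_first_column` is not from the text): it expands along the first column and uses the 2 by 2 formula as base case. It is meant to mirror the definition, not to be an efficient algorithm.

```python
import numpy as np

def det_by_first_column(A):
    """Determinant of an n x n matrix (n >= 2) via recursive cofactor
    expansion along the first column."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 2:                                   # base case: the 2x2 formula
        return A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    total = 0.0
    for i in range(n):                           # term a_{i1} * C_{i1}
        minor = np.delete(np.delete(A, i, axis=0), 0, axis=1)
        total += (-1) ** i * A[i, 0] * det_by_first_column(minor)
    return total

print(det_by_first_column([[1, 2], [3, 4]]))     # -2.0
```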
Let us first look at an example.
We will compute the determinant of the matrix \(A = \left[\begin{array}{cccc} 7 & 2 & 3 & 4 \\ 0 & 2 & 5 & 2 \\ 0 & 1 & 4 & 3 \\ 6 & 2 & 3 & 1 \end{array}\right]\). Expansion along the first column gives

\[
\det{A} = 7\,C_{11} + 0\,C_{21} + 0\,C_{31} + 6\,C_{41} = 7\det{A_{11}} - 6\det{A_{41}}.
\]

For the two remaining determinants we act likewise. Using the alternative notation for determinants we find

\[
\det{A_{11}} = \left|\begin{array}{ccc} 2 & 5 & 2 \\ 1 & 4 & 3 \\ 2 & 3 & 1 \end{array}\right|
= 2\left|\begin{array}{cc} 4 & 3 \\ 3 & 1 \end{array}\right|
- 1\left|\begin{array}{cc} 5 & 2 \\ 3 & 1 \end{array}\right|
+ 2\left|\begin{array}{cc} 5 & 2 \\ 4 & 3 \end{array}\right|
= 2\cdot(-5) - 1\cdot(-1) + 2\cdot 7 = 5
\]

and

\[
\det{A_{41}} = \left|\begin{array}{ccc} 2 & 3 & 4 \\ 2 & 5 & 2 \\ 1 & 4 & 3 \end{array}\right|
= 2\left|\begin{array}{cc} 5 & 2 \\ 4 & 3 \end{array}\right|
- 2\left|\begin{array}{cc} 3 & 4 \\ 4 & 3 \end{array}\right|
+ 1\left|\begin{array}{cc} 3 & 4 \\ 5 & 2 \end{array}\right|
= 2\cdot 7 - 2\cdot(-7) + 1\cdot(-14) = 14.
\]

So that

\[
\det{A} = 7\cdot 5 - 6\cdot 14 = 35 - 84 = -49.
\]
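As a quick numerical cross-check (not part of the original example), NumPy's built-in routine gives the same value; note that `np.linalg.det` works via an LU factorization rather than cofactor expansion.

```python
import numpy as np

A = np.array([[7, 2, 3, 4],
              [0, 2, 5, 2],
              [0, 1, 4, 3],
              [6, 2, 3, 1]])

# agrees with the hand computation above
print(np.linalg.det(A))   # approximately -49
```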
We call the procedure indicated in Definition 5.2.2 cofactor expansion along the first column. Note that this is exactly in line with Proposition 5.1.6. Often the word ‘cofactor’ is omitted and we simply say expansion along the first column. The signs \((-1)^{i+j}\) are determined by the position in the matrix/determinant according to the following pattern:

\[
\left[\begin{array}{ccccc}
+ & - & + & - & \cdots \\
- & + & - & + & \cdots \\
+ & - & + & - & \cdots \\
- & + & - & + & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{array}\right].
\]
For instance, on the diagonal all signs are \(+\).
For the \(4\times 4\) matrix of Example 5.2.2 we could take advantage of the two zeros in the first column. For an \(n\times n\) matrix without zeros the complete expansion will contain \(n\cdot (n-1) \cdot \ldots \cdot 3 \cdot 2 \cdot 1 = n!\) products. We have already seen in the previous section that

\[
\left|\begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array}\right| = a_{11}a_{22} - a_{12}a_{21}
\]

and

\[
\left|\begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array}\right|
= a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}.
\]

These contain \(2=2\cdot1\) and \(6 = 3\cdot2\cdot1\) products, respectively. For a larger matrix the work involved quickly gets out of hand. Luckily there are several shortcuts to compute a determinant. The first important one is that the first column does not play any special role: the determinant can be found by expanding along any column, and also along any row. This is the content of the first theorem.
The determinant can be found by expansion along any column. Taking the \(j\)th column this gives

\[
\det{A} = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}.
\]

It can also be found by expansion along any row. For the \(i\)th row this gives

\[
\det{A} = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in}.
\]
We omit the proof, which is rather long and technical.
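The theorem is easy to test numerically. In the sketch below (the helper names are our own) every column expansion and every row expansion of a random \(4 \times 4\) matrix returns the same value, up to rounding.

```python
import numpy as np

def cofactor(A, i, j):
    """The (i,j)th cofactor of A (0-based indices here)."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

def expand_along_column(A, j):
    return sum(A[i, j] * cofactor(A, i, j) for i in range(A.shape[0]))

def expand_along_row(A, i):
    return sum(A[i, j] * cofactor(A, i, j) for j in range(A.shape[1]))

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 4)).astype(float)

print([round(expand_along_column(A, j), 8) for j in range(4)])  # four equal values
print([round(expand_along_row(A, i), 8) for i in range(4)])     # the same value again
```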
The following example illustrates the rule for the determinant of an arbitrary \(3 \times 3\) matrix.
Let us compute the cofactor expansion of the matrix \(A = \left[\begin{array}{rrr} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array}\right] \) along its third row. This gives

\[
\begin{array}{rcl}
\det{A} &=& a_{31}\left|\begin{array}{cc} a_{12} & a_{13} \\ a_{22} & a_{23} \end{array}\right|
- a_{32}\left|\begin{array}{cc} a_{11} & a_{13} \\ a_{21} & a_{23} \end{array}\right|
+ a_{33}\left|\begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array}\right| \\
&=& a_{31}a_{12}a_{23} - a_{31}a_{13}a_{22} - a_{32}a_{11}a_{23} + a_{32}a_{13}a_{21} + a_{33}a_{11}a_{22} - a_{33}a_{12}a_{21}.
\end{array}
\]

Cofactor expansion along the second column yields

\[
\begin{array}{rcl}
\det{A} &=& -a_{12}\left|\begin{array}{cc} a_{21} & a_{23} \\ a_{31} & a_{33} \end{array}\right|
+ a_{22}\left|\begin{array}{cc} a_{11} & a_{13} \\ a_{31} & a_{33} \end{array}\right|
- a_{32}\left|\begin{array}{cc} a_{11} & a_{13} \\ a_{21} & a_{23} \end{array}\right| \\
&=& -a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31} + a_{22}a_{11}a_{33} - a_{22}a_{13}a_{31} - a_{32}a_{11}a_{23} + a_{32}a_{13}a_{21}.
\end{array}
\]

These are the same six products with the same signs as we found with the first expansion (Equation (5.2.1)).
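The same identity can also be checked symbolically, for instance with SymPy. The helpers below are our own shorthand for the expansions in Theorem 5.2.1; both differences simplify to zero.

```python
import sympy as sp

a = sp.symbols('a11 a12 a13 a21 a22 a23 a31 a32 a33')
M = sp.Matrix(3, 3, list(a))

def expand_along_row(M, i):
    """Cofactor expansion along row i (0-based)."""
    return sum(M[i, j] * M.cofactor(i, j) for j in range(M.cols))

def expand_along_col(M, j):
    """Cofactor expansion along column j (0-based)."""
    return sum(M[i, j] * M.cofactor(i, j) for i in range(M.rows))

# third-row and second-column expansions give the same six products
print(sp.expand(expand_along_row(M, 2) - M.det()))   # 0
print(sp.expand(expand_along_col(M, 1) - M.det()))   # 0
```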
In the following example we use this freedom of choice to compute the determinant of a matrix that contains a lot of zeros.
We will compute the determinant of the matrix
With its four zeros the third row seems a good candidate to expand along. Of the five terms
only the second gives a nonzero contribution:
As a next step we may choose the third column to expand along, and for the ensuing \(3\times3\) determinant we single out the second row:
5.2.2. Determinants of Triangular Matrices#
The next example is meant to illustrate a more general property.
The determinant of the matrix \(A = \left[\begin{array}{cccc}
2 & 3 & -1 & 2 \\
0 & 3 & 5 & -9 \\
0 & 0 & 4 & 2 \\
0 & 0 & 0 & -1
\end{array} \right]\)
can be quickly found by expanding along the columns from left to right:

\[
\det{A} = 2\left|\begin{array}{ccc} 3 & 5 & -9 \\ 0 & 4 & 2 \\ 0 & 0 & -1 \end{array}\right|
= 2\cdot 3\left|\begin{array}{cc} 4 & 2 \\ 0 & -1 \end{array}\right|
= 2\cdot 3\cdot 4\cdot(-1) = -24.
\]

Alternatively, we can expand along the rows from bottom to top:

\[
\det{A} = -1\cdot\left|\begin{array}{ccc} 2 & 3 & -1 \\ 0 & 3 & 5 \\ 0 & 0 & 4 \end{array}\right|
= -1\cdot 4\left|\begin{array}{cc} 2 & 3 \\ 0 & 3 \end{array}\right|
= -1\cdot 4\cdot 3\cdot 2 = -24.
\]
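A short NumPy check of this example (not part of the text): the product of the diagonal entries agrees with the built-in determinant.

```python
import numpy as np

A = np.array([[2, 3, -1,  2],
              [0, 3,  5, -9],
              [0, 0,  4,  2],
              [0, 0,  0, -1]])

print(np.prod(np.diag(A)))    # -24, the product of the diagonal entries
print(np.linalg.det(A))       # approximately -24
```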
The matrix \(A\) is an example of what is called an upper triangular matrix.
A square matrix \(A\) is called an upper triangular matrix if all the elements below the diagonal are 0. Formally, for an upper triangular matrix we have

\[
a_{ij} = 0 \quad \text{for all} \quad i > j.
\]
In the same manner we can define lower triangular matrices.
A triangular matrix is a matrix that is either an upper triangular or a lower triangular matrix.
Note that a diagonal matrix is both upper and lower triangular.
The property we hinted at in Example 5.2.5 is captured in the following proposition.
For a triangular matrix the determinant is equal to the product of the entries on the diagonal.
Proof of Proposition 5.2.1
We can use the same strategy as in Example 5.2.5. That is, for an upper triangular matrix expand along the columns from left to right, and for a lower triangular matrix expand along the rows from top to bottom. At each step the only entry of the relevant column (or row) that can be nonzero is the diagonal entry, and it comes with a plus sign, so the determinant is the product of the diagonal entries.
In Section 5.1 we saw that a \(2 \times 2\) or \(3 \times 3\) matrix \(A\) is invertible if and only if \(\det{A} \neq 0\).
From Proposition 5.2.1 it follows that this property still holds for triangular matrices.
A triangular matrix is invertible if and only if it has a non-zero determinant.
Proof of Proposition 5.2.2
Let us first consider the case of an \(n \times n\) upper triangular matrix \(U\), with entries \(u_{ij}\). Such a matrix is an echelon matrix. It is invertible if and only if it has \(n\) linearly independent columns, which is the case if and only if all diagonal elements \(u_{ii}\) are nonzero. And this last statement is equivalent to

\[
\det{U} = u_{11}u_{22}\cdots u_{nn} \neq 0.
\]
For a lower triangular matrix \(L\) a similar argument can be given.
(Or we can consider \(L^T\), which is an upper triangular matrix, and make use of the upcoming Proposition 5.2.3.)
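A small numerical illustration of Proposition 5.2.2 (the two matrices below are our own examples): one upper triangular matrix with nonzero diagonal, one with a zero on the diagonal.

```python
import numpy as np

U_invertible = np.array([[2., 1., 4.],
                         [0., 3., 5.],
                         [0., 0., 6.]])
U_singular = np.array([[2., 1., 4.],
                       [0., 0., 5.],    # a zero on the diagonal
                       [0., 0., 6.]])

for U in (U_invertible, U_singular):
    det = np.prod(np.diag(U))                       # Proposition 5.2.1
    full_rank = np.linalg.matrix_rank(U) == U.shape[0]
    print(det, full_rank)                           # nonzero determinant <=> invertible
```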
We need to know a little more about determinants to establish this connection with invertibility for all matrices.
5.2.3. The Determinant of the Transpose of a Matrix#
The last property we mention, which may be expected in view of Theorem 5.2.1 because rows and columns play interchangeable roles there, is the following.
For any \(n\times n\) matrix \(A\) the determinant of \(A\) is equal to the determinant of its transpose. In a formula:

\[
\det{\left(A^T\right)} = \det{A}.
\]
Take the matrix \(A = \left[\begin{array}{ccc} 1 & 3 & 4 \\ 5 & 6 & 7 \\ 2 & 1 & 0 \end{array} \right] \).
Expanding along the first row we find that

\[
\det{A} = 1\left|\begin{array}{cc} 6 & 7 \\ 1 & 0 \end{array}\right|
- 3\left|\begin{array}{cc} 5 & 7 \\ 2 & 0 \end{array}\right|
+ 4\left|\begin{array}{cc} 5 & 6 \\ 2 & 1 \end{array}\right|
= 1\cdot(-7) - 3\cdot(-14) + 4\cdot(-7) = 7.
\]

For the determinant of the transpose \(A^T\) we find, expanding along the first column:

\[
\det{\left(A^T\right)} = \left|\begin{array}{ccc} 1 & 5 & 2 \\ 3 & 6 & 1 \\ 4 & 7 & 0 \end{array}\right|
= 1\left|\begin{array}{cc} 6 & 1 \\ 7 & 0 \end{array}\right|
- 3\left|\begin{array}{cc} 5 & 2 \\ 7 & 0 \end{array}\right|
+ 4\left|\begin{array}{cc} 5 & 2 \\ 6 & 1 \end{array}\right|
= 1\cdot(-7) - 3\cdot(-14) + 4\cdot(-7) = 7.
\]

This gives the same value.
In fact, by looking at the structure rather than at the numbers, we see that the example illustrates that the property holds for \(3 \times 3\) determinants because it holds for \(2 \times 2\) determinants. In a similar way, the property \(\det\big(A^T\big) = \det(A)\) for \(4 \times 4\) matrices follows from its correctness for \(3 \times 3\) matrices, and this can be (either formally or informally) lifted up to determinants of arbitrary size.
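A quick numerical check (our own addition), first for the matrix of the example and then for a random matrix:

```python
import numpy as np

A = np.array([[1, 3, 4],
              [5, 6, 7],
              [2, 1, 0]])
print(np.linalg.det(A), np.linalg.det(A.T))                # both approximately 7

rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))                            # the property is not special to A
print(np.isclose(np.linalg.det(B), np.linalg.det(B.T)))    # True
```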
5.2.4. Grasple Exercises#
To compute the determinant of a 2x2 matrix
To compute the determinant of a 3x3 matrix
To compute the determinant of a 4x4 matrix (with many zeros)
To compute the determinant of a 4x4 matrix.
To compute the determinant of an almost upper triangular 4x4 matrix
To compute the determinant of a 5x5 ‘structured’ matrix
To compute the determinant of the products of certain matrices.