5.3. Determinants via Row Reduction#
In this section we will first consider the effect of row operations on the value of a determinant. This leads to a more efficient way to compute \(n\times n\) determinants.
It also paves the way to two very important properties of determinants, namely
-
The product rule: \(\det{(AB)} = \det{A}\cdot\det{B}\).
-
The matrix \(A\) is invertible if and only if \(\det{A} \neq 0\).
5.3.1. How Row Operations Affect a Determinant#
We have seen in Section 5.2 that the cofactor expansion of an \(n \times n\) determinant works best using a row (or a column) with many, preferably \(n-1\), zeros. When solving a linear system, or finding the inverse of a matrix, we have seen how to create zeros via row reduction. The important thing: row reducing an augmented matrix does not alter the solution(s) of the corresponding linear system. The next proposition describes the effects of row operations on a determinant.
Proposition 5.3.1 (How row operations affect a determinant)
For the determinant of an \(n\times n\) matrix \(A\) the following rules apply.
-
(i) If a row of \(A\) is scaled with a factor \(c\), the determinant is scaled with a factor \(c\).
-
(ii) If a multiple of one row of \(A\) is added to another row, the determinant does not change.
-
(iii) When two rows of \(A\) are swapped, the determinant changes sign.
We postpone the proof until the end of this section and first look at examples and a few consequences.
Example 5.3.1
The following identities show what happens with a \(4\times 4\) determinant when
-
the second row is scaled with a factor \(c\):
\[\begin{split} \left|\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &a_{14} \\ ca_{21} &ca_{22} &ca_{23} &ca_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{41} &a_{42} &a_{43} &a_{44} \end{array} \right| = c \left|\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &a_{14} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{41} &a_{42} &a_{43} &a_{44} \end{array} \right|, \end{split}\] -
\((-k)\) times the first row is added to the third row:
\[\begin{split} \left|\begin{array}{llll} a_{11} &a_{12} &a_{13} &a_{14} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31}-ka_{11} &a_{32}-ka_{12} &a_{33}-ka_{13} &a_{34}-ka_{14} \\ a_{41} &a_{42} &a_{43} &a_{44} \end{array} \right| = \left|\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &a_{14} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{41} &a_{42} &a_{43} &a_{44} \end{array} \right|, \end{split}\] -
the first and the fourth row are swapped:
\[\begin{split} \left|\begin{array}{rrrr} a_{41} &a_{42} &a_{43} &a_{44} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{11} &a_{12} &a_{13} &a_{14} \end{array} \right| = - \left|\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &a_{14} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{41} &a_{42} &a_{43} &a_{44} \end{array} \right|. \end{split}\]
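These identities are easy to test numerically. The following sketch (our own addition, using NumPy; the matrix and the factors `c` and `k` are arbitrary choices) applies each of the three row operations to a random \(4\times 4\) matrix and compares the determinants.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 4)).astype(float)
c, k = 3.0, 2.0
detA = np.linalg.det(A)

B = A.copy()
B[1] *= c                      # Rule i: scale the second row with c
assert np.isclose(np.linalg.det(B), c * detA)

B = A.copy()
B[2] -= k * B[0]               # Rule ii: add (-k) times row 1 to row 3
assert np.isclose(np.linalg.det(B), detA)

B = A.copy()
B[[0, 3]] = B[[3, 0]]          # Rule iii: swap the first and fourth rows
assert np.isclose(np.linalg.det(B), -detA)
```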
Note that these properties can be expressed using elementary matrices (cf. Section 3.2).
Example 5.3.2
Let \(A\) be an arbitrary \(4\times 4\) matrix, and \(E_1, E_2\) and \(E_3\) the elementary matrices corresponding to the row operations in Example 5.3.1. So
\[\begin{split} E_1 = \begin{bmatrix} 1 &0 &0 &0 \\ 0 &c &0 &0 \\ 0 &0 &1 &0 \\ 0 &0 &0 &1 \end{bmatrix}, \quad E_2 = \begin{bmatrix} 1 &0 &0 &0 \\ 0 &1 &0 &0 \\ -k &0 &1 &0 \\ 0 &0 &0 &1 \end{bmatrix}, \quad E_3 = \begin{bmatrix} 0 &0 &0 &1 \\ 0 &1 &0 &0 \\ 0 &0 &1 &0 \\ 1 &0 &0 &0 \end{bmatrix}. \end{split}\]
Then we have
\[ \det{(E_1A)} = c\,\det{A}, \quad \det{(E_2A)} = \det{A}, \quad \det{(E_3A)} = -\det{A}. \]
Since
\[ \det{E_1} = c, \quad \det{E_2} = 1, \quad \det{E_3} = -1, \]
we see that in all three cases we have that
\[ \det{(EA)} = \det{E}\cdot\det{A}. \tag{5.3.1} \]
Since every row operation can be performed using the product with an elementary matrix, a consequence of Proposition 5.3.1 is that Equation (5.3.1) holds for any product of an elementary \(n \times n\) matrix \(E\) with an arbitrary \(n \times n\) matrix \(A\).
These are the basics for the general product rule we will see later, which states that
\[ \det{(AB)} = \det{A}\cdot\det{B} \]
for arbitrary \(n\times n\) matrices \(A\) and \(B\).
The next two examples illustrate the practical use of the three rules involving row operations.
The steps involved are:
- (1) take out a factor \(5\) from the first row,
- (2) subtract the first row \(3\) times from the second row and \(2\) times from the third row (or: add it \((-3)\) times and \((-2)\) times, respectively),
- (3) expand along the first column.
Evaluating the \(2\times 2\) determinant at the end leads to the answer \(-100\).
Can you describe the row operations and cofactor expansions in the following computation?
Because of Proposition 5.2.3, which states that
\[ \det{A^T} = \det{A}, \]
every rule involving row operations may be transformed into a rule about column operations. It is here that computing a determinant differs strikingly from the reduction of a matrix (for instance, an augmented matrix) to an echelon matrix. Another, more subtle difference is that a row operation applied to a matrix leads to an equivalent matrix, which we denote by the symbol \(\sim\), whereas row or column operations on a determinant always yield equal values, so there we write \(=\).
Note that in Rule i. of Proposition 5.3.1 the factor \(c\) may be zero. This is a slight difference from the scaling operation we used when row reducing a matrix: there the scaling factor must be nonzero.
In the next example column operations are used.
And do you see what is happening here?
An interesting consequence of Rule iii. of Proposition 5.3.1 is the following.
Corollary 5.3.1
If a matrix \(A\) has two equal rows (or columns), then \(\det{A} = 0\).
Proof of Corollary 5.3.1
Suppose the \(i\)th and the \(j\)th row of \(A\) are equal, and let \(\det{A} = d\). Let \(B\) be the matrix \(A\) with the \(i\)th and \(j\)th row interchanged.
On the one hand, \(B = A\), so
\[ \det{B} = \det{A} = d; \]
on the other hand, because of Proposition 5.3.1, Rule iii., we have
\[ \det{B} = -\det{A} = -d. \]
We may conclude \(d = -d\), which is only possible if \(d = \det{A} = 0\).
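Numerically the corollary is easy to observe; in the sketch below (our own addition, with an arbitrary random matrix) the determinant collapses to zero, up to floating-point round-off, once two rows are made equal.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A[3] = A[1]                    # make the fourth row equal to the second
print(np.linalg.det(A))        # 0 up to round-off, e.g. of order 1e-17
```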
5.3.2. Determinants versus Invertibility#
With the knowledge built so far we can show the important property that was already hinted at in Section 5.2.
Theorem 5.3.1
For any square matrix \(A\):
\[ A \text{ is invertible} \quad\iff\quad \det{A} \neq 0. \]
The proof is – we think – quite instructive. (However, feel free to skip it.)
Proof of Theorem 5.3.1
In the previous section we have already seen that the statement is true for triangular matrices.
Now suppose \(A\) is any \(n \times n\) matrix. Via row reduction \(A\) can be brought to echelon form \(F\), and for a square matrix the echelon form is an upper triangular matrix (with possibly zeros on the diagonal).
From Proposition 5.2.2 we know that for a triangular matrix \(F\) we have
\[ \det{F} = f_{11}f_{22}\cdots f_{nn}, \]
the product of its diagonal entries. Since \(F\) is invertible precisely when all its diagonal entries are nonzero, ‘\(F\) is invertible’ is equivalent to ‘\(\det{F} \neq 0\)’.
The row operations transforming \(A\) to \(F\) can be performed by multiplications with elementary matrices \(E_1, \ldots, E_k\).
We have seen (Equation (5.3.1) in Example 5.3.2) that
\[ \det{(EA)} = \det{E}\cdot\det{A} \]
for any elementary matrix \(E\).
Furthermore, the determinant of an elementary matrix is nonzero. Namely, for a row scaling it is equal to \(c\), for a row swap it is equal to \((-1)\), and for adding a multiple of a row to another row it is equal to \(1\). Hence, if \(m\) of the row operations are row scalings (with factors \(c_1, \ldots, c_m\)) and \(\ell\) of the row operations are row swaps, then
\[ \det{(E_kE_{k-1}\cdots E_1A)} = c_1\cdots c_m\cdot(-1)^{\ell}\,\det{A} = \alpha\,\det{A}, \]
with \(\alpha = c_1\cdots c_m \cdot (-1)^{\ell} \neq 0\).
So if \(E_kE_{k-1}\cdots E_1A = F\), where \(F\) is an echelon matrix, we see that
\[ \det{F} = \alpha\,\det{A}, \quad\text{with } \alpha \neq 0. \]
We conclude that
\[ A \text{ is invertible} \iff F \text{ is invertible} \iff \det{F} \neq 0 \iff \det{A} \neq 0. \]
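The proof is constructive: reduce \(A\) to an echelon form \(F\) while keeping track of the swaps and scalings, then multiply the diagonal entries of \(F\). Below is a minimal sketch of this procedure (our own illustration; the function name is hypothetical). It uses only row swaps and row additions, so here \(\alpha = (-1)^{\ell}\).

```python
import numpy as np

def det_by_row_reduction(A):
    """Determinant via reduction to echelon form, tracking row swaps."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for j in range(n):
        p = j + np.argmax(np.abs(U[j:, j]))              # pick a pivot row
        if np.isclose(U[p, j], 0.0):
            return 0.0                                   # no pivot: det(A) = 0
        if p != j:
            U[[j, p]] = U[[p, j]]                        # Rule iii: sign flips
            sign = -sign
        U[j+1:] -= np.outer(U[j+1:, j] / U[j, j], U[j])  # Rule ii: no change
    return sign * np.prod(np.diag(U))                    # det of triangular F

A = np.array([[2., 1., 3.], [4., 1., 7.], [-2., 5., 1.]])
print(det_by_row_reduction(A), np.linalg.det(A))         # the two values agree
```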
Theorem 5.3.2 (The product rule)
For two \(n\times n\) matrices \(A\) and \(B\) it always holds that
\[ \det{(AB)} = \det{A}\cdot\det{B}. \]
The idea of the proof is to break it down to products of the form \(\det{(EA)} = \det{E}\cdot\det{A}\), where \(E\) is an elementary matrix (Equation (5.3.1)). For more details, open the proof below.
Proof of Theorem 5.3.2
We already know that the identity holds if \(A\) is an elementary matrix. It also holds if \(A\) is not invertible, as in that case \(AB\) is not invertible either, and we know that the determinant of any non-invertible matrix is zero. So then
\[ \det{(AB)} = 0 = \det{A}\cdot\det{B}. \]
So suppose that the matrix \(A\) is invertible. In that case (cf. Theorem 3.4.1) \(A\) can be written as a product of elementary matrices, say
\[ A = E_1E_2\cdots E_k. \]
Then, step by step, we find that
\[ \det{(AB)} = \det{(E_1E_2\cdots E_kB)} = \det{E_1}\cdot\det{(E_2\cdots E_kB)} = \cdots = \det{E_1}\cdot\det{E_2}\cdots\det{E_k}\cdot\det{B}, \]
and also
\[ \det{A} = \det{(E_1E_2\cdots E_k)} = \det{E_1}\cdot\det{E_2}\cdots\det{E_k}, \]
so that indeed \(\det{(AB)} = \det{A}\cdot\det{B}\).
Corollary 5.3.2
If the matrix \(A\) is invertible, then \(\text{det}\big(A^{-1}\big)= \dfrac{1}{\det{A}}\).
Proof of Corollary 5.3.2
We can combine the three properties
i. \(AA^{-1} = I\), ii. \(\det{(AA^{-1})} = \det{A}\det{\left(A^{-1}\right)}\) and iii. \(\det{I} = 1\)
as follows:
\[ \det{A}\cdot\det{\left(A^{-1}\right)} = \det{(AA^{-1})} = \det{I} = 1, \]
so indeed
\[ \det{\left(A^{-1}\right)} = \frac{1}{\det{A}}. \]
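Both the product rule and this corollary are easy to check numerically; a small sketch of ours (random Gaussian matrices of this kind are invertible with probability \(1\)):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))     # almost surely invertible
B = rng.standard_normal((4, 4))

# Theorem 5.3.2: det(AB) = det(A) * det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Corollary 5.3.2: det(A^{-1}) = 1 / det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))
```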
Exercise 5.3.1
For each of the following statements, decide whether it is true or false. If true, give an argument; if false, give a counterexample.
-
For each \(n \times n\) matrix \(A\) it holds that
\[ \text{det}\big(A^k\big)= \big(\det{A}\big)^k. \] -
For each two \(n \times n\) matrices \(A\) and \(B\) it holds that
\[ \det{(A+B)} = \det{A}+\det{B}. \] -
For each \(n \times n\) matrix \(A\) it holds that
\[ \det{(-A)} = -\det{A}. \] -
For each \(n \times n\) matrix \(A\) and each real number \(k\) it holds that
\[ \det{(kA)} = k^n\det{A}. \]
Solution to Exercise 5.3.1
We treat the statements one by one.
-
Is it true that for each \(n \times n\) matrix \(A\) it holds that \(\text{det}\big(A^k\big)= \big(\det{A}\big)^k\)?
This is true, and follows from repeatedly using the property \(\det(AB) = \det(A)\det(B)\). Namely,
\[ \det(A^k) = \det(A\cdot A \cdots A) = \det(A)\cdot\det(A)\cdots\det(A) = \big(\det{A}\big)^k. \] -
Is it true that for each two \(n \times n\) matrices \(A\) and \(B\) it holds that
\[ \det{(A+B)} = \det{A}+\det{B}? \]This statement is false. A trivial counterexample is given by \(A = B = I_n\), for \(n \geq 2\). Namely, for these matrices we see that
\[ \det{A} + \det{B} = 1 + 1 = 2 \neq \det{(A+B)} = \det{(2I)} = 2^n. \] -
Is it true that for each \(n \times n\) matrix \(A\) and each real number \(k\) it holds that
\[ \det{(kA)} = k^n\det{A}? \]This is true. One way to prove it is to write \(kA = (kI)A\), where
\[\begin{split} kI = \begin{bmatrix}k & 0 & 0 &\cdots & 0 \\ 0 & k & 0 &\cdots & 0 \\ 0 & 0 & k &\cdots & 0 \\ \vdots & \vdots &\vdots & \ddots & \vdots \\ 0 & 0 & 0 &\cdots & k \end{bmatrix}. \end{split}\]So we find
\[ \det{(kA)} = \det{(kI)}\cdot\det{(A)} = k^n \det{(A)}. \] -
Is it true that \(\text{det}(-A)= -\det{(A)}\) for each \(n \times n\) matrix \(A\)?
This is not true in general. Taking \(k = -1\) in the previous statement we see that
\[ \det{(-A)} = \det{\big((-1)A\big)} = (-1)^n\det{(A)}, \]which equals \(-\det{(A)}\) only when \(n\) is odd. A specific example: for the \(2\times 2\) identity matrix \(A = I\) it holds that
\[\begin{split} \text{det} (-A) = \begin{vmatrix} -1 & 0 \\ 0 & -1 \end{vmatrix} = (-1)^2 \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = \text{det} (A). \end{split}\]
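The four statements can also be explored numerically; in the sketch below (our own addition, with arbitrary test matrices) the true statements are asserted and the counterexample \(A = B = I_3\) is shown to break the false one.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 3, 4
A = rng.integers(-3, 4, size=(n, n)).astype(float)

# Statement 1 (true): det(A^k) = (det A)^k
assert np.isclose(np.linalg.det(np.linalg.matrix_power(A, k)),
                  np.linalg.det(A) ** k)

# Statement 2 (false): for A = B = I_3, det(A+B) = 2^3 = 8, not 1 + 1 = 2
I = np.eye(n)
assert not np.isclose(np.linalg.det(I + I),
                      np.linalg.det(I) + np.linalg.det(I))

# Statement 4 (true): det(cA) = c^n det(A); statement 3 is the case c = -1
for c in (2.0, -1.0):
    assert np.isclose(np.linalg.det(c * A), c ** n * np.linalg.det(A))
```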
We will conclude this section, for the interested reader, with a proof of the properties of Proposition 5.3.1. In fact we will prove the column version, and we add one related rule that will be of use both immediately in the proof and also later on.
Proposition 5.3.2
Suppose \(A\) is an \(n\times n\) matrix for which the \(k\)th column is the sum of two vectors in \(\R^n\). So
\[ A = [\vect{a}_1 \, \ldots \, \vect{a}_{k-1} \;\; (\vect{b}+\vect{c}) \;\; \vect{a}_{k+1} \, \ldots \, \vect{a}_n]. \]
Then
\[ \det{A} = \det{[\vect{a}_1 \, \ldots \, \vect{b} \, \ldots \, \vect{a}_n]} + \det{[\vect{a}_1 \, \ldots \, \vect{c} \, \ldots \, \vect{a}_n]}, \]
where \(\vect{b}\) and \(\vect{c}\) appear in the \(k\)th column.
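Before turning to the proofs, here is a quick numerical illustration of this additivity in a single column (our own sketch; the column index and the vectors are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 4, 2                        # split the column with (0-based) index k
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

A_sum = A.copy()
A_sum[:, k] = b + c                # k-th column is b + c
A_b = A.copy()
A_b[:, k] = b
A_c = A.copy()
A_c[:, k] = c

# Proposition 5.3.2: the determinant is additive in one column
assert np.isclose(np.linalg.det(A_sum),
                  np.linalg.det(A_b) + np.linalg.det(A_c))
```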
The proofs of Proposition 5.3.1 and Proposition 5.3.2 can be found below.
Proof of Proposition 5.3.1 and Proposition 5.3.2
For typographical reasons we will prove the three rules stated as column operations. For an \(n \times n\) matrix
\[ A = [\vect{a}_1 \, \vect{a}_2 \, \ldots \, \vect{a}_n] \]
the rules can then be formulated as
- (1) \(\det{[\vect{a}_1 \, \vect{a}_2 \, \ldots \, c \vect{a}_k \, \ldots \, \vect{a}_n]} = c \det{A}\);
- (2) \(\det{[\vect{a}_1 \, \ldots \, \vect{a}_k \, \ldots \, \vect{a}_j \, \ldots \, \vect{a}_n]} = - \det{[\vect{a}_1 \, \ldots \, \vect{a}_j \, \ldots \, \vect{a}_k \, \ldots \, \vect{a}_n]}\);
- (3) \(\det{[\vect{a}_1 \, \ldots \, \vect{a}_j \, \ldots \, \vect{a}_k + c\vect{a}_j \, \ldots \, \vect{a}_n]} = \det{A}\).
So, let us consider them one by one.
- (1) If a column is scaled with a factor \(c\), then the determinant is also scaled with a factor \(c\).
Suppose \(\tilde{A}\) is the result of scaling the \(k\)th column of \(A\) with a factor \(c\).
Then expanding det\((\tilde{A})\) along its \(k\)th column, keeping in mind that \(\tilde{a}_{ik} = c {a}_{ik}\) and \(\tilde{A}_{ik} = {A}_{ik}\), yields
\[ \det{\tilde{A}} = \sum_{i=1}^n (-1)^{i+k}\,\tilde{a}_{ik}\det{\tilde{A}_{ik}} = \sum_{i=1}^n (-1)^{i+k}\,c\,a_{ik}\det{A_{ik}}. \]
If we take out the constant factor \(c\) we see that
\[ \det{\tilde{A}} = c\sum_{i=1}^n (-1)^{i+k}\,a_{ik}\det{A_{ik}} = c\,\det{A}. \]
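The bookkeeping in this computation can be mirrored in code: the sketch below (ours; `det_by_column_expansion` is a hypothetical helper) implements cofactor expansion along a column and checks Rule (1) for a scaled column.

```python
import numpy as np

def det_by_column_expansion(A, k=0):
    """Cofactor expansion of det(A) along column k (0-based indices)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for i in range(n):
        minor = np.delete(np.delete(A, i, axis=0), k, axis=1)   # A_{ik}
        total += (-1) ** (i + k) * A[i, k] * det_by_column_expansion(minor)
    return total

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
A_tilde = A.copy()
A_tilde[:, 2] *= 5.0                       # scale the third column with c = 5
assert np.isclose(det_by_column_expansion(A_tilde, k=2),
                  5.0 * det_by_column_expansion(A, k=2))
```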
Proposition 5.3.2 is proved in much the same way as rule (1) by expansion along the \(k\)th column.
- (2) The proof of the swapping rule is more involved.
One way to go about it is to
-
first settle the rule when the first two columns are interchanged;
-
next consider the swapping of two arbitrary consecutive columns;
-
finally note that any column swap is the result of an odd number of swaps of consecutive columns.
We will consider the swapping of the first two columns in complete detail. Let \(\bar{\bar{A}}\) be the result of swapping the first two columns of the matrix \(A\). The crucial thing to note here is that, for each value of \(i\),
\[ \bar{\bar{a}}_{i2} = a_{i1} \quad\text{and}\quad \bar{\bar{A}}_{i2} = A_{i1}. \]
To make this explicit for a \(4\times 4\) matrix:
\[\begin{split} \bar{\bar{A}} = \begin{bmatrix} a_{12} &a_{11} &a_{13} &a_{14} \\ a_{22} &a_{21} &a_{23} &a_{24} \\ a_{32} &a_{31} &a_{33} &a_{34} \\ a_{42} &a_{41} &a_{43} &a_{44} \end{bmatrix}, \quad\text{so, e.g.,}\quad \bar{\bar{A}}_{32} = \begin{bmatrix} a_{12} &a_{13} &a_{14} \\ a_{22} &a_{23} &a_{24} \\ a_{42} &a_{43} &a_{44} \end{bmatrix} = A_{31}. \end{split}\]
Expanding det\((\bar{\bar{A}})\) along its second column yields
\[ \det{\bar{\bar{A}}} = \sum_{i=1}^n (-1)^{i+2}\,\bar{\bar{a}}_{i2}\det{\bar{\bar{A}}_{i2}} = \sum_{i=1}^n (-1)^{i+2}\,a_{i1}\det{A_{i1}}. \]
Noting that \((-1)^{i+2} = (-1)\cdot (-1)^{i+1}\) and taking out one factor \((-1)\) from the sum yields
\[ \det{\bar{\bar{A}}} = -\sum_{i=1}^n (-1)^{i+1}\,a_{i1}\det{A_{i1}} = -\det{A}. \]
The same argument works for the interchanging of two arbitrary consecutive columns.
And the argument can even be generalized to two columns that are not necessarily neighbours, though the notation with its many indices becomes hard to read. As stated, the swapping of two arbitrary columns can be accomplished via an odd number of ‘consecutive swaps’, so the determinant changes sign an odd number of times; and for an odd number \(m\) we have that \((-1)^m = -1\).
In fact, to swap columns \(i\) and \(j\), with \(i < j\), we need \(j-i\) neighbour swaps to move
column \(i\) to position \(j\), and \(j-i-1\) swaps to move (the original) column \(j\) to position \(i\), which gives a total of \(2(j-i)-1\) swaps, an odd number.
For instance, to interchange column \(2\) and column \(5\) in a \(5 \times 5\) matrix, the \((j-i) + (j-i-1) = 3+2 = 5\) neighbour swaps can be visualized as follows:
\[ (\vect{a}_1\,\vect{a}_2\,\vect{a}_3\,\vect{a}_4\,\vect{a}_5) \to (\vect{a}_1\,\vect{a}_3\,\vect{a}_2\,\vect{a}_4\,\vect{a}_5) \to (\vect{a}_1\,\vect{a}_3\,\vect{a}_4\,\vect{a}_2\,\vect{a}_5) \to (\vect{a}_1\,\vect{a}_3\,\vect{a}_4\,\vect{a}_5\,\vect{a}_2) \to (\vect{a}_1\,\vect{a}_3\,\vect{a}_5\,\vect{a}_4\,\vect{a}_2) \to (\vect{a}_1\,\vect{a}_5\,\vect{a}_3\,\vect{a}_4\,\vect{a}_2). \]
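This count can also be verified mechanically; the sketch below (our addition) performs the five neighbour swaps on the columns of a random \(5\times 5\) matrix and checks that the determinant has changed sign an odd number of times.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 5))
i, j = 1, 4                        # columns 2 and 5, in 0-based indices

M = A.copy()
swaps = 0
for p in range(i, j):              # move column i to position j
    M[:, [p, p + 1]] = M[:, [p + 1, p]]
    swaps += 1
for p in range(j - 1, i, -1):      # move the original column j to position i
    M[:, [p, p - 1]] = M[:, [p - 1, p]]
    swaps += 1

assert swaps == 2 * (j - i) - 1    # 5 neighbour swaps, an odd number
assert np.isclose(np.linalg.det(M), (-1) ** swaps * np.linalg.det(A))
```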
Lastly we have to prove
- (3) If the multiple of one column of \(A\) is added to another column, the determinant does not change.
First, Rule (2) implies, as in Corollary 5.3.1, that a determinant with two equal columns has the value \(0\).
We then proceed as follows for Rule (3):
\[\begin{split} \det{[\vect{a}_1 \, \ldots \, \vect{a}_j \, \ldots \, \vect{a}_k + c\vect{a}_j \, \ldots \, \vect{a}_n]} &= \det{[\vect{a}_1 \, \ldots \, \vect{a}_j \, \ldots \, \vect{a}_k \, \ldots \, \vect{a}_n]} + c\det{[\vect{a}_1 \, \ldots \, \vect{a}_j \, \ldots \, \vect{a}_j \, \ldots \, \vect{a}_n]} \\ &= \det{A} + c\cdot 0 = \det{A}, \end{split}\]
where we used Proposition 5.3.2 and Rule (1) in the first step, and the observation about two equal columns in the second.
This settles all matters.
5.3.3. Grasple Exercises#
- Effects of row operations on a 3x3 determinant
- Effects of row operations on a 3x3 determinant
- Effects of row and column operations on a 3x3 determinant
- Effects of several operations on a 4x4 determinant
- Effects of a column operation on a 4x4 determinant
- Effects of a column operation on a 4x4 determinant
- To compute a 3x3 determinant using row reduction
- To compute a 4x4 determinant with quite a few zeros
- To compute a 4x4 determinant via reduction and expansion
- To compute a ‘random’ 5x5 determinant with entries in {-2,-1,0,1,2}
- Computing a structured 5x5 determinant in a ‘smart’ way
- Finding a parameter \(h\) such that a determinant has a prescribed value
- Checking linear (in)dependence of \(\vect{a}_1,\vect{a}_2,\vect{a}_3\) in \(\mathbb{R}^3\) via determinants
- Checking linear (in)dependence of \(\vect{a}_1,\vect{a}_2,\vect{a}_3,\vect{a}_4\) in \(\mathbb{R}^4\) via determinants
- Checking invertibility of a matrix \(A\) via det(\(A\))
- Find \(h\) (in matrix \(A\)) such that \(A\) is invertible
- To find det\((PBP^{-1})\), for given \(P\) and \(B\)
- To combine several rules of determinants for a product involving three matrices \(A\), \(B\) and \(C\)
- To find det\(\left(A^3\right)\), for a given matrix \(A\)
- To find det\(\left(kA^TB^{-1}\right)\), for matrices \(A\) and \(B\)
- What can det(\(A\)) be, if \(A^2 = kA\)?
- What about det(\(A+B\)) = det(\(A\)) + det(\(B\))?
- (True/False) det\((A) = 0 \iff A\) has a row that is a multiple of another row
- What happens to det(\(A\)) if the last column of \(A\) becomes the first?
- What happens to det(\(A\)) if the order of the rows is reversed?
To conclude, a non-Grasple exercise.
Exercise 5.3.2
Give an alternative proof of Corollary 5.3.1 using Rule i. and Rule ii. of Proposition 5.3.1.
Solution to Exercise 5.3.2
Suppose \(A\) is a matrix with two equal rows, say row \(i\) and row \(j\) are equal.
If we subtract the \(i\)th row from the \(j\)th row (Rule ii.), we get a matrix \(A_2\) whose \(j\)th row is zero, and with det\((A_2) = \) det\((A)\). Taking the factor \(0\) out of the zero row (Rule i.) gives det\((A_2) = 0\), and hence det\((A) = 0\).
For instance, with a \(4\times 4\) matrix with equal second and fourth row we would have
\[\begin{split} \left|\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &a_{14} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{21} &a_{22} &a_{23} &a_{24} \end{array} \right| = \left|\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &a_{14} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ 0 &0 &0 &0 \end{array} \right|. \end{split}\]
Expansion of the last determinant across its fourth row yields the value \(0\).