5.3. Determinants via Row Reduction#

In this section we will first consider the effect of row operations on the value of a determinant. This paves the way for a more efficient method to compute \(n\times n\) determinants.

It also leads the way to two very important properties of determinants, namely

  • The product rule: \(\det{(AB)} = \det{A}\cdot\det{B}\).

  • The matrix \(A\) is invertible if and only if \(\det{A} \neq 0\).

5.3.1. How Row Operations affect a Determinant#

We have seen in Section 5.2 that the cofactor expansion of an \(n \times n\) determinant works best using a row (or a column) with many, preferably \(n-1\), zeros. When solving a linear system, or finding the inverse of a matrix, we have seen how to create zeros via row reduction. The crucial point there was that row reducing an augmented matrix does not alter the solution(s) of the corresponding linear system. The next proposition describes the effects of row operations on a determinant.

Proposition 5.3.1 (How row operations affect a determinant)

For the determinant of an \(n\times n\) matrix \(A\) the following rules apply.

  1. If a row of \(A\) is scaled with a factor \(c\), the determinant is scaled with a factor \(c\).

  2. If a multiple of one row of \(A\) is added to another row, the determinant does not change.

  3. When two rows of \(A\) are swapped, the determinant changes sign.

We postpone the proof until the end of this section and first look at examples and a few consequences.
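The three rules are easy to check numerically. The following sketch is an illustrative aside, not part of the text itself; it assumes NumPy is available and verifies each rule for a random \(4\times 4\) matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 4)).astype(float)
c, k = 3.0, 2.0

# Rule 1: scaling a row by c scales the determinant by c.
B = A.copy(); B[1] *= c
assert np.isclose(np.linalg.det(B), c * np.linalg.det(A))

# Rule 2: adding k times one row to another leaves the determinant unchanged.
B = A.copy(); B[2] += k * A[0]
assert np.isclose(np.linalg.det(B), np.linalg.det(A))

# Rule 3: swapping two rows changes the sign of the determinant.
B = A.copy(); B[[0, 3]] = B[[3, 0]]
assert np.isclose(np.linalg.det(B), -np.linalg.det(A))
```

Of course, a numerical check for one matrix is no proof; the proof follows at the end of the section.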

Example 5.3.1

The following identities show what happens with a \(4\times 4\) determinant when

  1. the second row is scaled with a factor \(c\):

    \[\begin{split} \left|\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &a_{14} \\ ca_{21} &ca_{22} &ca_{23} &ca_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{41} &a_{42} &a_{43} &a_{44} \end{array} \right| = c \left|\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &a_{14} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{41} &a_{42} &a_{43} &a_{44} \end{array} \right|, \end{split}\]
  2. \((-k)\) times the first row is added to the third row:

    \[\begin{split} \left|\begin{array}{llll} a_{11} &a_{12} &a_{13} &a_{14} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31}-ka_{11} &a_{32}-ka_{12} &a_{33}-ka_{13} &a_{34}-ka_{14} \\ a_{41} &a_{42} &a_{43} &a_{44} \end{array} \right| = \left|\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &a_{14} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{41} &a_{42} &a_{43} &a_{44} \end{array} \right|, \end{split}\]
  3. the first and the fourth row are swapped:

    \[\begin{split} \left|\begin{array}{rrrr} a_{41} &a_{42} &a_{43} &a_{44} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{11} &a_{12} &a_{13} &a_{14} \end{array} \right| = - \left|\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &a_{14} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{41} &a_{42} &a_{43} &a_{44} \end{array} \right|. \end{split}\]

Note that these properties can be expressed using elementary matrices (cf. Section 3.2).

Example 5.3.2

Let \(A\) be an arbitrary \(4\times 4\) matrix, and \(E_1, E_2\) and \(E_3\) the elementary matrices corresponding to the row operations in Example 5.3.1. So

\[\begin{split} E_1 = \left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & c & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] , \quad E_2 = \left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -k & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] , \quad E_3 = \left[\begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{array} \right] . \end{split}\]

Then we have

\[ \det{(E_1A)} = c \det{A}, \quad \det{(E_2A)} = \det{A}, \quad \det{(E_3A)} = -\det{A}. \]

Since

\[ \det{E_1} = c, \quad \det{E_2} = 1, \quad \det{E_3} = -1, \]

we see that in all three cases we have that

(5.3.1)#\[\det{(E_iA)} = \det{E_i} \cdot \det{A}. \]

Since every row operation can be performed via left multiplication by an elementary matrix, a consequence of Proposition 5.3.1 is that Equation (5.3.1) holds for any product of an elementary \(n \times n\) matrix \(E\) with an arbitrary \(n \times n\) matrix \(A\).

This is the basis for the general product rule we will see later, which states that

\[ \text{det}(AB) = \text{det}(A) \text{det} (B) \]

for arbitrary \(n\times n\) matrices \(A\) and \(B\).
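The product rule, too, can be tried out numerically before we prove it. A minimal check, assuming NumPy, for two random \(5\times 5\) matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# det(AB) = det(A) * det(B), up to floating-point rounding.
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
```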

The next two examples illustrate the practical use of the three rules involving row operations.

Example 5.3.3

\[\begin{split} \left|\begin{array}{rrr} 5 & 10 & 15 \\ 3 & 8 & 6 \\ 2 & -2 & 5 \end{array} \right|\stackrel{(1)}{=} 5 \left|\begin{array}{rrr} 1 & 2 & 3 \\ 3 & 8 & 6 \\ 2 & -2 & 5 \end{array} \right|\stackrel{(2)}{=} 5 \left|\begin{array}{rrr} 1 & 2 & 3 \\ 0 & 2 & -3 \\ 0 & -6 & -1 \end{array} \right|\stackrel{(3)}{=} 5 \cdot 1 \cdot \left|\begin{array}{rr} 2 & -3 \\ -6 & -1 \end{array} \right|. \end{split}\]

The steps involved are:

  1. (1) take out a factor \(5\) from the first row,
  2. (2) subtract the first row \(3\) times from the second row and \(2\) times from the third row (or: add it \((-3)\) times and \((-2)\) times, respectively),
  3. (3) expand along the first column.

Evaluating the \(2\times 2\) determinant at the end leads to the answer \(-100\).
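As a sanity check (an aside, assuming NumPy), we can compare the value obtained via row reduction with a direct numerical evaluation of the original determinant:

```python
import numpy as np

A = np.array([[5.0, 10, 15],
              [3, 8, 6],
              [2, -2, 5]])

# The reduced 2x2 determinant times the factors taken out in steps (1) and (3):
step = 5 * 1 * (2 * (-1) - (-3) * (-6))   # = 5 * (-20) = -100
assert round(np.linalg.det(A)) == step == -100
```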

Can you describe the row operations and cofactor expansions in the following computation?

\[\begin{split} \begin{array}{rcl} \left|\begin{array}{rrrr} 3 & 2 & 4 & 3 \\ 1 & 2 & 3 & -1 \\ 4&3&8&2 \\ 2&5&7&-4 \end{array} \right|&=& \left|\begin{array}{rrrr} 0 & -4 & -5 & 6 \\ 1 & 2 & 3 & -1 \\ 0 & -5 & -4 & 6 \\0 &1 & 1 & -2 \end{array} \right|= (-1)\cdot \left|\begin{array}{rrr} -4 & -5 & 6 \\ -5 & -4 & 6 \\ 1 & 1 & -2 \end{array} \right| \\ &=& (-1)\cdot \left|\begin{array}{rrr} 0 & -1 & -2 \\ 0 & 1 & -4 \\ 1 & 1 & -2 \end{array} \right|= - \left|\begin{array}{rr} -1 & -2 \\ 1 & -4 \end{array} \right| = -6. \end{array} \end{split}\]

Remark 5.3.1

Because of Proposition 5.2.3, which states that

\[ \det{A} = \text{det}\big(A^T\big) \]

every rule involving row operations may be transformed into a rule about column operations. It is here that computing a determinant differs strikingly from the reduction of a (for instance augmented) matrix to an echelon matrix. Another, more subtle difference is that a row operation applied to a matrix yields an equivalent matrix, which we denote by the symbol \(\sim\), whereas row or column operations on a determinant always preserve its value (apart from the sign change or scaling factor made explicit in the rules), so there we write \(=\).

Note that in Rule 1 of Proposition 5.3.1 the factor \(c\) may be zero. This is another slight difference from the scaling operation we used when row reducing a matrix: there the scaling factor must be nonzero.

In the next example column operations are used.

Example 5.3.4

\[\begin{split} \left|\begin{array}{rrrr} 1 & 1 & 1 & -1 \\ 2 & 4 & 5 & 3 \\4 & 5 & 2 & -1 \\ 5 & 7 & 4 & -2 \end{array} \right|= \left|\begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 2 & 2 & 3 & 5 \\4 & 1 & -2 & 3 \\ 5 & 2 & -1 & 3 \end{array} \right|= \left|\begin{array}{rrr} 2 & 3 & 5 \\ 1 & -2 & 3 \\ 2 & -1 & 3 \end{array} \right|= \ldots \end{split}\]

And do you see what is happening here?

\[\begin{split} \left|\begin{array}{rrrr} 1 & 2 & 3 & 4 \\ -2 & 2 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ -5 & 0 & 0 & 5 \end{array} \right|= \left|\begin{array}{rrrr} 10 & 2 & 3 & 4 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 5 \end{array} \right|= 10 \left|\begin{array}{rrr} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 5 \end{array} \right|= 100. \end{split}\]
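Both column-operation computations can be verified numerically. The sketch below (an aside, assuming NumPy) checks that the first reduction step of each example preserves the determinant, and that the second example indeed gives \(100\); the value of the first example is left for you to finish.

```python
import numpy as np

A = np.array([[1.0, 1, 1, -1],
              [2, 4, 5, 3],
              [4, 5, 2, -1],
              [5, 7, 4, -2]])
B = np.array([[1.0, 2, 3, 4],
              [-2, 2, 0, 0],
              [-1, 0, 1, 0],
              [-5, 0, 0, 5]])

# First example: det(A) equals the 3x3 determinant the reduction arrives at.
A3 = np.array([[2.0, 3, 5], [1, -2, 3], [2, -1, 3]])
assert np.isclose(np.linalg.det(A), np.linalg.det(A3))

# Second example: the final value is 100.
assert round(np.linalg.det(B)) == 100
```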

An interesting consequence of rule (3) of Proposition 5.3.1 is the following.

Corollary 5.3.1

If a matrix \(A\) has two equal rows (or columns), then \(\det{A} = 0\).

Proof of Corollary 5.3.1

Suppose the \(i\)th and the \(j\)th row of \(A\) are equal, and let \(\det{A} = d\). Let \(B\) be the matrix \(A\) with the \(i\)th and \(j\)th row interchanged.

On the one hand, \(B = A\), so

\[ \det{B} = \det{A} = d, \]

on the other hand, because of Proposition 5.3.1, Rule 3, we have

\[ \det{B} = -\det{A} = -d. \]

We may conclude \(d = -d\), which is only possible if

\[ d = \det{A} = 0. \]
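The corollary is again easy to observe numerically (an aside, assuming NumPy): copying one row of a random matrix onto another forces the determinant to zero.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A[3] = A[1]          # make rows 1 and 3 (0-based) equal

# Two equal rows force the determinant to (numerically) zero.
assert np.isclose(np.linalg.det(A), 0.0)
```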

5.3.2. Determinants versus Invertibility#

With the knowledge built so far we can show the important property that was already hinted at in Section 5.2.

Theorem 5.3.1

For any square matrix \(A\):

\[ A \,\,\text{is invertible} \quad \iff \quad \det{A} \neq 0. \]

The proof is – we think – quite instructive. (However, feel free to skip it.)
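Before the proof, a small numerical illustration of both directions of the theorem (an aside, assuming NumPy): a matrix with nonzero determinant has an inverse, while inverting a matrix with determinant zero fails.

```python
import numpy as np

A = np.array([[1.0, 2], [3, 4]])   # det(A) = -2, so A is invertible
S = np.array([[1.0, 2], [2, 4]])   # second row = 2 * first row, det(S) = 0

# Nonzero determinant: the inverse exists and A A^{-1} = I.
assert not np.isclose(np.linalg.det(A), 0)
assert np.allclose(A @ np.linalg.inv(A), np.eye(2))

# Zero determinant: NumPy refuses to invert S.
assert np.isclose(np.linalg.det(S), 0)
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    pass  # expected: S is singular
```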

Theorem 5.3.2

For two \(n\times n\) matrices \(A\) and \(B\) it always holds that

\[ \det{(AB)} = \det{A}\cdot\det{B}. \]

The idea of the proof is to reduce it to products of the form \(\det{(EA)} = \det{E}\cdot\det{A}\), where \(E\) is an elementary matrix (Equation (5.3.1)). For more details, open the proof below.

Corollary 5.3.2

If the matrix \(A\) is invertible, then \(\text{det}\big(A^{-1}\big)= \dfrac{1}{\det{A}}\).

Proof of Corollary 5.3.2

We can combine the three properties

i. \(AA^{-1} = I\),   ii. \(\det{(AA^{-1})} = \det{A}\det{\left(A^{-1}\right)}\)   and   iii. \(\det{I} = 1\)

as follows:

\[ \text{det}(A)\text{det}\left(A^{-1}\right) = \text{det}(AA^{-1}) = \text{det}(I) = 1, \]

so indeed

\[ \text{det}\left(A^{-1}\right) = \dfrac{1}{\text{det}(A)}. \]
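A quick numerical check of this identity (an aside, assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1], [5, 3]])   # det(A) = 2*3 - 1*5 = 1

# det(A^{-1}) = 1 / det(A), up to floating-point rounding.
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))
```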

Exercise 5.3.1

For each of the following statements decide whether they are true or false. In case true, give an argument, in case false, give a counterexample.

  1. For each \(n \times n\) matrix \(A\) it holds that


    \[ \text{det}\big(A^k\big)= \big(\det{A}\big)^k. \]
  2. For each two \(n \times n\) matrices \(A\) and \(B\) it holds that


    \[ \det{(A+B)} = \det{A}+\det{B}. \]
  3. For each \(n \times n\) matrix \(A\) it holds that


    \[ \det{(-A)} = -\det{A}. \]
  4. For each \(n \times n\) matrix \(A\) and each real number \(k\) it holds that


    \[ \det{(kA)} = k^n\det{A}. \]

We will conclude this section, for the interested reader, with a proof of the properties of Proposition 5.3.1. In fact we will prove the column version, and we add one related rule that will be of use both immediately in the proof and also later on.

Proposition 5.3.2

Suppose \(A\) is an \(n\times n\) matrix for which the \(k\)th column is the sum of two vectors in \(\R^n\). So

\[ \vect{a}_k = \vect{b} +\vect{c}. \]

Then

(5.3.2)#\[\begin{split}\begin{array}{l} \det{[\vect{a}_1 \,\, \ldots \,\, \vect{b}+\vect{c} \,\, \ldots \,\, \vect{a}_n]} = \\ \qquad \qquad \qquad \det{[\vect{a}_1 \,\, \ldots \,\, \vect{b} \,\, \ldots \,\, \vect{a}_n]} + \det{[\vect{a}_1 \,\, \ldots \,\, \vect{c} \,\, \ldots \,\, \vect{a}_n]} \end{array}\end{split}\]
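This additivity in a single column can also be observed numerically. The helper `with_col` below is a hypothetical name introduced just for this sketch (an aside, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)
c = rng.standard_normal(4)

def with_col(M, k, v):
    """Return a copy of M with column k replaced by the vector v."""
    M = M.copy()
    M[:, k] = v
    return M

k = 2
# det is additive in column k when the other columns are held fixed.
lhs = np.linalg.det(with_col(A, k, b + c))
rhs = np.linalg.det(with_col(A, k, b)) + np.linalg.det(with_col(A, k, c))
assert np.isclose(lhs, rhs)
```

Note that this is additivity in one column at a time; it does not say \(\det{(A+B)} = \det{A} + \det{B}\), which is false in general (cf. Exercise 5.3.1).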

Click on the symbol to the right below for the proof of Proposition 5.3.1 and Proposition 5.3.2.

5.3.3. Grasple Exercises#

Grasple Exercise 5.3.1

https://embed.grasple.com/exercises/b34a791a-3f42-4d10-9952-f6f5699a68fb?id=104164

Effects of row operations on a 3x3 determinant

Grasple Exercise 5.3.2

https://embed.grasple.com/exercises/1d3924d9-ea34-4a89-8b7c-33e385d144ba?id=104312

Effects of row operations on a 3x3 determinant

Grasple Exercise 5.3.3

https://embed.grasple.com/exercises/1fcb337d-f906-423a-acd5-8d8c69d4d04b?id=93158

Effects of row and column operations on a 3x3 determinant

Grasple Exercise 5.3.4

https://embed.grasple.com/exercises/cabb663b-7b86-4215-81aa-0a3da91a5688?id=103719

Effects of several operations on a 4x4 determinant

Grasple Exercise 5.3.5

https://embed.grasple.com/exercises/1354915d-4cf4-4559-8ac2-68573807199d?id=103702

Effects of a column operation on a 4x4 determinant

Grasple Exercise 5.3.7

https://embed.grasple.com/exercises/882506bb-6a5e-479f-b095-bb5b95be2467?id=104166

To compute a 3x3 determinant using row reduction

Grasple Exercise 5.3.8

https://embed.grasple.com/exercises/993b010f-3351-4b98-b9b7-1d04c1c959be?id=93143

To compute a 4x4 determinant with quite a few zeros

Grasple Exercise 5.3.9

https://embed.grasple.com/exercises/2d51357d-e56d-4de5-a882-493a795fd222?id=93144

To compute a 4x4 determinant via reduction and expansion

Grasple Exercise 5.3.10

https://embed.grasple.com/exercises/9974012a-1ac9-439f-919f-2647be1ba4ba?id=92965

To compute a ‘random’ 5x5 determinant with entries in {-2,-1,0,1,2}

Grasple Exercise 5.3.11

https://embed.grasple.com/exercises/4a01fc67-0acc-44aa-9ba2-18c1accae720?id=93145

Computing a structured 5x5 determinant in a ‘smart’ way

Grasple Exercise 5.3.12

https://embed.grasple.com/exercises/f2e09cfe-9d88-4f7b-a295-bad7feda89e5?id=93150

Finding a parameter \(h\) such that a determinant has a prescribed value

Grasple Exercise 5.3.13

https://embed.grasple.com/exercises/35bff21c-6434-4e4a-b154-965de08479c0?id=93146

Checking linear (in)dependence of \(\vect{a}_1,\vect{a}_2,\vect{a}_3\) in \(\mathbb{R}^3\) via determinants.

Grasple Exercise 5.3.14

https://embed.grasple.com/exercises/7c4c18ba-96ba-432a-97b0-0a269a0a9f55?id=93147

Checking linear (in)dependence of \(\vect{a}_1,\vect{a}_2,\vect{a}_3,\vect{a}_4\) in \(\mathbb{R}^4\) via determinants.

Grasple Exercise 5.3.15

https://embed.grasple.com/exercises/5deab9d8-20f3-4b59-b54e-3b61c981c8c7?id=93148

Checking invertibility of a matrix \(A\) via det(\(A\)).

Grasple Exercise 5.3.16

https://embed.grasple.com/exercises/c3f025b5-2ca4-48cb-a1f9-4e144c8bc258?id=93149

Find \(h\) (in matrix \(A\)) such that \(A\) is invertible.

Grasple Exercise 5.3.17

https://embed.grasple.com/exercises/a5713d1f-696b-42e5-ab74-553eec26b00b?id=93151

To find det\((PBP^{-1})\), for given \(P\) and \(B\).

Grasple Exercise 5.3.18

https://embed.grasple.com/exercises/9ab31fa4-6686-4865-8d43-602dc1fe670e?id=93152

To combine several rules of determinants for a product involving three matrices \(A\), \(B\) and \(C\).

Grasple Exercise 5.3.19

https://embed.grasple.com/exercises/9ae9228f-ab17-4853-9995-e38e16d87c22?id=93153

To find det\(\left(A^3\right)\), for a given matrix \(A\).

Grasple Exercise 5.3.20

https://embed.grasple.com/exercises/8db6831f-2671-443a-af64-799d1d0d9179?id=93154

To find det\(\left(kA^TB^{-1}\right)\), for matrices \(A\) and \(B\).

Grasple Exercise 5.3.21

https://embed.grasple.com/exercises/116e83e9-1db7-47ce-a2f3-ad398aee0201?id=93155

What can det(\(A\)) be, if \(A^2 = kA\)?

Grasple Exercise 5.3.22

https://embed.grasple.com/exercises/821d81b1-2cec-4fa4-b4a7-b1f9c32d6e06?id=93156

What about det(\(A+B\)) = det(\(A\)) + det(\(B\))?

Grasple Exercise 5.3.23

https://embed.grasple.com/exercises/5b89a008-2e3d-48a5-a764-0b1b6a3ec4dc?id=93157

(True/False) det\((A) = 0 \iff A\) has a row that is a multiple of another row.

Grasple Exercise 5.3.24

https://embed.grasple.com/exercises/e0bfbb0c-002f-485f-9b2f-5249938b6e40?id=93162

What happens to det(A) if the last column of \(A\) becomes the first?

Grasple Exercise 5.3.25

https://embed.grasple.com/exercises/41f5ca17-ab3e-4487-b5fa-ee325cae85aa?id=93164

What happens to det(\(A\)) if the order of the rows is reversed?

To conclude, a non-Grasple exercise.

Exercise 5.3.2

Give an alternative proof of Corollary 5.3.1 using Rule 1 and Rule 2 of Proposition 5.3.1.