5.4. Miscellaneous Applications of Determinants
In this section we will address the following matters:
- The determinant as a uniform scale factor for an arbitrary linear transformation from \(\R^n\) to \(\R^n\).

- Cramer’s rule. Seemingly the ultimate solution to almost all systems of \(n\) linear equations in \(n\) unknowns.

- The generalization of the formula
\[\begin{split} \left[\begin{array}{cc} a & b \\ c & d\end{array} \right]^{-1} = \dfrac{1}{ad-bc} \left[\begin{array}{cc} d & -b \\ -c & a\end{array} \right] \end{split}\]
to \(n\times n\) matrices.

- A certain generalization of the cross product to \(n\) dimensions.
5.4.1. Volume and Orientation Revisited
We have seen in Section 5.1 how determinants arise in the context of areas of parallelograms and volumes of parallelepipeds.
In Section 1.2 we used the dot product to define length, distance and orthogonality in \(\R^n\). Determinants make it possible to define the concepts of volume and orientation in \(n\) dimensions.
Definition 5.4.1 (Volume in \(\R^n\))

Let \(\{\vect{v}_1, \ldots, \vect{v}_n\}\) be a set of \(n\) vectors in \(\R^n\). The \(n\)-dimensional parallelepiped \(\mathcal{P}\) spanned by \(\vect{v}_1, \ldots, \vect{v}_n\) is the set
\[ \mathcal{P} = \{\,c_1\vect{v}_1 + c_2\vect{v}_2 + \cdots + c_n\vect{v}_n \,:\, 0 \leq c_i \leq 1, \,\, i = 1, \ldots, n\,\}. \]
See Figure 5.4.1 for an illustration of such a set in \(\R^2\).
The volume of such a parallelepiped is defined by
\[ \text{Vol}_n\left(\mathcal{P}(\vect{v}_1, \ldots, \vect{v}_{n}) \right) = |\det{\left[\,\vect{v}_1\,\, \ldots\,\, \vect{v}_{n}\,\right] }|. \]
So, it is the absolute value of a determinant.
Note that if the vectors \(\vect{v}_1, \ldots, \vect{v}_n\) in Definition 5.4.1 are linearly dependent the volume automatically becomes 0.
Definition 5.4.2 (Orientation in \(\R^n\))

Suppose the vectors \(\vect{v}_1, \ldots, \vect{v}_n\) in \(\R^n\) are linearly independent.
Then we say that the ordered set \((\vect{v_1}, \ldots, \vect{v}_n)\) is positively oriented if \( \det{[\vect{v_1} \ldots \vect{v}_n]}>0\).
If this determinant is negative the set is called negatively oriented.
For vectors that are linearly dependent we do not define the orientation.
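As a quick numerical illustration of both definitions, the volume and the orientation of a set of vectors can be read off from a single determinant. The following minimal NumPy sketch uses arbitrarily chosen vectors in \(\R^3\) (our own example, not taken from the text):

```python
import numpy as np

# The columns of V are the vectors v_1, v_2, v_3 (an arbitrary example in R^3).
V = np.array([[1.0, 0.0, 2.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])

d = np.linalg.det(V)
print("volume     :", abs(d))          # |det[v_1 ... v_n]|
print("orientation:", "positive" if d > 0 else ("negative" if d < 0 else "not defined"))
```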
Suppose \(T\) is a linear transformation from \(\R^2\) to \(\R^2\), with standard matrix \(A = [\,\vect{a}_1 \,\, \vect{a}_2\,]\). So we have
\[ T(\vect{x}) = A\vect{x} \quad \text{for every } \vect{x} \text{ in } \R^2. \]
Let \(R\) be any region in \(\R^2\) for which the area is well-defined, and let \(S\) be the image of \(R\) under \(T\).
Then for the area of \(S\) it holds that
\[ \text{area of } S = |\det{A}| \cdot \text{area of } R. \]
Proof of Proposition 5.4.1
If the matrix \(A\) is not invertible, the range of \(T\), which is given by \(\text{span}\{\vect{a}_1, \vect{a}_2\}\), is contained in a line. Each region \(R\) is then mapped onto a subset \(S\) that is contained in this line, so
\[ \text{area of } S = 0 = |\det{A}| \cdot \text{area of } R, \]
since \(\det{A} = 0\) for a non-invertible matrix.
Next suppose that \(A\) is invertible. Then the unit grid is mapped onto a grid whose unit cell is the parallelogram with sides \(\vect{a}_1 = A\vect{e}_1\) and \(\vect{a}_2 = A\vect{e}_2\). See Figure 5.4.2.
First we show that the formula holds if \(R\) is the unit square, i.e., the parallelogram generated by \(\vect{e}_1\) and \(\vect{e}_2\). The unit square is mapped onto the parallelogram \(S\) generated by \(T(\vect{e}_1)=\vect{a}_1\) and \(T(\vect{e}_2)=\vect{a}_2\). It follows that
\[ \text{area of } S = |\det{[\,\vect{a}_1\,\,\vect{a}_2\,]}| = |\det{A}|, \]
and since the area of \(R\) is equal to 1, we have
\[ \text{area of } S = |\det{A}| \cdot \text{area of } R. \]
This then also holds for any square \(R\) with sides of length \(r\) that are parallel to the axes. Namely, such a square has area \(r^2\) and can be described as the square with vertices
\[ \vect{p}, \quad \vect{p} + r\vect{e}_1, \quad \vect{p} + r\vect{e}_2, \quad \vect{p} + r\vect{e}_1 + r\vect{e}_2, \]
for some vector \(\vect{p}\). These are mapped to
\[ A\vect{p}, \quad A\vect{p} + r\vect{a}_1, \quad A\vect{p} + r\vect{a}_2, \quad A\vect{p} + r\vect{a}_1 + r\vect{a}_2. \]
This is a parallelogram with sides \(rA\vect{e}_1 = r\vect{a}_1\) and \(rA\vect{e}_2 =r \vect{a}_2\), which has area
\[ |\det{[\,r\vect{a}_1\,\,r\vect{a}_2\,]}| = r^2\,|\det{A}| = |\det{A}| \cdot \text{area of } R. \]
See Figure 5.4.3.
For a general (reasonable) region \(R\) we sketch the idea and omit the technical details.
The region \(R\) can be approximated arbitrarily closely by a collection of smaller and smaller squares \(R_i\) whose interiors do not overlap. See Figure 5.4.4. The limit of the areas of these approximations when the grids get finer and finer gives the area of \(R\).
The formula holds for each of the \(R_i\). Since \(T\) is one-to-one, the images \(S_i = T(R_i)\) will not overlap either, and the images taken together will approximate the image \(S = T(R)\) as well. We deduce that
\[ \text{area of } S \approx \sum_i \text{area of } S_i = |\det{A}| \cdot \sum_i \text{area of } R_i \approx |\det{A}| \cdot \text{area of } R. \]
By taking an appropriate limit one can show that in fact
\[ \text{area of } S = |\det{A}| \cdot \text{area of } R. \]
Proposition 5.4.1 can be generalized to higher dimensions.
For \(n = 3\) area becomes volume, and for higher dimensions we use the definition of \(n\)-dimensional volume as in Definition 5.4.1.
Suppose \(T\) is a linear transformation from \(\R^n\) to \(\R^n\), with standard matrix \(A\).
Then for any region \(R\) in \(\R^n\) for which the volume is well-defined, it holds that
\[ \text{Vol}_n(S) = |\det{A}| \cdot \text{Vol}_n(R), \]
where \(S\) is the image of \(R\) under \(T\).
Proof of Proposition 5.4.2
If \(R\) is the \(n\)-dimensional parallelepiped \(\mathcal{P}\) generated by \(\{\vect{v}_1, \ldots, \vect{v}_n\}\) we have that \(T(\mathcal{P})\) is generated by \(\{T(\vect{v}_1), \ldots, T(\vect{v}_n)\}\).
Then
\[\begin{split} \begin{array}{rcl} \text{Vol}_n\left(T(\mathcal{P})\right) &=& |\det{\left[\,A\vect{v}_1\,\, \ldots\,\, A\vect{v}_n\,\right]}| \,=\, |\det{\left(A\,[\,\vect{v}_1\,\, \ldots\,\, \vect{v}_n\,]\right)}| \\ &=& |\det{A}| \cdot |\det{\left[\,\vect{v}_1\,\, \ldots\,\, \vect{v}_n\,\right]}| \,=\, |\det{A}| \cdot \text{Vol}_n\left(\mathcal{P}\right). \end{array} \end{split}\]
For a more general region \(R\) we would again have to work with approximations/subdivisions like in the proof of Proposition 5.4.1. Then we would first have to extend the definition of \(n\)-dimensional volume. We will not pursue that track.
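In \(\R^2\), however, the scaling property of Proposition 5.4.1 is easy to check numerically. The following is a minimal NumPy sketch (the matrix \(A\) and the choice of \(R\) as the unit disc are our own, not taken from the text): it estimates the area of the image \(S = T(R)\) by sampling and compares it with \(|\det{A}|\) times the area of \(R\).

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])              # an arbitrary invertible matrix

# R = unit disc (area pi); S = T(R).  A point y lies in S iff A^{-1}y lies in R.
A_inv = np.linalg.inv(A)
box = 5.0                               # S is contained in [-box, box]^2 for this A
y = rng.uniform(-box, box, size=(200_000, 2))
in_S = np.linalg.norm(y @ A_inv.T, axis=1) <= 1.0

print("estimated area of S :", in_S.mean() * (2 * box) ** 2)
print("|det A| * area of R :", abs(np.linalg.det(A)) * np.pi)
```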
To conclude our interpretation of the determinant of \(A\) regarding the linear transformation \(T(\vect{x}) = A\vect{x}\) we look at the orientation.
Suppose \(A = [\,\vect{a}_1\,\,\vect{a}_2\,\,\ldots\,\,\vect{a}_n\, ]\) is the standard matrix of the linear transformation \(T: \R^n \to \R^n\). So we have
\[ T(\vect{x}) = A\vect{x} \quad \text{for every } \vect{x} \text{ in } \R^n. \]
Suppose \((\vect{v}_1,\,\vect{v}_2,\,\ldots\,,\,\vect{v}_n)\) is an ordered set of vectors in \(\R^n\).
Then the following holds.
If \(\det{A} > 0\) the set \(\big(T(\vect{v}_1),\,T(\vect{v}_2),\,\ldots\,,\,T(\vect{v}_n)\big)\) has the same orientation as the set \((\vect{v}_1,\,\vect{v}_2,\,\ldots\,,\,\vect{v}_n)\).
If \(\det{A} < 0\) the set \(\big(T(\vect{v}_1),\,T(\vect{v}_2),\,\ldots\,,\,T(\vect{v}_n)\big)\) has the opposite orientation as the set \((\vect{v}_1,\,\vect{v}_2,\,\ldots\,,\,\vect{v}_n)\).
In short: the transformation \(T(\vect{x}) = A\vect{x}\) preserves the orientation if
\(\det{A} > 0\) and reverses the orientation if \(\det{A} < 0\).
If the determinant is 0, then the set \(\{T(\vect{v}_1), \ldots,T(\vect{v}_n) \}\) will be linearly dependent, and for such a set the orientation is not defined.
Proof of Proposition 5.4.3
This too follows immediately from the product rule of determinants:
\[ \det{\left[\,T(\vect{v}_1)\,\,\ldots\,\,T(\vect{v}_n)\,\right]} = \det{\left(A\,[\,\vect{v}_1\,\,\ldots\,\,\vect{v}_n\,]\right)} = \det{A} \cdot \det{\left[\,\vect{v}_1\,\,\ldots\,\,\vect{v}_n\,\right]}, \]
so the two determinants have the same sign when \(\det{A} > 0\) and opposite signs when \(\det{A} < 0\).
A nice illustration of what this means in \(\R^2\) is given by the following example.
Consider the two linear transformations from \(\R^2\) to \(\R^2\) with matrices
Note that
Figure 5.4.5 visualizes what is going on.
Under the transformation \(A\), the images of a unit vector that rotates counterclockwise move around the origin clockwise, i.e., in the opposite orientation/direction. Under the transformation \(B\) the images go around the origin counterclockwise, i.e., in the same direction as the original vectors.
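A minimal NumPy sketch in the same spirit, using a reflection in the line \(y = x\) (determinant \(-1\)) and a rotation over \(90\) degrees (determinant \(+1\)) as our own choice of matrices:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])     # reflection in the line y = x:  det A = -1
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])    # rotation over 90 degrees:      det B = +1

v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # positively oriented pair
for name, M in (("A", A), ("B", B)):
    d = np.linalg.det(np.column_stack([M @ v1, M @ v2]))
    # sign of d = orientation of the image pair (T(v1), T(v2))
    print(name, np.linalg.det(M), "image pair is",
          "positively" if d > 0 else "negatively", "oriented")
```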
5.4.2. Cramer’s rule
We first introduce a new notation that will help to simplify formulas later.
Let \(A\) be an \(n\times n\) matrix, and \(\vect{v}\) a vector in \(\R^n\). Then \(A^{(i)}(\vect{v})\) denotes the matrix that results when the \(i\)th column of \(A\) is replaced by the vector \(\vect{v}\).
For the matrix \(A = \begin{bmatrix} 1 & 3 & 1 \\ 1 & 4 & 2 \\ 3 & 1 & 5 \end{bmatrix}\) and the vector \(\vect{v} = \begin{bmatrix} \class{blue}6 \\ \class{blue}7 \\ \class{blue}8 \end{bmatrix}\) we have, for instance, that
\[ A^{(2)}(\vect{v}) = \begin{bmatrix} 1 & \class{blue}6 & 1 \\ 1 & \class{blue}7 & 2 \\ 3 & \class{blue}8 & 5 \end{bmatrix}. \]
Suppose that \(A\) is an invertible \(n \times n\) matrix. Then we know that the linear system \(A\vect{x} = \vect{b}\) has a unique solution for each \(\vect{b}\) in \(\R^n\). And we also know that the determinant of \(A\) is not equal to zero.
The next proposition gives a ‘ready made’ formula for the solution.
Theorem 5.4.1 (Cramer’s Rule)

Suppose \(A\) is an invertible \(n \times n\) matrix, and \(\vect{b}\) a vector in \(\R^n\). The entries \(x_i\) of the unique solution \(\vect{x}\) of the linear system
\[ A\vect{x} = \vect{b} \]
are given by
\[ x_i = \frac{\det{A^{(i)}(\vect{b})}}{\det{A}}, \quad i = 1, \ldots, n. \]
We use Cramer’s rule to solve the system
First of all, the determinant of \(A\) can be computed as follows (in the first step we use column reduction, with the boxed 1 as a pivot):
so the coefficient matrix is invertible and consequently the system has a unique solution.
According to Cramer’s rule we find the first entry of the solution as follows (again we use the boxed 1 as a pivot):
Likewise we can compute the other two entries of the solution.
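Cramer’s rule is also easy to implement. Below is a minimal NumPy sketch (the helper name cramer_solve is our own); as a test case it reuses the matrix \(A\) and the vector \(\vect{v}\) from the example of the \(A^{(i)}(\vect{v})\) notation above, taking \(\vect{b} = \vect{v}\), and compares the outcome with a standard solver.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's rule: x_i = det(A^(i)(b)) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("Cramer's rule needs an invertible coefficient matrix.")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b            # A^(i)(b): replace the i-th column by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[1.0, 3.0, 1.0],
              [1.0, 4.0, 2.0],
              [3.0, 1.0, 5.0]])
b = np.array([6.0, 7.0, 8.0])
print(cramer_solve(A, b))
print(np.linalg.solve(A, b))    # should agree
```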
The following proof of Cramer’s rule rests rather nicely on properties of the determinant function. But feel free to skip it.
Proof of Theorem 5.4.1
Suppose \(\vect{x} = \vect{c} = \left[\begin{array}{c} c_1 \\ \vdots\\ c_n\end{array} \right] \) is the unique solution of the linear system \(A\vect{x} = \vect{b}\), with the invertible matrix \(A = [ \vect{a}_1 \, \, \vect{a}_2 \, \ldots \,\vect{a}_n ]\).
We show that Formula (5.4.1) holds for \(c_1\). The argument can be copied for the other \(c_i\).
We first note that
\[ \vect{b} = A\vect{c} = c_1\vect{a}_1 + c_2\vect{a}_2 + \cdots + c_n\vect{a}_n. \]
The smart next move is to replace the first column of \(A\) by the zero column disguised as
\[ \vect{0} = c_1\vect{a}_1 + c_2\vect{a}_2 + \cdots + c_n\vect{a}_n - \vect{b}. \]
So we have
\[ 0 = \det{A^{(1)}(\vect{0})} = \det{\left[\,(c_1\vect{a}_1 + c_2\vect{a}_2 + \cdots + c_n\vect{a}_n - \vect{b})\,\,\vect{a}_2\,\,\ldots\,\,\vect{a}_n\,\right]}. \]
By the linearity property (in all of the columns) of the determinant (Proposition 5.3.2) we may deduce
\[ c_1\det{A} + c_2\det{A^{(1)}(\vect{a}_2)} + \cdots + c_n\det{A^{(1)}(\vect{a}_n)} - \det{A^{(1)}(\vect{b})} = 0. \]
Now we note that
\[ \det{A^{(1)}(\vect{a}_i)} = 0, \quad i = 2, \ldots, n, \]
since in the matrix \(A^{(1)}(\vect{a}_i)\) the first column and the \(i\)th column are identical. Hence all but the first and last determinant in Equation (5.4.2) drop out and we can conclude that indeed
\[ c_1\det{A} = \det{A^{(1)}(\vect{b})}, \quad \text{that is,} \quad c_1 = \frac{\det{A^{(1)}(\vect{b})}}{\det{A}}. \]
Cramer’s rule may seem to be the ultimate solution method for linear systems. However, it has its drawbacks.
Disclaimer 1 Cramer’s formula can only be used for a square linear system with an invertible matrix.
Disclaimer 2 For a system of two equations in two unknowns Cramer’s rule may come in handy, but for solving larger systems it is highly inefficient. For instance, for a system of four equations in four unknowns, to find the solution using Cramer’s rule one needs to compute five \(4 \times 4\) determinants. The good old method using the augmented matrix \([\,A\,|\,\vect{b}\,]\) only asks for one row reduction process.
5.4.3. The inverse of a matrix in terms of determinants
As an interesting corollary of Cramer’s rule (Theorem 5.4.1) we can give a ready-made formula for the inverse of an invertible matrix. The following proposition applies the notation of the previous subsection to a special case.
Let \(A\) be an \(n\times n\) matrix, and \(\vect{e}_j\) the \(j\)th vector of the standard basis of \(\R^n\). Then
\[ \det{A^{(i)}(\vect{e}_j)} = C_{ji}, \]
where \(A_{ji}\) is the submatrix of \(A\) obtained by deleting the \(j\)th row and the \(i\)th column, and \(C_{ji} = (-1)^{j+i} \det{\left(A_{ji}\right)}\) is the cofactor as introduced in the definition of the \(n \times n\) determinant (Definition 5.2.2).
The following example serves as an illustration of what is going on here.
Let \(A = \left[\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &a_{14} \\ a_{21} &a_{22} &a_{23} &a_{24} \\ a_{31} &a_{32} &a_{33} &a_{34} \\ a_{41} &a_{42} &a_{43} &a_{44} \end{array} \right] \) be any \(4 \times 4\) matrix.
Then \( A^{(4)}(\vect{e}_2) = \left[\begin{array}{rrrr} a_{11} &a_{12} &a_{13} &0 \\ a_{21} &a_{22} &a_{23} &1 \\ a_{31} &a_{32} &a_{33} &0 \\ a_{41} &a_{42} &a_{43} &0 \end{array} \right].\)
Expanding along the fourth column gives
\[ \det{A^{(4)}(\vect{e}_2)} = (-1)^{2+4}\det{\left[\begin{array}{rrr} a_{11} &a_{12} &a_{13} \\ a_{31} &a_{32} &a_{33} \\ a_{41} &a_{42} &a_{43} \end{array} \right]} = (-1)^{2+4}\det{\left(A_{24}\right)} = C_{24}. \]
If \(A\) is an invertible \(n \times n\) matrix then the inverse \(B\) of \(A\) is given by
\[ b_{ij} = \frac{C_{ji}}{\det{A}}, \quad i,j = 1, \ldots, n, \]
where \(b_{ij}\) denotes the entry of \(B\) in row \(i\), column \(j\).
Proof of Proposition 5.4.5
The \(j\)th column \(\vect{b}_j\) of \(B = A^{-1}\) is the solution of the linear system \(A\vect{x} = \vect{e}_j\).
Cramer’s rule then gives that \(b_{ij}\), the \(i\)th entry of this column, is equal to
\[ b_{ij} = \frac{\det{A^{(i)}(\vect{e}_j)}}{\det{A}} = \frac{C_{ji}}{\det{A}}. \]
For the last step we used Proposition 5.4.4.
For an \(n \times n\) matrix \(A\) the matrix
\[\begin{split} C = \begin{bmatrix} C_{11} & C_{12} & \ldots & C_{1n} \\ C_{21} & C_{22} & \ldots & C_{2n} \\ \vdots & \vdots & & \vdots \\ C_{n1} & C_{n2} & \ldots & C_{nn} \end{bmatrix} \end{split}\]
is called its cofactor matrix.
The adjugate matrix of \(A\) is defined as the transpose of the cofactor matrix. So
\[ \text{Adj}(A) = C^T, \quad \text{i.e.,} \quad \left[\text{Adj}(A)\right]_{ij} = C_{ji}. \]
Thus Proposition 5.4.5 states that
\[ A^{-1} = \frac{1}{\det{A}}\,\text{Adj}(A), \]
provided that \(A\) is invertible. In fact a slightly more general formula holds for any square matrix.
For any square matrix \(A\) the following identity holds:
\[ A \cdot \text{Adj}(A) = \text{Adj}(A) \cdot A = \det{A} \cdot I. \]
For clarity we used dots to indicate products. Note that the first two products are matrix products and the third product is a scalar times a matrix.
The proof, we think, is short and instructive.
Proof of Proposition 5.4.6
For an invertible matrix the statement follows immediately from Proposition 5.4.5.
However, we can give an ‘elementary’ proof that includes the non-invertible case where \(\det{A}=0\). We will use two properties of determinants from earlier sections. First Theorem 5.2.1, which states that the determinant of a matrix can be found by expansion along an arbitrary column:
\[ \det{A} = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}, \quad j = 1, \ldots, n. \]
And second Corollary 5.3.1: the determinant of a matrix with two equal rows (or columns) is equal to 0.
Let us consider the product \(\text{Adj}(A) \cdot A\) very carefully.
On the diagonal we see that the \(j\)th entry is equal to
\[ a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj} = \det{A}. \]
For the off-diagonal elements we find as product of the \(j\)th row of \(\text{Adj}(A)\) with the \(k\)th column of \(A\) (where \(k \neq j\)) the sum
\[ a_{1k}C_{1j} + a_{2k}C_{2j} + \cdots + a_{nk}C_{nj}. \]
This expression can be interpreted as the expansion along the \(j\)th column of the determinant of the matrix \(A^{(j)}(\vect{a}_k)\) that results if the \(j\)th column of \(A\) is replaced by the \(k\)th column of \(A\). Since this matrix has two equal columns, its determinant must be zero!
For \(n = 2\) Proposition 5.4.5 gives us back the old formula for the inverse. That is, if we define the determinant of a \(1 \times 1\) matrix \(A = [a]\) as the number \(a\).
For an arbitrary invertible \(3 \times 3\) matrix \(A=\left[\begin{array}{ccc} a_{11} &a_{12} &a_{13} \\ a_{21} &a_{22} &a_{23} \\ a_{31} &a_{32} &a_{33} \end{array} \right] \) the formula yields
\[ A^{-1} = \frac{1}{\det{A}} \left[\begin{array}{ccc} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{array} \right]. \]
Like Cramer’s rule, the formula for the inverse is highly inefficient. The comparison between the efforts required to compute the inverse via the adjugate matrix versus row reduction of the augmented matrix \([\,A\,|\,I\,]\) works out rather favorably for the latter.
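Inefficiency aside, the adjugate is easy to compute in a few lines. The following minimal NumPy sketch (the helper name adjugate and the test matrix are our own choices) builds the cofactor matrix, forms \(\text{Adj}(A)\), and checks the identity of Proposition 5.4.6 as well as the inverse formula of Proposition 5.4.5:

```python
import numpy as np

def adjugate(A):
    """Adjugate of A: transpose of the cofactor matrix, Adj(A)[i, j] = C_ji."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)   # cofactor C_ij
    return C.T

A = np.array([[1.0, 3.0, 1.0],
              [1.0, 4.0, 2.0],
              [3.0, 1.0, 5.0]])

print(adjugate(A) @ A)                  # ≈ det(A) * I
print(np.linalg.det(A) * np.eye(3))
print(adjugate(A) / np.linalg.det(A))   # ≈ inverse of A
print(np.linalg.inv(A))
```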
5.4.4. Determinant and cross product
In Section 1.3 the cross product of two vectors \(\mathbf{u}\) and \(\mathbf{v}\) in \(\R^3\) is defined. It is the unique vector \(\mathbf{w}\) that is (1) orthogonal to \(\mathbf{u}\) and \(\mathbf{v}\), with (2) length equal to the area of the parallelogram with sides \(\mathbf{u}\) and \(\mathbf{v}\), and (3) such that the triple \((\mathbf{u},\mathbf{v},\mathbf{w})\) is ‘righthanded’ (= positively oriented).
In Section 5.1 we defined the determinant of the ordered set \((\vect{a},\vect{b},\vect{c})\) in \(\R^3\) via
Conversely, we can write the cross product in terms containing determinants:
\[\begin{split} \vect{u} \times \vect{v} = \begin{bmatrix} u_2v_3 - u_3v_2 \\ u_3v_1 - u_1v_3 \\ u_1v_2 - u_2v_1 \end{bmatrix} = \begin{bmatrix} \det{\left[\begin{array}{cc} u_2 & v_2 \\ u_3 & v_3 \end{array}\right]} \\[1ex] -\det{\left[\begin{array}{cc} u_1 & v_1 \\ u_3 & v_3 \end{array}\right]} \\[1ex] \det{\left[\begin{array}{cc} u_1 & v_1 \\ u_2 & v_2 \end{array}\right]} \end{bmatrix}. \end{split}\]
The last expression can formally be written as
\[\begin{split} \vect{u} \times \vect{v} = \det{\left[\begin{array}{ccc} u_1 & v_1 & \vect{e}_1 \\ u_2 & v_2 & \vect{e}_2 \\ u_3 & v_3 & \vect{e}_3 \end{array}\right]}, \end{split}\]
where the ‘determinant’ is expanded along its last column.
In exactly the same fashion, we can, for \(n-1\) vectors \(\vect{a}_1, \ldots, \vect{a}_{n-1}\) in \(\R^n\), say
\[\begin{split} \left[\,\vect{a}_1\,\, \ldots\,\, \vect{a}_{n-1}\,\right] = \begin{bmatrix} a_{11} & a_{12} & \ldots & a_{1,(n-1)} \\ a_{21} & a_{22} & \ldots & a_{2,(n-1)} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{n,(n-1)} \end{bmatrix}, \end{split}\]
define
\[\begin{split} \vect{a}^{\ast}_n = \vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1}) = \det{ \begin{bmatrix} a_{11} & \ldots & a_{1,(n-1)} & \vect{e}_1 \\ a_{21} & \ldots & a_{2,(n-1)} & \vect{e}_2 \\ \vdots & & \vdots & \vdots \\ a_{n1} & \ldots & a_{n,(n-1)} & \vect{e}_n \end{bmatrix} }, \end{split}\]
where the ‘determinant’ is to be expanded along its last column.
Here \(\vect{e}_1, \ldots , \vect{e}_n\) denote the vectors of the standard basis for \(\R^n\).
With some effort it can be shown that the following properties hold.
Suppose that \(\vect{a}_1, \ldots, \vect{a}_{n-1}\) are vectors in \(\R^n\) and \(\vect{a}^{\ast}_n\) is defined as in Equation (5.4.5). Then the following properties hold.
i. \(\vect{a}^{\ast}_n \perp \vect{a}_i\), for \(i = 1,2,\ldots, n-1\).

ii. \( \{\vect{a}_1, \, \ldots, \,\vect{a}_{n-1}\}\) is linearly dependent if and only if \(\det{\left[\,\vect{a}_1\,\, \ldots\,\, \vect{a}_{n-1}\,\, \vect{a}^{\ast}_n\,\right] } = 0\).

iii. If \( \{\vect{a}_1, \ldots, \vect{a}_{n-1}\}\) is linearly independent, then \(\det{\left[\,\vect{a}_1, \ldots, \vect{a}_{n-1}, \vect{a}^{\ast}_n\,\right] } > 0\).

iv. The norm of the vector \(\vect{a}^{\ast}_n\) is equal to the \((n-1)\)-dimensional volume of the \((n-1)\)-dimensional parallelepiped generated by \(\vect{a}_1, \ldots, \vect{a}_{n-1}\).
For an independent set of vectors \(\{\vect{a}_1, \ldots, \vect{a}_{n-1}\}\) in \(\R^n\), the properties of Proposition 5.4.7 uniquely determine \(\vect{a}^{\ast}_n\)
as the vector \(\vect{v}\) that is orthogonal to \( \vect{a}_1, \ldots, \vect{a}_{n-1}\), has a prescribed length, and makes the ordered set
\((\vect{a}_1, \ldots, \vect{a}_{n-1}, \vect{v}) \) positively oriented.
For a linearly dependent set of vectors property iv. implies that \(\vect{a}^{\ast}_n = \vect{0}\).
For \(n = 2\) we get, for an arbitrary vector \(\vect{v} = \left[\begin{array}{c} a \\ b \end{array}\right] \neq \left[\begin{array}{c} 0\\0 \end{array}\right] \):
\[ \vect{w} = \vect{N}(\vect{v}) = \det{\left[\begin{array}{cc} a & \vect{e}_1 \\ b & \vect{e}_2 \end{array}\right]} = -b\,\vect{e}_1 + a\,\vect{e}_2 = \left[\begin{array}{c} -b \\ a \end{array}\right]. \]
This is indeed a vector orthogonal to \(\vect{v}\) with the same ‘one-dimensional volume’, i.e., length, as the vector \(\vect{v}\).
Moreover, \(\left(\vect{v}, \vect{w}\right) = \left(\left[\begin{array}{c} a \\ b \end{array}\right] , \left[\begin{array}{c} -b \\ a \end{array}\right] \right) \) is positively oriented, as can be seen by making a sketch.
This shows that the construction also works in \(\R^2\).
We will find the vector \(\vect{n} = N(\vect{a}_1, \vect{a}_2, \vect{a}_3)\) for the columns of the matrix
The first entry \(n_1\) is computed as
All in all we find
By taking inner products, or by computing \(A^T\vect{n}\), it is checked that indeed \(\vect{n} \perp \vect{a}_i\) for each column \(\vect{a}_i\). So property i. of Proposition 5.4.7 is satisfied.
Since the three columns are orthogonal, the ‘rectangular box’ in \(\R^4\) they generate will have 3d-volume
This is indeed equal to
so property iv. is satisfied too.
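Such a computation is easy to automate. The following minimal NumPy sketch (the helper name normal_vector and the \(4 \times 3\) test matrix are our own choices) builds \(\vect{N}(\vect{a}_1, \vect{a}_2, \vect{a}_3)\) by cofactor expansion along the appended ‘formal’ column and checks property i. as well as the identity \(\det{[\,\vect{a}_1\,\,\vect{a}_2\,\,\vect{a}_3\,\,\vect{N}\,]} = \norm{\vect{N}}^2\) that is used in the proof below:

```python
import numpy as np

def normal_vector(cols):
    """N(a_1, ..., a_{n-1}): cofactor expansion along the appended 'formal' column."""
    A = np.asarray(cols, dtype=float)
    n = A.shape[0]
    return np.array([(-1.0) ** (k + n - 1) * np.linalg.det(np.delete(A, k, axis=0))
                     for k in range(n)])

# An arbitrary 4 x 3 matrix with linearly independent columns.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0]])

nvec = normal_vector(A)
print(A.T @ nvec)                                   # ≈ (0, 0, 0): property i
print(np.linalg.det(np.column_stack([A, nvec])),    # equals ||N||^2
      np.dot(nvec, nvec))
```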
We end the chapter with a proof of Proposition 5.4.7.
So, if you are interested, push the button on the right.
Proof of Proposition 5.4.7
The properties follow from the observation that for each vector \(\vect{v}\) in \(\R^n\)
\[ \vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1}) \ip \vect{v} = \det{\left[\,\vect{a}_1\,\, \ldots\,\, \vect{a}_{n-1}\,\,\vect{v}\,\right]}. \]
This immediate generalization of the identity \((\vect{a}\times\vect{b})\ip\vect{c} = \det{[\,\vect{a}\,\,\vect{b}\,\,\vect{c}\,] }\) follows if we write Equation (5.4.5) as in Equation (5.4.4).
i. Take any of the vectors \(\vect{a}_j\). Then (by Equation (5.4.6))
\[ \vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1}) \ip \vect{a}_j = \det{ \left[\,\vect{a}_1\,\, \ldots\,\, \vect{a}_{n-1}\,\,\vect{a}_j\, \right] } = 0, \]
since the determinant has two equal columns. So indeed
\[ \vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1}) \perp \vect{a}_j,\quad j = 1, \ldots, n-1. \]

ii. First suppose that the columns of the matrix
\[\begin{split} \begin{bmatrix} a_{11} & a_{12} & \ldots & a_{1,(n-1)} \\ a_{21} & a_{22} & \ldots & a_{2,(n-1)} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{n,(n-1)} \end{bmatrix} \end{split}\]
are linearly dependent. Then for each vector \(\vect{v}\) in \(\R^n\)
\[ \vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1}) \ip \vect{v} = \det{ \left[\,\vect{a}_1\,\, \ldots\,\, \vect{a}_{n-1}\,\,\vect{v}\, \right] } = 0. \]
Namely, the first \(n-1\) columns in the determinant are already linearly dependent. This implies that \(\vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1}) \) must be the zero vector.
Conversely, if the vectors \(\{ \vect{a}_1, \,\ldots\, \, , \vect{a}_{n-1} \}\) are linearly independent, then the \(n \times (n-1)\) matrix \(A = [ \,\vect{a}_1 \,\, \ldots \,\, \vect{a}_{n-1} \,] \) has rank \(n-1\). The matrix \(A\) must then have \(n-1\) linearly independent rows; say that deleting the \(k\)th row leaves an \((n-1) \times (n-1)\) submatrix with linearly independent rows. Then the coefficient of \(\vect{e}_k\) in the expansion of \( \vect{N} ( \vect{a}_1, \ldots, \vect{a}_{n-1} ) \), which by the defining Equation (5.4.5) is precisely (plus or minus) the determinant of this submatrix, is nonzero.
iii. This is a consequence of the observation (again using (5.4.6))
\[\begin{split} \begin{array}{rcl} \det{\left[\,\vect{a}_1\,\, \ldots\,\, \vect{a}_{n-1}\,\,\vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1})\, \right]} &=& \vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1}) \ip \vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1})\\ &=& \norm{\vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1})}^2 \geq 0, \end{array} \end{split}\]
and the already established fact that \(\vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1}) \neq \vect{0}\) if \(\{\vect{a}_1,\, \ldots\,,\, \vect{a}_{n-1}\}\) is linearly independent.
iv. We sketch the idea, which we borrow from volume versus area considerations in \(\R^2\) and \(\R^3\). We defined the volume of the \(n\)-dimensional parallelepiped \(\mathcal{P} \left(\vect{a}_1, \ldots, \vect{a}_{n} \right) \) generated by the \(n\) vectors \(\vect{a}_1, \ldots, \vect{a}_{n}\) as the absolute value of a determinant:
\[ \text{Vol}_n\left(\mathcal{P}(\vect{a}_1, \ldots, \vect{a}_{n}) \right) = |\det{\left[\,\vect{a}_1\,\, \ldots\,\, \,\vect{a}_{n}\,\right] }|. \]
The height times base principle in \(\R^n\) must be:
if
\[ \vect{a}_{n} \perp \mathcal{P}(\vect{a}_1, \ldots, \vect{a}_{n-1}) \]
then
\[ \text{Vol}_n\left(\mathcal{P}(\vect{a}_1, \ldots, \vect{a}_{n}) \right) = \text{Vol}_{n-1} \left(\mathcal{P}(\vect{a}_1, \ldots, \vect{a}_{n-1}) \right) \cdot \norm{\vect{a}_{n}}, \]
where \(\text{Vol}_{n-1}\) denotes the \((n-1)\)-dimensional volume of an \((n-1)\)-dimensional subset of \(\R^n\).
We apply this principle to the vector \(\vect{a}_{n} = \vect{a}^{\ast}_n = \vect{N}(\vect{a}_1, \ldots, \vect{a}_{n-1})\).
We know that \(\vect{a}^{\ast}_n\) is orthogonal to all vectors \(\vect{a}_1, \ldots, \vect{a}_{n-1}\). So the ‘height’ of \(\mathcal{P}(\vect{a}_1, \ldots, \vect{a}_{n-1}, \vect{a}^{\ast}_n)\) is equal to \(\norm{\vect{a}^{\ast}_n}\).
On the one hand we then have that
\[ \text{Vol}_n\left(\mathcal{P}(\vect{a}_1, \ldots, \vect{a}_{n-1}, \vect{a}^{\ast}_n) \right) = \text{Vol}_{n-1}\left(\mathcal{P}(\vect{a}_1, \ldots, \vect{a}_{n-1}) \right) \cdot \norm{\vect{a}^{\ast}_n}, \]
and on the other hand
\[\begin{split} \begin{array}{rcl} \text{Vol}_n\left(\mathcal{P}(\vect{a}_1, \ldots, \vect{a}_{n-1}, \vect{a}^{\ast}_n) \right) &=& |\det{ \left[\,\vect{a}_1\,\, \ldots\,\, \vect{a}_{n-1}\,\, \vect{a}^{\ast}_n\, \right] }| \\ &=& | \vect{a}^{\ast}_n\ip \vect{a}^{\ast}_n| = \norm{\vect{a}^{\ast}_n}^2. \end{array} \end{split}\]
Equating the two expressions for \(\text{Vol}_n \left(\mathcal{P} (\vect{a}_1, \ldots, \vect{a}_{n-1}, \vect{a}^{\ast}_n) \right) \)
we conclude that indeed
\[ \norm{\vect{a}^{\ast}_n} = \text{Vol}_{n-1} \left(\mathcal{P} (\vect{a}_1, \ldots, \vect{a}_{n-1}) \right).\]
5.4.5. Grasple Exercises
To compute the area of a triangle with sides \(\vect{u}\) and \(\vect{v}\) in the plane.
To find a point \(C\) on a line, such that the area of a triangle \(ABC\) has a given value.
Which points lie on the same side of a plane?
To solve a \(3 \times 3\) system using Cramer’s rule.
To find two entries in the inverse of a \(4 \times 4\) matrix (using the adjugate matrix).
To find a vector orthogonal to \(\vect{v}_1,\vect{v}_2,\vect{v}_3\) in \(\mathbb{R}^4\), with good orientation.
To compute the normal vector \(\vect{N}(\vect{a}_1,\vect{a}_2,\vect{a}_3)\) as in Subsection 5.4.4.