4.1. Subspaces of \(\R^n\)#
4.1.1. Introduction#
Subspaces are structures that appear in many different subfields of linear algebra. For instance, they appear as solution sets of homogeneous systems of linear equations, and as ranges of linear transformations, to mention two situations that we have already come across. In this section we will define them and analyze their basic properties. In Section 4.2 we will consider the important attributes basis and dimension.
4.1.2. Definition of Subspace and Basic Properties#
A (linear) subspace of \(\R^n\) is a subset \(S\) of \(\R^n\) with the following three properties:

(i) \(S\) contains the zero vector.

(ii) If two vectors \(\vect{u}\) and \(\vect{v}\) are in \(S\), then their sum is in \(S\) too:
\[ \vect{u} \in S, \vect{v} \in S \quad \Longrightarrow \quad \vect{u}+ \vect{v} \in S. \]

(iii) If a vector \(\vect{u}\) is in \(S\), then every scalar multiple of \(\vect{u}\) is in \(S\) too:
\[ \vect{u} \in S, c \in \R \quad \Longrightarrow \quad c\vect{u} \in S. \]
Property (ii) is also expressed as: a subspace is closed under sums. Likewise property (iii) says that a subspace is closed under taking scalar multiples.
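These closure properties can also be spot-checked numerically for a concrete candidate set. The Python sketch below is purely illustrative: the helper functions and the sample line \(S = \{c\,(1,2) : c \in \R\}\) in \(\R^2\) are our own choices, not part of the text.

```python
# Spot-check the three subspace properties for the line S = {c*(1,2)} in R^2.
# Vectors are modelled as tuples; membership means x2 = 2*x1 (up to rounding).

def in_line(v, tol=1e-9):
    """Membership test for S = {c*(1, 2) : c real}, i.e. x2 = 2*x1."""
    return abs(v[1] - 2 * v[0]) < tol

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, u):
    return (c * u[0], c * u[1])

# Property (i): S contains the zero vector.
assert in_line((0.0, 0.0))

# Property (ii): closed under sums (spot-checked on sample vectors).
u, v = (1.0, 2.0), (-3.0, -6.0)
assert in_line(u) and in_line(v) and in_line(add(u, v))

# Property (iii): closed under scalar multiples.
for c in (-2.0, 0.0, 0.5, 7.0):
    assert in_line(scale(c, u))
```

Of course, finitely many checks do not prove closure; only the algebraic argument does. Such checks are merely a way to catch a set that is *not* a subspace.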
The set in \(\R^n\) that consists of only the zero vector, i.e. \(S = \{\vect{0}\}\), is a subspace.
We will check that it has the three properties mentioned in the definition:
(i) \(S\) certainly contains the zero vector.

(ii) If two vectors \(\vect{u}\) and \(\vect{v}\) are in \(S\), then their sum is in \(S\) too:
\[\vect{u} \in S, \vect{v} \in S \,\, \Longrightarrow \,\, \vect{u} = \vect{v} = \vect{0} \,\, \Longrightarrow \,\, \vect{u} + \vect{v} = \vect{0} + \vect{0} = \vect{0} \in S. \]

(iii) If a vector \(\vect{u}\) is in \(S\), then every scalar multiple of \(\vect{u}\) is in \(S\) too:
\[ \vect{u} \in S\quad \Longrightarrow \quad \vect{u} = \vect{0} \quad \Longrightarrow \quad c\vect{u} = c\vect{0} = \vect{0} \in S. \]
The set that only consists of the zero vector is sometimes called a trivial subspace. There is one other subspace that is worthy of that name:
The trivial subspaces of \(\R^n\) are the set \(\{\vect{0}\}\) and the set \(\R^n\) itself.
In \(\R^2\), a line through the origin is a non-trivial subspace. A line not containing the origin is not. In fact, the latter does not satisfy any of the three properties of a subspace, as may be clear from Figure 4.1.1. In the picture on the right, for two vectors \(\vect{u}\) and \(\vect{v}\) on the line \(\mathcal L\), neither the sum \(\vect{u}+\vect{v}\) nor a scalar multiple \(c\vect{u}\) lies on \(\mathcal L\).
Examples of subspaces in \(\R^3\) are lines and planes through the origin. Try to visualize that these sets do satisfy the properties of a subspace. A sketch may help. It is good practice to keep these examples in mind as typical examples of subspaces.
A disk \(D\) specified by the inequality \(x^2 + y^2 \leq a^2\), where \(a\) is some positive number, is not a subspace of \(\R^2\). It satisfies neither property (ii) nor property (iii). See Figure 4.1.2.
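A quick numerical confirmation of this (illustrative only; the radius \(a = 1\) and the sample vectors are our own choices):

```python
# The disk D = {x^2 + y^2 <= a^2} fails both closure properties.

def in_disk(v, a=1.0):
    return v[0] ** 2 + v[1] ** 2 <= a ** 2

u = (0.8, 0.0)
v = (0.0, 0.8)
assert in_disk(u) and in_disk(v)

# The sum u + v has squared length 1.28 > 1, so it escapes the disk ...
assert not in_disk((u[0] + v[0], u[1] + v[1]))
# ... and so does the multiple 2u: the disk is not closed under scaling.
assert not in_disk((2 * u[0], 2 * u[1]))
```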
- Give an example of a subset of \(\R^2\) that has properties (i) and (ii), but not property (iii).
- Also give a set with only the properties (i) and (iii).
Solution to Exercise 4.1.1 (click to show)
We first give an example of a subset of \(\R^2\) that only has properties (i) and (ii).
Let \(S_1\) be the set of vectors in \(\R^2\) with non-negative entries. So, \(S_1\) is the first quadrant of the \(x_1\)-\(x_2\)-plane.
For two vectors in \(S_1\) their sum still lies in \(S_1\).
However, if \(\vect{v}\neq \vect{0}\) lies in \(S_1\) and \(c\) is negative, then \(c\vect{v}\) is not in \(S_1\).
An example of a subset of \(\R^2\) that only has properties (i) and (iii) is the following:
\[ S_2 = \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \R^2 : x_1 x_2 = 0 \right\}. \]
So \(S_2\) consists of the two coordinate axes.
\(S_2\) contains the origin, and is closed under taking multiples.
However, for the two vectors \(\vect{e}_1, \vect{e}_2\) in \(S_2\), the sum is not in \(S_2\):
\[ \vect{e}_1 + \vect{e}_2 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \notin S_2. \]
A non-empty subset \(S\) of \(\R^n\) is a subspace if and only if
\[ \vect{u} \in S,\, \vect{v} \in S,\, c_1, c_2 \in \R \quad \Longrightarrow \quad c_1\vect{u} + c_2\vect{v} \in S. \tag{4.1.1} \]
Proof of Proposition 4.1.1
To show that a subspace satisfies property (4.1.1), suppose that \(S\) is a subspace, \(\vect{u}\) and \(\vect{v}\) are vectors in \(S\), and \(c_1,c_2\) are real numbers.
From property (iii) it follows that
\[ c_1\vect{u} \in S \quad \text{and} \quad c_2\vect{v} \in S. \]
Next property (ii) implies that
\[ c_1\vect{u} + c_2\vect{v} \in S. \]
Conversely, assume \(S\) is non-empty and satisfies property (4.1.1).
Taking \(c_1 = c_2 = 1\) it follows that for \(\vect{u},\vect{v} \in S\)
\[ \vect{u} + \vect{v} = 1\vect{u} + 1\vect{v} \in S; \]
taking \(c_1 = c\), \(c_2 = 0\) it follows that for \(\vect{u} \in S\)
\[ c\vect{u} = c\vect{u} + 0\vect{u} \in S. \]
Finally, to show that \(S\) contains the zero vector, let \(\vect{u}\) be any vector in \(S\), which is possible since \(S\) is non-empty. Then, taking \(c = 0\) in the property just derived, it follows that
\[ \vect{0} = 0\vect{u} \in S. \]
By repeatedly applying the last proposition, we find for any subspace \(S\)
\[ \vect{u}_1, \ldots, \vect{u}_k \in S,\, c_1, \ldots, c_k \in \R \quad \Longrightarrow \quad c_1\vect{u}_1 + \cdots + c_k\vect{u}_k \in S. \]
So we can more generally say that a subspace is closed under taking linear combinations.
This also means that if \(\vect{u}_1, \ldots , \vect{u}_k \) are vectors in a subspace \(S\),
then \(\Span{\vect{u}_1, \ldots , \vect{u}_k} \) is contained in \(S\).
In fact, the standard example of a subspace is as given in the next proposition.
If \(\vect{v}_1,\vect{v}_2, \ldots , \vect{v}_r\) are vectors in \(\R^n\), then
\[ S = \Span{\vect{v}_1,\vect{v}_2, \ldots , \vect{v}_r} \]
is a subspace of \(\R^n\).
In this situation the vectors are said to generate the subspace, or to be a set of generators for the subspace. Recall Definition 2.2.2: the span of zero vectors in \(\R^n\) (in other words, the span of the empty set) is defined to be the set \(\{\vect{0}\}\).
Proof of Proposition 4.1.2
If the number of vectors \(r\) is equal to \(0\), the span is equal to \(\{\vect{0}\}\), the trivial subspace.
Next let us check the three properties in Definition 4.1.1 in case \(r \geq 1\).
Property (i): the zero vector is in the span, since
\[ \vect{0} = 0\vect{v}_1 + 0\vect{v}_2 + \cdots + 0\vect{v}_r. \]
For property (ii) we just have to note that the sum of two linear combinations of a set of vectors \( \{ \vect{v}_1,\vect{v}_2, \ldots , \vect{v}_r \}\) is again a linear combination of these vectors. This is quite straightforward:
\[ (c_1\vect{v}_1 + \cdots + c_r\vect{v}_r) + (d_1\vect{v}_1 + \cdots + d_r\vect{v}_r) = (c_1+d_1)\vect{v}_1 + \cdots + (c_r+d_r)\vect{v}_r. \]
Likewise you can check property (iii). This is Exercise 4.1.2.
Give a proof of property (iii).
In the previous proposition we do not impose any restrictions on the set of vectors \(\{ \vect{v}_1,\vect{v}_2, \ldots , \vect{v}_r \}\). In the sequel we will see that it will be advantageous to have a linearly independent set of generators.
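To see what span membership means computationally, the following Python sketch (the function name and sample vectors are made up for this illustration) decides whether a vector \(\vect{p}\) in \(\R^3\) lies in the span of two given vectors, by solving for the coefficients from the first two coordinates and then verifying all coordinates. It assumes the \(2\times 2\) system taken from the first two coordinates is invertible, which holds for this example.

```python
# Is p in Span{v1, v2}? Solve c1*v1 + c2*v2 = p from two coordinates
# (Cramer's rule on a 2x2 system), then verify the candidate coefficients
# against every coordinate of p.

def in_span2(v1, v2, p, tol=1e-9):
    det = v1[0] * v2[1] - v1[1] * v2[0]
    if abs(det) < tol:
        raise ValueError("first two coordinates do not determine c1, c2")
    c1 = (p[0] * v2[1] - p[1] * v2[0]) / det
    c2 = (v1[0] * p[1] - v1[1] * p[0]) / det
    return all(abs(c1 * a + c2 * b - q) < tol for a, b, q in zip(v1, v2, p))

v1, v2 = (1.0, 0.0, 2.0), (0.0, 1.0, -1.0)
assert in_span2(v1, v2, (3.0, 2.0, 4.0))      # equals 3*v1 + 2*v2
assert not in_span2(v1, v2, (3.0, 2.0, 0.0))  # third coordinate fails
```

This is essentially row reduction of the augmented matrix \(\begin{bmatrix} \vect{v}_1 & \vect{v}_2 \mid \vect{p}\end{bmatrix}\), specialized to two columns.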
Each subspace \(S\) in \(\R^3\) has one of the following forms:

(A) the single vector \(\vect{0}\);
(B) a line through the origin;
(C) a plane through the origin;
(D) the whole \(\R^3\).
In other words,
\[ S = \Span{\vect{v}_1, \ldots , \vect{v}_r} \quad \text{with} \quad 0 \leq r \leq 3, \]
and we may assume that the vectors \(\vect{v}_i\) are linearly independent.
Once more we recall the convention that the span of zero vectors (i.e., when \(r = 0\)) is the set only containing the zero vector.
Proof of Proposition 4.1.3
We build it up from small to large.
Suppose \(S\) is a subspace of \(\R^3\).
\(S\) will at least contain the zero vector. This may be all, i.e., \(S = \{\vect{0}\}\). Then we are in case (A). Case closed.
If \(S \neq \{\vect{0}\}\), then \(S\) contains at least one nonzero vector \(\vect{v}_1\). By property (iii) \(S\) then contains all multiples \(c\vect{v}_1\). If that is all, if all vectors in \(S\) are multiples of \(\vect{v}_1\), then \(S = \Span{\vect{v}_1}\), a line through the origin, and we are in case (B).
If \(S\) is larger than \(\Span{\vect{v}_1}\) we continue our enumeration of possible subspaces. So suppose there is a vector \(\vect{v}_2\) in \(S\) that is not in \(\Span{\vect{v}_1}\). By Theorem 2.5.1 the set \(\{\vect{v}_1,\vect{v}_2\}\) is linearly independent, and by virtue of Proposition 4.1.1, \(S\) then contains \(\Span{\vect{v}_1,\vect{v}_2}\). Again, this may be the end point: if \(S = \Span{\vect{v}_1,\vect{v}_2}\), then we are in case (C).
If not, \(S\) must contain a third linearly independent vector \(\vect{v}_3\), and the same argument as before gives that \(S\) contains \(\Span{\vect{v}_1,\vect{v}_2,\vect{v}_3}\). We claim that this implies that
\[ S = \Span{\vect{v}_1,\vect{v}_2,\vect{v}_3} = \R^3, \]
so that we are in case (D).
For, if not, there must be a vector \( \vect{v}_4 \in \R^3\) not in \(\Span{\vect{v}_1,\vect{v}_2,\vect{v}_3}\). Then \(\{\vect{v}_1,\vect{v}_2,\vect{v}_3, \vect{v}_4\}\) would be a set of four linearly independent vectors in \(\R^3\), which by Corollary 2.5.2 is impossible.
The argument can be generalized to prove the following theorem.
Every subspace of \(\R^n\) is of the form
\[ S = \Span{\vect{v}_1, \vect{v}_2, \ldots , \vect{v}_r}, \]
where \(\{\vect{v}_1, \ldots , \vect{v}_r\}\) is a linearly independent set of vectors in \(\R^n\) (and \(0 \leq r \leq n\)).
It may seem that with the above complete description of all possible subspaces in \(\R^n\) the story of subspaces can be closed. However, subspaces will appear in different contexts in various guises, each valuable in its own right. One of these we will focus on immediately.
4.1.3. Column Space and Null Space of a Matrix#
We now turn our attention to two important subspaces closely related to an \(m\times n\) matrix \(A\).
The column space of an \(m\times n\) matrix \(A= \begin{bmatrix} \vect{a}_1 & \vect{a}_2 & \ldots & \vect{a}_n \end{bmatrix}\) is the span of the columns of \(A\):
\[ \Col{A} = \Span{\vect{a}_1, \vect{a}_2, \ldots , \vect{a}_n}. \]
The null space of an \(m\times n\) matrix \(A\) is the solution set of the homogeneous equation \(A\vect{x} = \vect{0}\):
\[ \Nul{A} = \{ \vect{x} \in \R^n : A\vect{x} = \vect{0} \}. \]
For an \(m\times n\) matrix \(A\), Col \(A\) is the set of all vectors of the form \(A\vect{x}\), for \(\vect{x}\in\R^n\). The column space Col \({A}\) can also be interpreted as the range of the linear transformation \(T:\R^n \to \R^m\) defined via \(T(\vect{x}) = A\vect{x}\). (Cf. Proposition 3.1.1.)
Note that for an \(m\times n \) matrix \(A\) the column space is a subset of \(\R^m\) and the null space lives in \(\R^n\). In short,
\[ \Col{A} \subseteq \R^m, \qquad \Nul{A} \subseteq \R^n. \]
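A concrete illustration of this, with a made-up \(2\times 3\) matrix: products \(A\vect{x}\) are vectors with two entries, so they lie in \(\R^2\), while null-space vectors have three entries and live in \(\R^3\).

```python
# Col A collects vectors A*x (in R^2 here); Nul A collects solutions of
# A*x = 0 (in R^3 here). The matrix below is made up for illustration.

A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0]]  # second row = 2 * first row

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

x = [1.0, 1.0, 1.0]
print(matvec(A, x))          # a vector in Col A, living in R^2

v = [-2.0, 1.0, 0.0]         # satisfies x1 + 2*x2 + 3*x3 = 0 in both rows
assert matvec(A, v) == [0.0, 0.0]   # so v lies in Nul A, a subspace of R^3
```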
The next proposition shows that the designation ‘space’ in the above definition is well justified.
Let \(A\) be an \(m\times n\) matrix.
- The column space of \(A\) is a subspace of \(\R^m\).
- The null space of \(A\) is a subspace of \(\R^n\).
Proof of Proposition 4.1.4
Let \(A\) be an \(m\times n\) matrix.
- The columns of \(A\) are vectors in \(\R^m\). As we have seen in Proposition 4.1.2, the span of a set of vectors in \(\R^m\) is indeed a subspace of \(\R^m\).
- To show that the null space is a subspace, we check the requirements of the definition.
First, for \(\vect{v} = \vect{0}\),
\[ A\vect{v} = A\vect{0} = \vect{0}, \]
so \(\vect{v} = \vect{0}\) is contained in the null space.
Second, to show that \(\Nul{A}\) is closed under sums, suppose that \(\vect{u}\) and \(\vect{v}\) are two vectors in \(\Nul{A}\). Then from
\[ A\vect{u} = \vect{0} \quad \text{and} \quad A\vect{v} = \vect{0}, \]
we deduce
\[ A(\vect{u}+\vect{v}) = A\vect{u}+ A\vect{v} = \vect{0} +\vect{0} = \vect{0}, \]
which implies that \(\vect{u}+ \vect{v}\) also lies in \(\Nul{A}\).
Third, to show that \(\Nul{A}\) is closed under taking scalar multiples, suppose that \(\vect{u}\) is a vector in \(\Nul{A}\), i.e.
\[ A\vect{u} = \vect{0}, \]
and \(c\) is a real number. Then
\[ A(c\vect{u}) = c\,A\vect{u} = c\,\vect{0} = \vect{0}, \]
which proves that \(c\vect{u}\) also lies in \(\Nul{A}\). Hence \(\Nul{A}\) has all the properties of a subspace.
The above proof, that the null space is a subspace, is as basic as possible. That is, we started from the definitions (of null space and subspace) and used properties of the matrix product to connect the two.
Alternatively, we could have used knowledge already acquired earlier. In Section 2.3 we have seen that the solution set of a homogeneous system
\[ A\vect{x} = \vect{0} \]
can be written in parametric vector form
\[ \vect{x} = c_1\vect{u}_1 + c_2\vect{u}_2 + \cdots + c_k\vect{u}_k, \quad c_1, \ldots, c_k \in \R. \]
Thus: it is the span of a set of vectors, and as such by Proposition 4.1.2 it is a subspace.
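A concrete instance of this point of view, with a matrix made up for illustration: for the \(A\) below, back substitution gives \(x_3 = 0\) and \(x_1 = -2x_2\) with \(x_2\) free, so the null space is the span of a single vector, and every multiple of that vector is annihilated by \(A\).

```python
# For this A, the parametric vector form of the solution set of A*x = 0 is
# x = c * (-2, 1, 0), i.e. Nul A = Span{(-2, 1, 0)}.

A = [[1.0, 2.0, 1.0],
     [0.0, 0.0, 1.0]]

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

u = [-2.0, 1.0, 0.0]       # spanning vector coming from the free variable x2
for c in (-3.0, 0.0, 1.5, 10.0):
    assert matvec(A, [c * t for t in u]) == [0.0, 0.0]
```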
Suppose that \(A\) and \(B\) are matrices for which the product \(AB\) is defined.
- Show that the column space of \(AB\) is a subset of the column space of \(A\), i.e.
\[ \Col{AB} \subseteq \Col{A}. \]
- Can you find a similar formula relating the null space of \(AB\) to the null space of either \(A\) or \(B\) (or both)?
Solution to Exercise 4.1.3 (click to show)
Suppose that \(A\) is an \(m\times n\) and \(B\) an \(n \times p\) matrix. Thus \(AB\) is an \(m\times p\) matrix.
- The column space of an \(m\times n\) matrix \(M\) consists of all vectors \(\vect{w} = M\vect{v}\), where \(\vect{v}\) is a vector in \(\R^n\).
Suppose \(\vect{w}\) is a vector in Col\((AB)\), so \(\vect{w} = AB\vect{v}\) for some vector \(\vect{v}\) in \(\R^p\).
Then also \(\vect{w} = A(B\vect{v})\), where \(B\vect{v}\) is a vector in \(\R^n\), which proves that \(\vect{w} \in \) Col\((A)\). With this we have shown that every vector in Col\((AB)\) also lies in Col\((A)\), i.e.,
\[ \Col{(AB)} \subseteq \Col{(A)}. \]
- The null space of an \(n\times p\) matrix \(B\) consists of all vectors \(\vect{v}\) in \(\R^p\) for which \(B\vect{v} = \vect{0}\). We show that
\[ \Nul{(B)} \subseteq \Nul{(AB)}. \]
Suppose \(\vect{v}\) is an element of \(\Nul{(B)}\). Then \(B\vect{v}= \vect{0}\), so a fortiori \(AB\vect{v}= A\vect{0} =\vect{0}\), and so \(\vect{v}\) lies in \(\Nul{(AB)}\).
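The inclusion \(\Nul{(B)} \subseteq \Nul{(AB)}\) can be spot-checked numerically. In the sketch below (the \(2\times 2\) matrices are made up), \(B\) is singular, and a vector in its null space is indeed sent to \(\vect{0}\) by \(AB\) as well.

```python
# If B sends v to 0, then AB sends v to A*0 = 0 as well.

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[1.0, -1.0], [2.0, -2.0]]   # singular: Nul(B) = Span{(1, 1)}

v = [1.0, 1.0]
assert matvec(B, v) == [0.0, 0.0]             # v lies in Nul(B)
assert matvec(A, matvec(B, v)) == [0.0, 0.0]  # hence v lies in Nul(AB)
```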
For an \(n\times n\) matrix \(A\), the null space and the column space are both subspaces of (the same) \(\R^n\). Prove or disprove the following statement.
For a square matrix \(A\):
\[ \Col{(A)} \subseteq \Nul{(A)} \quad \Longleftrightarrow \quad A^2 = O. \]
Solution to Exercise 4.1.4 (click to show)
First we show that
\[ A^2 = O \quad \Longrightarrow \quad \Col{(A)} \subseteq \Nul{(A)}. \]
Let \(\vect{w}\in\Col{(A)}\).
Then there is a vector \(\vect{v}\) in \(\R^n\) for which \(\vect{w} = A\vect{v}\).
It follows that \(A\vect{w} = A^2\vect{v} = O\vect{v} = \vect{0}\). Thus \(\vect{w} \in \Nul{(A)}\).
Next we have to show that
\[ \Col{(A)} \subseteq \Nul{(A)} \quad \Longrightarrow \quad A^2 = O. \]
If we can show that \(A^2\vect{x}= \vect{0}\) for every vector \(\vect{x}\) in \(\R^n\), we’re done.
So let \(\vect{x}\) be any vector in \(\R^n\). Then \(\vect{y} =A\vect{x}\) lies in the column space of \(A\), which is contained in the null space of \(A\).
So \(A\vect{y} = A^2\vect{x} = \vect{0}\), and we may conclude that indeed \(A^2 = O\).
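The equivalence established in this solution can be spot-checked on a concrete matrix. The nilpotent matrix \(A\) below is made up for illustration: it satisfies \(A^2 = O\), and each of its columns indeed lies in \(\Nul{(A)}\).

```python
# Spot check: for this A we have A^2 = O, and Col(A) sits inside Nul(A),
# since each column of A is sent to the zero vector by A.

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

A = [[0.0, 1.0],
     [0.0, 0.0]]

assert matmul(A, A) == [[0.0, 0.0], [0.0, 0.0]]   # A^2 = O
for col in ([0.0, 0.0], [1.0, 0.0]):              # the columns of A
    assert matvec(A, col) == [0.0, 0.0]           # each lies in Nul(A)
```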
4.1.4. Grasple Exercises#
To check whether a vector is in a subspace spanned by two vectors.
To check whether a vector is in a subspace spanned by two vectors.
To decide whether a vector \(\vect{p}\) is in Col\((A)\).
To decide whether a vector \(\vect{p}\) is in Col\((A)\).
To give a vector in Nul\((A)\) and a vector not in Nul\((A)\).
To decide whether a vector \(\vect{p}\) is in Nul\((A)\).
To decide whether a vector \(\vect{p}\) is in Nul\((A)\).
Can two subspaces of \(\R^n\) be disjoint?
For an \(m\times n\) matrix \(A\), in which \(\R^p\) does Nul\((A)\) lie? And Col\((A)\)?
To find \(p\) such that Nul\((A)\) lies in \(\R^p\).
To find a parameter such that Nul\((A)=\) Col\((A)\) for a \(2\times2\) matrix \(A\).
To find a parameter such that Nul\((A)=\) Col\((A)\) for a \(2\times2\) matrix \(A\).
To check whether certain subsets \(S_i\) of \(\mathbb{R}^3\) are subspaces.
To check whether certain subsets \(S_i\) of \(\mathbb{R}^3\) are subspaces.