4.1. Subspaces of \(\R^n\)#

4.1.1. Introduction#

Subspaces are structures that appear in many different subfields of linear algebra. For instance, they appear as solution sets of homogeneous systems of linear equations, and as ranges of linear transformations, to mention two situations that we have already come across. In this section we will define them and analyze their basic properties. In Section 4.2 we will consider the important concepts of basis and dimension.

4.1.2. Definition of Subspace and Basic Properties#

Definition 4.1.1

A (linear) subspace of \(\R^n\) is a subset \(S\) of \(\R^n\) with the following three properties:

  1. \(S\) contains the zero vector.

  2. If two vectors \(\vect{u}\) and \(\vect{v}\) are in \(S\), then their sum is in \(S\) too:


    \[ \vect{u} \in S, \vect{v} \in S \quad \Longrightarrow \quad \vect{u}+ \vect{v} \in S. \]
  3. If a vector \(\vect{u}\) is in \(S\), then every scalar multiple of \(\vect{u}\) is in \(S\) too:


    \[ \vect{u} \in S, c \in \R \quad \Longrightarrow \quad c\vect{u} \in S. \]

Remark 4.1.1

Property (ii) is also expressed as: a subspace is closed under sums. Likewise property (iii) says that a subspace is closed under taking scalar multiples.

Example 4.1.1

The set in \(\R^n\) that consists of only the zero vector, i.e. \(S = \{\vect{0}\}\), is a subspace.

We will check that it has the three properties mentioned in the definition:

  1. \(S\) certainly contains the zero vector.

  2. If two vectors \(\vect{u}\) and \(\vect{v}\) are in \(S\), then their sum is in \(S\) too:


    \[\vect{u} \in S, \vect{v} \in S \,\, \Longrightarrow \,\, \vect{u} = \vect{v} = \vect{0} \,\, \Longrightarrow \,\, \vect{u} + \vect{v} = \vect{0} + \vect{0} = \vect{0} \in S. \]
  3. If a vector \(\vect{u}\) is in \(S\), then every scalar multiple of \(\vect{u}\) is in \(S\) too:


    \[ \vect{u} \in S\quad \Longrightarrow \quad \vect{u} = \vect{0} \quad \Longrightarrow \quad c\vect{u} = c\vect{0} = \vect{0} \in S. \]

The set that only consists of the zero vector is sometimes called a trivial subspace. There is one other subspace that is worthy of that name:

Definition 4.1.2

The trivial subspaces of \(\R^n\) are the set \(\{\vect{0}\}\) and the set \(\R^n\) itself.

Example 4.1.2

In \(\R^2\), a line through the origin is a non-trivial subspace. A line not containing the origin is not. In fact, the latter does not satisfy any of the three properties of a subspace, as may be clear from Figure 4.1.1. In the picture on the right, for two vectors \(\vect{u}\) and \(\vect{v}\) on the line \(\mathcal L\),

\[ \vect{u}+\vect{v} \, \text{ and } \, -\tfrac32\vect{u} \, \text{ do not lie on } \,{\mathcal L} \]

Fig. 4.1.1 A line is a subspace of \(\R^2\) if and only if it goes through (0,0)#

Example 4.1.3

Examples of subspaces in \(\R^3\) are lines and planes through the origin. Try to visualize that these sets do satisfy the properties of a subspace. A sketch may help. It is good practice to keep these examples in mind as typical examples of subspaces.

Example 4.1.4

A disk \(D\) specified by the inequality \(x^2 + y^2 \leq a^2\), where \(a\) is some positive number, is not a subspace of \(\R^2\). It satisfies neither property (ii) nor property (iii). See Figure 4.1.2.


Fig. 4.1.2 A disk is not a subspace of \(\R^2\).#
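To make the failure of both closure properties concrete, here is a small numerical sketch (NumPy-based; the radius \(a = 1\) and the specific vectors are arbitrary choices for illustration):

```python
import numpy as np

a = 1.0  # radius of the disk x^2 + y^2 <= a^2

def in_disk(v):
    return v[0]**2 + v[1]**2 <= a**2

u = np.array([0.8, 0.0])
v = np.array([0.0, 0.8])

print(in_disk(u), in_disk(v))  # both vectors lie in the disk: True True
print(in_disk(u + v))          # |u + v|^2 = 1.28 > 1, so the sum escapes: False
print(in_disk(3 * u))          # |3u|^2 = 5.76 > 1, so does this multiple: False
```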

Exercise 4.1.1

  1. Give an example of a subset of \(\R^2\) that has properties (i) and (ii), but not property (iii).

  2. Also give a subset with only properties (i) and (iii).

Solution to Exercise 4.1.1 (click to show)

We first give an example of a subset of \(\R^2\) that only has properties (i) and (ii).

Let \(S_1\) be the set of vectors in \(\R^2\) with non-negative entries. So, \(S_1\) is the first quadrant of the \(x_1\)-\(x_2\)-plane.
For two vectors in \(S_1\) their sum still lies in \(S_1\). However, if \(\vect{v}\neq \vect{0}\) lies in \(S_1\) and \(c\) is negative, then \(c\vect{v}\) is not in \(S_1\).

An example of a subset of \(\R^2\) that only has properties (i) and (iii) is the following:

\[\begin{split} S_2 = \left\{ \begin{bmatrix}x_1 \\ x_2 \end{bmatrix}\,:\,\, x_1x_2 = 0 \right\}. \end{split}\]

So \(S_2\) consists of the two coordinate axes.
\(S_2\) contains the origin and is closed under taking scalar multiples.
However, for the two vectors \(\vect{e}_1\) and \(\vect{e}_2\) in \(S_2\), the sum is not in \(S_2\):

\[\begin{split} \vect{e}_1 + \vect{e}_2 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}. \end{split}\]
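The two counterexamples above can be checked mechanically. The following sketch (NumPy; the test vectors are arbitrary) verifies which closure property fails for each set:

```python
import numpy as np

def in_S1(v):
    # first quadrant: both entries non-negative
    return v[0] >= 0 and v[1] >= 0

def in_S2(v):
    # union of the two coordinate axes: x1 * x2 = 0
    return v[0] * v[1] == 0

u = np.array([1.0, 2.0])
v = np.array([3.0, 0.5])
print(in_S1(u + v))     # sum of first-quadrant vectors stays in S1: True
print(in_S1(-1.5 * u))  # a negative multiple leaves S1: False

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(in_S2(3.0 * e1))  # multiples stay on an axis: True
print(in_S2(e1 + e2))   # but the sum [1, 1] is off both axes: False
```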

Proposition 4.1.1

A non-empty subset \(S\) of \(\R^n\) is a subspace if and only if

(4.1.1)#\[\text{for all } \vect{u}, \vect{v} \in S, c_1, c_2 \in \R\,\,\text{ it holds that }\,\, c_1\vect{u}+ c_2 \vect{v} \in S.\]

Proof. To show that a subspace satisfies property (4.1.1), suppose that \(S\) is a subspace, \(\vect{u}\) and \(\vect{v}\) are vectors in \(S\) and \(c_1,c_2\) are real numbers.

From property (iii) it follows that

\[ c_1\vect{u} \in S \quad \text{and} \quad c_2\vect{v} \in S. \]

Next property (ii) implies that

\[ c_1\vect{u} + c_2\vect{v} \in S. \]

Conversely, assume \(S\) is non-empty and satisfies property (4.1.1).

Taking \(c_1 = c_2 = 1\) it follows that for \(\vect{u},\vect{v} \in S\)

\[ \vect{u}+ \vect{v} = 1\vect{u}+1\vect{v} \in S, \text{ so }S\text{ has property (ii)}; \]

taking \(c_1 = c\), \(c_2 = 0\) it follows that for \(\vect{u} \in S\)

\[ c\vect{u} = c\vect{u}+0\vect{u} \in S, \text{ so }S\text{ has property (iii)}. \]

Finally, to show that \(S\) contains the zero vector, let \(\vect{u}\) be any vector in \(S\), which is possible since \(S\) is non-empty. Then from property (iii), taking \(c = 0\), it follows that

\[\vect{0} = 0\vect{u}, \quad \text{so } \,\,\vect{0} \text{ lies in }S.\]

Remark 4.1.2

By repeatedly applying the last proposition, we find for any subspace \(S\)

\[ \vect{u}_1, \ldots , \vect{u}_k \in S, c_1, \ldots , c_k \in \R \quad \Longrightarrow \quad c_1\vect{u}_1+ \ldots + c_k\vect{u}_k \in S. \]

So we can more generally say that a subspace is closed under taking linear combinations.
This also means that if \(\vect{u}_1, \ldots , \vect{u}_k \) are vectors in a subspace \(S\),
then \(\Span{\vect{u}_1, \ldots , \vect{u}_k} \) is contained in \(S\).

In fact, the standard example of a subspace is as given in the next proposition.

Proposition 4.1.2

If \(\vect{v}_1,\vect{v}_2, \ldots , \vect{v}_r\) are vectors in \(\R^n\), then

\[ \Span{\vect{v}_1,\vect{v}_2, \ldots , \vect{v}_r} \quad \text{is a subspace in } \R^n. \]

In this situation the vectors are said to generate the subspace, or to be a set of generators for the subspace. Recall Definition 2.2.2: the span of zero vectors in \(\R^n\) (in other words, the span of the empty set) is defined to be the set \(\{\vect{0}\}\).

Proof. If the number of vectors \(r\) is equal to \(0\), the span is equal to \(\{\vect{0}\}\), the trivial subspace.

Next let us check the three properties in Definition 4.1.1 in case \(r \geq 1\).

Property (i):

\[ \vect{0} = 0\vect{v}_1+0\vect{v}_2+ \ldots + 0\vect{v}_r, \quad \text{so} \quad \vect{0} \in \text{Span} \{ \vect{v}_1,\vect{v}_2, \ldots , \vect{v}_r \}. \]

For property (ii) we just have to note that the sum of two linear combinations

\[ (c_1\vect{v}_1+ \ldots + c_r\vect{v}_r)\quad \text{and} \quad (d_1\vect{v}_1+ \ldots + d_r\vect{v}_r) \]

of a set of vectors \( \{ \vect{v}_1,\vect{v}_2, \ldots , \vect{v}_r \}\) is again a linear combination of these vectors. This is quite straightforward:

\[ (c_1\vect{v}_1+ \ldots + c_r\vect{v}_r) + (d_1\vect{v}_1+ \ldots + d_r\vect{v}_r) = (c_1+d_1)\vect{v}_1+ \ldots + (c_r+d_r)\vect{v}_r. \]

Likewise you can check property (iii). This is Exercise 4.1.2.

Exercise 4.1.2

Give a proof of property (iii).

Remark 4.1.3

In the previous proposition we do not impose any restrictions on the set of vectors \(\{ \vect{v}_1,\vect{v}_2, \ldots , \vect{v}_r \}\). In the sequel we will see that it is advantageous to have a linearly independent set of generators.

Proposition 4.1.3

Each subspace \(S\) in \(\R^3\) has one of the following forms:

(A) the single vector \(\vect{0}\),

(B) a line through the origin,

(C) a plane through the origin,

(D) the whole \(\R^3\).

In other words

\[ S = \text{Span}\{\vect{v}_i\, |\,\ i = 1,\ldots, r\} \quad \text{where }\, r = 0, 1, 2 \text{ or } 3, \]

and we may assume that the vectors \(\vect{v}_i\) are linearly independent.

Once more we recall the convention that the span of zero vectors (i.e., when \(r = 0\)) is the set only containing the zero vector.

Proof of Proposition 4.1.3.

We build it up from small to large.

Suppose \(S\) is a subspace of \(\R^3\).

\(S\) will at least contain the zero vector. This may be all, i.e., \(S = \{\vect{0}\}\). Then we are in case (A). Case closed.

If \(S \neq \{\vect{0}\}\), then \(S\) contains at least one nonzero vector \(\vect{v}_1\). By property (iii) \(S\) then contains all multiples \(c\vect{v}_1\). If that is all, if all vectors in \(S\) are multiples of \(\vect{v}_1\), then \(S = \Span{\vect{v}_1}\), a line through the origin, and we are in case (B).

If \(S\) is larger than \(\Span{\vect{v}_1}\) we continue our enumeration of possible subspaces. So suppose there is a vector \(\vect{v}_2\) in \(S\) that is not in \(\Span{\vect{v}_1}\). By Theorem 2.5.1 the set \(\{\vect{v}_1,\vect{v}_2\}\) is linearly independent, and by virtue of Proposition 4.1.1, \(S\) then contains \(\Span{\vect{v}_1,\vect{v}_2}\). Again, this may be the end point, \(S = \Span{\vect{v}_1,\vect{v}_2}\), and then we are in case (C).

If not, \(S\) contains a vector \(\vect{v}_3\) that is not in \(\Span{\vect{v}_1,\vect{v}_2}\). Then \(\{\vect{v}_1,\vect{v}_2,\vect{v}_3\}\) is linearly independent, and the same argument as before gives that \(S\) contains \(\Span{\vect{v}_1,\vect{v}_2,\vect{v}_3}\). We claim that this implies that

\[ S = \Span{\vect{v}_1,\vect{v}_2,\vect{v}_3} = \R^3, \text{ i.e., we are in case (D)}. \]

For, if not, there must be a vector \( \vect{v}_4 \in \R^3\) not in \(\Span{\vect{v}_1,\vect{v}_2,\vect{v}_3}\). Then \(\{\vect{v}_1,\vect{v}_2,\vect{v}_3, \vect{v}_4\}\) would be a set of four linearly independent vectors in \(\R^3\), which by Corollary 2.5.2 is impossible.

The argument can be generalized to prove the following theorem.

Theorem 4.1.1

Every subspace of \(\R^n\) is of the form

\[ S = \Span{\vect{v}_1, \ldots , \vect{v}_r} \quad \text{for some } \, r \leq n, \]

where

\[ \{\vect{v}_1, \ldots , \vect{v}_r\} \,\, \text{is linearly independent.} \]
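Assuming SymPy is available, one way to pass from an arbitrary set of generators to a linearly independent one, as the theorem promises, is to row reduce the matrix having the generators as columns; `columnspace` returns the pivot columns. (The generators below are an arbitrary example with one redundancy.)

```python
from sympy import Matrix

# three generators of a subspace of R^3; v3 = v1 + v2 is redundant
v1, v2, v3 = Matrix([1, 0, 1]), Matrix([0, 1, 1]), Matrix([1, 1, 2])
M = v1.row_join(v2).row_join(v3)

basis = M.columnspace()  # a linearly independent set with the same span
print(len(basis))        # r = 2 <= n = 3
```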

It may seem that with the above complete description of all possible subspaces in \(\R^n\) the story of subspaces can be closed. However, subspaces will appear in different contexts in various guises, each valuable in its own right. One of these we will focus on immediately.

4.1.3. Column Space and Null Space of a Matrix#

We now turn our attention to two important subspaces closely related to an \(m\times n\) matrix \(A\).

Definition 4.1.3

The column space of an \(m\times n\) matrix \(A= \begin{bmatrix} \vect{a}_1 & \vect{a}_2 & \ldots & \vect{a}_n \end{bmatrix}\) is the span of the columns of \(A\):

\[ \Col{A} = \Span{\vect{a}_1,\vect{a}_2,\ldots,\vect{a}_n}. \]

The null space of an \(m\times n\) matrix \(A\) is the solution set of the homogeneous equation \(A\vect{x} = \vect{0}\):

\[ \Nul{A} = \{\vect{x} \in \mathbb{R}^n \,|\, A\vect{x} = \vect{0}\}. \]

Remark 4.1.4

For an \(m\times n\) matrix \(A\), \(\Col{A}\) is the set of all vectors of the form \(A\vect{x}\), for \(\vect{x}\in\R^n\). The column space \(\Col{A}\) can also be interpreted as the range of the linear transformation \(T:\R^n \to \R^m\) defined via \(T(\vect{x}) = A\vect{x}\). (Cf. Proposition 3.1.1.)

Remark 4.1.5

Note that for an \(m\times n \) matrix \(A\) the column space is a subset of \(\R^m\) and the null space lives in \(\R^n\). In short,

\[ \Col{A} \subseteq \R^m ,\quad \Nul{A} \subseteq \R^n. \]
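As a sketch of how membership in the column space can be tested in practice (using SymPy; the matrix and the vectors are arbitrary illustrations): a vector \(\vect{p}\) lies in \(\Col{A}\) exactly when the system \(A\vect{x} = \vect{p}\) is consistent, i.e., when appending \(\vect{p}\) as an extra column does not raise the rank.

```python
from sympy import Matrix

A = Matrix([[1, 0],
            [2, 1],
            [3, 1]])  # a 3x2 matrix: Col A sits inside R^3

def in_col(A, p):
    # p is in Col A exactly when A x = p is consistent
    return A.row_join(p).rank() == A.rank()

print(in_col(A, Matrix([1, 3, 4])))  # equals a1 + a2, so in Col A: True
print(in_col(A, Matrix([1, 0, 0])))  # not a combination of the columns: False
```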

The next proposition shows that the designation ‘space’ in the above definition is well justified.

Proposition 4.1.4

Let \(A\) be an \(m\times n\) matrix.

  1. The column space of \(A\) is a subspace of \(\R^m\).


  2. The null space of \(A\) is a subspace of \(\R^n\).

Proof. Let \(A\) be an \(m\times n\) matrix.

  1. The columns of \(A\) are vectors in \(\R^m\). As we have seen in Proposition 4.1.2:

    the span of a set of vectors in \(\R^m\) is indeed a subspace of \(\R^m\).

  2. To show that the null space is a subspace, we check the requirements of the definition.

    First, for \(\vect{v} = \vect{0}\),

    \[ A\vect{v} = A\vect{0} = \vect{0}, \]

    so \(\vect{v} = \vect{0}\) is contained in the null space.

    Second, to show that \(\Nul{A}\) is closed under sums, suppose that \(\vect{u}\) and \(\vect{v}\) are two vectors in \(\Nul{A}\). Then from


    \[ A\vect{u} = \vect{0} \quad \text{and} \quad A\vect{v} = \vect{0}, \]

    we deduce

    \[ A(\vect{u}+\vect{v}) = A\vect{u}+ A\vect{v} = \vect{0} +\vect{0} = \vect{0}, \]

    which implies that

    \[ \vect{u}+ \vect{v} \text{ also lies in } \Nul{A}. \]

    Third, to show that \(\Nul{A}\) is closed under taking scalar multiples, suppose that \(\vect{u}\) is a vector in \(\Nul{A}\), i.e.


    \[ A\vect{u} = \vect{0} \]

    and \(c\) is a real number.

    Then

    \[A(c\vect{u}) = c\,A(\vect{u}) = c\,\vect{0} = \vect{0},\]

    which proves that

    \[ c\vect{u} \text{ also lies in } \Nul{A}. \]

    Hence \(\Nul{A}\) has all the properties of a subspace.

Remark 4.1.6

The above proof, that the null space is a subspace, is as basic as possible. That is, we started from the definitions (of null space and subspace) and used properties of the matrix product to connect the two.

Alternatively we could have used knowledge already acquired earlier. In Section 2.3 we have seen that the solution set of a homogeneous system

\[ A\vect{x} = \vect{0} \]

can be written in parametric vector form

\[ \vect{x} = c_1\vect{u}_1 + c_2\vect{u}_2 + \ldots + c_k\vect{u}_k. \]

Thus: it is the span of a set of vectors, and as such by Proposition 4.1.2 it is a subspace.
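Assuming SymPy, `nullspace` returns exactly such spanning vectors \(\vect{u}_1, \ldots, \vect{u}_k\), and one can check symbolically that every linear combination of them solves the homogeneous system (the matrix here is an arbitrary rank-one example):

```python
from sympy import Matrix, symbols, simplify

A = Matrix([[1, 2, 3],
            [2, 4, 6]])
u1, u2 = A.nullspace()  # spanning vectors: Nul A = Span{u1, u2}

c1, c2 = symbols('c1 c2')
x = c1 * u1 + c2 * u2   # general solution in parametric vector form
print((A * x).applyfunc(simplify))  # the zero vector, for all c1, c2
```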

Exercise 4.1.3

Suppose that \(A\) and \(B\) are matrices for which the product \(AB\) is defined.

  1. Show that the column space of \(AB\) is a subset of the column space of \(A\), i.e.

    \[ \Col{AB} \subseteq \Col{A}. \]
  2. Can you find a similar formula relating the null space of \(AB\) to the null space of either \(A\) or \(B\) (or both)?

Solution to Exercise 4.1.3 (click to show)

Suppose that \(A\) is an \(m\times n\) and \(B\) an \(n \times p\) matrix. Thus \(AB\) is an \(m\times p\) matrix.

  1. The column space of an \(m\times n\) matrix \(M\) consists of all vectors \(\vect{w} = M\vect{v}\), where \(\vect{v}\) is a vector in \(\R^n\).

    Suppose \(\vect{w}\) is a vector in Col\((AB)\), so \(\vect{w} = AB\vect{v}\) for some vector \(\vect{v}\) in \(\R^p\).
    Then also \(\vect{w} = A(B\vect{v})\), where \(B\vect{v}\) is a vector in \(\R^n\), which proves that \(\vect{w} \in \) Col\((A)\).

    With this we have shown that every vector in Col\((AB)\) also lies in Col\((A)\), i.e.,

    \[ \Col{(AB)} \subseteq \Col{(A)}. \]
  2. The null space of an \(m\times n\) matrix \(M\) consists of all vectors \(\vect{v}\) in \(\R^n\) for which \(M\vect{v} = \vect{0}\). We show that

    \[ \Nul{(B)} \subseteq \Nul{(AB)}. \]

    Suppose \(\vect{v}\) is an element of \(\Nul{(B)}\).   Then \(B\vect{v}= \vect{0}\), so a fortiori \(AB\vect{v}= A\vect{0} =\vect{0}\), and so \(\vect{v}\) lies in \(\Nul{(AB)}\).
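A quick symbolic spot check of the inclusion \(\Nul{(B)} \subseteq \Nul{(AB)}\) (SymPy; the matrices are arbitrary, with \(B\) chosen singular so that \(\Nul{(B)}\) is nontrivial):

```python
from sympy import Matrix, zeros

A = Matrix([[1, 1],
            [0, 1]])
B = Matrix([[1, 2],
            [2, 4]])  # singular, so Nul B contains nonzero vectors

for v in B.nullspace():              # every vector killed by B ...
    print(A * B * v == zeros(2, 1))  # ... is killed by AB too: True
```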

Exercise 4.1.4

For an \(n\times n\) matrix \(A\), the null space and the column space are both subspaces of (the same) \(\R^n\). Prove or disprove the following statement.

For a square matrix \(A\):

\[ A^2 = O \quad \iff \quad \Col{(A)} \subseteq \Nul{(A)}. \]
Solution to Exercise 4.1.4 (click to show)

First we show that

\[ A^2 = O \quad \Rightarrow \quad \Col{(A)} \subseteq \Nul{(A)}. \]

Let \(\vect{w}\in\Col{(A)}\).
Then there is a vector \(\vect{v}\) in \(\R^n\) for which \(\vect{w} = A\vect{v}\).
It follows that \(A\vect{w} = A^2\vect{v} = O\vect{v} = \vect{0}\). Thus \(\vect{w} \in \Nul{(A)}\).

Next we show that

\[ \Col{(A)} \subseteq \Nul{(A)} \quad \Rightarrow \quad A^2 = O. \]

If we can show that \(A^2\vect{x}= \vect{0}\) for every vector \(\vect{x}\) in \(\R^n\), we’re done.
So let \(\vect{x}\) be any vector in \(\R^n\). Then \(\vect{y} =A\vect{x}\) lies in the column space of \(A\), which is contained in the null space of \(A\).
So \(A\vect{y} = A^2\vect{x} = \vect{0}\), and we may conclude that indeed \(A^2 = O\).
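The equivalence can be illustrated with the standard \(2\times 2\) nilpotent matrix (a SymPy sketch; any matrix with \(A^2 = O\) would do):

```python
from sympy import Matrix, zeros

A = Matrix([[0, 1],
            [0, 0]])

print(A * A == zeros(2, 2))  # A^2 = O: True

# Col A is spanned by the pivot columns; each must lie in Nul A
for a in A.columnspace():
    print(A * a == zeros(2, 1))  # True
```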

4.1.4. Grasple Exercises#

Grasple Exercise 4.1.1

https://embed.grasple.com/exercises/66b4134c-20e3-4a38-8f14-a32aa472aece?id=70616

To check whether a vector is in a subspace spanned by two vectors.

Grasple Exercise 4.1.2

https://embed.grasple.com/exercises/aa71ac1a-d82d-4c6d-a6af-7af0c25422b1?id=70617

To check whether a vector is in a subspace spanned by two vectors.

Grasple Exercise 4.1.3

https://embed.grasple.com/exercises/2a08d069-ac34-4f9f-8479-85896ade75da?id=70621

To decide whether a vector \(\vect{p}\) is in Col\((A)\).

Grasple Exercise 4.1.4

https://embed.grasple.com/exercises/9470136c-b9ce-4664-937c-fad9da7963cb?id=70622

To decide whether a vector \(\vect{p}\) is in Col\((A)\).

Grasple Exercise 4.1.5

https://embed.grasple.com/exercises/8756aa45-07b2-40f1-8fbe-aae7c140ae19?id=70625

To give a vector in Nul\((A)\) and a vector not in Nul\((A)\).

Grasple Exercise 4.1.6

https://embed.grasple.com/exercises/c32e1656-5d38-4708-a55d-22ced9a9b254?id=70623

To decide whether a vector \(\vect{p}\) is in Nul\((A)\).

Grasple Exercise 4.1.7

https://embed.grasple.com/exercises/ab566408-ef8d-4b99-9f96-ceb29dcc234b?id=70624

To decide whether a vector \(\vect{p}\) is in Nul\((A)\).

Grasple Exercise 4.1.8

https://embed.grasple.com/exercises/2a3d5aaf-c0f1-4596-a3e7-3876c786544a?id=70615

Can two subspaces of \(\R^n\) be disjoint?

Grasple Exercise 4.1.9

https://embed.grasple.com/exercises/f880df03-c9b6-4c69-bc94-ea0c6d273b24?id=70627

For an \(m\times n\) matrix \(A\), in which \(\R^p\) does Nul\((A)\) lie? And Col\((A)\)?

Grasple Exercise 4.1.10

https://embed.grasple.com/exercises/8bf246d1-8aad-448f-842a-8cc20c21b99a?id=70629

To find \(p\) such that Nul\((A)\) lies in \(\R^p\).

Grasple Exercise 4.1.11

https://embed.grasple.com/exercises/3eb1c09d-b39f-4eb8-8968-804469666617?id=83365

To find a parameter such that Nul\((A)=\) Col\((A)\) for a \(2\times2\) matrix \(A\).

Grasple Exercise 4.1.12

https://embed.grasple.com/exercises/958bc91a-84e2-48e8-8cdf-b26514c41df0?id=83371

To find a parameter such that Nul\((A)=\) Col\((A)\) for a \(2\times2\) matrix \(A\).

Grasple Exercise 4.1.13

https://embed.grasple.com/exercises/3b5196d2-1219-494e-a445-9dcadd8f19a0?id=88181

To check whether certain subsets \(S_i\) of \(\mathbb{R}^3\) are subspaces.

Grasple Exercise 4.1.14

https://embed.grasple.com/exercises/66eb42d3-ed92-45aa-8576-d6c4b86c8502?id=88184

To check whether certain subsets \(S_i\) of \(\mathbb{R}^3\) are subspaces.