2.5. Linear Independence#
As we have seen (Example 2.2.4 and Example 2.2.5), the multiples of a non-zero vector form a line. We have also seen there that, if we consider the set of all vectors of the form \(c_{1}\mathbf{v}_{1}+c_{2}\mathbf{v}_{2}\) for fixed vectors \(\mathbf{v}_{1},\mathbf{v}_{2}\) and arbitrary constants \(c_{1},c_{2}\), we usually get a plane. But sometimes we don’t! For example, if \(d\mathbf{v}_{1}=\mathbf{v}_{2}\) for some constant \(d\), then all vectors of the given form can be rewritten as \((c_{1}+c_{2}d)\mathbf{v}_{1}\), so they are all contained in the line through the origin in the direction of \(\mathbf{v}_{1}\). Every vector we can make as a linear combination of \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\) can also be made with \(\mathbf{v}_{1}\) alone. The vector \(\mathbf{v}_{2}\) is superfluous. This situation can be seen in Figure 2.5.1.
We will now formalise this concept of superfluous vectors.
We will call a set \(S\) of vectors linearly dependent if there is some \(\mathbf{v}\) in \(S\) such that \(\Span{S}=\Span{S\setminus\left\lbrace\mathbf{v}\right\rbrace}\). In this case, we say that \(\mathbf{v}\) is linearly dependent on \(S\setminus\left\lbrace\mathbf{v}\right\rbrace\). If \(S\) is not linearly dependent, we say \(S\) is linearly independent.
In other words, a set \(S\) is linearly dependent if it contains at least one superfluous vector. It may very well contain infinitely many. Let us briefly investigate linear dependence for very small sets before we get to more substantial examples.
**Proposition 2.5.1.** Let \(S\) be a subset of \(\mathbb{R}^{n}\) containing

- precisely one vector, say \(\mathbf{v}\). Then \(S\) is linearly dependent precisely when \(\mathbf{v}=\mathbf{0}\).
- precisely two vectors, say \(\mathbf{u}\) and \(\mathbf{v}\). Then \(S\) is linearly independent unless one of these vectors is a multiple of the other.
Proof of Proposition 2.5.1
- Assume \(S=\left\lbrace\mathbf{v}\right\rbrace\). The span of \(S\setminus\left\lbrace\mathbf{v}\right\rbrace\) is the span of the empty set, which is precisely \(\left\lbrace\mathbf{0}\right\rbrace\). This is equal to \(\Span{S}\) if and only if \(\mathbf{v}=\mathbf{0}\).
- Assume \(S=\left\lbrace\mathbf{u},\mathbf{v}\right\rbrace\) is linearly dependent. If \(\Span{S}=\Span{\mathbf{v}}\), then \(\mathbf{u}\) is in \(\Span{\mathbf{v}}\), so it is a multiple of \(\mathbf{v}\). Similarly, if \(\Span{S}=\Span{\mathbf{u}}\), then \(\mathbf{v}\) is in \(\Span{\mathbf{u}}\), so it is a multiple of \(\mathbf{u}\). Conversely, if \(\mathbf{u}=c\mathbf{v}\) for some scalar \(c\), then any linear combination \(c_{1}\mathbf{u}+c_{2}\mathbf{v}=(c_{1}c+c_{2})\mathbf{v}\) lies in \(\Span{\mathbf{v}}\), so \(\Span{S}=\Span{\mathbf{v}}\) and \(S\) is linearly dependent; the case \(\mathbf{v}=c\mathbf{u}\) is analogous.
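For concrete vectors, the two-vector case is easy to check by machine: \(\left\lbrace\mathbf{u},\mathbf{v}\right\rbrace\) is linearly dependent exactly when the matrix with columns \(\mathbf{u}\) and \(\mathbf{v}\) has rank at most \(1\) (a criterion that Corollary 2.5.1 below makes precise). A minimal sketch, assuming NumPy and illustrative vectors:

```python
import numpy as np

u = np.array([2.0, -1.0, 4.0])
v = np.array([-6.0, 3.0, -12.0])   # v = -3*u, so {u, v} is dependent

# Stack u and v as the columns of a 3x2 matrix; the set {u, v} is
# linearly dependent precisely when this matrix has rank at most 1.
M = np.column_stack([u, v])
print(np.linalg.matrix_rank(M) <= 1)  # True
```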
As you can see from the proof of Proposition 2.5.1, our definition of linear dependence, while intuitive, is a bit hard to work with. In Proposition 2.5.3/Corollary 2.5.1, we will see a more convenient way to determine whether a given set of vectors is linearly dependent or not. But let us first consider some examples.
- Consider the vectors
\[\begin{split} \mathbf{v}_{1}= \begin{bmatrix}1\\0\end{bmatrix}\quad\mathbf{v}_{2}= \begin{bmatrix}0\\1\end{bmatrix} \quad\mathbf{v}_{3}= \begin{bmatrix}1\\1\end{bmatrix}, \end{split}\]
which are shown on the left in Figure 2.5.2. The set \(S=\left\lbrace\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3}\right\rbrace\) is linearly dependent in view of the following equalities:
(2.5.1)#\[ \mathbf{v}_{1}=-\mathbf{v}_{2}+\mathbf{v}_{3},\]
(2.5.2)#\[ \mathbf{v}_{2}=-\mathbf{v}_{1}+\mathbf{v}_{3},\]
(2.5.3)#\[ \mathbf{v}_{3}=\mathbf{v}_{1}+\mathbf{v}_{2}.\]
Indeed, if we take an arbitrary vector \(\mathbf{v}\) in \(\Span{S}\), we can write it as
\[\begin{align*} \mathbf{v}&=c_{1}\mathbf{v}_{1}+c_{2}\mathbf{v}_{2}+c_{3}\mathbf{v}_{3}\\ &=(c_{2}-c_{1})\mathbf{v}_{2}+(c_{3}+c_{1})\mathbf{v}_{3} \end{align*}\]
in view of equation (2.5.1). This means that \(\mathbf{v}\) is also in \(\Span{S\setminus\left\lbrace\mathbf{v}_{1}\right\rbrace}\) and consequently that \(\mathbf{v}_{1}\) is linearly dependent on \(\mathbf{v}_{2}\) and \(\mathbf{v}_{3}\). Similarly, equation (2.5.2) shows that \(\mathbf{v}_{2}\) is linearly dependent on \(\mathbf{v}_{1}\) and \(\mathbf{v}_{3}\), and equation (2.5.3) shows that \(\mathbf{v}_{3}\) is linearly dependent on \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\).
However, every subset of \(S\) containing precisely two vectors will be linearly independent, as \(S\) contains no two vectors that are multiples of each other.
- Consider now the vectors
\[\begin{split} \mathbf{v}_{1}= \begin{bmatrix}1\\0\end{bmatrix}\quad \mathbf{v}_{2}= \begin{bmatrix}0\\1\end{bmatrix}\quad\mathbf{v}_{4}= \begin{bmatrix}2\\0\end{bmatrix}, \end{split}\]
which are shown on the right in Figure 2.5.2. The set \(S=\left\lbrace\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{4}\right\rbrace\) is again linearly dependent since
\[ \mathbf{v}_{4}=2\mathbf{v}_{1}+0\mathbf{v}_{2},\]
but now the subset \(\left\lbrace\mathbf{v}_{1},\mathbf{v}_{4}\right\rbrace\) is a linearly dependent subset of \(S\). On the other hand, the subsets \(\left\lbrace\mathbf{v}_{1},\mathbf{v}_{2}\right\rbrace\) and \(\left\lbrace\mathbf{v}_{2},\mathbf{v}_{4}\right\rbrace\) are linearly independent.
- Put

\[\begin{split} \mathbf{w}_{1}= \begin{bmatrix}1\\0\\0\end{bmatrix},\quad\mathbf{w}_{2}= \begin{bmatrix}0\\1\\0\end{bmatrix},\quad\mathbf{w}_{3}= \begin{bmatrix}1\\2\\0\end{bmatrix},\quad \text{and}\quad\mathbf{w}_{4}= \begin{bmatrix}1\\2\\1\end{bmatrix}. \end{split}\]
The set \(\left\lbrace\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{w}_{3}\right\rbrace\) is linearly dependent (note that \(\mathbf{w}_{3}=\mathbf{w}_{1}+2\mathbf{w}_{2}\)). The set \(\left\lbrace\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{w}_{4}\right\rbrace\), however, is not. This is illustrated in Figure 2.5.3; a computational check follows this list.
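These claims can also be verified by machine: a set of columns is linearly independent exactly when the matrix they form has a pivot in every column, i.e. full column rank (this is made precise in Corollary 2.5.1 below). A minimal sketch, assuming SymPy:

```python
import sympy as sp

w1 = sp.Matrix([1, 0, 0])
w2 = sp.Matrix([0, 1, 0])
w3 = sp.Matrix([1, 2, 0])
w4 = sp.Matrix([1, 2, 1])

# Independent iff the rank equals the number of columns.
A = sp.Matrix.hstack(w1, w2, w3)
B = sp.Matrix.hstack(w1, w2, w4)
print(A.rank() == A.cols)  # False: {w1, w2, w3} is linearly dependent
print(B.rank() == B.cols)  # True:  {w1, w2, w4} is linearly independent
```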
Grasple exercise: To verify whether a set \(\{\vect{u}, \vect{v}\}\) is linearly independent.
The following elementary properties of linear dependence and linear independence will be used throughout the text, often tacitly.
**Proposition 2.5.2.** Let \(S\) be a subset of \(\mathbb{R}^{n}\).

- If \(S\) contains \(\mathbf{0}\), then it is a linearly dependent set.
- If \(S\) is linearly dependent and \(S\subseteq T\), then \(T\) is linearly dependent.
- If \(T\) is linearly independent and \(S\subseteq T\), then \(S\) is linearly independent.
We leave the verifications of these statements to the reader.
Prove Proposition 2.5.2.
But how do you determine whether a set of vectors is linearly independent or not? Like so many problems in linear algebra, it comes down to solving a system of linear equations, as Proposition 2.5.3 shows.
**Proposition 2.5.3.** A set \(\left\lbrace\mathbf{v}_{1},...,\mathbf{v}_{k}\right\rbrace\) of vectors in \(\mathbb{R}^{n}\) is linearly dependent if and only if the vector equation

(2.5.4)#\[ c_{1}\mathbf{v}_{1}+c_{2}\mathbf{v}_{2}+\cdots+c_{k}\mathbf{v}_{k}=\mathbf{0}\]

has a non-trivial solution. That is, a solution where not all \(c_i\) are equal to \(0\).
Proof of Proposition 2.5.3
If \(\left\lbrace\mathbf{v}_{1},...,\mathbf{v}_{k}\right\rbrace\) is linearly dependent, one of these vectors, say \(\mathbf{v}_{i}\), is linearly dependent on the others, i.e. it is in \(\Span{\mathbf{v}_{1},...,\mathbf{v}_{i-1},\mathbf{v}_{i+1},...,\mathbf{v}_{k}}\). Therefore, there exist some scalars \(c_{1},...,c_{i-1},c_{i+1},...,c_{k}\) such that

\[ \mathbf{v}_{i}=c_{1}\mathbf{v}_{1}+\cdots+c_{i-1}\mathbf{v}_{i-1}+c_{i+1}\mathbf{v}_{i+1}+\cdots+c_{k}\mathbf{v}_{k},\]

or equivalently

\[ c_{1}\mathbf{v}_{1}+\cdots+c_{i-1}\mathbf{v}_{i-1}-\mathbf{v}_{i}+c_{i+1}\mathbf{v}_{i+1}+\cdots+c_{k}\mathbf{v}_{k}=\mathbf{0}.\]

This means that \((c_{1},...,c_{i-1},-1,c_{i+1},...,c_{k})\) is a solution of equation (2.5.4). It is a non-trivial one, since the \(i\)-th coefficient is \(-1\), which is non-zero.
If (2.5.4) has a non-trivial solution, then there are \(c_{1},...,c_{k}\), not all \(0\), such that \(c_{1}\mathbf{v}_{1}+\cdots +c_{k}\mathbf{v}_{k}=\mathbf{0}\). Take any \(i\) such that \(c_{i}\neq0\). Then

\[ \mathbf{v}_{i}=-\frac{c_{1}}{c_{i}}\mathbf{v}_{1}-\cdots-\frac{c_{i-1}}{c_{i}}\mathbf{v}_{i-1}-\frac{c_{i+1}}{c_{i}}\mathbf{v}_{i+1}-\cdots-\frac{c_{k}}{c_{i}}\mathbf{v}_{k}.\]

This implies \(\mathbf{v}_{i}\) is in \(\Span{\mathbf{v}_{1},...,\mathbf{v}_{i-1},\mathbf{v}_{i+1},...,\mathbf{v}_{k}}\), so \(\left\lbrace\mathbf{v}_{1},...,\mathbf{v}_{k}\right\rbrace\) is linearly dependent.
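To see Proposition 2.5.3 in action on a concrete case, we can ask a computer algebra system for the non-trivial solutions of equation (2.5.4) directly. A minimal sketch, assuming SymPy, using the vectors \(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3}\) of the first example above:

```python
import sympy as sp

# Columns are v1, v2, v3 from the first example in this section.
A = sp.Matrix([[1, 0, 1],
               [0, 1, 1]])

# A basis for the solutions of c1*v1 + c2*v2 + c3*v3 = 0.
print(A.nullspace())  # [Matrix([[-1], [-1], [1]])], i.e. -v1 - v2 + v3 = 0
```

The non-trivial solution \((-1,-1,1)\) recovers equation (2.5.3): \(\mathbf{v}_{3}=\mathbf{v}_{1}+\mathbf{v}_{2}\).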
**Corollary 2.5.1.** A set \(\left\lbrace\mathbf{v}_{1},\dots,\mathbf{v}_{k}\right\rbrace\) of vectors in \(\mathbb{R}^{n}\) is linearly dependent if and only if the matrix equation \(A\mathbf{x}=\mathbf{0}\) with \(A=\begin{bmatrix}\mathbf{v}_{1}& \cdots &\mathbf{v}_{k}\end{bmatrix}\) has a non-trivial solution, i.e. if and only if \(A\) has a column without a pivot.
Again we leave the verification to the diligent reader.
Prove Corollary 2.5.1.
Consider three vectors \(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3}\) in \(\mathbb{R}^{4}\). Do these vectors form a linearly dependent set? How do we find out? Well, we use the vectors as the columns of a matrix \(A\) and compute an echelon form using standard techniques.

Suppose we find that the third column has no pivot. Then the system \(A\mathbf{x}=\mathbf{0}\) has infinitely many solutions; in particular, it has a non-trivial one. Consequently, the set \(\left\lbrace\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3}\right\rbrace\) is linearly dependent.

From the reduced echelon form, we can easily find a way to write \(\mathbf{v}_{3}\) as a linear combination of \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\). Suppose, for instance, that the reduced echelon form is

\[\begin{split} \begin{bmatrix} 1&0&3\\0&1&-1\\0&0&0\\0&0&0 \end{bmatrix}. \end{split}\]

If we put the free variable \(x_{3}\) equal to \(1\), we find \(x_{1}=-3\) and \(x_{2}=1\), which gives:

\[ -3\mathbf{v}_{1}+\mathbf{v}_{2}+\mathbf{v}_{3}=\mathbf{0},\quad\text{i.e.}\quad \mathbf{v}_{3}=3\mathbf{v}_{1}-\mathbf{v}_{2}.\]
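The same computation can be reproduced with a computer algebra system. Below is a minimal sketch, assuming SymPy; the vectors in the code are illustrative ones chosen to satisfy the relation \(\mathbf{v}_{3}=3\mathbf{v}_{1}-\mathbf{v}_{2}\) found above:

```python
import sympy as sp

# Illustrative vectors in R^4 with v3 = 3*v1 - v2.
v1 = sp.Matrix([1, 0, 2, 1])
v2 = sp.Matrix([0, 1, 1, 2])
v3 = sp.Matrix([3, -1, 5, 1])

# Use the vectors as the columns of a matrix and row reduce.
A = sp.Matrix.hstack(v1, v2, v3)
rref, pivot_cols = A.rref()

print(rref)                       # matches the reduced echelon form above
print(pivot_cols)                 # (0, 1): the third column has no pivot
print(len(pivot_cols) < A.cols)   # True: the set is linearly dependent
```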
Grasple exercise: To verify whether a set \(\{\vect{a}_1, \vect{a}_2,\vect{a}_3 \}\) is linearly independent.
There now follow a couple of statements which can be helpful in determining whether a given set of vectors is linearly dependent or not. The first one tells us that an ordered set is linearly dependent precisely when there is a vector which depends on the preceding vectors.
**Theorem 2.5.1.** An ordered set \(S=(\mathbf{v}_{1},...,\mathbf{v}_{n})\) is linearly dependent if and only if there is a \(k\) such that \(\mathbf{v}_{k}\) is a linear combination of \(\mathbf{v}_{1},...,\mathbf{v}_{k-1}\).
Proof of Theorem 2.5.1
Let us assume \(\mathbf{v}_{k}=c_{1}\mathbf{v}_{1}+\cdots+c_{k-1}\mathbf{v}_{k-1}\) for some scalars \(c_{1},...,c_{k-1}\). An arbitrary element \(\mathbf{v}\) of \(\Span{S}\) is a linear combination of \(\mathbf{v}_{1},...,\mathbf{v}_{n}\), so it is

\[ \mathbf{v}=d_{1}\mathbf{v}_{1}+\cdots+d_{n}\mathbf{v}_{n}\]

for certain scalars \(d_{1},...,d_{n}\). We can now rewrite \(\mathbf{v}\) as

\[ \mathbf{v}=(d_{1}+d_{k}c_{1})\mathbf{v}_{1}+\cdots+(d_{k-1}+d_{k}c_{k-1})\mathbf{v}_{k-1}+d_{k+1}\mathbf{v}_{k+1}+\cdots+d_{n}\mathbf{v}_{n},\]

so \(\mathbf{v}\) is in \(\Span{S\setminus\left\lbrace\mathbf{v}_{k}\right\rbrace}\). Consequently, \(\Span{S}=\Span{S\setminus\left\lbrace\mathbf{v}_{k}\right\rbrace}\), which means \(S\) is linearly dependent.
Suppose now that \(S\) is linearly dependent. Let \(k\) be maximal such that \(\Span{S}=\Span{S\setminus\left\lbrace\mathbf{v}_{k}\right\rbrace}\). Since \(\mathbf{v}_{k}\) is in \(S\), it is in \(\Span{S\setminus\left\lbrace\mathbf{v}_{k}\right\rbrace}\). So there exist scalars \(c_{1},...,c_{k-1},c_{k+1},...,c_{n}\) such that

(2.5.5)#\[ \mathbf{v}_{k}=c_{1}\mathbf{v}_{1}+\cdots+c_{k-1}\mathbf{v}_{k-1}+c_{k+1}\mathbf{v}_{k+1}+\cdots+c_{n}\mathbf{v}_{n}.\]

If we can show that \(c_{k+1}=\cdots=c_{n}=0\), we are done, because then we have written \(\mathbf{v}_{k}\) as a linear combination of \(\mathbf{v}_{1},...,\mathbf{v}_{k-1}\). We will prove this by contradiction. Assume \(c_{j}\neq0\) for some \(j\) greater than \(k\). Then equation (2.5.5) yields

\[ \mathbf{v}_{j}=\frac{1}{c_{j}}\left(\mathbf{v}_{k}-c_{1}\mathbf{v}_{1}-\cdots-c_{k-1}\mathbf{v}_{k-1}-c_{k+1}\mathbf{v}_{k+1}-\cdots-c_{j-1}\mathbf{v}_{j-1}-c_{j+1}\mathbf{v}_{j+1}-\cdots-c_{n}\mathbf{v}_{n}\right).\]

Consequently, any linear combination of \(S\) can be rewritten as a linear combination of \(\mathbf{v}_{1},...,\mathbf{v}_{j-1},\mathbf{v}_{j+1},...,\mathbf{v}_{n}\), i.e. \(\Span{S}=\Span{S\setminus\left\lbrace\mathbf{v}_{j}\right\rbrace}\). But \(j\) is larger than \(k\), and we assumed \(k\) to be maximal with this property! This is impossible, so \(c_{j}=0\) for all \(j\) greater than \(k\).
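Theorem 2.5.1 suggests a simple procedure for locating a superfluous vector in an ordered set: put the vectors in a matrix as columns and find the first column whose addition does not increase the rank, i.e. does not enlarge the span of the columns so far. A minimal sketch, assuming SymPy, with an illustrative matrix:

```python
import sympy as sp

def first_dependent_column(A: sp.Matrix):
    """Index of the first column of A that is a linear combination of
    the preceding columns; None if the columns are independent."""
    for j in range(A.cols):
        prev_rank = A[:, :j].rank() if j > 0 else 0
        # Column j depends on columns 0..j-1 exactly when appending it
        # does not increase the rank of the matrix built so far.
        if A[:, :j + 1].rank() == prev_rank:
            return j
    return None

# Illustrative matrix: the third column equals 3*(first) - (second).
A = sp.Matrix([[1, 0, 3], [0, 1, -1], [2, 1, 5], [1, 2, 1]])
print(first_dependent_column(A))  # 2, i.e. the third column
```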
Theorem 2.5.1 together with Corollary 2.5.1 implies that the columns of a matrix are linearly dependent if and only if there is some column which is linearly dependent on the preceding ones. This observation will allow us to simplify certain arguments. The following result essentially claims that if \(k\) vectors suffice to span a set \(S\) and you have more than \(k\) vectors in \(S\), then at least one of them must be superfluous.
**Theorem 2.5.2.** Suppose \(\mathbf{u}_{1},...,\mathbf{u}_{k}\) and \(\mathbf{v}_{1},...,\mathbf{v}_{l}\) are all vectors in \(\mathbb{R}^{n}\). If \(k<l\) and \(\Span{\mathbf{u}_{1},...,\mathbf{u}_{k}}\) contains \(\Span{\mathbf{v}_{1},...,\mathbf{v}_{l}}\), then the set \(\left\lbrace\mathbf{v}_{1},...,\mathbf{v}_{l}\right\rbrace\) is linearly dependent.
Proof of Theorem 2.5.2
Consider the matrices

\[ A=\begin{bmatrix}\mathbf{u}_{1}&\cdots&\mathbf{u}_{k}\end{bmatrix},\quad B=\begin{bmatrix}\mathbf{v}_{1}&\cdots&\mathbf{v}_{l}\end{bmatrix},\quad\text{and}\quad C=\begin{bmatrix}A&B\end{bmatrix}=\begin{bmatrix}\mathbf{u}_{1}&\cdots&\mathbf{u}_{k}&\mathbf{v}_{1}&\cdots&\mathbf{v}_{l}\end{bmatrix}.\]

Bringing \(C\) into echelon form gives

\[ D=\begin{bmatrix}E&F\end{bmatrix},\]

where \(D\) is the echelon form of \(C\), \(E\) is an echelon form of \(A\), and \(F\) is row equivalent to \(B\).
We claim that all of the pivot positions of \(D\) are in \(E\). Indeed, suppose that the \(i\)-th column of \(F\), let’s call it \(\mathbf{f}_{i}\), contains a pivot. Then \(E\mathbf{x}=\mathbf{f}_{i}\) is inconsistent, and therefore \(A\mathbf{x}=\mathbf{v}_{i}\) is also inconsistent, since elementary row operations do not change the solution set of a linear system. But this implies that \(\mathbf{v}_{i}\) is not a linear combination of \(\mathbf{u}_{1},...,\mathbf{u}_{k}\), hence it is not in \(\Span{\mathbf{u}_{1},...,\mathbf{u}_{k}}\). This contradicts the assumption that \(\Span{\mathbf{u}_{1},...,\mathbf{u}_{k}}\) contains \(\Span{\mathbf{v}_{1},...,\mathbf{v}_{l}}\).
Since \(F\) contains no pivot positions of \(D\), it has at least as many zero rows as \(E\). This implies that an echelon form \(G\) of \(B\), which is necessarily also an echelon form of \(F\), must also have at least as many zero rows as \(E\). Therefore, \(G\) has no more pivots than \(E\). Since \(E\) has at most \(k\) pivots and \(k<l\), \(G\) must have a column without pivot. So \(B\mathbf{x}=\mathbf{0}\) has a non-trivial solution and by Corollary 2.5.1 the set \(\left\lbrace\mathbf{v}_{1},...,\mathbf{v}_{l}\right\rbrace\) must be linearly dependent.
**Corollary 2.5.2.** Let \(S\) be a subset of \(\mathbb{R}^{n}\). If there are more than \(n\) vectors in \(S\), then \(S\) is linearly dependent.
Proof of Corollary 2.5.2
Take distinct vectors \(\mathbf{v}_{1},...,\mathbf{v}_{n+1}\) in \(S\). The span \(\Span{\mathbf{v}_{1},...,\mathbf{v}_{n+1}}\) is contained in \(\Span{\mathbf{e}_{1},...,\mathbf{e}_{n}}=\mathbb{R}^{n}\) and \(n+1>n\), so \(\left\lbrace\mathbf{v}_{1},...,\mathbf{v}_{n+1}\right\rbrace\) is linearly dependent by Theorem 2.5.2. Since this set is contained in \(S\), \(S\) must be linearly dependent, too, by Proposition 2.5.2.
To illustrate the strength of Corollary 2.5.2, consider a set of six vectors in \(\mathbb{R}^{5}\). If we had to bring the matrix with these six vectors as columns to echelon form, we would have our work cut out for us! Fortunately, we can simply note that there are six vectors with five entries each. Since \(6>5\), Corollary 2.5.2 guarantees that the set is linearly dependent, whatever the entries of the vectors happen to be.
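This conclusion can also be confirmed numerically for any particular choice of six vectors. A quick sanity check, assuming NumPy, with randomly generated stand-in vectors (the actual entries are irrelevant to the argument):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
V = rng.integers(-9, 10, size=(5, 6))  # six vectors in R^5 as columns

# The rank is at most 5 (the number of rows), so with six columns at
# least one column lacks a pivot: the set must be linearly dependent.
rank = np.linalg.matrix_rank(V)
print(rank < V.shape[1])  # True
```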
**Application.** Often, on news sites or in newspapers, you might see the standings of a football tournament displayed in a large table, as in Table 2.5.1. Quite a lot of the information in such a table is redundant because some of the columns are linearly dependent.
|    | Team | Pld | W | D | L | GF | GA | GD | Pts |
|----|------|-----|---|---|---|----|----|----|-----|
| 1  | AFC Ajax | 34 | 22 | 5 | 7 | 64 | 40 | 24 | 49 |
| 2  | Fortuna ’54 | 34 | 20 | 5 | 9 | 76 | 48 | 28 | 45 |
| 3  | SC Enschede | 34 | 15 | 11 | 8 | 81 | 47 | 34 | 41 |
| 4  | MVV Maastricht | 34 | 15 | 10 | 9 | 53 | 42 | 11 | 40 |
| 5  | PSV Eindhoven | 34 | 18 | 3 | 13 | 93 | 71 | 22 | 39 |
| 6  | Feijenoord | 34 | 15 | 9 | 10 | 79 | 58 | 21 | 39 |
| 7  | VVV ’03 | 34 | 16 | 6 | 12 | 50 | 53 | -3 | 38 |
| 8  | Sparta Rotterdam | 34 | 12 | 12 | 10 | 66 | 59 | 7 | 36 |
| 9  | NAC | 34 | 14 | 8 | 12 | 59 | 61 | -2 | 36 |
| 10 | DOS | 34 | 17 | 1 | 16 | 79 | 75 | 4 | 35 |
| 11 | Rapid JC | 34 | 13 | 7 | 14 | 64 | 63 | 1 | 33 |
| 12 | NOAD | 34 | 12 | 7 | 15 | 54 | 64 | -10 | 31 |
| 13 | BVC Amsterdam | 34 | 11 | 8 | 15 | 49 | 67 | -18 | 30 |
| 14 | GVAV | 34 | 9 | 10 | 15 | 52 | 66 | -14 | 28 |
| 15 | BVV | 34 | 11 | 4 | 19 | 70 | 76 | -6 | 26 |
| 16 | Elinkwijk | 34 | 10 | 4 | 20 | 52 | 87 | -35 | 24 |
| 17 | Willem II | 34 | 8 | 6 | 20 | 59 | 79 | -20 | 22 |
| 18 | FC Eindhoven | 34 | 8 | 4 | 22 | 39 | 83 | -44 | 20 |
Each of the eight columns on the right can be interpreted as a vector with one entry per team, so a vector in \(\mathbb{R}^{18}\). Using the column headers from Table 2.5.1 as the names of these vectors, we find:

\[ \mathbf{Pts}=2\mathbf{W}+1\mathbf{D}+0\mathbf{L},\]

since a win yielded 2 points, a draw 1 point, and a loss 0 points. This means that \(\left\lbrace\mathbf{W},\mathbf{D},\mathbf{L},\mathbf{Pts}\right\rbrace\) is a linearly dependent subset of \(\mathbb{R}^{18}\). In fact, the smaller set \(\left\lbrace\mathbf{W},\mathbf{D},\mathbf{Pts}\right\rbrace\) is already linearly dependent, since \(\mathbf{Pts}=2\mathbf{W}+\mathbf{D}\). Similarly, the column \(\mathbf{GD}\), which gives the goal difference for each team, can be obtained by subtracting the column \(\mathbf{GA}\), which gives the goals conceded, from the column \(\mathbf{GF}\), which gives the goals scored: \(\mathbf{GD}=\mathbf{GF}-\mathbf{GA}\).
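Both dependencies are easy to confirm with a few lines of code. A minimal check, assuming NumPy, using the rows of the top five teams from Table 2.5.1:

```python
import numpy as np

# W, D, GF, GA, GD and Pts for the top five teams in Table 2.5.1.
W   = np.array([22, 20, 15, 15, 18])
D   = np.array([ 5,  5, 11, 10,  3])
GF  = np.array([64, 76, 81, 53, 93])
GA  = np.array([40, 48, 47, 42, 71])
GD  = np.array([24, 28, 34, 11, 22])
Pts = np.array([49, 45, 41, 40, 39])

print(np.array_equal(Pts, 2 * W + D))  # True: Pts = 2W + D
print(np.array_equal(GD, GF - GA))     # True: GD = GF - GA
```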
2.5.1. Grasple Exercises#
- To verify whether a set \(\{\vect{a}_1, \vect{a}_2,\vect{a}_3 \}\) (in \(\mathbb{R}^3\)) is linearly independent.
- Finding a parameter \(h\) such that a set \(\{\vect{a}_1, \vect{a}_2,\vect{b}\}\) is linearly independent.
- Like the previous question.
- Like the previous question.
- To verify whether three vectors in \(\mathbb{R}^4\) are linearly independent.
- To verify whether the columns of a matrix \(A\) are linearly independent.
- Like the previous question.
- Verifying linear (in)dependence of a set of vectors.
- Verifying linear (in)dependence of a set of vectors once more.
- For which types of objects is linear independence defined?
- Getting straight the definition of linear independence.
- Can … be linearly independent?
- True/False question about linear (in)dependence.
- True/False question about linear (in)dependence.
- True/False question about linear (in)dependence.
- About the connection between pivots and linearly (in)dependent columns.
- About the connection between pivots and linearly (in)dependent columns.
- One more about pivots and linearly (in)dependent columns.
- True/False question about linear (in)dependence.
- True/False question about linear (in)dependence.
- True/False question about linear (in)dependence.
- What cannot be concluded from knowing that \(A\mathbf{x} = \mathbf{b}\) is (in)consistent?
- What about linear combinations of (three) linearly independent vectors?
- What about linear combinations of (four) linearly independent vectors?
- What about subsets and unions of sets of linearly independent vectors?