Learn Linear Algebra

Theorem - Parallelogram Rule for Addition

If two vectors \vec{u}, \vec{v} \in \mathbb{R}^2 are drawn from a common origin, the parallelogram they form has a diagonal that represents the sum of the two vectors.
Proof: Let \vec{u} and \vec{v} be vectors in \mathbb{R}^2, written \vec{u} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} and \vec{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}. Recall that two vectors in the same dimensional space are added entrywise:

\vec{u} + \vec{v} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} u_1 + v_1 \\ u_2 + v_2 \end{bmatrix}.

In the plane, \vec{u} and \vec{v} can be represented as the points P(u_1, u_2) and Q(v_1, v_2), both drawn from the origin O(0, 0). Consider the parallelogram with vertices O(0, 0), P(u_1, u_2), Q(v_1, v_2), and R(x, y). We aim to show that R represents the sum of the two vectors, i.e., R = \vec{u} + \vec{v}.

To prove this, we use the fact that the diagonals of a parallelogram bisect each other. Let M_1 denote the midpoint of the diagonal PQ:

M_1 = \left( \frac{u_1 + v_1}{2}, \frac{u_2 + v_2}{2} \right).

Similarly, let M_2 denote the midpoint of the diagonal OR:

M_2 = \left( \frac{0 + x}{2}, \frac{0 + y}{2} \right) = \left( \frac{x}{2}, \frac{y}{2} \right).

Since the diagonals bisect each other, we must have M_1 = M_2. Equating the components of M_1 and M_2 gives

\frac{u_1 + v_1}{2} = \frac{x}{2} \quad \text{and} \quad \frac{u_2 + v_2}{2} = \frac{y}{2},

so x = u_1 + v_1 and y = u_2 + v_2. Thus the coordinates of R are R(x, y) = (u_1 + v_1, u_2 + v_2), which confirms that R = \vec{u} + \vec{v}.
This proves that the fourth vertex R of the parallelogram corresponds to the sum of the two vectors \vec{u} and \vec{v}.
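The bisecting-midpoints argument above can be spot-checked numerically. This is a minimal sketch with arbitrary example values for \vec{u} and \vec{v}: with R = \vec{u} + \vec{v}, the midpoints of the diagonals PQ and OR coincide.

```python
# Hypothetical example vectors; any u, v in R^2 behave the same way.
u = (3.0, 1.0)
v = (1.0, 4.0)

# Candidate fourth vertex of the parallelogram: R = u + v.
R = (u[0] + v[0], u[1] + v[1])

# Midpoint M1 of diagonal PQ, where P = u and Q = v.
m1 = ((u[0] + v[0]) / 2, (u[1] + v[1]) / 2)

# Midpoint M2 of diagonal OR, where O is the origin.
m2 = (R[0] / 2, R[1] / 2)

print(m1 == m2)  # True: the diagonals bisect each other
```

A numeric check is not a proof, of course; it only illustrates the identity the proof establishes for all u_1, u_2, v_1, v_2.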

Theorem - Algebraic Properties of Vectors in \mathbb{R}^n

Let \vec{u}, \vec{v}, \vec{w} \in \mathbb{R}^n and let a, b \in \mathbb{R} be scalars. The following vector properties are extensions of the axiomatic basis for the real number system.

1. Commutative Property of Addition for Vectors
\vec{u} + \vec{v} = \vec{v} + \vec{u}

2. Associative Property of Addition for Vectors
(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})

3. Additive Identity of Vectors
\vec{u} + \vec{0} = \vec{u}

4. Additive Inverse of Vectors
\vec{u} + (-\vec{u}) = \vec{0}

5. Scalar Distributive Property onto Vectors
a(\vec{u} + \vec{v}) = a\vec{u} + a\vec{v}

6. Scalar Associative Property of Multiplication with a Vector
(ab)\vec{u} = a(b\vec{u})

7. Multiplicative Identity of Vectors
1\vec{u} = \vec{u}
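Because each property reduces to a real-number axiom applied entrywise, all seven can be spot-checked numerically before reading the proofs. A small sketch, using arbitrary sample vectors and scalars in \mathbb{R}^3:

```python
# Entrywise vector addition and scalar multiplication on tuples.
def add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

def scale(a, x):
    return tuple(a * xi for xi in x)

# Arbitrary example data; the properties hold for any choice.
u, v, w = (1, 2, 3), (4, -1, 0), (2, 2, -5)
a, b = 3, -2
zero = (0, 0, 0)

assert add(u, v) == add(v, u)                                # 1. commutativity
assert add(add(u, v), w) == add(u, add(v, w))                # 2. associativity
assert add(u, zero) == u                                     # 3. additive identity
assert add(u, scale(-1, u)) == zero                          # 4. additive inverse
assert scale(a, add(u, v)) == add(scale(a, u), scale(a, v))  # 5. distributivity
assert scale(a * b, u) == scale(a, scale(b, u))              # 6. scalar associativity
assert scale(1, u) == u                                      # 7. multiplicative identity
```

Each assertion mirrors one numbered property; the proofs below show why they hold for every choice of vectors and scalars, not just these samples.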

Proof - Commutative Property of Addition for Vectors:

Let \vec{u} = (u_{1}, u_{2}, \dots, u_{n}) and let \vec{v} = (v_{1}, v_{2}, \dots, v_{n}).

\vec{u} + \vec{v} = (u_{1} + v_{1}, u_{2} + v_{2}, \dots, u_{n} + v_{n}) (Definition of vector addition)

= (v_{1} + u_{1}, v_{2} + u_{2}, \dots, v_{n} + u_{n}) (Commutativity of addition in \mathbb{R})

= \vec{v} + \vec{u} (Definition of vector addition)

Proof - Associative Property of Addition for Vectors:

Let \vec{u} = (u_1, u_2, \dots, u_n), \vec{v} = (v_1, v_2, \dots, v_n), and \vec{w} = (w_1, w_2, \dots, w_n).

(\vec{u} + \vec{v}) + \vec{w} = ((u_1 + v_1) + w_1, (u_2 + v_2) + w_2, \dots, (u_n + v_n) + w_n) (Definition of vector addition)

= (u_1 + (v_1 + w_1), u_2 + (v_2 + w_2), \dots, u_n + (v_n + w_n)) (Associativity of addition in \mathbb{R})

= \vec{u} + (\vec{v} + \vec{w}) (Definition of vector addition)

Proof - Additive Identity of Vectors:

Let \vec{u} = (u_1, u_2, \dots, u_n) and let \vec{0} = (0, 0, \dots, 0) denote the zero vector of the same size as \vec{u}.

\vec{u} + \vec{0} = (u_1 + 0, u_2 + 0, \dots, u_n + 0) (Definition of vector addition)

= (u_1, u_2, \dots, u_n) (Additive identity in \mathbb{R})

= \vec{u}

Proof - Additive Inverse of Vectors:

Let \vec{u} = (u_1, u_2, \dots, u_n) and let -\vec{u} = (-u_1, -u_2, \dots, -u_n).

\vec{u} + (-\vec{u}) = (u_1 + (-u_1), u_2 + (-u_2), \dots, u_n + (-u_n)) (Definition of vector addition)

= (0, 0, \dots, 0) = \vec{0} (Additive inverse in \mathbb{R})

Proof - Scalar Distributive Property onto Vectors:

Let \vec{u} = (u_1, u_2, \dots, u_n), \vec{v} = (v_1, v_2, \dots, v_n), and let a \in \mathbb{R} be a scalar.

a(\vec{u} + \vec{v}) = a(u_{1} + v_{1}, u_{2} + v_{2}, \dots, u_{n} + v_{n}) (Definition of vector addition)

= (a(u_1 + v_1), a(u_2 + v_2), \dots, a(u_n + v_n)) (Definition of scalar multiplication)

= (au_1 + av_1, au_2 + av_2, \dots, au_n + av_n) (Distributivity in \mathbb{R})

= (au_1, au_2, \dots, au_n) + (av_1, av_2, \dots, av_n) = a\vec{u} + a\vec{v}

Proof - Scalar Associative Property of Multiplication with a Vector:

Let a, b \in \mathbb{R} and let \vec{u} = (u_1, u_2, \dots, u_n).

Then we have that

(ab)\vec{u} = (ab)(u_1, u_2, \dots, u_n)

= ((ab)u_1, (ab)u_2, \dots, (ab)u_n) (Definition of scalar multiplication)

= (a(bu_1), a(bu_2), \dots, a(bu_n)) (Associativity of multiplication in \mathbb{R})

= a(bu_1, bu_2, \dots, bu_n)

= a(b\vec{u})

Proof - Multiplicative Identity of Vectors:

Let \vec{u} = (u_1, u_2, \dots, u_n).

Multiplying \vec{u} by the scalar 1 gives 1\vec{u}

= 1(u_1, u_2, \dots, u_n)

= (1u_1, 1u_2, \dots, 1u_n) (Definition of scalar multiplication)

= (u_1, u_2, \dots, u_n)

= \vec{u}

Theorem

A vector equation c_1\vec{v}_{1} + c_2\vec{v}_{2} + \dots + c_p\vec{v}_{p} = \vec{b} has the same solution set as the linear system whose augmented matrix is [\vec{v}_{1} \ \vec{v}_{2} \ \cdots \ \vec{v}_{p} \ \vec{b}].
Proof: Suppose a solution c_1, c_2, \dots, c_p \in \mathbb{R} exists for \vec{b} = c_1\vec{v}_{1} + c_2\vec{v}_{2} + \dots + c_p\vec{v}_{p}, where \vec{v}_{1}, \vec{v}_{2}, \dots, \vec{v}_{p} \in \mathbb{R}^{n}. Recall that we can write this vector equation as

c_1 \left( \begin{array}{c} v_{11} \\ v_{12} \\ \vdots \\ v_{1n} \end{array} \right) + c_2\left( \begin{array}{c} v_{21} \\ v_{22} \\ \vdots \\ v_{2n} \end{array} \right) + \cdots + c_p\left( \begin{array}{c} v_{p1} \\ v_{p2} \\ \vdots \\ v_{pn} \end{array} \right) = \left( \begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_n \end{array} \right)

We can now express this as a system of linear equations:

\left\{ \begin{array}{l} c_{1}v_{11} + c_{2}v_{21} + \dots + c_{p}v_{p1} = b_1 \\\\ c_{1}v_{12} + c_{2}v_{22} + \dots + c_{p}v_{p2} = b_2 \\\\ \vdots \\\\ c_{1}v_{1n} + c_{2}v_{2n} + \dots + c_{p}v_{pn} = b_n \end{array} \right.

Now that we have a system of equations, we can form its augmented matrix:

\left[ \begin{array}{cccc|c} v_{11} & v_{21} & \cdots & v_{p1} & b_1 \\\\ v_{12} & v_{22} & \cdots & v_{p2} & b_2 \\\\ \vdots & \vdots & \ddots & \vdots & \vdots \\\\ v_{1n} & v_{2n} & \cdots & v_{pn} & b_n \end{array} \right]

This in turn can be written in the following form: \left[ \begin{array}{cccc|c} \vec{v}_{1} & \vec{v}_{2} & \cdots & \vec{v}_{p} & \vec{b} \end{array} \right]

Since the solution c_1, c_2, \dots, c_p holds for \vec{b} = c_1\vec{v}_{1} + c_2\vec{v}_{2} + \dots + c_p\vec{v}_{p}, it must also solve the linear system with augmented matrix \left[ \begin{array}{cccc|c} \vec{v}_{1} & \vec{v}_{2} & \cdots & \vec{v}_{p} & \vec{b} \end{array} \right], since we have shown that the two are equivalent.
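The equivalence can be illustrated numerically: weights obtained by solving the linear system read off from the augmented matrix also satisfy the original vector equation. A minimal sketch with made-up 2×2 data, using Cramer's rule to solve the system:

```python
# Hypothetical vectors v1, v2 in R^2 and a target b; example values only.
v1, v2 = (1.0, 2.0), (3.0, 1.0)
b = (9.0, 8.0)

# Solve the 2x2 linear system c1*v1 + c2*v2 = b by Cramer's rule.
det = v1[0] * v2[1] - v2[0] * v1[1]
c1 = (b[0] * v2[1] - v2[0] * b[1]) / det
c2 = (v1[0] * b[1] - b[0] * v1[1]) / det

# The same weights reproduce b in the vector equation c1*v1 + c2*v2 = b.
combo = (c1 * v1[0] + c2 * v2[0], c1 * v1[1] + c2 * v2[1])
print(combo == b)  # True: solving the system solves the vector equation
```

The two formulations share a solution set because each row of the augmented matrix is one component equation of the vector equation.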

Theorem

\vec{b} can be generated by a linear combination of the vectors \{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_p\} if and only if there is a solution to the vector equation x_1\vec{v}_1 + x_2\vec{v}_2 + \cdots + x_p\vec{v}_p = \vec{b}

Proof: \vec{b} can be generated by a linear combination of the vectors \{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_p\}.

\Leftrightarrow There exist some weights c_1, c_2, \dots, c_p such that c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_p\vec{v}_p = \vec{b}.

\Leftrightarrow There is a solution to the vector equation x_1\vec{v}_1 + x_2\vec{v}_2 + \cdots + x_p\vec{v}_p = \vec{b}, namely x_1 = c_1, x_2 = c_2, \dots, x_p = c_p.
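A small numeric illustration of this theorem, with assumed example vectors in \mathbb{R}^3: \vec{b} lies in the span of \{\vec{v}_1, \vec{v}_2\} exactly when the componentwise system is consistent.

```python
# Hypothetical vectors in R^3; chosen so the first two component
# equations determine the weights directly (v1, v2 are example data).
v1, v2 = (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)

def in_span(b):
    # For these particular v1, v2 the first two rows give x1 = b1, x2 = b2.
    x1, x2 = b[0], b[1]
    # b is a linear combination iff the remaining equation also holds.
    return x1 * v1[2] + x2 * v2[2] == b[2]

print(in_span((1.0, 1.0, 2.0)))  # True:  b = 1*v1 + 1*v2
print(in_span((1.0, 1.0, 3.0)))  # False: the third equation is inconsistent
```

The second call shows the "only if" direction: when no weights satisfy every component equation, \vec{b} is not in the span.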