Learn Linear Algebra

Theorem

Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be a linear transformation. Then $T$ is one-to-one if and only if the equation $T(\vec{x}) = \vec{0}$ has only the trivial solution.

Proof:

Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be a linear transformation.

($\Rightarrow$) Suppose that $T$ is one-to-one, so no two distinct vectors in $\mathbb{R}^n$ map to the same vector under $T$. Since $T$ is linear, $T(\vec{0}) = \vec{0}$. Now suppose $T(\vec{x}) = \vec{0}$ for some $\vec{x} \neq \vec{0}$. Then $T(\vec{x}) = T(\vec{0}) = \vec{0}$ with $\vec{x} \neq \vec{0}$, which contradicts the one-to-one property. Thus $T(\vec{x}) = \vec{0}$ implies $\vec{x} = \vec{0}$; that is, the equation has only the trivial solution.

($\Leftarrow$) Suppose that $T(\vec{x}) = \vec{0}$ has only the trivial solution, and let $T(\vec{x}_1) = T(\vec{x}_2)$. By linearity, $T(\vec{x}_1 - \vec{x}_2) = T(\vec{x}_1) - T(\vec{x}_2) = \vec{0}$. By assumption, $\vec{x}_1 - \vec{x}_2 = \vec{0}$, so $\vec{x}_1 = \vec{x}_2$. Hence $T$ is one-to-one.

Therefore, $T$ is one-to-one if and only if the equation $T(\vec{x}) = \vec{0}$ has only the trivial solution.
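As a quick numerical illustration of this criterion (an addition, not part of the original argument), the sketch below uses NumPy to test whether $A\vec{x} = \vec{0}$ has only the trivial solution for a made-up $3 \times 3$ standard matrix $A$; the check $\operatorname{rank}(A) = n$ relies on the rank–nullity theorem.

```python
import numpy as np

# Hypothetical 3x3 standard matrix of a map T : R^3 -> R^3 (made-up example).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

n = A.shape[1]

# By rank-nullity, the solution set of A x = 0 has dimension n - rank(A);
# it contains only the trivial solution exactly when that dimension is 0.
null_dim = n - np.linalg.matrix_rank(A)
print("only the trivial solution:", null_dim == 0)  # True for this A
print("T is one-to-one:", null_dim == 0)            # equivalent, by the theorem
```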

Theorem

Let $T: \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation and let $A$ be the standard matrix for $T$. Then:

a. $T$ maps $\mathbb{R}^n$ onto $\mathbb{R}^m$ if and only if the columns of $A$ span $\mathbb{R}^m$;

b. $T$ is one-to-one if and only if the columns of $A$ are linearly independent.

Proof (a):

Suppose $T: \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation and let $A$ be the standard matrix for $T$. The following statements are equivalent:

$T$ maps $\mathbb{R}^n$ onto $\mathbb{R}^m$.

$\Leftrightarrow$ For every $\vec{b} \in \mathbb{R}^m$, the equation $A\vec{x} = \vec{b}$ has a solution.

$\Leftrightarrow$ The augmented matrix $[\,A \mid \vec{b}\,]$ is consistent for every $\vec{b} \in \mathbb{R}^m$.

$\Leftrightarrow$ $A$ has a pivot in every row.

$\Leftrightarrow$ The columns of $A$ span $\mathbb{R}^m$.
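As an aside added here, the pivot condition in this chain can be checked directly by row reduction. The sketch below uses SymPy's `rref` on a made-up $2 \times 3$ standard matrix; `rref` returns the reduced echelon form together with the indices of the pivot columns.

```python
import sympy as sp

# Hypothetical 2x3 standard matrix of a map T : R^3 -> R^2 (made-up example).
A = sp.Matrix([[1, 0, 2],
               [0, 1, 3]])

m = A.rows

# rref() returns the reduced row echelon form and the pivot column indices.
rref, pivot_cols = A.rref()

# A has a pivot in every row exactly when the number of pivots equals m,
# which by the chain above means T maps R^3 onto R^2.
print("pivot in every row:", len(pivot_cols) == m)  # True
print("T is onto:", len(pivot_cols) == m)
```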

Proof (b):

Suppose $T: \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation and let $A$ be the standard matrix for $T$. The following statements are equivalent:

$T$ is one-to-one.

$\Leftrightarrow$ For every $\vec{b} \in \mathbb{R}^m$, the equation $A\vec{x} = \vec{b}$ has at most one solution.

$\Leftrightarrow$ The equation $A\vec{x} = \vec{0}$ has only the trivial solution.

$\Leftrightarrow$ The columns of $A$ are linearly independent.
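Part (b) can be illustrated numerically as well. The sketch below (another added example, using the same made-up $2 \times 3$ matrix as above) checks linear independence of the columns via the rank and exhibits an explicit dependence relation, i.e. a nontrivial solution of $A\vec{x} = \vec{0}$.

```python
import numpy as np

# Same hypothetical 2x3 matrix: T : R^3 -> R^2.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])

n = A.shape[1]

# Columns of A are linearly independent exactly when rank(A) = n.
print("T is one-to-one:", np.linalg.matrix_rank(A) == n)  # False

# Three columns in R^2 must be dependent; here 2*col1 + 3*col2 - col3 = 0,
# so c below is a nontrivial solution of A x = 0.
c = np.array([2.0, 3.0, -1.0])
print(A @ c)  # [0. 0.]
```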

Theorem

Let $T: \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation. Then there exists a unique matrix $A$ such that $T(\vec{x}) = A\vec{x}$ for all $\vec{x} \in \mathbb{R}^n$. In fact, $A$ is the $m \times n$ matrix whose $j$-th column is the vector $T(\vec{e}_j)$, where $\vec{e}_j$ is the $j$-th column of the identity matrix in $\mathbb{R}^n$:

$$A = [\,T(\vec{e}_1) \ \cdots \ T(\vec{e}_n)\,].$$

Proof:

Write $\vec{x} = I_n \vec{x} = [\,\vec{e}_1 \ \cdots \ \vec{e}_n\,]\vec{x} = x_1\vec{e}_1 + \cdots + x_n\vec{e}_n$, where $\{\vec{e}_1, \dots, \vec{e}_n\}$ are the standard basis vectors of $\mathbb{R}^n$.

Since $T$ is linear:

$$T(\vec{x}) = T(x_1\vec{e}_1 + \cdots + x_n\vec{e}_n) = x_1T(\vec{e}_1) + \cdots + x_nT(\vec{e}_n).$$

This can be expressed as:

$$T(\vec{x}) = [\,T(\vec{e}_1) \ \cdots \ T(\vec{e}_n)\,] \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}.$$

Now let $A = [\,T(\vec{e}_1) \ \cdots \ T(\vec{e}_n)\,]$. Then:

$$T(\vec{x}) = A\vec{x}.$$

For uniqueness, suppose $B$ is any matrix with $T(\vec{x}) = B\vec{x}$ for all $\vec{x} \in \mathbb{R}^n$. Then the $j$-th column of $B$ is $B\vec{e}_j = T(\vec{e}_j)$ for $j = 1, \dots, n$, which is exactly the $j$-th column of $A$. Therefore $B = A$, so $A$ is unique.

Thus, $T(\vec{x}) = A\vec{x}$ for all $\vec{x} \in \mathbb{R}^n$, where $A = [\,T(\vec{e}_1) \ \cdots \ T(\vec{e}_n)\,]$.
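The construction in this proof translates directly into a computation: apply $T$ to each standard basis vector and stack the results as columns. The sketch below does this for a made-up map $T: \mathbb{R}^2 \to \mathbb{R}^3$ (the particular formula for $T$ is only an illustration) and then verifies $T(\vec{x}) = A\vec{x}$ on a sample vector.

```python
import numpy as np

# A made-up linear map T : R^2 -> R^3, given by an explicit formula.
def T(x):
    x1, x2 = x
    return np.array([x1 + 2.0 * x2,
                     3.0 * x2,
                     x1 - x2])

n = 2
E = np.eye(n)

# The j-th column of the standard matrix is T(e_j).
A = np.column_stack([T(E[:, j]) for j in range(n)])
print(A)
# [[ 1.  2.]
#  [ 0.  3.]
#  [ 1. -1.]]

# Check T(x) = A x on a sample vector.
x = np.array([4.0, -1.0])
print(np.allclose(T(x), A @ x))  # True
```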