Learn Linear Algebra

Theorem

Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. If $ad - bc \neq 0$, then $A$ is invertible and $$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$ If $ad - bc = 0$, then $A$ is not invertible.

Proof:

Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. To determine whether $A$ is invertible, augment $A$ with the identity matrix $I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ and row reduce: $$\left[ \begin{array}{cc|cc} a & b & 1 & 0 \\ c & d & 0 & 1 \end{array} \right].$$

First, divide row 1 by $a$ (assuming $a \neq 0$; if $a = 0$, then $ad - bc \neq 0$ forces $c \neq 0$, so interchange the two rows first and argue similarly): $$\left[ \begin{array}{cc|cc} 1 & \frac{b}{a} & \frac{1}{a} & 0 \\ c & d & 0 & 1 \end{array} \right].$$ Next, subtract $c \times \text{row 1}$ from row 2: $$\left[ \begin{array}{cc|cc} 1 & \frac{b}{a} & \frac{1}{a} & 0 \\ 0 & d - \frac{bc}{a} & -\frac{c}{a} & 1 \end{array} \right].$$

Now, divide row 2 by $d - \frac{bc}{a} = \frac{ad - bc}{a}$, which is nonzero precisely when $ad - bc \neq 0$: $$\left[ \begin{array}{cc|cc} 1 & \frac{b}{a} & \frac{1}{a} & 0 \\ 0 & 1 & \frac{-c}{ad - bc} & \frac{a}{ad - bc} \end{array} \right].$$ Finally, subtract $\frac{b}{a} \times \text{row 2}$ from row 1: $$\left[ \begin{array}{cc|cc} 1 & 0 & \frac{d}{ad - bc} & \frac{-b}{ad - bc} \\ 0 & 1 & \frac{-c}{ad - bc} & \frac{a}{ad - bc} \end{array} \right].$$

The left side is now the identity matrix. Since $A$ can be row reduced to the identity matrix, $A$ is invertible, and the right half of the augmented matrix is its inverse:

$$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$ Conversely, if $ad - bc = 0$, then $d - \frac{bc}{a} = \frac{ad - bc}{a} = 0$, so the row reduction above produces a zero row on the left side and $A$ cannot be reduced to the identity; hence $A$ is not invertible. Thus, $A$ is invertible if and only if $ad - bc \neq 0$.
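The formula above translates directly into a short computation. Here is a sketch in plain Python (no libraries; the helper name `inverse_2x2` is just for illustration):

```python
def inverse_2x2(a, b, c, d):
    """Invert A = [[a, b], [c, d]] using the adjugate formula from the
    theorem: A^{-1} = (1 / (ad - bc)) * [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        # ad - bc = 0 means A is not invertible.
        raise ValueError("ad - bc = 0: matrix is not invertible")
    return [[d / det, -b / det],
            [-c / det, a / det]]

# Example: A = [[2, 1], [5, 3]] has det = 2*3 - 1*5 = 1.
A = [[2, 1], [5, 3]]
Ainv = inverse_2x2(2, 1, 5, 3)

# Verify A * Ainv = I by direct 2x2 multiplication.
product = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
```

Multiplying `A` by `Ainv` returns the identity matrix, confirming the formula on this example.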

Theorem

An $n \times n$ matrix $A$ is invertible if and only if $A$ is row equivalent to $I_n$, and in this case, any sequence of elementary row operations that reduces $A$ to $I_n$ also transforms $I_n$ into $A^{-1}$.

Proof:

Let $A$ be an $n \times n$ matrix, and suppose $A$ is invertible. Row reduce $A$ to its reduced echelon form $R$: there is a sequence of elementary matrices $E_1, \dots, E_p$, each invertible, such that $$E_p \cdots E_1 A = R.$$ Since $A$ and each $E_i$ are invertible, $R$ is invertible, and the only invertible $n \times n$ matrix in reduced echelon form is $I_n$ (an invertible matrix cannot have a zero row). Hence $$E_p \cdots E_1 A = I_n,$$ and $A$ is row-equivalent to $I_n$.

Conversely, suppose $A$ is row-equivalent to $I_n$. Then there is a sequence of invertible elementary matrices $E_1, \dots, E_p$ such that $$E_p \cdots E_1 A = I_n.$$ Solving for $A$ gives $A = E_1^{-1} \cdots E_p^{-1}$, a product of invertible matrices, so $A$ is invertible.

To find $A^{-1}$, augment $A$ with $I_n$ to form $$\left[ \begin{array}{c|c} A & I_n \end{array} \right],$$ and apply the same sequence of elementary row operations $E_1, \dots, E_p$ that reduces $A$ to $I_n$. Because each row operation acts on both halves of the augmented matrix, this sequence multiplies the right half on the left by $E_p \cdots E_1 = A^{-1}$. At the end of the process, the augmented matrix becomes $$\left[ \begin{array}{c|c} I_n & A^{-1} \end{array} \right].$$

Therefore, $A$ is invertible if and only if $A$ is row-equivalent to $I_n$, and the sequence of elementary row operations that reduces $A$ to $I_n$ simultaneously transforms $I_n$ into $A^{-1}$.
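The $[A \mid I_n] \to [I_n \mid A^{-1}]$ procedure can be sketched as code. This is a minimal Gauss–Jordan implementation in plain Python (the function name `invert` and the `1e-12` pivot tolerance are illustrative choices, not part of the theorem):

```python
def invert(A):
    """Invert a square matrix by row reducing [A | I_n]: the same row
    operations that turn A into I_n turn I_n into A^{-1}."""
    n = len(A)
    # Build the augmented matrix [A | I_n].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Row interchange: bring up the row with the largest pivot entry.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is not invertible")
        M[col], M[pivot] = M[pivot], M[col]
        # Scaling: make the pivot entry 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Replacement: eliminate the pivot column from every other row.
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The left half is now I_n; the right half is A^{-1}.
    return [row[n:] for row in M]

Ainv = invert([[2.0, 1.0], [5.0, 3.0]])
```

On the $2 \times 2$ example from the first theorem, this reproduces $A^{-1} = \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix}$.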

Theorem

a. If $A$ is an invertible matrix, then $A^{-1}$ is invertible and $(A^{-1})^{-1} = A$.

b. If $A$ and $B$ are $n \times n$ invertible matrices, then so is $AB$, and the inverse of $AB$ is the product of the inverses of $A$ and $B$ in the reverse order. That is, $(AB)^{-1} = B^{-1}A^{-1}$.

c. If $A$ is an invertible matrix, then so is $A^T$, and the inverse of $A^T$ is the transpose of $A^{-1}$. That is, $(A^T)^{-1} = (A^{-1})^T$.

Proof:

(a): Suppose $A$ is an invertible matrix, so $A^{-1}$ exists and $$A A^{-1} = I \quad \text{and} \quad A^{-1} A = I.$$ These two equations say precisely that $A$ satisfies the definition of an inverse of $A^{-1}$, so $A^{-1}$ is invertible; since the inverse of a matrix is unique, $(A^{-1})^{-1} = A$.

(b): Let $A$ and $B$ be $n \times n$ invertible matrices, and let $C = B^{-1} A^{-1}$. Then $$(AB)C = (AB)(B^{-1} A^{-1}) = A(BB^{-1})A^{-1} = A I A^{-1} = A A^{-1} = I,$$ and $$C(AB) = (B^{-1} A^{-1})(AB) = B^{-1} (A^{-1} A) B = B^{-1} I B = B^{-1} B = I.$$ Since both products equal $I$, the matrix $AB$ is invertible with inverse $C$. Thus, $(AB)^{-1} = B^{-1} A^{-1}$.

(c): Suppose $A$ is an invertible matrix, so $A^{-1}$ exists and $$A A^{-1} = I \quad \text{and} \quad A^{-1} A = I.$$ Taking the transpose of both sides: $$(A A^{-1})^T = I^T \quad \text{and} \quad (A^{-1} A)^T = I^T.$$ By the product rule for transposes, $(XY)^T = Y^T X^T$, this gives $$(A^{-1})^T A^T = I \quad \text{and} \quad A^T (A^{-1})^T = I.$$ Thus, $A^T$ is invertible and $(A^T)^{-1} = (A^{-1})^T$.
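Parts (b) and (c) are easy to check numerically. A quick sketch in plain Python, using the $2 \times 2$ adjugate formula from the first theorem (the helper names `matmul`, `inv2`, and `transpose`, and the two example matrices, are illustrative):

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(M):
    """2x2 inverse via the adjugate formula (assumes ad - bc != 0)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(M):
    return [list(row) for row in zip(*M)]

A = [[1.0, 2.0], [3.0, 5.0]]   # det = -1, invertible
B = [[2.0, 1.0], [7.0, 4.0]]   # det = 1, invertible

# Part (b): (AB)^{-1} should equal B^{-1} A^{-1}.
lhs = inv2(matmul(A, B))
rhs = matmul(inv2(B), inv2(A))

# Part (c): (A^T)^{-1} should equal (A^{-1})^T.
lhs_t = inv2(transpose(A))
rhs_t = transpose(inv2(A))
```

Both pairs agree entry by entry, as the theorem predicts; note the reversed order $B^{-1} A^{-1}$ in part (b).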

Theorem

If $A$ is an invertible $n \times n$ matrix, then for each $\vec{b}$ in $\mathbb{R}^n$, the equation $A\vec{x} = \vec{b}$ has the unique solution $\vec{x} = A^{-1}\vec{b}$.

Proof:

If $A$ is an invertible $n \times n$ matrix, then by definition there exists $A^{-1}$ such that $$A A^{-1} = I \quad \text{and} \quad A^{-1} A = I.$$ Suppose we are solving the equation $A \vec{x} = \vec{b}$ for $\vec{x}$. Since $A$ is invertible, we can multiply both sides by $A^{-1}$ on the left: $$A^{-1}(A \vec{x}) = A^{-1} \vec{b}.$$ By associativity of matrix multiplication, the left-hand side simplifies to $$(A^{-1} A) \vec{x} = A^{-1} \vec{b}.$$ Since $A^{-1} A = I$, this becomes $$I \vec{x} = A^{-1} \vec{b}.$$ Thus, $\vec{x} = A^{-1} \vec{b}$.

To show uniqueness, suppose there exist two solutions, $\vec{x}_1$ and $\vec{x}_2$, such that $$A \vec{x}_1 = \vec{b} \quad \text{and} \quad A \vec{x}_2 = \vec{b}.$$ Subtracting these equations gives $$A (\vec{x}_1 - \vec{x}_2) = \vec{0}.$$ Multiplying on the left by $A^{-1}$ yields $\vec{x}_1 - \vec{x}_2 = A^{-1}\vec{0} = \vec{0}$, which implies $\vec{x}_1 = \vec{x}_2$.

Thus, $A \vec{x} = \vec{b}$ has the unique solution $\vec{x} = A^{-1} \vec{b}$.
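For a concrete instance, the theorem can be applied to a $2 \times 2$ system using the inverse formula from the first theorem. A sketch in plain Python (the function name `solve_via_inverse` and the example system are illustrative):

```python
def solve_via_inverse(A, b):
    """Solve the 2x2 system A x = b as x = A^{-1} b.
    Assumes det(A) = ad - bc != 0, i.e. A is invertible."""
    (a_, b_), (c_, d_) = A
    det = a_ * d_ - b_ * c_
    Ainv = [[d_ / det, -b_ / det], [-c_ / det, a_ / det]]
    # Matrix-vector product A^{-1} b.
    return [Ainv[0][0] * b[0] + Ainv[0][1] * b[1],
            Ainv[1][0] * b[0] + Ainv[1][1] * b[1]]

# System: 2x + y = 5 and 5x + 3y = 13.
x = solve_via_inverse([[2.0, 1.0], [5.0, 3.0]], [5.0, 13.0])
```

The result is the unique solution $\vec{x} = (2, 1)$, and substituting it back into both equations confirms it. (In numerical practice one solves $A\vec{x} = \vec{b}$ by elimination rather than by forming $A^{-1}$, but the theorem guarantees the solution exists and is unique.)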