Some reminders on Linear Algebra
In this post I’ll mention a handful of reminders on Linear Algebra.
- We know that for an eigenvalue $\lambda$ of a matrix $A$ we have $Ax = \lambda x$ for some non-zero vector $x$, or equivalently $(A - \lambda I)x = 0$. To find $\lambda$, we solve $\mathrm{det}(A - \lambda I) = 0$. But why?
The reason is that for $(A - \lambda I)x = 0$ to have a non-zero solution, $(A - \lambda I)$ must be singular (non-invertible), and hence its determinant must be zero!
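A quick numerical check of this (a sketch using NumPy; the $2 \times 2$ matrix here is an arbitrary example of my choosing): the roots of the characteristic polynomial $\mathrm{det}(A - \lambda I)$ coincide with the eigenvalues, and $A - \lambda I$ is singular at each of them.

```python
import numpy as np

# A small symmetric matrix whose eigenvalues (1 and 3) are easy to verify by hand.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Characteristic polynomial det(A - lambda*I) = lambda^2 - 4*lambda + 3.
char_poly = np.poly(A)       # coefficients [1, -4, 3]
roots = np.roots(char_poly)  # roots of the characteristic polynomial

# Compare with NumPy's eigenvalue solver.
eigvals = np.linalg.eigvals(A)
print(sorted(roots.real), sorted(eigvals.real))

# At an eigenvalue, A - lambda*I is singular, so its determinant is ~0.
print(np.isclose(np.linalg.det(A - 3.0 * np.eye(2)), 0.0))  # True
```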
Facts:
- $\mathrm{trace}(A) = \lambda_1 + \lambda_2 + \dots + \lambda_n$
- $\mathrm{det}(A) = \lambda_1 \cdot \lambda_2 \cdots \lambda_n$
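Both facts are easy to confirm numerically; a minimal sketch, assuming a generic random matrix (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

lam = np.linalg.eigvals(A)  # may be complex for a real matrix

# trace(A) = sum of eigenvalues; det(A) = product of eigenvalues.
# Imaginary parts cancel in conjugate pairs, so .real is safe here.
print(np.isclose(np.trace(A), lam.sum().real))        # True
print(np.isclose(np.linalg.det(A), lam.prod().real))  # True
```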
- Why is $\mathrm{Det}(A) = \mathrm{Det}(A^\top)$ for any square matrix $A$?
We know that $PA = LU$ (the factorization produced by elimination), where $P$ is a permutation matrix (each row has exactly one entry equal to one and the rest zero), $L$ is a lower triangular matrix whose diagonal entries are all one, and $U$ is an upper triangular matrix with the pivots of $A$ on its diagonal. Taking transposes, $(PA)^\top = (LU)^\top$ gives $A^\top P^\top = U^\top L^\top$. So,
\[\begin{align*} \mathrm{Det}(PA) & = \mathrm{Det}(P)\cdot \mathrm{Det}(A) = \mathrm{Det}(L)\cdot \mathrm{Det}(U), \quad \mathrm{and} \\ \mathrm{Det}\left((PA)^\top\right) & = \mathrm{Det}\left(A^\top\right) \cdot \mathrm{Det}\left(P^\top\right) = \mathrm{Det}\left(U^\top\right)\cdot \mathrm{Det}\left(L^\top\right). \end{align*}\]Since $L$ and $U$ are triangular, $\mathrm{Det}(L) = \mathrm{Det}\left(L^\top\right)$ and $\mathrm{Det}(U) = \mathrm{Det}\left(U^\top\right)$, because the determinant of a triangular matrix is the product of its diagonal entries. So if $\mathrm{Det}(P) = \mathrm{Det}\left(P^\top\right)$, it must be the case that $\mathrm{Det}(A) = \mathrm{Det}\left(A^\top\right)$. It remains to prove that $\mathrm{Det}(P) = \mathrm{Det}\left(P^\top\right)$:
Let $\pi \in S_n$ be a permutation and $P_\pi$ the permutation matrix obtained by applying $\pi$ to the identity matrix, so that $(P_\pi)_{ij} = 1$ if $i = \pi(j)$, and zero otherwise. Using the Leibniz formula we have:
\[\mathrm{Det}(P_\pi) = \sum_{\sigma \in S_n}\mathrm{sgn}(\sigma) \prod_{i = 1}^n (P_\pi)_{i\sigma(i)}.\]The only way the product is non-zero is if $i = \pi(\sigma(i))$ for every $i$, that is, $\sigma = \pi^{-1}$.
Hence, $\mathrm{Det}(P_\pi) = \mathrm{sgn}\left(\pi^{-1}\right) = \mathrm{sgn}(\pi)$, since a permutation and its inverse have the same sign. Recall that $\mathrm{sgn}(\pi) = (-1)^k$,
where $k$ is the number of transpositions in any decomposition of $\pi$ into transpositions. Hence, $\mathrm{Det}(P_\pi) = \pm 1$.
Using the identity $PP^\top = I$, we have $\mathrm{Det}(P)\cdot\mathrm{Det}\left(P^\top\right) = 1$.
Since both determinants are $\pm 1$, $\mathrm{Det}(P)$ must equal $\mathrm{Det}\left(P^\top\right)$, so the identity holds. $\square$
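The claims about permutation matrices can be verified exhaustively for small $n$; a sketch with NumPy (the helper names `perm_matrix` and `sign` are mine, and `sign` counts inversions, whose parity equals the transposition parity used above):

```python
import numpy as np
from itertools import permutations

def perm_matrix(pi):
    """Permutation matrix with P[i, j] = 1 iff i == pi[j]."""
    n = len(pi)
    P = np.zeros((n, n))
    for j in range(n):
        P[pi[j], j] = 1.0
    return P

def sign(pi):
    """Sign of a permutation: (-1) to the number of inversions."""
    n = len(pi)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j])
    return (-1) ** inv

# det(P) = sgn(pi) = det(P^T), and P P^T = I, for every permutation of {0,...,3}.
for pi in permutations(range(4)):
    P = perm_matrix(pi)
    assert np.isclose(np.linalg.det(P), sign(pi))
    assert np.isclose(np.linalg.det(P), np.linalg.det(P.T))
    assert np.allclose(P @ P.T, np.eye(4))
print("all 24 permutations check out")
```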
- Only full-rank square matrices are invertible.
If an $n \times n$ matrix is not full-rank, it has a non-trivial null space, so zero is one of its eigenvalues,
which means the determinant of the matrix (the product of its eigenvalues) is zero, and it cannot be inverted.
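A small illustration of this chain (rank deficiency, a zero eigenvalue, and failed inversion); the two matrices below are arbitrary examples I picked, the second with its rows deliberately dependent:

```python
import numpy as np

full = np.array([[1.0, 2.0],
                 [3.0, 4.0]])      # rank 2: invertible
deficient = np.array([[1.0, 2.0],
                      [2.0, 4.0]])  # rank 1: second row = 2 * first

print(np.linalg.matrix_rank(full), np.linalg.matrix_rank(deficient))  # 2 1

# The rank-deficient matrix has 0 as an eigenvalue, so its determinant is 0...
print(np.isclose(np.linalg.eigvals(deficient), 0).any())  # True
print(np.isclose(np.linalg.det(deficient), 0))            # True

# ...and inverting it fails.
try:
    np.linalg.inv(deficient)
except np.linalg.LinAlgError as e:
    print("inversion failed:", e)
```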
- If $A$ is an $n \times m$ matrix and $B$ is an $m \times k$ matrix, then $\mathsf{rank}(AB) \leq \min\{\mathsf{rank}(A), \mathsf{rank}(B)\}$.
Let $S$ be the column space of $AB$, that is, the space of vectors that are linear combinations of the columns of $AB$. Let $s \in S$; then there exists a $k\times 1$ vector $v$ such that
\[\begin{equation*} s = (AB)v. \end{equation*}\]By the associativity of the matrix-vector product, we can write
\[\begin{equation*} s = A(Bv), \end{equation*}\]where now $Bv$ is an $m \times 1$ vector. Thus, $s$ lies in the column space of $A$. We know that $\mathsf{rank}(A)$ equals the dimension of its column space, and since we showed that $S$ is a subspace of the column space of $A$, its dimension is at most $\mathsf{rank}(A)$. The dimension of $S$ is by definition $\mathsf{rank}(AB)$, hence
\[\begin{equation*} \mathsf{rank}(AB) \leq \mathsf{rank}(A). \end{equation*}\]With an analogous argument regarding the row space of $B$, we will get that
\[\begin{equation*} \mathsf{rank}(AB) \leq \mathsf{rank}(B). \end{equation*}\]Therefore, we must have
\[\begin{equation*} \mathsf{rank}(AB) \leq \min\{\mathsf{rank}(A), \mathsf{rank}(B)\}. \tag*{$\square$} \end{equation*}\]
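The rank bound is easy to check numerically; a sketch assuming generic random matrices (shapes and seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)

# A is 5x3 and B is 3x4, so each has rank at most 3; generic draws hit the max.
A = rng.standard_normal((5, 3))
B = rng.standard_normal((3, 4))

r = np.linalg.matrix_rank
print(r(A), r(B), r(A @ B))  # 3 3 3
assert r(A @ B) <= min(r(A), r(B))

# Force rank deficiency in B and watch the product's rank drop with it.
B_low = np.outer(rng.standard_normal(3), rng.standard_normal(4))  # rank 1
print(r(B_low), r(A @ B_low))  # 1 1
assert r(A @ B_low) <= min(r(A), r(B_low))
```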
References
- Introduction to Linear Algebra
- Google Gemini