
{\sc\large This Section is a Draft, Subject to Changes}

\bigskip Our first decomposition applies only to diagonalizable matrices (Definition DZM), and expresses such a matrix as a sum of very simple matrices, each of rank one.
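The simple matrices of the decomposition are outer products: a column vector times a row vector, scaled by an eigenvalue. If neither vector is the zero vector, such a product has rank $1$, since every column is a scalar multiple of the one column vector. As a quick illustration, with two vectors of our own choosing (they play no role elsewhere in this section),
\begin{align*}
\vect{x}\transpose{\vect{y}}
&=\colvector{1\\2}\begin{bmatrix}3&1\end{bmatrix}
=\begin{bmatrix}3&1\\6&2\end{bmatrix}
\end{align*}
and both columns of the result are scalar multiples of $\vect{x}$.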
Theorem ROD Rank One Decomposition
Suppose that $A$ is a diagonalizable matrix of size $n$ and rank $r$. Then there are $r$ square matrices $A_1,\,A_2,\,A_3,\,\dots,\,A_r$, each of size $n$ and rank $1$ such that
\begin{align*}
A&=A_1+A_2+A_3+\dots+A_r
\end{align*}
Furthermore, if $\scalarlist{\lambda}{r}$ are the nonzero eigenvalues of $A$, then there are two sets of $r$ linearly independent vectors from $\complex{n}$,
\begin{align*}
X&=\set{\vectorlist{x}{r}} & Y&=\set{\vectorlist{y}{r}}
\end{align*}
such that $A_k=\lambda_k\vect{x}_k\transpose{\vect{y}_k}$, $1\leq k\leq r$.

Proof

Since $A$ is diagonalizable (Definition DZM), by Theorem DC there is a nonsingular matrix $S$ and a diagonal matrix $D$ such that $\inverse{S}AS=D$, where the columns of $S$ are linearly independent eigenvectors of $A$ and the diagonal entries of $D$ are the corresponding eigenvalues. Because $S$ is nonsingular, $D$ has the same rank as $A$, so exactly $r$ of the diagonal entries of $D$ are nonzero. Rearranging the columns of $S$ if necessary, we can assume these are the first $r$ diagonal entries, the nonzero eigenvalues $\scalarlist{\lambda}{r}$.

For $1\leq k\leq n$, define $\vect{x}_k$ to be column $k$ of $S$, and define $\transpose{\vect{y}_k}$ to be row $k$ of $\inverse{S}$. By Theorem EMP, entry $(i,j)$ of $A=SD\inverse{S}$ is $\sum_{k=1}^{n}\matrixentry{S}{ik}\lambda_k\matrixentry{\inverse{S}}{kj}$, and this is also entry $(i,j)$ of the sum of the outer products $\lambda_k\vect{x}_k\transpose{\vect{y}_k}$. Thus
\begin{align*}
A&=\sum_{k=1}^{n}\lambda_k\vect{x}_k\transpose{\vect{y}_k}
=\sum_{k=1}^{r}\lambda_k\vect{x}_k\transpose{\vect{y}_k}
\end{align*}
since $\lambda_k=0$ for $r+1\leq k\leq n$. Define $A_k=\lambda_k\vect{x}_k\transpose{\vect{y}_k}$, $1\leq k\leq r$. Each column of $A_k$ is a scalar multiple of the nonzero vector $\vect{x}_k$, and $A_k$ is not the zero matrix since $\lambda_k\neq 0$ and $\vect{y}_k\neq\zerovector$, so each $A_k$ has rank $1$. Finally, the columns of the nonsingular matrix $S$ form a linearly independent set, so $X$ is linearly independent, and the rows of the nonsingular matrix $\inverse{S}$ likewise make $Y$ a linearly independent set.

We record two observations that were not stated in the theorem above. First, the vectors in $X$, chosen as columns of $S$, are eigenvectors of $A$. Second, the product of two vectors from $X$ and $Y$ in the opposite order, by which we mean $\transpose{\vect{y}_i}\vect{x}_j$, is the entry in row $i$ and column $j$ of the matrix product $\inverse{S}S=I_n$ (Theorem EMP). In particular,
\begin{align*}
\transpose{\vect{y}_i}\vect{x}_j
&=
\begin{cases}
1&\text{if $i=j$}\\
0&\text{if $i\neq j$}
\end{cases}
\end{align*}
We give two computational examples: one small, one a bit bigger.
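Before the formal examples, we can see the machinery of the proof in miniature. The matrix below is our own choice for illustration (it is not the matrix of Example ROD2). Take
\begin{align*}
A&=\begin{bmatrix}2&1\\1&2\end{bmatrix}
\end{align*}
which has eigenvalues $\lambda_1=3$ and $\lambda_2=1$, so $r=2$. Diagonalizing $A$ produces
\begin{align*}
S&=\begin{bmatrix}1&1\\1&-1\end{bmatrix}
&
\inverse{S}&=\frac{1}{2}\begin{bmatrix}1&1\\1&-1\end{bmatrix}
\end{align*}
so the columns of $S$ give $\vect{x}_1=\colvector{1\\1}$, $\vect{x}_2=\colvector{1\\-1}$, and the rows of $\inverse{S}$ give $\transpose{\vect{y}_1}=\frac{1}{2}\begin{bmatrix}1&1\end{bmatrix}$, $\transpose{\vect{y}_2}=\frac{1}{2}\begin{bmatrix}1&-1\end{bmatrix}$. The rank one pieces are then
\begin{align*}
A_1&=\lambda_1\vect{x}_1\transpose{\vect{y}_1}=\frac{3}{2}\begin{bmatrix}1&1\\1&1\end{bmatrix}
&
A_2&=\lambda_2\vect{x}_2\transpose{\vect{y}_2}=\frac{1}{2}\begin{bmatrix}1&-1\\-1&1\end{bmatrix}
\end{align*}
and indeed $A_1+A_2=A$. Checking the second observation, $\transpose{\vect{y}_1}\vect{x}_1=\frac{1}{2}(1+1)=1$ while $\transpose{\vect{y}_1}\vect{x}_2=\frac{1}{2}(1-1)=0$, as promised.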
Example ROD2 Rank one decomposition, size 2
Here is a slightly larger example, this time with a matrix that does not have full rank.
Example ROD4 Rank one decomposition, size 4
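The rank-deficient case deserves a miniature illustration as well, again with a matrix of our own choosing (it is not the matrix of Example ROD4). The singular matrix
\begin{align*}
A&=\begin{bmatrix}1&1\\1&1\end{bmatrix}
\end{align*}
is diagonalizable with eigenvalues $\lambda_1=2$ and $\lambda_2=0$, and has rank $r=1$, so the decomposition has a single term. With $\vect{x}_1=\colvector{1\\1}$ and $\transpose{\vect{y}_1}=\frac{1}{2}\begin{bmatrix}1&1\end{bmatrix}$,
\begin{align*}
A&=A_1=2\vect{x}_1\transpose{\vect{y}_1}=\begin{bmatrix}1&1\\1&1\end{bmatrix}
\end{align*}
and the zero eigenvalue contributes nothing to the sum.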