You may have once thought that the natural definition for matrix
multiplication would be entrywise multiplication, much in the same way that a
young child might say, “I writed my name.” The mistake is understandable, but it
still makes us cringe. Unlike poor grammar, however, entrywise matrix
multiplication has reason to be studied; it has nice properties in matrix analysis
and additionally plays a role with relative gain arrays in chemical engineering,
covariance matrices in probability and serves as an inertia preserver for Hermitian
matrices in physics. Here we will only explore the properties of the Hadamard
product in matrix analysis.
Definition HP Hadamard Product
Let A and B be m × n matrices. The Hadamard Product of A and B is defined by

  [A ∘ B]_{ij} = [A]_{ij} [B]_{ij}   for all 1 ≤ i ≤ m, 1 ≤ j ≤ n.
As we can see, the Hadamard product is simply “entrywise multiplication”. Because of
this, the Hadamard product inherits the same benefits (and restrictions) as multiplication
in ℂ. Note also
that both A
and B
need to be the same size, but not necessarily square. To avoid confusion,
juxtaposition of matrices will imply the “usual” matrix multiplication, and we will
use “∘”
for the Hadamard product.
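Since the Hadamard product is just entrywise multiplication, it can be computed directly in, for example, NumPy, where the `*` operator on arrays is entrywise (a sketch, not part of the text itself):

```python
import numpy as np

# Two 2 x 3 matrices of the same size (not necessarily square)
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7, 8, 9],
              [10, 11, 12]])

# Hadamard product: [A ∘ B]_ij = [A]_ij [B]_ij
had = A * B
print(had)  # [[ 7 16 27], [40 55 72]]
```

Note that `A * B` requires the two arrays to have the same shape, mirroring the definition; the “usual” matrix product would instead be written `A @ B`.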
Definition HI Hadamard Inverse
Let A be an m × n matrix and suppose [A]_{ij} ≠ 0 for all 1 ≤ i ≤ m, 1 ≤ j ≤ n. Then the Hadamard Inverse, Â, is given by

  [Â]_{ij} = ([A]_{ij})^{−1}   for all 1 ≤ i ≤ m, 1 ≤ j ≤ n.
Theorem HPHI Hadamard Product with Hadamard Inverses
Let A be an m × n matrix such that [A]_{ij} ≠ 0 for all 1 ≤ i ≤ m, 1 ≤ j ≤ n. Then

  A ∘ Â = Â ∘ A = J_{mn},

where J_{mn} denotes the m × n matrix whose entries all equal one (the Hadamard identity). □
For all 1 ≤ i ≤ m, 1 ≤ j ≤ n we have

  [A ∘ Â]_{ij} = [A]_{ij} ([A]_{ij})^{−1} = 1 = ([A]_{ij})^{−1} [A]_{ij} = [Â ∘ A]_{ij} = [J_{mn}]_{ij}.

With equality of each entry, we know by Definition ME that the matrices are equal.
■
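Definition HI and Theorem HPHI can be illustrated with a small NumPy sketch, assuming NumPy only; the entrywise reciprocal `1.0 / A` is exactly the Hadamard inverse:

```python
import numpy as np

# A matrix with no zero entries, so its Hadamard inverse exists
A = np.array([[2.0, 4.0],
              [0.5, 5.0]])

# Hadamard inverse: [Â]_ij = ([A]_ij)^(-1)
A_hat = 1.0 / A

# Theorem HPHI: A ∘ Â = Â ∘ A = J_mn, the all-ones matrix
J = A * A_hat
print(J)  # [[1. 1.], [1. 1.]]
```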
Since matrices have a different inverse and identity under the Hadamard product,
we have used special notation to distinguish them from what we have been using
with “normal” matrix multiplication. That is, compare the “usual” matrix inverse,
A^{−1}, with the Hadamard inverse, Â, and the “usual” matrix identity,
I_n, with the Hadamard identity, J_{mn}.
The Hadamard identity matrix and the Hadamard inverse are both more
limiting than helpful, so we will not explore their use further. One last fun
fact for those of you who may be familiar with group theory: the set of
m × n matrices with nonzero entries forms an abelian (commutative) group under the
Hadamard product (prove this!).
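A numerical spot-check of the group axioms (a sanity check on random matrices, not a proof) might look like this in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random matrices with entries bounded away from zero
A = rng.uniform(1, 2, size=(3, 4))
B = rng.uniform(1, 2, size=(3, 4))
C = rng.uniform(1, 2, size=(3, 4))
J = np.ones((3, 4))   # Hadamard identity
A_hat = 1.0 / A       # Hadamard inverse

closure = ((A * B) != 0).all()                    # product stays in the set
assoc   = np.allclose((A * B) * C, A * (B * C))   # associativity
ident   = np.allclose(A * J, A)                   # identity element
inv     = np.allclose(A * A_hat, J)               # inverses
comm    = np.allclose(A * B, B * A)               # commutativity
print(closure, assoc, ident, inv, comm)  # True True True True True
```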
Subsection DMHP: Diagonal Matrices and the Hadamard Product
We can relate the Hadamard product with matrix multiplication by considering diagonal
matrices, since A ∘ B = AB if
and only if both A
and B
are diagonal (Citation!!!). For example, a simple calculation reveals that the
Hadamard product relates the diagonal values of a diagonalizable matrix
A with
its eigenvalues:
Theorem DMHP Diagonalizable Matrices and the Hadamard Product
Let A be a diagonalizable matrix of size n with eigenvalues λ_1, λ_2, λ_3, …, λ_n.
Let D be a diagonal matrix from the diagonalization of A, A = SDS^{−1}, and let d be a
vector such that [D]_{ii} = [d]_i = λ_i for all 1 ≤ i ≤ n. Then

  [A]_{ii} = [(S ∘ (S^{−1})^t) d]_i   for all 1 ≤ i ≤ n. □
For all 1 ≤ i ≤ n we have

  [A]_{ii} = [SDS^{−1}]_{ii} = Σ_{k=1}^{n} [S]_{ik} [D]_{kk} [S^{−1}]_{ki} = Σ_{k=1}^{n} [S]_{ik} [(S^{−1})^t]_{ik} [d]_k = [(S ∘ (S^{−1})^t) d]_i.

With equality of each entry, we know by Definition ME that the two matrices are equal.
■
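Theorem DMHP can be spot-checked numerically; below is a NumPy sketch with a small 2 × 2 diagonalizable matrix (the particular S and eigenvalues are an arbitrary choice, not from the text):

```python
import numpy as np

# A diagonalizable matrix A = S D S^{-1}
S = np.array([[1.0, 1.0],
              [1.0, -1.0]])
d = np.array([3.0, 5.0])              # eigenvalues, the diagonal of D
A = S @ np.diag(d) @ np.linalg.inv(S)

# Theorem DMHP: [A]_ii = [(S ∘ (S^{-1})^t) d]_i
lhs = np.diag(A)
rhs = (S * np.linalg.inv(S).T) @ d
print(lhs, rhs)  # both equal [4, 4]
```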
We obtain a similar result when we look at the singular value decomposition of
square matrices (see exercises).
Theorem DMMP Diagonal Matrices and Matrix Products
Suppose A, B are m × n matrices, and D and E are diagonal matrices of size m and n,
respectively. Then

  D(A ∘ B)E = (DAE) ∘ B = A ∘ (DBE).
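The way diagonal matrices pass through the Hadamard product can be checked entrywise: each of D(A ∘ B)E, (DAE) ∘ B, and A ∘ (DBE) has (i, j) entry [D]_{ii} [A]_{ij} [B]_{ij} [E]_{jj}. A NumPy spot-check of this identity (random matrices are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((2, 3))
D = np.diag(rng.standard_normal(2))   # diagonal, size m = 2
E = np.diag(rng.standard_normal(3))   # diagonal, size n = 3

lhs = D @ (A * B) @ E
mid = (D @ A @ E) * B
rhs = A * (D @ B @ E)
print(np.allclose(lhs, mid), np.allclose(mid, rhs))  # True True
```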
T30 Let A be a square matrix of size n with singular values σ_1, σ_2, σ_3, …, σ_n.
Let D be a diagonal matrix from the singular value decomposition of A,
A = UDV^∗ (Theorem SVD). Define the vector d by [d]_i = [D]_{ii} = σ_i, 1 ≤ i ≤ n.
Prove the following equality,

  [A]_{ii} = [(U ∘ V̄) d]_i   for all 1 ≤ i ≤ n,

where V̄ denotes the entrywise complex conjugate of V.
Furthermore, suppose A, B are m × n matrices. Prove that

  [ADB^t]_{ii} = [(A ∘ B) d]_i   for all 1 ≤ i ≤ m.
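Before attempting a proof, the identity [ADB^t]_{ii} = [(A ∘ B) d]_i (for diagonal D with diagonal vector d) can be spot-checked numerically; a NumPy sketch with arbitrary random real matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
d = rng.standard_normal(n)
D = np.diag(d)

lhs = np.diag(A @ D @ B.T)   # [A D B^t]_ii
rhs = (A * B) @ d            # [(A ∘ B) d]_i
print(np.allclose(lhs, rhs))  # True
```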
Contributed by Elizabeth Million