A Matrix-Valued Generalization of Cross Product for Two Vectors in R3 with Application to a Matrix Operator for Computing the Curl of the Curl of a Vector Field


Donald R. Burleson, Ph.D.

Copyright (c) 2012; all rights reserved.



Let $\vec{a} = (a_1, a_2, a_3)$ and $\vec{b} = (b_1, b_2, b_3)$ be any two vectors in the space $\mathbb{R}^3$.  It is well known that the cross product

$$\vec{a} \times \vec{b} \;=\; \det \begin{pmatrix} \vec{i} & \vec{j} & \vec{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{pmatrix}$$

can alternatively be computed as a matrix product of the form

$$\vec{a} \times \vec{b} \;=\; (a_1 \;\; a_2 \;\; a_3) \begin{pmatrix} 0 & -b_3 & b_2 \\ b_3 & 0 & -b_1 \\ -b_2 & b_1 & 0 \end{pmatrix},$$

where, as is customary, we have made use of the notion of regarding the vector $(a_1, a_2, a_3)$ and the $1 \times 3$ row matrix $(a_1 \;\; a_2 \;\; a_3)$ as isomorphic; we shall in fact use their respective notations (i.e. with or without the separating commas) interchangeably.  One readily shows that this matrix product is equivalent to the usual Laplace expansion of the determinant defining the cross product $\vec{a} \times \vec{b}$.
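This identity is easy to check numerically.  The sketch below is not part of the paper; `gamma` is my own name for the matrix-builder that the paper formalizes further on.

```python
import numpy as np

def gamma(v):
    """Skew-symmetric matrix of v, chosen so that a x b == a @ gamma(b)
    in the row-vector convention used in the paper."""
    v1, v2, v3 = v
    return np.array([[ 0.0,  -v3,   v2],
                     [  v3,  0.0,  -v1],
                     [ -v2,   v1,  0.0]])

a = np.array([1.0, 3.0, -2.0])
b = np.array([2.0, -1.0, 4.0])

print(np.cross(a, b))   # cross product via the usual determinant expansion: [10. -8. -7.]
print(a @ gamma(b))     # the same vector via the matrix product: [10. -8. -7.]
```

The two printed vectors agree for any choice of `a` and `b`, since the matrix product simply re-packages the Laplace expansion.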


Since $\vec{a} \times \vec{b} = -(\vec{b} \times \vec{a})$, we have also $\vec{a}B = -\vec{b}A$, where $A$ and $B$ are the matrices built in this way from $\vec{a}$ and $\vec{b}$ respectively.  (This and similar results can also be demonstrated using matrix transposes, since the matrices being discussed here are skew-symmetric and $A^T$, for example, can be replaced with $-A$.)


These facts motivate the following


DEFINITION:  The function $\gamma$ from the vector space $\mathbb{R}^3$ to the set $M_3$ of all $3 \times 3$ matrices over the field of real numbers will be defined as

$$\gamma(a_1, a_2, a_3) \;=\; \begin{pmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{pmatrix}.$$

Given a vector $\vec{a} = (a_1, a_2, a_3)$, the image matrix $\gamma(\vec{a})$ for purposes of this discussion will be denoted simply $A$.



Immediately we may prove:


THEOREM:  For any matrix $A$ in the range of the function $\gamma$,

spectrum$(A) \;=\; \{\, 0,\; i\|\vec{a}\|,\; -i\|\vec{a}\| \,\}$, where $\|\vec{a}\| = \sqrt{a_1^2 + a_2^2 + a_3^2}$.

PROOF:  The characteristic polynomial of $A$ is

$$\det(A - \lambda I) \;=\; \det \begin{pmatrix} -\lambda & -a_3 & a_2 \\ a_3 & -\lambda & -a_1 \\ -a_2 & a_1 & -\lambda \end{pmatrix}$$

$$=\; -\lambda(\lambda^2 + a_1^2) \;-\; (-a_3)(-a_3\lambda - a_1 a_2) \;+\; a_2(a_1 a_3 - a_2\lambda)$$

$$=\; -\lambda(\lambda^2 + a_1^2 + a_2^2 + a_3^2)$$

$$=\; -\lambda(\lambda^2 + \|\vec{a}\|^2),$$

the zeros of which (i.e. the eigenvalues of $A$), namely $\lambda = 0$ and $\lambda = \pm i\|\vec{a}\|$, give the spectrum as claimed.
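The spectrum can be confirmed numerically.  This is my own check, not part of the paper; `gamma` is the matrix-builder of the definition above.

```python
import numpy as np

def gamma(v):
    # Skew-symmetric gamma-image of v, per the paper's definition.
    v1, v2, v3 = v
    return np.array([[ 0.0,  -v3,   v2],
                     [  v3,  0.0,  -v1],
                     [ -v2,   v1,  0.0]])

a = np.array([1.0, 3.0, -2.0])
eigs = np.linalg.eigvals(gamma(a))
norm_a = np.linalg.norm(a)   # ||a|| = sqrt(14)

# Eigenvalues should be 0 and the conjugate pair +/- i*||a||.
assert np.allclose(sorted(e.imag for e in eigs), [-norm_a, 0.0, norm_a])
assert np.allclose([e.real for e in eigs], 0.0)

# And the matrix is singular, as the corollary below asserts.
assert np.isclose(np.linalg.det(gamma(a)), 0.0)
```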



COROLLARY:   Every matrix in the range of the function γ  is singular.


PROOF:  As shown, the spectrum of $A$ always includes the eigenvalue $\lambda = 0$, so that $\det(A)$, the product of the eigenvalues, is zero.


REMARK:  It makes good operational sense for these matrices to be singular, since, for example, if such a matrix $B$ had an inverse, the relation $\vec{a} \times \vec{b} = \vec{a}B$ would imply that one could right-multiply both sides by $B^{-1}$ to solve for a unique vector $\vec{a}$ yielding the given cross product; but such a vector is not in general unique, since $(\vec{a} + t\vec{b}) \times \vec{b} = \vec{a} \times \vec{b}$ for every scalar $t$.


The equivalence of $\vec{a} \times \vec{b}$ with $\vec{a}B$ (where $\vec{a}$ is now regarded as a $1 \times 3$ matrix) allows the recasting of many cross product expressions in terms of matrix products.  In particular, iterated cross products involving three vectors can be rewritten in a variety of ways, e.g.

$$(\vec{a} \times \vec{b}) \times \vec{c} \;=\; (\vec{a}B)C \;=\; \vec{a}(BC),$$

since the operation now is matrix multiplication, which, unlike the vector cross product, is associative.  Thus, oddly enough, the resulting expression is "re-associated" when compared with the original iterated cross product grouping.
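The "re-association" can be spot-checked numerically; the sketch below is mine, not the paper's, and uses random vectors rather than any particular example.

```python
import numpy as np

def gamma(v):
    # Skew-symmetric gamma-image of v, per the paper's definition.
    v1, v2, v3 = v
    return np.array([[ 0.0,  -v3,   v2],
                     [  v3,  0.0,  -v1],
                     [ -v2,   v1,  0.0]])

rng = np.random.default_rng(42)
a, b, c = rng.standard_normal((3, 3))

lhs = np.cross(np.cross(a, b), c)     # (a x b) x c by determinant expansions
rhs = a @ (gamma(b) @ gamma(c))       # a(BC): one row vector times one matrix product
assert np.allclose(lhs, rhs)
```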


Since $B$ replaces $\vec{b}$ in $\vec{a} \times \vec{b} = \vec{a}B$, and since $A$ replaces $\vec{a}$ after a fashion in $\vec{a} \times \vec{b} = -\vec{b}A$ (i.e. the vectors get replaced by matrices one at a time), and especially since a product of two $\gamma$-image matrices right-multiplies onto one of the vectors to produce the desired three-vector cross product, it seems reasonable to dignify such a matrix product by defining it as a matrix-valued operation on the two underlying vectors, a sort of generalization of the concept of cross product.  Hence:


DEFINITION:  For $\vec{a} = (a_1, a_2, a_3)$ and $\vec{b} = (b_1, b_2, b_3)$, the matrix-valued operation combining $\vec{a}$ with $\vec{b}$ to produce $AB$, i.e. the function from $\mathbb{R}^3 \times \mathbb{R}^3$ to $M_3$ under which the image of $(\vec{a}, \vec{b})$ is $AB$, will be denoted $\Pi$.  That is, $\vec{a} \,\Pi\, \vec{b} \;=\; AB$.


Thus, for example, one can write:  $(\vec{a} \times \vec{b}) \times \vec{c} \;=\; \vec{a}\,(\vec{b} \,\Pi\, \vec{c})$.


Explicitly, the generalized product $\vec{a} \,\Pi\, \vec{b}$ is the matrix

$$AB \;=\; \begin{pmatrix} -(a_2 b_2 + a_3 b_3) & a_2 b_1 & a_3 b_1 \\ a_1 b_2 & -(a_1 b_1 + a_3 b_3) & a_3 b_2 \\ a_1 b_3 & a_2 b_3 & -(a_1 b_1 + a_2 b_2) \end{pmatrix},$$
from which it is easy to prove:


THEOREM:  For matrices $A$ and $B$ in the range of the function $\gamma$, if the pre-image vectors $\vec{a}$ and $\vec{b}$ are non-zero orthogonal vectors, then trace$(AB)$ = trace$(BA)$ = 0, and conversely.

PROOF:  By inspection the trace of the matrix $AB$ is

$$\operatorname{trace}(AB) \;=\; -2(a_1 b_1 + a_2 b_2 + a_3 b_3) \;=\; -2\,(\vec{a} \cdot \vec{b}),$$

and the trace is zero if and only if this dot product is zero, which is the case if and only if the vectors are orthogonal.  (The same holds for trace$(BA)$, since trace$(BA)$ = trace$(AB)$ in general.)
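Both directions of the theorem can be spot-checked with NumPy.  This sketch is my own, not from the paper.

```python
import numpy as np

def gamma(v):
    # Skew-symmetric gamma-image of v, per the paper's definition.
    v1, v2, v3 = v
    return np.array([[ 0.0,  -v3,   v2],
                     [  v3,  0.0,  -v1],
                     [ -v2,   v1,  0.0]])

rng = np.random.default_rng(7)
a, b = rng.standard_normal((2, 3))
AB = gamma(a) @ gamma(b)

# trace(AB) = -2 (a . b), so zero trace <=> orthogonal pre-image vectors.
assert np.isclose(np.trace(AB), -2.0 * a.dot(b))

# Orthogonal example: the standard basis vectors (1,0,0) and (0,1,0).
assert np.isclose(np.trace(gamma([1.0, 0.0, 0.0]) @ gamma([0.0, 1.0, 0.0])), 0.0)
```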

It turns out that the primary interest in this generalized cross product resides in the matter of three-vector cross products.  To that effect:


LEMMA:  For three vectors $\vec{a}$, $\vec{b}$, and $\vec{c}$:

$$(\vec{a} \times \vec{b}) \times \vec{c} \;=\; \vec{a}\,(\vec{b} \,\Pi\, \vec{c})$$

and   $$\vec{a} \times (\vec{b} \times \vec{c}) \;=\; -\,\vec{b}\,(\vec{c} \,\Pi\, \vec{a}).$$

PROOF:  The first equality has already been established.  Further:

$$\vec{a} \times (\vec{b} \times \vec{c}) \;=\; -\,(\vec{b} \times \vec{c}) \times \vec{a} \;=\; -\,(\vec{b}C)A$$

$$=\; -\,\vec{b}\,(CA).$$

And finally $-\,\vec{b}\,(CA) = -\,\vec{b}\,(\vec{c} \,\Pi\, \vec{a})$ by the definition of $\Pi$.




EXAMPLES:  Let $A$, $B$, and $C$ respectively be the $\gamma$-image matrices corresponding to the vectors $\vec{a} = (1, 3, -2)$, $\vec{b} = (2, -1, 4)$, and $\vec{c} = (-3, 2, 1)$.


By the customary computation using Laplace expansion of determinants,

$$(\vec{a} \times \vec{b}) \times \vec{c} \;=\; (10, -8, -7) \times (-3, 2, 1) \;=\; (6, 11, -4).$$

By the results of the lemma, this is also computable as

$$\vec{a}\,(\vec{b} \,\Pi\, \vec{c}) \;=\; (1, 3, -2) \begin{pmatrix} -2 & 3 & -12 \\ 4 & 2 & 8 \\ 2 & -1 & 8 \end{pmatrix} \;=\; (6, 11, -4),$$

as before.  Likewise, by determinants,

$$\vec{a} \times (\vec{b} \times \vec{c}) \;=\; (1, 3, -2) \times (-9, -14, 1) \;=\; (-25, 17, 13),$$

and by the lemma we may also compute this result as

$$-\,\vec{b}\,(\vec{c} \,\Pi\, \vec{a}) \;=\; -\,(2, -1, 4) \begin{pmatrix} -4 & 2 & 1 \\ -9 & 5 & 3 \\ 6 & -4 & -3 \end{pmatrix} \;=\; (-25, 17, 13).$$

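Both worked examples can be reproduced in a few lines of NumPy.  The sketch is mine; the vectors and the resulting triples are the paper's.

```python
import numpy as np

def gamma(v):
    # Skew-symmetric gamma-image of v, per the paper's definition.
    v1, v2, v3 = v
    return np.array([[ 0.0,  -v3,   v2],
                     [  v3,  0.0,  -v1],
                     [ -v2,   v1,  0.0]])

a = np.array([1.0, 3.0, -2.0])
b = np.array([2.0, -1.0, 4.0])
c = np.array([-3.0, 2.0, 1.0])
A, B, C = gamma(a), gamma(b), gamma(c)

print(a @ (B @ C))    # (a x b) x c  ->  [  6.  11.  -4.]
print(-b @ (C @ A))   # a x (b x c)  ->  [-25.  17.  13.]
```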
One particularly intriguing application for the notion of a matrix operator's being employed in three-vector iterated cross product expressions is the matter of the curl of the curl of a vector field, a concept having applications in the theory of electromagnetism and elsewhere.


As usual, the "del" symbol $\nabla = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right)$ will indicate the gradient operator thought of as an "operator-valued vector."


Since the curl of a vector field $\vec{F} = (f, g, h)$ is formally defined as

$$\operatorname{curl} \vec{F} \;=\; \nabla \times \vec{F} \;=\; \left( \frac{\partial h}{\partial y} - \frac{\partial g}{\partial z},\;\; \frac{\partial f}{\partial z} - \frac{\partial h}{\partial x},\;\; \frac{\partial g}{\partial x} - \frac{\partial f}{\partial y} \right)$$

and is a vector field itself, the curl of this vector field in turn, i.e. the curl of the curl of the given vector field $\vec{F}$, is

$$\operatorname{curl}(\operatorname{curl} \vec{F}) \;=\; \nabla \times (\nabla \times \vec{F}).$$

The previously proven lemma provides a way of expressing this iterated cross product, since the function $\gamma$ naturally extends to the "operator-valued" gradient operator vector.  It turns out that the curl of the curl of the given vector field $\vec{F}$ can be computed by way of an operator-valued matrix simply applied to the vector field $\vec{F}$ on the right.  As one would intuitively expect, this is done by a matrix operator $M$ such that

$$\operatorname{curl} \vec{F} \;=\; \vec{F} M, \qquad M \;=\; -\,\gamma(\nabla) \;=\; \begin{pmatrix} 0 & \frac{\partial}{\partial z} & -\frac{\partial}{\partial y} \\ -\frac{\partial}{\partial z} & 0 & \frac{\partial}{\partial x} \\ \frac{\partial}{\partial y} & -\frac{\partial}{\partial x} & 0 \end{pmatrix},$$

so that when the matrix operator $M$ is applied twice in succession the result is

$$\operatorname{curl}(\operatorname{curl} \vec{F}) \;=\; (\vec{F} M) M \;=\; \vec{F} M^2.$$

To that effect:


THEOREM:  The curl of the curl of a vector field $\vec{F} = (f, g, h)$ is given by applying, to the right of the vector field by matrix multiplication, the operator

$$M^2 \;=\; \begin{pmatrix} -\left( \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) & \frac{\partial^2}{\partial x \partial y} & \frac{\partial^2}{\partial x \partial z} \\ \frac{\partial^2}{\partial x \partial y} & -\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial z^2} \right) & \frac{\partial^2}{\partial y \partial z} \\ \frac{\partial^2}{\partial x \partial z} & \frac{\partial^2}{\partial y \partial z} & -\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) \end{pmatrix}.$$

PROOF:  By the lemma,

$$\operatorname{curl}(\operatorname{curl} \vec{F}) \;=\; \nabla \times (\nabla \times \vec{F}) \;=\; (\vec{F} M) M \;=\; \vec{F} M^2,$$

which routinely computes out to the desired operator-valued matrix form.  (The entries of $\vec{F} M^2$ are exactly those of the classical identity $\operatorname{curl}(\operatorname{curl} \vec{F}) = \operatorname{grad}(\operatorname{div} \vec{F}) - \nabla^2 \vec{F}$.)
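As a symbolic check (my own sketch, using SymPy, not part of the paper), one can verify that the row vector $(f, g, h)$ acted on entrywise by the operator matrix $M^2$ reproduces curl(curl F) for arbitrary smooth components:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)
g = sp.Function('g')(x, y, z)
h = sp.Function('h')(x, y, z)
D = sp.diff

def curl(V):
    # curl of a vector field given as a list of three components
    p, q, r = V
    return [D(r, y) - D(q, z), D(p, z) - D(r, x), D(q, x) - D(p, y)]

# F M^2: the row vector (f, g, h) times the theorem's operator matrix,
# each differential-operator entry acting on the component it multiplies.
FM2 = [-(D(f, y, 2) + D(f, z, 2)) + D(g, x, y) + D(h, x, z),
       D(f, x, y) - (D(g, x, 2) + D(g, z, 2)) + D(h, y, z),
       D(f, x, z) + D(g, y, z) - (D(h, x, 2) + D(h, y, 2))]

cc = curl(curl([f, g, h]))
assert all(sp.simplify(u - v) == 0 for u, v in zip(cc, FM2))
```

The assertion holds identically, confirming the operator-matrix form of the theorem component by component.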