Dyadic Generalized Semi-inverses of a Nonsingular Matrix

Donald R. Burleson, Ph.D.

Copyright © 2017 Donald R. Burleson. All rights reserved.

In my previous articles "The Semi-inversion Operator and the Pauli
Particle-spin Matrices" (at www.blackmesapress.com/Semi-inverses.htm),
"Semi-inversion of a Diagonalizable Nonsingular Matrix" (at
www.blackmesapress.com/Diagonalizable.htm), and "Semi-inverting a Non-diagonalizable Matrix" (at
www.blackmesapress.com/Nondiagonalizable.htm) I showed that for a
nonsingular matrix A, a semi-inversion operator (here denoted σ) could be
defined such that σ(A) = A^{i} and σ(σ(A)) = (A^{i})^{i} = A^{-1}. This has a
companion semi-inversion operator σ*, with σ*(A) = A^{-i}, again having the
property σ*(σ*(A)) = A^{-1}, the semi-inverses of A being A^{i} and A^{-i},

which in principle are easy to compute in the case that A is diagonalizable, since then the necessary computations may be done by exponentiations on the eigenvalues, either by way of the canonical diagonalization

A = PDP^{-1}, so that A^{i} = PD^{i}P^{-1},

or the principal idempotent decomposition (spectral decomposition)

A = λ_{1}E_{1} + λ_{2}E_{2} + ... + λ_{m}E_{m}, so that A^{i} = λ_{1}^{i}E_{1} + λ_{2}^{i}E_{2} + ... + λ_{m}^{i}E_{m}.

Even if A is not diagonalizable there are other computational methods that may avail.
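As an illustrative sketch (not drawn from the original articles), the eigenvalue-exponentiation recipe can be coded in a few lines; numpy and the helper name `matrix_power_w` are my own assumptions here:

```python
import numpy as np

def matrix_power_w(A, w):
    """Sketch: A^w for a diagonalizable nonsingular A, computed by
    exponentiating the eigenvalues: A^w = P diag(lambda_k^w) P^{-1}."""
    lam, P = np.linalg.eig(A)
    return P @ np.diag(lam.astype(complex) ** w) @ np.linalg.inv(P)

A = np.array([[2.0, 1.0], [0.0, 3.0]])   # diagonalizable, nonsingular
Ai = matrix_power_w(A, 1j)               # the semi-inverse A^{i}
Aii = matrix_power_w(Ai, 1j)             # applying the operator twice
# A^{i*i} = A^{-1}: semi-inverting twice yields the ordinary inverse
```

Note that complex powers use the principal branch of the logarithm, so branch choices matter for matrices whose eigenvalues have large arguments; for this example the principal branch reproduces the expected powers.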

Similarly, instead of employing the two square roots of -1, namely i
and -i, one may define more general classes of matrix-valued operators
based upon the four fourth roots of -1, the eight eighth roots of -1, etc., i.e.
the dyadic roots: n-th roots of -1 where n = 2^{K} for K any positive integer. For
a nonsingular matrix A (whatever the computational challenges when the
matrix is not diagonalizable) it is clear at least that such matrix powers can
always be meaningfully defined and rather easily computed if the matrix is
diagonalizable. Thus:

**THEOREM:** All the dyadic generalized semi-inverses of all orders exist for
any diagonalizable nonsingular matrix.

**PROOF:** Either in canonical diagonalization form or by way of the principal
idempotent decomposition (which always exists for a diagonalizable matrix),
each such matrix power A^{w} can be computed (with details as suggested by the
previous articles cited) by exponentiations on the eigenvalues, which
are always nonzero when A is nonsingular.
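A small numerical check of the theorem for a fourth root of -1 (a sketch assuming numpy; `matrix_power_w` is a hypothetical helper, not from the articles cited):

```python
import numpy as np

def matrix_power_w(A, w):
    # A^w via eigen-decomposition, valid for diagonalizable nonsingular A
    lam, P = np.linalg.eig(A)
    return P @ np.diag(lam.astype(complex) ** w) @ np.linalg.inv(P)

w = np.exp(1j * np.pi / 4)               # one of the four fourth roots of -1
A = np.array([[2.0, 1.0], [0.0, 3.0]], dtype=complex)

X = A.copy()
for _ in range(4):                       # four applications of A -> A^w
    X = matrix_power_w(X, w)
# since w^4 = -1, four applications carry A to A^{-1}
```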

In particular, the four fourth roots of -1 are

e^{iπ/4} = (1+i)/√2,  e^{3iπ/4} = (-1+i)/√2,  e^{5iπ/4} = (-1-i)/√2,  e^{7iπ/4} = (1-i)/√2,

and the four operators A → A^{w}, one for each such root w, produce matrix
results which we may call "demi-semi-inverses." Since (A^{w})^{w} = A^{w^2} and w^{4} = -1, repeated applications of
each operator give a progression from A to A^{-1} through the intermediate demi-semi-inverse
matrix forms:

A → A^{w} → A^{w^2} → A^{w^3} → A^{w^4} = A^{-1}

A similar progression shows not only the passage from A to
A^{-1} but also the "return" progression from A^{-1} back to A as well, since w^{8} = 1:

A^{-1} = A^{w^4} → A^{w^5} → A^{w^6} → A^{w^7} → A^{w^8} = A

The individual "out and back" for a particular operator is thus the composite of these two progressions: eight successive applications of A → A^{w} carry A out to A^{-1} and back to A.

Finally, and more generally, further applications of all the demi-semi-inverse operators map the various matrix forms A^{w^k} among one another; in effect these forms are the states in a finite-state machine, with each operator effecting a transition between states.
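The state-machine behavior can be sketched numerically: repeated application of one demi-semi-inverse operator visits the states A^{w^k} in turn and returns to A (again assuming numpy and the hypothetical `matrix_power_w` helper):

```python
import numpy as np

def matrix_power_w(A, w):
    # A^w by exponentiating eigenvalues (diagonalizable nonsingular A)
    lam, P = np.linalg.eig(A)
    return P @ np.diag(lam.astype(complex) ** w) @ np.linalg.inv(P)

w = np.exp(1j * np.pi / 4)               # fourth root of -1, so w^8 = 1
A = np.array([[2.0, 1.0], [0.0, 3.0]], dtype=complex)

states = [A]                             # states[k] corresponds to A^{w^k}
for _ in range(8):
    states.append(matrix_power_w(states[-1], w))
# states[4] should be A^{-1} ("out") and states[8] should be A again ("back")
```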

Further groups of such operators are of course possible, producing e.g. "hemi-demi-semi-inverses" by employing the eight eighth roots of -1:

e^{iπ/8}, e^{3iπ/8}, e^{5iπ/8}, e^{7iπ/8}, e^{9iπ/8}, e^{11iπ/8}, e^{13iπ/8}, e^{15iπ/8},

and likewise for dyadic configurations of higher order, as well as other, non-dyadic systems using e.g. the three cube roots of -1, since the proof of the above theorem would apply equally well to exponentiation of the eigenvalues by any of these numbers. The dyadic cases of course have the property (as do some other similar scenarios) that the mapping configurations at each level (e.g. demi-semi-inverses for the fourth roots of -1) are isomorphically embedded in the mapping configurations at all higher levels (e.g. hemi-demi-semi-inverses).
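The embedding can likewise be sketched numerically: applying an eighth-root operator twice matches a single application of the fourth-root operator for its square (numpy and `matrix_power_w` are again my assumed scaffolding, not the author's):

```python
import numpy as np

def matrix_power_w(A, w):
    # A^w by exponentiating eigenvalues (diagonalizable nonsingular A)
    lam, P = np.linalg.eig(A)
    return P @ np.diag(lam.astype(complex) ** w) @ np.linalg.inv(P)

A = np.array([[2.0, 1.0], [0.0, 3.0]], dtype=complex)
v = np.exp(1j * np.pi / 8)               # an eighth root of -1
# two applications of the eighth-root operator...
twice = matrix_power_w(matrix_power_w(A, v), v)
# ...agree with one application of the fourth-root operator for v^2
once = matrix_power_w(A, v ** 2)
```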