Semi-Inversion of a Diagonalizable Nonsingular Matrix


Donald R. Burleson, Ph.D.

Copyright © June 2017, April 2021 by Donald R. Burleson. All rights reserved.


          In my previous article “The Semi-Inversion Operator and the Pauli Particle-Spin Matrices,” at www.blackmesapress.com/Semi-inverses.htm, I proposed the concept of the (principal) semi-inverse of a nonsingular matrix A, so designated because, by the proposed transform φ(A) = A^i, a successive application of the transform formally gives φ(φ(A)) = (A^i)^i = A^{i·i} = A^{-1}, the ordinary inverse of A. This semi-inversion operator was described as essentially being handled computationally by the eigenvalue relation Av = λv extended to exponentiation of the eigenvalues λ_j by the power i.

          In the earlier article I developed a formula for semi-inverting a 2×2 nonsingular matrix with distinct eigenvalues; the same general approach can (more laboriously) be pursued for larger matrices.

          A considerable gain in ease of computation of the semi-inverse is at hand when the square nonsingular matrix, of whatever size, is diagonalizable, whether its eigenvalues are distinct or not (i.e. provided the vector space F^{n×1} has an eigenvector basis, a necessary and sufficient condition for diagonalizability). It seems prudent at this point simply to state and prove the following:


THEOREM: Every diagonalizable nonsingular matrix is semi-invertible.


PROOF: Let A be any diagonalizable and invertible matrix. Then, since A is diagonalizable, there is a nonsingular modal matrix M such that A = M[Dg(λ_j)]M^{-1}, where Dg(λ_j) denotes the diagonal matrix having the eigenvalues of A on the diagonal, and where M, as is customary, is formed by taking respective eigenvectors as columns. But then exponentiation of A by semi-inversion merely consists of formally computing A^i as

A^i = M[Dg(λ_j^i)]M^{-1}.

Alternatively, since a matrix A is diagonalizable if and only if it has a spectral decomposition

A = Σ_j λ_j E_j

(where the matrices E_j are the principal idempotents of A and the summation is taken over the distinct eigenvalues comprising the spectrum of A), and since for any holomorphic function f the matrix f(A) (with A diagonalizable) can be computed as f(A) = Σ_j f(λ_j) E_j, it follows that the semi-inverse of a diagonalizable nonsingular matrix A may be computed as

A^i = Σ_j (λ_j)^i E_j,

where the exponentiations of the eigenvalues (nonzero since A is invertible) are defined by Euler’s formula e^{iθ} = cos θ + i sin θ; i.e. since λ^i = e^{i ln λ}, we have, for λ_j > 0,

(λ_j)^i = cos(ln λ_j) + i sin(ln λ_j),

and likewise if λ_j < 0 we take (λ_j)^i = (-1)^i(-λ_j)^i, which in principal complex value, as implied by Euler’s relation e^{iπ} = -1, may be characterized as

(λ_j)^i = e^{-π}[cos(ln(-λ_j)) + i sin(ln(-λ_j))],   λ_j < 0.

The result of this process is the semi-inverse of A given itself already in diagonalized form, and in this form it is evident that a second exponentiation of the same kind produces eigenvalues (λ_j^i)^i = λ_j^{i·i} = λ_j^{-1} that are the reciprocals of the original ones, in keeping with the fact that the inverse of A has reciprocal eigenvalues. That is to say, the semi-inverse of the semi-inverse is the inverse. This completes the proof.
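As a purely computational aside (not part of the proof itself), the diagonalization route can be sketched numerically. The short Python/NumPy fragment below is a minimal illustration under the theorem's hypotheses; the helper name semi_inverse and the use of the principal complex logarithm are choices made here for the sketch, and the test matrix is the one treated in EXAMPLE 1 below.

import numpy as np

def semi_inverse(A):
    # Semi-inverse A^i of a diagonalizable nonsingular matrix A,
    # computed as M Dg(lambda_j^i) M^-1 with principal complex values.
    eigvals, M = np.linalg.eig(A)              # columns of M are eigenvectors
    if np.any(np.isclose(eigvals, 0)):
        raise ValueError("A is singular; 0^i is undefined")
    # lambda^i = exp(i * Log(lambda)), principal branch of the complex logarithm
    exponentiated = np.exp(1j * np.log(eigvals.astype(complex)))
    return M @ np.diag(exponentiated) @ np.linalg.inv(M)

A = np.array([[1., 0., 2.],
              [0., 2., 0.],
              [0., 1., -1.]])
Ai = semi_inverse(A)
# Applying the transform twice recovers, up to rounding, the ordinary inverse:
print(np.allclose(semi_inverse(Ai), np.linalg.inv(A)))      # True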


COROLLARY: If a matrix A is diagonalizable and nonsingular and if z is any complex number, then the matrix A^z exists.

PROOF: By the theorem, the semi-inverse of A is well-defined, and if z = x + yi one may compute A^z = A^{x+yi} = (A^x)(A^i)^y = M[Dg(λ_j^{x+yi})]M^{-1} by diagonalization.
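The corollary admits a similar numerical sketch under the same assumptions (the helper name matrix_power_z is again merely illustrative):

import numpy as np

def matrix_power_z(A, z):
    # A^z for a diagonalizable nonsingular A and any complex z,
    # via A^z = M Dg(lambda_j^z) M^-1 with principal complex values.
    eigvals, M = np.linalg.eig(A)
    powered = np.exp(z * np.log(eigvals.astype(complex)))
    return M @ np.diag(powered) @ np.linalg.inv(M)

# Sanity checks: z = -1 gives the ordinary inverse, and applying z = i twice does too.
A = np.array([[1., 0., 2.], [0., 2., 0.], [0., 1., -1.]])
print(np.allclose(matrix_power_z(A, -1), np.linalg.inv(A)))                        # True
print(np.allclose(matrix_power_z(matrix_power_z(A, 1j), 1j), np.linalg.inv(A)))    # True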


EXAMPLE 1: Let

A = [ 1   0   2 ]
    [ 0   2   0 ]
    [ 0   1  -1 ]

with eigenvalues λ_1 = 1, λ_2 = -1, λ_3 = 2. Since A is 3×3 and has three distinct (and nonzero) eigenvalues, A is diagonalizable (and nonsingular). For the diagonalization of A, the modal matrix M will have as its columns the respective eigenvectors (1,0,0)^T, (1,0,-1)^T, and (2,3,1)^T so that

M = [ 1   1   2 ]          M^{-1} = [ 1   -1     1 ]
    [ 0   0   3 ]                   [ 0   1/3   -1 ]
    [ 0  -1   1 ]                   [ 0   1/3    0 ]

The canonical diagonalization of A is then

A = M[Dg(1, -1, 2)]M^{-1}

and if φ denotes the matrix-valued transform that maps A to its semi-inverse, then the semi-inverse of A is found by simply replacing each eigenvalue in the diagonal matrix with its exponentiation by i, where in principal complex value 1^i = 1, (-1)^i = e^{-π}, and 2^i = cos(ln 2) + i sin(ln 2). The result would work out to be

φ(A) = A^i = M[Dg(1, e^{-π}, 2^i)]M^{-1}

            = [ 1   (e^{-π} + 2·2^i)/3 - 1   1 - e^{-π} ]
              [ 0             2^i                 0      ]
              [ 0   (2^i - e^{-π})/3          e^{-π}     ]

Alternatively, from the diagonalization A = M[Dg(λ_j)]M^{-1} we could have determined the principal idempotent decomposition (spectral decomposition)

A = (1)E_1 + (-1)E_2 + (2)E_3

with principal idempotents (each E_j being the product of the j-th column of M with the j-th row of M^{-1})

E_1 = [ 1  -1   1 ]     E_2 = [ 0   1/3  -1 ]     E_3 = [ 0   2/3   0 ]
      [ 0   0   0 ]           [ 0    0    0 ]           [ 0    1    0 ]
      [ 0   0   0 ]           [ 0  -1/3   1 ]           [ 0   1/3   0 ]

and the semi-inversion is performed by simply exponentiating the eigenvalues 1, -1, and 2, i.e. A^i = (1)^i E_1 + (-1)^i E_2 + (2)^i E_3, with the same results as before.
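Readers who wish to check EXAMPLE 1 numerically may find the following sketch useful (illustrative only); it forms each principal idempotent E_j as the outer product of the j-th column of M with the j-th row of M^{-1} and confirms that the spectral route and the diagonalization route agree.

import numpy as np

M = np.array([[1., 1., 2.],
              [0., 0., 3.],
              [0., -1., 1.]])
Minv = np.linalg.inv(M)
lams = np.array([1., -1., 2.])

# Principal idempotents: E_j = (j-th column of M)(j-th row of M^-1)
E = [np.outer(M[:, j], Minv[j, :]) for j in range(3)]

print(np.allclose(sum(E), np.eye(3)))                # E_1 + E_2 + E_3 = I
A = sum(l * Ej for l, Ej in zip(lams, E))            # recovers A itself

lam_i = np.exp(1j * np.log(lams.astype(complex)))    # 1^i, (-1)^i = e^-pi, 2^i
Ai_spectral = sum(l * Ej for l, Ej in zip(lam_i, E))
Ai_diagonal = M @ np.diag(lam_i) @ Minv
print(np.allclose(Ai_spectral, Ai_diagonal))         # the two routes agree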


EXAMPLE 2: Even if the eigenvalues are not all distinct (so long as there is still an eigenbasis, so that the matrix is diagonalizable), the semi-inverse can be computed the same way as in the previous example. E.g. for a 3×3 matrix A with eigenvalues λ_1, λ_2, λ_3 (not all distinct) and respective eigenvectors (1,2,1)^T, (-1,1,0)^T, and (-1,0,1)^T, we have the modal matrix

M = [ 1  -1  -1 ]     with inverse     M^{-1} = (1/4)[  1   1   1 ]
    [ 2   1   0 ]                                    [ -2   2  -2 ]
    [ 1   0   1 ]                                    [ -1  -1   3 ]

so that from the resulting diagonalization A = M[Dg(λ_1, λ_2, λ_3)]M^{-1} the desired semi-inverse is

A^i = M[Dg(λ_1^i, λ_2^i, λ_3^i)]M^{-1},

the repeated eigenvalue simply being exponentiated in each diagonal position in which it occurs.
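Since the particular eigenvalues play no special role once an eigenbasis is available, a numerical illustration of the repeated-eigenvalue case can use any convenient values; in the sketch below the eigenvalues 5, 1, 1 are hypothetical stand-ins chosen only for the illustration and are not taken from the example above.

import numpy as np

M = np.array([[1., -1., -1.],
              [2., 1., 0.],
              [1., 0., 1.]])          # columns: (1,2,1), (-1,1,0), (-1,0,1)
Minv = np.linalg.inv(M)               # equals (1/4)[[1,1,1],[-2,2,-2],[-1,-1,3]]
lams = np.array([5., 1., 1.])         # hypothetical eigenvalues, one of them repeated

A = M @ np.diag(lams) @ Minv                          # diagonalizable, nonsingular
lam_i = np.exp(1j * np.log(lams.astype(complex)))     # 5^i, 1^i = 1, 1^i = 1
Ai = M @ np.diag(lam_i) @ Minv                        # the semi-inverse
Aii = M @ np.diag(np.exp(1j * np.log(lam_i))) @ Minv  # semi-inverse of the semi-inverse
print(np.allclose(Aii, np.linalg.inv(A)))             # True: (A^i)^i = A^-1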


It should be mentioned that, in terms of the techniques described here, a diagonalizable matrix A always needs also to be invertible (diagonalizability and invertibility being independent conditions), because otherwise, if one of the eigenvalues were λ_j = 0, the corresponding eigenvalue exponentiations would have to include the undefined matrix element 0^i, and a further exponentiation of such an element would formally yield (0^i)^i = 0^{-1} = 1/0, which of course is meaningless. But the combined requirements of invertibility and diagonalizability are always sufficient for the semi-inverse of a matrix to exist.
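A one-line illustration of the difficulty, using ordinary complex arithmetic in Python (shown only to make the point concrete):

# Exponentiating a zero eigenvalue by i is undefined; Python's complex power agrees:
try:
    (0.0) ** 1j
except ZeroDivisionError:
    print("0 ** i is undefined (complex power of zero)")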


          However, consider also the following.


EXAMPLE 3: Let

A = [ 1   1 ]
    [ 0   1 ]

The eigenvalues of A are λ_1 = λ_2 = 1 (a repeated eigenvalue), and since one may verify that the only eigenvectors are those belonging to the eigenspace consisting of multiples of the vector (1,0)^T, so that there is no eigenbasis for the column space of A, the matrix A is not diagonalizable. However, there is a way to semi-invert A. If we examine powers of A:

A^2 = [ 1   2 ]        A^3 = [ 1   3 ]
      [ 0   1 ]              [ 0   1 ]

etc., we observe an emerging pattern suggesting that for any positive integer n,

A^n = [ 1   n ]
      [ 0   1 ]

That this is indeed the case can easily be shown by mathematical induction: if k is any value for which the pattern holds, then

A^k · A = [ 1   k ] [ 1   1 ]  =  [ 1   k+1 ]
          [ 0   1 ] [ 0   1 ]     [ 0    1  ]

which is the matrix A to the power n for n = k+1. Now when we put n = i to formally produce

A^i = [ 1   i ]
      [ 0   1 ]

we may verify that this result is consistent with the definition of the principal semi-inverse of A by examining, as before, the powers of this semi-inverse itself:

(A^i)^2 = [ 1   2i ]        (A^i)^3 = [ 1   3i ]
          [ 0    1 ]                  [ 0    1 ]

etc., producing a pattern giving (as provable again by mathematical induction)

(A^i)^n = [ 1   ni ]
          [ 0    1 ]

in which putting n = i produces

(A^i)^i = [ 1   i^2 ]  =  [ 1   -1 ]
          [ 0    1  ]     [ 0    1 ]

which is in fact the inverse of A as needed. Thus A has worked out to be semi-invertible although not diagonalizable.
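As a cross-check on this non-diagonalizable example, one may also form A^i as exp(i·log A) by way of a matrix logarithm and matrix exponential, a route not used above; the brief SciPy sketch below reproduces the pattern-based results.

import numpy as np
from scipy.linalg import expm, logm

A = np.array([[1., 1.],
              [0., 1.]])
Ai = expm(1j * logm(A))              # A^i = exp(i log A); here logm(A) = [[0,1],[0,0]]
print(np.allclose(Ai, [[1, 1j], [0, 1]]))            # matches the pattern-based A^i
Aii = expm(1j * logm(Ai))            # semi-inverse of the semi-inverse
print(np.allclose(Aii, np.linalg.inv(A)))            # equals A^-1, as required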



Altogether we have proved the following:


THEOREM: For nonsingular matrices, diagonalizability is a sufficient but not in general a necessary condition for semi-invertibility.