This is probably not what you expected. No, it's not about bookish and socially awkward people, nor is it about candy. It is a paper (at the bottom of this page) about linear algebra and some analysis, from which I derive a dimension reduction technique and an interesting matrix inversion algorithm.

While this work is original to me, I have since discovered that most (if not all) of the significant results were known and published long ago, though perhaps the methods by which I arrive at them are uniquely mine. The matrix inversion technique that I derive is an application of the Sherman-Morrison formula known as matrix inversion by annihilation of rank. Perhaps someone will find my derivation of the Sherman-Morrison formula helpful; it requires nothing more than a basic knowledge of linear algebra. I also come within a whisker of deriving the more general Woodbury matrix identity, and my (almost) derivation of it may likewise be more approachable than the usual ones. If you are aware of any other work that I am duplicating, I would be grateful if you would use my contact form to bring it to my attention.
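For reference, here are the two identities in their standard form. In the first, u and v are column vectors and the update to A has rank one; in the second, U, C, and V are matrices of compatible shapes and the update can have any rank. Each holds whenever the quantity inverted on the right-hand side is nonzero (respectively, nonsingular).

```latex
% Sherman-Morrison: the inverse of a rank-one update of A,
% valid when 1 + v^T A^{-1} u is nonzero.
(A + u v^{\top})^{-1} \;=\; A^{-1} \;-\; \frac{A^{-1} u\, v^{\top} A^{-1}}{1 + v^{\top} A^{-1} u}

% Woodbury: the general rank-k version,
% valid when C^{-1} + V A^{-1} U is nonsingular.
(A + U C V)^{-1} \;=\; A^{-1} \;-\; A^{-1} U \left(C^{-1} + V A^{-1} U\right)^{-1} V A^{-1}
```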

The dimension reduction technique can lead to significant efficiencies in solving linear equations under the right conditions. However, finding problems that satisfy the conditions for NeRD reduction is an art, not a science; in essence, it is a solution in search of a problem. It is my hope that someone with a real-world problem that satisfies the NeRD reduction conditions will find this and put it to good use. If you do find it useful, I would be grateful if you would use my contact form to let me know about your application.

The matrix inversion method actually has the potential to be useful in general, not just under specific conditions. I present a sketch of an algorithm in the paper that inverts a matrix one row at a time. Probably the most interesting feature of this routine is that it can invert the matrix in place: its memory use grows by N, not N², not counting the memory used for the original matrix. It turns out that this is almost identical to the method proposed by David J. Edelblute in a 1966 paper in Mathematics of Computation. The main difference is that he proposed updating the inverse one column at a time, presumably because that is the way FORTRAN stores two-dimensional arrays. I have implemented the algorithm sketched in my paper for inverting a matrix by the general NeRD inversion technique (a.k.a. matrix inversion by annihilation of rank) in C.
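To give the flavor of the approach, here is a minimal sketch of row-at-a-time inversion. Write A = I + Σₖ eₖ(aₖ − eₖ)ᵀ, where aₖ is row k of A, and fold in one row per step with a Sherman-Morrison update. This is my own illustration, not the NeRDS.c code: it does no partial pivoting (so it fails on some perfectly invertible matrices) and writes the inverse to a separate array rather than updating in place, but its extra scratch memory is still only O(N).

```c
#include <stdlib.h>

/* Sketch only: invert an n x n row-major matrix A into B by applying
 * one Sherman-Morrison rank-one update per row, starting from the
 * identity.  No pivoting; returns 0 on success, -1 on failure. */
int sm_invert(int n, const double *A, double *B)
{
    double *w = malloc(2 * (size_t)n * sizeof *w);   /* O(N) scratch */
    if (!w) return -1;
    double *z = w + n;

    /* B starts as the inverse of the identity. */
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            B[i * n + j] = (i == j) ? 1.0 : 0.0;

    /* Step k replaces row k of the identity with row k of A:
     * B <- B - (B u)(v^T B) / (1 + v^T B u), u = e_k, v = a_k - e_k. */
    for (int k = 0; k < n; k++) {
        for (int i = 0; i < n; i++)
            w[i] = B[i * n + k];                     /* w = B e_k */
        for (int j = 0; j < n; j++) {                /* z^T = v^T B */
            double s = 0.0;
            for (int i = 0; i < n; i++)
                s += (A[k * n + i] - (i == k ? 1.0 : 0.0)) * B[i * n + j];
            z[j] = s;
        }
        double denom = 1.0 + z[k];                   /* 1 + v^T B u */
        if (denom == 0.0) { free(w); return -1; }    /* would need pivoting */
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                B[i * n + j] -= w[i] * z[j] / denom;
    }
    free(w);
    return 0;
}
```

Each step costs O(N²) work, so the whole inversion is O(N³), the same order as Gaussian elimination.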

Here are the source files for the matrix inversion routine. There is a header file mostly to provide a function prototype, but also to provide some documentation on how to call the function. This is a real-world implementation with partial pivoting to avoid division-by-zero errors. My testing suggests that the routine is quite robust.

NeRDS.c                   NeRDS.h

As with all direct linear algebra algorithms on digital computers with limited-precision arithmetic, this algorithm is prone to runaway round-off error for certain types of data. I have made the detection of ill-tempered data as robust as I know how without significantly impacting the execution speed. While this routine will return either a divide-by-zero or non-finite-result error on many types of ill-tempered data, there is no guarantee it will catch all cases. The only reliable check is to compute the difference between the identity matrix and the product of the result and the original matrix.
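That reliability check is straightforward to code. Here is a sketch of a max-norm residual test (my own illustration, not part of NeRDS.c): a value near machine epsilon means the computed inverse can be trusted; a large value means the data were too ill-tempered for the available precision.

```c
#include <math.h>

/* Max-norm residual of a computed inverse: max_ij |(A*B - I)_ij|,
 * where A is the original n x n row-major matrix and B is the
 * candidate inverse returned by the inversion routine. */
double inverse_residual(int n, const double *A, const double *B)
{
    double worst = 0.0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double s = (i == j) ? -1.0 : 0.0;   /* subtract identity */
            for (int k = 0; k < n; k++)
                s += A[i * n + k] * B[k * n + j];
            if (fabs(s) > worst)
                worst = fabs(s);
        }
    return worst;
}
```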

Here is a paper describing the math in detail.

Click here for the document.
Last updated: Monday, December 28, 2015 02:17:03 PM