I'm looking for a fast way to compute the k smallest eigenvalues of a large, real, symmetric, tridiagonal matrix, along with the orthonormal eigenvectors associated with those eigenvalues. By large, I mean 1000x1000 or larger (possibly much larger). It is also known that all of the subdiagonal elements have the same value. Like:


a_0 |  b  |  0  |  0  | ... |    0    |    0    |    0
 b  | a_1 |  b  |  0  | ... |    0    |    0    |    0
 0  |  b  | a_2 |  b  | ... |    0    |    0    |    0
 0  |  0  |  b  | a_3 | ... |    0    |    0    |    0
... | ... | ... | ... | ... |   ...   |   ...   |   ...
 0  |  0  |  0  |  0  | ... | a_{n-3} |    b    |    0
 0  |  0  |  0  |  0  | ... |    b    | a_{n-2} |    b
 0  |  0  |  0  |  0  | ... |    0    |    b    | a_{n-1}

A lot of sites mention that there exists a faster-than-general algorithm for symmetric tridiagonal matrices, the so-called "implicit QR method" based on QR factorization, but few actually say anything about how it works. I found one very convoluted MATLAB implementation with barely any documentation, but I couldn't get it to work correctly after attempting to port it to Java. It was also more general than I need, since it performed a full singular value decomposition, i.e. it also handled complex matrices that are not diagonalizable.

If it helps any, the eigenvalues are known to all be positive. Also, although n might be large, as mentioned above, k (the number of smallest eigenvalues needed) will often be 10 or smaller.

Thanks for any insight anyone might be able to give. I realise that this is a bit in-depth considering that this is an ASM forum, not an algorithms forum, but it's the sort of thing that would definitely lend itself toward optimization in ASM. :)
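For what it's worth, the "implicit QR method" those sites refer to is, for symmetric tridiagonal matrices, usually the implicit-shift QL/QR iteration (the classic tql1/tqli formulation from EISPACK and Numerical Recipes). Below is a minimal eigenvalues-only sketch in Java adapted from that classic formulation; the class/array names, the tolerance, and the small test matrix are my own choices, not from the thread:

```java
import java.util.Arrays;

public class TridiagQL {
    /**
     * Implicit-shift QL iteration for a symmetric tridiagonal matrix.
     * d[0..n-1] holds the diagonal; e[0..n-2] holds the subdiagonal
     * (e[n-1] is scratch). On return, d contains the eigenvalues (unsorted).
     */
    static void eigenvalues(double[] d, double[] e) {
        int n = d.length;
        for (int l = 0; l < n; l++) {
            int iter = 0, m;
            do {
                // Find the first negligible subdiagonal entry at or after l.
                for (m = l; m < n - 1; m++) {
                    double scale = Math.abs(d[m]) + Math.abs(d[m + 1]);
                    if (scale + Math.abs(e[m]) == scale) break;
                }
                if (m != l) {
                    if (iter++ == 50) throw new ArithmeticException("no convergence");
                    // Wilkinson-style shift computed from the leading 2x2 block.
                    double g = (d[l + 1] - d[l]) / (2.0 * e[l]);
                    double r = Math.hypot(g, 1.0);
                    g = d[m] - d[l] + e[l] / (g + Math.copySign(r, g));
                    double s = 1.0, c = 1.0, p = 0.0;
                    boolean underflow = false;
                    // Chase the bulge back up with a sequence of Givens rotations.
                    for (int i = m - 1; i >= l; i--) {
                        double f = s * e[i], b = c * e[i];
                        r = Math.hypot(f, g);
                        e[i + 1] = r;
                        if (r == 0.0) { d[i + 1] -= p; e[m] = 0.0; underflow = true; break; }
                        s = f / r;
                        c = g / r;
                        g = d[i + 1] - p;
                        double t = (d[i] - g) * s + 2.0 * c * b;
                        p = s * t;
                        d[i + 1] = g + p;
                        g = c * t - b;
                    }
                    if (underflow) continue;
                    d[l] -= p;
                    e[l] = g;
                    e[m] = 0.0;
                }
            } while (m != l);
        }
    }

    public static void main(String[] args) {
        // Test case with a known spectrum: tridiag(-1, 2, -1) of size n
        // has eigenvalues 2 - 2*cos(k*pi/(n+1)), k = 1..n.
        double[] d = {2, 2, 2, 2, 2};
        double[] e = {-1, -1, -1, -1, 0};
        eigenvalues(d, e);
        Arrays.sort(d);
        System.out.println(Arrays.toString(d)); // smallest should be 2 - sqrt(3)
    }
}
```

Note that this computes all n eigenvalues; to get eigenvectors for just the k smallest, a common combination is this routine (or bisection) for the eigenvalues plus inverse iteration per eigenvalue.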

Hi,

First: if your subdiagonals are controllable, Gershgorin discs might be of help...

Second: you could also try the inverse power method:

x_{i+1} = A^{-1} * x_i; the iterates converge to the eigenvector of the smallest eigenvalue. With x_i normalized each step, the product x_{i+1}^T * x_i converges to 1/lambda_min, so the smallest eigenvalue is its reciprocal. The start vector can be almost random; a good guess might speed it up, or just choose (1 1 1 ... 1 1). Normalizing x_i each step also keeps the components from overflowing or underflowing.

It is much easier to program than a Householder transformation or a Givens rotation; we won't even talk about a Gram-Schmidt approach.
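To make this concrete, here is a sketch of inverse power iteration in Java, specialized to the matrix above (diagonal a[], constant off-diagonal b). Rather than forming A^{-1}, each step solves the tridiagonal system A*y = x with the Thomas algorithm in O(n), which is safe here because the matrix is symmetric positive definite. All names are illustrative:

```java
import java.util.Arrays;

public class InversePower {
    /** Solve T y = rhs where T is symmetric tridiagonal with diagonal a[]
     *  and constant off-diagonal b, via the Thomas algorithm (no pivoting;
     *  fine for SPD or diagonally dominant T). */
    static double[] solveTridiag(double[] a, double b, double[] rhs) {
        int n = a.length;
        double[] cp = new double[n]; // modified superdiagonal factors
        double[] dp = new double[n]; // modified right-hand side
        cp[0] = b / a[0];
        dp[0] = rhs[0] / a[0];
        for (int i = 1; i < n; i++) {
            double denom = a[i] - b * cp[i - 1];
            cp[i] = b / denom;
            dp[i] = (rhs[i] - b * dp[i - 1]) / denom;
        }
        double[] y = new double[n];
        y[n - 1] = dp[n - 1];
        for (int i = n - 2; i >= 0; i--) y[i] = dp[i] - cp[i] * y[i + 1];
        return y;
    }

    /** Inverse power iteration: converges to the eigenvalue of smallest
     *  magnitude (here simply the smallest, since all eigenvalues are positive). */
    static double smallestEigenvalue(double[] a, double b, int iters) {
        int n = a.length;
        double[] x = new double[n];
        Arrays.fill(x, 1.0); // the suggested (1 1 ... 1) start vector
        for (int k = 0; k < iters; k++) {
            double[] y = solveTridiag(a, b, x);
            double norm = 0.0;
            for (double v : y) norm += v * v;
            norm = Math.sqrt(norm);
            for (int i = 0; i < n; i++) x[i] = y[i] / norm; // normalize each step
        }
        // Rayleigh quotient x^T A x of the normalized iterate.
        double rq = 0.0;
        for (int i = 0; i < n; i++) {
            double tx = a[i] * x[i];
            if (i > 0) tx += b * x[i - 1];
            if (i < n - 1) tx += b * x[i + 1];
            rq += x[i] * tx;
        }
        return rq;
    }

    public static void main(String[] args) {
        // tridiag(-1, 2, -1), n = 5: smallest eigenvalue is 2 - 2cos(pi/6) = 2 - sqrt(3).
        double[] a = {2, 2, 2, 2, 2};
        System.out.println(smallestEigenvalue(a, -1.0, 100));
    }
}
```

After convergence, x itself is the (normalized) eigenvector of the smallest eigenvalue; for the k smallest, the standard extension is to shift (A - sigma*I) near each eigenvalue estimate, or to deflate/orthogonalize against eigenvectors already found.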


I posted a complete solution for eigenvectors based on Jacobian rotation, as part of my defunct 'ragdoll physics' framework.

First: if your subdiagonals are controllable, Gershgorin discs might be of help...

Unfortunately, the subdiagonals aren't necessarily small compared to the diagonal.

Second: you could also try the inverse power method:

This might be good if it converges fast enough. (The inverse of a tridiagonal matrix is generally dense, it turns out, but there is no need to form it: each iteration can solve the tridiagonal system directly in O(n).)

Thanks :)
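As an aside on the Gershgorin suggestion: for this matrix the discs are intervals on the real axis and cost O(n) to compute, so even loose bounds are nearly free. A tiny sketch (the names are mine):

```java
public class Gershgorin {
    /** Gershgorin bounds for a symmetric tridiagonal matrix with diagonal a[]
     *  and constant off-diagonal b: every eigenvalue lies in the union of
     *  intervals [a_i - r_i, a_i + r_i], where r_i is the sum of off-diagonal
     *  magnitudes in row i (|b| in the first and last rows, 2|b| elsewhere).
     *  Returns {lower bound, upper bound} on the whole spectrum. */
    static double[] bounds(double[] a, double b) {
        int n = a.length;
        double lo = Double.POSITIVE_INFINITY, hi = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < n; i++) {
            double r = (i == 0 || i == n - 1) ? Math.abs(b) : 2 * Math.abs(b);
            lo = Math.min(lo, a[i] - r);
            hi = Math.max(hi, a[i] + r);
        }
        return new double[] {lo, hi};
    }

    public static void main(String[] args) {
        double[] a = {2, 2, 2, 2, 2};
        double[] lohi = bounds(a, -1.0);
        // For tridiag(-1, 2, -1), n = 5, this gives [0, 4]; the true spectrum
        // 2 - 2cos(k*pi/6) lies in roughly (0.26, 3.74).
        System.out.println(lohi[0] + " .. " + lohi[1]);
    }
}
```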

The Jacobian rotation method is still the fastest method I know of.

If anyone knows of a faster or cheaper method, please do tell.

This solution has the added benefit of automatically normalizing the matrix so that all of the diagonal components are 1.0, which was crucial for my purposes.


The Jacobian rotation method is still the fastest method I know of.

I can't seem to find much info on this method. Could you post a link or some details of how it works?

No.

For a definition and explanation, search Google for 'Jacobian Matrix'.

For source code, if not mine, search Google for 'Diagonalize'.

I was using Jacobian diagonalization in order to create a normalized 4x4 inertia tensor from which I could extract a normalized 3x3 inertia tensor, i.e. the principal axes of rotation for an arbitrary 3D object with arbitrary mass distribution, and the moments of inertia about those axes, translated into the world axis system.

It's a 4x4 solver, not a general solution, but sure fits your requirements.
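For the curious: "Jacobian rotation" here refers to the classical Jacobi eigenvalue algorithm, which repeatedly applies plane rotations to zero out off-diagonal entries of a symmetric matrix. Below is a minimal cyclic-sweep sketch in Java (names, tolerance, and sweep count are my own choices); note it works on a dense matrix and does not exploit tridiagonal structure, so for very large n the methods above are a better fit:

```java
import java.util.Arrays;

public class JacobiEig {
    /** Cyclic Jacobi: diagonalizes symmetric A in place, accumulating the
     *  rotations in V so that the columns of V are the eigenvectors.
     *  Returns the diagonal of A (the eigenvalues, unsorted). */
    static double[] jacobi(double[][] A, double[][] V, int sweeps) {
        int n = A.length;
        for (int i = 0; i < n; i++) { Arrays.fill(V[i], 0.0); V[i][i] = 1.0; }
        for (int sweep = 0; sweep < sweeps; sweep++) {
            for (int p = 0; p < n - 1; p++) {
                for (int q = p + 1; q < n; q++) {
                    if (Math.abs(A[p][q]) < 1e-15) continue;
                    // Choose the rotation angle that zeroes A[p][q]
                    // (stable small-root formula for tan(angle)).
                    double theta = (A[q][q] - A[p][p]) / (2.0 * A[p][q]);
                    double t = Math.copySign(1.0, theta)
                             / (Math.abs(theta) + Math.sqrt(theta * theta + 1.0));
                    double c = 1.0 / Math.sqrt(t * t + 1.0), s = t * c;
                    for (int i = 0; i < n; i++) {          // A <- A * G
                        double aip = A[i][p], aiq = A[i][q];
                        A[i][p] = c * aip - s * aiq;
                        A[i][q] = s * aip + c * aiq;
                    }
                    for (int i = 0; i < n; i++) {          // A <- G^T * A
                        double api = A[p][i], aqi = A[q][i];
                        A[p][i] = c * api - s * aqi;
                        A[q][i] = s * api + c * aqi;
                    }
                    for (int i = 0; i < n; i++) {          // V <- V * G
                        double vip = V[i][p], viq = V[i][q];
                        V[i][p] = c * vip - s * viq;
                        V[i][q] = s * vip + c * viq;
                    }
                }
            }
        }
        double[] eig = new double[n];
        for (int i = 0; i < n; i++) eig[i] = A[i][i];
        return eig;
    }

    public static void main(String[] args) {
        // tridiag(-1, 2, -1), n = 3: eigenvalues are 2 - sqrt(2), 2, 2 + sqrt(2).
        double[][] A = {{2, -1, 0}, {-1, 2, -1}, {0, -1, 2}};
        double[][] V = new double[3][3];
        double[] eig = jacobi(A, V, 10);
        Arrays.sort(eig);
        System.out.println(Arrays.toString(eig));
    }
}
```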
