Iterative methods such as Lanczos and Jacobi-Davidson are typically used to compute a small number of eigenvalues and eigenvectors of a sparse matrix. However, these methods are not effective in certain large-scale applications, for example, "global tight-binding molecular dynamics." Such applications require all the eigenvectors of a large sparse matrix; the eigenvectors can be computed a few at a time and discarded after a simple update step in the modeling process. We show that by using sparse matrix methods, a direct-iterative hybrid scheme can significantly reduce memory requirements while requiring less computational time than a banded direct scheme. Our method also allows a more scalable parallel formulation for eigenvector computation through spectrum slicing. We discuss our method and provide empirical results for a wide variety of sparse matrix test problems. © Springer-Verlag Berlin Heidelberg 2003.
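The spectrum-slicing idea described above can be illustrated with a small sketch (not the authors' implementation): SciPy's shift-invert `eigsh` is itself a direct-iterative hybrid, in which a sparse factorization of (A - sigma*I) accelerates an iterative Lanczos solve near the shift sigma. Each slice of the spectrum is computed independently with its own shift, and the eigenvectors can be consumed and discarded slice by slice, bounding memory. The 1-D Laplacian test matrix, the number of slices, and the per-slice `k` are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Small symmetric test matrix (1-D Laplacian); a stand-in for a large sparse A.
n = 50
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Reference spectrum, used here only to choose slice boundaries and to verify.
exact = np.linalg.eigvalsh(A.toarray())

# Partition the spectrum into 5 disjoint half-open slices [a, b).
# Endpoints are padded slightly so the extreme eigenvalues fall inside.
edges = np.linspace(exact[0] - 1e-8, exact[-1] + 1e-8, 6)

collected = []
for a, b in zip(edges[:-1], edges[1:]):
    sigma = 0.5 * (a + b)  # shift at the slice center
    # Shift-invert Lanczos: eigsh factors (A - sigma*I) with a sparse direct
    # solver internally, then iterates to find eigenpairs nearest sigma.
    # k must be large enough to cover the densest slice.
    vals, vecs = eigsh(A, k=20, sigma=sigma, which="LM")
    for lam, v in zip(vals, vecs.T):
        if a <= lam < b:  # keep only eigenpairs belonging to this slice
            collected.append(lam)
            # In the application, the eigenvector v would feed an update
            # step here and then be discarded, so only one slice of
            # eigenvectors is ever held in memory.

collected = np.sort(collected)
```

Because the slices are disjoint half-open intervals, each eigenvalue is claimed by exactly one slice, and the slices could run in parallel, which is the scalability argument the abstract makes.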
Teranishi, K., Raghavan, P., & Yang, C. (2003). Time-memory trade-offs using sparse matrix methods for large-scale eigenvalue problems. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2667, 840–847. https://doi.org/10.1007/3-540-44839-x_88