Abstract
Repeated application of eigen-centric methods to an evolving dataset reveals that the eigenvectors calculated by well-established computer implementations are not stable along an evolving sequence: the sign of any one eigenvector may point along either the positive or negative direction of its associated eigenaxis, and within any single eigen call the choice of sign does not affect the solution. This work reports a model-free algorithm that creates a consistently oriented basis of eigenvectors. The algorithm postprocesses the output of any well-established eigen call and is therefore agnostic to the particular implementation. Once the basis is consistently oriented, directional statistics can be applied to the eigenvectors in order to track their motion and summarize their dispersion. When a consistently oriented eigensystem is applied to machine-learning methods, the time series of training weights becomes interpretable in the context of the machine-learning model; ordinary linear regression is used to demonstrate such interpretability. A reference implementation of the algorithm reported herein has been written in Python and is freely available, both as source code and through the thucyd Python package.
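The sign ambiguity described above, and the idea of postprocessing an eigen call to remove it, can be illustrated with a minimal sketch. Note that this is a common sign-fixing heuristic for illustration only, not the paper's thucyd algorithm: each eigenvector is flipped either to align with a reference basis (e.g., the previous step in an evolving sequence) or, absent a reference, so that its largest-magnitude component is positive. The function name `orient_eigenvectors` is assumed here for the sketch.

```python
import numpy as np

def orient_eigenvectors(V, V_ref=None):
    """Flip eigenvector signs toward a consistent orientation.

    A simple sign-fixing heuristic (not the thucyd algorithm itself):
    align each column of V with the matching column of a reference basis
    when one is given; otherwise make the largest-magnitude component
    of each column positive.
    """
    V = V.copy()
    for j in range(V.shape[1]):
        if V_ref is not None:
            # Flip if the vector points away from its reference counterpart.
            if np.dot(V[:, j], V_ref[:, j]) < 0:
                V[:, j] = -V[:, j]
        else:
            # Canonical orientation: largest-|component| entry is positive.
            k = np.argmax(np.abs(V[:, j]))
            if V[k, j] < 0:
                V[:, j] = -V[:, j]
    return V

# Example: two eigen calls on nearly identical matrices may still disagree
# in sign, since either orientation solves the eigenproblem equally well;
# orienting both bases removes that ambiguity.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
C = A.T @ A
_, V1 = np.linalg.eigh(C)
_, V2 = np.linalg.eigh(C + 1e-9 * np.eye(4))
V1o = orient_eigenvectors(V1)
V2o = orient_eigenvectors(V2, V_ref=V1o)
print(np.allclose(V1o, V2o, atol=1e-4))  # the oriented bases now agree
```

Once every basis in the sequence is oriented this way, column-wise comparisons across time become meaningful, which is the prerequisite for the directional statistics the abstract mentions.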
Citation
Damask, J. (2020). A consistently oriented basis for eigenanalysis. International Journal of Data Science and Analytics, 10(4), 301–319. https://doi.org/10.1007/s41060-020-00227-z