Eigenvectors and Diagonalizable Matrices

  • Aggarwal, C. C.

Abstract

"Mathematics is the art of giving the same name to different things."-Henri Poincare 3.1 Introduction Any square matrix A of size d × d can be considered a linear operator, which maps the d-dimensional column vector x to the d-dimensional vector Ax. A linear transformation Ax is a combination of operations such as rotations, reflections, and scalings of a vector x. A diagonalizable matrix is a special type of linear operator that only corresponds to a simultaneous scaling along d different directions. These d different directions are referred to as eigenvectors and the d scale factors are referred to as eigenvalues. All such matrices can be decomposed using an invertible d × d matrix V and a diagonal d × d matrix Δ: A = V ΔV −1 The columns of V contain d eigenvectors and the diagonal entries of Δ contain the eigen-values. For any x ∈ R d , one can geometrically interpret A x using the decomposition in terms of a sequence of three transformations: (i) Multiplication of x with V −1 computes the coordinates of x in a (possibly non-orthogonal) basis system corresponding to the columns (eigenvectors) of V , (ii) multiplication of V −1 x with Δ to create ΔV −1 x dilates these coordinates with scale factors in Δ in the eigenvector directions, and (iii) final multiplication with V to create V ΔV −1 x transforms the coordinates back to the original basis system (i.e., the standard basis). The overall result is an anisotropic scaling in d eigenvector directions. Linear transformations that can be represented in this way correspond to diagonalizable matrices. A d × d diagonalizable matrix represents a linear transformation corresponding to anisotropic scaling in d linearly independent directions. When the columns of matrix V are orthonormal vectors, we have V −1 = V T. In such a case, the scaling is done along mutually orthogonal directions, and the matrix A is always © Springer Nature Switzerland AG 2020 C. C. Aggarwal, Linear Algebra and Optimization for Machine Learning, https://doi.

Cite

Aggarwal, C. C. (2020). Eigenvectors and Diagonalizable Matrices. In Linear Algebra and Optimization for Machine Learning (pp. 97–139). Springer International Publishing. https://doi.org/10.1007/978-3-030-40344-7_3
