Low Rank Approximations

  • Forsyth D

Abstract

Preface

Simple linear models are commonly used in engineering despite the fact that the real world is often nonlinear. While being simple, however, the models also have to be accurate. Mathematical models are obtained from first principles (natural laws, interconnection, etc.) and from experimental data. Modeling from first principles aims at exact models; this approach is the default one in the natural sciences. Modeling from data, on the other hand, allows us to tune complexity against accuracy; this approach is common in engineering, where experimental data is available and a simple approximate model is preferred to a complicated exact one. For example, optimal control of a complex (high-order, nonlinear, time-varying) system is currently intractable; however, using simple (low-order, linear, time-invariant) approximate models together with robust control methods may achieve sufficiently high performance.

The topic of the book is data modeling by reduced-complexity mathematical models. A unifying theme of the book is low-rank approximation: a prototypical data modeling problem. The rank of a matrix constructed from the data corresponds to the complexity of a linear model that fits the data exactly. If the data matrix is full rank, there is no exact low-complexity linear model for that data; in this case, the aim is to find an approximate model. The approximate modeling approach considered in the book is to find a small (in some specified sense) modification of the data that renders the modified data exact. The exact model for the modified data is an optimal (in the specified sense) approximate model for the original data. The corresponding computational problem is low-rank approximation of the data matrix. The rank of the approximation allows us to trade off accuracy against complexity. Apart from the choice of the rank, the book covers two other user choices: the approximation criterion and the matrix structure. These choices correspond to prior knowledge about the accuracy of the data and about the model class, respectively. An example of a matrix structure, the Hankel structure, corresponds to the linear time-invariant model class. The book presents local optimization, subspace, and convex relaxation-based heuristic methods for Hankel structured low-rank approximation.

Low-rank approximation is a core problem in applications. Generic examples in systems and control are model reduction and system identification. Low-rank approximation is equivalent to the principal component analysis method in machine learning; indeed, dimensionality reduction, classification, and information retrieval problems can be posed and solved as particular low-rank approximation problems. Sylvester structured low-rank approximation has applications in computer algebra for the decoupling, factorization, and common divisor computation of polynomials.
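As a rough numerical illustration of the ideas above (a minimal sketch, not code from the book, using made-up data and sizes), the snippet below computes the optimal rank-r approximation of a data matrix via the truncated singular value decomposition and shows that the same computation, applied to centered data, yields the principal components.

```python
# Minimal sketch of unstructured low-rank approximation (not from the book).
# By the Eckart-Young theorem, truncating the SVD gives the best rank-r
# approximation of D in the Frobenius (and spectral) norm.
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 10))  # rank-3 "exact" data
D += 0.05 * rng.standard_normal(D.shape)                         # measurement noise -> full rank

r = 3                                                            # user choice: model complexity
U, s, Vt = np.linalg.svd(D, full_matrices=False)
D_hat = U[:, :r] * s[:r] @ Vt[:r, :]                             # modified data, exactly rank r
print("rank of D:", np.linalg.matrix_rank(D))                    # 10 (full rank)
print("approximation error:", np.linalg.norm(D - D_hat, "fro"))

# Principal component analysis is the same computation applied to centered data:
# the right singular vectors of the centered matrix are the principal directions.
Dc = D - D.mean(axis=0)
_, _, Vt_c = np.linalg.svd(Dc, full_matrices=False)
principal_directions = Vt_c[:r]
```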
The book covers two complementary aspects of data modeling: stochastic estimation and deterministic approximation. The former aims to find, from noisy data generated by a low-complexity system, an estimate of the data-generating system. The latter aims to find, from exact data generated by a high-complexity system, a low-complexity approximation of the data-generating system.

In applications, both the stochastic estimation and the deterministic approximation aspects are present: the data is imprecise due to measurement errors and is possibly generated by a complicated phenomenon that is not exactly representable by a model in the considered model class. The development of data modeling methods in system identification and signal processing, however, has been dominated by the stochastic estimation point of view. In the mainstream data modeling literature the approximation error is represented as a random process. The approximation error is, however, deterministic, so it is not natural to treat it as a random process. Moreover, the approximation error does not satisfy stochastic regularity conditions such as stationarity, ergodicity, and Gaussianity. These aspects complicate the stochastic approach.

An exception to the stochastic paradigm in data modeling is the behavioral approach, initiated by J. C. Willems in the mid-1980s. Although the behavioral approach is motivated by the deterministic approximation aspect of data modeling, it does not exclude the stochastic estimation approach. This book uses the behavioral approach as a unifying language in defining modeling problems and presenting their solutions.

The theory and methods developed in the book lead to algorithms, which are implemented in software. The algorithms clarify the ideas and, vice versa, the software implementation clarifies the algorithms. Indeed, the software is the ultimate unambiguous description of how the theory and methods are applied in practice. The software allows the reader to reproduce and to modify the examples in the book. The exposition reflects the sequence: theory → algorithms → implementation. Correspondingly, the text is interwoven with code that generates the numerical examples being discussed. The reader can try out these methods on their own problems and data. Experimenting with the methods and working on the exercises will lead to a deeper understanding of the theory and hands-on experience with the methods.
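The preface notes that Hankel structure corresponds to the linear time-invariant model class. As a rough sketch of that connection (again not code from the book; the trajectory and matrix sizes are made up), the snippet below builds a Hankel matrix from a trajectory of a second-order LTI system and checks that its rank equals the system order, whereas a noisy trajectory gives a full-rank matrix, which is what motivates Hankel structured low-rank approximation.

```python
# Minimal sketch of the Hankel-structure / LTI-model connection (not from the book).
import numpy as np
from scipy.linalg import hankel

# Exact trajectory of a second-order LTI system: w(k+2) = 2*cos(0.3)*w(k+1) - w(k).
k = np.arange(30)
w = np.cos(0.3 * k)

# Hankel matrix with 5 rows: H[i, j] = w[i + j].
H = hankel(w[:5], w[4:])
print("rank of Hankel matrix (exact data):", np.linalg.matrix_rank(H))        # 2 = system order

# With measurement noise the Hankel matrix becomes full rank; finding an
# approximate LTI model then calls for Hankel *structured* low-rank
# approximation (plain SVD truncation would destroy the Hankel structure).
rng = np.random.default_rng(1)
w_noisy = w + 0.01 * rng.standard_normal(w.shape)
H_noisy = hankel(w_noisy[:5], w_noisy[4:])
print("rank of Hankel matrix (noisy data):", np.linalg.matrix_rank(H_noisy))  # 5
```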

Cite

CITATION STYLE

APA

Forsyth, D. (2019). Low Rank Approximations. In Applied Machine Learning (pp. 117–138). Springer International Publishing. https://doi.org/10.1007/978-3-030-18114-7_6
