Abstract
Given the entries of an m × n matrix A and those of a column n-vector b, the entries of the product Ab can be computed using mn multiplications and m(n−1) additions by direct application of the formula (Ab)_i = Σ_{j=1}^{n} a_{ij} b_j. However, in many cases the matrix A has a particular form, and Ab can be computed with fewer operations. For example, the finite Fourier transform can be computed with about n log₂ n multiplications and n log₂ n additions by using the fast Fourier transform (FFT) algorithm [4]. This paper deals with three aspects of algebraic complexity. The first section is concerned with lower bounds on the number of operations required to compute several functions. Several theorems are presented and their proofs sketched. The second section deals with relationships among the complexities of several sets of functions. In the third section, several matrices of general interest are examined, and upper bounds on the number of operations required to multiply by them are constructively derived.
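The operation counts quoted above for the direct method can be verified with a short sketch (not from the paper): computing each of the m entries of Ab takes n multiplications and n − 1 additions, for mn multiplications and m(n − 1) additions in total.

```python
# Direct matrix-vector product (Ab)_i = sum_j a[i][j] * b[j],
# instrumented to count multiplications and additions.
def matvec(A, b):
    m, n = len(A), len(b)
    mults = adds = 0
    result = []
    for i in range(m):
        acc = A[i][0] * b[0]      # 1 multiplication to start the row
        mults += 1
        for j in range(1, n):     # n-1 further multiply-adds per row
            acc += A[i][j] * b[j]
            mults += 1
            adds += 1
        result.append(acc)
    return result, mults, adds

A = [[1, 2, 3], [4, 5, 6]]        # m = 2, n = 3
b = [1, 0, 2]
y, mults, adds = matvec(A, b)
print(y, mults, adds)             # [7, 16] with mn = 6 mults, m(n-1) = 4 adds
```

This is the baseline that the structured algorithms in the paper (such as the FFT for the finite Fourier transform) improve upon.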
Citation
Fiduccia, C. M. (1971). Fast matrix multiplication. In Proceedings of the Annual ACM Symposium on Theory of Computing (pp. 45–49). Association for Computing Machinery. https://doi.org/10.1145/800157.805037