Pt. 1. The exponent of matrix multiplication:
- The power of recursive algorithms for matrix multiplication
- Bilinear algorithms for MM
- The search for a basic algorithm and the history of the asymptotic acceleration of MM
- The basic algorithm and the exponent 2.67
- The dependence of the exponent of MM on the class of constants used
- λ-algorithms and their application to MM; accumulation of the accelerating power of λ-algorithms via recursion
- Strassen's conjecture: its extended and exponential versions
- Recursive algorithms for MM and for disjoint MM (definitions, notation, and two basic facts)
- Some applications of the recursive construction of bilinear algorithms
- Trilinear versions of bilinear algorithms and of bilinear λ-algorithms; duality; recursive trilinear algorithms
- Trilinear aggregating and some efficient basis designs
- A further example of trilinear aggregating and its refinement via a linear transformation of variables
- Aggregating the triplets of principal terms
- Recursive application of trilinear aggregating
- Can the exponent be further reduced?
- The exponents below 2.5
- How much can we reduce the exponent?

Pt. 2. Correlation between matrix multiplication and other computational problems: bit-time, bit-space, stability, and condition:
- Reduction of some combinatorial computational problems to MM
- Asymptotic arithmetical complexity of some computations in linear algebra
- Two block-matrix algorithms for the QR-factorization and QR-type factorization of a matrix
- Applications of the QR- and QR-type factorizations to the problems MI, SLE, and Det
- Storage space for asymptotically fast matrix operations
- The bit-complexity of computations in linear algebra: the case of matrix multiplication
- Matrix norms and their application to estimating the bit-complexity of matrix multiplication
- Stability and condition of algebraic problems and of algorithms for such problems
- Estimating the errors of the QR-factorization of a matrix
- The bit-complexity and the condition of the problem of solving a system of linear equations
- The bit-complexity and the condition of the problem of matrix inversion
- The bit-complexity and the condition of the problem of the evaluation of the determinant of a matrix
- Summary of the bounds on the bit-time of computations in linear algebra; acceleration of solving a system of linear equations where high relative precision of the output is required

Pt. 3. The speed-up of the multiplication of matrices of a fixed size:
- The currently best upper bounds on the rank of the problem of MM of moderate sizes
- Commutative quadratic algorithms for MM
- λ-algorithms for the multiplication of matrices of small and moderate sizes
- The classes of straight-line arithmetical algorithms and λ-algorithms and their reduction to quadratic ones
- The basic active substitution argument and lower bounds on the ranks of arithmetical algorithms for matrix multiplication
- Lower bounds on the λ-rank and on the commutative λ-rank of matrix multiplication
- Basic active substitution argument and lower bounds on the number of additions and subtractions
- Nonlinear lower bounds on the complexity of arithmetical problems under additional restrictions on the computational schemes
- A trade-off between the additive complexity and the asynchronicity of linear and bilinear algorithms
- An attempt at practical acceleration of matrix multiplication and of some other arithmetical computations
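The recursive approach named in Pt. 1 can be illustrated with Strassen's classical construction, the starting point for the book's asymptotic accelerations: a 2×2 block product computed with 7 recursive multiplications instead of 8, giving the exponent log₂ 7 ≈ 2.807. The sketch below is a minimal illustration (plain Python lists, square matrices whose size is a power of two), not the book's own presentation.

```python
# Strassen's recursive matrix multiplication: 7 block products per
# recursion level instead of 8. Assumes square matrices of size 2^k,
# stored as nested lists. Illustrative sketch only.

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split each operand into four h x h blocks.
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # Strassen's seven recursive products.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Recombine into the four blocks of the product C = A * B.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom
```

For example, `strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]`, the ordinary matrix product.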
CITATION STYLE
Pan, V. (1984). How to multiply matrices faster (p. 212). Springer-Verlag.