Solving systems of linear algebraic equations using quasirandom numbers


Abstract

In this paper we analyze a quasi-Monte Carlo method for solving systems of linear algebraic equations. It is well known that the convergence of Monte Carlo methods for numerical integration can often be improved by replacing pseudorandom numbers with more uniformly distributed numbers known as quasirandom numbers. Here the convergence of a Monte Carlo method for solving systems of linear algebraic equations is studied when quasirandom sequences are used. An error bound is established and numerical experiments with large sparse matrices are performed using Sobol', Halton and Faure sequences. The results indicate that an improvement in both the magnitude of the error and the convergence rate is achieved.
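The abstract refers to the classical random-walk Monte Carlo scheme for systems of the form x = Ax + b (Neumann-series formulation), with quasirandom points driving the walks. The sketch below is an illustrative assumption of how that replacement can look in practice, not the authors' exact formulation: it estimates one solution component with walks steered by a Halton sequence (SciPy's qmc module), and the helper name, walk length and walk count are arbitrary choices for the example.

import numpy as np
from scipy.stats import qmc  # quasirandom (Halton) sequence generator

def qmc_solve_component(A, b, i, n_walks=4096, walk_len=20):
    """Estimate the i-th component of the solution of x = A x + b
    by random walks over the matrix indices, driven by a Halton
    sequence instead of pseudorandom numbers.

    Assumes the Neumann series converges (spectral radius of A < 1)
    and that every row of A has at least one nonzero entry."""
    n = A.shape[0]
    # Transition probabilities proportional to |a_{jk}| (a common
    # "almost optimal" choice), with their row-wise cumulative sums
    absA = np.abs(A)
    P = absA / absA.sum(axis=1, keepdims=True)
    cumP = np.cumsum(P, axis=1)

    # One quasirandom point per walk; each coordinate steers one step.
    # Scrambling avoids the all-zero first Halton point.
    sampler = qmc.Halton(d=walk_len, scramble=True)
    points = sampler.random(n_walks)

    est = 0.0
    for p in points:
        state, weight, score = i, 1.0, b[i]
        for t in range(walk_len):
            # Invert the discrete CDF at the t-th coordinate of the point
            nxt = min(np.searchsorted(cumP[state], p[t]), n - 1)
            weight *= A[state, nxt] / P[state, nxt]
            state = nxt
            score += weight * b[state]
        est += score
    return est / n_walks

Running this for each i = 0, ..., n-1 yields an estimate of the full solution vector; the point studied in the paper is that driving such walks with Sobol', Halton or Faure points can improve both the error magnitude and the convergence rate relative to pseudorandom sampling.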

Citation (APA)

Karaivanova, A., & Georgieva, R. (2001). Solving systems of linear algebraic equations using quasirandom numbers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2179, pp. 166–174). Springer Verlag. https://doi.org/10.1007/3-540-45346-6_16
