Model reduction is often required in applications, typically due to limited available time, computer memory, or other restrictions. In problems related to partial differential equations, this often means that we are bound to use sparse meshes in the model for the forward problem. Conversely, if we are given increasingly accurate measurements, we have to employ increasingly accurate forward problem solvers in order to exploit the information in the measurements. Optical diffusion tomography (ODT) is an example in which the accuracy typically required of the forward problem solver leads to computational times that may be unacceptable in both biomedical and industrial end applications. In this paper we review the approximation error theory and investigate the interplay between mesh density and measurement accuracy in the case of optical diffusion tomography. We show that if the approximation errors are estimated and employed, it is possible to use mesh densities that would be unacceptable with a conventional measurement model.
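The approximation error idea can be illustrated with a minimal linear-algebra sketch. Here a hypothetical dense matrix `A` stands in for an accurate forward solver and `A_coarse` for a reduced one; the statistics of the discrepancy between the two are estimated from prior samples and folded into the measurement noise covariance. All operators, dimensions, and the Gaussian prior below are illustrative assumptions, not the paper's ODT model.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- hypothetical setup: linear stand-ins for the forward solvers ---
n, m, k = 40, 15, 8          # fine parameter dim, measurements, coarse dim
A = rng.normal(size=(m, n))  # "accurate" forward map (fine mesh)

# Reduced model: project the parameter onto block averages (crude coarsening)
B = np.kron(np.eye(k), np.ones((1, n // k)) / (n // k))  # k x n averaging
A_coarse = A @ B.T @ B       # reduced forward map on the fine parameters

# --- estimate approximation error statistics over the prior ---
samples = rng.normal(size=(1000, n))    # draws from a unit Gaussian prior
E = samples @ (A - A_coarse).T          # e_i = A x_i - A_coarse x_i
e_mean = E.mean(axis=0)
e_cov = np.cov(E, rowvar=False)

# --- simulated measurement with the accurate model ---
sigma = 0.01
x_true = rng.normal(size=n)
y = A @ x_true + sigma * rng.normal(size=m)

# Enhanced error model: y = A_coarse x + e + noise, with the error
# statistics absorbed into the total noise covariance.
Gamma = sigma**2 * np.eye(m) + e_cov
Gi = np.linalg.inv(Gamma)
x_map = np.linalg.solve(A_coarse.T @ Gi @ A_coarse + np.eye(n),
                        A_coarse.T @ Gi @ (y - e_mean))

# Conventional model: same coarse solver, approximation error ignored.
Gi0 = np.eye(m) / sigma**2
x_conv = np.linalg.solve(A_coarse.T @ Gi0 @ A_coarse + np.eye(n),
                         A_coarse.T @ Gi0 @ y)
```

The point of the sketch is the covariance update `Gamma = sigma**2 I + e_cov`: the coarse solver is kept, and its modeling error is treated as additional structured noise rather than being eliminated by mesh refinement.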