The importance of algorithms is now recognized in all mathematical sciences, thanks to the development of computability and computational complexity theory in the 20th century. The basic understanding of computability theory developed in the nineteen thirties with the pioneering work of mathematicians like Gödel, Church, Turing and Post. Their work provided the mathematical basis for the study of algorithms as a formalized concept. The work of Hartmanis, Stearns, Karp, Cook and others in the nineteen sixties and seventies showed that the refinement of the theory to resource-bounded computations gave the means to explain the many intuitions concerning the complexity or 'hardness' of algorithmic problems in a precise and rigorous framework.

The theory has its roots in the older questions of definability, provability and decidability in formal systems. The breakthrough in the nineteen thirties was the formalisation of the intuitive concept of algorithmic computability by Turing. In his famous 1936 paper, Turing [43] presented a model of computation that was both mathematically rigorous and general enough to capture the possible actions that any 'human computer' could carry out. Although the model was presented well before digital computers arrived on the scene, it has the generality of describing computations at the individual bit level, using very basic control commands. Computability and computational complexity theory are now firmly founded on the Turing machine paradigm and its ramifications in recursion theory. In this paper we will extend the Turing machine paradigm to include several key features of contemporary information processing systems.
van Leeuwen, J., & Wiedermann, J. (2001). The Turing Machine Paradigm in Contemporary Computing. In Mathematics Unlimited — 2001 and Beyond (pp. 1139–1155). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-56478-9_59