Modelling consciousness within mental monism: An automata-theoretic approach


Abstract

Models of consciousness are usually developed within physical monist or dualistic frameworks, in which the structure and dynamics of the mind are derived from the workings of the physical brain. Little attention has been given to modelling consciousness within a mental monist framework, deriving the structure and dynamics of the mental world from primitive mental constituents only—with no neural substrate. Mental monism is gaining attention as a candidate solution to Chalmers’ Hard Problem on philosophical grounds, and it is therefore timely to examine possible formal models of consciousness within it. Here, I argue that the austere ontology of mental monism places certain constraints on possible models of consciousness, and propose a minimal set of hypotheses that a model of consciousness (within mental monism) should respect. From those hypotheses, it would be possible to construct many formal models that permit universal computation in the mental world, through cellular automata. We need further hypotheses to define transition rules for particular models, and I propose a transition rule with the unusual property of deep copying in the time dimension.
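The automata-theoretic setting the abstract appeals to can be illustrated with a minimal sketch: a one-dimensional cellular automaton running Rule 110, which is known to support universal computation. The paper's own transition rule, with its "deep copying in the time dimension", is not reproduced here; this sketch only shows the general idea that retaining every past generation copies states forward through time rather than discarding them. The function names and the periodic boundary condition are illustrative choices, not details from the paper.

```python
# Minimal sketch: an elementary cellular automaton (Rule 110, known to
# be capable of universal computation). Each generation is kept in a
# history list, so past states persist alongside the present one.

RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Apply Rule 110 once, with periodic boundary conditions."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def evolve(cells, steps):
    """Return the full history: every generation is retained, so earlier
    states are 'copied forward' in time rather than overwritten."""
    history = [list(cells)]
    for _ in range(steps):
        history.append(step(history[-1]))
    return history

# A single live cell seeds the familiar Rule 110 pattern.
history = evolve([0] * 15 + [1] + [0] * 16, 8)
```

Any such rule table over a finite neighbourhood defines one candidate dynamics; the paper's point is that the hypotheses constrain which rules are admissible, not that Rule 110 itself is the proposed model.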

Citation (APA)

Lloyd, P. B. (2020). Modelling consciousness within mental monism: An automata-theoretic approach. Entropy, 22(6). https://doi.org/10.3390/e22060698
