COGNITIVE SCIENCE 4, 135-183 (1980)

Physical Symbol Systems*

ALLEN NEWELL
Carnegie-Mellon University

On the occasion of a first conference on Cognitive Science, it seems appropriate to review the basis of common understanding between the various disciplines. In my estimate, the most fundamental contribution so far of artificial intelligence and computer science to the joint enterprise of cognitive science has been the notion of a physical symbol system, i.e., the concept of a broad class of systems capable of having and manipulating symbols, yet realizable in the physical universe. The notion of symbol so defined is internal to this concept, so it becomes a hypothesis that this notion of symbols includes the symbols that we humans use every day of our lives. In this paper we attempt systematically, but plainly, to lay out the nature of physical symbol systems. Such a review is in ways familiar, but not thereby useless. Restatement of fundamentals is an important exercise.

1. INTRODUCTION

The enterprise to understand the nature of mind and intelligence has been with us for a long time. It belongs not to us alone, who are gathered at this conference, nor even to science alone. It is one of the truly great mysteries, and the weight of scholarship devoted to it over the centuries seems on occasion so oppressively large as to deny the possibility of fundamental progress, not to speak of solution.

Yet for almost a quarter century now, experimental psychology, linguistics, and artificial intelligence have been engaged in a joint attack on this mystery

*This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, or the U.S.
Government. Herb Simon would be a co-author of this paper, except that he is giving his own paper at this conference. The key ideas are entirely joint, as the references indicate. In addition, I am grateful to Greg Harris, John McDermott, Zenon Pylyshyn, and Mike Rychener for detailed comments on an earlier draft.
that is fueled by a common core of highly novel theoretical ideas, experimental techniques, and methodological approaches. Though retaining our separate disciplinary identities, we have strongly influenced each other throughout this period. Others have been involved in this new attack, though not so centrally: additional parts of computer science and psychology, and parts of philosophy, neurophysiology, and anthropology. Our communality continues to increase.

In consequence, we are engaged in an attempt to bind our joint enterprise even more tightly by a common umbrella name, Cognitive Science, a new society, and a new series of conferences devoted to the common theme: the outward and institutional signs of inward and conceptual progress. On such an occasion, attempts to increase our basis of mutual understanding seem to be called for.

In my own estimation (Newell & Simon, 1976), the most fundamental contribution so far of artificial intelligence and computer science to this joint enterprise has been the notion of a physical symbol system. This concept of a broad class of systems that is capable of having and manipulating symbols, yet is also realizable within our physical universe, has emerged from our growing experience and analysis of the computer and how to program it to perform intellectual and perceptual tasks. The notion of symbol that it defines is internal to this concept of a system. Thus, it is a hypothesis that these symbols are in fact the same symbols that we humans have and use every day of our lives. Stated another way, the hypothesis is that humans are instances of physical symbol systems, and, by virtue of this, mind enters into the physical universe.

In my own view this hypothesis sets the terms on which we search for a scientific theory of mind. What we all seek are the further specifications of physical symbol systems that constitute the human mind or that constitute systems of powerful and efficient intelligence.
The physical symbol system is to our enterprise what the theory of evolution is to all biology, the cell doctrine to cellular biology, the notion of germs to the scientific concept of disease, the notion of tectonic plates to structural geology.

The concept of a physical symbol system is familiar in some fashion to everyone engaged in Cognitive Science: familiar, yet perhaps not fully appreciated. For one thing, this concept has not followed the usual path of scientific creation, where development occurs entirely within the scientific attempt to understand a given phenomenon. It was not put forward at any point in time as a new striking hypothesis about the mind, to be confirmed or disconfirmed. Rather, it has evolved through a much more circuitous route. Its early history lies within the formalization of logic, where the emphasis was precisely on separating formal aspects from psychological aspects. Its mediate history lies within the development of general purpose digital computers, being thereby embedded in the instrumental, the industrial, the commercial and the artificial: hardly the breeding ground for a theory to cover what is most sublime in human thought.
The resulting ambivalence no doubt accounts in part for a widespread proclivity to emphasize the role of the computer metaphor rather than a theory of information processing.

The notion of symbol permeates thinking about mind, well beyond attempts at scientific understanding. Philosophy, linguistics, literature, the arts: all have independent and extensive concerns that focus on human symbols and symbolic activity. Think only of Cassirer or Langer or Whitehead in philosophy. Consider semantics, concerned directly with the relation between linguistic symbols and what they denote, or Jung, in a part of psychology remote from experimentation and tight theory. These are vast realms of scholarship, by any reckoning. I cannot touch these realms today in any adequate way. Perhaps I can let one quote from Whitehead stand for them all:

    After this preliminary explanation we must start with a definition of symbolism: The human mind is functioning symbolically when some components of its experience elicit consciousness, beliefs, emotions, and usages, respecting other components of its experience. The former set of components are the "symbols", and the latter set constitute the "meaning" of the symbols. The organic functioning whereby there is transition from the symbol to the meaning will be called "symbolic reference". (1927, pp. 7-8)

This statement, from over fifty years ago, has much to recommend it. Let it serve as a reminder that the understanding of symbols and symbolism is by no means brand new. Yet the thread through computer science and artificial intelligence has made a distinctive contribution to discovering the nature of human symbols. Indeed, in my view the contribution has been decisive.
The notion of a physical symbol system has been emerging throughout the quarter century of our joint enterprise: always important, always recognized, but always slightly out of focus as the decisive scientific hypothesis that it has now emerged to be.

For instance, recall the rhetoric of the fifties, where we insisted that computers were symbol manipulation machines and not just number manipulation machines. The mathematicians and engineers then responsible for computers insisted that computers only processed numbers: that the great thing was that instructions could be translated into numbers. On the contrary, we argued, the great thing was that computers could take instructions, and it was incidental, though useful, that they dealt with numbers. It was the same fundamental point about symbols, but our aim was to revise opinions about the computer, not about the nature of mind.

Another instance is our ambivalence toward list processing languages. Historically, these have been critically important in abstracting the concept of symbol processing, and we have certainly recognized them as carriers of theoretical notions. Yet we have also seen them as nothing but programming languages,
i.e., as nothing but tools. The reason why AI programming continues to be done almost exclusively in list processing languages is sought in terms of ease of programming, interactive style, and what not. That Lisp is a close approximation to a pure symbol system is often not accorded the weight it deserves.

Yet a third instance can be taken from our own work. When we laid out the notion of physical symbol system in our book on human problem solving (Newell & Simon, 1972), we did this as an act of preparation, not as the main point. We focused the theory on how people solved problems, given that they were symbol manipulation systems. Even when, a little later, we chose to focus on the physical symbol system hypothesis per se (Newell & Simon, 1976), it was in the context of receiving an award, and thus we described it as a conceptual advance that had already transpired.

A fourth and final instance is the way information processing systems are presented in cognitive psychology. Even in the best informed presentations (e.g., Clark & Clark, 1977; Lindsay & Norman, 1977; Rumelhart, 1977) there is little emphasis on symbolic functioning per se. When concern is expressed about the adequacy of information processing notions for psychology (e.g., Neisser, 1976), the role of symbolic functioning is not addressed. There are some very recent exceptions to this picture (Lachman, Lachman, & Butterfield, 1979). But some of these (Allport, 1979; Palmer, 1978) seem to view such developments as rather new, whereas I see them as having been the taproot of the success in Artificial Intelligence right from the start almost twenty-five years ago.

In sum, it seems to me, a suitable topic for this conference is to attempt, systematically but plainly, to lay out again the nature of physical symbol systems. All this will be in some ways familiar, but I hope far from useless. Restatement of fundamentals is an important exercise. Indeed, I can take my text from Richard Feynman.
He is speaking of Fermat's law of optics, but it applies generally:

    Now in the further development of science, we want more than just a formula. First we have an observation, then we have numbers that we measure, then we have a law which summarizes all the numbers. But the real glory of science is that we can find a way of thinking such that the law is evident. (1963, p. 26)

Physical symbol systems are becoming for us simply evident. But they are our glory, and it is fitting that we should understand them with a piercing clarity. And so, if you cannot stand what I say here as science, then take it as celebration.

1.1 Constraints on Mind

Let me provide a general frame for the paper. The phenomena of mind have arisen from a complex of aspects of the physical universe, localized strikingly (though possibly not exclusively) in us humans. We scientists, trying to discern
the physical nature of mind, can cast these aspects as a conjunction of constraints on the nature of mind-like systems. Then our discovery problem is that of finding a system structure that satisfies all these constraints.

In trying to make that discovery, we can use any tactics we wish. The constraints themselves are simply desiderata and have no privileged status. There is no magic list of constraints that we can feel sure about. Their choice and formulation is as much a step in the discovery process as solving the constraint satisfaction problem after positing them. However, it is easy to list some candidate constraints that would find general acknowledgement. Figure 1 presents a baker's dozen.

These constraints are far from precisely defined. Operationalizing the notion of self-awareness poses difficult problems, however critical it seems as a requirement. Even what constitutes the brain is open, moving over the last thirty years from an essentially neural view to one that includes macromolecular mechanisms as well. Not all the constraints are necessarily distinct. Conceivably, human symbolic behavior and linguistic behavior could be the same, as could development and learning. Not all constraints are necessarily independent. To be a neural system implies being a physical system, though there can be reasons to consider the more general constraint separately.

Some of the constraints are familiar back to Aristotle, others are recent additions. Who would have thought to add the concern with robustness under error if computers and their programs had not exhibited the sort of brittle, ungraceful degradation that we have all come to know so well? What seems clear is that, when we finally come to know the nature of mind in humans, it will be seen to satisfy all of these constraints (and others that I have neglected to list).
And when we finally come to know the nature of intelligence generally, it will be seen how its variety arises from a release from some of these constraints. Our difficulty, as scientists, is that we cannot solve for systems that satisfy

1. Behave as an (almost) arbitrary function of the environment (universality).
2. Operate in real time.
3. Exhibit rational, i.e., effective adaptive behavior.
4. Use vast amounts of knowledge about the environment.
5. Behave robustly in the face of error, the unexpected, and the unknown.
6. Use symbols (and abstractions).
7. Use (natural) language.
8. Exhibit self-awareness and a sense of self.
9. Learn from its environment.
10. Acquire its capabilities through development.
11. Arise through evolution.
12. Be realizable within the brain as a physical system.
13. Be realizable as a physical system.

Figure 1. Constraints on Mind.
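The discovery problem described above, finding one system structure that satisfies the whole conjunction of Figure 1's constraints, can be made concrete with a small schematic sketch. The sketch below is my own illustration, not anything from the paper: each constraint becomes a predicate over a candidate system description, and a candidate counts as a solution only when no predicate fails. All names and the toy system description are invented placeholders.

```python
# Illustrative sketch only: Newell's "conjunction of constraints" framing.
# Each Figure 1 constraint is modeled as a predicate over a candidate
# system description (a plain dict). Names here are invented placeholders.

CONSTRAINTS = {
    "universality": lambda s: s.get("universal", False),
    "real_time": lambda s: s.get("real_time", False),
    "uses_symbols": lambda s: s.get("uses_symbols", False),
    # ...the remaining constraints of Figure 1 would be added the same way.
}

def unsatisfied(system):
    """Return the names of the constraints a candidate system fails to meet."""
    return [name for name, check in CONSTRAINTS.items() if not check(system)]

# A hypothetical candidate that meets two constraints but not the third:
candidate = {"universal": True, "real_time": True, "uses_symbols": False}
print(unsatisfied(candidate))  # ['uses_symbols']
```

The point of the framing, which the sketch preserves, is that the constraints are a conjunction: satisfying any twelve of the thirteen is not enough, and the list itself is revisable desiderata rather than fixed axioms.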