Human-Centered Automation: A Philosophy, Some Design Tenets, and Related Research

  • Mitchell, C. M.

Abstract

Human-centered automation is a term used to characterize the use of automation technologies (e.g., intelligent aids, displays, warning devices) to enhance the capabilities and compensate for the limitations of human operators responsible for the safety and effectiveness of complex dynamic systems. Automation, per se, does not have to be human centered. There is increasing use of control automation that operates in the background, i.e., without the direct knowledge or control of the human operator. To the extent that the operator is not responsible for ensuring its successful operation, this type of automation can be thought of as purely autonomous, much like an automatic transmission in an automobile. Human-centered automation is automation whose purpose is not necessarily to automate previously manual functions (e.g., gear shifting), but rather to enhance user effectiveness and reduce error. Thus, one test of whether a proposed piece of automation is human-centered is the answer to the question: "Does it enhance user effectiveness?" If the answer is yes, the burden of proof is on the designer to demonstrate how. There is widespread concern among operators of complex systems (e.g., pilots of modern commercial aircraft) that emerging automation may perform the 'easy tasks', reducing operator-in-the-loop familiarity and thus leaving them ill prepared to assume primary control of sophisticated system functions in anomalous situations. Human-centered automation makes assumptions about the decision agents in systems where control combines human and machine agents. Machine agents are knowledge-based systems with both strengths and weaknesses. Since they are computers, machine agents can be expected to act in timely, consistent ways; machine agents do not get distracted or tired. Given the limits of machine intelligence, however, machine agents have very fragile intelligence, unable to cope reliably with unpredicted or anomalous events.
Human agents are responsible for identifying and compensating for the limitations of the machine agents. Thus, it is important that design support the operator's awareness of both the current system state and the states of the machine agents, to ensure that both sets of agents are operating in complementary modes. This presentation describes human-centered automation research. First, we present a philosophy of human-centered automation. Next, a set of engineering design tenets for human-centered automation is proposed. The philosophy and design framework are 'conceptual normative', i.e., they characterize design and design tools grounded in a philosophy of human-centered automation. To give substance to the prescriptive statements, this presentation concludes with a description of a research methodology developed at Georgia Tech's Center for Human-Machine Systems Research. The methodology includes a model that structures the interaction of human operators with complex dynamic systems and the use of the model to (1) specify the allocation of control functions between human and computer-based controllers; (2) design and control (in real time) 'intelligent' displays; and (3) define and implement the 'intelligence' for operator assistants and intelligent tutors. Examples of the methodology are drawn from manufacturing and aerospace systems.
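The function-allocation idea described above can be illustrated with a minimal sketch. The `Task` fields and the `allocate` policy here are hypothetical assumptions chosen for illustration, not the paper's actual model; they merely encode the abstract's tenets that machine agents act in timely, consistent ways but cope poorly with anomalous events, and that humans should stay in the loop to preserve familiarity.

```python
# Illustrative sketch only: a toy function-allocation policy motivated by the
# human-centered automation tenets above. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    anomalous: bool       # unpredicted events exceed fragile machine intelligence
    time_critical: bool   # machine agents act in timely, consistent ways


def allocate(task: Task) -> str:
    """Assign a control task to the 'human' or 'machine' agent.

    Anomalous events go to the human, who remains responsible for
    compensating for machine limitations; routine time-critical tasks
    suit the consistent, tireless machine agent.
    """
    if task.anomalous:
        return "human"
    if task.time_critical:
        return "machine"
    # Routine, non-critical tasks: keep the operator in the loop to
    # preserve familiarity and readiness to assume primary control.
    return "human"


print(allocate(Task("engine-out handling", anomalous=True, time_critical=True)))
print(allocate(Task("altitude hold", anomalous=False, time_critical=True)))
```

A real allocation model would, as the abstract notes, also track the states of the machine agents in real time so the operator can verify that both sets of agents remain in complementary modes.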

Citation (APA)

Mitchell, C. M. (1996). Human-Centered Automation: A Philosophy, Some Design Tenets, and Related Research (pp. 377–381). https://doi.org/10.1007/978-1-4613-1447-9_31
