Human Error, by James Reason: Synopsis.
Here is a chapter-by-chapter breakdown of the seminal text, Human Error.
Chapter 1
This chapter is mainly introductory material. Reason speaks of a “cognitive balance sheet,” which I don’t completely understand, but I take it he will say more about it. Two basic error types are introduced: slips and lapses (the actions don’t go according to plan) and mistakes (the plan itself is inadequate). Error types and error forms are also distinguished: error types differ according to the performance level at which they occur, while error forms show up at all levels of human performance and appear to originate in universal cognitive processes.
Chapter 2
This is a survey of prior studies in the field, with the goal of assembling a framework against which human error can be understood. Two structural features of human cognition are the workspace (or working memory) and the knowledge base. The former is identified with the attentional control mode, which is mainly concerned with setting future goals, selecting the means to achieve them, monitoring progress toward those objectives, and detecting and recovering from error. The latter is identified with the schematic control mode, which can be thought of as a store of preconceived templates (schemata) overlaid onto situations. For example, the schema for making coffee might be so familiar that a regular coffee drinker runs it on “autopilot.” Schemata require a certain threshold level of activation to call them into operation. There are specific activators, such as intentional activity; the failure to intervene in a schema at the appropriate time (after the kettle has boiled) might cause our coffee drinker to serve coffee to a guest who requested tea. There are also general activators, which can be thought of as activating schemata “automatically.” Contextual cueing is often to blame: for example, one passes through the bedroom and changes into his pajamas, instead of getting his hat as he intended.
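To make the threshold idea concrete for myself, here is a toy sketch (my own, not Reason’s model; the schema names, activation amounts, and threshold are all invented) of how a specific intention and a general contextual cue both feed activation toward a trigger level, and how the habitual schema can capture behavior:

```python
# Toy model of schema activation: each schema accumulates activation
# from specific (intentional) and general (contextual) activators,
# and fires once it crosses a threshold. All numbers are invented.

THRESHOLD = 1.0

schemata = {
    "get_hat": 0.0,
    "change_into_pajamas": 0.0,
}

def activate(schema, amount):
    schemata[schema] += amount
    if schemata[schema] >= THRESHOLD:
        print(f"Schema fired: {schema}")

# Specific activator: the current intention feeds its own schema.
activate("get_hat", 0.6)

# General activator: walking through the bedroom contextually cues the
# habitual bedtime schema, which is strong enough to fire on its own.
activate("change_into_pajamas", 1.0)  # fires, capturing behavior
activate("get_hat", 0.2)              # still below threshold, never fires
```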
Chapter 3
Errors occur at three performance levels: skill-based, rule-based, and knowledge-based. Errors at the skill-based level mostly occur at the monitoring phase, before a problem is detected: there is a plan in place, and the operator fails in executing it, either through overattention to unimportant details or through distraction and underattention. Mistakes at the rule-based and knowledge-based levels occur during problem solving. Humans, argues Reason, are furious pattern seekers, so the problem solver first tries to match the problem to a known rule. Rule-based mistakes fall into two categories: misapplication of good rules (“strong but wrong”) and application of bad rules (wrong, inelegant, clumsy, or inadvisable). Only when no rule-based solution is found does the problem solver move to the knowledge-based level, where various rules and schemata are pieced together to arrive at a solution. Pathologies of knowledge-based problem solving include selecting the wrong features of the problem space, insensitivity to the absence of relevant elements, confirmation bias, overconfidence, biased reviewing of plan construction, illusory correlation, halo effects, and problems with causality.
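As I read it, the control flow resembles something like this toy sketch (mine, not Reason’s; the rule table and observation encoding are invented): rule matching is attempted first, and slow knowledge-based reasoning is only a fallback when no rule fits:

```python
# Toy sketch of the rule-based -> knowledge-based fallback.
# A "rule" pairs a set of calling conditions with an action; the
# problem solver applies the first rule whose conditions all hold,
# and only reasons from first principles when no rule matches.

rules = [
    ({"alarm", "high_temp"}, "shut_down_reactor"),
    ({"alarm"}, "silence_alarm_and_inspect"),  # weaker, more general rule
]

def solve(observations: set[str]) -> str:
    # Rule-based level: the first full match wins. A familiar rule
    # winning on a partial match is how "strong but wrong" errors arise.
    for conditions, action in rules:
        if conditions <= observations:
            return f"rule-based: {action}"
    # Knowledge-based level: effortful reasoning (stubbed out here).
    return "knowledge-based: reason from a mental model of the system"

print(solve({"alarm", "high_temp"}))  # rule-based: shut_down_reactor
print(solve({"smoke"}))               # falls through to knowledge-based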
Chapter 4
The cognitive system seeks to select contextually appropriate, high-frequency responses when underspecification is present, and this gives rise to error. In other words, when there is not enough information to fully specify the answer required, the mind selects salient candidates, the ones that have appeared on the mental grid most often, in a kind of cognitive popularity contest. The schema idea suggests intellectual “slots” that will only accept a certain kind of data. Frequency gives rise to associative connections: the more often something is encountered, the more opportunity there is to link it to other schemata.
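The “popularity contest” can be sketched in a few lines (again a toy of my own; the candidates and encounter counts are made up): when the cue underspecifies the target, selection is simply biased toward whatever has been met most often:

```python
import random

# Toy frequency gamble: an underspecified cue ("a hot drink") matches
# several candidates, and the gamble is weighted by how often each
# candidate has been encountered before. Counts are invented.
encounter_counts = {"coffee": 120, "tea": 30, "cocoa": 5}

def frequency_gamble(candidates):
    weights = [encounter_counts[c] for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# All three fit the cue, but "coffee" wins most of the time.
print(frequency_gamble(["coffee", "tea", "cocoa"]))
```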
Chapter 5
This chapter outlines the design of a “fallible machine” that could make the same kinds of errors humans make, founded on the principles of the preceding chapter. I found it extremely technical and not very practical unless you are an engineer, so I largely skipped it and simply read the summary at the end. Three ways in which knowledge structures are brought into play are similarity-matching (activating knowledge structures on the basis of similarity between calling conditions and stored attributes), frequency-gambling (resolving conflicts between partially matched “candidates” in favor of high-frequency items), and inference. Underspecification results in increased frequency-gambling.
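Even having skipped the detail, the first two retrieval primitives can be sketched together (a toy of my own, not Reason’s machine; the items, attributes, and frequencies are invented) in a way that shows why underspecification increases frequency-gambling:

```python
# Toy sketch combining similarity-matching and frequency-gambling.
# Each stored item has attributes and an encounter frequency; retrieval
# scores items by overlap with the calling conditions and resolves
# partial-match ties in favor of the more frequent item.

knowledge_base = {
    "coffee": {"attrs": {"hot", "drink", "bitter"}, "freq": 120},
    "tea":    {"attrs": {"hot", "drink", "leafy"},  "freq": 30},
}

def retrieve(calling_conditions: set[str]) -> str:
    def score(name):
        item = knowledge_base[name]
        similarity = len(item["attrs"] & calling_conditions)
        return (similarity, item["freq"])  # frequency breaks ties
    return max(knowledge_base, key=score)

# Fully specified calling conditions: similarity-matching dominates.
print(retrieve({"hot", "drink", "leafy"}))  # tea

# Underspecified conditions: similarity ties, so frequency decides.
print(retrieve({"hot", "drink"}))           # coffee
```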
Chapter 6
This chapter is concerned with the detection of errors. The efficiency of error detection is closely related to feedback. At the lower levels, feedback is more immediately available in the form of forcing functions (one cannot proceed until the error is dealt with) or system responses such as gagging, warnings, and “teach me.” At the higher levels, such information is often unavailable, and errors are more difficult to detect. The three ways in which an error may be detected are through self-monitoring, through an environmental cue, or through another person. Skill-based errors are detected most readily, but also occur most frequently; knowledge-based errors represent the inverse. The presence of a “monitor” echoes the work of Flower and Hayes.
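A forcing function is easy to illustrate in code (my example, not the book’s; the cash-machine scenario is invented): the design simply refuses to let the interaction continue until the error-prone state is cleared:

```python
# Toy forcing function: the machine will not dispense cash while the
# card is still inserted, so forgetting the card is made impossible.

class ATM:
    def __init__(self):
        self.card_inserted = False

    def insert_card(self):
        self.card_inserted = True

    def remove_card(self):
        self.card_inserted = False

    def dispense_cash(self, amount: int):
        if self.card_inserted:
            # Forcing function: block progress until the card is removed.
            raise RuntimeError("Remove your card before cash is dispensed")
        print(f"Dispensing {amount}")

atm = ATM()
atm.insert_card()
try:
    atm.dispense_cash(40)
except RuntimeError as err:
    print(err)        # the design forces the safe ordering
atm.remove_card()
atm.dispense_cash(40)  # now the withdrawal can proceed
```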
Chapter 7
Root causes of serious accidents are often present long before the event occurs. Additionally, a series of errors is often allowed to build up within the system before a major breach occurs; with each new error, the likelihood of an event increases. Looking back, however, we should be wary of hindsight bias, which has two aspects: 1) the “knew-it-all-along” effect, where observers of past events exaggerate what participants should have been able to know at the time (and, if they were involved, what they themselves knew beforehand), and 2) the inability to see that outcome knowledge influences perceptions of the past. The idea of an “impossible accident” basically means that the people involved could not conceive of it before it occurred. The fundamental attribution error is blaming people for errors while ignoring situational factors.
Chapter 8
In an effort to provide some application of the ideas in this book, Reason outlines several HRA (Human Reliability Analysis) heuristics. He notes the inherent problems in attempting to weld human and electronic agents into a coherent, error-proof system: past systems have largely been designed with human cognitive tendencies ignored. Ideas for reducing human error include aids to memory (procedures) and aids to decision-making (heuristics). Designers should build error into their systems; in other words, they should make use of error-allowing technologies and make certain that their systems give adequate feedback to human operators. Don Norman speaks of logical design in his book “The Design of Everyday Things” (my next read). Also, the notion of “bounded rationality” turns up in the textbook for 856.