Safemetrics Share http://blog.sciencenet.cn/u/jerrycueb Doing safety-science research with diligent, humble, rigorous, standardized, and persistent habits and attitudes. 'Wonder en is gheen Wonder'

Blog post

Human Error

6858 reads · 2013-2-1 08:01 | Personal category: Safety Science | System category: Research Notes | Tags: error

 

Human Error, by James Reason, published in 1991, is a major theoretical integration of several previously isolated literatures. Much of the theoretical structure is new and original. Particularly important is the identification of cognitive processes common to a wide variety of error types. Technology has now reached a point where improved safety can only be achieved on the basis of a better understanding of human error mechanisms. In its treatment of major accidents, the book spans the disciplinary gulf between psychological theory and those concerned with maintaining the reliability of hazardous technologies. As such, it is essential reading not only for cognitive scientists and human factors specialists, but also for reliability engineers and risk managers. No existing book speaks with so much clarity to both the theorists and the practitioners of human reliability.

 
Table of Contents

Preface
1. The nature of error
2. Studies of human error
3. Performance levels and error types
4. Cognitive under-specification and error forms
5. A design for a fallible machine
6. The detection of errors
7. Latent errors and systems disasters
8. Assessing and reducing the risks associated with human error
References.

 

http://libro.eb20.net/Reader/rdr.aspx?b=691565

Human Error, by James Reason: Synopsis.

Here is a chapter-by-chapter breakdown of the seminal text, Human Error.

Chapter 1

This chapter is mainly introductory material. Reason speaks of a “cognitive balance sheet,” which I don’t completely understand, but I take it he will say more about it. Two basic error types are mentioned: slips or lapses (the actions don’t go according to plan) and mistakes (the plan itself is inadequate). Error types are distinguished from error forms: error types are distinguished by the performance level at which they occur, while error forms are evident at all levels of human performance and appear to originate in universal cognitive processes.

Chapter 2

This is a survey of studies done in the field heretofore, with the goal of assembling a framework against which human error can be understood. Two structural features of human cognition are the workspace (or working memory) and the knowledge base. The former is identified with the attentional control mode, which is mainly concerned with setting future goals, selecting the means to achieve them, monitoring progress towards these objectives, and detecting and recovering from errors. The latter is identified with the schematic control mode, which can be thought of as preconceived templates (schemata) overlaid onto situations. For example, the schema for making coffee might be so familiar that a regular coffee drinker does it on “autopilot.” Schemata require a certain threshold level of activation to call them into operation. There are specific activators, such as intentional activity; the failure to intervene in a schema at the appropriate time (after the kettle has boiled) might cause our coffee drinker to serve coffee to a guest who requested tea. There are also general activators, which can be thought of as “automatically” activating schemata. Contextual cueing is often to blame: for example, one passes through the bedroom and changes into his pajamas instead of getting his hat, as he intended.

Chapter 3

Errors occur at three performance levels: skill-based, rule-based, and knowledge-based. Errors at the skill-based level mostly occur in the monitoring phase, before a problem is detected: there is a plan in place, and the operator fails either through overattention to unimportant details or through distraction and underattention. Mistakes at the rule-based and knowledge-based levels occur during problem solving. Humans, argues Reason, are furious pattern seekers, so the problem solver first tries to match the problem to a known rule. Rule-based mistakes fall into the categories of misapplication of good rules (“strong but wrong”) and application of bad rules (wrong, inelegant, clumsy, or inadvisable). Only when no rule-based solution is found will the problem solver move to the knowledge-based level, where various rules and schemata are pieced together to arrive at a solution. Pathologies of knowledge-based problem solving include selecting the wrong features of a problem space, insensitivity to the absence of relevant elements, confirmation bias, overconfidence, biased reviewing of plan construction, illusory correlation, halo effects, and problems with causality.
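The rule-first cascade described in this chapter can be sketched as a toy function. This is my own illustration, not Reason's formalism; the rule set, symptom names, and actions are invented for the example:

```python
# Toy sketch of the control cascade: a situation is first pattern-matched
# against stored rules (rule-based level); only when no rule applies does
# the solver drop to slow, effortful knowledge-based reasoning.

RULES = {
    # condition (set of symptoms that must be present): action
    frozenset({"alarm", "pressure_high"}): "open_relief_valve",
    frozenset({"alarm", "temp_high"}):     "start_cooling",
}

def solve(symptoms):
    # Rule-based level: fire the first rule whose conditions are all met.
    for condition, action in RULES.items():
        if condition <= symptoms:
            return ("rule-based", action)
    # Knowledge-based level: no rule fits, so fall back to reasoning from
    # first principles (a placeholder here for that error-prone search).
    return ("knowledge-based", "diagnose_from_first_principles")
```

The sketch also mirrors the "strong but wrong" failure mode: a rule whose conditions happen to be satisfied fires even when extra, unmatched symptoms should have disqualified it.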

Chapter 4

The cognitive system seeks to select contextually appropriate, high-frequency responses when under-specification is present, and this gives rise to error. In other words, when there is not enough information about the specific kind of answer required, the mind selects salient candidates, the ones that have appeared on the mental grid most often, a kind of cognitive popularity contest. The schema notion suggests intellectual “slots” that will only accept a certain kind of data. Frequency gives rise to associative connections; the more often something is encountered, the more opportunity there is to link it to other schemata.

Chapter 5

This chapter outlines the design of a “fallible machine” that could make the same kinds of errors that humans make, founded on the principles of the preceding chapter. I found it extremely technical, and not very practical unless you are an engineer, so I largely skipped it and simply read the summary at the end. Three ways in which knowledge structures are brought into play are similarity-matching (activating knowledge structures on the basis of similarity between calling conditions and stored attributes), frequency-gambling (resolving conflicts between partially matched “candidates” in favor of high-frequency items), and inference. Under-specification results in increased frequency-gambling.
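The similarity-matching and frequency-gambling steps of the retrieval cycle can be sketched as a small toy model. This is only my reading of the mechanism, with invented schemata and frequencies, not Reason's actual machine design:

```python
import random

# Toy retrieval cycle: similarity-matching scores each stored schema by
# the overlap between the current cues and its calling conditions;
# frequency-gambling resolves ties in favor of the more frequently
# used candidate. All schemata and numbers are illustrative.

SCHEMATA = {
    # schema name: (calling conditions, prior frequency of use)
    "make_coffee": ({"kettle", "mug", "morning"}, 900),
    "make_tea":    ({"kettle", "mug", "guest"},   50),
}

def retrieve(cues, rng=None):
    # Similarity-matching: count shared calling conditions.
    scores = {name: len(cues & conds) for name, (conds, _) in SCHEMATA.items()}
    best = max(scores.values())
    candidates = [name for name, s in scores.items() if s == best]
    if len(candidates) == 1:
        return candidates[0]
    # Frequency-gambling: under-specified cues leave several partial
    # matches, so gamble in proportion to each candidate's frequency.
    rng = rng or random.Random()
    weights = [SCHEMATA[name][1] for name in candidates]
    return rng.choices(candidates, weights=weights)[0]
```

With full cues (`{"kettle", "mug", "guest"}`) similarity-matching alone picks `make_tea`; with under-specified cues (`{"kettle", "mug"}`) the candidates tie and the gamble almost always lands on the habitual `make_coffee`, which is exactly the coffee-for-the-tea-guest slip from Chapter 2.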

Chapter 6

This chapter concerns the detection of errors. The efficiency of error detection is closely related to feedback. At the lower levels, feedback is more immediately available in the form of forcing functions (one cannot proceed until the error is dealt with) or system responses such as gagging, warnings, and “teach me.” At the higher levels, such information is often unavailable, and errors are more difficult to detect. The three ways in which error may be detected are through self-monitoring, through an environmental cue, or through another person. Skill-based errors are detected most readily, but also occur most frequently; knowledge-based errors show the inverse pattern. The presence of a “monitor” echoes the work of Flower and Hayes.

Chapter 7

Root causes of serious accidents are often present long before the event occurs. Additionally, there is often a series of errors allowed to build up within the system before a major breach occurs; with each new error, the likelihood of an event increases. Looking back, however, we should be careful of hindsight bias, which has two aspects: 1) the “knew-it-all-along” effect, where observers of past events exaggerate what participants should have been able to know at the time (and, if they were involved, what they themselves knew beforehand), and 2) the inability to see that outcome knowledge influences perceptions of the past. The idea of an “impossible accident” basically means the people involved couldn’t conceive of it before it occurred. The fundamental attribution error is blaming people for errors while ignoring situational factors.

Chapter 8

In an effort to provide some application of the ideas in this book, Reason outlines several HRA (Human Reliability Analysis) heuristics. He notes the inherent problems in attempting to weld human and electronic agents into a cogent and error-proof system; human cognitive tendencies have basically been ignored when past systems were designed. Ideas to reduce human error include aids to memory (procedures) and aids to decision (heuristics). Designers should build error into their systems; in other words, they should make use of error-allowing technologies and make certain that their systems give adequate feedback to human operators. Don Norman speaks of logical design in his book “The Design of Everyday Things” (my next read). Also, the notion of “bounded rationality” is found in the textbook for 856.



https://blog.sciencenet.cn/blog-554179-658440.html
