The New View assumes that people do not come to work to do a bad job. — xv: 160-161
“Old” versus “new” is perhaps a rather binary or simplistic way to think about such a complex problem as ‘human error.’ But, it might stretch the space of your thinking about that problem. It shows what is on either end of that space. And it provides you with a language, or concepts, to verbalize what is going on at those ends and in between. Don’t allow yourself to be persuaded too easily by any view or perspective. You probably have a lot of experiences and stories that suggest your view may have been correct all along. — xv: 163-167
The focus on ‘human error’ very quickly becomes a focus on humans as the cause of safety trouble, and on humans as the targets for intervention. But this has long been shown to be a limited safety endeavor, as getting rid of one person does not remove the conditions that gave rise to the trouble they got into. — xxiii: 304-306
‘Human error’ requires a standard. For the attribution to make any sense at all, it requires the possibility of actions or assessments that are not, or would not have been, erroneous. That standard often becomes visible only with knowledge of outcome, in hindsight. It is the outcome that allows us to say that other ways of working would have been smarter or better. If that outcome had been different, the assessments and actions that are now deemed erroneous would likely have remained invisible (as normal work) or might even have been deemed constructive, heroic, resilient, innovative. — xxiii: 315-319
Because understanding and working on your ‘human error’ problem is very much about understanding your own reactions to failure; about recognizing and influencing your own organization’s tendencies to blame and simplify. — xxvi: 379-381
Remember that the shortcuts and adaptations people have introduced into their work often do not serve their own goals, but yours or those of your organization! — 5: 489-491
Blaming the individual for a mismatch short-circuits a number of things. It goes against the very principle of the New View, which is that “error” is not a cause of trouble but a symptom of trouble. — 14: 667-668
• holding people accountable is fine;
• but you need to be able to show that people had the authority to live up to the responsibility that you are now asking of them;
• if you can’t do that, your calls for accountability have no merit, and you’d better start looking at yourself. — 16: 703-707
If you hold somebody accountable, that does not have to mean exposing that person to liability or punishment.
• You can hold people accountable by letting them tell their story, literally “giving their account.”
• Storytelling is a powerful mechanism for others to learn vicariously from trouble. — 19: 762-766
The move away from punishing good technicians for maintenance errors began about two decades ago as leaders began to understand the downside of disciplining to ‘fix’ errors—and the upside of instead conducting a thorough evaluation of the ‘why’ behind those errors. Even repetitive errors are usually the result of something other than a technician’s negligence. A striking example of this occurred when, over a six-year period, ‘hundreds of mechanics were cited for logbook violations. People working the aircraft on the gate were under pressure and they’d screw up the paperwork.’ Violations meant suspensions or a fine. Then the airline wanted to print 50,000 new logbooks. Starting with the station that had the most problems, it asked the mechanics to design the pages. They did. Another station made a few tweaks, and when the new logbooks were introduced, violations dropped to zero. The problem wasn’t negligent mechanics; it was a poorly designed logbook. — 19: 772-780
If you truly want to create accountability and a “just culture” in your organization, forget buying it off the shelf. It won’t work, independent of how much you pay for it. You need to realize that it is going to cost you in different ways than dollars. It is going to cost you in the cognitive and moral effort you need to put in. It is going to cost you when you look in the mirror and don’t like what you see. Sure, you can try to create a “just culture” program based on categories. But sooner or later you will run into all the problems described above. Instead, think about creating justice in your responses to incidents or failures. Begin by addressing the points below. — 22: 827-834
Explore the potential for restorative justice. Retributive justice focuses on the errors or violations of individuals. It suggests that if the error or violation (potentially) hurt someone, then the response should hurt as well. Others in the organization might have a desire to deny systemic causes; they might even fear being implicated in creating the conditions for the incident. Restorative justice, on the other hand, suggests that if the error or violation (potentially) hurt, then the response should heal. Restorative justice acknowledges the existence of multiple stories and points of view about how things could have gone wrong (and how they normally go right). — 23: 848-854
Put second victim support in place. Second victims are practitioners who have been involved in an incident that (potentially) hurt or killed someone else (for example, passengers, bystanders) and for which they feel personally responsible. Strong social and organizational support systems for second victims (psychological first aid, debriefings, follow-up) have proven critical to contain the negative consequences (particularly post-traumatic stress in all its forms). Implementing and maintaining support systems takes resources, but it is an investment not only in worker health and retention—it is an investment in justice and safety too. Justice can come from acknowledging that the practitioner is a victim too—a second victim. For some it can be empowering to be part of an investigation process. The opportunity to recount experiences first-hand can be healing—if these accounts are taken seriously and do not expose the second victim to potential retribution or other forms of jeopardy. Such involvement of second victims is an important organizational investment in safety and learning. The resilience of second victims and the organization are intricately intertwined, after all. The lived experience of a second victim represents a rich trove of data for how safety is made and broken at the very heart of the organization. Those accounts can be integrated into how an individual and an organization handle their risk and safety. — 24: 866-877
Here are some questions Gary Klein and his researchers typically ask to find out how the situation looked to people on the inside at each of the critical junctures: — 47: 1241-1243
Debriefings need not follow such a scripted set of questions, of course, as the relevance of questions depends on the event. — 48: 1243-1244
The question, for understanding ‘human error,’ is not why people could have been so unmotivated or unwise as not to pick up the things that you can, in hindsight, decide were critical. The question—and your job—is to find out what was important to them, and why. — 64: 1514-1516
For now, let’s look at two different operationalizations of “loss of effective CRM” as an example. Judith Orasanu at NASA has done research to find out what effective CRM is about.8
• shared understanding of the situation, the nature of the problem, the cause of the problem, the meaning of available cues, and what is likely to happen in the future, with or without action by the team members;
• shared understanding of the goal or desired outcome;
• shared understanding of the solution strategy: what will be done, by whom, when, and why? — 70: 1617-1623
Make sure, through what you find, that you identify the organization’s model(s) of risk, and how the organization thought it could control that risk or those risks. It is, after all, the organization’s model of risk that made them invest in certain things (for example, automation or standardized procedures to control unreliable operators and ‘human error’) and ignore others (for example, how production pressures affected people’s trade-offs at the sharp end). — 78: 1749-1753
• what to recommend in terms of improvement depends on how safe that particular activity already is;
• there is a difference between explanatory and change factors;
• make your recommendations smart (specific, measurable, agreed, realistic and time-bound—see below). — 80: 1783-1786
The focus is often on explanation, not change. And that misses the point of an investigation. So let’s distinguish between:12
• explanatory factors, which explain the data from one particular sequence of events;
• change factors, which are levers for improvement or prevention.
The thing that explains a particular instance of failure does not need to be the same thing that allows your managers to do something about its potential recurrence. Working up from explanation to recommendation can sometimes be very empowering if you are open-minded and creative about — 80: 1790-1796
Instead, it is about a cognitive balancing act. Imagine trying to understand and simultaneously manage a dynamic, uncertain situation:
• Should you change your explanation of what is going on with every new piece of data that comes in? This is called “thematic vagabonding,” a jumping around from explanation to explanation, driven by the loudest or latest indication or alarm. No coherent picture of what is going on can emerge.
• Or should you keep your explanation stable despite newly emerging data that could suggest other plausible scenarios? Not revising your assessment (cognitive fixation) can lead to an obsolete understanding. — 92: 2008-2013
Another aspect of managing such problems is that people have to commit cognitive resources to solving them while maintaining process integrity. This is called dynamic fault management, and is typical for event-driven domains. — 93: 2018-2020
Even when people can be shown to possess the knowledge necessary for solving a problem (in a classroom where they are dealing with a textbook problem), that same knowledge may not “come to mind” when needed in the real world; it remains inert. If material is learned in neat chunks and static ways (books, most computer-based training) but needs to be applied in dynamic situations that call for novel and intricate combinations of those knowledge chunks, then inert knowledge is a risk. In other words, when you suspect inert knowledge, look for mismatches between how knowledge is acquired and how it is (to be) applied. — 99: 2160-2164
Larry Hirschhorn talks about a law of systems development, which is that every system always operates at its capacity. — 106: 2308-2309
A major driver behind routine divergence from written guidance is the need to pursue multiple goals simultaneously. Multiple goals mean goal conflicts. In most work, contradictory goals are the rule, not the exception. Any human factors investigation that does not take goal conflicts seriously does not take human work seriously. — 113: 2440-2442
“Loss of situation awareness” is the difference between what you know now, and what other people knew back then. And then you call it their loss. — 115: 2472-2474
As the designers of the MiG-29 (an awesome fighter aircraft) said: “The safest part is the one we could leave off.” — 132: 2812-2814
Production pressure and goal conflicts are the essence of most operational systems. Though safety is a (stated) priority, these systems do not exist to be safe. They exist to provide a service or product, to achieve economic gain, to maximize capacity utilization. But still they have to be safe. One starting point, then, for understanding a driver behind routine deviations, is to look deeper into these goal interactions, these basic incompatibilities in what people need to strive for in their work. If you want to understand ‘human error,’ you need to find out how people themselves view these conflicts from inside their operational reality, and how this contrasts with other views of the same activities (for example, management, regulator, public). — 134: 2862-2868
Accidents, in other words, are typically the by-product of the normal functioning of the system, not the result of — 135: 2878-2878
Figure 5.5 Murphy’s law is wrong. What can go wrong usually goes right, and then we draw the wrong conclusion: that it will go right again and again, even if we borrow a little more from our safety margins — 137: 2926-2928
But safety management systems can sometimes become liability management systems if their chief role is to prove that management did something about a safety problem. — 151: 3180-3181
creating safety is about giving people who do safety-critical work the room and possibility to do the right thing. This means giving them not only the discretionary space for decision making, but also providing them with error-tolerant and error-resistant designs, workable procedures and the possibility to focus on the job rather than on bureaucratic accountabilities; — 152: 3197-3199
• building trust, with a comfort about being vulnerable and honest with each other when it comes to weaknesses or mistakes;
• comfort with what is known as constructive conflict, a willingness to engage in passionate dialogue about what matters to the team. There is no hesitation to disagree, challenge and question—all in the spirit of finding the best answer or solution for that context;
• a decision process in which people can participate and which they feel they have contributed to. Even if the outcome is not what they might have wanted, they still agreed to the process, and so will be more ready to offer the buy-in that the team needs;
• shared accountability after having committed to decisions and standards of performance. The team leader does not have to be the primary source of such accountability; peers do it instead. Such accountability is typically forward-looking, not backward-looking;
• a focus on results that allows individual agendas and needs to be set aside. — 156: 3286-3295
To take responsibility for safety on the line, you should first and foremost look at people’s work, more than (just) at people’s safety.
• What does it take to get the job done on a daily basis? What are the “workarounds,” innovations or improvisations that people have to engage in in order to meet the various demands imposed on them?
• What are the daily “frustrations” that people encounter in getting a piece of machinery, or technology, or even a team of people (for example, contractors), to work the way they expect?
• What do your people believe is “dodgy” about the operation? Ask them that question directly, and you may get some surprising results.
• What do your people have to do to “finish the design” of the tools and technologies that the organization has given them to work with? Finishing the design may be obvious from little post-it notes with reminders for particular switches or settings, or more “advanced” jury-rigged solutions (like an upside-down paper coffee cup on the flap handle of the 60-million-dollar jet I flew, so as not to forget to set the flaps under certain circumstances). Such finishing of the design can be a marker of resilience: people adapt their tools and technologies to forestall or contain the risks they know about. But it can also be a pointer to places where your system may be more brittle than you think.
• How often do your people have to say to each other: “here’s how to make it work” when they discuss a particular technology or portion of your operation? What is the informal teaching and “coaching” that is going on in order to make that happen? — 158: 3315-3330
zero vision has got things upside-down. It tells managers to manipulate a dependent variable. — 168: 3475-3476
The target for intervention is the behavior and attitudes of managers in the organization. They need to be told to try harder, to not make such errors. They need to be reminded to pay more attention, to not get distracted, to not lose awareness of what really matters. But on closer inspection, these things are the normal by-product of humans bureaucratically organizing their work.13 — 171: 3535-3538
• cultures of production where problem-solving under pressure and constraints is highly valued;
• structural secrecy associated with bureaucratic organization, where information does not cross the boundaries of the various silos in which work is done and administered;
• gradual acceptance of more risk as bad consequences are kept at bay.
The potential for an accident can actually grow underneath the very activities that your organization undertakes in order to tell itself and others that risk is under control (for example, measuring and tabulating injury numbers). — 171: 3542-3548
High-reliability organization (HRO) theory is generally known as a more optimistic way of looking at your organization’s capacity to prevent accidents. — 171: 3548-3549
The risk of having an accident is a fixed, structural property of the complexity of the systems we choose to build and operate. — 172: 3560-3561
You want to peg your investments in safety to the level of safety the particular activity has already attained. That way, it will not be either unattainable or irrelevant. — 180: 3698-3699
Safety actions taken in ultra-safe (near-zero) organizations are often repetitions or retreads of those taken in a less safe organization. They miss the point and do not help in creating additional safety. — 181: 3727-3728
Steven Mandis’ recent book What Happened to Goldman Sachs is an insider account of organizational drift and its unintended consequences. — 199: 4083-4084
The idea that we should be looking at safety as the presence of positive capacities rather than the absence of negative events has recently been taken up by a number of authors and groups of thinkers. Resilience—as the ability of a system or team or individual to recognize, adapt to, and absorb disruptions that fall outside the design or preparation base, and to sustain or even improve its functioning—is one example of this. — 200: 4091-4094
Process-tracing methods are part of a larger family of cognitive task analysis, but aim specifically to analyze how people’s understanding evolved in parallel with the situation unfolding around them during a particular problem-solving episode. — 209: 4236-4238