Just Culture: Are we sustaining a false belief?

By Sidney W. A. Dekker, currently a professor at Griffith University in Brisbane, Australia, where he founded the Safety Science Innovation Lab. He is also Honorary Professor of Psychology at the University of Queensland. He flew as First Officer on Boeing 737s for Sterling and later Cimber Airlines out of Copenhagen. In 2004, he wrote the following article in the aftermath of the Linate accident.

We like to believe that accidents happen because a few people do stupid things. That some people just do not pay attention. Or that some people have become complacent and do not want to do a better job.

On the surface, there is often a lot of support for these ideas. A quick look at what happened at Linate, for example, shows that controllers did not follow up on position reports, that airport managers did not fix a broken radar system in time, that nobody had bothered to maintain markings and signs out on the airport, and that controllers did not even know about some of the stop marks and positions out on the taxiway system. And of course, that a Cessna pilot landed in conditions that were below his minima. He should never have been there in the first place.

When we dig through the rubble of an accident, these shortcomings strike us as egregious, as shocking, as deviant, or even as criminal. If only these people had done their jobs! If only they had done what we pay them to do! Then the accident would never have happened. There seems only one way to go after such discoveries: fire the people who did not do their jobs. Perhaps even prosecute them and put them in jail. Make sure that they never touch a safety-critical system again. In fact, set an example by punishing them: make sure that other people like them will do their jobs diligently and correctly, so that they do not also lose their jobs or land in jail.

The problem with this logic is that it does not get us anywhere: it does not work the way we hope. What we believe is not what really happens. The reason the logic fails is twofold. First, accidents don’t just happen because a few people do stupid things or don’t pay attention. Second, firing or punishing people does not create progress on safety: it does not prevent such accidents from happening again. The only thing we sustain by this logic of individual errors and punishment is our illusions. Systems don’t get safer by punishing people. Systems don’t get safer by treating humans as the greatest risk.

Let’s look at the first problem. Accidents don’t just happen because a few people do stupid things or don’t pay attention. Accidents are not just “caused” by those people. Research shows that accidents are almost normal, expected phenomena in systems that operate under conditions of resource scarcity and competition; that accidents are the normal by-product of normal people doing normal work in everyday organizations that operate technology exposed to a certain amount of risk. Accidents happen because entire systems fail. Not because people fail. This is called the systems view, and it stands in direct contrast to the logic outlined above. The systems view sees the little errors and problems that we discover on the surface as symptoms, not as causes. These things do not “cause” an accident. Rather, they are symptoms of issues that lie much deeper inside a system. These issues may have to do with priorities, politics, organizational communication, engineering uncertainties, and much more.

To people who work in these organizations, however, such issues are seldom as obvious as they are to outside observers after an accident. To people inside organizations, these issues are not noteworthy or special. They are the stuff of doing everyday work in everyday organizations. Think of it: there is no organization in which resource scarcity and communication problems do not play some sort of role (just think of your own workplace). But connecting these issues to an accident, or to the potential for an accident, before the accident happens, is impossible. Research shows that imagining such an accident as possible lies largely beyond our ability. We don’t believe that an accident can happen. And what we don’t believe, we cannot predict.

An additional problem is that the potential for an accident can grow over time. Slowly, and almost imperceptibly, systems move towards the edge of their safety envelopes. In their daily work, people — operators, managers, administrators — make numerous decisions and trade-offs. They solve countless problems, large and small. This is part and parcel of their everyday work, their everyday lives. With each solved problem comes the confidence that they must be doing the right thing; a decision was made without obvious safety consequences. But other ramifications of those decisions may be hard to foresee, even impossible to predict. The cumulative effect is called drift: the drift into failure. Drifting into failure is possible because people in organizations make thousands of small and large decisions that, to them, seem unconnected. But together, eventually, all these little, normal decisions and actions can push a system over the edge. Research shows that recognizing drift is incredibly difficult, if not impossible — either from the inside or the outside of the organization.

Continue to part 2