Automation: The Machine Will Handle It… Right?
Article by Dr. John E. Kello
A while back, I was reflecting on a fatal accident involving a self-driving Tesla, and I thought about the critical importance of remaining vigilant even as we are assured (or assure ourselves) that “the machine will handle it”. The issue at hand is commonly identified as automation complacency. An older but equally applicable term is Titanic Syndrome… “It’s been well-engineered and well-tested. Nothing can go wrong, and if it did, well, the machine would handle it. Right?”
I first encountered this issue in a significant way in working with an industry oversight group called the Motor Vehicle Manufacturers Association (MVMA) in the early 1980s. There had been a rash of serious accidents including fatalities in fabrication and assembly plants in the auto industry, and the MVMA was intent on addressing the problem. As we interviewed workers to build our own perceptions of the problem, we learned that a number of the accidents involved unsafe acts in the presence of production robots, which at that time were becoming increasingly common in the industry. While a robotic arm, for instance, was designed as an improvement to reduce manual labor and expedite production, workers (skilled tradesmen as well as production operators) had to learn new habits in order to work safely around them. When running, the arm was going to do what it was programmed to do, whether or not a worker was in the way. Workers had to remain extra vigilant in the presence of these new automated allies, lest they put themselves in a dangerous situation with an ally-turned-enemy. They could certainly watch out for the robotic arm as it swept through its programmed cycle, but the early-generation robot could not watch out for them. Workers sometimes acted as though it could.
Various engineering changes were made in order to make such accidents less likely. And, appropriately, increasing emphasis was put on consistently using lockout/tagout before entering a potentially lethal situation. But none of the environmental fixes eliminated the problem totally. It was still possible for a worker to violate the safety-engineered system (e.g., forget to lock out or choose not to, or reach into a protected area over or around a machine guard). With or without high-tech automation, workers could certainly still act in an unsafe manner. All things considered, vigilance continued to be a crucial part, indeed among the most crucial parts, of the safety equation.
My next encounter with this issue occurred in my work with nuclear power production, around the same time. Modern nuclear power control rooms are expertly engineered with numerous safety systems and back-ups to those safety systems, such that the machine handles it most of the time. The operators can to some extent (to use a term I heard more than a few times), “babysit the technology”. But the engineering is never perfect (witness the Three Mile Island incident), and unanticipated rare events can happen, so operators must engage the brain at all times. The need to communicate with each other effectively (work as a team) and to diagnose effectively does not go away just because the computers are running the plant, and doing so quite well… almost all of the time.
The third encounter occurred later, when I spent a sabbatical year at the University of Texas at Austin, working with a research team that was focused on aircraft safety. This team laid the foundation for what would come to be called crew resource management (CRM) training, which continues to be a critically important part of initial and recurrent training for pilots throughout commercial aviation today. A primary outcome of our research was the finding, now accepted as commonplace, that human error was the primary contributing cause in the vast majority of aircraft incidents. Hence the focus on such cognitive and interactive competencies as situational awareness (mindfulness/vigilance), communication and feedback, workload distribution, group problem solving, and stress awareness. A second finding was that as automation had become enhanced in new/next-generation aircraft (the so-called glass cockpit), the number of incidents had indeed decreased, but certainly not to zero, and the pattern of accident-producing errors had shifted. Now, failing to activate the correct navigation system, or even a keystroke error, could have severe consequences. A stark and dramatic example is KAL 007. In September of 1983, a Boeing 747 jumbo jet was shot down by Soviet fighter jets as it unintentionally flew well off course on its way from Anchorage to Seoul. The crew made a series of uncorrected errors that resulted in incorrectly programming the route, and they failed to notice that they were far off their intended flight plan, ultimately straying into Soviet airspace. Whatever the precise sequence of errors, the crew clearly lacked situational awareness, likely due at least in part to the assumption that the machine was handling it. When the crew failed to respond to attempts by the Soviet pilots to contact them, a missile was launched, KAL 007 was downed, and all souls were lost.
There are other, less well-known examples of automation complacency in the era of the glass cockpit. Modern aircraft can do virtually all of the flying once the crew, on takeoff, makes the decision and takes the actions to rotate and go airborne. Everything else, including the landing, can be programmed into the computers. The good news is, most pilots (and certainly those with whom I have been privileged to fly up front) like to fly, so they stay hands-on, and mainly use the automation when they are up at altitude. But again, the heavy automation of the cockpit does not eliminate the need for vigilance. To the contrary: not only does it not guarantee a successful outcome independent of operator vigilance, it may lull a crew into a false sense of security.
The bottom line is, no automation is foolproof. Automation can make our job so much easier. But it can also encourage us to be less vigilant (automation complacency). We relax at our peril. The machine does what it is programmed to do (by humans, of course, who can make mistakes). It is not mindful. It does not know our intent. There are limits to how self-correcting it can be. No matter how sophisticated the technology, the human who is using it must maintain vigilance.