Computers are bad, m’kay

Suddenly all of the jokes about economists aren’t funny. Here is an economist, Tim Harford, writing about applied psychology, AI, and control systems in this article: https://www.theguardian.com/technology/2016/oct/11/crash-how-computers-are-setting-us-up-disaster

Being likewise an expert in none of these disciplines, but armed with the little knowledge that makes me dangerous, I am disturbed by what seems to be the basic thesis of the article: that automation in complex systems will make things worse. The headline (which I realise was probably not written by the author) is “Crash: how computers are setting us up for disaster”.

It’s a very long article, but not because it contains a lot of dense and complex argument. It’s long because it recounts at great length (basically the first half of the article) the various pilot errors that led to the crash of Air France 447. As usual with these accidents, it’s a multiple, cascading failure scenario. Short version: an initial instrument failure (the airspeed sensors iced over) caused the autopilot to disengage and the flight controls to drop back to a “simpler” mode which “allowed” the pilots greater latitude, and hence the potential for error. An inexperienced first officer then put the plane into a stall, and the accurate warnings from the automatic systems (“STALL STALL STALL”) were ignored by both pilots. The captain was fatigued and not in the cockpit at the start of the incident, and once he did return he and the first officer argued about what was happening and why, basically until the plane hit the water.

Harford draws from this the following:

“The paradox of automation, then, has three strands to it. First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. Because of this, an inexpert operator can function for a long time before his lack of skill becomes apparent – his incompetence is a hidden weakness that can persist almost indefinitely. Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skilful response. A more capable and reliable automatic system makes the situation worse.”

While I believe the basic theses are correct (although poorly stated), the implicit conclusions are wrong. I’ll restate the premises:

1. An automated system allows a less expert operator to do what previously required greater skill (most of us can’t drive a car which requires manual advance-retard of the spark, choke control, and lacks automatic gears, power steering or ABS), or it removes simple and mundane tasks and allows skilled operators to concentrate on exactly those situations they’re trained for.

2. If you don’t force expert practitioners to maintain their expertise, they will lose skills; however, an automated system is not primarily responsible for this loss of skill, it merely facilitates it if this obvious danger is ignored.

3. The more sophisticated an automatic system, the more complex its edge conditions will be, and possibly its failure modes too. However, this doesn’t mean that a human will be able to deal with every complex situation that an automated system cannot, and there may equally be situations that an automated system can cope with that even a skilled human cannot. The fly-by-wire systems in most modern fighter aircraft are a perfect case in point – human pilots simply can’t fly the planes (except in a very basic way) without the automated systems operating.

Now Harford does have a valid argument: if you change a manual system to one that replaces mundane tasks with automation but continues to require occasional highly skilled behaviour, and if you then don’t practise that skilled behaviour, then shit will happen when something fails. Further, because the automated system deals with the simple stuff, when there is a failure it will be a doozy.

While this is true, it doesn’t mean that automation is necessarily the problem, and it ignores a couple of equally valid arguments:

1. Humans are really bad at boring, repetitive, moderately skilled tasks which occasionally demand infrequent, unpredictable, rapid and highly skilled responses. That’s why pilots train, train, and train again to “automate” the responses needed in a large range of failure modes – training that far exceeds anything a car driver receives. It’s also why ABS and traction control systems have undoubtedly saved many more lives than their failure modes have cost; the vast majority of drivers never even acquire the skills required to deal with the edge conditions that ABS and traction control manage.

2. A complex failure doesn’t necessarily happen because there is automation; automation just takes care of all the simple ones. No system, automated or human, can guarantee to deal with every failure mode, and just because an automated system can’t deal with a problem it doesn’t follow that a human can. Plenty of air crashes have ultimately been caused either by human error or by a situation that simply couldn’t be worked out in time. All that can reasonably be said (at the moment) is that if the failure mode requires complex reasoning AND there is time to do it, then a human is a better control system than an AI system with less cognitive ability, even if that system can “think” faster.

Moreover, Harford conveniently ignores the fact that the training regime used by the airline evidently allowed the pilots to lose (or fail to acquire) exactly the skills they were supposed to have. This wasn’t a fault of automation so much as a fault of procedure and training: if allowing pilots to spend much of their time not flying the plane caused their skills to degrade, then the skills and readiness tests they should have been passing were either inadequate or not in place. It’s worth remembering that back in the days before autopilots an airline pilot would spend many hours doing very little except flying straight and level, and even if that activity didn’t degrade their skills in dealing with unlikely and complex situations, neither did it prepare them for those situations. That’s exactly why pilots rehearse the actions required in emergencies over and over again, so that the responses become (yes) automatic when an emergency suddenly happens in the middle of normal level flight. If that practice isn’t continued then the skills will degrade, but that’s not because of automation: 99% of the situations that pilots practise for never happen at all, autopilot or no autopilot.

Harford also ignores the basic statistics of any automation situation: on balance, automation is usually introduced because it decreases the total number of accidents and their severity, and/or allows less highly skilled operators to do things that previously required greater skill and training, and/or reduces the manual and cognitive load on skilled operators so that they can concentrate on the important stuff. After you add automation to a system its failure modes and their severity will change, but the whole point of adding the automation in the first place is to increase net safety or usability or convenience. If automation makes the system MORE dangerous or less usable, then obviously you have a problem. An obvious question to ask is whether the number of air crashes (and non-fatal accidents) has gone up or down since the introduction of various kinds of automation, but Harford doesn’t touch on this at all.

From a single example of an air crash, Harford draws a very long bow to suggest that this combination of automated systems and failure scenario is going to play out in many other situations, with similarly disastrous consequences.

From there he goes on to make an even more amazing statement: “We fail to see that a computer that is a hundred times more accurate than a human, and a million times faster, will make 10,000 times as many mistakes.” Seriously. In full context:

“For all the power and the genuine usefulness of data, perhaps we have not yet acknowledged how imperfectly a tidy database maps on to a messy world. We fail to see that a computer that is a hundred times more accurate than a human, and a million times faster, will make 10,000 times as many mistakes. This is not to say that we should call for death to the databases and algorithms. There is at least some legitimate role for computerised attempts to investigate criminal suspects, and keep traffic flowing. But the database and the algorithm, like the autopilot, should be there to support human decision-making. If we rely on computers completely, disaster awaits.”

I’m not sure how this follows in any way from the previous argument, but there it is, a mishmash of straw-man and total illogic.
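To be fair, the 10,000 figure isn’t conjured out of thin air: if “accurate” means errors per decision and “faster” means decisions per unit time, then the number of mistakes made per hour goes up with speed even as the error rate goes down. Here’s a back-of-the-envelope sketch of what I take to be the intended arithmetic (my reconstruction, not Harford’s), with rates made up purely for illustration:

```python
# A rough reconstruction (mine, not Harford's) of the "10,000 times as many
# mistakes" arithmetic, using made-up illustrative rates.

human_error_rate = 1 / 100          # one error per hundred decisions
human_decisions_per_hour = 1_000    # decisions made per hour

computer_error_rate = human_error_rate / 100                         # "a hundred times more accurate"
computer_decisions_per_hour = human_decisions_per_hour * 1_000_000   # "a million times faster"

human_mistakes_per_hour = human_error_rate * human_decisions_per_hour            # 10
computer_mistakes_per_hour = computer_error_rate * computer_decisions_per_hour   # 100,000

print(computer_mistakes_per_hour / human_mistakes_per_hour)  # 10000.0
```

That, at least, is my best guess at the arithmetic; the article doesn’t spell it out, and nothing in the preceding autopilot story establishes that the computer should be left making all of those decisions unsupervised in the first place.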

Next Harford embarks on an argument that embodying any skill or knowledge in an algorithm is potentially dangerous. He quotes a psychologist, Gary Klein, who says:

“When the algorithms are making the decisions, people often stop working to get better. The algorithms can make it hard to diagnose reasons for failures. As people become more dependent on algorithms, their judgment may erode, making them depend even more on the algorithms. That process sets up a vicious cycle. People get passive and less vigilant when algorithms make the decisions.” Harford later remarks “It is possible to resist the siren call of the algorithms.”

I’m not sure where to start on this. I’ll simply observe that it appears that if we can stamp out and eliminate algorithms then the world will probably be a better place. I propose the following procedure:

1. Keep an eye out for any algorithm.
2. If you see one, check if it’s big or small.
3. If it’s big, call for help or use a sledgehammer.
4. If it’s small, use an ordinary hammer.
5. Whack the algorithm until it’s all gone.
6. Go to 1.

If only someone could tell me what they look like…

The article finishes with another long case study, in which a Dutch traffic engineer solved the problem of an increasing number of accidents on the road through a village by deliberately making the road more complex and adding a “squareabout”. The basic argument was that instead of adding information and instructions and making driving easier, the solution was to make driving hard enough that people had to slow down and pay attention. I think the intended connection between this and the rest of the article is that road signs and better roads are somehow analogous to automation, and that they reduce drivers’ attention, but it’s not entirely clear. All in all, although the article raises some interesting issues, I think the arguments it makes are pretty terrible and the conclusions it draws are entirely unwarranted.
