Experience in the aircraft industry suggests that there are likely to be a number of effects arising from increasing levels of automation in cars:
as drivers make more use of semi-autonomous and autonomous driving modes, their skill levels will drop
automation will not eliminate human error, but will create opportunities for new kinds of error arising from miscommunication between human and machine
there will always be just enough control left to the human that blame for any accident can be laid at their door
Present semi-autonomous systems require the human to be ready to reassume control. Unless this is a planned handover, the driver will have to do so at a moment's notice when the automated system has failed to control the vehicle, for whatever reason, at which point the driver may well not be in a position to respond correctly, or may respond too slowly. The 2009 crash of Air France flight 447 happened as a consequence of the crew responding incorrectly when the autopilot suddenly handed over to manual control. Over 4 minutes elapsed between the initial alarm sounding (handing the plane over to manual control) and the plane hitting the sea; a driver may have only fractions of a second to take corrective action. One of the recommendations after the loss was that pilots should spend more time flying the aircraft manually.
Automation for cars has one big advantage over aircraft: in principle the car can come to a stop (preferably not in the outside lane of a motorway). But it has one big problem: at 70 mph on a multi-lane motorway there is generally little time or room for mistakes.
Although many consumers consider themselves to be technologically sophisticated, they generally lack any understanding of how the technologies they use actually work. For semi- or fully autonomous cars, this leaves them unable to assess the risk that the technology may present, and may also lead them to misunderstand the limitations of the systems the vehicle is using.
The experience in the aircraft industry (and others) is that where humans interact with control systems, there is a tendency to always blame the human if something goes wrong. During Google's testing over the last few years there have been a number of minor collisions, which Google state were always the fault of the human driver of the other vehicle. But collisions often result from drivers misjudging each other's intentions: could it be that the Google cars contributed to these accidents by behaving in unusual or unexpected ways? A simple example of this might be a driver who drives unusually slowly, is extremely over-cautious, or does not react as expected.
In February 2016, Google reported that one of their Lexus vehicles had been involved in a low-speed collision with a bus. In manoeuvring into another lane to avoid an obstacle in its own lane, the Lexus software presumed that the bus would give way; it didn't. Google are quoted as saying: 'We clearly bear some responsibility, because if our car hadn't moved there wouldn't have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.' This, together with the rest of the statement, leaves the clear impression that neither Google nor their test driver, who apparently could have intervened but chose not to, accepts full responsibility for the accident.
Recent adverse publicity about problems with Tesla's Autopilot system has led some commentators to blame the driver for misusing the system, rather than acknowledging that the system may have limitations, or that it may have been released too early. To quote from an article published by The Guardian (21/10/2015): 'Tesla is very clear about the fact that the driver is responsible for the car at all times and should be actively in control, despite the AutoPilot system: it will be the driver's fault, not Tesla's if the car ends up in a road traffic collision.'
However sophisticated automated systems may be, they cannot (yet) allow for every scenario that might crop up. Rather than trying to supplant human operation, automation should be used to help the driver, not to make them over-reliant or complacent.
The ‘Automation Paradox’ holds that the more automated the system, the more critical is the contribution of the human operator.
Wiener, an expert on aviation human factors [Human Factors in Aviation (Cognition and Perception), E. L. Wiener (Editor), et al.], drew up a list of 'laws' drawing on his experience, amongst which are:
every device creates its own opportunity for human error
exotic devices create exotic problems
whenever you solve a problem you usually create one; you can only hope that the one you created is less critical than the one you solved