“A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.” [Douglas Adams, ‘Mostly Harmless’]
Introduction
The rapidly advancing technologies for vehicle control systems have reached the point where it now seems possible that a truly autonomous car could become a reality. Autonomous cars, it seems, are coming, but it may be a question of later rather than sooner, and it may depend on what one defines as ‘autonomous’.
A fully autonomous car that requires no human to be present and is safe for routine use is very likely to be a lot further away than the vehicle manufacturers may be inviting us to believe. The 80/20 rule suggests that the final part of a task demands a disproportionate share of the effort, and it is the part where failure is most likely.
Cars with limited semi-autonomous operation are already running on our roads. The question is whether the manufacturers are behaving ethically in providing systems open to possible casual abuse. We have Tesla’s approach of seeming to use buyers of their vehicles to ‘beta test’ their software. We have Volkswagen apparently taking the approach that if something is not expressly illegal then it can be considered legal. This is the company that rigged around 3 million of its vehicles to ‘cheat’ emissions testing in the US.
Just have a look at the video at this link. This is a real issue with current semi-autonomous systems; it has happened for real more than once. And it isn't just a problem for Tesla: the Volvo 'Pilot Assist' appears to suffer from the same problem (see this).
What do we mean by autonomous? As with most attempts at classification, there is actually a gradation from what anyone would agree is not autonomous through to something that most reasonable people would accept as being autonomous.
In the US, the National Highway Traffic Safety Administration (NHTSA) suggests the following formal classification system (there is a similar framework proposed by SAE, which has six levels):
Level 0: The driver completely controls the vehicle at all times
Level 1: Individual vehicle controls are automated, such as electronic stability control or automatic braking
Level 2: At least two controls can be automated in unison, such as adaptive cruise control in combination with lane keeping
Level 3: The driver can fully cede control of all safety-critical functions in certain conditions. The car senses when conditions require the driver to retake control and provides a "sufficiently comfortable transition time" for the driver to do so
Level 4: The vehicle performs all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time. As this vehicle would control all functions from start to stop, including all parking functions, it could include unoccupied cars
Obviously all cars are at least at level 0, with many newer cars at level 1 and a few at level 2. The classification system makes no explicit reference to responsibility.
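Purely as an illustrative sketch, the levels above can be modelled as a simple ordered enumeration; the names and the monitoring rule below are my own shorthand for the NHTSA descriptions, not any official API:

```python
from enum import IntEnum

class NhtsaLevel(IntEnum):
    """NHTSA automation levels as described above (illustrative names only)."""
    DRIVER_ONLY = 0           # driver completely controls the vehicle at all times
    FUNCTION_SPECIFIC = 1     # individual controls automated (e.g. automatic braking)
    COMBINED_FUNCTION = 2     # at least two controls automated in unison
    LIMITED_SELF_DRIVING = 3  # driver can cede control in certain conditions
    FULL_SELF_DRIVING = 4     # vehicle handles the entire trip, driver not expected to act

def driver_must_monitor(level: NhtsaLevel) -> bool:
    """At levels 0-2 the driver remains responsible for monitoring at all times;
    only from level 3 upward does the car take over the safety-critical role."""
    return level <= NhtsaLevel.COMBINED_FUNCTION

print(driver_must_monitor(NhtsaLevel.COMBINED_FUNCTION))     # True
print(driver_must_monitor(NhtsaLevel.LIMITED_SELF_DRIVING))  # False
```

The ordering makes the key discontinuity explicit: responsibility shifts between levels 2 and 3, which is exactly where, as noted above, the classification system itself is silent.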
A fully autonomous car might be defined as a vehicle that can plan and execute any trip that a human driver would be capable of, given only the required destination. That would include, for example:
being able to find a suitable parking place at the destination
being capable of driving on roads for which the vehicle does not have pre-existing detailed mapping
refuelling as required
being able to travel on any road that a human driver would be capable of negotiating
being able to travel in all reasonable weather conditions
This is possibly the level of functionality that many people might imagine, but is arguably well beyond anything that is possible at the moment. For example, systems under development depend on detailed prior mapping and do not work well in poor weather.
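The definition above is effectively a checklist: a vehicle qualifies as fully autonomous only if it meets every requirement. As a sketch (the capability names are hypothetical labels for the list above, not any real system's feature flags):

```python
# Hypothetical capability labels for the "full autonomy" checklist above.
FULL_AUTONOMY_REQUIREMENTS = {
    "finds_parking_at_destination",
    "drives_roads_without_prior_mapping",
    "refuels_as_required",
    "handles_any_road_a_human_could",
    "operates_in_all_reasonable_weather",
}

def is_fully_autonomous(capabilities: set) -> bool:
    """Full autonomy is all-or-nothing: every requirement must be met."""
    return FULL_AUTONOMY_REQUIREMENTS <= capabilities  # subset test

# A system that depends on detailed prior mapping and fails in poor weather
# (as current systems under development reportedly do) falls short:
current_system = {"finds_parking_at_destination", "refuels_as_required"}
print(is_fully_autonomous(current_system))  # False
```

The all-or-nothing subset test mirrors the argument of this section: meeting some of the requirements, however impressively, does not make a car autonomous by this definition.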
Cars carry out a number of functions automatically, but that has nothing to do with autonomy. Cars used to have a manual choke and foot-operated washer systems; very early cars had manual ignition advance and retard, and even manually operated windscreen wipers. Automating control functions or adding features such as ABS (anti-lock braking system), rear-view mirrors that dip automatically, or parking sensors does not create autonomous functionality; these are driver aids. More of these systems continue to be added, and will be added to more vehicles in coming years, such as rear-view cameras and ignition interlock devices (to prevent driving under the influence of alcohol). Where perhaps the water gets a little murkier is when considering features such as automatic braking or self-parking.
Trying to classify these is as pointless as trying to decide whether a particular shade of grey is black or white; it is simply a question of degree. What is clear is that some car systems are starting to exhibit some characteristics that might be associated with autonomous operation. These types of systems are generally termed ‘advanced driver assistance systems’ (ADAS).
The key tipping point, though, must come when the driver is no longer responsible for the operation of some part of the key functionality of the car (steering, speed, acceleration, braking, adherence to legal requirements, and the rules of the road). We might also add route selection to that list (at the moment sat-nav systems require that the driver accept responsibility).
Perhaps one problem is that some drivers may think that this point has been reached for some functions in some level 2 cars.
Current Mercedes models (such as the 2015 Mercedes S-Class) have cameras and radar systems and allow hands-off driving. There is a somewhat revealing video of hands-off operation at ref. 9 (September 2014). The driver (a sales manager at the motor sales company) demonstrates hands-off operation of the car whilst texting with both hands on his mobile phone. Perhaps this is acceptable in the USA, but in the UK it is a traffic offence. In fact, texting on a handheld mobile whilst driving is banned in most US states, but talking on one is banned in only 14.
According to the New York Times, there are no rules in the USA that require drivers to keep their hands on the steering wheel (except in New York, which requires one hand). The NY Times quotes a Volkswagen representative: “Where it’s not expressly prohibited, we would argue it’s allowed” (Anna Schneider, vice president for governmental relations at Volkswagen).
The Tesla Model S has an ‘autopilot’ function that will accelerate, brake and steer for the driver, but the driver does not need to keep their hands on the steering wheel, despite Tesla’s assertion that the driver should remain ‘in control’. The car can also change lanes, initiated by the driver flicking the indicator and touching the steering wheel. Reports on the internet show that drivers are already abusing this system (see 1). A Tesla company spokesperson, Khobi Brooklyn, is reported as saying: “It’s so cool to see Model S owners get out there and use this groundbreaking technology. The more people who use it, the better it will get”. See also refs. 2 and 3, which are reports on using the Tesla autopilot on the autobahn in Germany and on the M4 in the UK.
The autopilot feature was made available in October 2015; in January 2016 Tesla announced that the software would be modified to restrict its operation. Did Tesla release this safety critical software too early, with inadequate testing and inadequate evaluation of the software under real world conditions? It would seem from articles on the internet that Tesla use owners to ‘beta test’ their software. Is this really the approach a responsible car manufacturer should take?
An important question with these systems is whether the driver, even if not abusing the system, can really be ‘in control’ when at best all they are doing is monitoring the car’s operation, and at worst, paying no particular attention at all.
There seems to be a growing problem with car manufacturers providing functionality which is encouraging drivers to push the boundaries of legal driving, whilst they require that the driver must retain control. At the end of the Mercedes video on the Distronic system is the following warning:
DISTRONIC does not recognize the curvature of the road
it might not detect narrow vehicles like motorbikes or vehicles in adjacent lanes
it cannot detect pedestrians or react to stationary objects
you shouldn’t use any type of cruise control on slippery or curving roadways, nor on city streets
It is arguable that such level 2 features are inherently dangerous, since they do not require or enforce the driver actually being in control. It will not be long (if it has not already happened) before someone is killed through the use or abuse of this type of ADAS.
This is a view that some in the car industry seem to share. A Jaguar engineer (who we must assume also reflects the view of the company) is reported to be concerned that such systems create a false sense of security for drivers, because the technology is neither foolproof nor sufficiently reliable. Jaguar seem to accept that semi-autonomous technology is dangerous and that Tesla’s implementation of Autopilot was "very irresponsible" (4).
VW’s research chief (Jürgen Leohold) is reported as saying that he does not expect fully autonomous cars within 20 years (i.e. before 2035). Renault-Nissan is promising that by 2020 they will be selling cars that can ‘negotiate hazards and change lanes’ on the motorway, and navigate heavy urban traffic, all without any driver input.
There would seem to be an arguable case that at least some motor manufacturers are already being irresponsible in their approach to introducing autonomous features. Are the car manufacturers being seduced by technology? One could be excused for suspecting that the push to more autonomy is driven by a timetable defined by commercial considerations, as the various car manufacturers jostle for position in the market, and that this is at the expense of safety. Given the recent disclosure that Volkswagen deliberately rigged their cars to cheat emissions testing, how sure can we be that other corners will not be cut, or that the general population will not be used to ‘beta test’ semi- and fully autonomous cars? Tesla seem to do that already.
The manufacturers may want to keep in mind the lesson of the de Havilland DH 106 Comet, a beautiful and brilliantly designed aeroplane and the first commercial jet airliner, which in the form of the Hawker Siddeley Nimrod was still flying 60 years after its first flight, but which was a commercial failure.
Catastrophic metal fatigue, initiated at the corners of its square windows, caused the fuselage to break up in flight. The plane pushed forward aircraft design, but also the understanding of metal fatigue and accident investigation procedures. The Challenger disaster is another salutary lesson: a misunderstanding of risk and the pressure to push forward a schedule led to the deaths of the seven crew members.
Rushing to push out semi and fully autonomous cars is putting vehicle manufacturers into uncharted territory.
It is easy for consumers to be beguiled by new technology, but they generally lack any comprehension of how the technologies function and consequently are unlikely to fully understand their limitations. The lesson from the aircraft industry appears to be that any accident will be blamed on the human present, not on the design of the automated system.
The manufacturers all maintain that the safe operation of these semi-autonomous vehicles is the responsibility of the driver, but is it really ethically acceptable for manufacturers to provide a driving system apparently so open to misuse? Are governments doing enough to regulate the safe deployment of these technologies?
In the medium term it is far more likely that only conditionally autonomous cars will be operating and that these will be largely or solely restricted to motorways or specially designed environments.
Car manufacturers imply that autonomous cars are the only way to deliver better road safety, but the performance of countries like Sweden and the UK shows how much can already be achieved, and raises the question of why road safety is so much poorer in most other countries. The admittedly crude analysis presented here suggests that there is every reason to suppose that death rates on UK roads could be drastically reduced by enforcement of existing driving regulations and improved driver aids. The trickle-down of driver aids to mass-market vehicles, including the adoption of sophisticated systems based on lidar and radar, will reduce accident rates.
There seems little or no hard evidence to support the notion that autonomous cars will be intrinsically safer than most human drivers, and it could well be that undue haste to bring technologies for autonomous driving to the market place risks unforeseen or underestimated safety problems.
Perhaps there are signs of hubris in the development of this technology. There is a distinct possibility that one or more vehicle manufacturers will suffer what could be an existential loss from rushing to deploy autonomous technologies.
Is handing over control a wise move?