What are the Risks with Autonomous Driving?


The present state of play is that autonomy requires extensive and accurate prior mapping of the route. Until this constraint is overcome, autonomous cars will be limited to routes that have been mapped. For example, Volvo’s present vision for autonomous driving is that it would only be permissible within authorized areas, typically motorways, with driving outside these areas being manual.

The systems do not operate well, or at all, in heavy rain and snow. As long as this is the case, it is very difficult to see how autonomous operation could be safely deployed.

Many people appear to buy into a vision of an automated system as being inherently low risk or even zero risk. Operation depends on the correct functioning of an array of sensors, and on the computer software and hardware that analyses and interprets the sensor data and makes decisions based on it. The system may not get tired or distracted, but it is open to other failure modes instead.

The overall computational task breaks down into two distinct activities: data interpretation and decision making. The data interpretation (such as recognizing and classifying objects) is executed by sophisticated algorithms, so-called ‘deep learning’ techniques based on complex multi-layered neural networks. These systems utilize data from the different sensor systems on the vehicle (sensor fusion) to generate an overall representation of the surrounding environment, including the classification of detected objects. They are capable of learning, in that they can tune themselves to improve their overall performance, and it is this learning process that allows companies to gradually improve the operation of their autonomous vehicles. What is less clear is how the decision-making processes are implemented.
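To make the idea of sensor fusion concrete, the sketch below shows one simple, purely illustrative approach (so-called late fusion), in which each sensor pipeline reports per-class probabilities for a detected object and the results are combined by a confidence-weighted average. All names and figures here are hypothetical; production systems typically fuse data at a much lower level, inside the neural networks themselves.

```python
# A minimal late-fusion sketch (illustrative only): each sensor pipeline
# reports per-class probabilities for a detected object, and the fused
# estimate is a confidence-weighted average of those distributions.
from dataclasses import dataclass

CLASSES = ["pedestrian", "vehicle", "cyclist", "unknown"]

@dataclass
class SensorReport:
    sensor: str        # e.g. "camera", "lidar", "radar"
    confidence: float  # how much we trust this sensor right now
    class_probs: dict  # class name -> probability

def fuse_classifications(reports):
    """Combine per-sensor class probabilities into one distribution."""
    fused = {c: 0.0 for c in CLASSES}
    total_weight = sum(r.confidence for r in reports)
    for r in reports:
        for c in CLASSES:
            fused[c] += r.confidence * r.class_probs.get(c, 0.0)
    return {c: p / total_weight for c, p in fused.items()}

reports = [
    SensorReport("camera", 0.9, {"pedestrian": 0.7, "cyclist": 0.2, "unknown": 0.1}),
    SensorReport("lidar", 0.6, {"pedestrian": 0.4, "vehicle": 0.3, "unknown": 0.3}),
]
print(fuse_classifications(reports))  # pedestrian comes out on top
```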

There is, however, one important characteristic of these systems: their complexity precludes direct understanding of exactly how the results are obtained, and they will have behaviours that are not predictable, reacting to unusual or novel input data in unforeseen ways. Is it possible that the interaction of autonomous cars with each other, or with human drivers, may give rise to new problems?

Different countries have different driving rules and road layouts, and these will require a different set of detailed responses from the car’s system. The basic driving task is not different, but the details are, and it will not be a simple question of tweaking the programming. The risk is that a system trained on (say) US roads, even if ‘retrained’ for UK regulations and practices, may retain residual behaviours that are inappropriate. This touches on the regulations that might be necessary to permit a given autonomous car model to be driven on (say) UK roads: it is not just the car’s construction that matters, but also the controlling software. How do regulators ensure that any given software system is safe and fit for purpose? How will regulators pass a driverless car as fit for purpose?

It seems likely that, as with other electronic devices such as a PC, there will be a number of different software systems, written by different companies, that are required to interact. PCs are not the most stable of computing platforms, but that may well be a reflection of their architecture and their use as generalized, multi-purpose workhorses that may be running any combination of software the user cares to install. However, it is not easy to dismiss the feeling that the software running autonomous cars may be susceptible, to some degree, to the same type of interaction problems that arise from running multiple processes on PCs.

The car manufacturers will have to rigorously review and determine failure modes, system redundancy and architecture, and look at the provision of fail-safe, independent back-up systems.

What are the risks of common mode failures that might disable the autonomous system?

Whereas demonstration vehicles will have no expense spared, production vehicles will be manufactured to a price specification; it may be uncomfortable to accept, but this will mean the manufacturer setting an acceptable failure rate (to put it bluntly, an acceptable accident rate). This type of approach is used in other industries, and is usually based on the additional risk to an ‘at risk’ individual not being significant when compared to other background risks.
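As a purely illustrative piece of arithmetic (every figure below is invented for the example), such a risk comparison might look like this:

```python
# Illustrative arithmetic only; all figures here are hypothetical.
system_failures_per_hour = 1e-8   # assumed critical-failure rate of the autonomous system
hours_driven_per_year = 400       # assumed annual driving exposure
added_risk_per_year = system_failures_per_hour * hours_driven_per_year

background_risk_per_year = 5e-5   # assumed background fatality risk, for comparison

print(f"Added risk per year: {added_risk_per_year:.0e}")                                # 4e-06
print(f"Fraction of background: {added_risk_per_year / background_risk_per_year:.0%}")  # 8%
```

The argument would be that an added risk of this order is small compared with the background; the uncomfortable part is that someone has to choose the threshold.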

In an environment where there are many cars with active systems such as LIDAR and radar, there is a risk of interference producing false or ghost images, although a range of techniques can be used to minimize this problem.
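One such technique, sketched below purely as an illustration, is pulse coding: the sensor tags its outgoing pulses with a pseudorandom signature and only accepts returns that correlate strongly with that signature, so another vehicle’s emissions are rejected as noise. Real implementations work on analogue, noisy, Doppler-shifted signals and are far more elaborate; everything here is a toy.

```python
# A toy sketch of one interference-rejection idea (pulse coding): accept
# only returns that match this vehicle's own pseudorandom pulse pattern.
import random

def make_signature(n_pulses=32, seed=None):
    rng = random.Random(seed)
    return [rng.choice([0, 1]) for _ in range(n_pulses)]

def correlation(sig, received):
    matches = sum(1 for a, b in zip(sig, received) if a == b)
    return matches / len(sig)

own_signature = make_signature(seed=42)
echo = list(own_signature)        # genuine return: same pattern comes back
ghost = make_signature(seed=7)    # another vehicle's unrelated pattern

for name, rx in [("echo", echo), ("ghost", ghost)]:
    c = correlation(own_signature, rx)
    print(name, "accepted" if c > 0.9 else "rejected", f"(corr={c:.2f})")
```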

Clearly the overall system reliability will be dependent in part on the sensor reliability. This can be addressed by having suitable levels of redundancy, but that raises its own problems. For example, if in a system of dual sensors (say two independent LIDAR systems) one detects an object but the other does not, which should you believe? If the presence of an object is not reported by a second independent system (say radar) that uses a different detection technique, there may be good grounds for assuming that there is indeed no object present, but the validity of that assumption may depend on what the object might be. False negatives could risk an accident; false positives could result in an unacceptable frequency of inappropriate actions (such as performing an unnecessary emergency stop). If there are three or more sensors, the same issue arises: should a simple voting system be used, or should detection of an object by any of the sensors be treated as real just to be safe?
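A minimal sketch of that dilemma, with entirely illustrative policies and data, might look like the following; note that neither policy resolves the underlying tension between missed objects and unnecessary stops.

```python
# A minimal sketch of the sensor-disagreement problem described above.
# Two policies are shown: simple majority voting, and a 'safe' policy
# that treats any single detection as real. All names are illustrative.

def object_present(detections, policy="majority"):
    """detections: list of booleans, one per independent sensor."""
    votes = sum(detections)
    if policy == "majority":
        return votes > len(detections) / 2
    if policy == "any":  # biases towards false positives
        return votes >= 1
    raise ValueError(f"unknown policy: {policy}")

readings = [True, False, False]  # e.g. lidar saw something, radar and camera did not
print(object_present(readings, "majority"))  # False -> risks a false negative
print(object_present(readings, "any"))       # True  -> risks an unnecessary emergency stop
```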

LIDAR and cameras rely on light transmission, and will be susceptible to being blinded by spray, mud, salt or driving snow – the systems for keeping the transmitting and receiving surfaces clean will be just as critical as any other link in the chain of control.

In aircraft control systems multiple flight computers are used. The Airbus A330/340 has 5 computers, any of which can control the aircraft, along with multiple sensors and actuators. The 5 computers consist of 3 primary and 2 secondary computers. The computers in each group are designed and built by different manufacturers, use different processor chips, and run different software developed independently. In addition, within each computer there are multiple ‘channels’ that run software written in different programming languages by different teams (#). Vehicle manufacturers will have to use this sort of approach to attain the level of redundancy and safety required for autonomous operation. Apparently Audi’s self-driving car will be based on an architecture where all computing will be performed by at least two independent processors; is that sufficient?
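In software terms this is N-version programming. The toy sketch below, built around a hypothetical braking-demand calculation, shows the essential pattern: several independently written implementations of the same computation are run and their outputs cross-checked, with a fail-safe response if they disagree.

```python
# A toy illustration of N-version redundancy in the spirit of the Airbus
# scheme: independently written 'versions' of the same control computation
# are run and cross-checked. The calculation itself is hypothetical.
from statistics import median

def version_a(speed, gap):
    return max(0.0, (speed * speed) / (2.0 * max(gap, 0.1)))

def version_b(speed, gap):
    gap = max(gap, 0.1)
    return max(0.0, 0.5 * speed ** 2 / gap)

def version_c(speed, gap):
    return max(0.0, speed * speed * 0.5 / max(gap, 0.1))

def voted_brake_demand(speed, gap, tolerance=0.01):
    results = [f(speed, gap) for f in (version_a, version_b, version_c)]
    mid = median(results)
    agreeing = [r for r in results if abs(r - mid) <= tolerance]
    if len(agreeing) < 2:
        raise RuntimeError("channel disagreement: hand over control / fail safe")
    return mid

print(voted_brake_demand(speed=20.0, gap=40.0))  # m/s and m in -> 5.0 m/s^2 demand
```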

It is envisaged that autonomous vehicles will be ‘connected’ to the internet; it is also certain that there will have to be a means for police officers or other authorized persons to interact directly with the vehicle (e.g. to issue a mandatory instruction), given that the vehicle may be empty or any occupant incapable of action. Thus one key risk is of the car’s software systems being subjected to some form of cyber attack by malicious individuals – and not just individuals, but states as well. According to Wikipedia: ‘The Internet security company McAfee stated in their 2007 annual report that approximately 120 countries have been developing ways to use the Internet as a weapon and target financial markets, government computer systems and utilities.’

Autonomous vehicles are an all too obvious target for hackers. There is therefore a simple question: how will the vehicle manufacturers ensure, and convince consumers and users of their products, that their vehicles are safe from cyber attack? Tesla, for example, have owners download software for their cars over the internet. If all car manufacturers do this, how long would it be before the system was vulnerable to malicious software?

The only way that the integrity of the software could be assured is to permit software changes only under the direct control of an appointed person (which would have to be the manufacturer’s representative) and to disallow any internet connection to the vehicle’s computer systems. It would probably also require the control systems to be completely independent of any other electronic system on the vehicle. The only way to avoid having to provide a means for a police officer to interact with the vehicle is to require that a competent person is always present in it.
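One standard building block for that kind of control is cryptographic code signing. The sketch below (assuming the third-party ‘cryptography’ Python package, with hypothetical data) shows the core idea: the vehicle holds only the manufacturer’s public key and refuses any software image whose signature does not verify.

```python
# A minimal sketch of manufacturer-signed updates: the vehicle verifies
# each software image against the manufacturer's public key before
# installing it. Real update systems add versioning, rollback protection
# and secure boot on top of this; the image below is hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Manufacturer side (the private key never leaves the manufacturer).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

firmware_image = b"...vehicle control software build 1.2.3..."
signature = private_key.sign(firmware_image)

# Vehicle side: verify before installing; reject anything else.
def install_update(image, sig):
    try:
        public_key.verify(sig, image)
    except InvalidSignature:
        return "REJECTED: not signed by the manufacturer"
    return "installed"

print(install_update(firmware_image, signature))                # installed
print(install_update(firmware_image + b"tampered", signature))  # rejected
```

Signing alone does not make over-the-air updates safe, but it does mean that altering the vehicle’s software requires possession of the manufacturer’s private key, not merely access to the connection.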

The attitude of the vehicle manufacturers to security risks may be open to question. In 2014 the BBC ran a report on how criminals were using electronic scanners to steal executive cars. The BBC said that this highlighted a major flaw in the security of leading luxury vehicles. The reported response from the car makers was that the problem was not a weakness in the security of their vehicles, but a failure to ban the sale of the scanner devices.

For a vehicle to be fully autonomous (no responsible occupant) it needs to be capable of reacting correctly to any problem that might arise. There are any number of possible scenarios, but for example, how would the vehicle cope with:

  • being flagged down by a police officer, and being given instructions by a police officer or other appointed person, especially where those instructions require the vehicle to operate counter to the normal driving rules

  • temporary traffic lights where a single carriageway may be restricted to a single lane, simple stop/go boards at road works, or visual instructions from a road worker

  • diversions and lane closures in motorway road works, in particular those that direct the car onto the opposite carriageway or require ad hoc selection of lanes (which may, for example, not permit exit at the next junction)

  • country lanes where there are no markings, no road furniture, no kerb, very tight bends, deep roadside ditches and soft verges

  • junctions where the right of way is undefined, or is in contradiction to any internal mapping the vehicle is using

  • traffic lights where one or more of the lights is out, or where the sequence is stuck

  • interaction with other drivers – intercommunication between drivers is often essential, so how would an autonomous car interact with a human driver?

  • roads with passing places, or roads that are too narrow for two cars to pass (where a human would assess the situation and arrive at an ad hoc solution)

  • in general, how well would an autonomous car cope with uncertainty?

  • how well can an autonomous vehicle anticipate?

(#) Airbus flight control system (2013), http://www.slideshare.net/sommerville-videos/airbus-fcs
