Like industrial process control, vehicle autonomy uses sensors, executes control and safety algorithms, and drives output commands to safely meet objectives. There are, however, important differences in how control & safety algorithms work for vehicles, with implications for determining safety integrity levels. To start, process plants are built at a fixed location and have a relatively limited set of upset conditions. Land vehicles must travel on diverse road surfaces and must act appropriately when encountering a wide range of obstacles and situations. Automation starts with sensors, and land vehicle autonomy requires complex sensors, not only simple process variable sensors for speed, flow, temperature, pressure, and level. Land vehicles require situational awareness that must come from sensors such as cameras (including infrared), LIDAR, RADAR, GPS, microphones, and ultrasonic sensors. Processing the information from autonomous vehicle sensors requires far more complex algorithms than those normally found in industrial process control, and the nature of this processing makes predicting safe behavior a difficult statistical problem.
In July 2019 a coalition of 11 companies — Aptiv, Audi, Baidu, BMW, Continental, Daimler, Fiat Chrysler Automobiles, Here, Infineon, Intel, and Volkswagen — published a whitepaper, “Safety First for Automated Driving” (SaFAD for short). The paper describes a framework for the development, testing, and validation of “safe” autonomous vehicles. It was produced to fill gaps in ISO 26262 and to help state, federal, and international agencies develop rules and regulations. The bottom line of the SaFAD document: validation testing must be based on statistics gathered by operating in the actual environment. This is distinctly different from industrial safety validation, which can perform validation testing of the control & safety systems against a process simulator. Some estimates indicate vehicle autonomy could cut the number of crashes by 90%.
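To illustrate why statistics gathered in the actual environment make validation so demanding, the sketch below applies a standard zero-failure Poisson bound (a textbook statistical result, not something taken from SaFAD) to estimate how many failure-free miles would be needed to claim a failure rate below a target with a given confidence. The 1-per-100-million-miles figure is illustrative only.

```python
import math

def miles_required(target_rate_per_mile: float, confidence: float = 0.95) -> float:
    """Miles of failure-free operation needed to show, at the given
    confidence, that the true failure rate is below target_rate_per_mile.
    Uses the exact zero-failure bound for a Poisson process:
    n >= -ln(1 - confidence) / rate."""
    return -math.log(1.0 - confidence) / target_rate_per_mile

# Illustrative target: on the order of 1 fatal crash per 100 million miles
human_level_rate = 1.0 / 100_000_000
print(f"{miles_required(human_level_rate):.3g} failure-free miles needed")
```

Even this crude bound lands in the hundreds of millions of miles for a single software version, which is why simulator testing alone, as used in industrial safety validation, cannot close the statistical argument for road vehicles.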
Vision systems typically require the fusion of multiple sensing technologies and adaptation to sensor blind spots, degradation, and failure. Land vehicle autonomy is based on machine learning, using extensive data gathering and model development to create the pattern-recognition neural network control algorithms that are downloaded into the vehicle for operation. Drone autonomy is much less demanding; it is developing with help from NASA’s UTM (Unmanned Traffic Management) program using GPS-based ADS-B (Automatic Dependent Surveillance-Broadcast). Deterministic systems like ADS-B are inherently easier to test and certify than AI & machine learning algorithms.
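A minimal sketch of the fusion-with-degradation idea mentioned above, using inverse-variance weighting (one common fusion scheme; the sensor readings and noise figures here are hypothetical, and production systems are far more elaborate):

```python
from typing import Optional

def fuse(readings: list[tuple[float, float]]) -> Optional[tuple[float, float]]:
    """Inverse-variance fusion of independent (value, variance) estimates.
    A failed or blinded sensor is simply omitted from the list, so the
    estimate degrades gracefully instead of disappearing."""
    if not readings:
        return None  # total sensing failure: no estimate available
    weights = [1.0 / var for _, var in readings]
    value = sum(w * v for (v, _), w in zip(readings, weights)) / sum(weights)
    return value, 1.0 / sum(weights)

# Hypothetical range-to-obstacle readings: (metres, variance)
radar = (42.0, 4.0)   # radar: noisier, but robust in bad weather
lidar = (40.0, 1.0)   # lidar: precise in clear conditions
print(fuse([radar, lidar]))  # fused estimate pulled toward the lidar value
print(fuse([radar]))         # lidar blinded: fall back to radar alone
```

The fused variance is smaller than either sensor's alone, which is the payoff of fusion; the fallback path is the "adaptation to degradation and failure" part.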
When vehicle autonomy requires artificial intelligence & machine learning algorithms, the testing for safety verification and validation is fundamentally changed. Like human drivers and pilots, AI/ML algorithms need to be trained and experienced, and autonomous vehicle crashes are showing just how extensive that training needs to be. Predicting the paths and intentions of pedestrians, bicycles, other motor vehicles, and construction equipment, and handling human hand signals, unique construction site situations, road damage, downed tree limbs, flooded roads, and various snow and ice conditions, are all part of human learning; including such pattern recognition in vehicle software implies a massive training regime. Each new version of autonomous software may improve the response to one particular hazard, but did that new version degrade a previously tested behavior? Regression testing is a potentially expensive part of software testing for vehicle autonomy.
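The regression concern can be sketched as a simple check over a fixed scenario library (the scenario names and pass/fail scoring here are hypothetical): a candidate software version is flagged if it fails any scenario the baseline version handled safely, even when it fixes others.

```python
def regressions(baseline: dict[str, bool], candidate: dict[str, bool]) -> list[str]:
    """Scenarios the baseline handled safely that the candidate now fails.
    True means the version responded safely in that recorded scenario."""
    return sorted(s for s, ok in baseline.items()
                  if ok and not candidate.get(s, False))

baseline = {"pedestrian_crossing": True, "flooded_road": False, "hand_signal": True}
candidate = {"pedestrian_crossing": True, "flooded_road": True, "hand_signal": False}

# The new build fixes flooded_road but breaks hand_signal:
print(regressions(baseline, candidate))  # ['hand_signal']
```

The expense comes from the scale: each release must be re-scored against the entire scenario library, and for statistical validation, some of that re-scoring has to happen in the real operating environment.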
Two main factors are working in autonomy's favor. The first is the scale of the effort and the talented people working on the problem in real-life test environments. The second is that humans have set a relatively low bar: roughly 95% of car accidents and nearly 75% of pilot accidents are due to human error, and certain restricted autonomy missions are likely to show conclusively that lives can be saved and insurance premiums reduced. Certifiable autonomy of vehicles used for a particular task will be based on exceeding the safety performance of human operators in the actual operating environment.
Package delivery and transportation service suppliers that can deliver safely without expensive labor costs have a significant competitive advantage. Development efforts for automating land and air vehicles are underway on a large scale, and many organizations and teams are working with regulatory agencies on field testing trials and local approvals. In the US, the FAA, the National Highway Traffic Safety Administration (NHTSA), and various state and city organizations are involved in approving limited vehicle testing and, eventually, autonomous operation in a wider field of operation. Numerous stakeholders are involved in such regulations.
The certification of autonomous vehicles is the key to widespread adoption of self-driving cars, trucks and aircraft.
In the past we have certified human land vehicle drivers and aircraft pilots with operating licenses that define which vehicles they operate and where they can operate them. With vehicle autonomy we still need to certify, but now the certification relates to the automation of the vehicle for its intended use. Safe operation needs to be designed, built, verified, and validated to achieve vehicle certification. It is highly likely that certification will place restrictions on the location and the allowable conditions of operation. The race to achieve full vehicle autonomy (Level 5) is an enormous technical challenge, with many competitors, collaborators, and stakeholders. The move to autonomy will take time, but the changes will eventually be profound.