The National Engineering Policy Center of the Royal Academy of Engineering in the UK recently organized a half-day of discussions on the safety and ethics of autonomous systems at the Royal Aeronautical Society in London. ARC attended the meeting, where professionals and academics active in the fields of engineering, safety and security, regulation and policy making, ethics, and socio-technical systems gathered to discuss relevant issues. These included the risks and benefits associated with autonomous systems, the step change required to mitigate risks and maximize benefits, and the mechanisms that can help manage the transition.
Autonomous systems were defined as systems making decisions in complex environments without human input. Such systems can have a physical manifestation (for example, a robot) or can be purely software-based (for example, a trading algorithm). The degree of machine control can vary from delegation of specific tasks executed autonomously to full machine autonomy. The impact on the environment can vary from low, as in the case of a warehouse drone, to very high, as in the case of a robot providing health care. Applications vary widely: rail systems, maritime applications, space and defense, autonomous vehicles on the road for passenger or goods transportation, or off-road use in agriculture, warehouses, industrial manufacturing, and more.
Experts agreed that evaluating risk levels and failure modes will be more challenging for autonomous systems than for conventional systems, because situations will arise that were not part of the data used to train the decision-making algorithms. Moreover, models of the environment and of the physical system will always be inaccurate to some degree. These are familiar challenges for process automation professionals who apply model-predictive control. Experts also agreed that – as with traditional systems – verification and validation are still required. However, questions were raised about which specific system requirements should be qualified. According to one expert, regulations should also include ethical principles to make it possible to qualify that aspect. Panelists raised the question of how algorithms could be tested for their capability to handle unknown situations. Generative and/or simulation-based testing is regarded as the only feasible option to cover high-risk modes of operation. Sufficient experiments are needed to reach an acceptable level of confidence about residual risk. ARC believes that algorithms should also be tested and designed for their intrinsic mathematical properties. One example would be bounded-derivative networks applied in model-predictive control of process plants, conceived when standard neural networks proved inadequate.
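The link between the number of simulated experiments and confidence in residual risk can be made concrete with a back-of-the-envelope calculation. The following sketch is illustrative only: the pass/fail oracle and the assumed failure rate are hypothetical placeholders, not anything presented at the event. It uses the classic "rule of three" bound for the zero-failure case to show how many trials are needed to claim a given residual-risk level.

```python
import math
import random

def run_simulated_trial(seed: int) -> bool:
    """Hypothetical pass/fail oracle: returns True if the autonomous
    system handled one randomly generated scenario safely. A real
    campaign would run a high-fidelity simulation here."""
    rng = random.Random(seed)
    return rng.random() > 1e-4  # assumed (illustrative) true failure rate

def upper_bound_failure_rate(trials: int, failures: int,
                             confidence: float = 0.95) -> float:
    """One-sided upper confidence bound on the failure probability."""
    if failures == 0:
        # Exact bound for zero observed failures; approx. 3/n at 95%
        # confidence -- the "rule of three".
        return 1.0 - (1.0 - confidence) ** (1.0 / trials)
    # Simple normal approximation once failures have been observed
    # (z = 1.645 matches the default one-sided 95% confidence level).
    p = failures / trials
    return p + 1.645 * math.sqrt(p * (1.0 - p) / trials)

n = 30_000
failures = sum(0 if run_simulated_trial(s) else 1 for s in range(n))
print(f"{failures} failures in {n} trials; 95% upper bound on "
      f"residual failure rate: {upper_bound_failure_rate(n, failures):.2e}")
```

The takeaway matches the panel's point: even with zero observed failures, roughly 3/p trials are needed to bound the residual failure rate below p, which is why exhaustive real-world testing of rare high-risk modes is impractical without generative or simulation-based campaigns.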
Should it be possible to trace back the rationale for decisions made by algorithms? This would support performance analysis and help improve these algorithms. ARC believes that an algorithm that can express the rationale of a decision in an understandable manner is far more likely to be widely adopted than a black-box algorithm; consider how often operators simply turn off MPC applications when they don’t understand the actions of the controllers. Trust can only exist if there is a degree of cooperation, explained one expert. System design must therefore consider the intent of the system; the impact on different types of actors (users, operators, service providers, etc.) and on the environment; and the interactions among all of these. “Who must be protected against what? The system against the users, or the user against the system?” this expert asked, to illustrate how system design shapes the degree of cooperation and trust. In summary, disruptive assurance is needed, based on principles such as those discussed above.
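The adoption argument can be illustrated with a toy pattern. The controller, its thresholds, and its actions below are invented for illustration and do not describe any real MPC product: the point is simply that a decision object carrying a machine-generated rationale lets an operator judge an action instead of switching the controller off.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str
    rationale: list = field(default_factory=list)  # human-readable trace

def set_valve(temperature_c: float, pressure_bar: float) -> Decision:
    """Toy rule-based controller that records why it acted.
    All thresholds are illustrative, not from any real plant."""
    if pressure_bar > 8.0:
        return Decision("open_relief_valve",
                        [f"pressure {pressure_bar} bar exceeds 8.0 bar limit"])
    if temperature_c > 90.0:
        return Decision("increase_cooling",
                        [f"temperature {temperature_c} C above 90 C setpoint"])
    return Decision("hold", ["all measurements within normal bands"])

d = set_valve(95.0, 6.5)
print(d.action, "because:", "; ".join(d.rationale))
```

An operator reading the printed rationale can accept or override the action on its merits; a black-box controller emitting only `increase_cooling` offers no such basis for trust.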
Regulation and standardization were discussed in depth. Since 2016, the IEEE has been working on the P7000 series of standards for ethically aligned design of artificial intelligence systems. These sources should be used widely. Rigid regulation is widely considered to stifle innovation and economic development. Regulation must therefore anticipate technological developments and maintain a dialog with innovators, industrial players, and society, so that it can adapt over time and find a middle ground without compromising economics, the environment, or safety. In the UK, the approach is to issue “code of practice” documents that are not legally binding but provide guidance to innovators and operators during the development and testing phases of autonomous systems. Over time, these documents can develop into regulations. A participant argued that standards are created in commissions by people who can afford to attend the meetings, pointing to the large industrial players dominating standardization bodies. Technology startups, on the other hand, are seldom heard or involved. Regulation, he argued, is a way to restore the balance and give power back to small, innovative companies. From this perspective, innovation and regulation become compatible. Finally, regulation requires a governance process to make sure it is respected.
ARC recommends that players in industrial automation, smart mobility, health care, and cities engage with regulators, policy makers, academics, and society at large to participate in the dialogue rather than be surprised by its outcomes. The stakes are significant from both societal and economic points of view.
“Engineering ethics in practice, a guide for engineers” provides information on ethics in engineering. The four main ethical principles the document distinguishes are: accuracy and rigor; honesty and integrity; respect for life, law, and the public good; and responsible leadership.
The donation of 150 million British pounds to the University of Oxford by US billionaire, philanthropist, and businessman Stephen Schwarzman to investigate the ethics of artificial intelligence is an indication of the attention the topic is receiving.