The following is a guest post by Jim Stuart of Lloyd's Register. Jim is SVP digital and software for Lloyd’s Register (www.lr.org/allassets) and has been leading technology companies for more than 20 years. His experience covers software product innovation, digital strategy development and implementation, business transformation, and operational excellence. Jim currently co-leads LR’s digital products and software business unit, which focuses on helping asset-intensive industries realize transformational productivity improvements and reduce operating costs while maintaining asset health, reliability, and uptime.
Ensuring digital twins and artificial intelligence programs are unbiased and verifiable
Last month I co-hosted a webcast with ARC Advisory Group Sr. Analyst Paula Hollywood. We discussed the considerations involved in adding advanced capabilities such as digital twins, machine learning, and artificial intelligence to a comprehensive asset performance management (APM) program in pursuit of further enhancing failure elimination, identifying and mitigating risk, and improving safety.
In this new digital world, these advanced methods are being used to better understand asset health in real time, predict failures, and (using AI) prescribe the required corrective action. This new Industry 4.0 technology has brought us tremendous advances in capabilities, but with it a new set of challenges for business leaders: how to ensure that AI recommendations and model simulations aren’t biased and the results are verifiable.
Digital twins are predicated on a real-time data connection; without it, digital twin technology could not exist. That connectivity comes from sensors on the physical asset which capture data and communicate it back to the system. A digital twin depends entirely on monitoring its physical counterpart and how the environment and people interact with it – in other words, the twin is only as trustworthy as the data it receives, which is why that data must go through a diligent integrity-validation process from the moment the twin is built.
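To make the data-integrity validation described above concrete, here is a minimal sketch in Python. All sensor names, limits, and thresholds are hypothetical illustrations, not LR's actual implementation; real limits would come from the asset's engineering specification.

```python
from dataclasses import dataclass
import time

@dataclass
class SensorReading:
    sensor_id: str
    value: float       # e.g. a bearing temperature in degrees C
    timestamp: float   # Unix epoch seconds

# Hypothetical per-sensor physical limits.
VALID_RANGE = {"temp-01": (-40.0, 150.0)}
MAX_STALENESS_S = 60.0  # reject readings older than one minute

def validate(reading: SensorReading, now: float) -> list[str]:
    """Return a list of integrity problems; an empty list means the
    reading may be accepted into the twin's data stream."""
    problems = []
    lo, hi = VALID_RANGE.get(reading.sensor_id, (float("-inf"), float("inf")))
    if not lo <= reading.value <= hi:
        problems.append("out of physical range")
    if now - reading.timestamp > MAX_STALENESS_S:
        problems.append("stale reading")
    if reading.timestamp > now:
        problems.append("timestamp in the future")
    return problems

good = SensorReading("temp-01", 72.5, time.time())
bad = SensorReading("temp-01", 900.0, time.time() - 3600)
print(validate(good, time.time()))  # []
print(validate(bad, time.time()))   # ['out of physical range', 'stale reading']
```

Checks like these are the first gate; a production pipeline would also cross-validate readings against neighboring sensors and the twin's own physics model.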
New governance requirements are emerging around data integrity and decision validation with these new technologies. As artificial intelligence, machine learning, and digital twin modeling are introduced in Industry 4.0 applications, we must have a process for assuring that these advanced technologies, and their technical integrity, deliver the right answers. This is not a problem for tomorrow: digital governance is a current and pressing issue that needs immediate and substantive effort.
Operating a plant, especially in the oil & gas and chemical processing industries, is a dynamic and continuous endeavor in which asset conditions change constantly. For a digital twin model to remain a true reflection of the physical asset, an entirely new set of processes is required, with each new process delivering new data, insights, and actions.
Digital Health Management
All of these new activities require validation to confirm the accuracy of the models, and this new area of governance is earning a name: Digital Health Management, or DHM. The term encompasses all of the digital technologies and systems used to gather data and insights on an asset’s health, including digital twin technology.
Furthermore, a digital twin can be defined as a “multi-physics, data-driven representation of a physical asset, often residing in a cloud-based environment using data streamed from the physical asset,” with applications ranging from design and operations to autonomy. In other words, a digital twin is a dynamic digital representation of a physical piece of equipment or asset. The complexity of this digital landscape drives the requirement for a comprehensive governance program to assure accurate output. On the shop floor, the result is operators improving their operational performance and maintenance regimes through insights generated by the twins as part of DHM.
A key element of DHM is assuring digital model accuracy and answering questions critical to the success of any digital twin initiative:
- What standards are applied and what is the human role, if any, in the validation of models and digital twins?
- Does the digital simulation, data structure, and technology take into consideration safety and license to operate standards?
- How is sensitive company data used for digital twins kept secure?
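One practical way to assure model accuracy, as the first question above asks, is to continuously compare the twin's predictions against measured values and flag the model for revalidation when the error drifts beyond a tolerance. Below is a minimal sketch; the function name, the pressure figures, and the 0.5-bar tolerance are all hypothetical illustrations.

```python
import statistics

def model_drift(predicted: list[float], measured: list[float],
                tolerance: float) -> bool:
    """Return True when the twin's mean absolute error exceeds the
    tolerance, signalling that the model needs human review and
    recalibration before its outputs are trusted."""
    errors = [abs(p - m) for p, m in zip(predicted, measured)]
    return statistics.mean(errors) > tolerance

# Hypothetical twin-predicted vs. measured discharge pressure (bar).
predicted = [10.1, 10.0, 9.9, 10.2]
measured  = [10.0, 10.1, 10.0, 12.5]
print(model_drift(predicted, measured, tolerance=0.5))  # True -> flag for review
```

A check this simple keeps a human in the loop at a defined threshold, which is one concrete answer to the question of what role people play in validating models and twins.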
Alarmingly, the data security question isn’t getting the focus its risk ranking warrants in most facilities. Best-in-class facilities take this risk seriously and dedicate significant resources to addressing this vital digital security question. In addition to security, operators face challenges in assuring accurate data streams from sensors and IIoT devices, as well as accuracy from remote inspection technologies.
Without a rigorous governance protocol in place to assure digital integrity, the unintended consequences of a digital program could include bad decisions and lost profits, and, in the worst case, could trigger a catastrophic event.
It’s important that digital solutions strike a balance between the technical solution and its business application. Practitioners and their partners also need to understand the challenges around data sharing and data ethics in this collaborative digital ecosystem, and to address the value of the data and its contributions. In this way, owner/operators embarking on their digitalization journey will be better able to realize new value and, more importantly, to build confidence in these technologies so that they can be trusted to support better, safer, more informed decisions. This balance between human capital, technical solutions, and business requirements has proven to be a key requirement of successful projects for Lloyd’s Register and our clients around the world.